CN111222374A - Lie detection data processing method and device, computer equipment and storage medium


Info

Publication number
CN111222374A
CN111222374A
Authority
CN
China
Prior art keywords
detection model
lie detection
pupil
lie
training
Prior art date
Legal status
Pending
Application number
CN201811420233.1A
Other languages
Chinese (zh)
Inventor
袁智华
于永昊
Current Assignee
Guangzhou Huiruisitong Information Technology Co Ltd
Original Assignee
Guangzhou Huiruisitong Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huiruisitong Information Technology Co Ltd
Priority to CN201811420233.1A
Publication of CN111222374A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/197 - Matching; Classification

Abstract

The application relates to a lie detection model generation method and apparatus, a lie detection method and apparatus, computer equipment, and a storage medium. A plurality of training images with pupil characteristics is obtained and an initial lie detection model is constructed. The training images and their corresponding labels are input into the initial lie detection model, which extracts the pupil characteristics of each training image and determines a classification result from those characteristics. Whether the initial lie detection model meets a preset convergence condition is determined from the pupil characteristics, the classification results, and the corresponding labels; when the condition is not met, the parameters of the initial lie detection model are updated until it is met, yielding the target lie detection model. An image to be recognized is then input into the lie detection model to obtain the corresponding recognition result. Pupil recognition is performed on collected image data to train the model's ability to recognize pupil characteristics, so the trained lie detection model can extract pupil characteristics accurately, improving the accuracy of lie detection.

Description

Lie detection data processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a lie detection model generation method, apparatus, lie detection method, apparatus, computer device, and storage medium.
Background
Lie detection technology is widely used in criminal investigation by anti-corruption departments. Traditional lie detection based on heart rate, respiration, or perspiration measurement requires the person under test to wear a large amount of equipment, imposing an excessive psychological burden. Another approach relies on action and expression recognition; although it places less psychological burden on the person under test, it cannot avoid interference from disguised expressions, so the test result is inaccurate.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present application provides a lie detection model generation method, apparatus, lie detection method, apparatus, computer device, and storage medium.
A lie detection model generation method, comprising:
acquiring a plurality of training images with pupil characteristics, the training images corresponding to a plurality of user identifiers and carrying labels;
constructing an initial lie detection model, and inputting each training image and a corresponding label into the initial lie detection model;
extracting pupil characteristics of each training image through an initial lie detection model, and determining a classification result of a user identifier corresponding to each training image according to the pupil characteristics;
determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding labels of each training image;
and when the preset convergence condition is not met, updating the model parameters of the initial lie detection model until the preset convergence condition is met to obtain the target lie detection model.
A lie detection model generation apparatus comprising:
the training data acquisition module is used for acquiring a plurality of training images with pupil characteristics, the training images corresponding to a plurality of user identifiers and carrying labels;
the data input module is used for constructing an initial lie detection model and inputting each training image and the corresponding label into the initial lie detection model;
the classification recognition module is used for extracting the pupil characteristics of each training image through the initial lie detection model and determining the classification result of the user identification corresponding to each training image according to the pupil characteristics;
the judging module is used for determining whether the initial lie detection model meets a preset convergence condition according to the pupil characteristics, the classification result and the corresponding labels of each training image;
and the model determining module is used for updating the model parameters of the initial lie detection model when the preset convergence condition is not met until the preset convergence condition is met to obtain the target lie detection model.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a plurality of training images with pupil characteristics, the training images corresponding to a plurality of user identifiers and carrying labels;
constructing an initial lie detection model, and inputting each training image and a corresponding label into the initial lie detection model;
extracting pupil characteristics of each training image through an initial lie detection model, and determining a classification result of a user identifier corresponding to each training image according to the pupil characteristics;
determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding labels of each training image;
and when the preset convergence condition is not met, updating the model parameters of the initial lie detection model until the preset convergence condition is met to obtain the target lie detection model.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a plurality of training images with pupil characteristics, the training images corresponding to a plurality of user identifiers and carrying labels;
constructing an initial lie detection model, and inputting each training image and a corresponding label into the initial lie detection model;
extracting pupil characteristics of each training image through an initial lie detection model, and determining a classification result of a user identifier corresponding to each training image according to the pupil characteristics;
determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding labels of each training image;
and when the preset convergence condition is not met, updating the model parameters of the initial lie detection model until the preset convergence condition is met to obtain the target lie detection model.
According to the lie detection model generation method and apparatus, computer equipment, and storage medium, a plurality of training images with pupil characteristics is acquired, each carrying a label. An initial lie detection model is constructed, and each training image and its corresponding label are input into the model. The pupil characteristics of each training image are extracted through the initial lie detection model, and the classification result of the user identifier corresponding to each training image is determined from those characteristics. Whether the initial lie detection model meets a preset convergence condition is determined from the pupil characteristics, classification result, and corresponding label of each training image; when the condition is not met, the model parameters are updated until it is met, yielding the target lie detection model. Pupil recognition on the collected image data trains the model's ability to recognize pupil characteristics, so they can be extracted accurately and the accuracy of lie detection is improved.
A method of lie detection, comprising:
acquiring image data containing pupil characteristics;
inputting the image data into the target lie detection model generated by the lie detection model generation method, extracting the pupil characteristics of the image data through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain the corresponding lie detection recognition result.
A lie detection device comprising:
the data acquisition module is used for acquiring image data containing pupil characteristics;
and the lie detection module is used for inputting the image data into the target lie detection model, extracting the pupil characteristics of the image data through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain a corresponding lie detection identification result.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring image data, wherein the image data comprises pupil characteristics;
inputting the image data into the target lie detection model generated in the lie detection model generation method, extracting the pupil characteristics of the image data through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain the corresponding lie detection identification result.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring image data containing pupil characteristics;
inputting the image data into the target lie detection model generated in the lie detection model generation method, extracting the pupil characteristics of the image data through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain the corresponding lie detection identification result.
According to the lie detection method and device, computer equipment, and storage medium, image data containing pupil characteristics is acquired and input into the target lie detection model generated by the lie detection model generation method; the pupil characteristics of the image data are extracted through the target lie detection model, and the image data is classified according to those characteristics to obtain the corresponding lie detection recognition result. Because lie detection with the generated model only requires extracting the user's pupil characteristics, interference from disguised expressions is avoided and the accuracy of the test is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a diagram illustrating an exemplary application of a lie detection model generation method;
FIG. 2 is a schematic flow chart diagram illustrating a lie detection model generation method in one embodiment;
FIG. 3 is a flow chart illustrating the determining step in one embodiment;
FIG. 4 is a flowchart illustrating the classification identification step in one embodiment;
FIG. 5 is a flow chart illustrating the classification identification step in another embodiment;
FIG. 6 is a flow diagram illustrating a lie detection method in accordance with an embodiment;
FIG. 7 is a block diagram of the lie detection model generation apparatus in one embodiment;
FIG. 8 is a block diagram of a determination module in one embodiment;
FIG. 9 is a block diagram of the structure of a classification identification module in one embodiment;
FIG. 10 is a block diagram showing the structure of a lie detection model generation apparatus in another embodiment;
FIG. 11 is a block diagram of the structure of a lie detection device in one embodiment;
FIG. 12 is a diagram of an internal framework of a computer device, under an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a diagram of the application environment of the lie detection model generation method in an embodiment. Referring to fig. 1, the lie detection model generation method is applied to a lie detection system. The lie detection system comprises a terminal 110 and a server 120 connected through a network. The server or the terminal acquires a plurality of training images with pupil characteristics corresponding to a plurality of user identifiers, each training image carrying a label. An initial lie detection model is constructed, and each training image and its corresponding label are input into the model. The pupil characteristics of each training image are extracted through the initial lie detection model, and the classification result of the user identifier corresponding to each training image is determined from those characteristics. Whether the initial lie detection model meets a preset convergence condition is determined from the pupil characteristics, classification result, and corresponding label of each training image; when the condition is not met, the model parameters are updated until it is met, yielding the target lie detection model. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in FIG. 2, a lie detection model generation method is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in fig. 1. Referring to fig. 2, the lie detection model generation method specifically includes the following steps:
step S201, acquiring a plurality of training images with pupil characteristics, which include a plurality of user identifiers, and the training images carry tags.
Specifically, the user identifier is tag data identifying a user, and each training image is a picture captured by a shooting device that contains the pupil characteristics of a person and carries a label identifying the image. The label is data identifying each training image and is either a lying label or a not-lying label.
In one embodiment, before acquiring the plurality of training images containing the plurality of user identifiers, the method further includes: labelling each training image to obtain its label. For example, video data of a user answering designed questions is acquired together with the corresponding tag data, and the tag data is associated with the video data.
In one embodiment, before acquiring the plurality of training images containing the plurality of user identifiers, the method further includes: preprocessing each training image. The preprocessing includes cropping, rotating, denoising, and color-space transforming the image. Preprocessing the images reduces the influence of noise and similar artifacts that would otherwise degrade recognition accuracy.
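The description names the preprocessing operations but fixes no concrete algorithms or parameters, so the following NumPy-only sketch is purely illustrative: a center crop stands in for cropping, a luminance weighting for the color-space transform, and a 3x3 mean filter for denoising. Rotation correction, which would typically use the detected eye axis, is omitted.

```python
import numpy as np

def preprocess(image):
    """Center-crop to a square, convert to grayscale, and denoise.

    Hypothetical pipeline; all concrete choices are assumptions,
    since the description does not specify them.
    """
    h, w = image.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = image[top:top + side, left:left + side]          # crop
    # Luminance-weighted RGB -> grayscale (a color-space transform).
    gray = square @ np.array([0.299, 0.587, 0.114])
    # 3x3 mean filter as a simple denoising step.
    padded = np.pad(gray, 1, mode="edge")
    denoised = sum(padded[i:i + side, j:j + side]
                   for i in range(3) for j in range(3)) / 9.0
    return denoised
```

A real system would likely use a library such as OpenCV for these steps; the sketch only shows the order of operations.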
Step S202, an initial lie detection model is constructed, and each training image and the corresponding label are input into the initial lie detection model.
Step S203, extracting the pupil characteristics of each training image through the initial lie detection model, and determining the classification result of the user identification corresponding to each training image according to the pupil characteristics.
Specifically, the initial lie detection model is a model whose training is not yet complete. The pupil characteristics are data describing a person's pupil, including information such as its shape and size. The pupil is the small circular opening at the center of the iris of the human eye through which light enters. A person's psychological processes are accompanied by pupil changes that are not under voluntary control, so the user's pupil characteristics can serve as an index for judging whether the user is lying. The classification result is the result obtained after a training image is classified and recognized by the initial lie detection model, and it is either lying or not lying. The initial lie detection model is constructed according to user requirements; the training images and corresponding labels are input into it, and feature extraction and classification are performed on the training images through the model to obtain the classification result of each training image. Because pupil characteristics are specific to each person and pupil changes track the person's physiological activity, detecting lying through pupil characteristics is more accurate.
And step S204, determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding labels of each training image.
And step S205, when the preset convergence condition is not met, updating the model parameters of the initial lie detection model until the preset convergence condition is met to obtain the target lie detection model.
Specifically, the preset convergence condition is a condition, set by the user in advance, for determining whether the model has converged and whether training should terminate. The preset convergence condition may cover the accuracy of the pupil characteristics and the classification accuracy determined from the classification results and the corresponding labels.
In one embodiment, the accuracy of the pupil characteristics and the classification accuracy obtained from the classification results and the corresponding labels may each be evaluated separately and then combined for reference, or they may be evaluated as a whole, to decide whether the preset convergence condition is satisfied. Determining model convergence from both the pupil characteristics and the classification results helps avoid over-fitting and under-fitting of the model.
When the preset convergence condition is not met, the model parameters of the initial lie detection model are adjusted according to the pupil characteristics, classification results, and corresponding labels of the training images; the training images are trained again with the adjusted parameters, and convergence is judged again. This cycle of parameter adjustment, training, and judgment repeats until the preset convergence condition is met, at which point training of the initial lie detection model ends and the target lie detection model is obtained.
In one embodiment, the initial lie detection model is a support vector machine model, and the support vector machine model is trained on the data. When the preset convergence condition is not met, the learning factor of the support vector machine model is adjusted according to the pupil characteristics, classification results, and corresponding labels of the training images, and convergence is judged again. The cycle of adjusting the learning factor, training, and judging repeats until the preset convergence condition is met, at which point training of the support vector machine model ends and the target lie detection model is obtained.
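The training cycle described above can be sketched as follows. This is a minimal pure-NumPy linear SVM trained by hinge-loss sub-gradient descent, with the learning factor halved whenever the accuracy-based convergence condition is not met; the SVM variant, the learning-factor schedule, and the thresholds are all assumptions, since the description fixes none of them. Labels in `y` are +1 (lying) and -1 (not lying).

```python
import numpy as np

def train_svm(X, y, lr=0.1, reg=0.01, acc_target=0.95, max_rounds=20):
    """Linear SVM: train, judge convergence, adjust the learning factor, repeat."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_rounds):
        for _ in range(100):                      # training passes at this learning factor
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) < 1:         # margin violated: hinge sub-gradient
                    w += lr * (yi * xi - reg * w)
                    b += lr * yi
                else:
                    w -= lr * reg * w             # regularization shrinkage only
        acc = float(np.mean(np.sign(X @ w + b) == y))
        if acc >= acc_target:                     # preset convergence condition met
            return w, b, acc
        lr *= 0.5                                 # adjust the learning factor, retrain
    return w, b, acc
```

In practice the features `X` would be the pupil characteristics extracted from the training images.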
In the lie detection model generation method, a plurality of training images with pupil characteristics corresponding to a plurality of user identifiers is acquired, each image carrying a label. An initial lie detection model is constructed, each training image and its corresponding label are input into the model, the pupil characteristics of each training image are extracted through the model, and the classification result of the user identifier corresponding to each training image is determined from those characteristics. Whether the model meets a preset convergence condition is determined from the pupil characteristics, classification result, and corresponding label of each image; when the condition is not met, the model parameters are updated until it is met, yielding the target lie detection model. Training on labelled images produces a target lie detection model that can extract a user's pupil characteristics and detect lying from them. Training on a large amount of labelled picture data makes extraction of the user's pupil characteristics more accurate, so whether the user is lying can be tested accurately from the extracted characteristics, improving the accuracy of the model.
In one embodiment, as shown in fig. 3, step S204 includes:
step S2041, obtaining a loss function of the initial lie detection model, and calculating a loss value according to the pupil characteristics of each training image.
And step S2042, counting the recognition accuracy of the initial lie detection model according to the classification result of each training image and the corresponding label.
Step S2043, when the loss value meets the preset loss condition and the recognition accuracy is greater than the preset accuracy, determining that the initial lie detection model meets the preset convergence condition.
In particular, the loss function measures the accuracy of the features extracted by the initial lie detection model. The pupil characteristics extracted by the model are substituted into the loss function to compute their loss values, and these loss values are weighted and summed using user-defined weights to obtain the loss value of the whole model. The recognition accuracy of the initial lie detection model is determined by how well the classification result of each training image matches its label: a classification result consistent with the corresponding label counts as a correct recognition, otherwise the recognition fails. The number of correctly recognized training images is counted, and the recognition accuracy is the ratio of that count to the total number of training images. Whether the preset convergence condition is met can then be determined from the loss value and the recognition accuracy, either by separate judgment or by joint judgment.
In one embodiment, the separate-judgment principle is adopted: the preset convergence condition comprises a loss condition and a recognition condition, and when the loss value of the model meets the loss condition and the recognition accuracy meets the recognition condition, the initial lie detection model meets the preset convergence condition and the target lie detection model is obtained.
In one embodiment, the joint-judgment principle is adopted: the loss condition and the recognition condition are treated as a whole, and the model parameters of the initial lie detection model are adjusted until the loss value and the recognition accuracy jointly satisfy the preset convergence condition. Training the model against both its loss function and its classification accuracy prevents the poor generalization to unseen images caused by over-fitting and under-fitting, improving the model's adaptability to images and thus the accuracy of lie detection.
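The separate-judgment test in steps S2041 to S2043 can be sketched as follows. The per-image loss weights and both thresholds are assumptions; the description leaves them to the implementer.

```python
import numpy as np

def converged(feature_losses, weights, predictions, labels,
              loss_threshold=0.1, acc_threshold=0.95):
    """Separate-judgment convergence test for the initial lie detection model."""
    # Weighted sum of per-image pupil-feature loss values (step S2041).
    total_loss = float(np.dot(weights, feature_losses))
    # Recognition accuracy: fraction of classification results matching labels (S2042).
    accuracy = float(np.mean(np.asarray(predictions) == np.asarray(labels)))
    # Both the loss condition and the recognition condition must hold (S2043).
    return total_loss <= loss_threshold and accuracy >= acc_threshold
```

The training loop would call this after each pass and stop (or adjust parameters and continue) based on its result.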
In one embodiment, as shown in fig. 4, step S203 includes:
step S2031, edge detection is carried out on each training image to obtain a corresponding edge detection image.
Step S2032, detecting a circular area in the edge detection image by adopting Hough transform, and calculating the area diameter of the circular area.
Step S2033, obtaining the pupil diameter and the corresponding iris diameter from the area diameter, and calculating the ratio of the pupil diameter and the iris diameter.
Step S2034, when the ratio meets a preset ratio, determining that the classification result of the user identifier corresponding to each training image is lying.
Specifically, the purpose of edge detection is to find the set of pixels in an image where brightness changes sharply; this set can describe a contour. Common edge detection algorithms include the Canny algorithm, Roberts cross algorithm, Prewitt algorithm, Sobel algorithm, Kirsch algorithm, compass algorithm, Marr-Hildreth algorithm, and Laplacian algorithm. An edge detection algorithm produces the corresponding edge detection image. After the edge detection image is obtained, the circular regions it contains are detected using the Hough transform, an algorithm for detecting geometric shapes in images, and the diameter of each circular region is calculated. The pupil and its corresponding iris are identified among the detected circular regions, their diameters are calculated, and the ratio of the pupil diameter to the iris diameter is computed. The classification result of the user identifier corresponding to each training image is then determined from this ratio.
When the classification result of the user identifier corresponding to each training image is determined according to the ratio, the determination can be based on the change rule of the ratio and on whether the ratio satisfies a preset ratio: if the ratio satisfies the preset ratio, the classification result corresponding to the training image is lying; otherwise, it is not lying. Using the ratio rather than the pupil diameter or iris diameter alone improves recognition accuracy.
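As an illustration, the ratio-based classification described above can be sketched as follows; the function name and the 0.45 preset ratio are illustrative placeholders rather than values taken from this disclosure:

```python
def classify_by_ratio(pupil_diameter: float, iris_diameter: float,
                      preset_ratio: float = 0.45) -> str:
    """Classify a single training image from the pupil/iris diameter ratio.

    The 0.45 preset ratio is an illustrative placeholder; in practice it
    would be derived from a user's own non-lying baseline images.
    """
    if iris_diameter <= 0:
        raise ValueError("iris diameter must be positive")
    ratio = pupil_diameter / iris_diameter
    # A ratio at or above the preset threshold is taken as "lie".
    return "lie" if ratio >= preset_ratio else "not lie"
```

In practice the preset ratio would be computed per user, as described in step S301 below.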
In one embodiment, as shown in fig. 5, the lie detection model generation method further includes:
step S301, calculating the ratio of the pupil diameter to the iris diameter in a plurality of training images with labels of no lying, and weighting and summing the ratios to obtain a preset ratio.
Specifically, the ratio of the pupil diameter to the iris diameter of a plurality of training images of which the same user label is not lie is calculated, and the corresponding ratios of the training images are subjected to weighted summation to obtain the preset ratio of the same user.
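A minimal sketch of the weighted summation above, assuming uniform weights when none are supplied (the disclosure does not fix a weighting scheme):

```python
def preset_ratio_from_baseline(ratios, weights=None):
    """Weighted sum of pupil/iris ratios from a user's non-lying images.

    With uniform weights this reduces to the mean; the weighting scheme
    is not specified in the disclosure, so uniform weights are an
    assumption.
    """
    if weights is None:
        weights = [1.0 / len(ratios)] * len(ratios)
    if len(weights) != len(ratios):
        raise ValueError("one weight per ratio is required")
    return sum(r * w for r, w in zip(ratios, weights))
```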
In this embodiment, step S2034 includes:
step S2034a, selecting the maximum ratio from the ratios of the training images corresponding to the user identifications, calculating the difference between the maximum ratio and the preset ratio, and when the difference meets the preset difference, the classification result of the user identification corresponding to each training image is lie.
Specifically, the maximum ratio is selected from the ratios of the training images corresponding to each user identifier, and the difference between the maximum ratio and the preset ratio is calculated; the difference can be determined by subtraction, by taking the quotient of the two ratios, or the like. The classification result corresponding to each training image is determined according to whether the difference satisfies the preset difference: if it does, the classification result is lying; otherwise, it is not lying. Using the maximum ratio as the standard for determining whether the user lies reduces error and improves the accuracy of the judgment.
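The maximum-ratio comparison can be sketched as below; plain subtraction and the 0.05 preset difference are illustrative assumptions, since the text leaves both the form of the difference and its threshold open:

```python
def classify_by_max_ratio(ratios, preset_ratio, preset_difference=0.05):
    """Compare the largest observed ratio against the baseline preset ratio.

    Subtraction is used for the difference and 0.05 as its threshold;
    both are illustrative stand-ins for the "preset difference".
    """
    difference = max(ratios) - preset_ratio
    return "lie" if difference >= preset_difference else "not lie"
```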
In an embodiment, the lie detection model generation method further includes:
Step S302, when the change state of the ratios of the training images corresponding to each user identifier satisfies a preset change state, the classification result of the user identifier corresponding to each training image is lying.
Specifically, video data of each user identifier over a continuous period is obtained, the video data is split into frames to obtain a plurality of time-series images, the pupil-to-iris ratio of each time-series image is calculated, and whether the change of these ratios satisfies a preset change rule is judged; if it does, the classification result corresponding to each training image is lying. The ratio increases when the user speaks a lie, so the change rule of the ratio is a change from small to large as the user transitions from not lying to lying over a period of time. If the ratios of a user's time-series images over a period follow this change rule, the user is judged to be lying; otherwise, the user is judged not to be lying. Using the user's pupil changes captures the user's psychological process more accurately, improving the detection accuracy of the model.
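The change-rule check over a time series of ratios might look like the following sketch; the peak-minus-baseline test and the 0.05 threshold are assumptions, as the disclosure does not pin down the exact rule:

```python
def follows_lying_pattern(timed_ratios, change_threshold=0.05):
    """Check whether a sequence of pupil/iris ratios shows the
    small-to-large change associated with the onset of lying.

    The first sample is taken as the calm baseline and the sequence
    maximum as the peak; both the test form and the 0.05 threshold
    are illustrative assumptions.
    """
    baseline = timed_ratios[0]
    peak = max(timed_ratios)
    # The ratio must rise noticeably above its starting value.
    return (peak - baseline) >= change_threshold
```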
In one embodiment, as shown in fig. 6, there is provided a lie detection method including:
step S401, image data is acquired, and the image data includes pupil features.
Step S402, inputting the image data into the target lie detection model generated by the lie detection model generation method above, extracting pupil features of the image data through the target lie detection model, and classifying the image data according to the pupil features to obtain a corresponding lie detection recognition result.
Specifically, image data shot by a camera is obtained, the image data containing pupil features; the image data is input into the target lie detection model generated by the lie detection model generation method above, and feature extraction and classification are performed on the image data by the target lie detection model. The feature extraction and classification process is identical to the processing in step S203 and is not described again here. Lie detection can thus be realized simply by collecting images with a camera: the equipment is simple, the subject's attention is not drawn to it, no psychological burden is added to the subject, and because the pupil changes caused by emotional changes cannot be consciously controlled, applying this principle improves testing accuracy.
In one embodiment, the Canny algorithm is used to perform edge detection on the image data. The image is converted from an RGB image into a grayscale image, and Gaussian filtering is applied to the grayscale image to obtain a smoothed image, reducing the influence on the test of noise unrelated to the pupil features. The image gradient is then calculated, the edge magnitude and angle of the image are computed from the gradient, and the edges are thinned by non-maximum suppression of the edge magnitude. Double thresholds are adopted for edge connection, comprising a first threshold and a second threshold, the first threshold being larger than the second; edges below the second threshold are discarded, and edges above the first threshold are retained. Of the remaining edges, those whose difference from the first threshold is smaller than a preset difference threshold are retained and the rest discarded, yielding the final edges. The retained edge pixels are set to 1 and all other pixels to 0, producing a binary image.
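The double-threshold step described above can be sketched on a list of gradient magnitudes; the closeness margin standing in for the "preset difference threshold" is an assumed value:

```python
def double_threshold(magnitudes, low, high):
    """Label edge pixels per the double-threshold step described above.

    Returns 1 for pixels kept as edges and 0 otherwise, following the
    simplified rule in the text: discard below the low threshold, keep
    above the high threshold, and keep in-between pixels only when they
    are close enough to the high threshold. The closeness margin is an
    illustrative stand-in for the "preset difference threshold".
    """
    margin = (high - low) / 2  # assumed closeness margin
    binary = []
    for m in magnitudes:
        if m >= high:
            binary.append(1)
        elif m < low:
            binary.append(0)
        else:
            binary.append(1 if (high - m) <= margin else 0)
    return binary
```

Note that standard Canny instead keeps in-between pixels when they are connected to a strong edge; the rule above follows this document's description.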
In one embodiment, circle detection is performed using the Hough transform. When the radius of the circle is known, the center can be found by Hough transform in the polar ρ-θ space: each pixel on the previously obtained edges is taken in turn as a candidate center, a circle of radius r is drawn around it, the results are accumulated, and the peak point of the accumulator is taken as the circle center. Because in practice the radius of the circle is unknown, the solution is instead carried out in the ρ-θ-r space, where each edge point yields a cone; the peak point in this space maps to the required center coordinates and radius.
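A minimal accumulator for the known-radius case described above; the grid discretization of 64 angular steps is an illustrative choice:

```python
import math

def hough_circle_center(edge_points, radius, grid_size=64):
    """Vote for circle centers at a known radius, as described above.

    Each edge point votes for every discretized cell lying at the given
    radius around it; the accumulator peak is the estimated center. The
    number of angular steps (grid_size) is an illustrative choice.
    """
    votes = {}
    for (x, y) in edge_points:
        for step in range(grid_size):
            theta = 2 * math.pi * step / grid_size
            # Candidate center offset by the radius from the edge point.
            a = round(x - radius * math.cos(theta))
            b = round(y - radius * math.sin(theta))
            votes[(a, b)] = votes.get((a, b), 0) + 1
    return max(votes, key=votes.get)
```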
In a specific embodiment, the lie detection method includes:
Video data of a simulated crime scene is acquired, and the video data is split into frames to obtain a plurality of training images corresponding to each user identifier. The crime scene simulation is as follows: a thief enters the ABC laboratory to commit a theft. On entering the laboratory he pushes a chair against the door, turns over the cabinet to steal 100 dollars from a drawer in the laboratory, and then quietly leaves. Three pieces of key information are set in this scene simulation: the chair at the doorway; that money was stolen; and that the amount of stolen cash was 100 dollars. Three sets of questions were set for this information in the lie detection experiment: 1. When the thief entered the laboratory, was there a chair at the doorway? 2. Was it money that the thief took from the laboratory? 3. Was the amount of stolen cash 100 dollars?
In one embodiment, to increase the validity of the responses, to satisfy the conditions for converting raw response values into standard scores, and to avoid over-fatiguing the subjects, each set of questions consists of 4 control questions and 1 key question, presented in two consecutive passes (30 effective questions in total); the key question always appears in the middle of each round of questions, and the control questions appear in a varied order.
Role design: a crime group (who obtain the key information before testing) and an innocent group (who obtain no information before testing) are set.
During testing, video data containing the subject's facial expression is collected with a camera, and the video data is split into frames to obtain a plurality of time-series images containing the subject's pupil features. Since pupil diameter may differ between individuals, the ratio of pupil diameter to iris diameter is selected as the feature for analysis. The ratio of pupil diameter to iris diameter is first measured while the subject is calm; then a video segment from the period in which the subject answers the questions is selected, and the ratio of pupil diameter to iris diameter is extracted from it. Analysis of the measured data shows that the ratio changes only slightly for a person who does not lie but changes greatly for a person who lies. A change threshold is therefore set: if the change exceeds the threshold, the subject is considered to be lying; otherwise, the subject is considered not to be lying.
The training images corresponding to each subject are input into an initial lie detection model, where the initial lie detection model is a support vector machine. The ratio of pupil diameter to iris diameter in each training image is calculated by the initial lie detection model, and the classification result of each image is determined from the calculated ratio; when the ratios satisfy the preset ratio and the proportion of correctly recognized classification results reaches the preset recognition rate, the model converges and the target lie detection model is obtained.
The ratios of the plurality of training images taken while the subject is calm are calculated by the target lie detection model, and their average is computed. The ratios of the sequence images of the subject answering the questions under the same lighting conditions are calculated, the maximum of these ratios is selected and compared with the average value, and when the comparison result exceeds the change threshold, the classification result of the user corresponding to the user identifier is lying.
In one embodiment, after the video data of the subjects answering the questions is recorded, a 3 s clip of each answer is intercepted. The video data for each second is split into frames and 10 frames are selected, giving 30 frames in total; the ratio of the subject's pupil diameter to iris diameter in these images is calculated, and the subject's classification result is determined from the ratios. This lie detection method uses a support vector machine for training and lie detection, which can map the features of the image data into a high-dimensional space, achieving linear separability and improving the accuracy of lie detection.
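The per-second frame sampling described above (10 frames per second over a 3 s clip) can be sketched as follows; evenly spaced sampling within each second is an assumption the text does not state:

```python
def sample_frames(frames_per_second: int, seconds: int = 3, per_second: int = 10):
    """Indices of the frames kept when sampling 10 frames from each of
    the 3 seconds of the answer clip.

    Evenly spaced sampling within each second is an assumption; the
    disclosure only fixes the counts (10 per second, 30 in total).
    """
    indices = []
    for sec in range(seconds):
        base = sec * frames_per_second
        stride = frames_per_second / per_second
        indices.extend(base + round(i * stride) for i in range(per_second))
    return indices
```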
Fig. 2-6 are schematic flow diagrams illustrating a lie detection model generation method according to an embodiment. It should be understood that although the various steps in the flow charts of fig. 2-6 are shown sequentially as indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and the order of their performance is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a lie detection model generation apparatus 200 including:
the training data acquiring module 201 is configured to acquire a plurality of training images with pupil features, which include a plurality of user identifiers, where the training images carry tags.
And the data input module 202 is used for constructing an initial lie detection model and inputting each training image and the corresponding label into the initial lie detection model.
And the classification and recognition module 203 is configured to extract pupil features of each training image through the initial lie detection model, and determine a classification result of the user identifier corresponding to each training image according to the pupil features.
The determining module 204 is configured to determine whether the initial lie detection model meets a preset convergence condition according to the pupil features, the classification result, and the corresponding labels of each training image.
And the model determining module 205 is configured to, when the preset convergence condition is not met, update the model parameters of the initial lie detection model until the preset convergence condition is met to obtain the target lie detection model.
In one embodiment, as shown in FIG. 8, the determining module 204 includes:
and the loss value calculating unit 2041 is configured to obtain a loss function of the initial lie detection model, and calculate a loss value according to the pupil features of each training image.
And the recognition accuracy calculating unit 2042 is configured to calculate the recognition accuracy of the initial lie detection model according to the classification result of each training image and the corresponding label.
The determining unit 2043 is configured to determine that the initial lie detection model satisfies the preset convergence condition when the loss value satisfies the preset loss condition and the recognition accuracy is greater than the preset accuracy.
In one embodiment, as shown in FIG. 9, the classification identification module 203 includes:
the edge detection unit 2031 is configured to perform edge detection on each training image to obtain a corresponding edge detection image.
The diameter calculation unit 2032 is configured to detect a circular region in the edge detection image by Hough transform and calculate the region diameter of the circular region.
A ratio calculating unit 2033, configured to obtain the pupil diameter and the corresponding iris diameter from the region diameter, and calculate a ratio between the pupil diameter and the iris diameter.
The first classifying unit 2034 is configured to, when the ratio satisfies the preset ratio, classify the user identifier corresponding to each training image as lying.
In one embodiment, as shown in fig. 10, the lie detection model generation apparatus further includes:
the preset ratio calculating unit 301 is configured to calculate a ratio of a pupil diameter to an iris diameter in a plurality of training images with tags of no lie, and perform weighted summation on each ratio to obtain a preset ratio.
The first classifying unit 2034 is further configured to select a maximum ratio from the ratios of the plurality of training images corresponding to each user identifier, calculate a difference between the maximum ratio and a preset ratio, and when the difference satisfies the preset difference, the classification result of the user identifier corresponding to each training image is lie.
In one embodiment, the classification identifying module 203 further includes:
the second classifying unit 2035 is configured to, when the transformation state of the ratio of the plurality of training images corresponding to each user identifier satisfies the preset transformation state, determine that the classification result of the user identifier corresponding to each training image is lie.
In one embodiment, as shown in fig. 11, there is provided a lie detection device including:
the data acquiring module 401 is configured to acquire image data, where the image data includes pupil features.
The lie detection module 402 is configured to input the image data into the target lie detection model generated by the lie detection model generation method above, extract pupil features of the image data through the target lie detection model, and classify the image data according to the pupil features to obtain a corresponding lie detection recognition result.
FIG. 12 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in fig. 1. As shown in fig. 12, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may further store a computer program, which, when executed by the processor, causes the processor to implement the lie detection model generation method. The internal memory may also store a computer program, and the computer program, when executed by the processor, may cause the processor to perform the lie detection model generation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the lie detection model generation apparatus and the lie detection apparatus provided by the present application can be implemented in the form of a computer program, and the computer program can be run on a computer device as shown in fig. 12. The memory of the computer device may store the various program modules constituting the lie detection model generation apparatus and/or the lie detection apparatus, such as the training data acquisition module 201, the data input module 202, the classification recognition module 203, the judgment module 204, and the model determination module 205 shown in fig. 7, and the data acquisition module 401 and the lie detection module 402 shown in fig. 11. The program modules constitute computer programs that cause the processor to execute the steps in the lie detection model generation methods and/or the lie detection methods of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 12 may perform, by the training data acquisition module 201 in the lie detection model generation apparatus shown in fig. 7, acquiring a plurality of training images with pupil features containing a plurality of user identifications, the training images carrying tags. The computer device may be configured to construct an initial lie detection model by the data input module 202, and input each training image and corresponding label into the initial lie detection model. The computer device may perform, through the classification recognition module 203, extraction of pupil features of each training image through the initial lie detection model, and determine a classification result of the user identifier corresponding to each training image according to the pupil features. The computer device may determine whether the initial lie detection model satisfies the preset convergence condition according to the pupil features, the classification result, and the corresponding label of each training image through the determination module 204. The computer device may execute, through the model determining module 205, updating the model parameters of the initial lie detection model when the preset convergence condition is not satisfied until the preset convergence condition is satisfied, so as to obtain the target lie detection model.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: the method comprises the steps of obtaining a plurality of training images with pupil characteristics and containing a plurality of user identifications, carrying labels on the training images, constructing an initial lie detection model, inputting each training image and the corresponding label into the initial lie detection model, extracting the pupil characteristics of each training image through the initial lie detection model, determining the classification result of the user identification corresponding to each training image according to the pupil characteristics, determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding label of each training image, updating model parameters of the initial lie detection model when the preset convergence condition is not met, and obtaining the target lie detection model until the preset convergence condition is met.
In one embodiment, determining whether the initial lie detection model satisfies a preset convergence condition according to the pupil features, the classification result and the corresponding labels of each training image includes: obtaining a loss function of the initial lie detection model, calculating a loss value according to the pupil characteristics of each training image, counting the recognition accuracy of the initial lie detection model according to the classification result of each training image and the corresponding label, and determining that the initial lie detection model meets the preset convergence condition when the loss value meets the preset loss condition and the recognition accuracy is greater than the preset accuracy.
In one embodiment, extracting the pupil features of each training image through the initial lie detection model, and determining the classification result of the user identifier corresponding to each training image according to the pupil features of each training image includes: the method comprises the steps of carrying out edge detection on each training image to obtain a corresponding edge detection image, detecting a circular area in the edge detection image by adopting Hough transform, calculating the area diameter of the circular area, obtaining a pupil diameter and a corresponding iris diameter from the area diameter, calculating the ratio of the pupil diameter to the corresponding iris diameter, and when the ratio meets a preset ratio, judging that the classification result of a user identifier corresponding to each training image is lie.
In one embodiment, the tags include lying and not lying, the computer program when executed by the processor further performing the steps of: calculating the ratio of the pupil diameter to the iris diameter in a plurality of training images of which the labels are not lie, weighting and summing the ratios to obtain a preset ratio, and when the ratio meets the preset ratio, determining that the classification result of the user identification corresponding to each training image is lie, wherein the classification result comprises the following steps: and selecting the maximum ratio from the ratios of the plurality of training images corresponding to each user identifier, calculating the difference between the maximum ratio and the preset ratio, and when the difference meets the preset difference, determining that the classification result of the user identifier corresponding to each training image is lie.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the transformation state of the ratio of the plurality of training images corresponding to each user identifier meets the preset transformation state, the classification result of the user identifier corresponding to each training image is lie.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: the method comprises the steps of obtaining a plurality of training images with pupil characteristics and containing a plurality of user identifications, carrying labels on the training images, constructing an initial lie detection model, inputting each training image and the corresponding label into the initial lie detection model, extracting the pupil characteristics of each training image through the initial lie detection model, determining the classification result of the user identification corresponding to each training image according to the pupil characteristics, determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding label of each training image, updating model parameters of the initial lie detection model when the preset convergence condition is not met, and obtaining the target lie detection model until the preset convergence condition is met.
In one embodiment, determining whether the initial lie detection model satisfies a preset convergence condition according to the pupil features, the classification result and the corresponding labels of each training image includes: obtaining a loss function of the initial lie detection model, calculating a loss value according to the pupil characteristics of each training image, counting the recognition accuracy of the initial lie detection model according to the classification result of each training image and the corresponding label, and determining that the initial lie detection model meets the preset convergence condition when the loss value meets the preset loss condition and the recognition accuracy is greater than the preset accuracy.
In one embodiment, extracting the pupil features of each training image through the initial lie detection model, and determining the classification result of the user identifier corresponding to each training image according to the pupil features of each training image includes: the method comprises the steps of carrying out edge detection on each training image to obtain a corresponding edge detection image, detecting a circular area in the edge detection image by adopting Hough transform, calculating the area diameter of the circular area, obtaining a pupil diameter and a corresponding iris diameter from the area diameter, calculating the ratio of the pupil diameter to the corresponding iris diameter, and when the ratio meets a preset ratio, judging that the classification result of a user identifier corresponding to each training image is lie.
In one embodiment, the tags include lying and not lying, the computer program when executed by the processor further performing the steps of: calculating the ratio of the pupil diameter to the iris diameter in a plurality of training images of which the labels are not lie, weighting and summing the ratios to obtain a preset ratio, and when the ratio meets the preset ratio, determining that the classification result of the user identification corresponding to each training image is lie, wherein the classification result comprises the following steps: and selecting the maximum ratio from the ratios of the plurality of training images corresponding to each user identifier, calculating the difference between the maximum ratio and the preset ratio, and when the difference meets the preset difference, determining that the classification result of the user identifier corresponding to each training image is lie.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the transformation state of the ratio of the plurality of training images corresponding to each user identifier meets the preset transformation state, the classification result of the user identifier corresponding to each training image is lie.
For example, the computer device shown in fig. 12 may be used to acquire image data containing pupil features through the data acquiring module 401 in the lie detection apparatus shown in fig. 11. The computer device may input the image data into the target lie detection model generated by the lie detection model generation method above through the lie detection module 402, extract the pupil features of the image data through the target lie detection model, and classify the image data according to the pupil features to obtain the corresponding lie detection recognition result.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring image data containing pupil characteristics, inputting the image data into a target lie detection model, extracting the pupil characteristics of the image data through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain a corresponding lie detection identification result.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring image data containing pupil characteristics, inputting the image data into a target lie detection model, extracting the pupil characteristics of the image data through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain a corresponding lie detection identification result.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Likewise, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a set of exemplary embodiments of the present invention, presented so that those skilled in the art can understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A lie detection model generation method, the method comprising:
acquiring a plurality of training images having pupil characteristics, wherein the training images correspond to a plurality of user identifications and each training image carries a label;
constructing an initial lie detection model, and inputting each training image and the corresponding label into the initial lie detection model;
extracting pupil characteristics of each training image through the initial lie detection model, and determining a classification result of a user identifier corresponding to each training image according to the pupil characteristics;
determining whether the initial lie detection model meets a preset convergence condition or not according to the pupil characteristics, the classification result and the corresponding labels of each training image;
and when the preset convergence condition is not met, updating the model parameters of the initial lie detection model until the preset convergence condition is met to obtain the target lie detection model.
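The generation loop of claim 1 can be sketched as an ordinary supervised training loop. The sketch below is illustrative only: it stands in for the patent's unspecified model with a one-feature logistic classifier over the pupil/iris diameter ratio, and the learning rate, epoch count, and tolerance are all assumed values, not taken from the disclosure.

```python
import math

def train_lie_detector(ratios, labels, lr=0.5, max_epochs=2000, tol=1e-6):
    """Fit w, b so that sigmoid(w * ratio + b) predicts the 'lying' label.

    ratios : pupil/iris diameter ratios extracted from the training images
    labels : 1 = lying, 0 = not lying (the labels carried by the images)
    Training stops when the change in loss falls below tol, standing in
    for the claim's 'preset convergence condition'.
    """
    w, b = 0.0, 0.0
    prev_loss = float("inf")
    for _ in range(max_epochs):
        gw = gb = loss = 0.0
        for x, y in zip(ratios, labels):
            # classification step: probability that this image is 'lying'
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            loss += -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))
            gw += (p - y) * x
            gb += (p - y)
        loss /= len(ratios)
        if abs(prev_loss - loss) < tol:  # preset convergence condition met
            break
        # condition not met: update the model parameters and continue
        w -= lr * gw / len(ratios)
        b -= lr * gb / len(ratios)
        prev_loss = loss
    return w, b
```

On linearly separable ratio data the loop drives `w` positive, so images with larger pupil/iris ratios receive higher "lying" scores.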
2. The method according to claim 1, wherein the determining whether the initial lie detection model satisfies a preset convergence condition according to the pupil features, the classification result, and the corresponding labels of each of the training images comprises:
obtaining a loss function of the initial lie detection model, and calculating a loss value according to the pupil characteristics of each training image;
counting the recognition accuracy of the initial lie detection model according to the classification result of each training image and the corresponding label;
and when the loss value meets a preset loss condition and the recognition accuracy is greater than a preset accuracy, determining that the initial lie detection model meets a preset convergence condition.
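Claim 2's convergence test combines two conditions: the loss value must satisfy the preset loss condition and the recognition accuracy must exceed the preset accuracy. A minimal sketch, in which the threshold values 0.1 and 0.95 are illustrative assumptions rather than values from the disclosure:

```python
def convergence_met(losses, predictions, labels,
                    loss_threshold=0.1, acc_threshold=0.95):
    """Return True when both convergence conditions of claim 2 hold.

    losses      : per-image loss values from the current model
    predictions : per-image classification results
    labels      : the labels carried by the training images
    """
    loss_value = sum(losses) / len(losses)           # aggregate loss value
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    accuracy = correct / len(labels)                 # recognition accuracy
    return loss_value <= loss_threshold and accuracy > acc_threshold
```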
3. The method according to claim 1, wherein the extracting, by the initial lie detection model, the pupil features of each of the training images, and determining, according to the pupil features of each of the training images, a classification result of the user identifier corresponding to each of the training images comprises:
carrying out edge detection on each training image to obtain a corresponding edge detection image;
detecting a circular area in the edge detection image by adopting Hough transform, and calculating the area diameter of the circular area;
acquiring a pupil diameter and a corresponding iris diameter from the region diameter, and calculating the ratio of the pupil diameter to the corresponding iris diameter;
and when the ratio satisfies a preset ratio, determining that the classification result of the user identification corresponding to the training image is lying.
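After edge detection and a Hough transform have produced the circular regions of claim 3 (in practice, e.g., a Canny edge map followed by a circular Hough search), the classification step reduces to a diameter ratio test. The sketch below assumes the detected circles are the pupil (smallest) and iris (largest); the preset ratio of 0.45 is an illustrative assumption, reflecting the premise that pupil dilation accompanies lying.

```python
def classify_from_circles(diameters, preset_ratio=0.45):
    """Classify one image from its detected circle diameters (claim 3).

    diameters : diameters of the circular regions found by edge
                detection + Hough transform on the training image
    Returns ('lying' | 'not lying', pupil/iris ratio).
    """
    pupil_d = min(diameters)   # smallest detected circle: the pupil
    iris_d = max(diameters)    # largest detected circle: the iris
    ratio = pupil_d / iris_d
    # a dilated pupil pushes the ratio above the preset ratio
    return ("lying" if ratio >= preset_ratio else "not lying", ratio)
```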
4. The method of claim 3, wherein the label includes lying and not lying, the method further comprising:
calculating the ratio of the pupil diameter to the iris diameter in a plurality of training images whose labels are not lying, and weighting and summing the ratios to obtain the preset ratio;
wherein, when the ratio satisfies the preset ratio, determining that the classification result of the user identification corresponding to each training image is lying comprises:
selecting the maximum ratio from the ratios of the plurality of training images corresponding to each user identification, calculating the difference between the maximum ratio and the preset ratio, and when the difference satisfies a preset difference, determining that the classification result of the user identification corresponding to the training images is lying.
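Claim 4's two steps can be sketched directly: build the preset ratio as a weighted sum over truthful images, then flag a user whose maximum ratio exceeds that baseline by more than a preset difference. Uniform weights and the 0.1 difference threshold are illustrative assumptions.

```python
def preset_ratio_from_truthful(ratios, weights=None):
    """Weighted sum of pupil/iris ratios from images labelled 'not lying'
    (claim 4). Uniform weights are assumed when none are supplied."""
    if weights is None:
        weights = [1.0 / len(ratios)] * len(ratios)
    return sum(w * r for w, r in zip(weights, ratios))

def is_lying(user_ratios, preset_ratio, preset_difference=0.1):
    """Compare the user's maximum ratio against the truthful baseline;
    flag lying when the difference satisfies the preset difference."""
    diff = max(user_ratios) - preset_ratio
    return diff >= preset_difference
```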
5. The method of claim 3, further comprising:
and when the variation state of the ratios of the plurality of training images corresponding to each user identification satisfies a preset variation state, determining that the classification result of the user identification corresponding to the training images is lying.
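Claim 5 leaves the "variation state" unspecified; one plausible illustrative reading is a sudden rise in the pupil/iris ratio between consecutive images of the same user (abrupt pupil dilation). The rise threshold of 0.05 is an assumption for the sketch.

```python
def variation_state_is_lying(ratios, rise_threshold=0.05):
    """Classify a user from how their pupil/iris ratio varies over a
    sequence of images (claim 5): flag lying when the ratio jumps by
    more than rise_threshold between two consecutive images."""
    return any(b - a > rise_threshold for a, b in zip(ratios, ratios[1:]))
```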
6. A lie detection method based on a target lie detection model generated according to any one of claims 1 to 5, the method comprising:
acquiring image data, wherein the image data comprises pupil characteristics;
inputting the image data into the target lie detection model, extracting the pupil characteristics through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain a corresponding lie detection identification result.
7. A lie detection model generation apparatus, characterized in that the apparatus comprises:
the training data acquisition module is used for acquiring a plurality of training images with pupil characteristics, wherein the training images contain a plurality of user identifications and carry labels;
the data input module is used for constructing an initial lie detection model and inputting each training image and the corresponding label into the initial lie detection model;
the classification and recognition module is used for extracting the pupil characteristics of each training image through the initial lie detection model and determining the classification result of the user identification corresponding to each training image according to the pupil characteristics;
the judging module is used for determining whether the initial lie detection model meets a preset convergence condition according to the pupil characteristics, the classification result and the corresponding labels of each training image;
and the model determining module is used for updating the model parameters of the initial lie detection model when the preset convergence condition is not met until the preset convergence condition is met to obtain the target lie detection model.
8. A lie detection device constructed based on the lie detection model generation device of claim 7, the device comprising:
the data acquisition module is used for acquiring image data containing pupil characteristics;
and the lie detection module is used for inputting the image data into a target lie detection model, extracting the pupil characteristics through the target lie detection model, and classifying the image data according to the pupil characteristics to obtain a corresponding lie detection identification result.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 5 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN201811420233.1A 2018-11-26 2018-11-26 Lie detection data processing method and device, computer equipment and storage medium Pending CN111222374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811420233.1A CN111222374A (en) 2018-11-26 2018-11-26 Lie detection data processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111222374A true CN111222374A (en) 2020-06-02

Family

ID=70805556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811420233.1A Pending CN111222374A (en) 2018-11-26 2018-11-26 Lie detection data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111222374A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202060785U (en) * 2011-03-31 2011-12-07 上海天岸电子科技有限公司 Human eye pupil lie detector
CN202604845U (en) * 2011-12-12 2012-12-19 张占强 Pupillometric lie detector based on platform TMS320DM642
CN103440510A (en) * 2013-09-02 2013-12-11 大连理工大学 Method for positioning characteristic points in facial image
CN105160318A (en) * 2015-08-31 2015-12-16 北京旷视科技有限公司 Facial expression based lie detection method and system
CN106667506A (en) * 2016-12-21 2017-05-17 上海与德信息技术有限公司 Method and device for detecting lies on basis of electrodermal response and pupil change
CN108197594A (en) * 2018-01-23 2018-06-22 北京七鑫易维信息技术有限公司 The method and apparatus for determining pupil position

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723242A (en) * 2021-08-20 2021-11-30 湖南全航信息通信有限公司 Visual lie detection method based on video terminal, electronic device and medium
CN113723242B (en) * 2021-08-20 2024-04-26 湖南全航信息通信有限公司 Visual lie detection method based on video terminal, electronic equipment and medium
CN113729708A (en) * 2021-09-10 2021-12-03 上海理工大学 Lie evaluation method based on eye movement technology
CN113729708B (en) * 2021-09-10 2023-06-20 上海理工大学 Lie judgment method based on eye movement technology

Similar Documents

Publication Publication Date Title
CN109359548B (en) Multi-face recognition monitoring method and device, electronic equipment and storage medium
US10948990B2 (en) Image classification by brain computer interface
WO2020207423A1 (en) Skin type detection method, skin type grade classification method and skin type detection apparatus
Miura et al. Feature extraction of finger vein patterns based on iterative line tracking and its application to personal identification
US9892315B2 (en) Systems and methods for detection of behavior correlated with outside distractions in examinations
CN105160318A (en) Facial expression based lie detection method and system
CN111222380B (en) Living body detection method and device and recognition model training method thereof
US11908240B2 (en) Micro-expression recognition method based on multi-scale spatiotemporal feature neural network
CN110348385B (en) Living body face recognition method and device
Błażek et al. An unorthodox view on the problem of tracking facial expressions
Villalobos-Castaldi et al. A new spontaneous pupillary oscillation-based verification system
Busey et al. Characterizing human expertise using computational metrics of feature diagnosticity in a pattern matching task
CN104679967B (en) A kind of method for judging psychological test reliability
CN111222374A (en) Lie detection data processing method and device, computer equipment and storage medium
Singh et al. Detection of stress, anxiety and depression (SAD) in video surveillance using ResNet-101
CN113033387A (en) Intelligent assessment method and system for automatically identifying chronic pain degree of old people
CN110598607B (en) Non-contact and contact cooperative real-time emotion intelligent monitoring system
CN111507124A (en) Non-contact video lie detection method and system based on deep learning
Rafiqi et al. Work-in-progress, PupilWare-M: Cognitive load estimation using unmodified smartphone cameras
CN109255318A (en) Based on multiple dimensioned and multireel lamination Fusion Features fingerprint activity test methods
Wang et al. Study on correlation between subjective and objective metrics for multimodal retinal image registration
Odya et al. User authentication by eye movement features employing SVM and XGBoost classifiers
CN112487980A (en) Micro-expression-based treatment method, device, system and computer-readable storage medium
Vasavi et al. Regression modelling for stress detection in humans by assessing most prominent thermal signature
Liu et al. Robust real-time heart rate prediction for multiple subjects from facial video using compressive tracking and support vector machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510000 no.2-8, North Street, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: Guangzhou huiruisitong Technology Co.,Ltd.

Address before: No.405, no.28-29, Jinyuan Road, Banqiao Village, Nancun Town, Panyu District, Guangzhou, Guangdong 510000

Applicant before: GUANGZHOU HUIRUI SITONG INFORMATION TECHNOLOGY Co.,Ltd.
