CN113239724A - Psychological assessment analysis method based on big data


Info

Publication number
CN113239724A
CN113239724A (application CN202110351801.2A)
Authority
CN
China
Prior art keywords
host
user
facial expression
big data
analysis result
Prior art date
Legal status
Withdrawn
Application number
CN202110351801.2A
Other languages
Chinese (zh)
Inventor
熊倩
王宇骁
王政
王学春
张志亮
Current Assignee
Chongqing Fengyun Jihui Intelligent Technology Co., Ltd.
Original Assignee
Chongqing Fengyun Jihui Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chongqing Fengyun Jihui Intelligent Technology Co., Ltd.
Priority: CN202110351801.2A
Publication: CN113239724A
Current legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; identification of persons
    • A61B5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; combined pulse/heart-rate/blood-pressure determination; evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; heart catheters for measuring blood pressure
    • A61B5/021: Measuring pressure in heart or blood vessels
    • A61B5/145: Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14542: Measuring characteristics of blood in vivo for measuring blood gases
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Techniques of G10L25/00 specially adapted for particular use
    • G10L25/51: Techniques of G10L25/00 for comparison or discrimination
    • G10L25/63: Techniques of G10L25/00 for comparison or discrimination for estimating an emotional state

Abstract

The invention relates to the technical field of video analysis and discloses a psychological assessment analysis method based on big data, which comprises the following steps. Step one: the facial expressions of a user are captured by a face recognition lens to form facial expression images, and the facial expression images are sent to a facial emotion analysis host; at the same time, the user's answer sound is sent to the host through a sound pickup. Step two: the facial emotion analysis host recognizes and analyzes the received facial expression images to obtain a video analysis result and sends the video analysis result to the host. Step three: the host judges the authenticity of the received user answer sound according to the received video analysis result. The invention can accurately evaluate the authenticity of the user's answer information.

Description

Psychological assessment analysis method based on big data
Technical Field
The invention relates to the technical field of video analysis, in particular to a psychological assessment analysis method based on big data.
Background
To verify the authenticity of the information a user gives when answering questions, technical means are used to judge whether the answers are truthful. A conventional method performs this detection based on the galvanic skin response (skin conductance) principle.
When the skin conductance method is used, the subject must wear sensors, which easily provokes resistance, and the analysis result is easily biased by factors such as the environment and the subject's individual physical constitution.
Disclosure of Invention
The invention aims to provide a psychological assessment analysis method based on big data.
The basic scheme provided by the invention is as follows: a psychological assessment analysis method based on big data, comprising the following steps:
Step one: the facial expressions of a user are captured by a face recognition lens to form facial expression images, and the facial expression images are sent to a facial emotion analysis host; at the same time, the user's answer sound is sent to the host through a sound pickup. Step two: the facial emotion analysis host recognizes and analyzes the received facial expression images to obtain a video analysis result and sends the video analysis result to the host. Step three: the host judges the authenticity of the received user answer sound according to the received video analysis result.
The advantage of this scheme is that the authenticity of an answer can be judged from the user's answer sound together with synchronous facial expression recognition. Authenticity can be judged on site, and effective electronic evidence can be acquired conveniently and quickly. A minimal sketch of the three-step flow is given below.
Further, in step two, the host sends the video analysis result to the display screen for display. This makes it convenient for on-site personnel to see the analysis result.
Further, in step three, the host extracts answer information and emotion information from the answer sound, and judges the authenticity of the answer information according to the received video analysis result together with the emotion information. This allows the user's answer information to be judged accurately.
Further, the host comprises a central processing unit in which a questionnaire is preset; the central processing unit sends the questionnaire to the display screen for display, and when the user completes the questionnaire, the face recognition lens sends the captured facial expression images to the facial emotion analysis host. This enables intelligent questioning and avoids human prompting.
Further, the face recognition lens moves correspondingly with the movement of the user's head. This makes it convenient to collect the facial expression images fully.
Further, a non-contact sign recognition module is arranged in the central processing unit. This allows the user's physical signs to be recognized while the user answers the questions.
Further, the non-contact sign recognition module is used to compare and judge changes in heart rate, heart rate variability, blood oxygen and blood pressure. Combining these sign changes with the user's answer sound allows a more accurate judgment of whether the user is lying.
Drawings
Fig. 1 is a schematic system structure diagram of a big data-based psychological assessment analysis method according to a first embodiment of the present invention.
Detailed Description
The invention is described in further detail below through specific embodiments:
example one
This embodiment is basically as shown in fig. 1: the big-data-based psychological assessment and analysis system comprises a host and, each connected to the host, a facial emotion analysis host, a face recognition lens, a time module, a display module, a built-in/external battery, a sound pickup, an outer camera, an inner camera, a display unit and a background server; the facial emotion analysis host is additionally connected to the face recognition lens.
In this embodiment, the display module and the display unit may both be display screens; only the screen serving as the display module faces the public, while the screen serving as the display unit faces only the auditors, including judges.
The host comprises a central processing unit, and other devices and modules connected with the central processing unit.
The face recognition lens is used to collect the user's facial expressions to form facial expression images and send them to the facial emotion analysis host and to the central processing unit of the host; the central processing unit forwards the received facial expression images to the display unit, so that the auditors can see the user's facial expressions synchronously.
The facial emotion analysis host comprises a microprocessor and a video analysis module connected to it. The video analysis module analyzes the collected facial images to obtain a preliminary analysis result of normal or abnormal; the microprocessor performs a secondary analysis on the received preliminary result to obtain a secondary analysis result and sends it to the host, and the host sends the secondary analysis result to the display module for display.
A questionnaire is preset in the central processing unit, which sends it to the display screen for display; when the user completes the questionnaire, the face recognition lens sends the captured facial expression images to the video analysis module. Beyond the questionnaire alone, combining it with facial expression recognition allows the correctness of the answer information, and the user's actual psychological state, to be judged more accurately.
The sound pickup synchronously sends the user's spoken answers to the questionnaire to the central processing unit. Questionnaire results are preset in the central processing unit; it extracts answer information from the received answer sound and compares it with the preset results to obtain a preliminary psychological analysis result. Answering by voice makes it convenient to collect the facial expression images at the same time, and the emotional information carried in the voice assists the judgment.
The central processing unit then combines the preliminary (video) analysis result with the preliminary psychological analysis result to obtain the secondary analysis result, making the judgment more accurate. A sketch of this comparison-and-combination step follows.
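The following Python sketch illustrates the step under stated assumptions: transcribe(), the preset results and the decision thresholds are all hypothetical, since the patent does not specify them.

    # Sketch of extracting answers, comparing them with preset questionnaire
    # results, and combining the outcome with the preliminary video result.
    PRESET_RESULTS = {"q1": "yes", "q2": "no"}   # question id -> preset answer

    def transcribe(audio: bytes) -> dict:
        """Hypothetical speech-to-text: question id -> spoken answer."""
        return {"q1": "yes", "q2": "yes"}

    def preliminary_psych_result(answers: dict) -> float:
        """Fraction of spoken answers matching the preset results."""
        hits = sum(answers.get(q) == a for q, a in PRESET_RESULTS.items())
        return hits / len(PRESET_RESULTS)

    def secondary_result(video_abnormal: bool, psych_score: float) -> str:
        """Combine the two preliminary results into the secondary result."""
        if video_abnormal and psych_score < 0.5:
            return "high risk of untruthful answers"
        if video_abnormal or psych_score < 0.5:
            return "possible inconsistency"
        return "consistent"

    score = preliminary_psych_result(transcribe(b"<answer audio>"))
    print(secondary_result(False, score))   # -> "consistent" here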
A non-contact sign recognition module is arranged in the central processing unit, and a physiological analysis unit is arranged in the non-contact sign recognition module.
The facial expression analysis module first screens, from the facial expression images, the valid images whose information content can be recognized, and then extracts the abnormal facial expression images from the valid ones. The physiological analysis unit covers diseased and injured states.
The non-contact sign recognition module is used to compare and judge changes in heart rate, heart rate variability, blood oxygen and blood pressure. It performs real-time, non-contact psychological emotion detection, calculation and tracking of the subject and presents a psychological change curve. By monitoring the subject's physiological index changes (heart rate, heart rate variability, blood oxygen and blood pressure) in real time and combining them with the results of the questionnaire survey, a comprehensive evaluation of the subject's psychological condition can be made. A sketch of the sign comparison follows.
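As a sketch under assumptions: the baseline values and thresholds below are illustrative, not values from the patent, and the module's actual comparison method is not disclosed.

    # Sketch of the sign comparison: changes of heart rate, heart rate
    # variability, blood oxygen and blood pressure against a per-subject baseline.
    BASELINE   = {"heart_rate": 72.0, "hrv": 50.0, "spo2": 97.0, "bp_sys": 120.0}
    THRESHOLDS = {"heart_rate": 15.0, "hrv": 20.0, "spo2": 3.0,  "bp_sys": 20.0}

    def abnormal_signs(sample: dict) -> list:
        """Names of the signs whose change from baseline exceeds its threshold."""
        return [k for k in BASELINE if abs(sample[k] - BASELINE[k]) > THRESHOLDS[k]]

    reading = {"heart_rate": 95.0, "hrv": 28.0, "spo2": 96.0, "bp_sys": 142.0}
    flags = abnormal_signs(reading)
    if flags:                           # real-time alarm prompt on abnormality
        print("alarm: abnormal signs:", ", ".join(flags))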
It should be noted that the central processing unit and the microprocessor in this embodiment are both general-purpose processing chips with computing capability; their computing capacities may be the same or different, and the two names are used here only to distinguish them.
When this system is used for psychological assessment analysis, the method comprises the following steps:
Step one: the facial expressions of the user are captured by the face recognition lens to form facial expression images, and the facial expression images are sent to the facial emotion analysis host; at the same time, the user's answer sound is sent to the host through the sound pickup. Step two: the facial emotion analysis host recognizes and analyzes the received facial expression images to obtain a video analysis result and sends it to the host. Step three: the host judges the authenticity of the received user answer sound according to the received video analysis result.
In step two, the host sends the video analysis result to the display screen for display, which makes it convenient for on-site personnel to see the analysis result.
In step three, the host extracts answer information and emotion information from the answer sound and judges the authenticity of the answer information according to the received video analysis result together with the emotion information, so that the user's answer information can be judged accurately.
In this way the authenticity of an answer can be judged from the user's answer sound together with synchronous facial expression recognition: authenticity can be judged on site, and effective electronic evidence can be acquired conveniently and quickly.
Specifically, the following devices are used in this embodiment to build the big-data-based psychological assessment analysis system:
Facial emotion analysis host:
Material: all-aluminum alloy;
Electrical rating: 19 V / 4.7 A, 90 W;
CPU: Intel i7-8750H, 2.20 GHz, 6 cores (12 threads);
Memory: DDR4, 16 GB;
Network interface: 1 Gigabit Ethernet port;
Wireless: dual-band 2.4 GHz / 5 GHz Wi-Fi, Bluetooth 4.2;
USB interface: USB 3.0, 4 ports;
HDMI output interface: 1;
Mini DP output interface: 1;
Power supply: 1;
Software functions: output of physiological indexes and nervousness level; output of macro-expressions, micro-expressions and micro-actions.
Face recognition lens:
Lens: focal length f = 2.8 mm, horizontal field of view 120°;
Minimum illumination: 0.05 lux @ (F1.8, AGC ON);
Digital noise reduction: 2D and 3D;
Color space/compression: H.264 / MJPEG / YUY2 / NV12;
USB interface: 1 × USB 3.0, Type-B female; input voltage: 5 V (USB powered);
Input current: 600 mA (max);
Operating temperature: -10 °C to 40 °C;
Power consumption: 3.0 W (max);
Dimensions: 194 mm × 34 mm × 42 mm (without bracket);
Net weight: 0.34 kg.
Micro-expression recognition and non-contact human-sign auxiliary recognition are used together with the audit all-in-one machine: the face recognition lens captures the face of the user under audit in real time and transmits it to the facial emotion analysis host, which analyzes the collected real-time images and transmits the analysis result to the inner display screen of the audit all-in-one machine. From the results of micro-expression recognition and non-contact sign recognition, the auditor can judge the subject's current emotion, whether the content is correct, whether facts are being concealed and whether the subject is lying. The system also monitors the subject's physiological index changes (heart rate, heart rate variability, blood oxygen and blood pressure) in real time, dynamically tracks the subject's physiological health during the conversation, and gives a real-time alarm prompt when an abnormality occurs.
This embodiment is based on original AI techniques and patented algorithms, integrating psychology, biophysiology, machine vision, deep learning and other technologies. It acquires, by non-contact methods, physiological and psychological indexes that a person cannot consciously control, and identifies and quantifies the person's nervousness by combining them with information such as micro-expressions and micro-actions. While monitoring the subject's physiological index changes (heart rate, heart rate variability, blood oxygen and blood pressure) in real time, it dynamically tracks the subject's physiological health and gives a real-time alarm prompt when an abnormality occurs. It performs non-contact, real-time psychological emotion detection, calculation and tracking and presents a psychological change curve, so that the authenticity of the user's answer information can be judged more accurately.
Example two
In this embodiment, a psychological analysis unit and a physiological analysis unit are arranged in the microprocessor. When the microprocessor receives an abnormal preliminary analysis result, it performs a secondary analysis on the corresponding collected images through the psychological analysis unit and the physiological analysis unit to obtain a secondary analysis result, and starts the alarm module according to that result. The secondary analysis makes the final result more accurate.
A facial expression correspondence table is stored in the facial expression analysis module, pre-storing different facial expressions together with their corresponding psychological-analysis and physiological-analysis links. When a captured facial expression image cannot be matched against any pre-stored expression, it is judged abnormal. When the similarity between the captured expression and a pre-stored expression exceeds 75%, the current expression is judged to match that pre-stored expression, and the corresponding physiological-analysis and psychological-analysis links are entered. A sketch of this gate appears below.
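In the following Python sketch, only the 75% threshold comes from the description; similarity() and the template features are hypothetical stand-ins for the module's matching metric.

    # Sketch of the 75% similarity gate described above.
    PRESTORED = {"calm": [0.1, 0.2], "angry": [0.9, 0.8]}   # template features

    def similarity(features: list, template: list) -> float:
        """Toy metric: 1 minus the mean absolute difference of the features."""
        diffs = [abs(a - b) for a, b in zip(features, template)]
        return 1.0 - sum(diffs) / len(diffs)

    def route(features: list) -> str:
        best_label, best_score = None, 0.0
        for label, template in PRESTORED.items():
            s = similarity(features, template)
            if s > best_score:
                best_label, best_score = label, s
        if best_score > 0.75:        # match: enter the linked analysis steps
            return "enter analysis links for '%s'" % best_label
        return "judged abnormal"     # no sufficiently similar pre-stored expression

    print(route([0.15, 0.25]))       # close to "calm" -> enters its links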
The psychological analysis link connects to the psychological analysis unit in the central processing unit, which can analyze the user's psychological condition from the transmitted current facial expression image. A violent state and a fragile state each trigger an alarm for the corresponding condition; that is, both forward dangerous behaviour and reverse dangerous behaviour trigger the alarm.
Similarly, the physiological analysis link connects to the physiological analysis unit in the central processing unit, which can analyze the user's physiological condition from the transmitted current facial expression image. An injured state triggers an alarm.
The facial expression analysis module first screens, from the captured facial expression images, the valid images whose information content can be recognized, and then extracts the abnormal facial expression images from the valid ones. Screening out images that carry no expression information makes the analysis result obtained from the valid images more accurate. A sketch of this two-stage filter follows.
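A minimal Python sketch of the two-stage filter; the frame fields (face_detected, sharp, label) are illustrative assumptions, as the patent does not define the validity criteria.

    # Stage 1: keep only frames with recognizable expression information.
    # Stage 2: pull the abnormal frames from that valid set.
    def has_expression_info(frame: dict) -> bool:
        return frame.get("face_detected", False) and frame.get("sharp", False)

    def is_abnormal(frame: dict) -> bool:
        return frame.get("label") == "abnormal"

    def extract_abnormal(frames: list) -> list:
        valid = [f for f in frames if has_expression_info(f)]   # stage 1
        return [f for f in valid if is_abnormal(f)]             # stage 2

    frames = [
        {"face_detected": True, "sharp": True,  "label": "abnormal"},
        {"face_detected": True, "sharp": False, "label": "abnormal"},  # dropped
        {"face_detected": True, "sharp": True,  "label": "normal"},
    ]
    print(len(extract_abnormal(frames)))   # -> 1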
The central processing unit is connected to a non-contact sign recognition module used for physiological detection of the user. Through this module, the user's actual physical state can be detected without the user being aware of it.
Example three
In this embodiment, the face recognition lens moves correspondingly with the movement of the user's head, making it convenient to capture the user's facial expressions while answering and thus to further judge the truthfulness of the expressed answer information. The face recognition lens is a camera device whose height can be adjusted freely and which can rotate through three hundred and sixty degrees.
Example four
In this embodiment, a voice recognition module is arranged in the central processing unit to recognize, separately, the speech information and the emotion information in the answer sound. The speech information is extracted to form the initial answer information; the emotion information, together with the preliminary facial expression analysis result, is then used to correct the initial answer information by deleting answers the user was unwilling to give, forming the final answer information. The evaluation result corresponding to the final answer information is retrieved from the central processing unit and sent to the display screen as the secondary analysis result. A sketch of the correction step follows.
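In the following Python sketch, the "reluctant" label standing for answers given unwillingly is an illustrative assumption; the patent does not define how unwillingness is detected.

    # Sketch of the correction step: initial answers whose recognized emotion
    # marks them as unwillingly given are deleted before the evaluation result
    # is looked up.
    def correct_answers(initial: dict, emotion_by_question: dict) -> dict:
        """Keep only the answers the user gave willingly."""
        return {q: a for q, a in initial.items()
                if emotion_by_question.get(q) != "reluctant"}

    initial = {"q1": "yes", "q2": "no", "q3": "yes"}
    emotions = {"q2": "reluctant"}            # from voice + expression analysis
    final = correct_answers(initial, emotions)
    print(final)                              # -> {'q1': 'yes', 'q3': 'yes'}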
Example five
In this embodiment, the questionnaire can be presented in two modes: on the display screen or through a loudspeaker. When the display screen is used, different background colours are provided according to the user's sex and age, so that the user answers in a relaxed state. Specifically, a background display table is preset in the central processing unit, with different background colours for users of different ages and sexes; for example, for a woman aged 24-38 the questionnaire background colour is light purple, while for a man of the same age range it is light grey. Setting different questionnaire background colours for different users visually guides the user to relax, so that the most instinctive answer is given and the subconscious answer is more truthful.
When the questionnaire is broadcast by voice through the loudspeaker, different broadcast voices are used for users of different sexes and age ranges. Specifically, a voice playback table is preset in the central processing unit, with different broadcast voices for users of different ages and sexes; for example, a woman aged 24-38 hears a gentle, soft male voice, while a man of the same age range hears a crisp, brisk female voice. Setting different broadcast voices for different users aurally guides the user to relax, so that the most instinctive answer is given and the subconscious answer is more truthful. A sketch of the two preset lookup tables follows.
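A Python sketch of the preset tables; the two 24-38 rows come from the text above, while the age banding and the default row are illustrative assumptions.

    # Background colour and broadcast voice chosen by sex and age band.
    PRESENTATION = {
        ("female", "24-38"): {"background": "light purple", "voice": "gentle male"},
        ("male",   "24-38"): {"background": "light grey",   "voice": "crisp female"},
    }
    DEFAULT = {"background": "white", "voice": "neutral"}   # assumed fallback

    def presentation_for(sex: str, age: int) -> dict:
        band = "24-38" if 24 <= age <= 38 else "other"
        return PRESENTATION.get((sex, band), DEFAULT)

    print(presentation_for("female", 30))   # -> light purple background, male voice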
The foregoing is merely an example of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail: a person skilled in the art, as of the filing date or priority date of this application, knows all of the ordinary technical knowledge in this field, has access to the routine experimental means available before that date, and is able to combine one or more of the teachings of this application to complete and implement the invention; certain typical known structures or known methods therefore pose no obstacle to its implementation by a person skilled in the art. It should be noted that a person skilled in the art may make several changes and modifications without departing from the structure of the invention; these should also be regarded as within the protection scope of the invention and will not affect the effect of its implementation or the practicability of the patent. The scope of protection claimed by this application is determined by the content of the claims, and the detailed description in the specification may be used to interpret the content of the claims.

Claims (7)

1. A psychological assessment analysis method based on big data, characterized by comprising the following steps: step one, the facial expressions of a user are captured by a face recognition lens to form facial expression images, and the facial expression images are sent to a facial emotion analysis host; at the same time, the user's answer sound is sent to the host through a sound pickup; step two, the facial emotion analysis host recognizes and analyzes the received facial expression images to obtain a video analysis result and sends the video analysis result to the host; and step three, the host judges the authenticity of the received user answer sound according to the received video analysis result.
2. The big-data-based psychological assessment analysis method according to claim 1, wherein in step two the host sends the video analysis result to a display screen for display.
3. The big-data-based psychological assessment analysis method according to claim 1, wherein in step three the host extracts answer information and emotion information from the answer sound, and judges the authenticity of the answer information according to the received video analysis result and the emotion information.
4. The big-data-based psychological assessment analysis method according to claim 1, wherein the host comprises a central processing unit in which a questionnaire is preset; the central processing unit sends the questionnaire to a display screen for display, and when the user completes the questionnaire, the face recognition lens sends the captured facial expression images to the facial emotion analysis host.
5. The big-data-based psychological assessment analysis method according to claim 2, wherein the face recognition lens moves correspondingly with the movement of the user's head.
6. The big-data-based psychological assessment analysis method according to claim 1, wherein a non-contact sign recognition module is arranged in the central processing unit.
7. The big-data-based psychological assessment analysis method according to claim 6, wherein the non-contact sign recognition module is used to compare and judge changes in heart rate, heart rate variability, blood oxygen and blood pressure.
Application CN202110351801.2A, filed 2021-03-31 (priority date 2021-03-31): Psychological assessment analysis method based on big data. Status: Withdrawn.

Priority Application (1)

Application CN202110351801.2A, priority date 2021-03-31, filing date 2021-03-31: Psychological assessment analysis method based on big data.

Publication (1)

Publication number CN113239724A (A), published 2021-08-10.

Family ID: 77130766

Country status: CN, publication CN113239724A (A)


Legal Events

Code PB01: Publication
Code SE01: Entry into force of request for substantive examination
Code WW01: Invention patent application withdrawn after publication (application publication date: 2021-08-10)