CN110110574A - Acquisition method and annotation method for psychological pressure parameters - Google Patents
- Publication number
- CN110110574A CN110110574A CN201810088154.9A CN201810088154A CN110110574A CN 110110574 A CN110110574 A CN 110110574A CN 201810088154 A CN201810088154 A CN 201810088154A CN 110110574 A CN110110574 A CN 110110574A
- Authority
- CN
- China
- Prior art keywords
- face
- moment
- target object
- video
- facial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
Landscapes
- Engineering & Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The present invention provides a method for acquiring and a method for annotating psychological pressure parameters. The annotation method includes: Step 11: record video data of a target object and receive a psychological self-assessment scale fed back by the target object; let any moment during the video recording be moment i-1, and let any moment at which the scale is filled in be moment i-2. Step 12: analyze the video data to obtain the target object's electrocardiographic (ECG) signal parameter, expression features and micro-expression features at moment i-1; analyze the self-assessment scale to obtain the target object's psychological pressure level at moment i-2. Step 13: input the psychological pressure level at moment i-2 into a Bayesian network model to obtain the psychological pressure level at moment i-1. Step 14: label the ECG signal parameter, expression features and micro-expression features of moment i-1 with the psychological pressure level of moment i-1. Based on the method of the invention, the psychological pressure levels and trends of multiple target objects can be monitored or tracked covertly.
Description
Technical field
The present invention relates to the field of computing, and in particular to a method for acquiring and a method for annotating psychological pressure parameters.
Background technique
With rapid economic development, an ever-accelerating pace of social life and increasingly fierce social competition, almost everyone in today's society faces growing psychological (mental) pressure. An estimated 75%-90% of current human disease is related to psychological pressure, so the assessment and recognition of psychological pressure is of great significance to human health and development.
At present there are many research approaches to assessing and recognizing psychological pressure, broadly divided into detection based on physiological signals, detection based on human behavior, and detection based on facial response features. Methods based on physiological signals (ECG, respiration, etc.) achieve higher recognition accuracy; because behavior, facial expression and the like can be consciously controlled, assessing psychological pressure from a single behavioral or facial response feature is insufficiently robust.
Current physiological signal acquisition is contact-based. Common contact measurement mainly uses sensors that directly touch the human body to detect medical information. The drawbacks of contact measurement are that the procedure is relatively complicated, the measurement period is long, skin contact is uncomfortable for the subject, the subject is constrained during detection, and the measurement is strongly affected by motion. Moreover, acquiring physiological signals requires the user's consent and cooperation; for application scenarios that require covert monitoring of a target object's psychological state, contact methods cannot be used at all.
To facilitate obtaining the psychological state of a target object, it is necessary to develop a sufficiently robust, non-contact method for assessing psychological pressure.
Summary of the invention
In view of this, the present invention provides a method for acquiring and a method for annotating psychological pressure parameters, to solve the problem of covertly monitoring the psychological pressure of a target object.
The present invention provides a method for annotating psychological pressure parameters, the psychological pressure parameters including an ECG signal parameter, expression features and micro-expression features, the method comprising:
Step 11: record video data of a target object and receive a psychological self-assessment scale fed back by the target object; let any moment during the video recording be moment i-1, and let any moment at which the target object fills in the scale be moment i-2;
Step 12: analyze the video data to obtain the target object's ECG signal parameter, expression features and micro-expression features at moment i-1; analyze the self-assessment scale to obtain the target object's psychological pressure level at moment i-2;
Step 13: input the psychological pressure level at moment i-2 into a Bayesian network model to obtain the psychological pressure level at moment i-1;
Step 14: label the ECG signal parameter, expression features and micro-expression features of moment i-1 with the psychological pressure level of moment i-1.
The present invention also provides a method for acquiring psychological pressure parameters, the psychological pressure parameters including an ECG signal parameter, expression features and micro-expression features, the method comprising:
Step 21: detect whether the duration of the video data to be analyzed exceeds a third preset duration; if so, execute step 22;
Step 22: cut the video data to be analyzed into N sub-videos of the third preset duration, and execute step 23 on each sub-video in turn;
Step 23: perform face detection on the n-th sub-video, n = 1, 2, ..., N; if a face is detected, number the face and execute step 24;
Step 24: track the face images corresponding to the face number in the n-th sub-video, and judge whether the number of tracked face images exceeds a fourth preset value; if so, let any moment during the recording of the n-th sub-video be moment j and execute step 25; if not, return to step 23 and continue detecting other faces in the n-th sub-video until detection of the n-th sub-video is finished;
Step 25: analyze the face images corresponding to the face number to obtain the ECG signal parameter, expression features and micro-expression features at moment j of the target object corresponding to the face number; return to step 23 and continue detecting other faces in the n-th sub-video until detection of the n-th sub-video is finished.
The present invention defines a new kind of psychological pressure parameter comprising an ECG signal parameter, an expression parameter and a micro-expression parameter, all of which can be obtained by analyzing video data of the user. Because video data is non-contact data that is easy to obtain, no cooperation from the target object is required and no interference is introduced into the target object's daily work and life. At the same time, because the psychological pressure parameter of this application includes not only the ECG signal parameter but also the expression and micro-expression parameters, it overcomes the insufficient robustness of recognizing a psychological pressure state (or level) from a single behavioral or facial response feature.
To improve the precision with which a machine learning model of the invention recognizes a user's psychological pressure state from the psychological pressure parameters, the invention proposes the annotation method for psychological pressure parameters, i.e. a method for generating sample data. Compared with contact methods, the sample data of psychological pressure parameters obtained by the invention is easier to collect, and sufficient sample data ensures the accuracy and robustness of the machine learning model in recognizing the user's psychological pressure state. The trained machine learning model can, based on video data, covertly track or monitor the psychological pressure state and trend of a target object, and can obtain the psychological pressure states of multiple target objects simultaneously.
Detailed description of the invention
Fig. 1 is a flowchart of the annotation method for psychological pressure parameters of the present invention;
Fig. 2 is a flowchart of the acquisition method for psychological pressure parameters of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in detail below with reference to the drawings and specific embodiments.
The present invention proposes a method for annotating psychological pressure parameters, the psychological pressure parameters including an ECG signal parameter, expression features and micro-expression features. As shown in Fig. 1, the method comprises:
Step 11 (S101): record video data of a target object and receive a psychological self-assessment scale fed back by the target object; let any moment during the video recording be moment i-1, and let any moment at which the target object fills in the scale be moment i-2;
The recording duration of the video data is generally no greater than 30 minutes; this application places no specific limit on it. In practice the duration is determined by the state of the target object and may be held to 1, 5 or 10 minutes; it should be ensured that the target user remains in a relatively stable psychological pressure state while the video is recorded. The psychological self-assessment scale may be filled in during, before or after the recording, so as to capture the target user's true psychological pressure state at a moment close to the recorded video data.
The expression features of this application refer to the mood or emotion class corresponding to an expression, such as happiness, anger, sadness or joy; they may also be the feature values corresponding to a mood or emotion class, such as the expression feature values extracted by various expression recognition methods, where different expression feature values correspond to different moods or emotion classes.
Correspondingly, the micro-expression features of this application correspond to local or subtle expression features of the face, and may be the local expression feature values extracted by various micro-expression recognition methods, such as the displacement of facial feature points within a preset time.
Moment i-1 in step 11 is the annotation time of the psychological pressure parameters obtained from the video data.
Step 12 (S102): analyze the video data to obtain the target object's ECG signal parameter, expression features and micro-expression features at moment i-1; analyze the self-assessment scale to obtain the target object's psychological pressure level at moment i-2;
Step 13 (S103): input the psychological pressure level at moment i-2 into a Bayesian network model to obtain the psychological pressure level at moment i-1;
When the user is in different psychological states at the moment of video recording and the moment of filling in the self-assessment scale, using the Bayesian network model to infer the psychological pressure level at the adjacent moment reduces annotation error.
Step 14 (S104): label the ECG signal parameter, expression features and micro-expression features of moment i-1 with the psychological pressure level of moment i-1.
The psychological pressure parameters proposed by the present invention comprise an ECG signal parameter, an expression parameter and a micro-expression parameter, all of which can be obtained by analyzing video data of the user. Because video data is non-contact data that is easy to obtain and requires no cooperation from the target object, it can be used to covertly track or monitor the target object's psychological pressure. At the same time, because the psychological pressure parameters of this application include not only the ECG signal parameter but also the expression and micro-expression parameters, they overcome the insufficient robustness of recognizing a psychological pressure state from a single behavioral or facial response feature.
To improve the precision with which a machine learning model of the invention recognizes a user's psychological pressure state from the psychological pressure parameters, the invention proposes the annotation method of Fig. 1, i.e. a method for generating sample data. Compared with contact methods, the sample data of psychological pressure parameters obtained by the invention is easier to collect, and sufficient sample data ensures the accuracy and robustness of the machine learning model in recognizing the user's psychological pressure state.
The present invention does not limit the method by which the psychological pressure parameters are obtained from video data; any method that can obtain the psychological pressure parameters of the invention from video is applicable. Examples of acquisition methods for the psychological pressure parameters are given below:
(1) ECG signal parameter
Track the face images of the target object in the video data and resolve the obtained face image signal into RGB three-channel signals; input the three-channel signals into an independent component analysis (ICA) algorithm, which outputs independent source signals; perform peak detection and spectrum analysis on the independent source signals to obtain ECG signal data; statistically analyze the resulting ECG signal data and take a representative value as the data of the ECG signal parameter at moment i-1.
The representative value of the ECG signal data may be the mean or median of the ECG signal data.
Preferably, when resolving the image signal into three-channel signals, only the image signal of a preset facial region need be parsed: locate the feature points of the face image, extract the image of the preset facial region according to the feature points, and resolve the image of the preset region into RGB three-channel signals.
The preset facial region may include the forehead region, the left cheek region and/or the right cheek region.
The ECG signal parameter of this application includes heart rate, high- and low-frequency signal power, heart rate variability (HRV), and the like.
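The spectrum-analysis step above can be illustrated with a minimal heart-rate estimator: given one independent source signal sampled at the video frame rate, find the dominant frequency in a plausible heart-rate band. The band limits (0.75-3.0 Hz, i.e. 45-180 bpm) and the crude band-scan DFT are illustrative assumptions, not values from the patent; the ICA decomposition itself is omitted here.

```python
import math

def dominant_frequency(samples, fps, f_lo=0.75, f_hi=3.0, step=0.05):
    # Crude band-limited DFT: correlate the detrended signal with sinusoids
    # on a frequency grid and keep the frequency with the most power.
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    best_f, best_p = f_lo, -1.0
    k = 0
    while True:
        f = f_lo + k * step
        if f > f_hi:
            break
        re = sum(c * math.cos(2 * math.pi * f * i / fps)
                 for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * f * i / fps)
                 for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = f, power
        k += 1
    return best_f

def heart_rate_bpm(samples, fps):
    # 0.75-3.0 Hz corresponds to 45-180 beats per minute
    return dominant_frequency(samples, fps) * 60.0
```

In practice an FFT over the whole signal would replace the grid scan, and peak detection in the time domain would additionally yield beat-to-beat intervals for HRV.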
(2) Expression features
Track the face images of the target object in the video data and perform local binary pattern (LBP) feature extraction on the face images to obtain the LBP feature values of the face images; statistically analyze (all) the LBP feature values obtained from the video data and take a representative value as the feature value of the expression features at moment i-1.
The representative value of the expression features may be the LBP feature value with the largest share of the probability distribution.
Further, performing LBP feature extraction on a face image may include performing LBP feature extraction on the eye, mouth and forehead regions of the face image.
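A minimal sketch of the basic 3×3 LBP operator named above, assuming a grayscale image given as a list of rows; real implementations typically use rotation-invariant or uniform-pattern variants, which the patent does not specify.

```python
def lbp_code(img, r, c):
    # 3x3 local binary pattern: compare the 8 neighbours (clockwise from the
    # top-left) with the centre pixel and pack the comparisons into one byte.
    centre = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                  img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    # Histogram of LBP codes over all interior pixels of a grayscale image;
    # this histogram is the per-image "LBP feature value" in the text above.
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```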
(3) Micro-expression features
Track the positions of the feature points of the target object's face images in the video data and detect whether a feature point's position changes within a second preset time; if so, extract the information of the feature points whose positions changed and the corresponding displacements; statistically analyze (all) the feature point information and corresponding displacements obtained from the video data and take a representative value as the micro-expression features at moment i-1.
For example, the second preset time may be 0.04 seconds or another time range in which micro-expressions occur.
The representative value of the micro-expression features may be the feature point information with the largest share of the probability distribution together with its corresponding displacement, or the mean or maximum of the displacements corresponding to the feature point with the largest share of the probability distribution.
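The displacement test described above can be sketched as follows; the landmark names, the 2-pixel threshold and the per-frame track representation are illustrative assumptions, not values from the patent.

```python
def micro_expression_events(track, fps, window_s=0.04, min_shift=2.0):
    # track: one dict per frame mapping a landmark name to its (x, y) position.
    # Flag every landmark that moves at least min_shift pixels within window_s
    # seconds -- the patent's "second preset time".
    window = max(1, round(window_s * fps))
    events = []
    for i in range(len(track) - window):
        for name, (x0, y0) in track[i].items():
            x1, y1 = track[i + window][name]
            shift = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if shift >= min_shift:
                events.append((i, name, shift))
    return events
```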
The sample data of the psychological pressure parameters of the present invention is obtained by the above methods. Choosing representative values as the characterization of the psychological pressure parameters avoids the influence of random factors as far as possible and improves the correlation between the psychological pressure parameters and the psychological pressure level. Training a machine learning model on this sample data ensures the recognition precision of the model. This application places no limitation on the machine learning model; any artificial intelligence model may be used. Once the machine learning model is trained, it can be put into service.
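The "representative value" options mentioned in the three subsections above (mean, median, or the value with the largest share of the distribution) can be collected into one small helper; the `kind` labels are invented for illustration.

```python
from collections import Counter
from statistics import mean, median

def representative_value(values, kind):
    # Condense a distribution of per-frame measurements into one value.
    if kind == "mean":
        return mean(values)
    if kind == "median":
        return median(values)
    if kind == "mode":  # the value with the largest share of the distribution
        return Counter(values).most_common(1)[0][0]
    raise ValueError("unknown kind: %s" % kind)
```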
As shown in Fig. 2, the invention also includes a method for acquiring psychological pressure parameters, the psychological pressure parameters including an ECG signal parameter, expression features and micro-expression features, the method comprising:
Step 21: detect whether the duration of the video data to be analyzed exceeds a third preset duration; if so, execute step 22.
The video data to be analyzed may be historical video data or video data obtained by real-time monitoring.
The third preset duration may be 5 minutes, 10 minutes, or another video detection period.
Step 22: cut the video data to be analyzed into N sub-videos of the third preset duration, and execute step 23 on each sub-video in turn.
Step 23: perform face detection on the n-th sub-video, n = 1, 2, ..., N; if a face is detected, number the face and execute step 24.
Step 24: track the face images corresponding to the face number in the n-th sub-video, and judge whether the number of tracked face images exceeds the fourth preset value; if so, let any moment during the recording of the n-th sub-video be moment j and execute step 25; if not, return to step 23 and continue detecting other faces in the n-th sub-video until detection of the n-th sub-video is finished.
The fourth preset value is set in step 24 to ensure that the psychological pressure parameters obtained by analyzing the face images have a certain data precision.
Step 25: analyze the face images corresponding to the face number to obtain the ECG signal parameter, expression features and micro-expression features at moment j of the target object corresponding to the face number; return to step 23 and continue detecting other faces in the n-th sub-video until detection of the n-th sub-video is finished.
Each execution of step 25 obtains the psychological pressure parameters of one target object in the n-th sub-video; however, the n-th sub-video may contain face images of multiple target objects, so all target objects in the n-th sub-video must be recognized and analyzed. After one sub-video has been analyzed, analysis of the next sub-video begins.
Inputting the psychological pressure parameters at moment j of any target object obtained via Fig. 2 into the trained machine learning model yields that target object's psychological pressure state level at moment j. The method of this application can therefore obtain the psychological pressure parameters of a target object from video data without the target object's cooperation and without interfering with the target object's daily work and life, and can obtain the psychological pressure states of multiple target objects simultaneously. It is particularly suitable for occasions such as offices, trains and hospitals, for covertly monitoring or tracking the psychological pressure states and trends of multiple target objects.
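Steps 22 and 24 — cutting the video into sub-videos of the third preset duration and gating each face track on a minimum image count (the fourth preset value) — can be sketched as follows; the concrete durations, threshold and face-number labels are illustrative assumptions.

```python
def split_segments(total_s, segment_s):
    # Step 22: cut the video into whole segments of the preset duration.
    # A trailing remainder shorter than segment_s is simply dropped in
    # this sketch.
    n = int(total_s // segment_s)
    return [(k * segment_s, (k + 1) * segment_s) for k in range(n)]

def analyzable_faces(face_frame_counts, min_frames):
    # Step 24: keep only the face numbers whose tracked image count exceeds
    # the fourth preset value, so downstream analysis has enough data.
    return [fid for fid, count in face_frame_counts.items()
            if count > min_frames]
```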
In the method for Fig. 2, step 21 further include: if the duration of video data to be analyzed is preset less than or equal to third
It is long, then follow the steps 26;
Step 26: Face datection being carried out to video to be analyzed and executes step after numbering face if detecting face
27;
Step 27: the face tracked in video to be analyzed numbers corresponding facial image, judges that the face traced into is numbered
Whether the quantity of corresponding facial image is greater than the 4th preset value, if so, any moment when enabling video record to be analyzed is k
Moment executes step 28, if not, return step 26, continues to test other faces in video to be analyzed, until view to be analyzed
Frequency detection finishes;
Step 28: analysis face numbers corresponding facial image, and the face for obtaining the k moment numbers corresponding target object
Electrocardiosignal parameter, expressive features and micro- expressive features, return step 26 continue to test other faces in video to be analyzed,
Until video detection to be analyzed finishes.
In step 25 above, analyzing the face images corresponding to a face number to obtain the ECG signal parameter at moment j of the corresponding target object includes: resolving the face image signals corresponding to the face number into RGB three-channel signals; inputting the three-channel signals into an ICA algorithm, which outputs independent source signals; performing peak detection and spectrum analysis on the independent source signals to obtain ECG signal data; statistically analyzing the ECG signal data corresponding to the face number obtained from the n-th sub-video; and taking a representative value as the data of the ECG signal parameter at moment j of the target object corresponding to the face number.
In step 28 above, the same analysis is applied to the video to be analyzed: the ECG signal data corresponding to the face number is statistically analyzed, and a representative value is taken as the data of the ECG signal parameter at moment k of the corresponding target object.
Further, resolving the face image signals corresponding to a face number into RGB three-channel signals includes: locating the feature points of the face images corresponding to the face number, extracting the image of the preset facial region according to the feature points, and resolving the image of the preset region into RGB three-channel signals.
The representative value of the ECG signal data may be the mean or median of the ECG signal data corresponding to one sub-video or to the video to be analyzed.
The preset facial region may include the forehead region, the left cheek region and/or the right cheek region.
In step 25 above, analyzing the face images corresponding to a face number to obtain the expression features at moment j of the corresponding target object further includes: performing LBP feature extraction on the face images corresponding to the face number to obtain their LBP feature values; statistically analyzing (all) the LBP feature values corresponding to the face number obtained from the n-th sub-video; and taking a representative value as the expression features at moment j of the target object corresponding to the face number.
In step 28 above, the same is done for the video to be analyzed, taking a representative value of the LBP feature values corresponding to the face number as the expression features at moment k of the corresponding target object.
The representative value of the expression features may be the LBP feature value with the largest share of the probability distribution within one sub-video or within the video to be analyzed.
Further, performing LBP feature extraction on a face image may include performing LBP feature extraction on the eye, mouth and forehead regions of the face image.
In step 25 above, analyzing the face images corresponding to a face number to obtain the micro-expression features at moment j of the corresponding target object further includes: tracking the positions of the feature points of the face images corresponding to the face number; detecting whether a feature point's position changes within the second preset time, and if so extracting the information of the feature points whose positions changed and the corresponding displacements; statistically analyzing (all) the feature point information and corresponding displacements for the face number in the n-th sub-video; and taking a representative value as the micro-expression features at moment j of the corresponding target object.
In step 28 above, the same is done for the video to be analyzed, taking a representative value of the feature point information and corresponding displacements for the face number as the micro-expression features at moment k of the corresponding target object.
The representative value of the micro-expression features may be the feature point information with the largest share of the probability distribution within one sub-video or within the video to be analyzed together with its corresponding displacement, or the mean or maximum of the displacements corresponding to the feature point with the largest share of the probability distribution.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit its scope; any modification, equivalent substitution or improvement made within the spirit and principles of the technical solution of the present invention shall fall within the protection scope of the present invention.
Claims (15)
1. A method for annotating psychological pressure parameters, wherein the psychological pressure parameters comprise an electrocardiogram (ECG) signal parameter, expression features and micro-expression features, and the method comprises:
Step 11: recording video data of a target object and receiving a psychological self-rating scale fed back by the target object, letting any moment during the recording of the video data be moment i-1 and any moment at which the target object fills in the psychological self-rating scale be moment i-2;
Step 12: obtaining the ECG signal parameter, expression features and micro-expression features of the target object at moment i-1 by analyzing the video data, and obtaining the psychological pressure level of the target object at moment i-2 by analyzing the psychological self-rating scale;
Step 13: inputting the psychological pressure level at moment i-2 into a Bayesian network model to obtain the psychological pressure level at moment i-1;
Step 14: annotating the ECG signal parameter, expression features and micro-expression features at moment i-1 with the psychological pressure level at moment i-1.
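As an illustrative sketch only (not part of the claimed subject matter), steps 13–14 reduce, in the simplest single-edge case, to looking up a conditional probability table and attaching the inferred level as a label. The three-level table values, function names and feature dictionary below are all hypothetical; in practice the probabilities would be learned from annotated data:

```python
import numpy as np

# Hypothetical conditional probability table P(level at i-1 | level at i-2).
# A real Bayesian network model would be trained, and could contain more nodes.
TRANSITION = np.array([
    [0.80, 0.15, 0.05],  # from level 0 (low)
    [0.20, 0.60, 0.20],  # from level 1 (medium)
    [0.05, 0.25, 0.70],  # from level 2 (high)
])

def infer_level_at_i1(level_at_i2: int) -> int:
    """Step 13: propagate the self-reported level at moment i-2 through
    the single-edge network to the most likely level at moment i-1."""
    return int(np.argmax(TRANSITION[level_at_i2]))

def annotate(features: dict, level_at_i2: int) -> dict:
    """Step 14: attach the inferred level as the label of the video
    features extracted at moment i-1."""
    return {**features, "pressure_level": infer_level_at_i1(level_at_i2)}
```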
2. The method according to claim 1, wherein a facial image of the target object in the video data is tracked; the facial image signal is decomposed into an RGB three-channel signal; the three-channel signal is input into an independent component analysis (ICA) algorithm, which outputs independent source signals; peak detection and spectrum analysis are performed on the independent source signals to obtain ECG signal data; and the ECG signal data is statistically analyzed, a representative value thereof being taken as the data of the ECG signal parameter.
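For illustration only, the ICA-plus-spectrum-analysis pipeline of claim 2 can be sketched as below. This is a minimal symmetric FastICA with a tanh nonlinearity written from scratch (a production system would use a tested implementation such as scikit-learn's `FastICA`), and the band limits and function names are the author's assumptions, not taken from the patent:

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for a small
    channels x samples matrix X, e.g. the RGB means of a face region
    over time. Sketch only; no convergence test is performed."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the channel covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X
    W = rng.standard_normal((X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)
        W_new = (g @ Xw.T) / Xw.shape[1] - np.diag((1.0 - g**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt  # symmetric decorrelation: W <- (W W^T)^(-1/2) W
    return W @ Xw  # estimated independent source signals

def heart_rate_bpm(sources, fps):
    """Spectrum analysis: dominant peak in a plausible pulse band
    (0.7-4 Hz here, an assumption) across all sources, in BPM."""
    freqs = np.fft.rfftfreq(sources.shape[1], d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    spectra = np.abs(np.fft.rfft(sources, axis=1))[:, band]
    _, col = np.unravel_index(np.argmax(spectra), spectra.shape)
    return 60.0 * freqs[band][col]
```

On synthetic data mixing a 1.2 Hz "pulse" sinusoid with slower motion and noise, the recovered dominant frequency lands near 72 BPM.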
3. The method according to claim 2, wherein decomposing the facial image signal into an RGB three-channel signal comprises:
locating feature points of the facial image, extracting an image of a preset facial region according to the feature points, and decomposing the image of the preset region into an RGB three-channel signal.
4. The method according to claim 3, wherein the preset facial region comprises the forehead region, the left cheek region and/or the right cheek region.
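For illustration only, extracting a preset region from landmarks and reducing it to one RGB value per frame (the per-frame samples of the three-channel signal in claims 3–4) might look like the following; the landmark indexing is hypothetical, not the patent's:

```python
import numpy as np

def roi_rgb_means(frame, landmarks, region_indices):
    """Average R, G, B over the rectangle spanned by selected landmarks.

    frame          -- H x W x 3 image in RGB order
    landmarks      -- (N, 2) integer array of (x, y) feature points
    region_indices -- indices of the landmarks bounding the preset
                      region (e.g. forehead or a cheek); illustrative
    """
    pts = landmarks[region_indices]
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    patch = frame[y0:y1 + 1, x0:x1 + 1]
    return patch.reshape(-1, 3).mean(axis=0)  # one mean per channel
```

Calling this on every frame of a tracked face yields a 3 x frames matrix suitable as input to the ICA step of claim 2.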
5. The method according to claim 1, wherein a facial image of the target object in the video data is tracked; local binary pattern (LBP) feature extraction is performed on the facial image to obtain LBP characteristic values of the facial image; and the LBP characteristic values obtained from the video data are statistically analyzed, a representative value thereof being taken as the expression features.
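For illustration only, the basic 3x3 LBP extraction named in claim 5 can be sketched with NumPy as below (libraries such as scikit-image provide tested variants); summarizing one image as a normalized 256-bin code histogram is the author's choice of "LBP characteristic value":

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 3x3 local binary patterns over a grayscale image,
    returned as a normalized 256-bin histogram."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # Eight neighbours, clockwise from the top-left corner, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```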
6. The method according to claim 1, wherein the positions of feature points of a facial image of the target object in the video data are tracked; whether the positions of the feature points change within a second preset time is detected; if so, the information of the changed feature points and the corresponding position variations are extracted; and the feature point information and corresponding position variations obtained from the video data are statistically analyzed, a representative value thereof being taken as the micro-expression features.
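For illustration only, detecting which tracked feature points move within the "second preset time" (claim 6) might be sketched as a sliding-window displacement test; the window and threshold values are placeholders, not values from the patent:

```python
import numpy as np

def changed_points(tracks, window, threshold=1.0):
    """Find landmarks that move more than `threshold` pixels within
    any window of `window` consecutive frames.

    tracks -- (frames, points, 2) array of tracked (x, y) positions
    Returns a list of (point index, displacement) pairs, each point
    reported at most once.
    """
    changed = []
    for p in range(tracks.shape[1]):
        for start in range(tracks.shape[0] - window + 1):
            seg = tracks[start:start + window, p]
            disp = np.linalg.norm(seg - seg[0], axis=1).max()
            if disp > threshold:
                changed.append((p, float(disp)))
                break  # record this feature point once, then move on
    return changed
```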
7. A method for acquiring psychological pressure parameters, wherein the psychological pressure parameters comprise an electrocardiogram (ECG) signal parameter, expression features and micro-expression features, and the method comprises:
Step 21: detecting whether the duration of video data to be analyzed is greater than a third preset duration, and if so, executing Step 22;
Step 22: cutting the video data to be analyzed into N sub-videos of the third preset duration, and executing Step 23 for each sub-video;
Step 23: performing face detection on the n-th sub-video, n = 1, 2, ..., N, and if a face is detected, numbering the face and then executing Step 24;
Step 24: tracking the facial images corresponding to the face number in the n-th sub-video, and judging whether the number of traced facial images corresponding to the face number is greater than a fourth preset value; if so, letting any moment during the recording of the n-th sub-video be moment j and executing Step 25; if not, returning to Step 23 to continue detecting other faces in the n-th sub-video until the detection of the n-th sub-video is finished;
Step 25: analyzing the facial images corresponding to the face number to obtain the ECG signal parameter, expression features and micro-expression features, at moment j, of the target object corresponding to the face number, and returning to Step 23 to continue detecting other faces in the n-th sub-video until the detection of the n-th sub-video is finished.
8. The method according to claim 7, wherein Step 21 further comprises: if the duration of the video data to be analyzed is less than or equal to the third preset duration, executing Step 26;
Step 26: performing face detection on the video to be analyzed, and if a face is detected, numbering the face and then executing Step 27;
Step 27: tracking the facial images corresponding to the face number in the video to be analyzed, and judging whether the number of traced facial images corresponding to the face number is greater than the fourth preset value; if so, letting any moment during the recording of the video to be analyzed be moment k and executing Step 28; if not, returning to Step 26 to continue detecting other faces in the video to be analyzed until the detection of the video to be analyzed is finished;
Step 28: analyzing the facial images corresponding to the face number to obtain the ECG signal parameter, expression features and micro-expression features, at moment k, of the target object corresponding to the face number, and returning to Step 26 to continue detecting other faces in the video to be analyzed until the detection of the video to be analyzed is finished.
9. The method according to claim 7, wherein, in Step 25, analyzing the facial images corresponding to the face number to obtain the ECG signal parameter, at moment j, of the target object corresponding to the face number comprises: decomposing the facial image signal corresponding to the face number into an RGB three-channel signal; inputting the three-channel signal into an independent component analysis (ICA) algorithm, which outputs independent source signals; performing peak detection and spectrum analysis on the independent source signals to obtain ECG signal data; and statistically analyzing the ECG signal data obtained for that face number from the n-th sub-video, a representative value thereof being taken as the data of the ECG signal parameter, at moment j, of the target object corresponding to the face number.
10. The method according to claim 8, wherein, in Step 28, analyzing the facial images corresponding to the face number to obtain the ECG signal parameter, at moment k, of the target object corresponding to the face number comprises: decomposing the facial image signal corresponding to the face number into an RGB three-channel signal; inputting the three-channel signal into an independent component analysis (ICA) algorithm, which outputs independent source signals; performing peak detection and spectrum analysis on the independent source signals to obtain ECG signal data; and statistically analyzing the ECG signal data obtained for that face number from the video to be analyzed, a representative value thereof being taken as the data of the ECG signal parameter, at moment k, of the target object corresponding to the face number.
11. The method according to claim 9 or 10, wherein decomposing the facial image signal corresponding to the face number into an RGB three-channel signal comprises:
locating feature points of the facial images corresponding to the face number, extracting an image of a preset facial region according to the feature points, and decomposing the image of the preset region into an RGB three-channel signal.
12. The method according to claim 7, wherein, in Step 25, analyzing the facial images corresponding to the face number to obtain the expression features, at moment j, of the target object corresponding to the face number comprises: performing local binary pattern (LBP) feature extraction on the facial images corresponding to the face number to obtain LBP characteristic values of the facial images; and statistically analyzing the LBP characteristic values obtained for that face number from the n-th sub-video, a representative value thereof being taken as the expression features, at moment j, of the target object corresponding to the face number.
13. The method according to claim 8, wherein, in Step 28, analyzing the facial images corresponding to the face number to obtain the expression features, at moment k, of the target object corresponding to the face number comprises: performing local binary pattern (LBP) feature extraction on the facial images corresponding to the face number to obtain LBP characteristic values of the facial images; and statistically analyzing the LBP characteristic values obtained for that face number from the video data to be analyzed, a representative value thereof being taken as the expression features, at moment k, of the target object corresponding to the face number.
14. The method according to claim 7, wherein, in Step 25, analyzing the facial images corresponding to the face number to obtain the micro-expression features, at moment j, of the target object corresponding to the face number comprises: tracking the positions of feature points of the facial images corresponding to the face number; detecting whether the positions of the feature points change within a second preset time; if so, extracting the information of the changed feature points and the corresponding position variations; and statistically analyzing the feature point information and corresponding position variations obtained for that face number from the n-th sub-video, a representative value thereof being taken as the micro-expression features, at moment j, of the target object corresponding to the face number.
15. The method according to claim 8, wherein, in Step 28, analyzing the facial images corresponding to the face number to obtain the micro-expression features, at moment k, of the target object corresponding to the face number comprises: tracking the positions of feature points of the facial images corresponding to the face number; detecting whether the positions of the feature points change within a second preset time; if so, extracting the information of the changed feature points and the corresponding position variations; and statistically analyzing the feature point information and corresponding position variations obtained for that face number from the video data to be analyzed, a representative value thereof being taken as the micro-expression features, at moment k, of the target object corresponding to the face number.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810088154.9A CN110110574A (en) | 2018-01-30 | 2018-01-30 | The acquisition methods and mask method of psychological pressure parameter |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110110574A true CN110110574A (en) | 2019-08-09 |
Family
ID=67483059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810088154.9A Pending CN110110574A (en) | 2018-01-30 | 2018-01-30 | The acquisition methods and mask method of psychological pressure parameter |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110110574A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1525259A (en) * | 1975-10-03 | 1978-09-20 | Smith R | Radio transmitter pressure and speed measuring writing pe |
WO2001031629A1 (en) * | 1999-10-29 | 2001-05-03 | Sony Corporation | Signal processing device and method therefor and program storing medium |
CN1711961A (en) * | 2004-06-22 | 2005-12-28 | 索尼株式会社 | Bio-information processing apparatus and video/sound reproduction apparatus |
CN102542849A (en) * | 2012-01-20 | 2012-07-04 | 东南大学 | Formative evaluation system |
CN103442252A (en) * | 2013-08-21 | 2013-12-11 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for processing video |
CN106063702A (en) * | 2016-05-23 | 2016-11-02 | 南昌大学 | A kind of heart rate detection system based on facial video image and detection method |
CN106264568A (en) * | 2016-07-28 | 2017-01-04 | 深圳科思创动实业有限公司 | Contactless emotion detection method and device |
CN106407935A (en) * | 2016-09-21 | 2017-02-15 | 俞大海 | Psychological test method based on face images and eye movement fixation information |
CN107273661A (en) * | 2017-05-18 | 2017-10-20 | 深圳市前海安测信息技术有限公司 | Questionnaire Survey System and method for health control |
CN107405072A (en) * | 2014-11-11 | 2017-11-28 | 全球压力指数企业有限公司 | For generating the stress level information of individual and the system and method for compressive resilience horizontal information |
-
2018
- 2018-01-30 CN CN201810088154.9A patent/CN110110574A/en active Pending
Non-Patent Citations (2)
Title |
---|
TATSUYA SHIBATA ET AL: "Emotion recognition modeling of sitting postures by using pressure sensors and accelerometers", 《PROCEEDINGS OF THE 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR2012)》 * |
ZHANG HAI ET AL: "Bayesian networks and their application in the field of psychology", 《现代预防医学》 (Modern Preventive Medicine) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110916635A (en) * | 2019-11-15 | 2020-03-27 | 北京点滴灵犀科技有限公司 | Psychological pressure grading and training method and device |
CN113229790A (en) * | 2021-05-17 | 2021-08-10 | 浙江大学 | Non-contact mental stress assessment system |
WO2023002636A1 (en) * | 2021-07-21 | 2023-01-26 | 株式会社ライフクエスト | Stress assessment device, stress assessment method, and program |
JPWO2023002636A1 (en) * | 2021-07-21 | 2023-01-26 | ||
JP7323248B2 (en) | 2021-07-21 | 2023-08-08 | 株式会社ライフクエスト | STRESS DETERMINATION DEVICE, STRESS DETERMINATION METHOD, AND PROGRAM |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | On the usability of electroencephalographic signals for biometric recognition: A survey | |
Gravina et al. | Automatic methods for the detection of accelerative cardiac defense response | |
KR101689021B1 (en) | System for determining psychological state using sensing device and method thereof | |
CN106037720B (en) | Mix the medical application system of continuous information analytical technology | |
Matsubara et al. | Emotional arousal estimation while reading comics based on physiological signal analysis | |
US10368792B2 (en) | Method for detecting deception and predicting interviewer accuracy in investigative interviewing using interviewer, interviewee and dyadic physiological and behavioral measurements | |
CN110110574A (en) | The acquisition methods and mask method of psychological pressure parameter | |
Dobbins et al. | Signal processing of multimodal mobile lifelogging data towards detecting stress in real-world driving | |
Gunawardhane et al. | Non invasive human stress detection using key stroke dynamics and pattern variations | |
Di Lascio et al. | Laughter recognition using non-invasive wearable devices | |
CN109528217A (en) | A kind of mood detection and method for early warning based on physiological vibrations analysis | |
CN110459291A (en) | Aiding smoking cessation small watersheds and method based on mobile intelligent terminal | |
CN116322479A (en) | Electrocardiogram processing system for detecting and/or predicting cardiac events | |
CN115299947A (en) | Psychological scale confidence evaluation method and system based on multi-modal physiological data | |
Tiwari et al. | Breathing rate complexity features for “in-the-wild” stress and anxiety measurement | |
Amin et al. | A wearable exam stress dataset for predicting grades using physiological signals | |
CN115089179A (en) | Psychological emotion insights analysis method and system | |
CN111317446A (en) | Sleep structure automatic analysis method based on human muscle surface electric signals | |
Li et al. | Multi-modal emotion recognition based on deep learning of EEG and audio signals | |
Tiwari et al. | Stress and anxiety measurement" in-the-wild" using quality-aware multi-scale hrv features | |
Nogueira et al. | A regression-based method for lightweight emotional state detection in interactive environments | |
CN109124619A (en) | A kind of personal emotion arousal recognition methods using multi-channel information synchronization | |
CN112515675B (en) | Emotion analysis method based on intelligent wearable device | |
US20230054041A1 (en) | Computer-implemented method for generating an annotated photoplethysmography (ppg) signal | |
Pradhan et al. | Multi-day analysis of wrist electromyogram-based biometrics for authentication and personal identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
AD01 | Patent right deemed abandoned |
Effective date of abandoning: 20220405 |