CN109766917A - Interview video data processing method, apparatus, computer device and storage medium - Google Patents
- Publication number
- CN109766917A CN109766917A CN201811546820.5A CN201811546820A CN109766917A CN 109766917 A CN109766917 A CN 109766917A CN 201811546820 A CN201811546820 A CN 201811546820A CN 109766917 A CN109766917 A CN 109766917A
- Authority
- CN
- China
- Prior art keywords
- micro
- interviewee
- expression
- obtains
- training set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
This application relates to random forest models in artificial intelligence, and provides an interview video data processing method, apparatus, computer device and storage medium. The method includes: obtaining an interviewee video, and capturing an interview picture from the interviewee video at preset intervals; performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture; inputting the micro-expression features into a trained micro-expression recognition model for recognition to obtain a micro-expression output feature; obtaining the interviewee's micro-expression according to the micro-expression output feature and a preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the micro-expression, and sending the current emotional state to an interview terminal. The method can save resources.
Description
Technical field
This application relates to the field of Internet technology, and in particular to a micro-expression-based interview method, apparatus, computer device and storage medium.
Background technique
A facial micro-expression (ME) is a brief, involuntary facial movement that leaks a genuine emotion the person is trying to conceal. A micro-expression typically lasts between 1/25 and 1/5 of a second and usually appears only in specific regions of the face. At present, when an enterprise conducts an interview, the interviewer and the interviewee usually talk face to face, and the interviewer judges from experience whether the interviewee meets the requirements of the position. However, screening by personal experience alone can result in hires that turn out to be mismatched with the position after onboarding, wasting considerable manpower and material resources of both the enterprise and the employee.
Summary of the invention
In view of the above technical problems, it is necessary to provide an interview video data processing method, apparatus, computer device and storage medium that can save resources.
An interview video data processing method, the method comprising:
obtaining an interviewee video, and capturing an interview picture from the interviewee video at preset intervals;
performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture;
inputting the micro-expression features into a trained micro-expression recognition model for recognition to obtain a micro-expression output feature;
obtaining the interviewee's micro-expression according to the micro-expression output feature and a preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the interviewee's micro-expression, and sending the interviewee's current emotional state to an interview terminal.
In one embodiment, obtaining the interviewee's micro-expression according to the micro-expression output feature and the preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the interviewee's micro-expression, and sending the interviewee's current emotional state to the interview terminal comprises:
obtaining the interviewee's micro-expression according to the micro-expression output feature and the preset micro-expression correspondence, and obtaining the interviewee's current emotional state according to the interviewee's micro-expression;
obtaining the interviewee's answer voice information, and obtaining the interviewee's answer text information according to the answer voice information;
extracting answer keywords from the interviewee's answer text information using a keyword extraction algorithm, determining questions to ask according to the answer keywords and the interviewee's current emotional state, and sending the interviewee's current emotional state and the questions to the interview terminal.
In one embodiment, extracting the answer keywords from the interviewee's answer text information using the keyword extraction algorithm comprises:
segmenting the answer text information to obtain a word segmentation result, and filtering the word segmentation result to obtain a filter result;
building a candidate keyword graph from the filter result, and obtaining the preset initial weight of each word node in the candidate keyword graph;
iterating over the candidate keyword graph in a loop until a preset condition is reached to obtain the word node weights, sorting the word node weights in descending order, and taking a preset number of words from the ranking result as the keywords.
In one embodiment, extracting the answer keywords from the interviewee's answer text information using the keyword extraction algorithm comprises:
segmenting the answer text information to obtain a word segmentation result, and filtering the word segmentation result to obtain a filter result;
calculating the probability of each preset topic from the filter result, and calculating the word class corresponding to each preset topic according to the topic probabilities;
calculating the probability of each word class with respect to the preset topics, and obtaining the keywords according to those probabilities.
In one embodiment, performing face detection on the interview picture to obtain the face picture, and obtaining the micro-expression features from the face picture comprises:
performing face detection on the interview picture using a face detection algorithm to obtain the face picture;
dividing the face picture into face regions according to a preset condition, and extracting the micro-expression features of each face region to obtain the micro-expression features of the face picture.
In one embodiment, the training of the micro-expression recognition model comprises:
obtaining history interviewee videos and the micro-expression labels corresponding to the history interviewee videos to obtain an original training set;
randomly sampling with replacement from the original training set to obtain a target training set;
obtaining the corresponding history micro-expression feature set from the target training set, and obtaining a target micro-expression feature set from the history micro-expression feature set;
obtaining a splitting expression feature from the target micro-expression feature set, splitting the target training set using the splitting expression feature to obtain sub-training sets, and taking a sub-training set as the target training set;
returning to the step of obtaining the corresponding history micro-expression feature set from the target training set and obtaining the target micro-expression feature set, and obtaining a target decision tree when a preset condition is reached;
returning to the step of randomly sampling with replacement from the original training set to obtain a target training set, and obtaining the micro-expression recognition model when a preset number of target decision trees is reached.
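The outer loop of the training step above is classic bagging: draw a bootstrap sample, train one tree, repeat until the preset number of trees is reached. A minimal Python sketch follows; the names and the trivial stand-in for decision-tree training (`toy_train_tree`, which just remembers the majority label) are illustrative assumptions, not part of the patent.

```python
import random

def bootstrap_sample(original_training_set, seed=None):
    """Draw a target training set by sampling with replacement (bagging)."""
    rng = random.Random(seed)
    n = len(original_training_set)
    return [rng.choice(original_training_set) for _ in range(n)]

def train_forest(original_training_set, n_trees, train_tree, seed=0):
    """Repeat bootstrap sampling until the preset number of decision trees
    is reached; the collection of trees is the recognition model."""
    forest = []
    for i in range(n_trees):
        target_set = bootstrap_sample(original_training_set, seed=seed + i)
        forest.append(train_tree(target_set))
    return forest

# Toy stand-in for the per-tree training step.
def toy_train_tree(samples):
    labels = [label for _, label in samples]
    return max(set(labels), key=labels.count)

data = [("features_a", "calm"), ("features_b", "calm"), ("features_c", "nervous")]
model = train_forest(data, n_trees=5, train_tree=toy_train_tree)
print(len(model))  # 5 trees, one per bootstrap sample
```

A production system would replace `toy_train_tree` with real decision-tree induction over the micro-expression feature sets, splitting on the chosen splitting expression feature at each node.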
An interview video data processing apparatus, the apparatus comprising:
a picture capture module, configured to obtain an interviewee video and capture an interview picture from the interviewee video at preset intervals;
a feature obtaining module, configured to perform face detection on the interview picture to obtain a face picture, and obtain micro-expression features from the face picture;
a feature recognition module, configured to input the micro-expression features into a trained micro-expression recognition model for recognition to obtain a micro-expression output feature;
a micro-expression obtaining module, configured to obtain the interviewee's micro-expression according to the micro-expression output feature and a preset micro-expression correspondence, obtain the interviewee's current emotional state according to the interviewee's micro-expression, and send the interviewee's current emotional state to an interview terminal.
In one embodiment, the apparatus further comprises:
an initial sample obtaining module, configured to obtain history interviewee videos and the micro-expression labels corresponding to the history interviewee videos to obtain an original training set;
a training set obtaining module, configured to randomly sample with replacement from the original training set to obtain a target training set;
a feature set obtaining module, configured to obtain the corresponding history micro-expression feature set from the target training set, and obtain a target micro-expression feature set from the history micro-expression feature set;
a sub-training set obtaining module, configured to obtain a splitting expression feature from the target micro-expression feature set, split the target training set using the splitting expression feature to obtain sub-training sets, and take a sub-training set as the target training set;
a decision tree obtaining module, configured to return to the step of obtaining the corresponding history micro-expression feature set from the target training set and obtaining the target micro-expression feature set, and obtain a target decision tree when a preset condition is reached;
a model obtaining module, configured to return to the step of randomly sampling with replacement from the original training set to obtain a target training set, and obtain the micro-expression recognition model when a preset number of target decision trees is reached.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, performs the following steps:
obtaining an interviewee video, and capturing an interview picture from the interviewee video at preset intervals;
performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture;
inputting the micro-expression features into a trained micro-expression recognition model for recognition to obtain a micro-expression output feature;
obtaining the interviewee's micro-expression according to the micro-expression output feature and a preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the interviewee's micro-expression, and sending the interviewee's current emotional state to an interview terminal.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the following steps:
obtaining an interviewee video, and capturing an interview picture from the interviewee video at preset intervals;
performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture;
inputting the micro-expression features into a trained micro-expression recognition model for recognition to obtain a micro-expression output feature;
obtaining the interviewee's micro-expression according to the micro-expression output feature and a preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the interviewee's micro-expression, and sending the interviewee's current emotional state to an interview terminal.
With the above interview video data processing method, apparatus, computer device and storage medium, an interviewee video is obtained and an interview picture is captured from it at preset intervals; face detection is performed on the interview picture to obtain a face picture, from which micro-expression features are obtained; the micro-expression features are input into a trained micro-expression recognition model to obtain a micro-expression output feature; the interviewee's micro-expression is obtained from the output feature and a preset micro-expression correspondence, the interviewee's current emotional state is obtained from the micro-expression, and the current emotional state is sent to an interview terminal. By collecting the interviewee's video and recognizing the interviewee's micro-expressions, the current emotional state corresponding to each micro-expression is obtained, which helps the interviewer understand the interviewee more deeply and identify suitable talent, avoiding the discovery of a mismatch with the position only after onboarding, and thus saving resources.
Detailed description of the invention
Fig. 1 is an application scenario diagram of the interview video data processing method in one embodiment;
Fig. 2 is a schematic flow diagram of the interview video data processing method in one embodiment;
Fig. 3 is a schematic flow diagram of obtaining questions to ask in one embodiment;
Fig. 4 is a schematic flow diagram of obtaining text keywords in one embodiment;
Fig. 5 is a schematic flow diagram of obtaining text keywords in another embodiment;
Fig. 6 is a schematic flow diagram of obtaining micro-expression features in one embodiment;
Fig. 7 is a schematic flow diagram of training the micro-expression recognition model in one embodiment;
Fig. 8 is a structural block diagram of the interview video data processing apparatus in one embodiment;
Fig. 9 is an internal structure diagram of the computer device in one embodiment.
Specific embodiment
In order to make the objects, technical solutions and advantages of this application clearer, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application, not to limit it.
The interview video data processing method provided by this application can be applied in the application environment shown in Fig. 1, in which a video acquisition device 102 communicates with a server 104 through a network, and an interview terminal 106 communicates with the server 104 through a network. The server 104 obtains the interviewee video acquired by the video acquisition device 102 and captures an interview picture from the interviewee video at preset intervals; the server 104 performs face detection on the interview picture to obtain a face picture, and obtains micro-expression features from the face picture; the server 104 inputs the micro-expression features into a trained micro-expression recognition model for recognition to obtain a micro-expression output feature; the server 104 obtains the interviewee's micro-expression according to the micro-expression output feature and a preset micro-expression correspondence, obtains the interviewee's current emotional state according to the micro-expression, and sends the current emotional state to the interview terminal 106. The interview terminal 106 can be, but is not limited to, a personal computer, laptop, smartphone, tablet or portable wearable device, and the server 104 can be implemented as an independent server or as a cluster of multiple servers.
In one embodiment, as shown in Fig. 2, an interview video data processing method is provided. Taking its application to the server in Fig. 1 as an example, the method comprises the following steps:
S202: obtain an interviewee video, and capture an interview picture from the interviewee video at preset intervals.
Here, the interviewee video is the video of the interviewee acquired in real time by a video acquisition device during the interview; the video acquisition device is a device that acquires video in real time, for example a camera.
Specifically, the server obtains the video of the ongoing interview acquired in real time by the video acquisition device, and captures an interview picture from the interviewee video at preset intervals, for example extracting a video frame from the video every 5 seconds.
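The interval sampling in S202 reduces to picking frame indices from the video. In practice the frames would be read with a video library (for example OpenCV's `VideoCapture`), but the index arithmetic alone can be shown as a minimal sketch; the helper name and parameter values are illustrative assumptions.

```python
def frame_indices(total_frames, fps, interval_seconds):
    """Indices of the frames to capture when taking one interview
    picture every `interval_seconds` (the preset duration)."""
    step = int(fps * interval_seconds)
    return list(range(0, total_frames, step))

# A 60-second clip at 25 fps, sampled every 5 seconds, yields 12 pictures.
indices = frame_indices(total_frames=1500, fps=25, interval_seconds=5)
print(indices)
```

With a real decoder, each returned index would be seeked to and decoded into the interview picture passed on to face detection.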
S204: perform face detection on the interview picture to obtain a face picture, and obtain micro-expression features from the face picture.
Here, face detection refers to accurately locating the position and size of a face in a picture.
Specifically, face detection is performed on the captured interview picture to obtain a face picture, that is, the position and size of the face; then the specific micro-expression feature information of the face is extracted from the obtained face picture, for example that the inner corner of an eyebrow is raised.
S206: input the micro-expression features into the trained micro-expression recognition model for recognition to obtain a micro-expression output feature.
Here, the micro-expression recognition model is trained with a random forest algorithm on pictures captured from history interviewee videos and the history micro-expressions corresponding to those pictures. The micro-expression output feature is the feature that reflects the interviewee's micro-expression. During recognition, each decision tree in the forest outputs the feature corresponding to one micro-expression; the shares of the micro-expressions output by all the decision trees are counted, and the feature with the highest share is the model's output feature. For example, with 100 decision trees of which 98 output the feature corresponding to a calm expression and 2 output features corresponding to other expressions, the recognition result of the model is the output feature corresponding to the calm expression.
Specifically, the server inputs the micro-expression features into the trained micro-expression recognition model for recognition, and obtains the micro-expression output feature.
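The majority vote over the per-tree outputs described above can be sketched directly; `forest_vote` is a hypothetical helper name, and the 98-of-100 example mirrors the one in the text.

```python
from collections import Counter

def forest_vote(tree_outputs):
    """Majority vote: the feature with the highest share among all
    decision-tree outputs is the model's output feature."""
    counts = Counter(tree_outputs)
    feature, _ = counts.most_common(1)[0]
    return feature

# 98 of 100 trees output the "calm" feature, 2 output other features.
votes = ["calm"] * 98 + ["surprised", "nervous"]
print(forest_vote(votes))  # calm
```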
S208: obtain the interviewee's micro-expression according to the micro-expression output feature and the preset micro-expression correspondence, obtain the interviewee's current emotional state according to the interviewee's micro-expression, and send the interviewee's current emotional state to the interview terminal.
Here, the current emotional state reflects the interviewee's current psychological state; for example, the psychological state corresponding to a calm micro-expression can be steady and composed, in which case the interviewee's current psychological state is steady and composed.
Specifically, the server obtains the interviewee's micro-expression from the micro-expression output feature and the preset micro-expression correspondence, determines the interviewee's current emotional state corresponding to that micro-expression from a correspondence between micro-expressions and emotional states, and sends the interviewee's current emotional state to the interview terminal. The interview terminal receives and displays the current emotional state, from which the interviewer can understand the interviewee more deeply and identify suitable talent, avoiding the discovery of a mismatch with the position only after onboarding.
In one embodiment, the server can convert the interviewee's current emotional state into voice and send the voice information to a voice device, which receives and plays it, so that the interviewer learns the interviewee's current emotional state through the voice device and can judge whether the interviewee meets the job requirements.
In one embodiment, the server can also determine, from the interviewee's current emotional state, the probability that the interviewee is engaging in fraud or lying, and send the obtained probability to the interview terminal for display. For example, if the recognized micro-expression is one associated with lying and it appears more times than a preset threshold, the probability of fraudulent or lying behavior is high, and a high-probability prompt is sent to the interview terminal.
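The threshold rule in this embodiment can be sketched as a simple count over the recognized micro-expressions; the specific micro-expression names, the helper name, and the two-level "high"/"low" output are illustrative assumptions.

```python
def deception_flag(recognized_micro_expressions, lying_set, preset_threshold):
    """Count how many recognized micro-expressions belong to the preset set
    of lying-related micro-expressions; above the threshold, the probability
    of fraudulent or lying behavior is reported as high."""
    count = sum(1 for e in recognized_micro_expressions if e in lying_set)
    return "high" if count > preset_threshold else "low"

session = ["calm", "lip_press", "calm", "gaze_aversion", "lip_press"]
flag = deception_flag(session, {"lip_press", "gaze_aversion"}, preset_threshold=2)
print(flag)  # high: three lying-related micro-expressions exceed the threshold of 2
```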
With the above interview video data processing method, an interviewee video is obtained, an interview picture is captured from it at preset intervals, face detection is performed on the interview picture to obtain a face picture, micro-expression features are obtained from the face picture and input into the trained micro-expression recognition model to obtain a micro-expression output feature, the interviewee's micro-expression is obtained from the output feature and the preset micro-expression correspondence, and the interviewee's current emotional state, obtained from the micro-expression, is sent to the interview terminal. Through micro-expression recognition, the current emotional state corresponding to each micro-expression is obtained, which helps the interviewer understand the interviewee more deeply and identify suitable talent, avoiding the discovery of a mismatch with the position only after onboarding and saving resources.
In one embodiment, as shown in Fig. 3, step S208, namely obtaining the interviewee's micro-expression according to the micro-expression output feature and the preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the micro-expression, and sending the current emotional state to the interview terminal, comprises the steps:
S302: obtain the interviewee's micro-expression according to the micro-expression output feature and the preset micro-expression correspondence, and obtain the interviewee's current emotional state according to the interviewee's micro-expression.
Specifically, the server obtains the interviewee's micro-expression from the output feature and the preset micro-expression correspondence, and obtains the interviewee's current emotional state from the micro-expression.
S304: obtain the interviewee's answer voice information, and obtain the interviewee's answer text information according to the answer voice information.
Specifically, the server obtains the answer voice information acquired by a voice acquisition device and converts it to text using a voice conversion algorithm, obtaining the interviewee's answer text information.
S306: extract the answer keywords from the interviewee's answer text information using a keyword extraction algorithm, determine the questions to ask according to the answer keywords and the interviewee's current emotional state, and send the interviewee's current emotional state and the questions to the interview terminal.
Specifically, the server extracts the answer keywords from the interviewee's answer text information using the keyword extraction algorithm, determines the questions to ask according to the answer keywords and the interviewee's current emotional state, and sends the current emotional state and the questions to the interview terminal. There can be multiple questions, and the interviewer can choose any one of them to ask. The keyword extraction algorithm can be, for example, LDA or TextRank.
In the above embodiment, the interviewee's micro-expression is obtained from the micro-expression output feature and the preset micro-expression correspondence, the current emotional state is obtained from the micro-expression, the interviewee's answer voice information is obtained and converted into answer text information, the answer keywords are extracted from the answer text information using the keyword extraction algorithm, the questions to ask are determined from the answer keywords and the current emotional state, and the current emotional state and the questions are sent to the interview terminal. Determining the questions from the keywords extracted from the interviewee's voice information together with the interviewee's current emotional state makes it convenient for the interviewer to ask questions and to understand the interviewee more deeply, improving efficiency and accuracy.
In one embodiment, as shown in Fig. 4, step S306, namely extracting the answer keywords from the interviewee's answer text information using the keyword extraction algorithm, comprises the steps:
S402: segment the answer text information to obtain a word segmentation result, and filter the word segmentation result to obtain a filter result.
Specifically, the server performs word segmentation and part-of-speech tagging on the answer text information to obtain the word segmentation result, then filters the result by removing stop words and retaining words of the specified parts of speech, forming a set of words. Here, stop words are words that carry no substantive meaning compared with other words; they can include English characters, digits, mathematical symbols, punctuation marks and very frequently used Chinese characters. For example, punctuation marks, everyday words, and all words other than nouns, verbs, adjectives and adverbs are removed from the text.
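The filtering in S402 can be sketched over already-tagged words; a real system would first run a word segmenter and tagger, and the single-letter part-of-speech codes used here (n/v/a/d for noun/verb/adjective/adverb) are assumed for illustration.

```python
def filter_segmentation(word_segmentation_result, stop_words,
                        kept_pos=frozenset({"n", "v", "a", "d"})):
    """Drop stop words and keep only words tagged noun, verb, adjective
    or adverb, as in the filtering step of S402."""
    return [word for word, pos in word_segmentation_result
            if word not in stop_words and pos in kept_pos]

tagged = [("the", "x"), ("candidate", "n"), ("answered", "v"),
          ("very", "d"), ("confidently", "d"), (",", "w")]
filtered = filter_segmentation(tagged, stop_words={"the", ","})
print(filtered)  # ['candidate', 'answered', 'very', 'confidently']
```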
S404: build a candidate keyword graph from the filter result, and obtain the preset initial weight of each word node in the candidate keyword graph.
Here, the candidate keyword graph is a directed weighted graph composed of the words obtained after filtering: directed means the graph is built following the order of the words in the answer text, and weighted means the edges carry the degree of correlation between words in the answer text.
Specifically, each word obtained after filtering is taken as a node of the candidate keyword graph, and edges between words are formed from co-occurrence relations within a preset window size, yielding the candidate keyword graph. The preset initial weight of each word node is then obtained; the preset initial weight can be 1.
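Building the co-occurrence graph of S404 with a sliding window might look like the following minimal sketch. For simplicity the edges are treated as undirected and weighted by co-occurrence count, whereas the embodiment describes a directed graph; the initial node weight of 1 matches the preset initial weight above.

```python
def build_cooccurrence_graph(words, window_size):
    """Each filtered word is a node; a weighted edge joins two words that
    co-occur within the sliding window, with weight = co-occurrence count."""
    edges = {}
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window_size, len(words))):
            if words[j] != w:
                key = tuple(sorted((w, words[j])))
                edges[key] = edges.get(key, 0) + 1
    weights = {word: 1.0 for word in words}  # preset initial weight 1
    return edges, weights

words = ["expression", "feature", "model", "expression", "feature"]
edges, weights = build_cooccurrence_graph(words, window_size=3)
print(edges[("expression", "feature")])  # co-occurs 3 times within the window
```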
S406: iterate over the candidate keyword graph in a loop until a preset condition is reached to obtain the word node weights, sort the word node weights in descending order, and take a preset number of words from the ranking result as the keywords.
Specifically, the weight of each node in the candidate keyword graph is propagated iteratively using
WS(Vi) = (1 - d) + d * Σ_{Vj ∈ In(Vi)} [ w_ji / Σ_{Vk ∈ Out(Vj)} w_jk ] * WS(Vj)
until the weights converge or a preset number of iterations is reached, giving the word node weights. In the formula, V denotes a word node and WS denotes a word node weight; w denotes an edge weight, obtained from the similarity of the word nodes the edge connects; d is the damping coefficient, ranging from 0 to 1, representing the probability of jumping from a given node to any other node, with a typical value of 0.85; In denotes the set of nodes pointing to the word node, and Out denotes the set of nodes the word node points to. The server then sorts the word node weights in descending order and, following the ranking from largest to smallest, selects a preset number of word nodes, taking the selected words as the keywords.
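The iterative propagation of S406 can be sketched as a toy TextRank iteration under the stated formula, using a fixed iteration count instead of a convergence test and an undirected graph (so In and Out coincide); the example graph is hypothetical.

```python
def textrank(edges, initial_weights, d=0.85, iterations=50):
    """Iterate WS(Vi) = (1 - d) + d * sum over Vj in In(Vi) of
    w_ji / (sum over Vk in Out(Vj) of w_jk) * WS(Vj)."""
    ws = dict(initial_weights)
    neighbors = {v: {} for v in ws}
    for (a, b), w in edges.items():
        neighbors[a][b] = w
        neighbors[b][a] = w
    for _ in range(iterations):
        new_ws = {}
        for v in ws:
            s = sum(w / sum(neighbors[j].values()) * ws[j]
                    for j, w in neighbors[v].items())
            new_ws[v] = (1 - d) + d * s
        ws = new_ws
    return ws

edges = {("expression", "feature"): 3, ("expression", "model"): 2,
         ("feature", "model"): 2}
weights = textrank(edges, {"expression": 1.0, "feature": 1.0, "model": 1.0})
top = sorted(weights, key=weights.get, reverse=True)
print(top[0])  # the highest-weight node becomes a keyword
```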
In the above embodiment, the answer text information is segmented to obtain a word segmentation result, the segmentation result is filtered to obtain a filter result, a candidate keyword graph is established from the filter result, the preset initial weights of the word nodes in the candidate keyword graph are obtained, and the candidate keyword graph is iterated in a loop until the preset condition is reached to obtain the word node weights; the word node weights are sorted in descending order and a preset number of words is obtained from the ranking result as keywords. Keywords can thus be obtained more accurately and conveniently, improving efficiency.
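The weight-propagation loop of S406 can be sketched as a minimal TextRank-style iteration (a sketch under stated assumptions: the co-occurrence window, the toy word list and the uniform edge weights below are illustrative, not taken from the patent, which derives edge weights from word-node similarity):

```python
from collections import defaultdict

def textrank(words, window=2, d=0.85, n_iter=50, tol=1e-6):
    """Minimal TextRank over a co-occurrence graph of filtered words.

    Edges link words that co-occur within `window` positions; every edge
    weight is 1 here for simplicity.
    """
    # build the weighted co-occurrence graph (symmetric adjacency)
    edges = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:
                edges[w][words[j]] += 1.0
                edges[words[j]][w] += 1.0
    ws = {w: 1.0 for w in edges}          # preset initial weight of 1
    for _ in range(n_iter):
        new = {}
        for v in edges:
            # WS(v) = (1 - d) + d * sum over in-neighbours u of
            #         w(u, v) / out-weight(u) * WS(u)
            s = sum(edges[u][v] / sum(edges[u].values()) * ws[u]
                    for u in edges[v])
            new[v] = (1 - d) + d * s
        if max(abs(new[v] - ws[v]) for v in ws) < tol:  # convergence
            ws = new
            break
        ws = new
    return sorted(ws, key=ws.get, reverse=True)   # descending by weight

# hypothetical words after segmentation and stop-word filtering
ranked = textrank(["project", "experience", "team", "project",
                   "team", "deadline", "project", "experience"])
```

Taking the first few entries of `ranked` then corresponds to selecting a preset number of top-weighted word nodes as keywords.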
In one embodiment, as shown in Fig. 5, step 306, i.e., extracting the answer keywords from the interviewee's answer text information using a keyword extraction algorithm, includes the steps of:
S502, segment the answer text information to obtain a word segmentation result, and filter the segmentation result to obtain a filter result.
Specifically, the server performs word segmentation on the obtained answer text information, tags the parts of speech of the segmented words, and then filters the tagged results, i.e. stop words are discarded and words of the specified parts of speech are retained, yielding the filtered words.
S504, calculate the probabilities of the preset topics from the filter result, and calculate the word class corresponding to each preset topic from those probabilities.
Wherein, the preset topics refer to the various topics set in advance for the text; the topics in the text are extracted with the LDA algorithm.
Specifically, the probability that each filtered word corresponds to each preset topic is calculated, the probabilities of all the words under each preset topic are accumulated, and the group of words with the maximum probability for a preset topic is taken as that topic's word class. The calculation uses the joint distribution of LDA: each iteration changes the value of only one dimension, until convergence outputs the parameters to be estimated. In LDA the dimensions are the words; in each iteration the topic probability of the current word is estimated from the topic distribution of the other words, i.e. the current word is excluded, and the topic probability of the current word is calculated from the topic distribution of the other words and the observed words.
S506, calculate the probability of each word class with respect to the preset topics, and obtain the keywords from those probabilities.
Specifically, the probability of a word class with respect to each preset topic is calculated; the topic corresponding to the word class is the preset topic with the maximum calculated probability. That preset topic is taken as the topic word of the word class, and the topic words are obtained as keywords.
In the above embodiment, the answer text information is segmented to obtain a word segmentation result, which is filtered to obtain a filter result. The probabilities of the preset topics are calculated from the filter result, and the word class corresponding to each preset topic is calculated from those probabilities. The probability of each word class with respect to the preset topics is then calculated, and the keywords are obtained from those probabilities. The keywords of the answer text can thus be obtained more accurately and conveniently, improving efficiency.
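The per-word topic estimation of S504 (exclude the current word's assignment, then re-estimate its topic from the remaining assignments) can be sketched as a minimal collapsed Gibbs sampler for LDA (a sketch under stated assumptions: the hyperparameters, the toy documents and the iteration count are illustrative and not specified by the patent):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA over tokenized documents.
    Returns (per-topic word counts, per-topic totals) after sampling."""
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    doc_topic = [[0] * n_topics for _ in docs]                # doc-topic counts
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    topic_total = [0] * n_topics
    assign = []
    for di, doc in enumerate(docs):                           # random initial topics
        za = []
        for w in doc:
            z = rng.randrange(n_topics)
            za.append(z)
            doc_topic[di][z] += 1; topic_word[z][w] += 1; topic_total[z] += 1
        assign.append(za)
    for _ in range(n_iter):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                z = assign[di][wi]
                # exclude the current word's assignment (the "exclusion" step)
                doc_topic[di][z] -= 1; topic_word[z][w] -= 1; topic_total[z] -= 1
                # topic probability of the current word given all other words
                probs = [(doc_topic[di][t] + alpha)
                         * (topic_word[t][w] + beta)
                         / (topic_total[t] + vocab_size * beta)
                         for t in range(n_topics)]
                r = rng.uniform(0, sum(probs))
                z, acc = n_topics - 1, 0.0
                for t, p in enumerate(probs):
                    acc += p
                    if r <= acc:
                        z = t
                        break
                assign[di][wi] = z
                doc_topic[di][z] += 1; topic_word[z][w] += 1; topic_total[z] += 1
    return topic_word, topic_total

# hypothetical word groups from filtered answer text (illustrative only)
docs = [["salary", "bonus", "salary"], ["python", "code", "python"],
        ["salary", "bonus"], ["code", "python"]]
topic_word, topic_total = lda_gibbs(docs, n_topics=2)
```

After sampling, the words whose maximum-probability topic is t form topic t's word class, matching the grouping described in S504.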
In one embodiment, as shown in Fig. 6, step 204, i.e., performing face detection on the interview picture to obtain a face picture and obtaining the micro-expression features from the face picture, includes:
S602, perform face detection on the interview picture using a face detection algorithm to obtain a face picture.
Wherein, the face detection method may be based on features such as histogram features, color features, template features, structural features and Haar features, detected with the Adaboost learning algorithm.
Specifically, the server performs face detection on the interview picture using the face detection algorithm and obtains the face picture.
S604, divide the face picture according to a preset condition to obtain face regions, extract the micro-expression features of the face regions, and obtain the micro-expression features of the face picture.
Specifically, the server divides the face according to the preset condition and obtains the face regions. Wherein, the preset condition refers to dividing the face into an upper face, a lower face and other parts. The server then extracts the micro-expression features of the face regions and obtains the micro-expression features of the face picture.
In the above embodiment, face detection is performed on the interview picture using the face detection algorithm to obtain the face picture; the face picture is divided according to the preset condition to obtain the face regions; and the micro-expression features of the face regions are extracted, yielding the micro-expression features of the face picture. The needed face picture can be obtained quickly by using the face detection algorithm, improving the efficiency of obtaining face pictures.
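The region division of S604 can be sketched as follows (a sketch under stated assumptions: the half-height split between upper and lower face, and the grayscale-histogram stand-in for micro-expression features, are illustrative choices; the patent does not fix either, and its "other parts" region is omitted here):

```python
def split_face_regions(x, y, w, h):
    """Divide a detected face bounding box (x, y, w, h) into an upper-face
    and a lower-face region, per the preset condition in S604; the patent's
    'other parts' are omitted in this illustrative half-height split."""
    upper = (x, y, w, h // 2)               # brows / eyes area
    lower = (x, y + h // 2, w, h - h // 2)  # mouth / chin area
    return {"upper": upper, "lower": lower}

def region_histogram(pixels, region, bins=8):
    """Toy stand-in for a micro-expression feature: an intensity histogram
    of one region. `pixels` is a 2-D list of grayscale values in [0, 255]."""
    rx, ry, rw, rh = region
    hist = [0] * bins
    for row in pixels[ry:ry + rh]:
        for v in row[rx:rx + rw]:
            hist[v * bins // 256] += 1
    return hist

# hypothetical 4x4 all-black face crop, detected at (0, 0) with size 4x4
regions = split_face_regions(0, 0, 4, 4)
feat = region_histogram([[0] * 4 for _ in range(4)], regions["upper"])
```

Concatenating the per-region features would then give the micro-expression feature vector of the whole face picture.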
In one embodiment, as shown in Fig. 7, the training steps of the micro-expression recognition model include:
S702, obtain the history interviewee videos and the micro-expression labels corresponding to the history interviewee videos, obtaining the original training set.
Wherein, a micro-expression label is a label obtained from the micro-expression of the history interviewee at each preset time point in a history interviewee video.
Specifically, the server obtains the history interviewee videos and the corresponding micro-expression labels, and takes the history interviewee videos and the corresponding micro-expression labels as the original training set.
S704, randomly sample with replacement from the original training set to obtain the target training set.
Wherein, the target training set refers to the set of samples randomly drawn from the original training set.
Specifically, the server randomly samples with replacement from the original training set and obtains the target training set.
S706, obtain the corresponding history micro-expression feature set from the target training set, and obtain the target micro-expression feature set from the history micro-expression feature set.
Wherein, a micro-expression feature set refers to a set composed of micro-expression features, obtained from the history interviewee videos.
Specifically, the server performs face detection on the history interviewee videos in the target training set to obtain the interviewees' face pictures, obtains the history micro-expression features from the face pictures, forms the history micro-expression feature set from the obtained history micro-expression features, and then randomly chooses a number of features from the history micro-expression feature set to obtain the target micro-expression feature set.
S708, obtain a splitting expression feature from the target micro-expression feature set, divide the target training set using the splitting expression feature to obtain sub-training sets, and take each sub-training set as a target training set.
Wherein, the splitting expression feature is the feature used to divide the samples in the target training set. For example, if the splitting expression feature is "inner brow corner raised", the samples in the target training set whose inner brow corner is raised are divided into one sub-training set, and the samples whose inner brow corner is not raised are divided into another sub-training set.
Specifically, the server chooses the optimal splitting expression feature from the target micro-expression feature set using the Gini index, and divides the target training set with the obtained splitting expression feature to obtain the sub-training sets. Each sub-training set is taken as a target training set; when the target training set is the first node, that node is the root node, and the division then yields a left training set and a right training set, both of which are taken as target training sets.
S710, return to the step of obtaining the corresponding history micro-expression feature set from the target training set and obtaining the target micro-expression feature set from the history micro-expression feature set, and obtain a target decision tree when a preset condition is reached.
Wherein, the preset condition is that the micro-expression labels of the samples in the target training set are identical.
Specifically, the server judges whether the preset condition has been reached. When it has not, i.e. the micro-expression labels of the samples in the target training set are not identical, the server returns to step S706, until the micro-expression labels in each target training set are identical, i.e. the micro-expression labels in every sub-training set are identical, at which point the target decision tree is obtained. For example, if the micro-expression labels in one sub-training set are all "uneasy", the labels of that sub-training set are identical and the sub-training set has reached the preset condition; it is still necessary to judge whether the other sub-training sets have reached the preset condition, and only when all sub-training sets have reached it is a target decision tree obtained.
S712, return to the step of randomly sampling with replacement from the original training set to obtain the target training set, and obtain the micro-expression recognition model when a preset number of target decision trees is reached.
Specifically, whenever a target decision tree is obtained, the server judges whether the number of target decision trees has reached the preset number. When it has, the micro-expression random forest model is obtained, and this random forest model serves as the micro-expression recognition model. When the number of target decision trees has not reached the preset number, the server returns to step S704, until the number of target decision trees reaches the preset number.
In the above embodiment, the history interviewee videos and the corresponding micro-expression labels are obtained to form the original training set. Random sampling with replacement from the original training set yields the target training set. The corresponding history micro-expression feature set is obtained from the target training set, and the target micro-expression feature set is obtained from it. A splitting expression feature is obtained from the target micro-expression feature set and used to divide the target training set into sub-training sets, each taken as a target training set. The step of obtaining the history micro-expression feature set from the target training set and obtaining the target micro-expression feature set from it is repeated, and a target decision tree is obtained when the preset condition is reached. The step of randomly sampling with replacement from the original training set to obtain the target training set is repeated, and the micro-expression recognition model is obtained when the preset number of target decision trees is reached. The micro-expression recognition model obtained with the random forest algorithm can improve the accuracy of micro-expression recognition, and training the model in advance so that it can be used directly can improve efficiency.
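The training loop of S702-S712 (bootstrap sampling, random feature subsets, Gini-index splits, recursion until labels are pure, repeat until the preset number of trees) can be sketched as a minimal random forest (a sketch under stated assumptions: the numeric toy features, the labels and the tree count below are illustrative, not the patent's micro-expression features):

```python
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a list of labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels, feat_idx):
    """Find the (feature, threshold) split with lowest weighted Gini index."""
    best, n = None, len(rows)
    for f in feat_idx:
        for t in sorted({r[f] for r in rows}):
            left = [i for i in range(n) if rows[i][f] <= t]
            right = [i for i in range(n) if rows[i][f] > t]
            if not left or not right:
                continue
            g = (len(left) * gini([labels[i] for i in left])
                 + len(right) * gini([labels[i] for i in right])) / n
            if best is None or g < best[0]:
                best = (g, f, t, left, right)
    return best

def build_tree(rows, labels, n_feats, rng):
    """Recurse until the labels in a node are identical (the preset condition)."""
    if len(set(labels)) == 1:
        return labels[0]
    feat_idx = rng.sample(range(len(rows[0])), n_feats)  # random feature subset
    split = best_split(rows, labels, feat_idx)
    if split is None:                                    # no usable split: majority
        return Counter(labels).most_common(1)[0][0]
    _, f, t, li, ri = split
    return (f, t,
            build_tree([rows[i] for i in li], [labels[i] for i in li], n_feats, rng),
            build_tree([rows[i] for i in ri], [labels[i] for i in ri], n_feats, rng))

def predict_tree(node, row):
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if row[f] <= t else right
    return node

def train_forest(rows, labels, n_trees, n_feats, seed=0):
    """Repeat bootstrap-and-build until the preset number of trees is reached."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(rows)) for _ in rows]   # sample with replacement
        forest.append(build_tree([rows[i] for i in idx],
                                 [labels[i] for i in idx], n_feats, rng))
    return forest

def predict_forest(forest, row):
    return Counter(predict_tree(t, row) for t in forest).most_common(1)[0][0]

# hypothetical 2-D micro-expression feature vectors with emotion labels
rows = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
labels = ["calm", "calm", "uneasy", "uneasy"]
forest = train_forest(rows, labels, n_trees=7, n_feats=1, seed=1)
```

`predict_forest` takes the majority vote of the trees, which is the role the micro-expression random forest model plays at recognition time.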
It should be understood that although the steps in the flow charts of Figs. 2-7 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict ordering restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-7 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 8, an interview video data processing apparatus 900 is provided, comprising: a picture interception module 902, a feature obtaining module 904, a feature recognition module 906 and a micro-expression obtaining module 908, wherein:
the picture interception module 902 is configured to obtain the interviewee video and intercept an interview picture from the interviewee video at every preset duration;
the feature obtaining module 904 is configured to perform face detection on the interview picture to obtain a face picture, and obtain micro-expression features from the face picture;
the feature recognition module 906 is configured to input the micro-expression features into the trained micro-expression recognition model for recognition to obtain micro-expression output features;
the micro-expression obtaining module 908 is configured to obtain the interviewee's micro-expression from the correspondence between the micro-expression output features and the preset micro-expressions, obtain the interviewee's current emotional state from the interviewee's micro-expression, and send the interviewee's current emotional state to the interview terminal.
In one embodiment, the micro-expression obtaining module 908 comprises:
a state obtaining module, configured to obtain the interviewee's micro-expression from the correspondence between the micro-expression output features and the preset micro-expressions, and obtain the interviewee's current emotional state from the interviewee's micro-expression;
a text obtaining module, configured to obtain the interviewee's answer voice information and obtain the interviewee's answer text information from the answer voice information;
a question determination module, configured to extract the answer keywords from the interviewee's answer text information using a keyword extraction algorithm, determine the question to ask from the answer keywords and the interviewee's current emotional state, and send the interviewee's current emotional state and the question to the interview terminal.
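The question determination module can be sketched as a simple lookup that combines an answer keyword with the current emotional state (a sketch under stated assumptions: the rule table, the keywords and the emotional states below are hypothetical illustrations, since the patent does not specify how the question is derived from the two inputs):

```python
def determine_question(keywords, emotional_state, rules,
                       default="Please continue."):
    """Pick the question to ask from the answer keywords and the current
    emotional state, using a hypothetical (keyword, state) -> question table."""
    for kw in keywords:
        q = rules.get((kw, emotional_state))
        if q is not None:
            return q
    return default  # fallback when no rule matches

# hypothetical rule table (illustrative only)
rules = {
    ("project", "calm"): "What was your specific role in that project?",
    ("project", "uneasy"): "What part of the project was most challenging?",
}
question = determine_question(["project", "team"], "uneasy", rules)
```

The chosen question and the current emotional state would then be sent to the interview terminal together, as the module description above states.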
In one embodiment, the question determination module comprises:
a text word segmentation module, configured to segment the answer text information to obtain a word segmentation result, and filter the segmentation result to obtain a filter result;
a word graph establishing module, configured to establish a candidate keyword graph from the filter result and obtain the preset initial weights of the word nodes in the candidate keyword graph;
a keyword obtaining module, configured to iterate the candidate keyword graph in a loop until the preset condition is reached to obtain the word node weights, sort the word node weights in descending order, and obtain a preset number of words from the ranking result as keywords.
In one embodiment, the question determination module comprises:
a filter result obtaining module, configured to segment the answer text information to obtain a word segmentation result, and filter the segmentation result to obtain a filter result;
a word classification calculation module, configured to calculate the probabilities of the preset topics from the filter result and calculate the word class corresponding to each preset topic from those probabilities;
a probability calculation module, configured to calculate the probability of each word class with respect to the preset topics and obtain the keywords from those probabilities.
In one embodiment, the feature obtaining module 904 comprises:
a face detection module, configured to perform face detection on the interview picture using a face detection algorithm to obtain a face picture;
a face division module, configured to divide the face picture according to the preset condition to obtain face regions, extract the micro-expression features of the face regions, and obtain the micro-expression features of the face picture.
In one embodiment, the interview video data processing apparatus 900 comprises:
an initial sample obtaining module, configured to obtain the history interviewee videos and the corresponding micro-expression labels to obtain the original training set;
a training set obtaining module, configured to randomly sample with replacement from the original training set to obtain the target training set;
a feature set obtaining module, configured to obtain the corresponding history micro-expression feature set from the target training set and obtain the target micro-expression feature set from the history micro-expression feature set;
a sub-training set obtaining module, configured to obtain a splitting expression feature from the target micro-expression feature set, divide the target training set using the splitting expression feature to obtain sub-training sets, and take each sub-training set as a target training set;
a decision tree obtaining module, configured to return to the step of obtaining the corresponding history micro-expression feature set from the target training set and obtaining the target micro-expression feature set from it, and obtain a target decision tree when the preset condition is reached;
a model obtaining module, configured to return to the step of randomly sampling with replacement from the original training set to obtain the target training set, and obtain the micro-expression recognition model when the preset number of target decision trees is reached.
For the specific limitations of the interview video data processing apparatus, reference may be made to the limitations of the interview video data processing method above, which are not repeated here. Each module in the above interview video data processing apparatus may be implemented wholly or partly by software, hardware or a combination thereof. Each module may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 9. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the interview video data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an interview video data processing method.
Those skilled in the art will understand that the structure shown in Fig. 9 is only a block diagram of the part of the structure relevant to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores a computer program, and when the processor executes the computer program, the following steps are performed: obtaining the interviewee video, and intercepting an interview picture from the interviewee video at every preset duration; performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture; inputting the micro-expression features into the trained micro-expression recognition model for recognition to obtain micro-expression output features; obtaining the interviewee's micro-expression from the correspondence between the micro-expression output features and the preset micro-expressions, obtaining the interviewee's current emotional state from the interviewee's micro-expression, and sending the interviewee's current emotional state to the interview terminal.
In one embodiment, when the processor executes the computer program, the following steps are also performed: obtaining the interviewee's micro-expression from the correspondence between the micro-expression output features and the preset micro-expressions, and obtaining the interviewee's current emotional state from the interviewee's micro-expression; obtaining the interviewee's answer voice information, and obtaining the interviewee's answer text information from the answer voice information; extracting the answer keywords from the interviewee's answer text information using a keyword extraction algorithm, determining the question to ask from the answer keywords and the interviewee's current emotional state, and sending the interviewee's current emotional state and the question to the interview terminal.
In one embodiment, when the processor executes the computer program, the following steps are also performed: segmenting the answer text information to obtain a word segmentation result, and filtering the segmentation result to obtain a filter result; establishing a candidate keyword graph from the filter result, and obtaining the preset initial weights of the word nodes in the candidate keyword graph; iterating the candidate keyword graph in a loop until the preset condition is reached to obtain the word node weights, sorting the word node weights in descending order, and obtaining a preset number of words from the ranking result as keywords.
In one embodiment, when the processor executes the computer program, the following steps are also performed: segmenting the answer text information to obtain a word segmentation result, and filtering the segmentation result to obtain a filter result; calculating the probabilities of the preset topics from the filter result, and calculating the word class corresponding to each preset topic from those probabilities; calculating the probability of each word class with respect to the preset topics, and obtaining the keywords from those probabilities.
In one embodiment, when the processor executes the computer program, the following steps are also performed: performing face detection on the interview picture using a face detection algorithm to obtain a face picture; dividing the face picture according to the preset condition to obtain face regions, extracting the micro-expression features of the face regions, and obtaining the micro-expression features of the face picture.
In one embodiment, when the processor executes the computer program, the following steps are also performed: obtaining the history interviewee videos and the corresponding micro-expression labels to obtain the original training set; randomly sampling with replacement from the original training set to obtain the target training set; obtaining the corresponding history micro-expression feature set from the target training set, and obtaining the target micro-expression feature set from the history micro-expression feature set; obtaining a splitting expression feature from the target micro-expression feature set, dividing the target training set using the splitting expression feature to obtain sub-training sets, and taking each sub-training set as a target training set; returning to the step of obtaining the corresponding history micro-expression feature set from the target training set and obtaining the target micro-expression feature set from it, and obtaining a target decision tree when the preset condition is reached; returning to the step of randomly sampling with replacement from the original training set to obtain the target training set, and obtaining the micro-expression recognition model when the preset number of target decision trees is reached.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are performed: obtaining the interviewee video, and intercepting an interview picture from the interviewee video at every preset duration; performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture; inputting the micro-expression features into the trained micro-expression recognition model for recognition to obtain micro-expression output features; obtaining the interviewee's micro-expression from the correspondence between the micro-expression output features and the preset micro-expressions, obtaining the interviewee's current emotional state from the interviewee's micro-expression, and sending the interviewee's current emotional state to the interview terminal.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: obtaining the interviewee's micro-expression from the correspondence between the micro-expression output features and the preset micro-expressions, and obtaining the interviewee's current emotional state from the interviewee's micro-expression; obtaining the interviewee's answer voice information, and obtaining the interviewee's answer text information from the answer voice information; extracting the answer keywords from the interviewee's answer text information using a keyword extraction algorithm, determining the question to ask from the answer keywords and the interviewee's current emotional state, and sending the interviewee's current emotional state and the question to the interview terminal.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: segmenting the answer text information to obtain a word segmentation result, and filtering the segmentation result to obtain a filter result; establishing a candidate keyword graph from the filter result, and obtaining the preset initial weights of the word nodes in the candidate keyword graph; iterating the candidate keyword graph in a loop until the preset condition is reached to obtain the word node weights, sorting the word node weights in descending order, and obtaining a preset number of words from the ranking result as keywords.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: segmenting the answer text information to obtain a word segmentation result, and filtering the segmentation result to obtain a filter result; calculating the probabilities of the preset topics from the filter result, and calculating the word class corresponding to each preset topic from those probabilities; calculating the probability of each word class with respect to the preset topics, and obtaining the keywords from those probabilities.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: performing face detection on the interview picture using a face detection algorithm to obtain a face picture; dividing the face picture according to the preset condition to obtain face regions, extracting the micro-expression features of the face regions, and obtaining the micro-expression features of the face picture.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: obtaining the history interviewee videos and the corresponding micro-expression labels to obtain the original training set; randomly sampling with replacement from the original training set to obtain the target training set; obtaining the corresponding history micro-expression feature set from the target training set, and obtaining the target micro-expression feature set from the history micro-expression feature set; obtaining a splitting expression feature from the target micro-expression feature set, dividing the target training set using the splitting expression feature to obtain sub-training sets, and taking each sub-training set as a target training set; returning to the step of obtaining the corresponding history micro-expression feature set from the target training set and obtaining the target micro-expression feature set from it, and obtaining a target decision tree when the preset condition is reached; returning to the step of randomly sampling with replacement from the original training set to obtain the target training set, and obtaining the micro-expression recognition model when the preset number of target decision trees is reached.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or another medium used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all belong to the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.
Claims (10)
1. A method for processing interview video data, the method comprising:
obtaining a video of an interviewee, and capturing an interview picture from the interviewee's video every preset duration;
performing face detection on the interview picture to obtain a face picture, and obtaining micro-expression features from the face picture;
inputting the micro-expression features into a trained micro-expression recognition model for recognition to obtain micro-expression output features;
obtaining the interviewee's micro-expression according to the micro-expression output features and a preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the interviewee's micro-expression, and sending the interviewee's current emotional state to an interview terminal.
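Claim 1's pipeline — sample a frame every preset duration, run it through the model, and map the model's output feature to a micro-expression and then to an emotional state via preset correspondence tables — can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the interval, table contents, and all names are hypothetical.

```python
# Hypothetical sketch of the claim-1 pipeline; all names and values are
# illustrative assumptions, not part of the patent.

FRAME_INTERVAL_S = 5  # the "preset duration" between captured interview pictures

# Preset correspondences: model output feature -> micro-expression -> emotion
FEATURE_TO_MICRO_EXPRESSION = {0: "lip_press", 1: "brow_raise", 2: "nose_wrinkle"}
MICRO_EXPRESSION_TO_EMOTION = {"lip_press": "tense", "brow_raise": "surprised",
                               "nose_wrinkle": "disgusted"}

def frames_to_capture(video_length_s, interval_s=FRAME_INTERVAL_S):
    """Timestamps (seconds) at which an interview picture would be taken."""
    return list(range(0, video_length_s + 1, interval_s))

def emotional_state(model_output_feature):
    """Map a model output feature to the interviewee's current emotional state."""
    micro = FEATURE_TO_MICRO_EXPRESSION[model_output_feature]
    return MICRO_EXPRESSION_TO_EMOTION[micro]

print(frames_to_capture(20))  # frame timestamps for a 20-second clip
print(emotional_state(1))     # emotion mapped from output feature 1
```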
2. The method according to claim 1, wherein the obtaining the interviewee's micro-expression according to the micro-expression output features and the preset micro-expression correspondence, obtaining the interviewee's current emotional state according to the interviewee's micro-expression, and sending the interviewee's current emotional state to the interview terminal comprises:
obtaining the interviewee's micro-expression according to the micro-expression output features and the preset micro-expression correspondence, and obtaining the interviewee's current emotional state according to the interviewee's micro-expression;
obtaining the interviewee's answer voice information, and obtaining the interviewee's answer text information from the answer voice information;
extracting answer keywords from the interviewee's answer text information using a keyword extraction algorithm, determining a question to ask according to the answer keywords and the interviewee's current emotional state, and sending the interviewee's current emotional state and the question to the interview terminal.
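Claim 2's final step pairs answer keywords with the current emotional state to choose the next question. A minimal sketch of one way this selection could work follows; the question bank, its keying scheme, and the fallback are assumptions for illustration only.

```python
# Hypothetical question selection keyed on (keyword, emotional state).
# The bank contents and lookup rule are illustrative assumptions.
QUESTION_BANK = {
    ("project", "tense"): "Take your time - which part of the project did you own?",
    ("project", "calm"): "What was the hardest technical decision in that project?",
}

def pick_question(keywords, emotion, bank=QUESTION_BANK):
    """Return the first bank question matching a keyword and the emotion."""
    for kw in keywords:
        question = bank.get((kw, emotion))
        if question:
            return question
    return "Could you tell me more about that?"  # fallback prompt

print(pick_question(["project"], "calm"))
```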
3. The method according to claim 2, wherein the extracting answer keywords from the interviewee's answer text information using a keyword extraction algorithm comprises:
segmenting the answer text information to obtain a word segmentation result, and filtering the word segmentation result to obtain a filter result;
building a candidate keyword graph from the filter result, and obtaining a preset initial weight for each word node in the candidate keyword graph;
iterating over the candidate keyword graph in a loop until a preset condition is reached to obtain word node weights, ranking the word node weights, and taking a preset number of words as keywords according to the ranking result.
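The graph-iteration procedure in claim 3 resembles a TextRank-style ranking: build a co-occurrence graph over candidate words, iterate node weights from a preset initial value until a stopping condition, then rank. The sketch below assumes that interpretation; segmentation and filtering are stubbed out as a pre-tokenized list, and the window size, damping factor, and iteration count are illustrative choices.

```python
# TextRank-style sketch of claim 3 (an assumed reading of the claim, not the
# patent's exact algorithm). Input is already segmented and filtered.
from collections import defaultdict

def textrank(words, window=2, d=0.85, iters=50):
    # Build the candidate keyword graph from co-occurrence within a window.
    graph = defaultdict(set)
    for i in range(len(words)):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[i] != words[j]:
                graph[words[i]].add(words[j])
                graph[words[j]].add(words[i])
    # Preset initial weight for every word node.
    weights = {w: 1.0 for w in graph}
    for _ in range(iters):  # loop iteration until the preset condition (here: fixed count)
        weights = {
            w: (1 - d) + d * sum(weights[n] / len(graph[n]) for n in graph[w])
            for w in graph
        }
    # Rank word nodes by weight; callers take the preset top-k as keywords.
    return sorted(weights, key=weights.get, reverse=True)

tokens = ["distributed", "cache", "latency", "cache", "eviction", "latency"]
print(textrank(tokens)[:2])  # top-2 candidate keywords
```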
4. The method according to claim 2, wherein the extracting answer keywords from the interviewee's answer text information using a keyword extraction algorithm comprises:
segmenting the answer text information to obtain a word segmentation result, and filtering the word segmentation result to obtain a filter result;
calculating probabilities of preset topics from the filter result, and calculating word categories corresponding to the preset topics according to the probabilities of the preset topics;
calculating the probability of each word category with respect to the preset topics, and obtaining the keywords according to those probabilities.
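Claim 4's topic-probability route can be illustrated with a toy topic model: score each word by summing, over the preset topics, the topic's probability in the answer times the word's probability under that topic, then take the highest-scoring words as keywords. The hand-made probability tables below are assumptions; in practice they would come from a trained topic model such as LDA.

```python
# Toy topic-probability keyword scoring (an illustrative reading of claim 4).
# Both probability tables are hand-made assumptions, not learned values.
TOPIC_GIVEN_DOC = {"engineering": 0.7, "teamwork": 0.3}   # P(topic | answer)
WORD_GIVEN_TOPIC = {                                       # P(word | topic)
    "engineering": {"latency": 0.4, "cache": 0.4, "meeting": 0.2},
    "teamwork": {"meeting": 0.6, "cache": 0.1, "latency": 0.3},
}

def keyword_scores(words):
    """Score each word: sum_t P(t | answer) * P(word | t)."""
    return {
        w: sum(TOPIC_GIVEN_DOC[t] * WORD_GIVEN_TOPIC[t].get(w, 0.0)
               for t in TOPIC_GIVEN_DOC)
        for w in set(words)
    }

def top_keywords(words, k=2):
    scores = keyword_scores(words)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_keywords(["latency", "cache", "meeting"]))
```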
5. The method according to claim 1, wherein the performing face detection on the interview picture to obtain a face picture and obtaining micro-expression features from the face picture comprises:
performing face detection on the interview picture using a face detection algorithm to obtain a face picture;
dividing the face picture according to a preset condition to obtain face regions, and extracting the micro-expression features of the face regions to obtain the micro-expression features of the face picture.
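The region-division step of claim 5 can be sketched as slicing a detected face bounding box into a preset grid (for instance, horizontal bands roughly covering brows/eyes, nose, and mouth), from which per-region micro-expression features would then be extracted. The 3x1 grid below is an illustrative choice, not mandated by the claim.

```python
# Sketch of claim 5's "divide the face picture according to a preset
# condition" step. The grid shape is an assumption for illustration.

def split_face_regions(box, rows=3, cols=1):
    """box = (x, y, w, h); return a row-major list of sub-region boxes."""
    x, y, w, h = box
    region_w, region_h = w // cols, h // rows
    return [(x + c * region_w, y + r * region_h, region_w, region_h)
            for r in range(rows) for c in range(cols)]

# Three horizontal bands of a detected 90x90 face box.
print(split_face_regions((10, 20, 90, 90)))
```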
6. The method according to claim 1, wherein the training of the micro-expression recognition model comprises:
obtaining historical interviewee videos and micro-expression labels corresponding to the historical interviewee videos to obtain an original training set;
randomly sampling with replacement from the original training set to obtain a target training set;
obtaining a corresponding historical micro-expression feature set from the target training set, and obtaining a target micro-expression feature set from the historical micro-expression feature set;
obtaining split expression features from the target micro-expression feature set, splitting the target training set using the split expression features to obtain sub-training sets, and taking a sub-training set as the target training set;
returning to and executing the step of obtaining the corresponding historical micro-expression feature set from the target training set and obtaining the target micro-expression feature set from the historical micro-expression feature set, and obtaining a target decision tree when a preset condition is reached;
returning to and executing the step of randomly sampling with replacement from the original training set to obtain a target training set, and obtaining the micro-expression recognition model when a preset number of target decision trees is reached.
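The training loop of claim 6 is a random-forest-style procedure: bootstrap-sample the original training set, grow one decision tree per sample, and stop at a preset number of trees. The stdlib-only sketch below illustrates the outer loop; the "tree" is a trivial majority-vote stub standing in for a real feature-splitting decision-tree learner, and the data and labels are made up.

```python
# Random-forest-style training loop sketch for claim 6. Only the bootstrap
# outer loop is real; fit_stub_tree is a placeholder, not a decision tree.
import random
from collections import Counter

def fit_stub_tree(samples):
    """Placeholder for recursive split-feature tree building (claim 6's inner loop)."""
    majority = Counter(label for _, label in samples).most_common(1)[0][0]
    return lambda features: majority

def train_forest(original_training_set, n_trees=5, seed=0):
    rng = random.Random(seed)
    forest = []
    while len(forest) < n_trees:            # preset number of target decision trees
        target_training_set = rng.choices(  # random sampling WITH replacement
            original_training_set, k=len(original_training_set))
        forest.append(fit_stub_tree(target_training_set))
    return forest

def predict(forest, features):
    """Majority vote across the trees, as in random-forest classification."""
    votes = Counter(tree(features) for tree in forest)
    return votes.most_common(1)[0][0]

data = [([0.1], "neutral"), ([0.9], "lip_press"), ([0.8], "lip_press")]
forest = train_forest(data)
print(predict(forest, [0.5]))
```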
7. An apparatus for processing interview video data, wherein the apparatus comprises:
a picture capturing module, configured to obtain a video of an interviewee and capture an interview picture from the interviewee's video every preset duration;
a feature obtaining module, configured to perform face detection on the interview picture to obtain a face picture and obtain micro-expression features from the face picture;
a feature recognition module, configured to input the micro-expression features into a trained micro-expression recognition model for recognition to obtain micro-expression output features;
a micro-expression obtaining module, configured to obtain the interviewee's micro-expression according to the micro-expression output features and a preset micro-expression correspondence, obtain the interviewee's current emotional state according to the interviewee's micro-expression, and send the interviewee's current emotional state to an interview terminal.
8. The apparatus according to claim 7, wherein the apparatus further comprises:
an initial sample obtaining module, configured to obtain historical interviewee videos and micro-expression labels corresponding to the historical interviewee videos to obtain an original training set;
a training set obtaining module, configured to randomly sample with replacement from the original training set to obtain a target training set;
a feature set obtaining module, configured to obtain a corresponding historical micro-expression feature set from the target training set and obtain a target micro-expression feature set from the historical micro-expression feature set;
a sub-training set obtaining module, configured to obtain split expression features from the target micro-expression feature set, split the target training set using the split expression features to obtain sub-training sets, and take a sub-training set as the target training set;
a decision tree obtaining module, configured to return to and execute the step of obtaining the corresponding historical micro-expression feature set from the target training set and obtaining the target micro-expression feature set from the historical micro-expression feature set, and obtain a target decision tree when a preset condition is reached;
a model obtaining module, configured to return to and execute the step of randomly sampling with replacement from the original training set to obtain a target training set, and obtain the micro-expression recognition model when a preset number of target decision trees is reached.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811546820.5A CN109766917A (en) | 2018-12-18 | 2018-12-18 | Interview video data handling procedure, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109766917A true CN109766917A (en) | 2019-05-17 |
Family
ID=66451496
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811546820.5A Pending CN109766917A (en) | 2018-12-18 | 2018-12-18 | Interview video data handling procedure, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109766917A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170311863A1 (en) * | 2015-02-13 | 2017-11-02 | Omron Corporation | Emotion estimation device and emotion estimation method |
CN106910048A (en) * | 2017-03-07 | 2017-06-30 | 佛山市融信通企业咨询服务有限公司 | A kind of remote interview system with psychological auxiliary judgment |
CN107705808A (en) * | 2017-11-20 | 2018-02-16 | 合光正锦(盘锦)机器人技术有限公司 | A kind of Emotion identification method based on facial characteristics and phonetic feature |
CN108537160A (en) * | 2018-03-30 | 2018-09-14 | 平安科技(深圳)有限公司 | Risk Identification Method, device, equipment based on micro- expression and medium |
CN108717663A (en) * | 2018-05-18 | 2018-10-30 | 深圳壹账通智能科技有限公司 | Face label fraud judgment method, device, equipment and medium based on micro- expression |
Non-Patent Citations (2)
Title |
---|
MICHEL OWAYJAN ET AL: "The design and development of a Lie Detection System using facial micro-expressions", 《2012 2ND INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTATIONAL TOOLS FOR ENGINEERING APPLICATIONS (ACTEA)》, pages 33 - 38 * |
WU, FAN: "The Application of Micro-Expressions and Micro-Actions in Corporate Recruitment Interviews", Jingji Shi (The Economist), no. 12, pages 20 - 22 *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222623A (en) * | 2019-05-31 | 2019-09-10 | 深圳市恩钛控股有限公司 | Micro- expression analysis method and system |
CN110265025A (en) * | 2019-06-13 | 2019-09-20 | 赵斌 | A kind of interview contents recording system with voice and video equipment |
CN110211591B (en) * | 2019-06-24 | 2021-12-21 | 卓尔智联(武汉)研究院有限公司 | Interview data analysis method based on emotion classification, computer device and medium |
CN110211591A (en) * | 2019-06-24 | 2019-09-06 | 卓尔智联(武汉)研究院有限公司 | Interview data analysing method, computer installation and medium based on emotional semantic classification |
WO2021000408A1 (en) * | 2019-07-04 | 2021-01-07 | 平安科技(深圳)有限公司 | Interview scoring method and apparatus, and device and storage medium |
CN110458018A (en) * | 2019-07-05 | 2019-11-15 | 深圳壹账通智能科技有限公司 | A kind of test method, device and computer readable storage medium |
CN110555374A (en) * | 2019-07-25 | 2019-12-10 | 深圳壹账通智能科技有限公司 | resource sharing method and device, computer equipment and storage medium |
CN110688499A (en) * | 2019-08-13 | 2020-01-14 | 深圳壹账通智能科技有限公司 | Data processing method, data processing device, computer equipment and storage medium |
WO2021027029A1 (en) * | 2019-08-13 | 2021-02-18 | 深圳壹账通智能科技有限公司 | Data processing method and device, computer apparatus, and storage medium |
WO2021027329A1 (en) * | 2019-08-15 | 2021-02-18 | 深圳壹账通智能科技有限公司 | Image recognition-based information push method and apparatus, and computer device |
CN110728182A (en) * | 2019-09-06 | 2020-01-24 | 平安科技(深圳)有限公司 | Interviewing method and device based on AI interviewing system and computer equipment |
CN110728182B (en) * | 2019-09-06 | 2023-12-26 | 平安科技(深圳)有限公司 | Interview method and device based on AI interview system and computer equipment |
CN111222837A (en) * | 2019-10-12 | 2020-06-02 | 中国平安财产保险股份有限公司 | Intelligent interviewing method, system, equipment and computer storage medium |
CN110909218A (en) * | 2019-10-14 | 2020-03-24 | 平安科技(深圳)有限公司 | Information prompting method and system in question-answering scene |
CN110941990A (en) * | 2019-10-22 | 2020-03-31 | 泰康保险集团股份有限公司 | Method and device for evaluating human body actions based on skeleton key points |
CN110941990B (en) * | 2019-10-22 | 2023-06-16 | 泰康保险集团股份有限公司 | Method and device for evaluating human body actions based on skeleton key points |
CN111222854A (en) * | 2020-01-15 | 2020-06-02 | 中国平安人寿保险股份有限公司 | Interview method, device and equipment based on interview robot and storage medium |
CN111222854B (en) * | 2020-01-15 | 2024-04-09 | 中国平安人寿保险股份有限公司 | Interview robot-based interview method, interview device, interview equipment and storage medium |
CN111611860A (en) * | 2020-04-22 | 2020-09-01 | 西南大学 | Micro-expression occurrence detection method and detection system |
CN111611860B (en) * | 2020-04-22 | 2022-06-28 | 西南大学 | Micro-expression occurrence detection method and detection system |
CN113780993A (en) * | 2021-09-09 | 2021-12-10 | 平安科技(深圳)有限公司 | Data processing method, device, equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109766917A (en) | Interview video data handling procedure, device, computer equipment and storage medium | |
CN107633207B (en) | AU characteristic recognition methods, device and storage medium | |
CN113283551B (en) | Training method and training device of multi-mode pre-training model and electronic equipment | |
CN109960725B (en) | Text classification processing method and device based on emotion and computer equipment | |
WO2020140665A1 (en) | Method and apparatus for quality detection of double-recorded video, and computer device and storage medium | |
CN110021439A (en) | Medical data classification method, device and computer equipment based on machine learning | |
CN109767261A (en) | Products Show method, apparatus, computer equipment and storage medium | |
CN110909137A (en) | Information pushing method and device based on man-machine interaction and computer equipment | |
CN109508638A (en) | Face Emotion identification method, apparatus, computer equipment and storage medium | |
CN109034069B (en) | Method and apparatus for generating information | |
CN109461073A (en) | Risk management method, device, computer equipment and the storage medium of intelligent recognition | |
CN105913507B (en) | A kind of Work attendance method and system | |
CN111444723A (en) | Information extraction model training method and device, computer equipment and storage medium | |
CN109886110A (en) | Micro- expression methods of marking, device, computer equipment and storage medium | |
CN110147833A (en) | Facial image processing method, apparatus, system and readable storage medium storing program for executing | |
Paul et al. | Extraction of facial feature points using cumulative histogram | |
US20230410221A1 (en) | Information processing apparatus, control method, and program | |
CN115862120B (en) | Face action unit identification method and equipment capable of decoupling separable variation from encoder | |
CN112632248A (en) | Question answering method, device, computer equipment and storage medium | |
CN109766772A (en) | Risk control method, device, computer equipment and storage medium | |
CN109101956B (en) | Method and apparatus for processing image | |
CN113076905B (en) | Emotion recognition method based on context interaction relation | |
Karappa et al. | Detection of sign-language content in video through polar motion profiles | |
CN109829388A (en) | Video data handling procedure, device and computer equipment based on micro- expression | |
CN111783725A (en) | Face recognition method, face recognition device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||