CN108764047A - Group emotion and behavior analysis method and device, electronic device, medium, and product - Google Patents
- Publication number
- CN108764047A CN108764047A CN201810395944.1A CN201810395944A CN108764047A CN 108764047 A CN108764047 A CN 108764047A CN 201810395944 A CN201810395944 A CN 201810395944A CN 108764047 A CN108764047 A CN 108764047A
- Authority
- CN
- China
- Prior art keywords
- image
- group
- mood
- facial image
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present invention disclose a group emotion and behavior analysis method and device, electronic device, medium, and product. The method includes: performing image acquisition on a group including at least one person to obtain at least one group image; performing face recognition on the at least one group image to obtain at least one face image, and/or performing human body recognition on the at least one group image to obtain at least one human body image; and determining, based on the at least one face image and/or the at least one human body image, the emotion and/or behavior of at least one person in the group. Compared with manual analysis of emotion and/or behavior, the embodiments of the present invention improve the speed and accuracy of analysis, and can analyze the emotion and behavior of a group simultaneously.
Description
Technical field
The present invention relates to computer vision, and in particular to a group emotion and behavior analysis method and device, electronic device, medium, and product.
Background technology
Identifying the physical condition, learning state, and psychological condition of students in the classroom is particularly important for a school's teaching management. A teacher needs to know whether students are paying attention in class, whether they are energetic, and whether they have understood the knowledge points; when some students have psychological or health problems, these also need to be discovered in time. In practice, however, a teacher cannot attend to every student in a class, nor pay attention to the state changes of each student while teaching.
Invention content
Embodiments of the present invention provide a group emotion analysis technique.
According to one aspect of the embodiments of the present invention, a group emotion and behavior analysis method is provided, including:
performing image acquisition on a group including at least one person to obtain at least one group image;
performing face recognition on the at least one group image to obtain at least one face image, and/or performing human body recognition on the at least one group image to obtain at least one human body image;
and determining, based on the at least one face image and/or the at least one human body image, the emotion and/or behavior of at least one person in the group.
Optionally, before determining the emotion of at least one person in the group based on the at least one face image, the method further includes:
matching the at least one face image against standard face images prestored in a database, the database storing at least one standard face and its identity information;
and determining the identity information corresponding to the face image based on the matching result.
Optionally, determining the emotion of at least one person in the group based on the at least one face image includes:
determining the facial expression corresponding to the face image based on the face image;
and determining the emotion category of the person corresponding to the face image based on the facial expression.
Optionally, determining the facial expression corresponding to the face image based on the face image includes: performing facial expression classification on the at least one face image based on a facial expression classification network to determine the facial expression corresponding to the face image, where the facial expression classification network is trained on sample face images of known facial expression categories.
Optionally, determining the emotion category of the person corresponding to the face image based on the facial expression includes: matching the facial expression against the facial expressions corresponding to at least one emotion category, and determining the emotion category of the person corresponding to the face image based on the matching result; the emotion categories include normal emotion and abnormal emotion, the normal emotion corresponding to at least one facial expression and the abnormal emotion corresponding to at least one facial expression.
Optionally, in response to the facial expression corresponding to an abnormal emotion, an intervention operation is executed.
Optionally, after determining the emotion of at least one person in the group based on the at least one face image, the method further includes:
obtaining an emotion analysis result of the group based on the identity information corresponding to the face image and its corresponding emotion category, the emotion analysis result including at least one emotion record, each emotion record including the identity information and emotion category of one person in the group;
and sending the emotion analysis result of the group to a client, and executing an intervention operation according to feedback information from the client.
Optionally, the emotion record further includes time information corresponding to the emotion category;
obtaining the emotion analysis result of the group based on the identity information corresponding to the face image and its corresponding emotion category includes: obtaining all emotion categories corresponding to the identity information within a set time period, obtaining at least one emotion record corresponding to the identity information, and determining the emotion analysis result of the group.
Optionally, the method further includes: determining the face orientation of the person corresponding to the face image based on the at least one face image.
Optionally, determining the face orientation of the person corresponding to the face image based on the at least one face image includes: performing face orientation classification on the at least one face image based on a face orientation classification network to determine the face orientation corresponding to the face image, where the face orientation classification network is trained on sample face images of known face orientations.
Optionally, the method further includes: determining, based on the face orientation corresponding to the face image, the attitude category of the person corresponding to the face image; the attitude categories include normal attitude and abnormal attitude, the normal attitude corresponding to at least one face orientation category and the abnormal attitude corresponding to at least one face orientation category.
Optionally, in response to the face orientation corresponding to an abnormal attitude, an intervention operation is executed.
Optionally, after determining the attitude category of the person corresponding to the face image based on the face orientation corresponding to the face image, the method further includes:
obtaining an attitude analysis result of the group based on the identity information corresponding to the face image and its corresponding attitude category, the attitude analysis result including at least one attitude record, each attitude record including the identity information and attitude category of one person in the group;
and sending the attitude analysis result of the group to a client, and executing an intervention operation according to feedback information from the client.
Optionally, the attitude record further includes time information corresponding to the attitude category;
obtaining the attitude analysis result of the group based on the identity information corresponding to the face image and its corresponding attitude category includes: obtaining all attitude categories corresponding to the identity information within a set time period, obtaining at least one attitude record corresponding to the identity information, and determining the attitude analysis result of the group.
Optionally, performing human body recognition on the at least one group image to obtain at least one human body image includes:
identifying and obtaining at least one human body image from the group image corresponding to the face image using a human body recognition network;
and obtaining, based on the face image, the human body image corresponding to the face image from the at least one human body image.
Optionally, determining the behavior of at least one person in the group based on the at least one human body image includes:
determining the action category corresponding to the human body image based on the human body image;
and determining the behavior category of the person corresponding to the human body image based on the action category.
Optionally, determining the action category corresponding to the human body image based on the human body image includes: performing action classification on the at least one human body image based on an action classification network to determine the action category corresponding to the human body image, where the action recognition network is trained on sample human body images of known action categories.
Optionally, determining the behavior category of the person corresponding to the human body image based on the action category includes: matching the action category against the action categories corresponding to at least one behavior category, and determining the behavior category of the person corresponding to the human body image based on the matching result; the behavior categories include normal behavior and abnormal behavior, the normal behavior corresponding to at least one action category and the abnormal behavior corresponding to at least one action category.
Optionally, in response to the action category corresponding to an abnormal action, an intervention operation is executed.
Optionally, after determining the behavior category of the person corresponding to the human body image based on the action category corresponding to the human body image, the method further includes:
obtaining a behavior analysis result of the group based on the identity information corresponding to the human body image and its corresponding behavior category, the behavior analysis result including at least one behavior record, each behavior record including the identity information and behavior category of one person in the group;
and sending the behavior analysis result of the group to a client, and executing an intervention operation according to feedback information from the client.
Optionally, the behavior record further includes time information corresponding to the behavior category;
obtaining the behavior analysis result of the group based on the identity information corresponding to the human body image and its corresponding behavior category includes: obtaining all behavior categories corresponding to the identity information within a set time period, obtaining at least one behavior record corresponding to the identity information, and determining the behavior analysis result of the group.
Optionally, the intervention operation includes at least one of the following: sending a prompt message, sending an alarm message, displaying the face image, and displaying the identity information corresponding to the facial expression.
Optionally, performing image acquisition on the group including at least one person to obtain at least one group image includes: performing image acquisition on the group by at least one image acquisition device, obtaining at least one group image, each image acquisition device corresponding to one group image.
Optionally, performing face recognition on the at least one group image to obtain at least one face image includes:
performing face recognition on each group image to obtain at least one face image group, each face image group including at least one face image;
and performing duplicate-checking on each face image group to determine at least one face image corresponding to at least one person in the group, each person in the group corresponding to one face image.
Optionally, performing duplicate-checking on each face image group to determine at least one face image corresponding to at least one person in the group includes:
matching the face images in each face image group against each face image in the other face image groups;
determining duplicate face images in each face image group based on the matching result;
and determining at least one face image corresponding to at least one person in the group based on the duplicate face images.
Optionally, determining at least one face image corresponding to at least one person in the group based on the duplicate face images includes: determining, from at least two duplicate face images, one face image whose face quality reaches a preset quality, thereby determining one face image corresponding to each person in the group.
According to another aspect of the embodiments of the present invention, a group emotion and behavior analysis device is provided, including:
an image acquisition unit, configured to perform image acquisition on a group including at least one person to obtain at least one group image;
a recognition unit, configured to perform face recognition on the at least one group image to obtain at least one face image, and/or perform human body recognition on the at least one group image to obtain at least one human body image;
and an analysis unit, configured to determine, based on the at least one face image and/or the at least one human body image, the emotion and/or behavior of at least one person in the group.
Optionally, the device further includes: an identity recognition unit, configured to match the at least one face image against standard face images prestored in a database, the database storing at least one standard face and its identity information, and to determine the identity information corresponding to the face image based on the matching result.
Optionally, the analysis unit includes:
an expression recognition module, configured to determine the facial expression corresponding to the face image based on the face image;
and an emotion classification module, configured to determine the emotion category of the person corresponding to the face image based on the facial expression.
Optionally, the expression recognition module is specifically configured to perform facial expression classification on the at least one face image based on a facial expression classification network to determine the facial expression corresponding to the face image, where the facial expression classification network is trained on sample face images of known facial expression categories.
Optionally, the emotion classification module is specifically configured to match the facial expression against the facial expressions corresponding to at least one emotion category and determine the emotion category of the person corresponding to the face image based on the matching result; the emotion categories include normal emotion and abnormal emotion, the normal emotion corresponding to at least one facial expression and the abnormal emotion corresponding to at least one facial expression.
Optionally, the emotion classification module is further configured to execute an intervention operation in response to the facial expression corresponding to an abnormal emotion.
Optionally, the analysis unit further includes:
a group emotion module, configured to obtain the emotion analysis result of the group based on the identity information corresponding to the face image and its corresponding emotion category, the emotion analysis result including at least one emotion record, each emotion record including the identity information and emotion category of one person in the group;
and an emotion feedback module, configured to send the emotion analysis result of the group to a client and execute an intervention operation according to feedback information from the client.
Optionally, the emotion record further includes time information corresponding to the emotion category; the group emotion module is specifically configured to obtain all emotion categories corresponding to the identity information within a set time period, obtain at least one emotion record corresponding to the identity information, and determine the emotion analysis result of the group.
Optionally, the device further includes: a face orientation analysis unit, configured to determine the face orientation of the person corresponding to the face image based on the at least one face image.
Optionally, the face orientation analysis unit is configured to perform face orientation classification on the at least one face image based on a face orientation classification network to determine the face orientation corresponding to the face image, where the face orientation classification network is trained on sample face images of known face orientations.
Optionally, the face orientation analysis unit is further configured to determine, based on the face orientation corresponding to the face image, the attitude category of the person corresponding to the face image; the attitude categories include normal attitude and abnormal attitude, the normal attitude corresponding to at least one face orientation category and the abnormal attitude corresponding to at least one face orientation category.
Optionally, the face orientation analysis unit is further configured to execute an intervention operation in response to the face orientation corresponding to an abnormal attitude.
Optionally, the face orientation analysis unit further includes:
a group attitude module, configured to obtain the attitude analysis result of the group based on the identity information corresponding to the face image and its corresponding attitude category, the attitude analysis result including at least one attitude record, each attitude record including the identity information and attitude category of one person in the group;
and an attitude feedback module, configured to send the attitude analysis result of the group to a client and execute an intervention operation according to feedback information from the client.
Optionally, the attitude record further includes time information corresponding to the attitude category; the group attitude module is configured to obtain all attitude categories corresponding to the identity information within a set time period, obtain at least one attitude record corresponding to the identity information, and determine the attitude analysis result of the group.
Optionally, the recognition unit is configured to identify and obtain at least one human body image from the group image corresponding to the face image using a human body recognition network, and to obtain, based on the face image, the human body image corresponding to the face image from the at least one human body image.
Optionally, the analysis unit includes:
an action recognition module, configured to determine the action category corresponding to the human body image based on the human body image;
and a behavior classification module, configured to determine the behavior category of the person corresponding to the human body image based on the action category.
Optionally, the action recognition module is specifically configured to perform action classification on the at least one human body image based on an action classification network to determine the action category corresponding to the human body image, where the action recognition network is trained on sample human body images of known action categories.
Optionally, the behavior classification module is specifically configured to match the action category against the action categories corresponding to at least one behavior category and determine the behavior category of the person corresponding to the human body image based on the matching result; the behavior categories include normal behavior and abnormal behavior, the normal behavior corresponding to at least one action category and the abnormal behavior corresponding to at least one action category.
Optionally, the behavior classification module is further configured to execute an intervention operation in response to the action category corresponding to an abnormal action.
Optionally, the analysis unit further includes:
a group behavior module, configured to obtain the behavior analysis result of the group based on the identity information corresponding to the human body image and its corresponding behavior category, the behavior analysis result including at least one behavior record, each behavior record including the identity information and behavior category of one person in the group;
and a behavior feedback module, configured to send the behavior analysis result of the group to a client and execute an intervention operation according to feedback information from the client.
Optionally, the behavior record further includes time information corresponding to the behavior category; the group behavior module is specifically configured to obtain all behavior categories corresponding to the identity information within a set time period, obtain at least one behavior record corresponding to the identity information, and determine the behavior analysis result of the group.
Optionally, the intervention operation includes at least one of the following: sending a prompt message, sending an alarm message, displaying the face image, and displaying the identity information corresponding to the facial expression.
Optionally, the image acquisition unit is configured to perform image acquisition on the group by at least one image acquisition device, obtaining at least one group image, each image acquisition device corresponding to one group image.
Optionally, the recognition unit includes:
a group recognition module, configured to perform face recognition on each group image to obtain at least one face image group, each face image group including at least one face image;
and a duplicate-checking module, configured to perform duplicate-checking on each face image group to determine at least one face image corresponding to at least one person in the group, each person in the group corresponding to one face image.
Optionally, the duplicate-checking module includes:
a matching module, configured to match the face images in each face image group against each face image in the other face image groups, and to determine duplicate face images in each face image group based on the matching result;
and a face image determination module, configured to determine at least one face image corresponding to at least one person in the group based on the duplicate face images.
Optionally, the face image determination module is specifically configured to determine, from at least two duplicate face images, one face image whose face quality reaches a preset quality, thereby determining one face image corresponding to each person in the group.
According to another aspect of the embodiments of the present invention, an electronic device is provided, including a processor, where the processor includes the group emotion and behavior analysis device described in any one of the above.
According to another aspect of the embodiments of the present invention, an electronic device is provided, including: a memory for storing executable instructions; and a processor for communicating with the memory to execute the executable instructions so as to complete the operations of the group emotion and behavior analysis method described in any one of the above.
According to another aspect of the embodiments of the present invention, a computer storage medium is provided for storing computer-readable instructions, where the instructions, when executed, perform the operations of the group emotion and behavior analysis method described in any one of the above.
According to another aspect of the embodiments of the present invention, a computer program product is provided, including computer-readable code, where when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the group emotion and behavior analysis method described in any one of the above.
Based on the group emotion and behavior analysis method and device, electronic device, medium, and product provided by the above embodiments of the present invention, image acquisition is performed on a group including at least one person to obtain at least one group image; face recognition is performed on the at least one group image to obtain at least one face image, and/or human body recognition is performed on the at least one group image to obtain at least one human body image; and, based on the at least one face image and/or the at least one human body image, the emotion and/or behavior of at least one person in the group is determined. Compared with manual analysis of emotion and/or behavior, this improves the speed and accuracy of analysis, and allows the emotion and behavior of a group to be analyzed simultaneously.
The technical solution of the present invention is described in further detail below through the drawings and embodiments.
Description of the drawings
The drawings, which constitute part of the specification, describe embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
The present invention can be understood more clearly from the following detailed description with reference to the drawings, in which:
Fig. 1 is a flowchart of one embodiment of the group emotion and behavior analysis method of the present invention.
Fig. 2 is a schematic structural diagram of one embodiment of the group emotion and behavior analysis device of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device, such as a terminal device or server, suitable for implementing embodiments of the present invention.
Specific implementation mode
Various exemplary embodiments of the present invention will now be described in detail with reference to the drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention.
At the same time, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present invention or its application or uses.
Techniques, methods, and apparatus known to a person of ordinary skill in the relevant art may not be discussed in detail but, where appropriate, should be considered part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Fig. 1 is a flowchart of one embodiment of the group emotion and behavior analysis method of the present invention. As shown in Fig. 1, the method of this embodiment includes:
Step 110: perform image acquisition on a group including at least one person to obtain at least one group image.
Optionally, the group may include, but is not limited to, a group of students in a classroom scene or a crowd in another crowd-gathering scene.
Since a group usually contains many people, a single image acquisition device is prone to missing some of them. Optionally, multiple image acquisition devices may be arranged at different angles to acquire images of the group; images of the same group are acquired by the multiple image acquisition devices, and the face images of everyone in the current group are then obtained comprehensively for subsequent emotion recognition.
Step 120: perform face recognition on the at least one group image to obtain at least one face image, and/or perform human body recognition on the at least one group image to obtain at least one human body image.
Optionally, the acquired group image includes at least one face image. In order to analyze each person's emotion, each person's face needs to be segmented from the group image to obtain the face image corresponding to each person. At this point, if identical faces acquired by multiple image acquisition devices exist, face deduplication may also be performed so that only one face image is retained per person, avoiding repeated emotion recognition for the same person.
In a group, besides the face, body actions can also reflect a person's psychological activity to a certain extent, and abnormal body behavior also needs attention. Since the distance between a face and its body is usually less than a set value, the corresponding human body image can be determined based on the face image.
Step 130: determine, based on the at least one face image and/or the at least one human body image, the emotion and/or behavior of at least one person in the group.
Based on the group emotion and behavior analysis method provided by the above embodiment of the present invention, image acquisition is performed on a group including at least one person to obtain at least one group image; face recognition is performed on the at least one group image to obtain at least one face image, and/or human body recognition is performed on the at least one group image to obtain at least one human body image; and, based on the at least one face image and/or the at least one human body image, the emotion and/or behavior of at least one person in the group is determined. Compared with manual analysis of emotion and/or behavior, this improves the speed and accuracy of analysis, and allows the emotion and behavior of a group to be analyzed simultaneously.
In one or more optional embodiments, before performing operation 130, the method may further include:
matching the at least one face image against standard face images prestored in a database, the database storing at least one standard face and its identity information;
and determining the identity information corresponding to the face image based on the matching result.
In order to better analyze the emotion of each person in the current group and treat each person individually, the identity information corresponding to each face image needs to be obtained. With the standard faces and their identity information prestored in the database, once an acquired face image matches a standard face, the identity information corresponding to that face image is obtained.
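Concretely, such matching can be realized with face embeddings. The following is a minimal sketch, assuming a hypothetical `embed_face` embedding model and a cosine-similarity threshold; both are illustrative assumptions, not fixed by this application:

```python
import numpy as np

def embed_face(face_image: np.ndarray) -> np.ndarray:
    # Placeholder for any face-embedding network (an assumption, not
    # part of this application).
    raise NotImplementedError("plug in a face-embedding network here")

def match_identity(face_image, database, threshold=0.6):
    """Return the identity info of the closest standard face, or None.

    `database` maps identity info to the embedding of the prestored
    standard face image.
    """
    query = embed_face(face_image)
    best_id, best_sim = None, -1.0
    for identity, std_embedding in database.items():
        # Cosine similarity between the query and a standard face.
        sim = float(np.dot(query, std_embedding) /
                    (np.linalg.norm(query) * np.linalg.norm(std_embedding)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None
```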
In one or more optional embodiments, operation 130 includes:
determining the facial expression corresponding to the face image based on the face image.
Optionally, the facial expression may include, but is not limited to: happy, calm, sad, disgusted, angry, afraid, and surprised.
Optionally, facial expression classification is performed on the at least one face image based on a facial expression classification network to determine the facial expression corresponding to the face image.
The facial expression classification network is trained on sample face images of known facial expression categories. Emotion recognition first detects faces with a face detection network, and then recognizes the facial expression, such as happy or sad, with the facial expression classification network. The network structure of the facial expression classification network of this embodiment is not limited; it only needs to correctly recognize the facial expression in a face image after being trained on sample face images of known facial expression categories.
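As an illustration of such an inference step, the following sketch assumes a PyTorch classification model over the seven expression categories listed above; the input size and label order are assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

EXPRESSIONS = ["happy", "calm", "sad", "disgusted", "angry", "afraid", "surprised"]

# Preprocessing for a cropped face (PIL image); the 112x112 size is an assumption.
preprocess = transforms.Compose([
    transforms.Resize((112, 112)),
    transforms.ToTensor(),
])

def classify_expression(model: torch.nn.Module, face_crop) -> str:
    """Run a trained expression classification network on one face crop."""
    x = preprocess(face_crop).unsqueeze(0)   # shape [1, C, H, W]
    with torch.no_grad():
        logits = model(x)                    # shape [1, len(EXPRESSIONS)]
        probs = F.softmax(logits, dim=1)
    return EXPRESSIONS[int(probs.argmax(dim=1))]
```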
The emotion category of the person corresponding to the face image is then determined based on the facial expression.
Optionally, the facial expression is matched against the facial expressions corresponding to at least one emotion category, and the emotion category of the person corresponding to the face image is determined based on the matching result; the emotion categories include normal emotion and abnormal emotion, the normal emotion corresponding to at least one facial expression and the abnormal emotion corresponding to at least one facial expression.
Emotion recognition of a group usually aims to identify people in an abnormal emotional state. Facial expressions can therefore be divided, according to the emotion they correspond to, into normal emotions (e.g. happy, calm) and abnormal emotions (e.g. sad, disgusted, angry, afraid, surprised). When the expression in a face image corresponds to an expression included in a normal emotion, the emotion of the person corresponding to the face image is judged to be normal; when the expression corresponds to an expression included in an abnormal emotion, the emotion of the person is judged to be abnormal. Which facial expressions correspond to normal emotions and which to abnormal emotions can be set as needed.
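A minimal sketch of this configurable mapping; the particular assignment below follows the example above and can be changed as needed:

```python
# Expression-to-emotion mapping, configurable as needed.
NORMAL = {"happy", "calm"}
ABNORMAL = {"sad", "disgusted", "angry", "afraid", "surprised"}

def emotion_category(expression: str) -> str:
    if expression in NORMAL:
        return "normal"
    if expression in ABNORMAL:
        return "abnormal"
    raise ValueError(f"unmapped expression: {expression}")
```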
Optionally, an intervention operation is executed in response to the facial expression corresponding to an abnormal emotion.
When a person with an abnormal emotion appears in the group, manual intervention is usually required to resolve potential problems. For example, when a student shows an abnormal emotion in class, teaching quality may be affected; at this point, a school administrator, the head teacher, a teacher, or the student's parents may intervene.
In one or more optional embodiments, after determining the emotion of at least one person in the group based on the at least one face image, the method may further include:
obtaining the emotion analysis result of the group based on the identity information corresponding to the face image and its corresponding emotion category, the emotion analysis result including at least one emotion record, each emotion record including the identity information and emotion category of one person in the group;
and sending the emotion analysis result of the group to a client, and executing an intervention operation according to feedback information from the client.
Each of the multiple people in the group corresponds to a different emotion. When the group is large, directly displaying the face images and emotion categories would occupy a great deal of a reviewer's resources and time. The statistics-based emotion analysis result directly yields one-to-one identity information and emotion categories; optionally, it can be viewed in table form, so that everyone's emotion category in the group can be grasped quickly, and when an emotion category needs handling, the corresponding person can be located quickly through the corresponding identity information.
Optionally, the emotion record further includes time information corresponding to the emotion category.
Obtaining the emotion analysis result of the group based on the identity information corresponding to the face image and its corresponding emotion category includes: obtaining all emotion categories corresponding to the identity information within a set time period, obtaining at least one emotion record corresponding to the identity information, and determining the emotion analysis result of the group.
This embodiment not only realizes real-time emotion analysis but can also aggregate the emotions within a set time period to prevent misjudgment: the overall evaluation over the set time period is assessed by the frequency and number of occurrences of the various emotion categories.
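For example, the per-person aggregation over a set time period can be sketched as follows, assuming each emotion record is an (identity, category, timestamp) tuple and a 45-minute window; both are illustrative choices:

```python
from collections import Counter
from datetime import datetime, timedelta

def window_summary(records, identity, window=timedelta(minutes=45), now=None):
    """Count how often each emotion category occurred for one identity
    within the set time period, as a basis for the overall evaluation."""
    now = now or datetime.now()
    in_window = [cat for (pid, cat, ts) in records
                 if pid == identity and now - ts <= window]
    return Counter(in_window)   # e.g. Counter({'normal': 40, 'abnormal': 3})
```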
Optionally, the intervention operation may include, but is not limited to, at least one of the following: sending a prompt message, sending an alarm message, displaying the face image, and displaying the identity information corresponding to the facial expression.
The intervention operation enables a reviewer to quickly notice a person with an abnormal emotion and resolve the situation. For example, when a student shows an abnormal emotion, a prompt message or alarm message is sent to the teacher or parents for intervention. The prompt or alarm message includes, but is not limited to, a real-time alarm SMS, a real-time alarm e-mail, or a real-time alarm phone call; the specific alarm rules can be customized by the school according to actual conditions, and this application imposes no particular limitation.
In one or more optional embodiments, the method for the present invention can also include:Based at least one facial image
Determine the face orientation of the corresponding people of facial image.
Optionally, face orientation can include but is not limited to:Front, bow, come back, a left side is turned one's head, the right side is turned one's head, it is excessive to carry on the back.
Optionally, the face orientation of the corresponding people of facial image is determined based at least one facial image, including:
Face orientation classification is carried out at least one facial image based on face orientation sorter network, determines facial image pair
The face orientation answered;Face orientation sorter network is obtained by the sample facial image training of known face orientation.
The face orientation classification that facial image can be realized based on the face orientation sorter network after training, is also based on and is worked as
Before the ratio-dependent of face-image that recognizes, for example, the facial image currently recognized is whole faces, it is believed that the face
For just facing towards and when recognizing 1/2 face, it is believed that the face is side direction, but not based on face-image ratio
It can accurately classify to the specific direction of face, and face can more accurately be identified based on trained face orientation sorter network
Portion's direction, to recognize the attitude for working as forefathers.
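The proportion-based heuristic mentioned above can be sketched as follows; the visible-width and full-width quantities, and the thresholds, are illustrative assumptions:

```python
def orientation_by_ratio(visible_width: float, full_width: float) -> str:
    """Coarse orientation from the visible fraction of the face."""
    ratio = visible_width / full_width
    if ratio > 0.9:
        return "frontal"      # whole face visible
    if ratio > 0.4:
        return "sideways"     # roughly half the face visible
    return "away"             # mostly the back of the head

# As the text notes, this only separates coarse cases; a trained face
# orientation classification network distinguishes the full label set
# (frontal, head down, head up, turned left, turned right, back of head).
```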
Optionally, the method further includes: determining, based on the face orientation corresponding to the face image, the attitude category of the person corresponding to the face image.
The attitude categories include normal attitude and abnormal attitude, the normal attitude corresponding to at least one face orientation category and the abnormal attitude corresponding to at least one face orientation category.
From a psychological perspective, a person's attitude is partly reflected in facial orientation: a frontal orientation usually indicates a proper attitude, while head down, head turned, or back of the head usually indicates lack of discipline or concentration. In this embodiment, different face orientations can be preset to correspond to different attitudes; when a preset face orientation appears, the attitude of the person corresponding to the face image can be determined by looking up the corresponding attitude, as a basis for handling. Which face orientation categories correspond to normal attitudes and which to abnormal attitudes can be set as needed.
An intervention operation is executed in response to the face orientation corresponding to an abnormal attitude.
When a person with an abnormal attitude appears in the group, manual intervention is usually required to resolve potential problems. For example, when a student shows an abnormal attitude in class, teaching quality may be affected; at this point, the intervention of a school administrator, the head teacher, a teacher, or the student's parents can resolve these problems.
In one or more optional embodiments, after determining the attitude category of the person corresponding to the face image based on the face orientation corresponding to the face image, the method further includes:
obtaining the attitude analysis result of the group based on the identity information corresponding to the face image and its corresponding attitude category, the attitude analysis result including at least one attitude record, each attitude record including the identity information and attitude category of one person in the group;
and sending the attitude analysis result of the group to a client, and executing an intervention operation according to feedback information from the client.
Each of the multiple people in the group corresponds to a different attitude. When the group is large, directly displaying the face images and attitude categories would occupy a great deal of a reviewer's resources and time. The statistics-based attitude analysis result directly yields one-to-one identity information and attitude categories; optionally, it can be viewed in table form, so that everyone's attitude category in the group can be grasped quickly, and when an attitude category needs handling, the corresponding person can be located quickly through the corresponding identity information.
Optionally, the attitude record further includes time information corresponding to the attitude category.
Obtaining the attitude analysis result of the group based on the identity information corresponding to the face image and its corresponding attitude category includes: obtaining all attitude categories corresponding to the identity information within a set time period, obtaining at least one attitude record corresponding to the identity information, and determining the attitude analysis result of the group.
This embodiment not only realizes real-time attitude analysis but can also aggregate the attitudes within a set time period to prevent misjudgment: the overall evaluation over the set time period is assessed by the frequency and number of occurrences of the various attitude categories.
The intervention operation may include, but is not limited to, at least one of the following: sending a prompt message, sending an alarm message, displaying the face image, and displaying the identity information corresponding to the facial expression.
The intervention operation enables a reviewer to quickly notice a person with an abnormal attitude and resolve the situation. For example, when a student shows an abnormal attitude, a prompt message or alarm message is sent to the teacher or parents for intervention. The prompt or alarm message includes, but is not limited to, a real-time alarm SMS, a real-time alarm e-mail, or a real-time alarm phone call; the specific alarm rules can be customized by the school according to actual conditions, and this application imposes no particular limitation.
In one or more optional embodiments, performing human body recognition on the at least one group image to obtain at least one human body image includes:
identifying and obtaining at least one human body image from the group image corresponding to the face image using a human body recognition network;
and obtaining, based on the face image, the human body image corresponding to the face image from the at least one human body image.
Mature human body recognition networks exist in the prior art, and this embodiment does not limit which human body recognition network is used. After human body images are obtained based on the human body recognition network, the face image corresponding to each human body image is determined based on the distance between the human body image and the face image; the identity information corresponding to the human body image is thereby obtained, and a comprehensive analysis of the person's emotion and behavior can be realized by combining the face image and the human body image.
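A minimal sketch of the distance-based association, assuming axis-aligned bounding boxes (x1, y1, x2, y2) and an illustrative distance rule from the face center to the top-center of each body box:

```python
def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def match_face_to_body(face_box, body_boxes, max_dist=80.0):
    """Return the body box nearest to the face, or None if all are too far."""
    fx, fy = center(face_box)
    best, best_d = None, float("inf")
    for body in body_boxes:
        bx = (body[0] + body[2]) / 2   # top-center of the body box,
        by = body[1]                   # where the head should sit
        d = ((fx - bx) ** 2 + (fy - by) ** 2) ** 0.5
        if d < best_d:
            best, best_d = body, d
    return best if best_d <= max_dist else None
```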
In one or more optional embodiments, operation 130 may further include:
determining the action category corresponding to the human body image based on the human body image.
Optionally, the action category may include, but is not limited to: yawning, stretching, sleeping, talking, playing with a mobile phone, raising a hand, and standing up.
The behavior category of the person corresponding to the human body image is determined based on the action category.
Optionally, action classification is performed on the at least one human body image based on an action classification network to determine the action category corresponding to the human body image.
The action recognition network is trained on sample human body images of known action categories. Through the action recognition network, the action category of the person corresponding to the current human body image can be identified accurately, and the behavior category of the person corresponding to the human body image can also be determined based on the action category corresponding to the human body image.
In this embodiment, at least one action category can be preset to correspond to at least one behavior category. To identify which behavior category a recognized action category belongs to, optionally, the action category is matched against the action categories corresponding to at least one behavior category, and the behavior category of the person corresponding to the human body image is determined based on the matching result; the behavior categories include normal behavior and abnormal behavior, the normal behavior corresponding to at least one action category and the abnormal behavior corresponding to at least one action category.
In order to distinguish them, behaviors can be divided into normal behaviors and abnormal behaviors; when someone in the group shows an abnormal behavior (e.g. playing with a mobile phone in class), manual intervention is needed to solve the problem. Which action categories correspond to normal behaviors and which to abnormal behaviors can be set as needed.
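A sketch of the configurable action-to-behavior mapping together with the intervention trigger; the particular assignment and the `notify` hook are illustrative assumptions:

```python
# Action-to-behavior mapping, configurable as needed; any unmapped
# action is treated as abnormal in this sketch.
NORMAL_ACTIONS = {"raising a hand", "standing up"}

def behavior_category(action: str) -> str:
    return "normal" if action in NORMAL_ACTIONS else "abnormal"

def handle_action(identity: str, action: str, notify) -> None:
    """Execute an intervention operation when the behavior is abnormal."""
    if behavior_category(action) == "abnormal":
        # e.g. send a prompt/alarm message to the client
        notify(f"abnormal behavior for {identity}: {action}")
```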
Optionally, an intervention operation is executed in response to the action category corresponding to an abnormal action.
When a person with an abnormal behavior appears in the group, manual intervention is usually required to resolve potential problems. For example, when a student shows an abnormal behavior in class, the quality of education may be affected; at this point, a prompt message or alarm message (e.g. a real-time alarm SMS, a real-time alarm e-mail, or a real-time alarm phone call) is sent to a school administrator, the head teacher, a teacher, or the student's parents. The specific alarm rules can be customized by the school according to actual conditions, and this application imposes no particular limitation.
In one or more optional embodiments, after determining the behavior category of the person corresponding to the human body image based on the action category corresponding to the human body image, the method may further include:
obtaining the behavior analysis result of the group based on the identity information corresponding to the human body image and its corresponding behavior category, the behavior analysis result including at least one behavior record, each behavior record including the identity information and behavior category of one person in the group;
and sending the behavior analysis result of the group to a client, and executing an intervention operation according to feedback information from the client.
Each of the multiple people in the group corresponds to a different behavior. When the group is large, directly displaying the human body images and behavior categories would occupy a great deal of a reviewer's resources and time. The statistics-based behavior analysis result directly yields one-to-one identity information and behavior categories; optionally, it can be viewed in table form, so that everyone's behavior category in the group can be grasped quickly, and when a behavior category needs handling, the corresponding person can be located quickly through the corresponding identity information.
Optionally, the behavior record further includes time information corresponding to the behavior category.
Obtaining the behavior analysis result of the group based on the identity information corresponding to the human body image and its corresponding behavior category includes: obtaining all behavior categories corresponding to the identity information within a set time period, obtaining at least one behavior record corresponding to the identity information, and determining the behavior analysis result of the group.
This embodiment not only realizes real-time behavior analysis but can also aggregate the behaviors within a set time period to prevent misjudgment: the overall evaluation over the set time period is assessed by the frequency and number of occurrences of the various behavior categories.
Optionally, the intervention operation may include, but is not limited to, at least one of the following: sending a prompt message, sending an alarm message, displaying the face image, and displaying the identity information corresponding to the facial expression.
The intervention operation enables a reviewer to quickly notice a person with an abnormal behavior and resolve the situation. For example, when a student shows an abnormal behavior, a prompt message or alarm message is sent to the teacher or parents for intervention. The prompt or alarm message includes, but is not limited to, a real-time alarm SMS, a real-time alarm e-mail, or a real-time alarm phone call; the specific alarm rules can be customized by the school according to actual conditions, and this application imposes no particular limitation.
In one or more optional embodiments, operation 110 may include:
performing image acquisition on the group by at least one image acquisition device, obtaining at least one group image, each image acquisition device corresponding to one group image.
Since a group usually contains many people, in order to obtain all face images more completely, this embodiment uses at least one image acquisition device (e.g. multiple cameras arranged at different positions in a classroom) to acquire group images; each image acquisition device can obtain a group image at each moment. Of course, if the cameras acquire a video stream, each frame is processed as a group image.
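When the cameras provide a video stream, the per-frame treatment can be sketched with OpenCV as follows; the source index/URL is an assumption:

```python
import cv2  # OpenCV: treat each frame of the stream as one group image

def group_images(source=0):
    """Yield frames from one camera; each frame is one group image."""
    cap = cv2.VideoCapture(source)   # device index or RTSP URL
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```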
Optionally, operation 120 may include:
performing face recognition on each group image to obtain at least one face image group, each face image group including at least one face image;
and performing duplicate-checking on each face image group to determine at least one face image corresponding to at least one person in the group, each person in the group corresponding to one face image.
Since the image acquisition devices arranged at different positions acquire images of the same group, the acquired face images may overlap considerably. This embodiment performs duplicate-checking on all face images obtained from the at least one group image so that each person corresponds to one face image, thereby achieving face image acquisition without repetition or omission and making more efficient use of computing resources.
Optionally, performing duplicate-checking on each face image group to determine at least one face image corresponding to at least one person in the group includes:
matching the face images in each face image group against each face image in the other face image groups;
determining duplicate face images in each face image group based on the matching result;
and determining at least one face image corresponding to at least one person in the group based on the duplicate face images.
Each face image obtained from one group image is matched against each face image obtained from the other group images (face images obtained from the same group image need no duplicate-checking among themselves). When matching face images exist, the at least two matching face images are treated as duplicate face images, and one of them is chosen as the face image corresponding to that person.
Optionally, from at least two duplicate face images, one face image whose face quality reaches a preset quality is determined, thereby determining one face image corresponding to each person in the group.
The evaluation indicators of face quality may include, but are not limited to: clarity, facial angle, face size, and so on. Usually, the face image with the best face quality, or one reaching the preset quality standard, is obtained from the multiple duplicate face images; the better the face quality of the obtained face image, the more accurate the result of the subsequent emotion recognition.
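A sketch of the cross-device duplicate-checking and quality-based selection, assuming hypothetical `embed_face` and `quality` helpers (e.g. sharpness- or size-based):

```python
import numpy as np

def dedupe(face_groups, embed_face, quality, sim_thr=0.7):
    """Keep one face per person: faces from different cameras whose
    embeddings match are duplicates, and the best-quality crop wins."""
    kept = []   # one (embedding, face) pair per distinct person
    for group in face_groups:           # one face image group per camera
        for face in group:
            e = embed_face(face)
            for i, (ke, kf) in enumerate(kept):
                sim = float(np.dot(e, ke) /
                            (np.linalg.norm(e) * np.linalg.norm(ke)))
                if sim >= sim_thr:      # duplicate: keep the better crop
                    if quality(face) > quality(kf):
                        kept[i] = (e, face)
                    break
            else:
                kept.append((e, face))  # new person
    return [f for _, f in kept]
```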
In an optional example, the group emotion analysis method of the present invention is applied in a classroom:
A camera is installed on each side at the front of the classroom to acquire real-time video of the classroom, including class time and break time. A video server is deployed at the school computing center for video analysis and storage, and a business system server is deployed at the school computing center or on a third-party trusted server platform. The video acquired by the cameras is transmitted to the video server in real time over the campus network; after parsing and processing, the video server reports the results to the business server. Teachers then check the statistics and results through the system backend supported by the business server. When special circumstances occur or there is a specific demand, alarm or prompt messages are sent, and individual face images are provided for the teacher to check.
Processing of two video streams by the video server: two cameras are deployed in each classroom to obtain larger coverage and wider angles, giving better recognition of side faces and behaviors such as turning the head. The video server processes the two streams simultaneously: when the two judgements are consistent, that result is used; when they are inconsistent, the judgement classed as abnormal is used; when one stream yields no judgement, the judgement of the other stream is used; and when both are empty, the judgement is abandoned.
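A minimal sketch of this two-stream reconciliation rule follows; the label strings and the `is_abnormal` test are assumptions for illustration, since the patent does not name the concrete label set returned by the models.

```python
ABNORMAL_LABELS = {"sad", "disgust", "angry", "afraid", "surprised",
                   "sleeping", "playing_phone", "talking"}  # assumed label set

def is_abnormal(label):
    """Illustrative abnormality test over the assumed label set."""
    return label in ABNORMAL_LABELS

def reconcile(result_a, result_b):
    """Combine the judgements of the two cameras covering one classroom.

    A stream that produced no judgement passes None. Mirrors the rules in
    the text: agreement wins; disagreement resolves to the judgement classed
    as abnormal; one empty stream defers to the other; both empty, abandon.
    """
    if result_a is None and result_b is None:
        return None                      # both ways empty: abandon the judgement
    if result_a is None:
        return result_b                  # one way empty: use the other way
    if result_b is None:
        return result_a
    if result_a == result_b:
        return result_a                  # consistent: use the shared result
    return result_a if is_abnormal(result_a) else result_b
```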
Student identity recognition: the system performs face detection and face recognition, which depends on whether a comparison database has been established. The application establishes a standardized class and educational administration information framework; the school's educational administration information is imported into the system in two forms, manual import and API interface import, to achieve information synchronization. Before the system is put into use, facial images of each student can be collected, on which basis the identification of students is realized.
The video server parses the video content, including basic face detection and face recognition, as well as recognition of expressions (happy, calm, sad, disgust, angry, afraid, surprised), actions (yawning, stretching, sleeping, talking, playing with a mobile phone, raising a hand, standing up), and face orientations (head down, head up, turning left, turning right, leaning back).
Recognition of students' emotions and behaviors may include the following three categories: expression, action, and face orientation. Behavior detection and recognition require human detection for everyone in the classroom; for each person, a trained behavior model is called to detect and recognize the behavior. Technically, human detection, face detection, and classification models are called. Optionally, through multi-thread design at the engineering product level, the three tasks can be distributed across different threads, further ensuring the real-time performance of emotion and behavior recognition.
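As a sketch of this multi-thread design, the three model calls can be dispatched to separate threads via a thread pool. The callables are placeholders (the patent names no concrete models), and inter-task data dependencies are glossed over for brevity.

```python
from concurrent.futures import ThreadPoolExecutor

def analyse_frame(frame, detect_humans, detect_faces, classify):
    """Run human detection, face detection, and classification for one
    frame on separate threads, so the three tasks overlap in time."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        humans = pool.submit(detect_humans, frame)
        faces = pool.submit(detect_faces, frame)
        labels = pool.submit(classify, frame)
        return {
            "humans": humans.result(),
            "faces": faces.result(),
            "labels": labels.result(),
        }
```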
Through product and service design, the business system derives the following three kinds of key state information from the raw data of machine recognition:
The physical state of a student: energetic, tired, or asleep. If the student is recognized as yawning or stretching, the student's body is considered tired; if sleeping is recognized, the student is directly judged to be asleep; if none of the above behaviors occur, the student is inversely judged to be normal and energetic. The overall evaluation for a period is assessed by the frequency and number of occurrences of the various actions.
The student's attitude towards study and whether the study is focused: if behaviors such as whispering, playing with a mobile phone, turning around, or turning to talk with the back row are recognized, the student is considered insufficiently focused; if none of these behaviors occur, the student is inversely judged to be relatively focused in class. The overall evaluation for a period is assessed by the frequency and number of occurrences of the various actions.
The psychological state of a student: healthy and positive, or negative and in need of attention. If expression recognition yields sad, disgust, angry, or afraid, the state is considered negative and in need of attention; if the expression is recognized as happy or calm, the state is considered psychologically positive and healthy. The overall evaluation for a period is assessed by the frequency and number of occurrences of the various expressions.
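These three inverse-judgement rules can be written down directly, as in the sketch below; the English label strings are assumed equivalents of the recognized actions and expressions, and each function takes the set of labels recognized for one student.

```python
def physical_state(actions):
    """Sleeping -> asleep; yawning or stretching -> tired; otherwise the
    inverse judgement applies: normal and energetic."""
    if "sleeping" in actions:
        return "asleep"
    if actions & {"yawning", "stretching"}:
        return "tired"
    return "energetic"

def study_focus(actions):
    """Whispering, phone use, turning around, or chatting with the back
    row -> not focused; otherwise inversely judged relatively focused."""
    if actions & {"whispering", "playing_phone", "turning_around", "talking_to_back_row"}:
        return "not focused"
    return "focused"

def psychological_state(expressions):
    """Sad/disgust/angry/afraid -> negative, needs attention;
    happy/calm -> positive and healthy."""
    if expressions & {"sad", "disgust", "angry", "afraid"}:
        return "negative, needs attention"
    if expressions & {"happy", "calm"}:
        return "positive and healthy"
    return "unclassified"
```

The period-level evaluation then reduces to counting how often each state occurred over the period, as the text describes.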
After the above student state data is obtained, it is counted and visualized in the system backend (it can be displayed through statistical tables or charts). Teachers can check the detailed information through the visualization backend and can also configure alarm rules; once an event defined by an alarm rule occurs, the system automatically sends alarm information to the designated person.
The invention has the following characteristics: hardware dependence is low, as video is captured simply by deploying cameras; software design dependence is extremely low, since emotion and behavior recognition is completed by the algorithm itself rather than by complicated product design and business logic; manual dependence is low, as teachers or administrative staff basically do not need to intervene in the system; real-time operation is easy to achieve, requiring only cameras installed in classrooms and a deployed server; cost dependence is low, because no excessive hardware or manual input is required, so overall cost is low; and the emotion and behavior recognition results obtained through machine learning not only have wide coverage but also higher accuracy.
The advantageous effects achieved include: based on artificial intelligence and machine learning technologies, the problem of analyzing the emotions and behaviors of a group is solved with faster efficiency and higher accuracy; lightweight hardware deployment solves the problem of information collection, breaking away from excessive reliance on hardware and requiring no complicated engineering; and the recognition capability obtained by training on large amounts of video data breaks through human limitations, judging people's emotions and behaviors more accurately at the business and product level than traditional software systems that rely only on product design and business logic design.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are executed. The foregoing storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Fig. 2 is a structural schematic diagram of an embodiment of the group emotion and behavior analysis device of the present invention. The device of this embodiment can be used to realize each of the above method embodiments of the present disclosure. As shown in Fig. 2, the device of this embodiment includes:
an image acquisition unit 21, configured to perform image acquisition on a group including at least one person to obtain at least one group image;
a recognition unit 22, configured to perform face recognition on the at least one group image to obtain at least one facial image, and/or to perform human body recognition on the at least one group image to obtain at least one human body image.
Optionally, the acquired group image includes at least one facial image. In order to analyze each person's emotion, each person's face needs to be segmented out of the group image to obtain the facial image corresponding to each person. At this point, if identical faces captured by multiple image capture devices exist, face deduplication can also be performed so that only one facial image is retained per person, avoiding repeated emotion recognition for the same person.
In a group, in addition to faces, body actions can also reflect a person's psychological activity to a certain extent and need attention when abnormal body behavior occurs. Since the distance between a face and its corresponding human body is usually less than a set value, the corresponding human body image can be determined based on the facial image.
an analysis unit 23, configured to determine the emotion and/or behavior of the at least one person in the group based on the at least one facial image and/or the at least one human body image.
Based on the group emotion and behavior analysis device provided by the above embodiment of the present invention, image acquisition is performed on a group including at least one person to obtain at least one group image; face recognition is performed on the at least one group image to obtain at least one facial image, and/or human body recognition is performed on the at least one group image to obtain at least one human body image; and the emotion and/or behavior of the at least one person in the group is determined based on the at least one facial image and/or the at least one human body image. Compared with manual analysis of emotion and/or behavior, this improves the speed and accuracy of analysis, and enables simultaneous analysis of the group's emotions and behaviors.
In one or more optional embodiments, the device further includes:
an identity recognition unit, configured to match the at least one facial image against standard facial images pre-stored in a database, where at least one standard face and its identity information are stored in the database, and to determine the identity information corresponding to a facial image based on the matching result.
In order to better analyze the emotion of each person in the current group and treat each person distinctly, the identity information corresponding to each facial image needs to be obtained. With the standard faces and their identity information pre-stored in the database, when an acquired facial image matches a standard face, the identity information corresponding to that facial image can be obtained.
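A sketch of this database match follows, assuming embedding-based comparison; the similarity measure and the 0.6 threshold are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np

def identify(face_embedding, database, threshold=0.6):
    """Match one facial image's embedding against the pre-stored standard faces.

    `database` maps identity information (e.g. a student ID) to the embedding
    of that person's enrolled standard face. Returns the best match above
    `threshold`, or None when the face matches no enrolled standard face.
    """
    best_id, best_score = None, threshold
    for identity, standard in database.items():
        score = float(np.dot(face_embedding, standard) /
                      (np.linalg.norm(face_embedding) * np.linalg.norm(standard)))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```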
In one or more optional embodiments, the analysis unit 23 includes:
an expression recognition module, configured to determine the facial expression corresponding to a facial image based on the facial image;
optionally, facial expressions may include, but are not limited to: happy, calm, sad, disgust, angry, afraid, surprised;
a mood classification module, configured to determine the mood classification of the person corresponding to the facial image based on the facial expression.
Emotion recognition of a group usually aims to identify people in an abnormal mood. Facial expressions can therefore be divided by the mood they correspond to into normal moods (e.g., happy, calm) and abnormal moods (e.g., sad, disgust, angry, afraid, surprised). When the expression in a facial image corresponds to an expression included in the normal moods, the mood of the person corresponding to that facial image is judged to be normal; when the expression corresponds to one included in the abnormal moods, the mood of the person corresponding to that facial image is judged to be abnormal.
Optionally, the expression recognition module is specifically configured to perform facial expression classification on the at least one facial image based on a facial expression classification network to determine the facial expression corresponding to the facial image; the facial expression classification network is trained on sample facial images with known facial expression classifications.
Optionally, the mood classification module is specifically configured to match the facial expression against the facial expressions corresponding to at least one mood classification and to determine the mood classification of the person corresponding to the facial image based on the matching result. The mood classifications include normal mood and abnormal mood; a normal mood corresponds to at least one facial expression, and an abnormal mood corresponds to at least one facial expression.
Optionally, the mood classification module is further configured to execute an intervention operation in response to the facial expression corresponding to an abnormal mood.
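The expression-to-mood lookup and the intervention trigger amount to very little code; in this sketch the expression names and the `notify` callback are assumptions for illustration.

```python
NORMAL_EXPRESSIONS = {"happy", "calm"}          # expressions mapped to normal mood
ABNORMAL_EXPRESSIONS = {"sad", "disgust", "angry", "afraid", "surprised"}

def classify_mood(expression):
    """Match the recognized expression against the expressions of each mood class."""
    return "normal" if expression in NORMAL_EXPRESSIONS else "abnormal"

def handle_expression(expression, identity, notify):
    """Execute an intervention operation when the mood is abnormal, e.g.
    sending prompt or alarm information about this person to the client."""
    mood = classify_mood(expression)
    if mood == "abnormal":
        notify(identity, expression)
    return mood
```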
In one or more optional embodiments, the analysis unit 23 further includes:
a group mood module, configured to obtain the mood analysis result of the group based on the identity information corresponding to the facial images and their corresponding mood classifications;
where the mood analysis result includes at least one mood record, and each mood record includes the identity information and mood classification corresponding to one person in the group;
a mood feedback module, configured to send the mood analysis result of the group to a client and to execute an intervention operation according to feedback information from the client.
For the multiple people in a group, each person corresponds to a different mood. When the group is large, directly displaying facial images and mood classifications would occupy considerable reviewer resources and time. A statistics-based mood analysis result directly provides one-to-one identity information and mood classification; optionally, it can be viewed in table form, so everyone's mood classification in the group can be grasped quickly, and when a mood classification requires treatment, the corresponding person can be quickly determined based on the corresponding identity information.
Optionally, the mood record further includes the time information corresponding to the mood classification;
the group mood module is specifically configured to obtain all mood classifications corresponding to an identity within a set time, to obtain the at least one mood record corresponding to that identity information, and to determine the mood analysis result of the group.
Optionally, the intervention operation may include, but is not limited to, at least one of the following: sending prompt information, sending alarm information, displaying the facial image, and displaying the identity information corresponding to the facial expression.
In one or more optional embodiments, the device further includes:
a face orientation analysis unit, configured to determine the face orientation of the person corresponding to a facial image based on the at least one facial image.
Optionally, face orientations may include, but are not limited to: front, head down, head up, turning left, turning right, leaning back.
Optionally, the face orientation analysis unit is configured to perform face orientation classification on the at least one facial image based on a face orientation classification network to determine the face orientation corresponding to the facial image, where the face orientation classification network is trained on sample facial images with known face orientations.
Optionally, the face orientation analysis unit is further configured to determine, based on the face orientation corresponding to the facial image, the attitude classification of the person corresponding to the facial image, where the attitude classifications include normal attitude and abnormal attitude; a normal attitude corresponds to at least one face orientation classification, and an abnormal attitude corresponds to at least one face orientation classification.
From a psychological perspective, a person's attitude is partly embodied in facial orientation: facing front indicates a proper attitude, while head down, turning around, or leaning back usually indicate a lack of discipline or focus. This embodiment can preset different face orientations to correspond to different attitudes; when a preset face orientation appears, the attitude of the person corresponding to that facial image can be determined by looking up the corresponding attitude, serving as a basis for processing.
Optionally, the face orientation analysis unit is further configured to execute an intervention operation in response to the face orientation corresponding to an abnormal attitude.
Optionally, the face orientation analysis unit further includes:
a group attitude module, configured to obtain the attitude analysis result of the group based on the identity information corresponding to the facial images and their corresponding attitude classifications, where the attitude analysis result includes at least one attitude record, and each attitude record includes the identity information and attitude classification corresponding to one person in the group;
an attitude feedback module, configured to send the attitude analysis result of the group to the client and to execute an intervention operation according to feedback information from the client.
Optionally, the attitude record further includes the time information corresponding to the attitude classification;
the group attitude module is configured to obtain all attitude classifications corresponding to an identity within a set time, to obtain the at least one attitude record corresponding to that identity information, and to determine the attitude analysis result of the group.
Optionally, the intervention operation includes at least one of the following: sending prompt information, sending alarm information, displaying the facial image, and displaying the identity information corresponding to the facial expression.
In one or more optional embodiments, the recognition unit 22 is configured to use a human body recognition network to identify at least one human body image from the group image corresponding to the facial image, and to obtain, based on the facial image, the human body image corresponding to the facial image from the at least one human body image.
Mature human body recognition networks exist in the prior art, and this embodiment does not limit which specific human body recognition network is used to carry out human body recognition. After a human body image is obtained via the human body recognition network, the facial image corresponding to the human body image is determined based on the distance between the human body image and the facial image, so the identity information corresponding to that human body image can be obtained, and a comprehensive analysis of the person's emotion and behavior can be realized by combining the facial image and the human body image.
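The face-to-body association can be sketched as a nearest-box search under the set distance value; the bounding-box format and the pixel threshold below are assumptions for illustration.

```python
def center(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def match_body_to_face(face_box, body_boxes, max_distance=150.0):
    """Pick the detected human body closest to the face, provided the
    distance stays below the set value, per the heuristic above."""
    fx, fy = center(face_box)
    best, best_d = None, max_distance
    for body in body_boxes:
        bx, by = center(body)
        d = ((fx - bx) ** 2 + (fy - by) ** 2) ** 0.5
        if d < best_d:
            best, best_d = body, d
    return best
```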
In one or more optional embodiments, the analysis unit 23 includes:
an action recognition module, configured to determine the action classification corresponding to a human body image based on the human body image;
optionally, action classifications may include, but are not limited to: yawning, stretching, sleeping, talking, playing with a mobile phone, raising a hand, standing up;
a behavior classification module, configured to determine the behavior classification of the person corresponding to the human body image based on the action classification.
In this embodiment, at least one action classification can be preset to correspond to at least one behavior classification. When determining which behavior classification the recognized action classification belongs to, optionally, the action classification is matched against the action classifications corresponding to the at least one behavior classification, and the behavior classification of the person corresponding to the human body image is determined based on the matching result. The behavior classifications include normal behavior and abnormal behavior; a normal behavior corresponds to at least one action classification, and an abnormal behavior corresponds to at least one action classification.
Optionally, the action recognition module is specifically configured to perform action classification on the at least one human body image based on an action classification network to determine the action classification corresponding to the human body image; the action recognition network is trained on sample human body images with known action classifications.
Optionally, the behavior classification module is specifically configured to match the action classification against the action classifications corresponding to the at least one behavior classification and to determine the behavior classification of the person corresponding to the human body image based on the matching result.
The behavior classifications include normal behavior and abnormal behavior; a normal behavior corresponds to at least one action classification, and an abnormal behavior corresponds to at least one action classification.
Optionally, the behavior classification module is further configured to execute an intervention operation in response to the action classification corresponding to an abnormal action.
Optionally, the analysis unit further includes:
a group behavior module, configured to obtain the behavior analysis result of the group based on the identity information corresponding to the human body images and their corresponding behavior classifications, where the behavior analysis result includes at least one behavior record, and each behavior record includes the identity information and behavior classification corresponding to one person in the group;
a behavior feedback module, configured to send the behavior analysis result of the group to the client and to execute an intervention operation according to feedback information from the client.
Optionally, the behavior record further includes the time information corresponding to the behavior classification;
the group behavior module is specifically configured to obtain all behavior classifications corresponding to an identity within a set time, to obtain the at least one behavior record corresponding to that identity information, and to determine the behavior analysis result of the group.
Optionally, the intervention operation includes at least one of the following: sending prompt information, sending alarm information, displaying the facial image, and displaying the identity information corresponding to the facial expression.
In one or more optional embodiments, the image acquisition unit 21 is configured to perform image acquisition on the group through at least one image capture device, obtaining at least one group image, with each image capture device corresponding to one group image.
Since a group usually contains many people, in order to obtain all facial images more completely, this embodiment uses at least one image capture device (for example, multiple cameras arranged at different locations in a classroom) to acquire group images, with each image capture device obtaining a group image at each moment. Of course, if a camera captures a video stream, each frame of the video is treated as a group image during processing.
Optionally, the recognition unit 22 includes:
a group recognition module, configured to perform face recognition on each group image to obtain at least one facial image group, where each facial image group includes at least one facial image;
a deduplication module, configured to perform deduplication on the facial image groups to determine at least one facial image corresponding to the at least one person in the group, with each person in the group corresponding to one facial image.
Optionally, the deduplication module includes:
a matching module, configured to match each facial image in each facial image group against the facial images in the other facial image groups and to determine duplicate facial images in each facial image group based on the matching results;
a facial image determination module, configured to determine, based on the duplicate facial images, the at least one facial image corresponding to the at least one person in the group.
Optionally, the facial image determination module is specifically configured to determine, from at least two duplicate facial images, a facial image whose face quality reaches a preset quality, so that each person in the group corresponds to one facial image.
According to another aspect of the embodiments of the present disclosure, an electronic device is provided, including a processor, where the processor includes the group emotion and behavior analysis device described in any of the above embodiments of the present disclosure.
According to another aspect of the embodiments of the present disclosure, an electronic device is provided, including: a memory for storing executable instructions;
and a processor for communicating with the memory to execute the executable instructions so as to complete the group emotion and behavior analysis method described in any of the above embodiments of the present disclosure.
According to another aspect of the embodiments of the present disclosure, a computer storage medium is provided for storing computer-readable instructions; when the instructions are executed by a processor, the processor executes the group emotion and behavior analysis method described in any of the above embodiments of the present disclosure.
According to another aspect of the embodiments of the present disclosure, a computer program product is provided, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes the group emotion and behavior analysis method in any of the above embodiments of the present disclosure.
It should be understood that terms such as "first" and "second" in the embodiments of the present disclosure are used only for distinction and should not be construed as limiting the embodiments of the present disclosure.
It should also be understood that in the present disclosure, "multiple" can refer to two or more, and "at least one" can refer to one, two, or more.
It should also be understood that any component, data, or structure mentioned in the present disclosure may generally be understood as one or more, in the absence of an explicit limitation or a contrary indication in the context.
It should also be understood that the description of each embodiment of the present disclosure highlights the differences between the embodiments; for the same or similar aspects, the embodiments may be referred to one another, and for the sake of brevity, they are not repeated one by one.
The embodiments of the present disclosure also provide an electronic device, which can be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring to Fig. 3, it shows a structural schematic diagram of an electronic device 300 suitable for realizing the terminal device or server of the embodiments of the present application. As shown in Fig. 3, the electronic device 300 includes one or more processors, a communication unit, and the like. The one or more processors are, for example, one or more central processing units (CPU) 301 and/or one or more graphics processors (GPU) 313; a processor can execute various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 302 or loaded from a storage section 308 into a random access memory (RAM) 303. The communication unit 312 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (Infiniband) network card.
The processor can communicate with the read-only memory 302 and/or the random access memory 303 to execute executable instructions, is connected to the communication unit 312 through a bus 304, and communicates with other target devices through the communication unit 312, thereby completing the operations corresponding to any method provided by the embodiments of the present application, for example: performing image acquisition on a group including at least one person to obtain at least one group image; performing face recognition on the at least one group image to obtain at least one facial image, and/or performing human body recognition on the at least one group image to obtain at least one human body image; and determining the emotion and/or behavior of the at least one person in the group based on the at least one facial image and/or the at least one human body image.
In addition, the RAM 303 can also store various programs and data needed for device operation. The CPU 301, the ROM 302, and the RAM 303 are connected to each other through the bus 304. Where the RAM 303 exists, the ROM 302 is an optional module. The RAM 303 stores executable instructions, or executable instructions are written into the ROM 302 at runtime, and the executable instructions cause the central processing unit 301 to execute the operations corresponding to the above method. An input/output (I/O) interface 305 is also connected to the bus 304. The communication unit 312 can be integrally arranged, or can be arranged with multiple sub-modules (e.g., multiple IB network cards) linked on the bus.
The following components are connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, and the like; an output section 307 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card or a modem. The communication section 309 executes communication processing via a network such as the Internet. A driver 310 is also connected to the I/O interface 305 as needed. A detachable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
It should be noted that the architecture shown in Fig. 3 is only an optional implementation; in concrete practice, the number and types of the components in Fig. 3 can be selected, deleted, increased, or replaced according to actual needs. Different functional components can be arranged separately or integrally; for example, the GPU 313 and the CPU 301 can be arranged separately, or the GPU 313 can be integrated on the CPU 301; the communication unit can be arranged separately, or can be integrated on the CPU 301 or the GPU 313; and so on. These interchangeable embodiments all fall within the protection scope of the present disclosure.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: performing image acquisition on a group including at least one person to obtain at least one group image; performing face recognition on the at least one group image to obtain at least one facial image, and/or performing human body recognition on the at least one group image to obtain at least one human body image; and determining the emotion and/or behavior of the at least one person in the group based on the at least one facial image and/or the at least one human body image. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 309 and/or installed from the detachable medium 311. When the computer program is executed by the central processing unit (CPU) 301, the operations of the above functions defined in the method of the present application are executed.
The method, device, and equipment of the present disclosure may be implemented in many ways, for example, through software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is merely for illustration; the steps of the method of the present disclosure are not limited to the order described above in detail unless otherwise specifically stated. In addition, in some embodiments, the present disclosure can also be embodied as programs recorded in a recording medium, these programs including machine-readable instructions for realizing the method according to the present disclosure. Thus, the present disclosure also covers a recording medium storing programs for executing the method according to the present disclosure.
The description of the present disclosure is provided for the sake of example and description, and is not intended to be exhaustive or to limit the present disclosure to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were selected and described to better illustrate the principles and practical applications of the present disclosure, and to enable those skilled in the art to understand the various embodiments of the present disclosure with various modifications suited to the particular use contemplated.
Claims (10)
1. A group emotion and behavior analysis method, characterized by comprising:
performing image acquisition on a group including at least one person to obtain at least one group image;
performing face recognition on the at least one group image to obtain at least one facial image, and/or performing human body recognition on the at least one group image to obtain at least one human body image; and
determining the emotion and/or behavior of the at least one person in the group based on the at least one facial image and/or the at least one human body image.
2. The method according to claim 1, characterized in that, before determining the emotion of the at least one person in the group based on the at least one facial image, the method further comprises:
matching the at least one facial image against standard facial images pre-stored in a database, wherein at least one standard face and its identity information are stored in the database; and
determining the identity information corresponding to the facial image based on the matching result.
3. The method according to claim 1 or 2, characterized in that determining the emotion of the at least one person in the group based on the at least one facial image comprises:
determining the facial expression corresponding to the facial image based on the facial image; and
determining the mood classification of the person corresponding to the facial image based on the facial expression.
4. The method according to claim 3, characterized in that determining the facial expression corresponding to the facial image based on the facial image comprises:
performing facial expression classification on the at least one facial image based on a facial expression classification network to determine the facial expression corresponding to the facial image, wherein the facial expression classification network is trained on sample facial images with known facial expression classifications.
5. The method according to claim 3 or 4, characterized in that determining the mood classification of the person corresponding to the facial image based on the facial expression comprises:
matching the facial expression against the facial expressions corresponding to at least one mood classification, and determining the mood classification of the person corresponding to the facial image based on the matching result, wherein the mood classifications comprise normal mood and abnormal mood, the normal mood corresponds to at least one facial expression, and the abnormal mood corresponds to at least one facial expression.
6. A group emotion and behavior analysis device, characterized by comprising:
an image acquisition unit, configured to perform image acquisition on a group including at least one person to obtain at least one group image;
a recognition unit, configured to perform face recognition on the at least one group image to obtain at least one facial image, and/or to perform human body recognition on the at least one group image to obtain at least one human body image; and
an analysis unit, configured to determine the emotion and/or behavior of the at least one person in the group based on the at least one facial image and/or the at least one human body image.
7. An electronic device, characterized by comprising a processor, wherein the processor includes the group emotion and behavior analysis device of claim 6.
8. An electronic device, characterized by comprising: a memory for storing executable instructions;
and a processor for communicating with the memory to execute the executable instructions so as to complete the operations of the group emotion and behavior analysis method of any one of claims 1 to 5.
9. A computer storage medium for storing computer-readable instructions, characterized in that, when the instructions are executed, the operations of the group emotion and behavior analysis method of any one of claims 1 to 5 are performed.
10. A computer program product, comprising computer-readable code, characterized in that, when the computer-readable code runs on a device, a processor in the device executes instructions for realizing the group emotion and behavior analysis method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810395944.1A CN108764047A (en) | 2018-04-27 | 2018-04-27 | Group's emotion-directed behavior analysis method and device, electronic equipment, medium, product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810395944.1A CN108764047A (en) | 2018-04-27 | 2018-04-27 | Group's emotion-directed behavior analysis method and device, electronic equipment, medium, product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108764047A true CN108764047A (en) | 2018-11-06 |
Family
ID=64012456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810395944.1A Pending CN108764047A (en) | 2018-04-27 | 2018-04-27 | Group's emotion-directed behavior analysis method and device, electronic equipment, medium, product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108764047A (en) |
2018-04-27: Application CN201810395944.1A filed in China; published as CN108764047A (legal status: Pending)
Patent Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5774591A (en) * | 1995-12-15 | 1998-06-30 | Xerox Corporation | Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images |
CN1503194A (en) * | 2002-11-26 | 2004-06-09 | 中国科学院计算技术研究所 | Status identification method by using body information matched human face information |
CN1581149A (en) * | 2004-03-25 | 2005-02-16 | 东南大学 | Method for constituting man-machine interface using humen's sentiment and sentiment variation information |
CN101588443A (en) * | 2009-06-22 | 2009-11-25 | 费炜 | Statistical device and detection method for television audience ratings based on human face |
CN101908139A (en) * | 2010-07-15 | 2010-12-08 | 华中科技大学 | Method for supervising learning activities of learning machine user |
CN102881239A (en) * | 2011-07-15 | 2013-01-16 | 鼎亿数码科技(上海)有限公司 | Advertisement playing system and method based on image identification |
CN102799877A (en) * | 2012-09-11 | 2012-11-28 | 上海中原电子技术工程有限公司 | Method and system for screening face images |
CN102945624A (en) * | 2012-11-14 | 2013-02-27 | 南京航空航天大学 | Intelligent video teaching system based on cloud calculation model and expression information feedback |
CN104598869A (en) * | 2014-07-25 | 2015-05-06 | 北京智膜科技有限公司 | Intelligent advertisement pushing method based on human face recognition device |
CN106294489A (en) * | 2015-06-08 | 2017-01-04 | 北京三星通信技术研究有限公司 | Content recommendation method, Apparatus and system |
CN105653037A (en) * | 2015-12-31 | 2016-06-08 | 张小花 | Interactive system and method based on behavior analysis |
CN105701466A (en) * | 2016-01-13 | 2016-06-22 | 杭州奇客科技有限公司 | Rapid all angle face tracking method |
CN107491717A (en) * | 2016-06-13 | 2017-12-19 | 科大讯飞股份有限公司 | The detection method that cheats at one's exam and device |
CN106599881A (en) * | 2016-12-30 | 2017-04-26 | 首都师范大学 | Student state determination method, device and system |
CN106909896A (en) * | 2017-02-17 | 2017-06-30 | 竹间智能科技(上海)有限公司 | Man-machine interactive system and method for work based on character personality and interpersonal relationships identification |
CN106970743A (en) * | 2017-03-27 | 2017-07-21 | 宇龙计算机通信科技(深圳)有限公司 | A kind of icon sort method, device and mobile terminal |
CN106851579A (en) * | 2017-03-27 | 2017-06-13 | 华南师范大学 | The method that teacher's mobile data is recorded and analyzed based on indoor positioning technologies |
CN107085512A (en) * | 2017-04-24 | 2017-08-22 | 广东小天才科技有限公司 | A kind of audio frequency playing method and mobile terminal |
CN107767313A (en) * | 2017-05-18 | 2018-03-06 | 青岛陶知电子科技有限公司 | A kind of intelligent interaction tutoring system with emotion recognition function |
CN107292240A (en) * | 2017-05-24 | 2017-10-24 | 深圳市深网视界科技有限公司 | It is a kind of that people's method and system are looked for based on face and human bioequivalence |
CN107463608A (en) * | 2017-06-20 | 2017-12-12 | 上海汇尔通信息技术有限公司 | A kind of information-pushing method and system based on recognition of face |
CN107463874A (en) * | 2017-07-03 | 2017-12-12 | 华南师范大学 | The intelligent safeguard system of Emotion identification method and system and application this method |
CN107480452A (en) * | 2017-08-17 | 2017-12-15 | 深圳先进技术研究院 | Multi-user's mood monitoring method, device, equipment and storage medium |
CN107862240A (en) * | 2017-09-19 | 2018-03-30 | 深圳韵脉智能科技有限公司 | A kind of face tracking methods of multi-cam collaboration |
CN107729882A (en) * | 2017-11-19 | 2018-02-23 | 济源维恩科技开发有限公司 | Emotion identification decision method based on image recognition |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376689A (en) * | 2018-11-20 | 2019-02-22 | 图普科技(广州)有限公司 | Population analysis method and device |
CN109523441A (en) * | 2018-12-20 | 2019-03-26 | 合肥凌极西雅电子科技有限公司 | A kind of Teaching Management Method and system based on video identification |
CN109934150A (en) * | 2019-03-07 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | A kind of meeting participation recognition methods, device, server and storage medium |
CN109934150B (en) * | 2019-03-07 | 2022-04-05 | 百度在线网络技术(北京)有限公司 | Conference participation degree identification method, device, server and storage medium |
CN110147729A (en) * | 2019-04-16 | 2019-08-20 | 深圳壹账通智能科技有限公司 | User emotion recognition methods, device, computer equipment and storage medium |
CN111833861A (en) * | 2019-04-19 | 2020-10-27 | 微软技术许可有限责任公司 | Artificial intelligence based event evaluation report generation |
CN111862521A (en) * | 2019-04-28 | 2020-10-30 | 杭州海康威视数字技术股份有限公司 | Behavior thermodynamic diagram generation and alarm method and device, electronic equipment and storage medium |
CN110443152A (en) * | 2019-07-15 | 2019-11-12 | 广东校园卫士网络科技有限责任公司 | A kind of students ' behavior anticipation and management method based on scene early warning |
CN111079692A (en) * | 2019-12-27 | 2020-04-28 | 广东德融汇科技有限公司 | Campus behavior analysis method based on face recognition for K12 education stage |
CN111079692B (en) * | 2019-12-27 | 2023-07-07 | 广东德融汇科技有限公司 | Campus behavior analysis method based on face recognition and used in K12 education stage |
CN111401198A (en) * | 2020-03-10 | 2020-07-10 | 广东九联科技股份有限公司 | Audience emotion recognition method, device and system |
CN111401198B (en) * | 2020-03-10 | 2024-04-23 | 广东九联科技股份有限公司 | Audience emotion recognition method, device and system |
CN111738216A (en) * | 2020-07-23 | 2020-10-02 | 北京文香信息技术有限公司 | Behavior detection method, device and equipment |
CN113326723A (en) * | 2020-12-24 | 2021-08-31 | 杭州海康威视数字技术股份有限公司 | Emotion recognition method, device, equipment and system |
CN113326723B (en) * | 2020-12-24 | 2024-04-05 | 杭州海康威视数字技术股份有限公司 | Emotion recognition method, device, equipment and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764047A (en) | Group's emotion-directed behavior analysis method and device, electronic equipment, medium, product | |
CN111181939B (en) | Network intrusion detection method and device based on ensemble learning | |
CN109740446A (en) | Classroom students ' behavior analysis method and device | |
CN109522815A (en) | A kind of focus appraisal procedure, device and electronic equipment | |
CN109034036A (en) | A kind of video analysis method, Method of Teaching Quality Evaluation and system, computer readable storage medium | |
CN109271886A (en) | A kind of the human body behavior analysis method and system of examination of education monitor video | |
CN108833409A (en) | webshell detection method and device based on deep learning and semi-supervised learning | |
CN109460727A (en) | A kind of examination hall monitoring system and method based on Human bodys' response | |
CN110163154A (en) | Video monitoring system based on artificial intelligence | |
Ancheta et al. | FEDSecurity: implementation of computer vision thru face and eye detection | |
CN115168887B (en) | Mobile terminal stealth processing method and device based on differential authority privacy protection | |
CN112215700A (en) | Credit face audit method and device | |
CN112163572A (en) | Method and device for identifying object | |
CN109086657A (en) | A kind of ear detection method, system and model based on machine learning | |
CN114639152A (en) | Multi-modal voice interaction method, device, equipment and medium based on face recognition | |
CN113704505A (en) | Big data user demand analysis method based on intelligent education and server | |
CN113268870A (en) | Monte Carlo-based image recognition reliability evaluation method under outdoor environment condition | |
CN115132228B (en) | Language capability grading method and system | |
CN112347889B (en) | Substation operation behavior identification method and device | |
Yu et al. | A multi-scale feature selection method for steganalytic feature GFR | |
CN114842401A (en) | Method and system for capturing and classifying human body actions | |
CN113010868B (en) | On-line test authentication examination management system | |
CN112885356B (en) | Voice recognition method based on voiceprint | |
CN114511817A (en) | Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors | |
Grigoriev | Service for Monitoring and Control of Remote Testing by Video Information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181106 |