CN109034090A - A kind of emotion recognition system and method based on limb action - Google Patents

An emotion recognition system and method based on body movements

Info

Publication number
CN109034090A
CN109034090A (application CN201810893076.XA)
Authority
CN
China
Prior art keywords
limb action
camera
recognition system
emotion recognition
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810893076.XA
Other languages
Chinese (zh)
Inventor
马磊
万晶
沈晓燕
杨凡凡
鞠峰
陶春伶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University
Priority to CN201810893076.XA priority Critical patent/CN109034090A/en
Publication of CN109034090A publication Critical patent/CN109034090A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an emotion recognition system and method based on body movements, comprising a body-movement information acquisition system composed of a camera and a computer analysis device: the camera captures body-movement information and transmits it to the computer analysis device for analysis and processing. Starting from the premise of eye-movement experiments on mood classification, the invention no longer relies on manual feature extraction; instead, it extracts global body-movement feature information from a deep-learning perspective, uses newly developed software such as TensorFlow and Python to classify body movements, and maps them to the corresponding happy, sad, and neutral emotions.

Description

An emotion recognition system and method based on body movements
Technical field
The present invention relates to the technical field of emotion recognition, and specifically to an emotion recognition system and method based on body movements.
Background technique
Emotion recognition refers to the ability to infer emotional information from other people's facial expressions, tone of voice, and body movements. Recently, applications of eye-movement research to emotion recognition have provided useful ideas for extracting feature information in deep-learning-based emotion recognition. When facial expressions are not salient, emotions can be identified from body movements. Using two basic emotions (positive and negative, i.e., happy and sad) plus a neutral emotion as the data basis for emotion extraction, emotion recognition from body movements proceeds along two lines: picture classification and real-time detection. A large set of pictures of tennis players is collected, and convolutional neural networks are used in a TensorFlow, Python, Jupyter, and OpenCV environment to label, train, and classify the pictures. Body-movement emotion recognition automatically distinguishes emotions from a person's physiological or non-physiological signals, so as to better support interpersonal communication and friendly, natural human-computer interaction. Emotion is a state that combines a person's feelings, thoughts, and behavior. It has long been regarded as a source of human irrationality or bias, influencing how people think and act. Correctly recognizing emotions can therefore improve cognitive and social abilities.
Emotion recognition is a hot topic in current artificial intelligence and machine learning research, and most existing approaches analyze facial expressions, body behavior, or speech signals. In close-range interaction between people, attention focuses mostly on the other person's facial expression and voice to identify emotion; at a distance, however, when facial expressions or voice cannot be perceived, emotions must be distinguished from other cues such as body movements. With the development of eye-movement analysis technology, using eye tracking for emotional-picture cognition and mood assessment has great practical value and significance. Eye-tracking equipment can record a large amount of gaze-location information about observed objects, and this information can help uncover the observer's patterns of psychological cognition. When observers view and identify the emotions of different observed subjects, their gaze always lands first on the most critical feature locations. With the rapid development of computer vision technology, people increasingly want computers to perform emotion recognition automatically, and research on eye-movement experiments also offers many ideas and design approaches for computer-based emotion recognition.
With advances in technology, the organization of information has grown from plain text into rich multimedia forms including audio, images, and video, and the volume of information has exploded. Compared with audio, text is more direct and easier to store; compared with text, images provide more vivid, concrete information and are an important source for daily life, learning, and communication. When a person recognizes an image, the brain makes a quick judgment about its important information (i.e., feature information), which is reflected in eye movements: the eyes locate the positions containing feature information more precisely and relay them to the brain, which then analyzes the collected features. Image recognition requires both the ability to judge information in the moment and the ability to store and remember it; only then can re-recognition of images be achieved, so that after training has stored a large amount of image information, the stored memory can be used directly to recognize images at any time.
Image recognition has progressed alongside society, with research in computer graphics continually deepening: from simple bar-code recognition, to digital image processing and recognition, to recognition of complex objects, image recognition has kept moving toward more advanced applications and into daily life. In its development, research on character recognition began with letters, numbers, and symbols, moving from regular printed characters to more complex handwriting; Alipay's "scan for the five blessings" campaign during recent Spring Festivals is a striking example of how rapidly image recognition of characters has developed. Digital image processing began in the 1960s; compared with analog images, digital images have the great advantages of convenient storage, simple and fast transmission, and resistance to distortion. Advances in character recognition and digital image processing methods have propelled the research and development of image recognition. Recognition of complex objects belongs to high-level computing: it combines artificial intelligence, computer vision, computer graphics, and so on, and its results are continually applied to advanced robotics, object detection, and more. The main purpose of image recognition is to process and identify information such as images, pictures, scenery, and text, thereby enabling direct communication between computers and the external environment.
Image recognition has gradually blended into our lives, for example: supermarket bar-code scanning, QR-code recognition in WeChat, fingerprint recognition and fingerprint payment on mobile phones, the "Smile to Pay" face technology proposed by Ant Financial based on image recognition, and the face-recognition features of the iPhone. Image recognition is everywhere, yet its development is still imperfect, with a long way to go. When image recognition can be applied in combination across all these areas, a completely new era will begin. For emotion recognition in particular, if eye-movement analysis experiments can be combined with both facial expressions and body movements, the identification of emotions will become more accurate.
Summary of the invention
The purpose of the present invention is to provide an emotion recognition system and method based on body movements, to solve the problems raised in the background above.
To achieve the above object, the invention provides the following technical scheme: an emotion recognition system based on body movements, comprising a body-movement information acquisition system composed of a camera and a computer analysis device; the camera captures body-movement information and transmits it to the computer analysis device for analysis and processing.
Preferably, the camera's core is an STM32F765 (ARM Cortex-M7) microcontroller, and the camera uses an OV7725 image-sensor chip.
Preferably, a method of using the emotion recognition system based on body movements comprises the following steps:
A. labeling;
B. training the data set;
C. testing.
Preferably, the specific method of step A is as follows: collect a large number of human-body images and mark the person to be trained in each image. LabelImg is a tool for tagging images; it is used to carry out the labeling operation on the pictures. When selecting pictures, include complex pictures with varying backgrounds and lighting in the training set, so that images captured under poor conditions can still be recognized during image recognition.
Preferably, the specific operation of step B is: download different object-detection models to train the detector.
Preferably, the specific operation of step C is: the camera is used to extract the contour features of body behavior; install the camera and turn it on.
Preferably, the object-detection models include the RCNN, SPP-Net, Fast-RCNN, and Faster-RCNN models.
Compared with the prior art, the beneficial effects of the present invention are: starting from the premise of eye-movement experiments on mood classification, the invention no longer relies on manual feature extraction, but instead extracts global body-movement feature information from a deep-learning perspective; using newly developed software such as TensorFlow and Python, it classifies body movements and maps them to the corresponding happy, sad, and neutral emotions.
Detailed description of the invention
Fig. 1 is a functional block diagram of the system of the present invention;
Fig. 2 is a schematic diagram of the SSD-MobileNet-V1 model training results of the present invention;
Fig. 3 is a schematic diagram of the Faster-RCNN-Inception-V2 model training results of the present invention;
Fig. 4 is body-movement emotion evaluation diagram 1 of the present invention;
Fig. 5 is body-movement emotion evaluation diagram 2 of the present invention;
Fig. 6 is body-movement emotion evaluation diagram 3 of the present invention;
Fig. 7 is body-movement emotion evaluation diagram 4 of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Figs. 1-3, the present invention provides a technical solution: an emotion recognition system based on body movements, comprising a body-movement information acquisition system composed of a camera 1 and a computer analysis device 2; the camera 1 captures body-movement information 3 and transmits it to the computer analysis device 2 for analysis and processing. The camera's core is an STM32F765 (ARM Cortex-M7) microcontroller, and the camera uses an OV7725 image-sensor chip.
In the present invention, a method of using the emotion recognition system based on body movements comprises the following steps:
A. labeling;
B. training the data set;
C. testing.
The specific method of step A is as follows: collect a large number of human-body images and mark the person to be trained in each image. LabelImg is a tool for tagging images; it is used to carry out the labeling operation on the pictures, and when selecting pictures, complex pictures with varying backgrounds and lighting are included so that images captured under poor conditions can still be recognized. LabelImg saves an .xml file containing each image's label data, which includes information such as the position and width of the bounding box and the title of the image; each record stores the positional features of the box that selects the region of interest (ROI). The size and position of this box directly affect the final real-time emotion recognition. These .xml files are then used to generate TFRecords, one of the inputs for TensorFlow training.
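As an illustration of the label data described above: LabelImg writes Pascal VOC-style XML, which can be read with Python's standard library. This is a minimal sketch; the file name and the "happy" class label are hypothetical examples, not taken from the patent.

```python
import xml.etree.ElementTree as ET

# A minimal LabelImg-style (Pascal VOC) annotation. The file name and
# the "happy" label are illustrative placeholders.
ANNOTATION = """
<annotation>
  <filename>person_001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>happy</name>
    <bndbox>
      <xmin>120</xmin><ymin>80</ymin><xmax>340</xmax><ymax>460</ymax>
    </bndbox>
  </object>
</annotation>
"""

def parse_annotation(xml_text):
    """Return (filename, [(label, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    boxes = []
    for obj in root.findall("object"):
        b = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(b.findtext("xmin")), int(b.findtext("ymin")),
            int(b.findtext("xmax")), int(b.findtext("ymax")),
        ))
    return filename, boxes

filename, boxes = parse_annotation(ANNOTATION)
print(filename, boxes)
```

Records of this shape (image name, class label, box coordinates) are what a TFRecord-conversion script would serialize for training.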
The specific operation of step C is: the camera is used to extract the contour features of body behavior; install the camera and turn it on.
In addition, the specific operation of step B is: download different object-detection models to train the detector. The object-detection models include the RCNN, SPP-Net, Fast-RCNN, and Faster-RCNN models.
Among them, RCNN no longer trains the CNN on all objects in the whole image. Extracting, training, and classifying the feature information of every object in an entire image poses a great computational challenge for image recognition. By exploiting the CNN's good feature-extraction and classification performance while reducing the amount of image feature information needed for training, only the objects we need to recognize are used: instead of extracting information from the whole image, the information of our regions of interest is extracted directly, and those regions of interest are selected by us. The Region Proposal method thus transforms the target-detection problem.
The algorithm can be divided into four steps:
1) Candidate-region selection
Region Proposal is a region-extraction method. In the experiments on recognizing emotion from body movements, an image contains a background, different people, and possibly other objects, but the only position of interest is the person performing the body movement. Regions of interest are selected with sliding windows of different widths and heights, keeping the amount of extracted feature information as small as possible. This avoids the stutter and delay caused by excessive data volume; the proposals are then normalized and used as standard CNN inputs.
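The sliding-window selection described above can be sketched in plain Python. The window sizes and stride below are illustrative assumptions, not values from the patent.

```python
def sliding_windows(img_w, img_h, win_w, win_h, stride):
    """Yield (x, y, w, h) windows that fit fully inside the image."""
    wins = []
    for y in range(0, img_h - win_h + 1, stride):
        for x in range(0, img_w - win_w + 1, stride):
            wins.append((x, y, win_w, win_h))
    return wins

# Windows of two different sizes over a 640x480 image; each window is
# a candidate region to be normalized and fed to the CNN.
small = sliding_windows(640, 480, 128, 256, 64)
large = sliding_windows(640, 480, 256, 384, 64)
print(len(small), len(large))
```

Larger strides and fewer window sizes keep the number of candidates (and hence the data volume) small, which is the point made in the paragraph above.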
2) CNN feature extraction
Based on the input image feature information, the extracted features are more representative and carry less redundant information, so that the amount of computation is greatly reduced; convolution and pooling operations are then applied so that the data has a regression property, finally producing a fixed-dimensional output.
3) Classification and boundary regression
From the fixed-dimensional output of the CNN feature extraction, a classifier is first trained on the features: a support vector machine (SVM) classifies the output vectors. The boundary-regression method then yields a precise region of interest, i.e., the recognized region. Because boundary regression generates multiple sub-regions in practice, the body movements to be classified can be precisely located and merged, avoiding misaligned target regions or outright recognition failures.
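The merging of overlapping sub-regions mentioned above is commonly handled with non-maximum suppression (NMS), keeping only the highest-scoring box among heavily overlapping ones. A minimal sketch; the boxes and scores are made up for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping boxes that overlap a kept one."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one person plus one separate detection.
boxes = [(10, 10, 110, 210), (20, 15, 120, 215), (300, 50, 380, 180)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))
```

The duplicate box (index 1) is suppressed because it overlaps the higher-scoring box 0, while the distant box 2 is kept.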
The repeated convolutions in CNN feature extraction involve a very large amount of computation and are therefore very time-consuming. SPP-Net rejects useless background information when extracting global features (whose information content is very large), which greatly reduces computation, and it avoids performing a separate region crop before classification.
Improvements of SPP-Net over RCNN:
1) Considering that normalization brings no benefit to the experiment and also causes feature-information loss and storage problems, the normalization step is removed, solving the information-loss and storage issues caused by stretching or truncating the image when the box selects the target region;
2) The last pooling layer before the fully connected layer is replaced with spatial pyramid pooling, which effectively solves the problem of repeated computation in the convolutional layers.
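Spatial pyramid pooling turns a feature map of any size into a fixed-length vector by max-pooling over a fixed grid of bins. A pure-Python sketch of one pyramid level (a 2x2 grid of bins, chosen for illustration):

```python
def spp_level(fmap, bins):
    """Max-pool a 2-D feature map (list of rows) into bins x bins values,
    giving a fixed-length output regardless of the input size."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for by in range(bins):
        y0, y1 = by * h // bins, (by + 1) * h // bins
        for bx in range(bins):
            x0, x1 = bx * w // bins, (bx + 1) * w // bins
            out.append(max(fmap[y][x] for y in range(y0, y1)
                                      for x in range(x0, x1)))
    return out

# Feature maps of different sizes both pool to exactly 4 values,
# which is what lets SPP-Net feed a fully connected layer without
# resizing (normalizing) the input region.
a = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
b = [[3, 1],
     [2, 4]]
print(spp_level(a, 2), spp_level(b, 2))
```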
Fast-RCNN accelerates RCNN: 1) it simplifies the ROI pooling layer, keeping the amount of extracted feature information as small as possible while still retaining all required information; 2) it uses a multi-task loss layer:
a) SoftmaxLoss replaces the traditional support vector machine (SVM) classifier;
b) SmoothL1Loss replaces bounding-box regression.
3) The fully connected layers are accelerated by SVD; 4) all layers can be updated simultaneously during model training. Especially in experiments on recognizing emotions from different body movements, many movements differ only slightly yet must still be distinguished; updating the model's convolutional and pooling layers makes the trained images more reliable and greatly improves training speed.
Regarding Selective Search, the most common method for extracting candidate boxes: previous models extracted feature information from the original image, i.e., candidate-box extraction was executed on the original image, whereas Faster-RCNN executes it on the feature map. A low-resolution feature map means less computation. A color image represents its information in RGB mode and needs 3 dimensions, but with a grayscale representation the amount of stored information is greatly reduced, and the reduction in information naturally also means a reduction in computation.
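The RGB-versus-grayscale point can be illustrated with the common luminance formula; the weights below are the standard ITU-R BT.601 coefficients, not values from the patent.

```python
def to_gray(rgb_pixels):
    """Collapse (R, G, B) triples to single luminance values,
    shrinking the stored information by a factor of three."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b)
            for (r, g, b) in rgb_pixels]

pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
gray = to_gray(pixels)
print(gray)
# Three stored values per pixel become one: a 3x reduction in data.
print(len(pixels) * 3, "values ->", len(gray), "values")
```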
Target classification extracts the target of the region of interest: only the information of the background area needs to be removed, retaining only the information of the region of interest (ROI). Box regression then determines a more accurate target position, solving the problem of candidate boxes generated during target classification being too large, too small, or misaligned.
Candidate-box selection criteria:
1) Boundary information is usually useless background in target classification, so anchors that cross the image boundary are discarded;
2) Anchors whose overlap with a sample exceeds 0.7 are labeled as foreground, and those whose overlap is below 0.3 are labeled as background.
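The two criteria above can be sketched directly, using intersection-over-union as the overlap measure. The ground-truth box and the test anchors are made-up examples.

```python
def iou(anchor, gt):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(anchor[0], gt[0]), max(anchor[1], gt[1])
    ix2, iy2 = min(anchor[2], gt[2]), min(anchor[3], gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(anchor) + area(gt) - inter)

def label_anchor(anchor, gt, img_w, img_h):
    """Criterion 1: discard boundary-crossing anchors.
    Criterion 2: label by overlap (>0.7 foreground, <0.3 background)."""
    x1, y1, x2, y2 = anchor
    if x1 < 0 or y1 < 0 or x2 > img_w or y2 > img_h:
        return "discard"          # crosses the image boundary
    o = iou(anchor, gt)
    if o > 0.7:
        return "foreground"
    if o < 0.3:
        return "background"
    return "ignore"               # in between: not used for training

gt = (100, 100, 200, 200)
print(label_anchor((-10, 50, 90, 150), gt, 640, 480))
print(label_anchor((105, 105, 205, 205), gt, 640, 480))
print(label_anchor((300, 300, 400, 400), gt, 640, 480))
```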
From the perspective of model training, the labeled body-movement image information is trained first. Over the full training data, the computing capability needed to recognize and classify emotions is essentially in place; by alternating training with shared features, image training continues while near-real-time performance is reached at test time.
From RCNN to Fast-RCNN and then to Faster-RCNN, the four basic steps of target detection (candidate-region generation, feature extraction, classification, and position refinement) are finally unified within a single deep network framework. No computation is repeated, everything runs entirely on the GPU, and the running speed is greatly increased.
As the evolution diagram of the RCNN networks shows, the structure becomes simpler and simpler: SPP pooling is used in feature extraction, and the Crop/Warp window is reused afterwards, reducing computation; SVM classification and the BBox step are folded into the feature-extraction stage, replaced by ROI (region-of-interest) pooling; after further simplification, a single step completes candidate boxes, feature extraction, Softmax, and box regression.
Faster-RCNN achieves end-to-end detection without repeated computation, runs entirely on the GPU, greatly increases the running speed, and is nearly optimal in effect. The SSD-MobileNet-V1 model was tried, but its detection performance was unsatisfactory, as shown in Fig. 2. Retraining the detector with the Faster-RCNN-Inception-V2 model gives better detection but is significantly slower; the results are shown in Fig. 3.
Starting from the premise of eye-movement experiments on mood classification, this work no longer relies on manual feature extraction but instead extracts global body-movement feature information from a deep-learning perspective; using newly developed software such as TensorFlow and Python, it classifies body movements and maps them to the corresponding happy, sad, and neutral emotions.
The present invention mainly experiments with two aspects, picture recognition and real-time image recognition, starting from three models (the convolutional neural network CNN, Inception-v3, and Faster-RCNN) and testing with the following methods:
Marking is the process of curating the training data.
Training uses a model to quantify the pictures on the basis of the marked data (images), training on the images and extracting feature information.
Classification uses the model to categorize new images.
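The final mapping of classifier outputs to the three emotions can be sketched as a softmax over three class scores. The logit values below are made up for illustration; only the three class names come from the text.

```python
import math

EMOTIONS = ["happy", "sad", "neutral"]

def softmax(logits):
    """Convert raw scores into the class proportions reported in results."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits):
    """Return the winning emotion and the full proportion vector."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return EMOTIONS[best], probs

label, probs = classify([2.0, 0.5, 0.3])
print(label, [round(p, 3) for p in probs])
```

Note that even when the top class is correct, the other classes can still carry sizeable proportions, which matches the accuracy issue discussed in the experimental results.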
Judging from the experimental results, basic emotion recognition from body movements can be achieved. From the picture-recognition results, the recognition is correct; but because the results report class proportions, while the class with the highest proportion is correct, the proportions of the other classes are also very high; that is, recognition accuracy is not high. From the real-time recognition results, the CPU computes much more slowly than the GPU, so running the program on the CPU easily produces stutter and delay, although recognition accuracy is high. Because the happy category has the most training images, sad and neutral emotions are often also recognized as happy.
As shown in Figs. 4-7, pictures with facial expressions removed were used in testing, to prevent facial expressions from influencing the results. For the three-way classification into happy, sad, and neutral emotions, the main emotion class is not mistaken, but the three class proportions still contain some errors, and the final judgment is not precise enough. This is related partly to the chosen picture training set and partly to the rather small number of pictures, which makes the results less accurate. Sad and happy image emotions are opposites and are fairly easy to distinguish, but when distinguishing neutral from sad images the error is largest: the discrimination rate can be extremely low, and misrecognition can even occur. The sad emotion is barely recognized, with an extremely low discrimination rate only 0.003 away from that of the neutral emotion; sad and neutral are inherently very similar, and some movements are simply not recognized, so a fairly classic sad picture was trained again, and its specific recognition accuracy was then very high. Therefore, research on recognizing emotions from body movements must also concentrate on distinguishing and training the neutral and sad emotions, because everyone expresses emotion differently and body movements vary, so deviations will appear.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations may be made to these embodiments without departing from the principles and spirit of the present invention, the scope of which is defined by the appended claims.

Claims (7)

1. An emotion recognition system based on body movements, comprising a body-movement information acquisition system, characterized in that: the body-movement information acquisition system is composed of a camera (1) and a computer analysis device (2); the camera (1) captures body-movement information (3) and transmits it to the computer analysis device (2) for analysis and processing.
2. The emotion recognition system based on body movements according to claim 1, characterized in that: the core of the camera (1) is an STM32F765 (ARM Cortex-M7) microcontroller, and the camera uses an OV7725 image-sensor chip.
3. A method of implementing the emotion recognition system based on body movements according to claim 1, characterized by comprising the following steps:
A. labeling;
B. training the data set;
C. testing.
4. The method of the emotion recognition system based on body movements according to claim 3, characterized in that the specific method of step A is as follows: collect a large number of human-body images and mark the person to be trained in each image; LabelImg is a tool for tagging images and is used to carry out the labeling operation on the pictures; when selecting pictures, include complex pictures with varying backgrounds and lighting in the training set, so that images captured under poor conditions can still be recognized during image recognition.
5. The method of the emotion recognition system based on body movements according to claim 3, characterized in that the specific operation of step B is: download different object-detection models to train the detector.
6. The method of the emotion recognition system based on body movements according to claim 3, characterized in that the specific operation of step C is: the camera is used to extract the contour features of body behavior; install the camera and turn it on.
7. The method of the emotion recognition system based on body movements according to claim 5, characterized in that the object-detection models include the RCNN, SPP-Net, Fast-RCNN, and Faster-RCNN models.
CN201810893076.XA 2018-08-07 2018-08-07 A kind of emotion recognition system and method based on limb action Pending CN109034090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810893076.XA CN109034090A (en) 2018-08-07 2018-08-07 A kind of emotion recognition system and method based on limb action

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810893076.XA CN109034090A (en) 2018-08-07 2018-08-07 A kind of emotion recognition system and method based on limb action

Publications (1)

Publication Number Publication Date
CN109034090A true CN109034090A (en) 2018-12-18

Family

ID=64649511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810893076.XA Pending CN109034090A (en) 2018-08-07 2018-08-07 A kind of emotion recognition system and method based on limb action

Country Status (1)

Country Link
CN (1) CN109034090A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886159A (en) * 2019-01-30 2019-06-14 浙江工商大学 It is a kind of it is non-limiting under the conditions of method for detecting human face
CN110135242A (en) * 2019-03-28 2019-08-16 福州大学 Emotion identification device and method based on low resolution infrared thermal imaging depth perception
CN110348350A (en) * 2019-07-01 2019-10-18 电子科技大学 A kind of driver status detection method based on facial expression
CN112931309A (en) * 2021-02-02 2021-06-11 中国水利水电科学研究院 Method and system for monitoring fish proliferation and releasing direction
CN113657154A (en) * 2021-07-08 2021-11-16 浙江大华技术股份有限公司 Living body detection method, living body detection device, electronic device, and storage medium
CN113853161A (en) * 2019-05-16 2021-12-28 托尼有限责任公司 System and method for identifying and measuring emotional states

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168538A (en) * 2017-06-12 2017-09-15 华侨大学 A kind of 3D campuses guide method and system that emotion computing is carried out based on limb action
CN108363978A (en) * 2018-02-12 2018-08-03 华南理工大学 Using the emotion perception method based on body language of deep learning and UKF

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168538A (en) * 2017-06-12 2017-09-15 Huaqiao University A 3D campus guide method and system performing affective computing based on limb actions
CN108363978A (en) * 2018-02-12 2018-08-03 South China University of Technology Body-language-based emotion perception method using deep learning and UKF

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIANG Jia: "Face Detection and Expression Recognition Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series *
GUO Shuaijie: "Implementation of a Multimodal Emotion Recognition Algorithm Based on Speech, Facial Expression and Posture", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886159A (en) * 2019-01-30 2019-06-14 Zhejiang Gongshang University A face detection method under unconstrained conditions
CN110135242A (en) * 2019-03-28 2019-08-16 Fuzhou University Emotion recognition device and method based on low-resolution infrared thermal imaging depth perception
CN110135242B (en) * 2019-03-28 2023-04-18 Fuzhou University Emotion recognition device and method based on low-resolution infrared thermal imaging depth perception
CN113853161A (en) * 2019-05-16 2021-12-28 Tawny GmbH System and method for identifying and measuring emotional states
CN110348350A (en) * 2019-07-01 2019-10-18 University of Electronic Science and Technology of China A driver state detection method based on facial expressions
CN110348350B (en) * 2019-07-01 2022-03-25 University of Electronic Science and Technology of China Driver state detection method based on facial expressions
CN112931309A (en) * 2021-02-02 2021-06-11 China Institute of Water Resources and Hydropower Research Method and system for monitoring fish proliferation and releasing direction
CN112931309B (en) * 2021-02-02 2021-11-09 China Institute of Water Resources and Hydropower Research Method and system for monitoring fish proliferation and releasing direction
CN113657154A (en) * 2021-07-08 2021-11-16 Zhejiang Dahua Technology Co., Ltd. Living body detection method and device, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN109034090A (en) An emotion recognition system and method based on limb action
CN110427867B (en) Facial expression recognition method and system based on residual attention mechanism
CN106951867B (en) Face recognition method, device, system and equipment based on convolutional neural networks
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN111523462B (en) Video sequence expression recognition system and method based on self-attention enhanced CNN
CN110532912B (en) Sign language translation implementation method and device
CN111488773B (en) Action recognition method, device, equipment and storage medium
CN109948447B (en) Character network relation discovery and evolution presentation method based on video image recognition
CN109919031A (en) A human behavior recognition method based on deep neural networks
CN110223292A (en) Image evaluation method, device and computer readable storage medium
CN109684959A (en) Video gesture recognition method and device based on face detection and deep learning
CN111126280B (en) Aphasia patient auxiliary rehabilitation training system and method based on gesture recognition fusion
CN111666845B (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
CN109278051A (en) Interaction method and system based on an intelligent robot
Rajan et al. American sign language alphabets recognition using hand crafted and deep learning features
CN109325408A (en) A gesture judgment method and storage medium
CN110472582A (en) 3D face recognition method, device and terminal based on eye recognition
CN109101881B (en) Real-time blink detection method based on multi-scale time-series images
CN112069993A (en) Dense face detection method and system based on facial features mask constraint and storage medium
CN115187910A (en) Video classification model training method and device, electronic equipment and storage medium
Gonzalez-Soler et al. Semi-synthetic data generation for tattoo segmentation
CN116721449A (en) Training method of video recognition model, video recognition method, device and equipment
Liu et al. A3GAN: An attribute-aware attentive generative adversarial network for face aging
US11783587B2 (en) Deep-learning-based tattoo match system
He et al. Dual multi-task network with bridge-temporal-attention for student emotion recognition via classroom video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181218