CN109711343A - Behavior structuring method based on expression recognition, posture recognition, and gaze tracking - Google Patents

Behavior structuring method based on expression recognition, posture recognition, and gaze tracking

Info

Publication number
CN109711343A
CN109711343A
Authority
CN
China
Prior art keywords
expression
feature
eyes
feature point
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811613715.9A
Other languages
Chinese (zh)
Inventor
赵博林
刘川贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Situ Scene Data Technology Service Co Ltd
Original Assignee
Beijing Situ Scene Data Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Situ Scene Data Technology Service Co Ltd
Priority to CN201811613715.9A
Publication of CN109711343A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a behavior structuring method based on expression recognition, posture recognition, and gaze tracking, belonging to the technical field of behavior recognition. The method comprises neural network prediction model training and target object behavior prediction. A neural network prediction model for predicting the behavior of a target object is trained from a large number of training samples using a specific algorithm. When the behavior of a target object needs to be predicted, image information of the target object is acquired; expression features, posture features, and gaze features are then extracted from the image information and input into the neural network prediction model, which outputs the predicted specific behavior of the target object. Because gaze-tracking features, expression features, and posture features largely reflect what the target object is seeing, thinking, and doing, that is, the behavioral intent of the target object, the next action of the target object can be predicted comprehensively and accurately.

Description

Behavior structuring method based on expression recognition, posture recognition, and gaze tracking
Technical field
The invention belongs to the technical field of behavior recognition, and in particular relates to a behavior structuring method based on expression recognition, posture recognition, and gaze tracking.
Background art
Behavior analysis of people has been widely applied in the field of computer vision. As an important branch of behavior analysis, predicting human behavior has very important practical applications, such as video surveillance, abnormal behavior detection, and robot interaction.
Currently, the commonly used approach to analyzing and predicting human behavior is to recognize and analyze the current posture of the human body and then predict the body's next action. However, existing human behavior analysis and prediction methods suffer from the following technical defect:
Human behavior is complex and varied. A person's next action is not determined by the posture (body shape) of the previous step but by the brain, and when deciding on an action the brain often acts according to what the person is seeing, thinking, and currently doing. Prior-art methods that recognize and analyze only the current posture of the human body therefore exhibit large errors, resulting in low prediction accuracy.
Summary of the invention
To solve the above problems in the prior art, the present invention provides a behavior structuring method based on expression recognition, posture recognition, and gaze tracking.
The technical scheme of the invention is as follows:
A behavior structuring method based on expression recognition, posture recognition, and gaze tracking, the method comprising the following steps:
Neural network prediction model training:
acquiring a training sample set, the training sample set comprising training samples for training the neural network prediction model, each training sample comprising expression features, posture features, and gaze features relevant to a specific behavior;
extracting from the training samples the expression features, posture features, and gaze features relevant to the specific behavior;
training on the expression features, posture features, and gaze features relevant to the specific behavior according to a specific algorithm to obtain the neural network prediction model, the neural network prediction model being used to establish a mapping between expression features, posture features, and gaze features relevant to a specific behavior and specific-behavior prediction information;
Target object behavior prediction:
acquiring image information of a target object;
identifying a face region and a human body region in the image information to obtain a face image and a human body image;
performing feature extraction on the face image and the human body image respectively to obtain expression features, posture features, and gaze features;
inputting the expression features, the posture features, and the gaze features into the neural network prediction model, and outputting the predicted specific behavior of the target object.
Preferably, the face region and the human body region in the image information are identified as follows:
identifying the face region and the human body region in the image information based on a cascaded convolutional neural network.
Preferably, the method of obtaining the expression features comprises:
performing feature point extraction on the face image to obtain eye feature points, nose feature points, and mouth-corner feature points;
analyzing the eye feature points, the nose feature points, and the mouth-corner feature points to obtain the expression features.
Further preferably, after the eye feature points, nose feature points, and mouth-corner feature points are obtained, the method further comprises:
aligning the face image to a designated position by an affine transformation based on the eye feature points, the nose feature points, and the mouth-corner feature points.
Preferably, the method of obtaining the gaze features comprises:
determining the positions of the eye feature points based on the nose feature points and the mouth-corner feature points;
tracking the pupil feature points within the eye feature points using an image processing method;
obtaining the centers of the pupil feature points based on a preset geometric algorithm;
obtaining the gaze features based on the positional and angular relationship between the centers of the pupil feature points and the eye feature points.
Preferably, the method of obtaining the posture features comprises:
performing feature point extraction on the human body image to obtain a plurality of skeleton key points of the human body;
obtaining the posture features based on the positional relationships between the skeleton key points.
Further preferably, there are no fewer than 18 skeleton key points.
Compared with the prior art, the technical solution provided by the invention has the following beneficial effects or advantages:
The behavior structuring method based on expression recognition, posture recognition, and gaze tracking provided by the invention trains a neural network prediction model for predicting the behavior of a target object from a large number of training samples using a specific algorithm. When the behavior of a target object needs to be predicted, image information of the target object is acquired; expression features, posture features, and gaze features are extracted from the image information and input into the neural network prediction model to obtain the predicted specific behavior of the target object. Because gaze-tracking features, expression features, and posture features largely reflect what the target object is seeing, thinking, and doing, that is, the behavioral intent of the target object, the invention can comprehensively and accurately predict the next action of the target object.
With reference to the following description and drawings, particular embodiments of the invention are disclosed in detail, indicating the manner in which the principles of the invention may be employed. It should be understood that the embodiments of the invention are not thereby limited in scope; within the spirit and scope of the appended claims, the embodiments of the invention include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, refers to the presence of a feature, integer, step, or component, but does not exclude the presence or addition of one or more other features, integers, steps, or components.
Brief description of the drawings
To explain the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a method flowchart of the behavior structuring method based on expression recognition, posture recognition, and gaze tracking provided by an embodiment of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. The components of the embodiments of the invention, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations.
The following detailed description of the embodiments of the invention provided in the drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
It should be noted that, in the absence of conflict, the embodiments of the invention and the features in the embodiments may be combined with one another.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
In the description of the embodiments of the invention, it should be noted that terms indicating orientation or positional relationships are based on the orientations or positional relationships shown in the drawings, the orientations or positional relationships in which the product of the invention is customarily placed in use, or the orientations or positional relationships customarily understood by those skilled in the art. They are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the invention. In addition, the terms "first" and "second" are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
In the description of the embodiments of the invention, it should further be noted that, unless otherwise expressly specified and limited, the terms "arranged" and "connected" are to be understood broadly: for example, a connection may be fixed, detachable, or integral, and may be direct or indirect through an intermediary. For those of ordinary skill in the art, the specific meanings of the above terms in the invention can be understood according to the specific circumstances.
As shown in Fig. 1, an embodiment of the invention provides a behavior structuring method based on expression recognition, posture recognition, and gaze tracking, the method comprising the following steps:
Step S1: neural network prediction model training:
Step S1-1: acquiring a training sample set, the training sample set comprising training samples for training the neural network prediction model, each training sample comprising expression features, posture features, and gaze features relevant to a specific behavior;
Step S1-2: extracting from the training samples the expression features, posture features, and gaze features relevant to the specific behavior;
Step S1-3: training on the expression features, posture features, and gaze features relevant to the specific behavior according to a specific algorithm to obtain the neural network prediction model, the neural network prediction model being used to establish a mapping between expression features, posture features, and gaze features relevant to a specific behavior and specific-behavior prediction information;
Step S2: target object behavior prediction:
Step S2-1: acquiring image information of the target object;
Step S2-2: identifying the face region and the human body region in the image information to obtain a face image and a human body image;
Step S2-3: performing feature extraction on the face image and the human body image respectively to obtain expression features, posture features, and gaze features;
Step S2-4: inputting the expression features, the posture features, and the gaze features into the neural network prediction model, and outputting the predicted specific behavior of the target object.
A person's next action is not determined by the preceding posture but by the brain, and when deciding on an action the brain often acts according to what the person is currently seeing and thinking. What a person is seeing is embodied in the line of sight, and what a person is thinking is embodied in the expression. By combining gaze-tracking features, expression features, and posture features, the behavior structuring method provided by the embodiment of the invention can therefore learn more fully what the person in the image is seeing and thinking at the current moment, accurately predict that person's next action, and greatly improve prediction accuracy. A minimal sketch of such a combined prediction model follows.
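The patent does not specify the network architecture, the feature dimensions, or the training algorithm; the sketch below is one plausible reading of steps S1-3 and S2-4, with an assumed small fully connected network, assumed feature sizes (64-dim expression, 36-dim posture, 4-dim gaze), an assumed behavior count of 10, and random stand-in data.

import torch
import torch.nn as nn

class BehaviorPredictor(nn.Module):
    """Maps concatenated expression, posture, and gaze features to behaviors."""
    def __init__(self, expr_dim=64, pose_dim=36, gaze_dim=4, num_behaviors=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(expr_dim + pose_dim + gaze_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_behaviors),  # one logit per specific behavior
        )

    def forward(self, expr, pose, gaze):
        return self.net(torch.cat([expr, pose, gaze], dim=-1))

model = BehaviorPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step (step S1-3) on random stand-in features.
expr, pose, gaze = torch.randn(8, 64), torch.randn(8, 36), torch.randn(8, 4)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(expr, pose, gaze), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Prediction (step S2-4): the argmax over the logits is the predicted behavior.
predicted = model(expr, pose, gaze).argmax(dim=-1)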
In a specific implementation, in order to identify the face region and the human body region in the image information more quickly and accurately, step S2-2 of the embodiment preferably identifies the face region and the human body region in the image information based on a cascaded convolutional neural network. Cascaded convolutional neural networks have a powerful learning capability and can learn discriminators directly from images, so the face region and the human body region can be distinguished from a highly distracting background more quickly and accurately, as in the sketch below.
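As one concrete possibility (the patent names only "cascaded convolutional neural networks", not a particular implementation), the face-region half of this step could use the three-stage MTCNN cascade from the facenet-pytorch package; the body-region detector would be a separate model and is not shown.

import cv2
from facenet_pytorch import MTCNN

# MTCNN is a cascade of three CNNs (P-Net, R-Net, O-Net) that proposes and
# refines face boxes; keep_all=True returns every detected face.
mtcnn = MTCNN(keep_all=True)

frame = cv2.cvtColor(cv2.imread("target.jpg"), cv2.COLOR_BGR2RGB)
boxes, probs = mtcnn.detect(frame)  # boxes: (N, 4) arrays of x1, y1, x2, y2

face_images = []
if boxes is not None:
    for (x1, y1, x2, y2) in boxes:
        face_images.append(frame[int(y1):int(y2), int(x1):int(x2)])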
In a specific implementation, a person's expression is formed jointly by multiple facial feature points and cannot be determined accurately from a single feature point. To obtain accurate expression features, the embodiment therefore preferably uses the following method:
performing feature point extraction on the face image to obtain eye feature points, nose feature points, and mouth-corner feature points;
analyzing the eye feature points, the nose feature points, and the mouth-corner feature points to obtain the expression features.
In the embodiment, the analysis of the eye feature points, the nose feature points, and the mouth-corner feature points may likewise use a deep learning neural network model: a deep learning neural network model for expression recognition is trained on a large number of samples and then used for expression analysis and recognition. Other approaches may of course be used; no limitation is imposed here. A sketch of the preceding feature-point extraction step follows.
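A minimal sketch of the feature-point extraction, assuming dlib's 68-point shape predictor as the landmark detector (an assumption; the patent does not name one, and the pretrained shape_predictor_68_face_landmarks.dat file must be downloaded separately):

import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_feature_points(gray_img):
    """Return the eye, nose, and mouth-corner feature points of one face."""
    rects = detector(gray_img, 1)
    if not rects:
        return None
    pts = np.array([(p.x, p.y) for p in predictor(gray_img, rects[0]).parts()])
    return {
        "eyes": pts[36:48],             # contours of both eyes
        "nose": pts[27:36],             # nose bridge and tip
        "mouth_corners": pts[[48, 54]], # left and right mouth corners
    }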
In a specific implementation, because the acquired image information varies, the sizes of the face images may differ, which can adversely affect subsequent processing. Preferably, therefore, after obtaining the eye feature points, nose feature points, and mouth-corner feature points, the method of the embodiment further comprises:
aligning the face image to a designated position by an affine transformation based on the eye feature points, the nose feature points, and the mouth-corner feature points. After alignment to the designated position, every face image has the same size, which eliminates the influence of inconsistent face image sizes and facilitates subsequent analysis and processing, as in the sketch below.
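A sketch of that alignment with OpenCV: three anchors (the two eye centers and the midpoint of the mouth corners) are mapped to fixed canonical coordinates, so every aligned face crop comes out the same size. The canonical positions and the 112-pixel output size are assumptions, not values from the patent.

import cv2
import numpy as np

def align_face(img, eye_pts, mouth_corner_pts, size=112):
    """Warp a face so the eyes and mouth land at fixed canonical positions."""
    src = np.float32([
        eye_pts[:6].mean(axis=0),       # left-eye center
        eye_pts[6:].mean(axis=0),       # right-eye center
        mouth_corner_pts.mean(axis=0),  # midpoint of the mouth corners
    ])
    dst = np.float32([[0.3 * size, 0.35 * size],
                      [0.7 * size, 0.35 * size],
                      [0.5 * size, 0.80 * size]])
    M = cv2.getAffineTransform(src, dst)  # exact affine from 3 point pairs
    return cv2.warpAffine(img, M, (size, size))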
In a specific implementation, tracking what the target object is currently seeing through the gaze features provides more complete data support for predicting the target object's next action, so obtaining accurate gaze features is very important. The method of obtaining the gaze features in the embodiment is as follows:
determining the positions of the eye feature points based on the nose feature points and the mouth-corner feature points;
tracking the pupil feature points within the eye feature points using an image processing method;
obtaining the centers of the pupil feature points based on a preset geometric algorithm;
obtaining the gaze features based on the positional and angular relationship between the centers of the pupil feature points and the eye feature points.
The embodiment first determines the positions of the eye feature points from the nose feature points and the mouth-corner feature points, and then determines the line of sight from the positional and angular relationship between the centers of the pupil feature points and the eye feature points. This accurately captures what the target object is currently seeing and yields accurate gaze features, as in the sketch below.
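One way to realize this, sketched under assumptions (the patent fixes neither the thresholding value nor the geometric algorithm): isolate the pupil inside the eye region as the darkest blob, take its center from image moments, and use the offset and angle of that center relative to the eye-region center as the gaze feature.

import cv2
import numpy as np

def gaze_feature(gray_img, eye_pts):
    """Pupil-center offset and angle relative to the eye region."""
    x, y, w, h = cv2.boundingRect(eye_pts.astype(np.int32))
    eye = gray_img[y:y + h, x:x + w]
    _, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)  # dark pupil
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                       # pupil not visible (blink, occlusion)
    px, py = m["m10"] / m["m00"], m["m01"] / m["m00"]  # pupil centroid
    dx, dy = px - w / 2.0, py - h / 2.0   # offset from the eye-region center
    angle = np.arctan2(dy, dx)
    return np.array([dx / w, dy / h, angle, np.hypot(dx / w, dy / h)])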
In a specific implementation, the posture of the human body is usually embodied in the trunk and limbs, which in turn are supported by the skeleton. The method of obtaining the posture features in the embodiment therefore comprises:
performing feature point extraction on the human body image to obtain a plurality of skeleton key points of the human body;
obtaining the posture features based on the positional relationships between the skeleton key points.
The human body has many skeleton key points, and many of them can be identified; the more key points are identified, the more accurate the resulting posture features. Preferably, therefore, no fewer than 18 skeleton key points are extracted in the embodiment, as in the sketch below.
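For illustration, assuming the 18-keypoint COCO layout used by OpenPose (a plausible source of skeleton key points that the patent does not name), the key points can be turned into a position- and scale-invariant posture feature by normalizing against the neck joint and the torso length:

import numpy as np

NECK, R_HIP, L_HIP = 1, 8, 11  # indices in the 18-point COCO layout

def posture_feature(keypoints):
    """keypoints: (18, 2) array of (x, y) skeleton key points."""
    neck = keypoints[NECK]
    hip_mid = (keypoints[R_HIP] + keypoints[L_HIP]) / 2.0
    torso = np.linalg.norm(hip_mid - neck) + 1e-6  # scale reference
    # Center on the neck and divide by torso length: the result no longer
    # depends on where the person stands or how large they appear in frame.
    return ((keypoints - neck) / torso).flatten()  # 36-dim posture vector

feat = posture_feature(np.random.rand(18, 2) * 480)  # stand-in keypoints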
The behavior structuring method based on expression recognition, posture recognition, and gaze tracking provided by the embodiment of the invention trains a neural network prediction model for predicting the behavior of a target object from a large number of training samples using a specific algorithm. When the behavior of a target object needs to be predicted, image information of the target object is acquired; expression features, posture features, and gaze features are extracted from the image information and input into the neural network prediction model to obtain the predicted specific behavior of the target object. Because gaze-tracking features, expression features, and posture features largely reflect what the target object is seeing, thinking, and doing, that is, the behavioral intent of the target object, the invention can comprehensively and accurately predict the next action of the target object.
Those skilled in the art will appreciate that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to include them as well.

Claims (7)

1. A behavior structuring method based on expression recognition, posture recognition, and gaze tracking, characterized in that the method comprises the following steps:
Neural network prediction model training:
acquiring a training sample set, the training sample set comprising training samples for training the neural network prediction model, each training sample comprising expression features, posture features, and gaze features relevant to a specific behavior;
extracting from the training samples the expression features, posture features, and gaze features relevant to the specific behavior;
training on the expression features, posture features, and gaze features relevant to the specific behavior according to a specific algorithm to obtain the neural network prediction model, the neural network prediction model being used to establish a mapping between expression features, posture features, and gaze features relevant to a specific behavior and specific-behavior prediction information;
Target object behavior prediction:
acquiring image information of a target object;
identifying a face region and a human body region in the image information to obtain a face image and a human body image;
performing feature extraction on the face image and the human body image respectively to obtain expression features, posture features, and gaze features;
inputting the expression features, the posture features, and the gaze features into the neural network prediction model, and outputting the predicted specific behavior of the target object.
2. The behavior structuring method based on expression recognition, posture recognition, and gaze tracking according to claim 1, characterized in that the face region and the human body region in the image information are identified by:
identifying the face region and the human body region in the image information based on a cascaded convolutional neural network.
3. The behavior structuring method based on expression recognition, posture recognition, and gaze tracking according to claim 1, characterized in that the method of obtaining the expression features comprises:
performing feature point extraction on the face image to obtain eye feature points, nose feature points, and mouth-corner feature points;
analyzing the eye feature points, the nose feature points, and the mouth-corner feature points to obtain the expression features.
4. The behavior structuring method based on expression recognition, posture recognition, and gaze tracking according to claim 3, characterized in that, after the eye feature points, nose feature points, and mouth-corner feature points are obtained, the method further comprises:
aligning the face image to a designated position by an affine transformation based on the eye feature points, the nose feature points, and the mouth-corner feature points.
5. The behavior structuring method based on expression recognition, posture recognition, and gaze tracking according to claim 3, characterized in that the method of obtaining the gaze features comprises:
determining the positions of the eye feature points based on the nose feature points and the mouth-corner feature points;
tracking the pupil feature points within the eye feature points using an image processing method;
obtaining the centers of the pupil feature points based on a preset geometric algorithm;
obtaining the gaze features based on the positional and angular relationship between the centers of the pupil feature points and the eye feature points.
6. The behavior structuring method based on expression recognition, posture recognition, and gaze tracking according to claim 1, characterized in that the method of obtaining the posture features comprises:
performing feature point extraction on the human body image to obtain a plurality of skeleton key points of the human body;
obtaining the posture features based on the positional relationships between the skeleton key points.
7. The behavior structuring method based on expression recognition, posture recognition, and gaze tracking according to claim 6, characterized in that there are no fewer than 18 skeleton key points.
CN201811613715.9A 2018-12-27 2018-12-27 Behavior structuring method based on expression recognition, posture recognition, and gaze tracking Pending CN109711343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811613715.9A CN109711343A (en) 2018-12-27 2018-12-27 Behavior structuring method based on expression recognition, posture recognition, and gaze tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811613715.9A CN109711343A (en) 2018-12-27 2018-12-27 Behavior structuring method based on expression recognition, posture recognition, and gaze tracking

Publications (1)

Publication Number Publication Date
CN109711343A (en) 2019-05-03

Family

ID=66258798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811613715.9A Pending CN109711343A (en) 2018-12-27 2018-12-27 Behavior structuring method based on expression recognition, posture recognition, and gaze tracking

Country Status (1)

Country Link
CN (1) CN109711343A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476202A (en) * 2020-04-30 2020-07-31 杨九妹 User behavior analysis method and system of financial institution security system and robot
CN111860285A (en) * 2020-07-15 2020-10-30 北京思图场景数据科技服务有限公司 User registration method and device, electronic equipment and storage medium
CN114185509A (en) * 2021-12-13 2022-03-15 云知声(上海)智能科技有限公司 Multi-mode information screen device based on eye tracking technology and regional information amplification method thereof

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298709A (en) * 2011-09-07 2011-12-28 江西财经大学 Energy-saving intelligent recognition digital signage fusing multiple features in complex environments
CN105096528A (en) * 2015-08-05 2015-11-25 广州云从信息科技有限公司 Fatigue driving detection method and system
CN106372648A (en) * 2016-10-20 2017-02-01 中国海洋大学 Multi-feature-fusion-convolutional-neural-network-based plankton image classification method
CN106447184A (en) * 2016-09-21 2017-02-22 中国人民解放军国防科学技术大学 Unmanned aerial vehicle operator state evaluation method based on multi-sensor measurement and neural network learning
CN107506707A (en) * 2016-11-30 2017-12-22 奥瞳系统科技有限公司 Face detection using a small-scale convolutional neural network module in an embedded system
CN107563279A (en) * 2017-07-22 2018-01-09 复旦大学 Model training method with adaptive weight adjustment for human attribute classification
CN107862300A (en) * 2017-11-29 2018-03-30 东华大学 Pedestrian recognition method for surveillance scenes based on convolutional neural networks
US20180096259A1 (en) * 2016-09-30 2018-04-05 Disney Enterprises, Inc. Deep-learning motion priors for full-body performance capture in real-time
CN108229284A (en) * 2017-05-26 2018-06-29 北京市商汤科技开发有限公司 Gaze tracking and training method and device, system, electronic device, and storage medium
CN108596056A (en) * 2018-04-10 2018-09-28 武汉斑马快跑科技有限公司 Taxi driving behavior and action recognition method and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298709A (en) * 2011-09-07 2011-12-28 江西财经大学 Energy-saving intelligent recognition digital signage fusing multiple features in complex environments
CN105096528A (en) * 2015-08-05 2015-11-25 广州云从信息科技有限公司 Fatigue driving detection method and system
CN106447184A (en) * 2016-09-21 2017-02-22 中国人民解放军国防科学技术大学 Unmanned aerial vehicle operator state evaluation method based on multi-sensor measurement and neural network learning
US20180096259A1 (en) * 2016-09-30 2018-04-05 Disney Enterprises, Inc. Deep-learning motion priors for full-body performance capture in real-time
CN106372648A (en) * 2016-10-20 2017-02-01 中国海洋大学 Multi-feature-fusion-convolutional-neural-network-based plankton image classification method
CN107506707A (en) * 2016-11-30 2017-12-22 奥瞳系统科技有限公司 Face detection using a small-scale convolutional neural network module in an embedded system
CN108229284A (en) * 2017-05-26 2018-06-29 北京市商汤科技开发有限公司 Gaze tracking and training method and device, system, electronic device, and storage medium
CN107563279A (en) * 2017-07-22 2018-01-09 复旦大学 Model training method with adaptive weight adjustment for human attribute classification
CN107862300A (en) * 2017-11-29 2018-03-30 东华大学 Pedestrian recognition method for surveillance scenes based on convolutional neural networks
CN108596056A (en) * 2018-04-10 2018-09-28 武汉斑马快跑科技有限公司 Taxi driving behavior and action recognition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yu Ming et al., "Facial expression recognition based on multiple features and convolutional neural networks", Science Technology and Engineering *
Sun Guangmin et al., "Human motion tracking and posture prediction based on RBF neural networks", Chinese Journal of Scientific Instrument *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476202A (en) * 2020-04-30 2020-07-31 杨九妹 User behavior analysis method and system of financial institution security system and robot
CN111860285A (en) * 2020-07-15 2020-10-30 北京思图场景数据科技服务有限公司 User registration method and device, electronic equipment and storage medium
CN111860285B (en) * 2020-07-15 2023-10-17 北京思图场景数据科技服务有限公司 User registration method, device, electronic equipment and storage medium
CN114185509A (en) * 2021-12-13 2022-03-15 云知声(上海)智能科技有限公司 Multi-mode information screen device based on eye tracking technology and regional information amplification method thereof
CN114185509B (en) * 2021-12-13 2023-12-01 云知声(上海)智能科技有限公司 Multi-mode information screen device based on eye-tracking technology and region information amplifying method thereof

Similar Documents

Publication Publication Date Title
CN102547123B (en) Self-adapting sightline tracking system and method based on face recognition technology
CN109117893A (en) A kind of action identification method and device based on human body attitude
CN108537702A (en) Foreign language teaching evaluation information generation method and device
CN110362210A (en) The man-machine interaction method and device of eye-tracking and gesture identification are merged in Virtual assemble
CN109711343A (en) Behavioral structure method based on the tracking of expression, gesture recognition and expression in the eyes
CN106295313A (en) Object identity management method, device and electronic equipment
CN108229268A (en) Expression Recognition and convolutional neural networks model training method, device and electronic equipment
CN109176512A (en) A kind of method, robot and the control device of motion sensing control robot
Poppe et al. AMAB: Automated measurement and analysis of body motion
CN104281845A (en) Face recognition method based on rotation invariant dictionary learning model
WO2023134071A1 (en) Person re-identification method and apparatus, electronic device and storage medium
Cimen et al. Classification of human motion based on affective state descriptors
CN113435236A (en) Home old man posture detection method, system, storage medium, equipment and application
US20200074361A1 (en) Performance measurement device, performance measurement method and performance measurement program
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN112989947A (en) Method and device for estimating three-dimensional coordinates of human body key points
CN108717520A (en) A kind of pedestrian recognition methods and device again
CN110427849A (en) Face pose determination method and device, storage medium and electronic equipment
CN113569627A (en) Human body posture prediction model training method, human body posture prediction method and device
CN104850225A (en) Activity identification method based on multi-level fusion
Wang Data feature extraction method of wearable sensor based on convolutional neural network
Gutstein et al. Optical flow, positioning, and eye coordination: automating the annotation of physician-patient interactions
CN115019396A (en) Learning state monitoring method, device, equipment and medium
CN104680134B (en) Quick human body detecting method
Acharjee et al. Identification of significant eye blink for tangible human computer interaction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190503

RJ01 Rejection of invention patent application after publication