CN109165552B - Gesture recognition method and system based on human body key points and memory - Google Patents
Gesture recognition method and system based on human body key points and memory
- Publication number
- CN109165552B (application CN201810772931.1A)
- Authority
- CN
- China
- Prior art keywords
- moving target
- moving
- key points
- body key
- detecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a gesture recognition method and system based on human body key points, and a memory. The method comprises the following steps: first, moving regions in the video picture are identified through motion detection, and target detection is then carried out within these effective regions. Next, key points of the body and face are obtained through pose estimation, machine learning is applied to the key points, and human postures such as standing, sitting, turning around, and playing are recognized. Because the key points of the human body and face are trained together and the target is then subjected to gesture recognition, the accuracy of gesture recognition is greatly improved, as is its feasibility in practical application scenarios.
Description
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a gesture recognition method and system based on human body key points and a memory.
Background
Human body posture recognition and deep learning are research hotspots in the field of intelligent video analysis. They have gained wide attention from the academic and engineering communities in recent years and form the theoretical basis of fields such as intelligent video analysis and understanding, video surveillance, and human-computer interaction. Deep learning algorithms have been successfully applied in fields such as speech recognition and pattern recognition, have achieved significant results in static image feature extraction, and are gradually being extended to video behavior recognition over time series.
In the field of education, video streams shot in teaching places are analyzed through deep learning methods, and the behaviors of students are identified and recorded in order to evaluate teaching quality and students' classroom performance, providing direct guidance for subsequent teaching optimization. To analyze classroom teaching behavior, it is necessary to identify which student is which through the face, and to track and analyze student behaviors such as raising a hand to speak, standing up to answer a question, sitting down, and the like. A popular current practice is to recognize and mark body features, understand the posture expressed at different angles, and then infer the likely behavior.
However, in a teaching scene, dozens of students usually appear in the picture at the same time, the students' lower bodies are usually blocked by desks, and students in front rows occlude those behind them. In such a complex and changeable classroom environment, analyzing student postures with the above posture detection alone easily fails to recognize specific behaviors correctly and frequently causes misjudgment.
Disclosure of Invention
Aiming at the technical problems in the prior art, the invention provides a posture recognition method, system, and memory based on human body key points, which can accurately analyze behaviors in a teaching scene in real time by performing motion detection, capturing the effective area, detecting the human body, and finally analyzing and recording the behavior.
The technical scheme for solving the technical problems is as follows:
in one aspect, the invention provides a gesture recognition method based on human body key points, which comprises the following steps:
obtaining an RTSP video stream;
detecting a moving target of an image in a video stream to obtain a region where the moving target is located;
detecting face and body key points of the image in the region and extracting information;
and inputting the extracted face and body key point information into a pre-trained SVM classifier for classification recognition and posture prediction.
Further, detecting a moving target of an image in the video stream to obtain the region where the moving target is located comprises:
comparing consecutive video frames, tracking the moving target and performing target detection to obtain the window where the moving target is located;
setting a filtering threshold, and filtering out detected moving-target windows whose size is smaller than the filtering threshold;
and splicing the windows where the remaining moving targets are located to obtain the region where the moving target is located.
Further, splicing the windows where the remaining moving targets are located to obtain the moving target region comprises:
generating a rectangular frame, namely the moving target region, according to the positions of the windows where the remaining moving targets are located, wherein the rectangular frame is the minimum rectangle that encloses all the remaining moving-target windows within the video frame.
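The merging step described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the function name `merge_windows` and the `(x, y, w, h)` window convention are our assumptions. Given the windows that survive the size filter, the region of interest is simply the smallest rectangle enclosing all of them:

```python
def merge_windows(windows):
    """Merge (x, y, w, h) windows into the minimal enclosing rectangle.

    `windows` is assumed to be a non-empty list of windows that already
    passed the size filter; the result is the moving-target region.
    """
    x1 = min(x for x, y, w, h in windows)
    y1 = min(y for x, y, w, h in windows)
    x2 = max(x + w for x, y, w, h in windows)
    y2 = max(y + h for x, y, w, h in windows)
    return (x1, y1, x2 - x1, y2 - y1)

# Two surviving motion windows merged into one enclosing region
region = merge_windows([(10, 20, 30, 40), (50, 60, 20, 10)])
# region == (10, 20, 60, 50)
```

Running detection only inside this one rectangle, rather than the full frame, is what yields the speedup claimed later in the description.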
Further, after the detecting and information extracting of the face and body key points of the image in the region, the method further comprises the following steps:
and carrying out normalization processing on the extracted face and body key point information.
Further, the training process of the SVM classifier comprises the following steps:
obtaining an RTSP video stream;
carrying out moving target tracking and target detection on images in the video stream to obtain the area of a moving target;
detecting face and body key points of the image in the region and extracting information;
carrying out normalization processing on the extracted face and body key point information;
marking and classifying the normalized data, and training the normalized data through an SVM (support vector machine) to obtain an SVM classifier;
the model type of the SVM selects C _ SVC, the kernel function of the SVM selects LINEAR, and the training sample is not less than 10000.
On the other hand, the invention also provides a gesture recognition system based on the human body key points, which comprises the following components:
the video stream acquisition module is used for acquiring RTSP video streams;
the moving target detection module is used for tracking and detecting a moving target of an image in a video stream to acquire an area where the moving target is located;
the key point information extraction module is used for detecting face and body key points of the images in the region and extracting the information;
and the recognition module is used for inputting the extracted face and body key point information into a pre-trained SVM classifier for classification recognition and posture prediction.
Further, the moving target detection module comprises:
a window acquisition module, used for comparing consecutive video frames, tracking the moving target and performing target detection to acquire the window where the moving target is located;
a threshold setting and filtering module, used for setting a filtering threshold and filtering out detected moving-target windows whose size is smaller than the filtering threshold;
and a window splicing module, used for splicing the windows where the remaining moving targets are located to obtain the region where the moving target is located.
Further, the window splicing module is specifically configured to generate a rectangular frame, namely the moving target region, according to the positions of the windows where the remaining moving targets are located, wherein the rectangular frame is the minimum rectangle that encloses all the remaining moving-target windows within the video frame.
The system further comprises a normalization processing module, used for normalizing the extracted key point information after the key point information extraction module extracts the face and body key point information.
Furthermore, the invention also provides a memory, and the memory stores a computer software program for realizing the gesture recognition method based on the human body key points.
The invention has the beneficial effects that:
1. The invention extracts human body and face key points based on the result of motion detection, greatly improving the speed of feature extraction.
2. Data labeling is performed after human body features and facial features are combined, greatly improving the operability of the data.
3. Human body features and facial features are combined for gesture recognition, greatly improving the accuracy of gesture recognition.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Gesture recognition technology plays an increasingly important role in numerous fields such as intelligent surveillance, human-computer interaction, video sequence understanding, and medical health, yet achieving high accuracy in video gesture recognition remains challenging under factors such as viewing angle, complex scenes, and crowded pictures.
In the field of education, feedback on and guidance for teaching quality are obtained by analyzing the teaching behaviors of students and teachers, which can be realized by analyzing the video streams of classroom cameras in real time with a deep learning network.
In order to realize student posture analysis in a teaching scene, this patent proposes the following solution:
First, moving regions in the video picture are identified through motion detection, and target detection is then carried out within these effective regions. Next, key points of the body and face are obtained through pose estimation, machine learning is applied to the key points, and human postures such as standing, sitting, turning around, and playing are recognized. Because the key points of the human body and face are trained together and the target is then subjected to gesture recognition, the accuracy of gesture recognition is greatly improved, as is its feasibility in practical application scenarios.
On one hand, the invention provides a gesture recognition method based on human body key points, as shown in fig. 1, comprising the following steps:
S100, obtaining an RTSP video stream;
S200, detecting a moving target of an image in the video stream to obtain the region where the moving target is located;
Both target detection and posture analysis consume considerable software and hardware resources. To improve efficiency, motion detection is performed first, before target detection, to find the windows where moving targets are located. Such a window is typically large, but still much smaller than the whole image. Smaller windows are mostly false alarms and can be filtered out according to a threshold. Specifically:
First, consecutive video frames are compared, the moving target is tracked and target detection is performed to obtain the window where the moving target is located;
then a filtering threshold is set, and detected moving-target windows whose size is smaller than the filtering threshold are filtered out;
finally, a rectangular frame, namely the moving target region, is generated according to the positions of the windows where the remaining moving targets are located, wherein the rectangular frame is the minimum rectangle that encloses all the remaining moving-target windows within the video frame.
The detected motion windows are spliced together, so that target recognition only needs to run target detection in the partial region that contains moving targets, which greatly improves detection speed.
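A minimal sketch of the frame-differencing idea behind this motion detection step follows. The function name, the pure-NumPy implementation, and the specific threshold value are illustrative assumptions, not the patent's; a production system would more likely use a background-subtraction routine from a vision library.

```python
import numpy as np

def motion_mask(prev_frame, frame, diff_thresh=25):
    """Binary mask of pixels that changed between two grayscale frames.

    `diff_thresh` is an illustrative value; the patent only says a
    filtering threshold is set, not what it is.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_thresh

prev_f = np.zeros((4, 4), dtype=np.uint8)
cur = prev_f.copy()
cur[1:3, 1:3] = 200          # a small "moving" patch
mask = motion_mask(prev_f, cur)
print(mask.sum())            # -> 4 changed pixels
```

Connected regions in such a mask give the candidate windows; small regions are discarded as false alarms and the survivors are merged into the moving-target region, as described above.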
S300, detecting face and body key points of the image in the region and extracting the information. Several body key point detection schemes are currently available, for example the popular OpenPose and the recently open-sourced AlphaPose, either of which can extract the key points. There are likewise many schemes for face key point extraction, such as dlib's landmark detector and deep-learning-based landmark methods.
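Whichever extractor is used, the patent ultimately feeds combined face and body key point information to one classifier. Here is a minimal sketch of assembling that combined feature vector; the function name and the shapes are illustrative assumptions (18 body points is the OpenPose COCO-style layout, 68 face points the dlib-style layout):

```python
import numpy as np

def assemble_features(body_kpts, face_kpts):
    """Concatenate (x, y) body and face key points into one flat vector.

    Shapes are illustrative: e.g. 18 body points and 68 face points
    give a 172-dimensional feature vector.
    """
    return np.concatenate([np.asarray(body_kpts, float).ravel(),
                           np.asarray(face_kpts, float).ravel()])

body = np.zeros((18, 2))
face = np.zeros((68, 2))
print(assemble_features(body, face).shape)  # -> (172,)
```

Mixing the two feature sets into one vector is what the "beneficial effects" section credits for the improved labeling operability and recognition accuracy.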
S400, in order to improve the effectiveness of data training and the accuracy of results, normalization processing is carried out on the extracted face and body key point information;
the normalized formula is:
wherein the content of the first and second substances,represents the extracted secondIndividual face or body key point information,the values are integers.
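Since the patent does not reproduce the formula in text form, here is one common normalization choice as an illustration: min-max scaling over the key point bounding box. The function name and the choice of scaling are our assumptions, not the patent's.

```python
import numpy as np

def normalize_keypoints(kpts):
    """Min-max normalize (x, y) key points into [0, 1].

    The patent's exact formula is given only as an image; min-max
    scaling relative to the point cloud's bounding box is a common
    choice and is used here purely as an illustrative stand-in.
    """
    kpts = np.asarray(kpts, dtype=float)
    lo, hi = kpts.min(axis=0), kpts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    return (kpts - lo) / span

pts = normalize_keypoints([[100, 200], [300, 400], [200, 300]])
print(pts[0], pts[1])  # -> [0. 0.] [1. 1.]
```

Whatever the exact formula, the point of this step is the same: making the feature vectors independent of where in the frame, and at what scale, the person appears, which is what improves training effectiveness and result accuracy.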
The normalized face and body key point information is then labeled according to the actual situation, i.e. marked as actions such as raising a hand to speak, standing up to answer a question, sitting down, and the like;
S500, inputting the labeled face and body key point information into an SVM for training, wherein the model type of the SVM is C_SVC, the kernel function is LINEAR, and the number of training samples is not less than 10000;
in the recognition stage, the SVM training method is utilized to extract face and body key point information, and then the face and body key point information is input into a trained SVM classifier to perform classification recognition and posture prediction.
In another aspect, the present invention further provides a gesture recognition system based on human body key points, as shown in fig. 2, the system includes:
the video stream acquisition module is used for acquiring RTSP video streams;
the moving target detection module is used for tracking and detecting a moving target of an image in a video stream to acquire an area where the moving target is located;
the key point information extraction module is used for detecting face and body key points of the images in the region and extracting the information;
and the recognition module is used for inputting the extracted face and body key point information into a pre-trained SVM classifier for classification recognition and posture prediction.
Further, the moving target detection module comprises:
a window acquisition module, used for comparing consecutive video frames, tracking the moving target and performing target detection to acquire the window where the moving target is located;
a threshold setting and filtering module, used for setting a filtering threshold and filtering out detected moving-target windows whose size is smaller than the filtering threshold;
and a window splicing module, used for splicing the windows where the remaining moving targets are located to obtain the region where the moving target is located.
Further, the window splicing module is specifically configured to generate a rectangular frame, namely the moving target region, according to the positions of the windows where the remaining moving targets are located, wherein the rectangular frame is the minimum rectangle that encloses all the remaining moving-target windows within the video frame.
The system further comprises a normalization processing module, used for normalizing the extracted key point information after the key point information extraction module extracts the face and body key point information.
Furthermore, the invention also provides a memory, and the memory stores a computer software program for realizing the gesture recognition method based on the human body key points.
The invention extracts human body and face key points based on the result of motion detection, greatly improving the speed of feature extraction; performs data labeling after combining human body features and facial features, greatly improving the operability of the data; and combines human body features and facial features for gesture recognition, greatly improving the accuracy of gesture recognition.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (6)
1. A posture recognition method based on human body key points, characterized by comprising the following steps:
obtaining an RTSP video stream;
detecting a moving target of an image in a video stream to obtain a region where the moving target is located;
detecting face and body key points of the image in the region and extracting information;
inputting the extracted face and body key point information into a pre-trained SVM classifier for classification recognition and posture prediction;
the method for detecting the moving target of the image in the video stream and acquiring the area where the moving target is located includes:
comparing the front video frame and the rear video frame, tracking a moving target and detecting the target to obtain a window where the moving target is located;
setting a filtering threshold value, and filtering a window where the detected moving target with the size smaller than the filtering threshold value is located;
splicing the windows where the rest moving targets are located to obtain the area where the moving targets are located;
the splicing of the windows where the remaining moving targets are located to obtain a moving target area comprises the following steps:
and generating a rectangular frame which is the moving target area according to the positions of the windows where the rest moving targets are located, wherein the rectangular frame is the minimum rectangular frame of the windows where all the rest moving targets are located in the frame-selectable video frame.
2. The method as claimed in claim 1, further comprising, after the detecting and extracting the face and body key points of the image in the region, the following steps:
and carrying out normalization processing on the extracted face and body key point information.
3. The posture recognition method based on human body key points as claimed in claim 2, wherein the training process of the SVM classifier comprises the following steps:
obtaining an RTSP video stream;
carrying out moving target tracking and target detection on images in the video stream to obtain the area of a moving target;
detecting face and body key points of the image in the region and extracting information;
carrying out normalization processing on the extracted face and body key point information;
marking and classifying the normalized data, and training the normalized data through an SVM (support vector machine) to obtain an SVM classifier;
the model type of the SVM is C_SVC, the kernel function of the SVM is LINEAR, and the number of training samples is not less than 10000.
4. A gesture recognition system based on human key points, the system comprising:
the video stream acquisition module is used for acquiring RTSP video streams;
the moving target detection module is used for tracking and detecting a moving target of an image in a video stream to acquire an area where the moving target is located;
the key point information extraction module is used for detecting face and body key points of the images in the region and extracting the information;
the recognition module is used for inputting the extracted face and body key point information into a pre-trained SVM classifier for classification recognition and posture prediction;
wherein the moving target detection module comprises:
a window acquisition module, used for comparing consecutive video frames, tracking the moving target and performing target detection to acquire the window where the moving target is located;
a threshold setting and filtering module, used for setting a filtering threshold and filtering out detected moving-target windows whose size is smaller than the filtering threshold;
and a window splicing module, used for splicing the windows where the remaining moving targets are located to obtain the region where the moving target is located, and specifically configured to generate a rectangular frame, namely the moving target region, according to the positions of the windows where the remaining moving targets are located, wherein the rectangular frame is the minimum rectangle that encloses all the remaining moving-target windows within the video frame.
5. The system according to claim 4, further comprising a normalization processing module, configured to perform normalization processing on the key point information after the key point information extraction module extracts the face and the body key point information.
6. A memory, characterized in that the memory is stored with a computer software program for implementing a method for human body key point based gesture recognition according to any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810772931.1A CN109165552B (en) | 2018-07-14 | 2018-07-14 | Gesture recognition method and system based on human body key points and memory |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109165552A CN109165552A (en) | 2019-01-08 |
CN109165552B true CN109165552B (en) | 2021-02-26 |
Family
ID=64897918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810772931.1A Active CN109165552B (en) | 2018-07-14 | 2018-07-14 | Gesture recognition method and system based on human body key points and memory |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109165552B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109815921A (en) * | 2019-01-29 | 2019-05-28 | 北京融链科技有限公司 | The prediction technique and device of the class of activity in hydrogenation stations |
CN109858444A (en) * | 2019-01-31 | 2019-06-07 | 北京字节跳动网络技术有限公司 | The training method and device of human body critical point detection model |
CN109871804A (en) * | 2019-02-19 | 2019-06-11 | 上海宝尊电子商务有限公司 | A kind of method and system of shop stream of people discriminance analysis |
CN110147712A (en) * | 2019-03-27 | 2019-08-20 | 苏州书客贝塔软件科技有限公司 | A kind of intelligent cloud platform of pedestrian's analysis |
CN110298237A (en) * | 2019-05-20 | 2019-10-01 | 平安科技(深圳)有限公司 | Head pose recognition methods, device, computer equipment and storage medium |
CN110222608A (en) * | 2019-05-24 | 2019-09-10 | 山东海博科技信息系统股份有限公司 | A kind of self-service examination machine eyesight detection intelligent processing method |
CN110457999B (en) * | 2019-06-27 | 2022-11-04 | 广东工业大学 | Animal posture behavior estimation and mood recognition method based on deep learning and SVM |
CN110399822A (en) * | 2019-07-17 | 2019-11-01 | 思百达物联网科技(北京)有限公司 | Action identification method of raising one's hand, device and storage medium based on deep learning |
CN110826401B (en) * | 2019-09-26 | 2023-12-26 | 广州视觉风科技有限公司 | Human body limb language identification method and system |
CN110674785A (en) * | 2019-10-08 | 2020-01-10 | 中兴飞流信息科技有限公司 | Multi-person posture analysis method based on human body key point tracking |
CN110837795A (en) * | 2019-11-04 | 2020-02-25 | 防灾科技学院 | Teaching condition intelligent monitoring method, device and equipment based on classroom monitoring video |
CN111046825A (en) * | 2019-12-19 | 2020-04-21 | 杭州晨鹰军泰科技有限公司 | Human body posture recognition method, device and system and computer readable storage medium |
CN111513745B (en) * | 2020-04-21 | 2021-10-26 | 南通大学 | Intelligent non-contact CT body position recognition device used in high-risk environment |
CN111680562A (en) * | 2020-05-09 | 2020-09-18 | 北京中广上洋科技股份有限公司 | Human body posture identification method and device based on skeleton key points, storage medium and terminal |
CN111652076B (en) * | 2020-05-11 | 2024-05-31 | 重庆知熠行科技发展有限公司 | Automatic gesture recognition system for AD (analog-to-digital) meter understanding capability test |
CN111582208B (en) * | 2020-05-13 | 2023-07-21 | 抖音视界有限公司 | Method and device for generating organism posture key point information |
CN111907520B (en) * | 2020-07-31 | 2022-03-15 | 东软睿驰汽车技术(沈阳)有限公司 | Pedestrian posture recognition method and device and unmanned automobile |
CN113158756A (en) * | 2021-02-09 | 2021-07-23 | 上海领本智能科技有限公司 | Posture and behavior analysis module and method based on HRNet deep learning |
CN112836667B (en) * | 2021-02-20 | 2022-11-15 | 上海吉盛网络技术有限公司 | Method for judging falling and reverse running of passengers going upstairs escalator |
CN114842528A (en) * | 2022-03-31 | 2022-08-02 | 上海商汤临港智能科技有限公司 | Motion detection method, motion detection device, electronic device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105389549A (en) * | 2015-10-28 | 2016-03-09 | 北京旷视科技有限公司 | Object recognition method and device based on human body action characteristic |
CN106228293A (en) * | 2016-07-18 | 2016-12-14 | 重庆中科云丛科技有限公司 | teaching evaluation method and system |
CN107330914A (en) * | 2017-06-02 | 2017-11-07 | 广州视源电子科技股份有限公司 | Face position method for testing motion and device and vivo identification method and system |
CN107697069A (en) * | 2017-10-31 | 2018-02-16 | 上海汽车集团股份有限公司 | Fatigue of automobile driver driving intelligent control method |
CN108197589A (en) * | 2018-01-19 | 2018-06-22 | 北京智能管家科技有限公司 | Semantic understanding method, apparatus, equipment and the storage medium of dynamic human body posture |
CN108198208A (en) * | 2017-12-27 | 2018-06-22 | 浩云科技股份有限公司 | A kind of mobile detection method based on target following |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5990824A (en) * | 1998-06-19 | 1999-11-23 | Northrop Grumman Corporation | Ground based pulse radar system and method providing high clutter rejection and reliable moving target indication with extended range for airport traffic control and other applications |
US6697010B1 (en) * | 2002-04-23 | 2004-02-24 | Lockheed Martin Corporation | System and method for moving target detection |
US20150138078A1 (en) * | 2013-11-18 | 2015-05-21 | Eyal Krupka | Hand pose recognition using boosted look up tables |
CN105469065B (en) * | 2015-12-07 | 2019-04-23 | 中国科学院自动化研究所 | A kind of discrete emotion identification method based on recurrent neural network |
CN106778585B (en) * | 2016-12-08 | 2019-04-16 | 腾讯科技(上海)有限公司 | A kind of face key point-tracking method and device |
CN106845338B (en) * | 2016-12-13 | 2019-12-20 | 深圳市智美达科技股份有限公司 | Pedestrian detection method and system in video stream |
- 2018-07-14: application CN201810772931.1A filed (CN); granted as CN109165552B, status active
Non-Patent Citations (4)
Title |
---|
OpenPose 安装配置与测试;JerryZhang__;《https://blog.csdn.net/jerryzhang__/article/details/76208871》;20170727;正文第1页第1段,第6页图2,第7页图2,第8页图1 * |
Realtime Multi-person 2D Pose Estimation using Part Affinity Fields;Zhe Cao等;《2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)》;20171109;第1302-1310页 * |
基于图像处理的移动目标测量方法的研究;李搏轩;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170315(第03期);第I138-4411页 * |
课堂学习行为测量系统的设计与实现;张鸿宇;《中国优秀硕士学位论文全文数据库 社会科学II辑》;20180115(第01期);第H127-59页 * |
Also Published As
Publication number | Publication date |
---|---|
CN109165552A (en) | 2019-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109165552B (en) | Gesture recognition method and system based on human body key points and memory | |
CN108399376B (en) | Intelligent analysis method and system for classroom learning interest of students | |
CN108090857B (en) | Multi-mode student classroom behavior analysis system and method | |
Wang et al. | Hierarchical attention network for action recognition in videos | |
CN108648757B (en) | Analysis method based on multi-dimensional classroom information | |
Mathe et al. | Actions in the eye: Dynamic gaze datasets and learnt saliency models for visual recognition | |
Rezazadegan et al. | Action recognition: From static datasets to moving robots | |
Shah et al. | Efficient portable camera based text to speech converter for blind person | |
CN114170672A (en) | Classroom student behavior identification method based on computer vision | |
CN111814733A (en) | Concentration degree detection method and device based on head posture | |
CN111382655A (en) | Hand-lifting behavior identification method and device and electronic equipment | |
CN113705510A (en) | Target identification tracking method, device, equipment and storage medium | |
Yi et al. | Real time learning evaluation based on gaze tracking | |
CN110689066B (en) | Training method combining face recognition data equalization and enhancement | |
CN108197593B (en) | Multi-size facial expression recognition method and device based on three-point positioning method | |
Radwan et al. | A one-decade survey of detection methods of student cheating in exams (features and solutions) | |
Raza et al. | HMM-based scheme for smart instructor activity recognition in a lecture room environment | |
CN114222065B (en) | Image processing method, image processing apparatus, electronic device, storage medium, and program product | |
CN112613436B (en) | Examination cheating detection method and device | |
Sathyanarayana et al. | Hand gestures for intelligent tutoring systems: dataset, techniques & evaluation | |
CN111274898A (en) | Method and device for detecting group emotion and cohesion in video stream based on deep learning | |
CN112580526A (en) | Student classroom behavior identification system based on video monitoring | |
Yang et al. | An interaction system using mixed hand gestures | |
Potluri et al. | A comprehensive survey on the AI based fully automated online proctoring systems to detect anomalous behavior of the examinee | |
CN111860294A (en) | Face capture equipment convenient to trail |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right |
Effective date of registration: 20220406 Address after: 518000 A1201, block ABCD, building 3, phase I, Tianan Yungu Industrial Park, Gangtou community, Bantian street, Longgang District, Shenzhen, Guangdong Patentee after: Shenzhen Zhongwei Shenmu Technology Co.,Ltd. Address before: 518000 Room 401, building 7, Jinxiu Dadi, 117 hudipai, songyuanxia community, Guanhu street, Longhua District, Shenzhen City, Guangdong Province Patentee before: SHENZHEN SHENMU INFORMATION TECHNOLOGY Co.,Ltd. |