CN114255508A - OpenPose-based student posture detection analysis and efficiency evaluation method - Google Patents

OpenPose-based student posture detection analysis and efficiency evaluation method

Info

Publication number
CN114255508A
CN114255508A CN202010997266.3A CN202010997266A CN114255508A CN 114255508 A CN114255508 A CN 114255508A CN 202010997266 A CN202010997266 A CN 202010997266A CN 114255508 A CN114255508 A CN 114255508A
Authority
CN
China
Prior art keywords
student
data
openpose
posture
students
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010997266.3A
Other languages
Chinese (zh)
Inventor
高聪
陈煜喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an University of Posts and Telecommunications
Original Assignee
Xi'an University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an University of Posts and Telecommunications
Priority to CN202010997266.3A priority Critical patent/CN114255508A/en
Publication of CN114255508A publication Critical patent/CN114255508A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance

Abstract

The invention provides an OpenPose-based method for monitoring and analyzing a student's posture while writing homework and for evaluating efficiency. The method comprises the following steps: 1) body movement and facial expression data of the student doing homework are obtained through real-time monitoring; 2) the collected data cover the shoulders, neck, head, elbow joints, wrists and fingers, together with facial features such as the eyes, nose and mouth; 3) the student's actions are analyzed through OpenPose human posture recognition and deep learning; 4) while OpenPose recognizes the student's actions, the distance between the student and the desk is monitored, the student is reminded to correct the sitting posture, and when the student has been concentrating for too long a rest is suggested so as to protect the student's health; 5) the collected data are analyzed to determine the student's current learning state, such as whether the student is tense or concentrating.

Description

OpenPose-based student posture detection analysis and efficiency evaluation method
Technical Field
The invention belongs to the technical field of artificial intelligence robots, and in particular relates to an OpenPose-based student posture monitoring, analysis and efficiency evaluation method.
Background
An artificial intelligence robot is a product of the joint development of science and technology: through deep learning and various sensors it can acquire abilities partially similar to those of people or other animals, such as perception, planning, data analysis and intelligent voice conversation. Many children cannot voluntarily control their own learning activities and rely entirely on the supervision and coaching of parents. However, parents in modern society already work under great pressure; having to tutor their children's homework after returning home increases that pressure and consumes their energy, yet often fails to improve the children's learning. In addition, in many rural families the parents work away from home and the children are looked after by their grandparents. The elderly may find it difficult to move about and to take care of the children's daily life, yet they still need to supervise and guide the children's written homework.
In view of the above problems, note that OpenPose is an open-source library built on convolutional neural networks (CNN) and supervised learning, using Caffe as its framework, which can estimate the pose of human body movements, facial expressions, finger motions and the like. The invention provides an OpenPose-based student posture monitoring, analysis and efficiency evaluation method to solve these problems.
Disclosure of Invention
The invention aims to provide an OpenPose-based student posture monitoring, analysis and efficiency evaluation method, which addresses the problems above as follows: 1) body movement and facial expression data of the student doing homework are obtained through real-time monitoring; 2) the collected data cover the shoulders, neck, head, elbow joints, wrists and fingers, together with facial features such as the eyes, nose and mouth; 3) the student's actions are analyzed through OpenPose human posture recognition and deep learning; 4) while OpenPose recognizes the student's actions, the distance between the student and the desk is monitored, any detected unhealthy posture triggers a timely reminder, and when the student has been concentrating for too long a rest is suggested so as to protect the student's health; 5) the collected data are analyzed to determine the student's current learning state, such as whether the student is tense or concentrating. Finally, a behavior data report covering the child's whole homework session is produced. To achieve this, the invention adopts the following technical scheme. OpenPose is a bottom-up human pose estimation method: it first detects the coordinates of all joint points in an image and then clusters those coordinates to form the key-point coordinates of each person. The invention targets the posture of a single student during study, so the scene is simple and the computational cost of handling a complex multi-person environment is unnecessary in single-person detection. It is therefore reasonable to propose the following improvements to traditional OpenPose for the application scenario of the invention:
1. The six stages of OpenPose are reduced to three, keeping only the initial stage and the subsequent refinement stages.
2. The 7 × 7 convolution kernel is replaced with 1 × 1 and 3 × 3 convolution kernels, which reduces dimensionality and the amount of computation.
3. Because reducing the six stages of traditional OpenPose to three inevitably affects calculation accuracy, the invention optimizes the pooling process to prevent overfitting and improve the accuracy of the results.
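The saving promised by improvement 2 can be illustrated with a quick parameter count: a stack of three 3 × 3 convolutions has the same receptive field as one 7 × 7 convolution but far fewer weights, even after adding a 1 × 1 channel-mixing layer. The channel width of 128 is an illustrative assumption, not a value taken from the patent.

```python
# Parameter count of one 7x7 convolution vs. a stack of three 3x3
# convolutions plus a 1x1 convolution, biases ignored.
# The channel width (128) is an assumed, illustrative value.

def conv_params(k, c_in, c_out):
    """Weights in one k x k convolution layer (no bias)."""
    return k * k * c_in * c_out

C = 128  # assumed channel count

# Single 7x7 layer.
single_7x7 = conv_params(7, C, C)

# Three stacked 3x3 layers cover the same 7x7 receptive field;
# the 1x1 layer mixes channels cheaply.
stacked = 3 * conv_params(3, C, C) + conv_params(1, C, C)

print(single_7x7)            # 802816
print(stacked)               # 458752
print(stacked < single_7x7)  # True
```

Under these assumptions the stacked design uses a little over half the weights, which is the kind of computation reduction the improvement relies on.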
The method uses computer vision to promptly acquire and analyze the student's homework behavior and facial activity. Feature vectors extracted with OpenPose are classified and trained by a support vector machine to generate a model. After a detailed requirements analysis, the system is divided into a data training module, a behavior acquisition module, a behavior discrimination module, a data analysis module and a database, each realizing a different function. The method comprises the following steps:
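The support-vector-machine classification mentioned above reduces, at inference time, to the sign of a linear decision function. The sketch below shows only that decision rule; the weights, bias and feature vector are made-up illustrative values, since the patent does not publish trained parameters.

```python
# Minimal sketch of a linear SVM decision rule for pose feature
# vectors. All numeric values are hypothetical; a real system would
# learn the weights and bias from training data.

def svm_decide(features, weights, bias):
    """Return +1 or -1 according to the sign of w . x + b."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1

# Hypothetical trained parameters and one pose feature vector.
weights = [0.8, -0.5, 0.3]
bias = -0.1
features = [1.0, 0.2, 0.5]

label = svm_decide(features, weights, bias)
print(label)  # 1, i.e. the pose falls on the positive side
```

A trained SVM library would add kernels and margin optimization; the decision step it applies to each new feature vector is exactly this dot-product-and-sign test.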
First, the behavior acquisition module, which comprises a distance sensor and a vision sensor, collects data and sends it to the data analysis module. The analyzed and processed results are then passed to the behavior discrimination module, which, by comparison with the reference model produced by the data training module, makes an accurate judgment about the student's behavior and state. The specific steps are as follows:
and S101, collecting data by a vision sensor, an infrared distance measuring sensor and the like.
Step S102: the vision sensor acquisition system collects light source information through a camera and converts the light source information received by the sensor into an electric signal through a CCD and CMOS technical system. The infrared ranging sensor detects the distance between the head of a student and a desk by utilizing the principle that the infrared signals meet different reflection intensities of barrier distances, sets a distance threshold value, and reminds the student to pay attention to the sitting posture after the distance threshold value is exceeded.
Step S103: after the data collection is finished, the collected data is transmitted to a data analysis module through a bus interface (such as RS-232, RS-485, CAN and the like) of the sensor, and the data is processed by the data module in the next step.
The data training module builds the model. A large number of pictures of tired facial expressions and postures are downloaded from the Internet for deep-learning training. Existing face detection approaches include methods based on prior knowledge, template matching, feature extraction and machine learning; the invention adopts the template-matching method. Pictures of students' body states and facial expressions under different conditions, for example yawning, stretching, nodding off in a doze, or an unfocused, wandering gaze, are collected by searching the Internet for images of people in alert and fatigued states under different illumination, angles, backgrounds and other conditions. Training is performed with a facial expression recognition program, the video image content is labelled in a standardized, unified way, and a comparison database is established. The specific steps are as follows:
step S201: a large number of facial expressions and posture of people in mental state inattention and tired state in different lighting, angle and background environments are downloaded from the network.
Step S202: facial expression and posture picture samples of a person in mental state inattentive and exhausted states in a large number of different lighting, angle, background environments downloaded over a network are used as training data sets. A COCO data set widely applied in computer vision research is adopted as a test data set of a human body key point model. As shown in FIG. 5, COCO output formats are nose-0, neck-1, right shoulder-2, right elbow-3, right wrist-4, left shoulder-5, left elbow-6, left wrist-7, right hip-8, right knee-9, right ankle-10, left hip-11, left knee-12, left ankle-13, right eye-14, left eye-15, with ear-16, left ear-17, background-18.
Step S203: the data set is augmented. Because training the model requires a large number of parameters while relevant resources on the network are scarce, the existing data are flipped, translated, rotated and so on to create more samples. This increases the amount of training data, improves the model's generalization ability and, by adding noisy data, improves its robustness.
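One augmentation from step S203, the horizontal flip, is slightly subtle for keypoint annotations: mirroring the x-coordinates also swaps every left/right label. The sketch below assumes the COCO index layout of step S202; the image width and sample points are illustrative values.

```python
# Sketch of keypoint-aware horizontal flipping (one augmentation of
# step S203). Mirrors x-coordinates, then swaps left/right indices.
# Image width and the dummy points are assumed illustrative values.

IMAGE_WIDTH = 640  # assumed image width in pixels

# Left/right index pairs in the COCO layout (shoulder, elbow, wrist,
# hip, knee, ankle, eye, ear).
SWAP_PAIRS = [(2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13),
              (14, 15), (16, 17)]

def hflip_keypoints(points, width=IMAGE_WIDTH):
    """Mirror x-coordinates, then swap each left/right pair."""
    flipped = [(width - 1 - x, y) for x, y in points]
    for a, b in SWAP_PAIRS:
        flipped[a], flipped[b] = flipped[b], flipped[a]
    return flipped

# 19 dummy keypoints at distinct x positions.
pts = [(i * 10, 100) for i in range(19)]
out = hflip_keypoints(pts)
print(out[0])  # nose is mirrored but not swapped: (639, 100)
print(out[2])  # right shoulder now holds the mirrored left shoulder
```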
Step S204: and selecting a google deep learning framework tensorflow as a rear end on a model architecture, carrying out deep training on the data, and finally establishing a model. And establishing a comparison database by marking the video image content in a standardized and unified way.
The data analysis module comprises a data processing chip, system memory and a storage module. It processes the data sent by the behavior acquisition module, constructs the data logic according to the core algorithm, processes the specific data, and finally sends the processed results onwards for the next stage of work. The specific steps performed by the data analysis module are as follows:
step S301: and processing the data transmitted by the behavior acquisition module.
Step S302: the logic between data is constructed through an optimization algorithm, a parallel algorithm is adopted, deep learning is utilized to preprocess image data, the quality effect of an image is effectively improved, and the characteristic effect of recognition is highlighted. The data processing efficiency is improved, and the identification accuracy is improved.
Step S303: and transmitting the preprocessed data to a behavior judging module, comparing the data with a comparison database obtained by a data training module, and judging the physical state of the student.
Through the steps above, the behavior discrimination module produces an overall evaluation of the student's physical and mental state. Based on this evaluation it judges whether the student's learning state and sitting posture are up to standard, and gives a voice prompt.
Step S401: and carrying out human body posture recognition on the preprocessed data by utilizing OpenPose.
Step S402: a timer is set in the OpenPose-based process; pictures are captured from the real-time video at fixed intervals, and CNN features are extracted from each picture by the convolution and pooling operations of the VGG-19 neural network model.
Step S403: openpos contains six stages, wherein each stage comprises two branches, one branch is used for detecting heatmap of human posture joint points by using a convolutional network, the other branch is used for obtaining partial affinity fields vectmap of all connected joint points by using CNN, and finally obtaining all joint information key points.
Step S404: the limb connection is obtained using joint information and pafs (for site association). The model comprises 19 limbs, two parts and paf corresponding to each limb are determined, the result obtained by integrating paf information between the two parts is used as the confidence coefficient of the limb, and the confidence map is used for showing the possibility of the occurrence of the human body part in a gray level. Finally, the limb joints are found by using partial Affinity fields PAFs (part Affinity fields), and each joint can be regarded as a limb.
Step S405: after all limbs are obtained, the limbs connected with the same joint are regarded as the limbs of the same person, data are transmitted to a data training module to be compared with the similarity of the limbs of the same person with the deeply trained model, a threshold value is set, and the posture and posture of the student can be judged if the similarity exceeds the threshold value.
Finally, according to the student's behavior throughout the homework session, a behavior data report is produced, including how long the student studied with concentration, how long the sitting posture was standard, and how long it was non-standard. The behavior data gathered over the whole session form the student's homework report.
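The report described above amounts to totalling per-frame state labels into durations. The state names and the 2-second sampling interval in this sketch are assumptions for illustration.

```python
# Sketch of the summary report: state labels sampled at a fixed
# interval are totalled into per-state durations. State names and
# the 2-second interval are assumed illustrative values.
from collections import Counter

SAMPLE_INTERVAL_S = 2  # assumed seconds between sampled frames

def build_report(frame_states):
    """Sum the time spent in each observed state."""
    counts = Counter(frame_states)
    return {state: n * SAMPLE_INTERVAL_S for state, n in counts.items()}

states = ["focused", "focused", "slouching", "focused", "distracted"]
report = build_report(states)
print(report["focused"])    # 6 seconds of focused study
print(report["slouching"])  # 2 seconds of non-standard posture
```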
Drawings
In order to clearly illustrate the technical solution of the present invention, the main process is described in the following flowcharts. The drawings show only some examples of the invention and should not be considered limiting; all equivalent changes and modifications made by those skilled in the art without departing from the concept and principles of the invention fall within its scope of protection.
FIG. 1 is a flow chart of the operation of a behavior acquisition module;
FIG. 2 is a flow chart of the operation of the data training module;
FIG. 3 is a flow chart of the operation of the data analysis module;
FIG. 4 is a flowchart of the operation of the behavior discrimination module;
FIG. 5 is a skeletal structure diagram of OpenPose.

Claims (7)

1. An OpenPose-based method for monitoring and analyzing a student's posture and evaluating efficiency while the student writes homework, implemented on a portable intelligent device comprising a body, a power supply and various sensors, wherein the power supply is arranged in the body and can run on battery or be connected to mains power, the device being characterized by being provided with:
a sensor system in which the data obtained by the sensors are processed by the improved OpenPose algorithm, characterized in that the camera sensor module comprises a depth camera unit that measures the depth and contour information of people and scenery within the visual range, a main camera unit for ordinary video capture, and a connected motion-sensing recognition unit, the depth camera unit being used to recognize the student's posture and facial expression; the device further comprises an infrared distance-measuring sensor for detecting the distance between the student's head and the desk, and a built-in data processing chip for data processing.
2. An improved algorithm based on OpenPose, characterized in that:
1) the six stages of OpenPose are reduced to three, keeping only the initial stage and the subsequent refinement stages;
2) the 7 × 7 convolution kernel is replaced with 1 × 1 and 3 × 3 convolution kernels, which reduces dimensionality and the amount of computation;
3) because reducing the six stages of traditional OpenPose to three inevitably affects calculation accuracy, the pooling process is optimized to prevent overfitting and improve the accuracy of the results.
3. The OpenPose-based student writing-homework posture monitoring, analysis and efficiency evaluation method as recited in claims 1 and 2, wherein the first step specifically comprises:
1) the student's body data and facial expression data are collected through the sensors.
2) The student's posture and facial expression are modelled from the acquired data by the improved OpenPose-based algorithm. For detecting the student's sitting posture, a human skeleton key-point feature extraction method is adopted: the skeleton key points exhibit different motion trajectories, limb-angle changes and so on as the student sits or moves, and the student's behavior and actions are analyzed from these. Current top-down human skeleton key-point detection algorithms include PMPE, CFN, Mask R-CNN and G-RMI, and algorithms for modelling the correlation between collected key points include Associative Embedding, Mid-Range Offsets and PAF. The modelling adopted by the method builds on and improves these algorithms so as to collect and analyze the student's skeleton key points accurately.
4. The method based on the steps of claims 1 and 2, further comprising a neural network model training module, characterized in that: for pictures of students' body states and facial expressions under different conditions, such as yawning, stretching, nodding off in a doze, or an unfocused, wandering gaze, images of people's alert and fatigued faces under different illumination, angles, backgrounds and other conditions are downloaded from the Internet. Training is performed with a facial expression recognition program, the video image content is labelled in a standardized, unified way, and a comparison database is established.
5. Posture detection realized with the neural network training module of claim 4, characterized in that: deep-learning training is carried out on the student posture and facial expression models. Through model training and testing on samples, the improved OpenPose-based algorithm classifies and recognizes different actions by comparing the collected data with the established models. Actions such as playing with a mobile phone, lying on the desk or leaning back are trained into template models for comparison. The collected posture data are analyzed and compared with the templates; the student's current sitting state is judged from the similarity, a threshold is set, and when the similarity exceeds the threshold a voice reminder asks the student to mind his or her sitting posture.
6. Mental-state detection of students realized with the neural network training module of claim 4, characterized in that: a suitable convolutional neural network model is built to train on and learn students' different learning states and whether their attention is focused, and by setting and adjusting the parameters the learning state and concentration are judged from the student's facial data. The collected facial expressions are analyzed and compared with the trained templates, and the student's current mental state is judged from the similarity. In summary, machine vision is adopted, combining image processing with a convolutional neural network: the facial image data collected by the sensor are sampled, and body-state and facial-fatigue features are trained and learned by the network model so as to judge the student's state.
7. The OpenPose-based method for monitoring and analyzing a student's posture and evaluating efficiency during writing homework according to claim 1, characterized in that the method further comprises a summary feedback module which finally produces, from the student's behavior over the whole homework session, a behavior data report including the duration of focused study, the duration of standard sitting posture, the duration of inattention, and the duration of non-standard sitting posture.
CN202010997266.3A 2020-09-21 2020-09-21 OpenPose-based student posture detection analysis and efficiency evaluation method Pending CN114255508A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010997266.3A CN114255508A (en) 2020-09-21 2020-09-21 OpenPose-based student posture detection analysis and efficiency evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010997266.3A CN114255508A (en) 2020-09-21 2020-09-21 OpenPose-based student posture detection analysis and efficiency evaluation method

Publications (1)

Publication Number Publication Date
CN114255508A true CN114255508A (en) 2022-03-29

Family

ID=80789159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010997266.3A Pending CN114255508A (en) 2020-09-21 2020-09-21 OpenPose-based student posture detection analysis and efficiency evaluation method

Country Status (1)

Country Link
CN (1) CN114255508A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115337607A (en) * 2022-10-14 2022-11-15 佛山科学技术学院 Upper limb movement rehabilitation training method based on computer vision
CN116645721A (en) * 2023-04-26 2023-08-25 贵州大学 Sitting posture identification method and system based on deep learning
CN116645721B (en) * 2023-04-26 2024-03-15 贵州大学 Sitting posture identification method and system based on deep learning
CN116563797A (en) * 2023-07-10 2023-08-08 安徽网谷智能技术有限公司 Monitoring management system for intelligent campus
CN116563797B (en) * 2023-07-10 2023-10-27 安徽网谷智能技术有限公司 Monitoring management system for intelligent campus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination