CN114255507A - Student posture recognition analysis method based on computer vision - Google Patents
- Publication number
- CN114255507A (application CN202010997257.4A)
- Authority
- CN
- China
- Prior art keywords
- posture
- student
- homework
- standard
- computer vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
Abstract
The invention provides a student posture recognition and analysis method based on computer vision, comprising the following steps: 1) real-time posture data of the student during homework are acquired through a video sensor; 2) the distance between the student's body and the desk is measured in real time with an infrared distance sensor; 3) the collected data cover the student's shoulders, neck, head, elbow joints, wrists, fingers, waist, knees, feet, and facial features such as the eyes, nose, and mouth; 4) the student's posture while doing homework is analyzed with computer vision techniques and deep learning algorithms such as convolutional neural networks; 5) the standard sitting posture and the problem postures that easily appear during homework, such as a lowered head, lying prone, or leaning to one side, are trained and a comparison database is established; 6) the video is processed with the DensePose technique, the result is compared with the trained data to judge the student's current posture, and a voice prompt is issued if the posture is not standard.
Description
Technical Field
The invention relates to the technical field of computer vision image recognition, in particular to a student posture recognition analysis method.
Background
Vision is the most important way for humans to observe and understand the world; roughly 75% of the information people obtain from the outside world in daily life comes from the visual system. Since the advent of computers and the Internet, human cognitive and perceptual abilities have been greatly extended. Human action recognition based on computer vision has attracted growing attention since the end of the last century, and video action recognition has become a hot research topic at home and abroad. At present, many students do not keep a standard posture when doing homework. This not only affects writing speed and quality but also degrades eyesight, so that children come to wear glasses too early; failure to keep the waist straight affects the normal development of the spine and cervical vertebrae; and incorrect sitting postures, such as crossing the legs, affect the development of the leg bones and muscles. An incorrect homework posture therefore has a great influence on students' physical health.
Given these problems, students should be reminded of sitting-posture issues in time, but parents in modern society are under heavy work and life pressure, and most have no time to watch over their children. The invention therefore provides a computer-vision-based student posture recognition and analysis method that can remind children by voice of problems such as an irregular sitting posture, helping students keep a good sitting posture and stay healthy.
Disclosure of Invention
The invention provides a student posture recognition and analysis method based on computer vision, comprising the following steps: 1) real-time posture data of the student during homework are acquired through a video sensor; 2) the distance between the student's body and the desk is measured in real time with an infrared distance sensor; 3) the collected data cover the student's shoulders, neck, head, elbow joints, wrists, fingers, waist, knees, feet, and facial features such as the eyes, nose, and mouth; 4) the student's posture while doing homework is analyzed with computer vision techniques, a convolutional neural network, and the DensePose technique; 5) the standard sitting posture and the states that easily appear during homework, including facial expressions and actions such as a lowered head, lying prone, or leaning to one side when attention is not focused, are trained and a comparison database is established; 6) the student's current posture is judged by comparison with the trained data, and a voice prompt is given if the posture is not standard or attention is not focused.
In order to achieve this purpose, the invention adopts the following technical scheme. Video action recognition is treated as a classification task; traditional methods basically follow the computer-vision framework of extracting features with a convolutional neural network and classifying on those features, and the DensePose technique is used to judge the student's posture and facial expression, making it well suited as the data basis of the invention. The computer-vision-based student posture recognition and analysis method comprises the following stages:
and step one, acquiring actions, namely sampling and shooting standard and various easily-appearing problem postures by a visual sensor when a student does homework.
And step two, data classification, namely, performing feature extraction and posture classification on the image data collected in the step one.
And step three, identifying and training the collected and classified pictures by using a Convolutional Neural Network (CNN), and performing grouping training on standard posture and various easily-appearing problem postures. And obtaining a comparison recognition model of the correct posture and various wrong postures.
And fourthly, judging the posture of the student in doing homework, shooting the posture in the doing homework process, and extracting the characteristics. And estimating the posture of the student by using a DensePose technology, and then searching and comparing through the posture comparison recognition model obtained in the third stage.
And fifthly, outputting a comparison result, setting a threshold value, and finally giving an accurate judgment to the posture and posture of the student.
The first stage comprises the following specific steps:
Step S101: posture images of the student are collected dynamically by the video sensor; a timer acquires image frames at a fixed frequency.
Step S102: the distance between the student's head and the desk is measured with an infrared distance sensor.
Step S103: the data collected by the various sensors are cached in system memory to await further processing.
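Step S101's fixed-frequency acquisition can be sketched as a simple frame-index sampler. The camera frame rate and the target analysis rate below are assumptions for illustration; a real system would read frames from the video sensor (for example with OpenCV) rather than compute indices over a synthetic stream.

```python
# Sketch of fixed-frequency frame sampling (step S101).
# The fps and target rate are illustrative assumptions.

def sample_indices(total_frames: int, camera_fps: float, target_hz: float) -> list:
    """Return the frame indices to keep so that roughly `target_hz`
    frames per second are analysed instead of the full camera stream."""
    step = max(1, round(camera_fps / target_hz))  # keep every `step`-th frame
    return list(range(0, total_frames, step))

if __name__ == "__main__":
    # 30 fps camera, analyse 2 postures per second -> every 15th frame
    print(sample_indices(total_frames=90, camera_fps=30.0, target_hz=2.0))
    # [0, 15, 30, 45, 60, 75]
```

The sampled frames, together with the infrared distance readings of step S102, would then be cached as in step S103.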
The invention targets the scene of detecting a student's posture during homework; the scene is simple and the background and camera are essentially static, so a global feature extraction method can be used. The second stage therefore comprises the following steps:
Step S201: a background subtraction (BS) method locates the student in the video; this requires modeling the background, and a Kalman filter is used for the modeling to suppress background noise and achieve a better result.
Step S202: features are extracted from the acquired image frames with a convolutional neural network (CNN), and the fitting capacity of the whole model is controlled through the choice of convolutions, pooling, and the size of the final output feature vector.
Step S203: the human posture in the picture is then classified with a support vector machine (SVM).
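Step S201 can be illustrated with a minimal background-subtraction sketch. A real implementation would keep the background model up to date (the description uses a Kalman filter; OpenCV's `createBackgroundSubtractorMOG2` is a common substitute); the static background and the fixed threshold below are assumptions for illustration.

```python
import numpy as np

# Minimal background-subtraction sketch for step S201. A static background
# and a fixed intensity threshold stand in for the Kalman-filter model
# described above; this is an illustrative assumption, not the real pipeline.

def foreground_mask(frame: np.ndarray, background: np.ndarray,
                    threshold: float = 25.0) -> np.ndarray:
    """Mark pixels whose intensity differs from the background model."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold

background = np.zeros((4, 4), dtype=np.uint8)  # empty desk scene
frame = background.copy()
frame[1:3, 1:3] = 200                          # "student" enters the scene
mask = foreground_mask(frame, background)
print(int(mask.sum()))  # 4 foreground pixels
```

The masked region would then be cropped and passed to the CNN feature extractor of step S202 and the SVM classifier of step S203.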
The third stage comprises the following steps:
step S301: and carrying out image operations such as rotation, cutting, translation and the like on the images collected and classified in the last stage, so as to expand training data and improve generalization capability and model robustness.
Step S302: in the convolution process, two convolution kernels of 3 x 3 are adopted to replace one convolution kernel of 5 x 5, the number of parameters is reduced, the depth of the network can be improved under the condition that the same receptive field is guaranteed, and a better training effect can be achieved.
Step S303: after deep training, a model is established, the models of various posture states are marked in a standard and unified mode, a comparison database is established, and a comparison identification model of the standard posture and various error postures is obtained.
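The parameter saving claimed in step S302 is easy to verify by arithmetic: two stacked 3×3 convolutions cover the same 5×5 receptive field with 18·C² weights instead of 25·C². A short check (the channel width of 64 is an assumption; biases are omitted):

```python
# Weight-count comparison for step S302: two stacked 3x3 convolutions vs.
# one 5x5 convolution, same receptive field, C channels in and out.

def conv_params(kernel: int, c_in: int, c_out: int) -> int:
    """Number of weights in one convolution layer (biases omitted)."""
    return kernel * kernel * c_in * c_out

c = 64  # assumed channel width, identical in and out
two_3x3 = 2 * conv_params(3, c, c)  # stacked 3x3 convs: 5x5 receptive field
one_5x5 = conv_params(5, c, c)      # single 5x5 conv
print(two_3x3, one_5x5)  # 73728 102400
```

So the stacked design uses 72% of the weights while adding one extra nonlinearity, which is the depth benefit the step describes.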
DensePose divides the human body into 24 parts plus 1 background. In the network details it treats the regression problem locally: a binary classification first separates foreground from background, and the foreground is then subdivided into body parts, each regressed separately. The aim is to map every human key point in the RGB image onto the 3D surface of the human body.
the fourth stage comprises the following specific steps:
step S401: through the first stage and the second stage, the real-time videos of the students for doing homework are uploaded to the data processing unit for preprocessing, the image quality effect is improved, the characteristic effect is highlighted, and the overall efficiency is improved.
Step S402: and processing the preprocessed RGB image by using a DensePose technology, and detecting by using fast-RCNN to obtain a human region detection image.
Step S403: the detection map is partitioned using a conventional neural network.
Step S404: and (3) parameterizing each part of the image subjected to the blocking processing by using a local two-dimensional coordinate system, and identifying and predicting the position of the point on the curved surface model through a regression function.
Step S405: converting the points obtained from the previous step into thermodynamic diagrams IUV (I: body block diagram. U, V: UV mapping in curved surface model)
Step S406: after all the thermodynamic diagrams are obtained, the geometric texture information human body representation DensePose is utilized to judge the posture and posture of the student, and the judgment and comparison are carried out on the posture and posture of the student and the comparison recognition model obtained through training in the third stage.
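The part assignment of steps S403 to S406 can be sketched, under assumptions, as an argmax over per-part score maps: with 24 body parts plus one background channel, each pixel is assigned the highest-scoring part (the I channel of the IUV output). The synthetic scores below are stand-ins for real DensePose network output, not actual inference.

```python
import numpy as np

# Hedged sketch of the part-index ("I") channel of a DensePose-style IUV
# output: 25 score maps (background + 24 body parts), argmaxed per pixel.
# The scores are synthetic; a real system would take them from the network.

def part_index_map(scores: np.ndarray) -> np.ndarray:
    """scores: (25, H, W) = background + 24 body parts -> (H, W) part ids."""
    return scores.argmax(axis=0)

h, w = 2, 2
scores = np.zeros((25, h, w))
scores[0] += 1.0        # background wins everywhere...
scores[3, 0, 0] = 5.0   # ...except one pixel assigned to body part 3
ids = part_index_map(scores)
print(ids.tolist())  # [[3, 0], [0, 0]]
```

The U and V channels would then give each foreground pixel its coordinates on the 3D surface model, as described in steps S404 and S405.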
Stage five comprises the following steps:
step S501: and outputting the posture judgment result of the stage four to be compared with the trained comparison and recognition model of the stage three.
Step S502: and setting a threshold, wherein the relative standard of the sitting posture is not exceeded, and if the threshold is exceeded, the student is prompted to pay attention to the sitting posture by voice.
Step S503: finally, according to the specific situation of the student in the whole process of doing homework, a behavior data report of the student in the whole process of doing homework is formed, wherein the behavior data report comprises the time length of study concentration, the time length of standard sitting posture, the time length of vague sitting posture and the time length of non-standard sitting posture. The behavior data during the whole process of the homework form the homework report of the student.
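The report of step S503 reduces to aggregating per-frame posture labels into durations. The label names and the frame period below are assumptions for illustration; the real system would consume the classifier's output stream.

```python
from collections import Counter

# Sketch of the stage-five homework report (step S503): per-frame posture
# labels are aggregated into seconds spent in each state. Label names and
# the frame period are illustrative assumptions.

def homework_report(labels: list, seconds_per_frame: float) -> dict:
    """Total seconds spent in each posture state over the session."""
    counts = Counter(labels)
    return {state: n * seconds_per_frame for state, n in counts.items()}

session = ["standard", "standard", "head_down", "standard", "slouching"]
report = homework_report(session, seconds_per_frame=0.5)
print(report)  # {'standard': 1.5, 'head_down': 0.5, 'slouching': 0.5}
```

Thresholding these durations against the limits of step S502 would decide when a voice prompt is issued.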
Drawings
To clearly illustrate the technical solution of the invention, the main process is described in the following flowcharts. The drawings show only some examples of the invention and should not be considered limiting; all equivalent changes and modifications made by those skilled in the art without departing from the concept and principle of the invention belong to its protection scope.
FIG. 1 is a flowchart of the stage one operation;
FIG. 2 is a flowchart of the stage two operation;
FIG. 3 is a flowchart of the stage three operation;
FIG. 4 is a flowchart of the stage four operation;
FIG. 5 is a flowchart of the stage five operation.
Claims (6)
1. A student posture detection and analysis method based on computer vision, comprising a memory, a data processing chip, and various sensors, characterized in that the camera sensor module comprises a depth camera unit for measuring depth and contour information of people and scenery within the visual range, a main camera unit for ordinary photography, and a connected somatosensory recognition unit for recognizing the student's posture and facial expression information.
2. A method for optimizing action acquisition, action classification, and action recognition, characterized in that:
1) a background subtraction method locates the student in the video during the action acquisition stage;
2) a Kalman filter then models the background to eliminate background noise and achieve a better result;
3) picture features are extracted with a CNN;
4) the acquired pictures are classified with a support vector machine (SVM);
5) the training data set is enlarged through operations such as translation, rotation, and cropping, improving generalization and model robustness;
6) in the convolution layers, two 3×3 convolution kernels replace one 5×5 kernel, reducing the number of parameters while keeping the same receptive field and achieving a better training result;
7) the model established after training is labeled in a standard, unified way and a comparison database is built;
8) the acquired motion image is processed with the DensePose technique (detection, partitioning, coordinate regression, UV-map conversion, and so on), and the real-time posture is then judged;
9) the result is looked up and compared with the comparison recognition model obtained in 7);
10) a threshold is set; if the deviation from the standard sitting posture does not exceed the threshold no action is taken, and if it does, the student is prompted by voice to mind the sitting posture.
3. The computer-vision-based student posture detection and analysis method, characterized by comprising the following five stages:
Stage one, action acquisition: while the student does homework, a visual sensor samples and photographs the standard posture and the various problem postures that easily appear.
Stage two, data classification: feature extraction and posture classification are performed on the image data collected in stage one.
Stage three, recognition and training: the collected and classified pictures are trained with a convolutional neural network (CNN), grouping the standard overall posture and the various problem postures, to obtain a comparison recognition model of the correct posture and the various wrong postures.
Stage four, posture judgment: the student's posture during homework is photographed and features are extracted; the posture is estimated with the DensePose technique and then looked up and compared against the posture comparison recognition model obtained in stage three.
Stage five, result output: a threshold is set and a final, accurate judgment of the student's posture is given.
4. The first stage of claim 2, wherein body parts of the student such as the head, shoulders, hands, upper arms, lower arms, elbows, and torso are marked and photographed during action acquisition.
5. The second stage of claim 2, characterized in that the collected postures are classified into the standard posture and the various classes of problem-prone postures.
6. The OpenPose-based system of claim 1 for monitoring, analyzing, and evaluating the posture and efficiency of a student writing homework, further comprising a summary feedback module that, based on the student's behavior over the whole homework session, finally forms a behavior data report focusing on the duration of focused study, of standard sitting posture, of distracted sitting, and of non-standard sitting posture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010997257.4A CN114255507A (en) | 2020-09-21 | 2020-09-21 | Student posture recognition analysis method based on computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010997257.4A CN114255507A (en) | 2020-09-21 | 2020-09-21 | Student posture recognition analysis method based on computer vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114255507A true CN114255507A (en) | 2022-03-29 |
Family
ID=80789068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010997257.4A Pending CN114255507A (en) | 2020-09-21 | 2020-09-21 | Student posture recognition analysis method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114255507A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563797A (en) * | 2023-07-10 | 2023-08-08 | 安徽网谷智能技术有限公司 | Monitoring management system for intelligent campus |
CN116563797B (en) * | 2023-07-10 | 2023-10-27 | 安徽网谷智能技术有限公司 | Monitoring management system for intelligent campus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |