CN110738151B - Examination room video human body main posture determining method adopting motion coding - Google Patents


Info

Publication number
CN110738151B
CN110738151B (application No. CN201910937648.4A)
Authority
CN
China
Prior art keywords
motion
gesture
examinee
coding
posture
Prior art date
Legal status
Active
Application number
CN201910937648.4A
Other languages
Chinese (zh)
Other versions
CN110738151A (en)
Inventor
石祥滨
代海龙
刘芳
李浩文
杨啸宇
王俊远
Current Assignee
Shenyang Tuwei Technology Co ltd
Original Assignee
Shenyang Tuwei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Tuwei Technology Co ltd filed Critical Shenyang Tuwei Technology Co ltd
Priority to CN201910937648.4A priority Critical patent/CN110738151B/en
Publication of CN110738151A publication Critical patent/CN110738151A/en
Application granted granted Critical
Publication of CN110738151B publication Critical patent/CN110738151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for determining the main human-body posture in examination-room video using motion coding, comprising the following steps. S1, motion coding of the examinee: extract the examinee's posture from each frame of the examinee video to form a posture sequence, then motion-code the examinee according to that sequence. S2, static-segment division: detect static segments in the examination video using the motion coding. S3, posture classification: calculate the mean posture of each static segment detected in S2 and create a posture-category array from the means. S4, main-posture determination: traverse the posture-category array and take the category whose postures have the longest total duration as the examinee's main posture. S5, determine the reasonable motion range of the examinee's elbow joint. The method can accurately and quickly determine an examinee's main posture during the examination and the reasonable motion range of the elbow joint while the examinee holds that posture.

Description

Examination room video human body main posture determining method adopting motion coding
Technical Field
The invention belongs to the field of computer vision and video understanding, and in particular provides a method for determining the main human-body posture in examination-room video using motion coding.
Background
To analyze exam-conduct problems such as examinee cheating and invigilator absence, examination videos from large-scale examinations (college entrance examinations, postgraduate entrance examinations, adult self-study examinations, academic proficiency tests, and so on) must be analyzed manually after the fact. Such analysis costs a great deal of time and money; large examinations in particular require substantial manpower just to view the video. An automated big-data analysis system for examination video is therefore urgently needed, so that examinee behavior, and the problems it reveals, can be analyzed automatically.
During an examination, the examinee's main activity is answering the exam paper and the main posture is the writing posture; the range of elbow-joint motion while in the writing posture is an important reference for judging whether the examinee is answering the paper or doing something else.
Accurately determining the examinee's main posture during the examination, and the corresponding elbow-joint motion range, is therefore an urgent problem to be solved.
Disclosure of Invention
In view of this, the present invention provides a method for determining the main human-body posture in examination-room video using motion coding, to address the heavy manpower and financial cost of manual exam-conduct analysis in the prior art.
The technical scheme provided by the invention is as follows: a method for determining the main human-body posture in examination-room video using motion coding comprises the following steps:
s1: and (3) carrying out motion coding on the examinee: extracting the posture of the examinee from each frame of the examinee video to form a posture sequence, then carrying out motion coding on the examinee according to the sequence, and measuring the motion of a plurality of joint points of the human body in a coding mode;
s2: dividing static fragments: detecting a still segment in the test video by using the motion coding in the S1, and screening the time period that the examinee is still during the test;
s3: and (4) posture classification: calculating the gesture mean value of each static segment detected in the S2, and creating a gesture category array according to the gesture mean value;
s4: dividing the main posture: traversing the gesture category array, calculating the total duration time of each category gesture, sorting the category in a descending order according to the total duration time, and then taking the category corresponding to the gesture with the longest total duration time as the main gesture of the examinee;
s5: determining the reasonable motion range of the elbow joint of the examinee: and determining the reasonable motion range of the elbow joint of the examinee by using the average position of the elbow joint in the examinee main posture category in the S4.
Preferably, in S1, the motion encoding includes: key point motion coding, included angle change coding, limb orientation change coding and shoulder orientation change coding.
Further preferably, the key-point motion coding comprises motion coding of the neck, left-shoulder, right-shoulder, left-elbow, right-elbow, left-wrist and right-wrist key points, and the coding process is as follows: first calculate the displacement dis of the corresponding key point over M frames; if dis < T1 there is no motion and the code is 0; if dis ≥ T1 there is motion, so calculate the motion direction and quantize it into eight 45° sectors, the sector (337.5°, 22.5°] being coded 1 and the code incrementing by 1 for each further 45°, giving codes 1 to 8, where M is the chosen number of image frames and T1 is a preset displacement reference value.
Further preferably, the included-angle change coding covers the angle between the left lower arm and the left upper arm, between the left upper arm and the shoulder, between the right lower arm and the right upper arm, and between the right upper arm and the shoulder: if the corresponding angle change is greater than T2 the code is 1, if it is less than -T2 the code is 2, and otherwise the code is 0, where T2 is a preset included-angle change reference value.
Further preferably, the limb-orientation change coding covers the directions of the left lower arm, left upper arm, right lower arm and right upper arm: if the corresponding orientation change angle is greater than the threshold T3 the code is 1, if it is less than -T3 the code is 2, and otherwise the code is 0, where T3 is a preset limb-orientation change reference value.
Further preferably, the shoulder-orientation change coding: the shoulder orientation takes one of three states, horizontal, left-leaning and right-leaning, coded 0, 1 and 2 respectively.
Further preferably, S2 includes the steps of:
s21: initializing a static segment array to be null;
S22: traverse the examinee's posture sequence; while the motion codes of consecutive frames of the examinee video do not change, continue to the next frame; when the codes change, put the posture sequence of the interval in which the codes did not change into the static-segment array; repeat until the whole posture sequence has been traversed.
Preferably, in S22 the duration of the time interval in which the motion codes do not change is also detected; the posture sequence of that interval is placed in the static-segment array only if the duration is greater than a preset value T4, so as to guarantee a minimum static-segment length.
More preferably, S3 specifically includes the following steps:
S31: traversing the static-segment array, and calculating the mean posture of each static-segment sequence as that segment's posture representative;
S32: initializing the posture-category array, putting the posture representative of the first static segment into it, and recording the start time, end time and duration of the corresponding static segment;
S33: traversing the static-segment array; if a segment's posture representative is similar to any posture in the posture-category array, the segment is classified into the corresponding category; otherwise, the representative is added to the posture-category array as a new category.
Further preferably, in S5 the reasonable motion range of the examinee's elbow joint is a circular region centered on the average position of the elbow joint within the examinee's main-posture category, with a radius of 1/4 of the examinee's average shoulder length.
According to this examination-room-video method based on motion coding, the main posture of the human body in the video (the writing posture) and the reasonable motion range of the elbow joint in that posture can be determined by statistical and clustering analysis of the human postures in the video; the method is also suitable for various kinds of indoor video analysis.
Detailed Description
The invention is further explained below with reference to specific embodiments, which do not limit the invention.
The invention provides a method for determining human body main posture of an examination room video by adopting motion coding, which comprises the following steps:
s1: and (3) carrying out motion coding on the examinee: extracting the posture of the examinee from each frame of the examinee video to form a posture sequence, then carrying out motion coding on the examinee according to the sequence, and measuring the motion of a plurality of joint points of the human body in a coding mode;
the motion encoding includes: key point motion coding, included angle change coding, limb orientation change coding, shoulder orientation change coding, wherein,
the key point motion coding comprises the motion coding of key points of a neck, a left shoulder, a right shoulder, a left elbow, a right elbow, a left wrist and a right wrist, and the coding process is as follows: firstly, calculating the displacement dis of the corresponding key point in the M frame, if dis < T1, then no motion exists, and coding as 0; if dis > -T1, motion exists, the motion direction is calculated, a direction is divided every 45 degrees, the direction is coded to be 1 in the interval (337.5, 22.5), the coding is changed by 1 every 45 degrees, the coding is respectively 1 to 8, M is the selected image frame number and can be obtained according to empirical data, and T1 is a preset displacement reference value;
Included-angle change coding covers the angle between the left lower arm and the left upper arm, between the left upper arm and the shoulder, between the right lower arm and the right upper arm, and between the right upper arm and the shoulder: if the corresponding angle change is greater than T2 the code is 1, if it is less than -T2 the code is 2, and otherwise the code is 0, where T2 is a preset included-angle change reference value;
Limb-orientation change coding covers the directions of the left lower arm, left upper arm, right lower arm and right upper arm: if the corresponding orientation change angle is greater than the threshold T3 the code is 1, if it is less than -T3 the code is 2, and otherwise the code is 0, where T3 is a preset limb-orientation change reference value;
Shoulder-orientation change coding: the shoulder orientation takes one of three states, horizontal, left-leaning and right-leaning, coded 0, 1 and 2 respectively.
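The included-angle and limb-orientation codings above share one signed-threshold pattern, and the shoulder state reduces to a tilt test. A sketch under stated assumptions (the helper names, the tolerance parameter, and which tilt sign counts as "left-leaning" are all assumptions, not from the patent):

```python
import math

def change_code(delta, t):
    """Signed-threshold pattern shared by the included-angle and
    limb-orientation change codings: 1 if the change exceeds t,
    2 if it falls below -t, otherwise 0."""
    if delta > t:
        return 1
    if delta < -t:
        return 2
    return 0

def shoulder_orientation_code(left_sh, right_sh, tol=3.0):
    """0 = horizontal, 1 = left-leaning, 2 = right-leaning.

    The tilt is the angle (degrees) of the line from the left to the
    right shoulder key point; a tilt within +/- tol counts as horizontal.
    """
    tilt = math.degrees(math.atan2(right_sh[1] - left_sh[1],
                                   right_sh[0] - left_sh[0]))
    if abs(tilt) <= tol:
        return 0
    return 1 if tilt < 0 else 2
```

The same `change_code` helper would be applied once per tracked angle (T2) and once per tracked limb direction (T3).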
S2: static-segment division: detecting static segments in the examination video by using the motion coding of S1, and screening out the periods in which the examinee is still during the examination;
the method specifically comprises the following steps:
s21: initializing a static segment array to be null;
S22: traverse the examinee's posture sequence; while the motion codes of consecutive frames of the examinee video do not change, continue to the next frame; when the codes change, put the posture sequence of the interval in which the codes did not change into the static-segment array; repeat until the whole posture sequence has been traversed.
In S22, the duration of the time interval in which the motion codes do not change is also detected; the posture sequence of that interval is placed in the static-segment array only if the duration is greater than a preset value T4, so as to guarantee a minimum static-segment length.
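Steps S21-S22 together with the T4 duration check can be sketched as follows (the frame-rate parameter and the list-of-index-pairs representation of the static-segment array are assumptions):

```python
def find_static_segments(frame_codes, fps, t4):
    """frame_codes: one motion-code vector (e.g. a tuple) per frame.

    Returns (start, end) frame-index pairs for maximal runs in which the
    code vector does not change and whose duration exceeds t4 seconds.
    """
    segments = []  # S21: initialise the static-segment array as empty
    run_start = 0
    for i in range(1, len(frame_codes) + 1):
        # a run ends at the last frame or where the code vector changes
        if i == len(frame_codes) or frame_codes[i] != frame_codes[run_start]:
            if (i - run_start) / fps > t4:  # keep only long-enough runs
                segments.append((run_start, i - 1))
            run_start = i
    return segments
```

At 25 fps with T4 = 1 s, for instance, only runs longer than 25 unchanged frames survive.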
S3: posture classification: calculating the mean posture of each static segment detected in S2, and creating a posture-category array according to the mean postures;
the method specifically comprises the following steps:
S31: traverse the static-segment array and calculate the mean posture of each static-segment sequence as that segment's posture representative;
S32: initialize the posture-category array, put the posture representative of the first static segment into it, and record the start time, end time and duration of the corresponding static segment;
S33: traverse the static-segment array; if a segment's posture representative is similar to any posture in the posture-category array, classify the segment into the corresponding category; otherwise, add the representative to the posture-category array as a new category.
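Steps S31-S33 amount to a greedy single-pass clustering of mean poses. A sketch under the assumptions that a pose is an array of joint coordinates and that "similar" means a Euclidean distance below a threshold (the patent does not fix the similarity measure):

```python
import numpy as np

def classify_static_segments(segments, poses, sim_thresh):
    """segments: (start, end) frame-index pairs; poses: array of shape
    (n_frames, n_joints, 2).

    Returns a list of categories, each holding a representative mean pose,
    its member segments, and the total duration in frames.
    """
    categories = []
    for start, end in segments:
        rep = poses[start:end + 1].mean(axis=0)  # S31: mean pose of the segment
        for cat in categories:  # S33: assign to the first similar category
            if np.linalg.norm(rep - cat["rep"]) < sim_thresh:
                cat["segments"].append((start, end))
                cat["duration"] += end - start + 1
                break
        else:  # no similar category found: open a new one (S32/S33)
            categories.append({"rep": rep,
                               "segments": [(start, end)],
                               "duration": end - start + 1})
    return categories
```

Because the first segment always opens the first category, this reproduces the S32 initialisation as a special case of S33.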
S4: main-posture determination: traverse the posture-category array, calculate the total duration of the postures in each category (the total frame count of a category's postures is a workable estimate of its duration), sort the categories by total duration in descending order, and take the category with the longest total duration as the main posture of the examinee;
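Given the category array from S3, step S4 reduces to picking the category with the largest total duration; continuing the sketch above (the dictionary field names are assumptions):

```python
def main_posture(categories):
    """S4: the category whose postures last longest overall is the main
    posture; total frame count stands in for total duration."""
    return max(categories, key=lambda cat: cat["duration"])
```

Sorting the categories by duration in descending order and taking the first, as the patent describes, is equivalent to taking this maximum.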
S5: determining the reasonable motion range of the examinee's elbow joint: determine it from the average position of the elbow joint within the main-posture category of S4;
Preferably, the reasonable motion range of the examinee's elbow joint is a circular region centered on the average elbow-joint position within the main-posture category, with a radius of 1/4 of the examinee's average shoulder length.
According to this method, the movement pattern of examinees is analyzed through human posture: by analyzing the posture data of each examinee individually, the main posture of every examinee can be determined, and the motion range of each examinee's elbow joint follows from it. The method offers good accuracy and robustness, and can determine the main postures and elbow-joint motion ranges of multiple people.
While embodiments of the present invention have been described in detail, the invention is not limited to the above embodiments; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the invention.

Claims (8)

1. A method for determining the main human-body posture in examination-room video using motion coding, characterized by comprising the following steps:
S1: motion coding of the examinee: extracting the posture of the examinee from each frame of the examination-room video to form a posture sequence, then motion-coding the examinee according to that sequence, measuring the motion of several joint points of the human body in coded form;
S2: static-segment division: detecting static segments in the examination-room video by using the motion coding of S1, and screening out the periods in which the examinee is still during the examination;
S2 includes the steps of:
S21: initializing a static-segment array as empty;
S22: traversing the examinee's posture sequence; while the motion codes of consecutive frames of the examination-room video do not change, continuing to the next frame; when the codes change, putting the posture sequence of the interval in which the codes did not change into the static-segment array; repeating until the whole posture sequence has been traversed;
S3: posture classification: calculating the mean posture of each static segment detected in S2, and creating a posture-category array according to the mean postures;
S3 specifically includes the following steps:
S31: traversing the static-segment array and calculating the mean posture of each static-segment sequence as that segment's posture representative;
S32: initializing the posture-category array, putting the posture representative of the first static segment into it, and recording the start time, end time and duration of the corresponding static segment;
S33: traversing the static-segment array; if a segment's posture representative is similar to any posture in the posture-category array, classifying the segment into the corresponding category; otherwise adding the representative to the posture-category array as a new category;
S4: main-posture determination: traversing the posture-category array, calculating the total duration of the postures in each category, sorting the categories by total duration in descending order, and taking the category with the longest total duration as the main posture of the examinee;
S5: determining the reasonable motion range of the examinee's elbow joint, using the average position of the elbow joint within the main-posture category of S4.
2. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 1, characterized in that: in S1, the motion coding includes key-point motion coding, included-angle change coding, limb-orientation change coding and shoulder-orientation change coding.
3. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 2, characterized in that: the key-point motion coding comprises motion coding of the neck, left-shoulder, right-shoulder, left-elbow, right-elbow, left-wrist and right-wrist key points, and the coding process is as follows: first calculate the displacement dis of the corresponding key point over M frames; if dis < T1 there is no motion and the code is 0; if dis ≥ T1 there is motion, the motion direction is calculated and quantized into eight 45° sectors, the sector (337.5°, 22.5°] being coded 1 and the code incrementing by 1 for each further 45°, giving codes 1 to 8, wherein M is the chosen number of image frames and T1 is a preset displacement reference value.
4. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 2, characterized in that: the included-angle change coding comprises change codes for the angle between the left lower arm and the left upper arm, between the left upper arm and the shoulder, between the right lower arm and the right upper arm, and between the right upper arm and the shoulder: if the corresponding angle change is greater than T2 the code is 1, if it is less than -T2 the code is 2, and otherwise the code is 0, wherein T2 is a preset included-angle change reference value.
5. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 2, characterized in that: the limb-orientation change coding comprises change codes for the directions of the left lower arm, left upper arm, right lower arm and right upper arm: if the corresponding orientation change angle is greater than the threshold T3 the code is 1, if it is less than -T3 the code is 2, and otherwise the code is 0, wherein T3 is a preset limb-orientation change reference value.
6. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 2, characterized in that: in the shoulder-orientation change coding, the shoulder orientation takes one of three states, horizontal, left-leaning and right-leaning, coded 0, 1 and 2 respectively.
7. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 1, characterized in that: S22 further comprises detecting the duration of the time interval in which the motion codes do not change, and putting the posture sequence of that interval into the static-segment array only if the duration is greater than a preset value T4, so as to guarantee the length of the static segment.
8. The method for determining the main human-body posture of the examination-room video by adopting motion coding as claimed in claim 1, characterized in that: in S5, the reasonable motion range of the examinee's elbow joint is a circular region centered on the average position of the elbow joint within the examinee's main-posture category, with a radius of 1/4 of the examinee's average shoulder length.
CN201910937648.4A 2019-09-30 2019-09-30 Examination room video human body main posture determining method adopting motion coding Active CN110738151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910937648.4A CN110738151B (en) 2019-09-30 2019-09-30 Examination room video human body main posture determining method adopting motion coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910937648.4A CN110738151B (en) 2019-09-30 2019-09-30 Examination room video human body main posture determining method adopting motion coding

Publications (2)

Publication Number Publication Date
CN110738151A CN110738151A (en) 2020-01-31
CN110738151B true CN110738151B (en) 2022-05-20

Family

ID=69269830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910937648.4A Active CN110738151B (en) 2019-09-30 2019-09-30 Examination room video human body main posture determining method adopting motion coding

Country Status (1)

Country Link
CN (1) CN110738151B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128336A (en) * 2021-03-10 2021-07-16 恒鸿达科技有限公司 Pull-up test counting method, device, equipment and medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793922B (en) * 2013-09-12 2016-07-06 电子科技大学 A kind of particular pose real-time detection method
CN109255296A (en) * 2018-08-06 2019-01-22 广东工业大学 A kind of daily Human bodys' response method based on depth convolutional neural networks
CN110147736A (en) * 2019-04-25 2019-08-20 沈阳航空航天大学 A kind of repetition anomaly detection method based on posture

Also Published As

Publication number Publication date
CN110738151A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN109863535B (en) Motion recognition device, storage medium, and motion recognition method
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN111726586A (en) Production system operation standard monitoring and reminding system
KR20200005987A (en) System and method for diagnosing cognitive impairment using touch input
WO2015153266A1 (en) Method and system for analyzing exam-taking behavior and improving exam-taking skills
US7409373B2 (en) Pattern analysis system and method
CN111814587A (en) Human behavior detection method, teacher behavior detection method, and related system and device
CN108762503A (en) A kind of man-machine interactive system based on multi-modal data acquisition
CN111816314B (en) Chest card selection, labeling and verification method for artificial intelligent screening of pneumoconiosis
CN110738151B (en) Examination room video human body main posture determining method adopting motion coding
CN106529470A (en) Gesture recognition method based on multistage depth convolution neural network
CN111523445A (en) Examination behavior detection method based on improved Openpos model and facial micro-expression
CN114863571A (en) Collaborative robot gesture recognition system based on computer vision
CN105631410B (en) A kind of classroom detection method based on intelligent video processing technique
CN117133057A (en) Physical exercise counting and illegal action distinguishing method based on human body gesture recognition
CN112036291A (en) Kinematic data model construction method based on motion big data and deep learning
CN114639168B (en) Method and system for recognizing running gesture
CN106446837B (en) A kind of detection method of waving based on motion history image
CN111507555B (en) Human body state detection method, classroom teaching quality evaluation method and related device
CN110175531B (en) Attitude-based examinee position positioning method
CN110751062B (en) Examinee attitude sequence generation method based on attitude voting
CN112288266A (en) Shunting hand signal processing method, shunting hand signal model obtaining method, shunting hand signal processing device, shunting hand signal model obtaining device, shunting hand signal processing equipment and shunting hand signal model obtaining medium
CN110781763B (en) Human body looking-at motion detection method based on posture
RU153699U1 (en) ANTHROPOMORPHIC ROBOT OF THE EDUCATIONAL PROCESS
CN113378772B (en) Finger flexible detection method based on multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201102

Address after: Room d09-629, international software park, No. 863-9, shangshengou village, Hunnan District, Shenyang City, Liaoning Province

Applicant after: Shenyang Tuwei Technology Co., Ltd

Address before: 110136, Liaoning, Shenyang, Shenbei New Area moral South Avenue No. 37

Applicant before: SHENYANG AEROSPACE University

GR01 Patent grant