CN110176025B - Invigilator tracking method based on posture - Google Patents

Invigilator tracking method based on posture

Info

Publication number
CN110176025B
Authority
CN
China
Prior art keywords
point
tracking
target
included angle
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910336880.2A
Other languages
Chinese (zh)
Other versions
CN110176025A (en)
Inventor
石祥滨
刘芳
张德园
杨啸宇
毕静
武卫东
李照奎
刘翠微
代钦
代海龙
王俊远
王佳
李浩文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Tuwei Technology Co Ltd
Original Assignee
Shenyang Tuwei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Tuwei Technology Co Ltd filed Critical Shenyang Tuwei Technology Co Ltd
Priority to CN201910336880.2A priority Critical patent/CN110176025B/en
Publication of CN110176025A publication Critical patent/CN110176025A/en
Application granted granted Critical
Publication of CN110176025B publication Critical patent/CN110176025B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a posture-based invigilator tracking method, which determines the positions of different persons by analyzing the relationships among the joint points of the upper half of the human body, obtains the invigilator's position from human-body joint-point data generated by deep learning, and matches targets across consecutive frames by extracting invigilator features and comparing their similarity. The method comprises the following steps: initializing a tracking target, in which the joint-point information of all persons is collected and the invigilators are detected; matching the tracking target, in which the features extracted from the invigilators in two consecutive frames are compared and the candidate with the maximum similarity is taken as the tracking target; and adaptive tracking, in which conditions such as target loss and mistracking are handled in time so that a lost target is found quickly and accurately. With this method, the position of the invigilator in the examination room can be acquired accurately, providing a reference for subsequent invigilator action recognition.

Description

Invigilator tracking method based on posture
Technical Field
The invention belongs to the technical field of target tracking based on computer vision and video understanding, and particularly relates to a posture-based invigilator tracking method.
Background
At present, after large-scale examinations such as the college entrance examination, postgraduate entrance examination, self-taught higher education examination and academic proficiency test, considerable manpower is needed to watch examination videos and analyze discipline problems in the examination, for example invigilators failing to perform their duties. When there are too many monitoring points, staff easily suffer visual fatigue, leading to many false and missed reports and making video retrieval difficult; increasing the number of staff, in turn, seriously wastes human resources. An examination-video big-data analysis system or method is therefore needed that can automatically analyze the behaviors of examinees and invigilators and thereby expose problems in an examination, and invigilator tracking plays a vital role in analyzing and recognizing invigilator behavior.
Invigilator tracking detects, extracts, identifies and tracks the invigilator in a video sequence to obtain the invigilator's motion parameters such as position, velocity, acceleration and motion trajectory, enabling further processing and analysis, understanding of the invigilator's behavior, and completion of higher-level detection tasks.
Numerous multi-target tracking methods exist at home and abroad. Traditional tracking methods mainly include those based on probability density distribution and mean shift, such as mean-shift and Cam-shift. The particle filter method is based on the statistics of particle distributions, and a Kalman filter is often used to describe the motion model of a target; in feature-point-based optical-flow tracking, feature points are extracted from the target and their optical-flow matching points are computed in the next frame to obtain the target position statistically. The TLD algorithm proposed by Kalal et al. trains a tracking detector with an online learning mechanism, proposes a semi-supervised learning method, and updates the model under P-N learning constraints; it is a typical discriminative target-tracking algorithm built on the tracking-by-detection idea. Since the emergence of correlation-filter tracking algorithms, robustness and accuracy have surpassed those of traditional algorithms while speed is maintained. The Minimum Output Sum of Squared Error (MOSSE) filter proposed by Bolme et al. first applied correlation filtering to target tracking; the main idea of correlation-filter tracking is that the more similar two signals are, the higher their correlation value, so tracking amounts to finding the region of a video frame with the maximum response to the initialized tracking target. Henriques et al. proposed the CSK method, introducing circulant-matrix-based kernel tracking to solve the dense-sampling problem and using the Fourier transform to implement the detection process quickly. The KCF algorithm (High-Speed Tracking with Kernelized Correlation Filters), later proposed by the same authors, collects positive and negative samples with a circulant matrix, trains the target detector by ridge regression, and exploits the diagonalization of circulant matrices in Fourier space to reduce computation effectively. The SAMF algorithm proposed by Li et al., built on KCF, combines HOG, CN and gray-level features and uses a scale-pool technique to achieve target adaptation, solving the problem of target scale change during tracking.
The past three years have seen rapid development of deep learning, which has also been applied successfully to the field of target tracking. Deep-learning trackers mostly combine offline training with online updating, but model fine-tuning on positive and negative samples generally makes them slow. The Deep Learning Tracker proposed by Wang et al. first applied a deep network to single-target tracking, addressing the shortage of training samples through offline training followed by online fine-tuning. The MDNet algorithm proposed by Nam et al. likewise uses an online network-update strategy, achieving higher accuracy but lower speed. The GOTURN algorithm (Generic Object Tracking Using Regression Networks) proposed by Held et al. was the first deep-learning tracker to reach 100 FPS, though with lower accuracy. In pose tracking, Xiu et al. designed an optimization network that builds frame-to-frame associations to form pose flows, and proposed pose-flow non-maximum suppression to reduce redundant pose flows and re-link temporally disjoint pose associations.
Disclosure of Invention
The technical task of the invention is to provide a posture-based invigilator tracking method that addresses the defects of the prior art. The method fuses human-posture features with position information to match corresponding targets, and provides a simplified algorithm that achieves robust tracking by matching the fused features against the similarity of candidate targets supplied by a pose-detection algorithm.
The technical solution adopted by the invention to solve the technical problem is as follows: a posture-based invigilator tracking method comprising the following steps:
step 1, acquiring the position information of the 18 joint points of every person in each frame of the video through the OpenPose model;
step 2, initializing a tracking target: acquiring neck point position information and posture included angle information of all invigilators according to all joint point data of the current frame;
step 3, calculating the posture included-angle information of each candidate target from all the joint-point data of the next frame;
step 4, performing similarity calculation between the initialized tracking targets and the candidate targets;
step 5, matching the tracking target according to the similarity and updating the posture included angle information and the neck point position information of the initialized tracking target;
step 6, counting whether the number of successful matches equals the number of initialized tracking targets; if so, setting the missing-frame count Frame to 0; if not, setting Frame = Frame + 1;
and step 7, repeating steps 3, 4, 5 and 6 until all data have been processed.
Further, while step 6 is being repeated, when the missing-frame count Frame equals the threshold, the tracking targets are reinitialized and the posture included-angle information of the invigilators is recalculated; this outer loop is outlined below.
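For orientation, steps 1 to 7 can be outlined as follows. This is a minimal, hypothetical Python sketch: get_keypoints, init_targets and match_targets are placeholder names for the operations detailed in the embodiment below, and FRAME_THRESHOLD stands for an implementation-chosen missing-frame threshold; none of these names appear in the patent itself.

```python
# Hypothetical outline of steps 1-7. The helper callables are placeholders
# for the stage-level operations described in the embodiment below.

FRAME_THRESHOLD = 100  # missing-frame threshold (an assumed value)

def track_invigilators(video_frames, get_keypoints, init_targets, match_targets):
    # Steps 1-2: joint points of the first frame initialize the targets.
    targets = init_targets(get_keypoints(video_frames[0]))
    frame_missing = 0  # the counter called "Frame" in step 6
    for frame in video_frames[1:]:
        candidates = get_keypoints(frame)             # step 3
        matched = match_targets(targets, candidates)  # steps 4-5
        if len(matched) == len(targets):              # step 6
            targets, frame_missing = matched, 0
        else:
            frame_missing += 1
            if frame_missing == FRAME_THRESHOLD:      # reinitialize (further note)
                targets = init_targets(candidates)
                frame_missing = 0
    return targets                                    # step 7: loop until data ends
```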
Further, the specific operation of step (2) is as follows:
a. traversing all the joint points of the current frame; for any neck point (xi, yi) of the current frame and any examinee position (xs, ys), calculating the Euclidean distance:
dist = √((xi − xs)² + (yi − ys)²)
neck points whose distances dist to every examinee position are larger than the threshold are determined to be tracking targets;
b. calculating the included angle α1 between the line from the neck point to the left shoulder point and the line from the neck point to the right shoulder point; the included angle α2 between the line from the neck point to the left shoulder point and the line from the left shoulder point to the left elbow point; the included angle α3 between the line from the left shoulder point to the left elbow point and the line from the left elbow point to the left wrist point; the included angle α4 between the line from the neck point to the right shoulder point and the line from the right shoulder point to the right elbow point; and the included angle α5 between the line from the right shoulder point to the right elbow point and the line from the right elbow point to the right wrist point;
c. saving the tracking target's included-angle information set (α1 … α5) and the target neck-point position information (x, y).
Further, the similarity calculation and matching method for the initialized tracking targets and the candidate targets is as follows:
a. obtaining the five included-angle information sets (αi1 … αi5) of each candidate target according to step (3), comparing them with the included-angle information set (α1 … α5) of step (2), and screening the persons for whom the changes of all five included angles are smaller than the threshold t1, i.e. satisfying:
αn − t1 < αin < αn + t1 (αn − t1 > 0, 0 < n < 6);
acquiring the neck-point position information (xi, yi) of the screened persons, and calculating in turn the Euclidean distance to the neck point (x, y) of the tracking target:
dist = √((xi − x)² + (yi − y)²)
b. taking the minimum distance Min(dist): the corresponding neck point and its associated joint points are the joint points of the target, and the circumscribed rectangle formed by all these joint points is the tracking target.
Compared with the prior art, the technical scheme of the invention has the advantages that:
the method has the advantages that the problem of drift of long-time tracking cannot be solved by a classic human body tracking algorithm, and the algorithm is complex and cannot meet real-time performance, the similarity and the position characteristics of human body postures are fused for target matching, so that the complexity of the algorithm is simplified, and the target tracking can achieve a real-time effect; on the basis of the method, the number of the tracking targets of the current frame is detected in real time, whether the tracking targets are correct or not and whether the tracking targets are missing or not can be judged, and the adaptivity of a tracking algorithm is greatly improved, so that target tracking is completed better. The method has wide application prospect in the aspects of determining and tracking the invigilator teacher in a large-scale examination scene, determining the escape track of the suspect under security monitoring, monitoring the personnel entering and exiting the security room and the like.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the posture-based invigilator tracking method mainly includes the following three stages:
The first stage: initializing a tracking target
1. Acquiring, through the OpenPose model, the position information (X, Y) of the 18 joint points (eyes, nose, neck, etc.) of every person in each frame of the video, together with the corresponding confidence scores;
2. Traversing all joint points of the current frame; for any neck point (xi, yi) of the current frame and any examinee position (xs, ys), calculating the Euclidean distance:
dist = √((xi − xs)² + (yi − ys)²)
neck points whose distances dist to every examinee position are larger than the threshold D are determined to be tracking targets;
3. Calculating the included angle α1 between the line from the neck point k1 to the left shoulder point k2 and the line from the neck point k1 to the right shoulder point k5:
α1 = arccos(((k2 − k1) · (k5 − k1)) / (|k2 − k1| · |k5 − k1|))
The included angle α2 between the line from the neck point k1 to the left shoulder point k2 and the line from the left shoulder point k2 to the left elbow point k3 is calculated in the same way, as are the included angle α3 between the line from the left shoulder point k2 to the left elbow point k3 and the line from the left elbow point k3 to the left wrist point k4; the included angle α4 between the line from the neck point k1 to the right shoulder point k5 and the line from the right shoulder point k5 to the right elbow point k6; and the included angle α5 between the line from the right shoulder point k5 to the right elbow point k6 and the line from the right elbow point k6 to the right wrist point k7;
4. Saving the target's included angles (α1 … α5) and the target neck-point position information (x, y); a code sketch of this first stage follows.
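A minimal Python sketch of stage one, under stated assumptions: keypoints are given as (x, y) pairs ordered k1..k7 = neck, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist (OpenPose's actual 18-point layout differs, so this ordering is an assumption), and the distance threshold D = 150 pixels is purely illustrative.

```python
import math

# Segment pairs whose included angles give α1..α5. Indices are zero-based
# into the assumed order: 0=neck, 1=Lshoulder, 2=Lelbow, 3=Lwrist,
# 4=Rshoulder, 5=Relbow, 6=Rwrist.
ANGLE_PAIRS = [
    ((0, 1), (0, 4)),  # α1: neck-Lshoulder vs neck-Rshoulder
    ((0, 1), (1, 2)),  # α2: neck-Lshoulder vs Lshoulder-Lelbow
    ((1, 2), (2, 3)),  # α3: Lshoulder-Lelbow vs Lelbow-Lwrist
    ((0, 4), (4, 5)),  # α4: neck-Rshoulder vs Rshoulder-Relbow
    ((4, 5), (5, 6)),  # α5: Rshoulder-Relbow vs Relbow-Rwrist
]

def included_angle(p, q, r, s):
    """Angle in degrees between line p-q and line r-s (the arccos formula above)."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (s[0] - r[0], s[1] - r[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))  # clamp rounding

def pose_angles(kps):
    """kps: list of seven (x, y) keypoints in the assumed order k1..k7."""
    return [included_angle(kps[a], kps[b], kps[c], kps[d])
            for (a, b), (c, d) in ANGLE_PAIRS]

def init_targets(people, examinee_positions, D=150.0):
    """Keep persons whose neck point is farther than D from every examinee seat."""
    targets = []
    for kps in people:
        neck = kps[0]
        if all(math.dist(neck, seat) > D for seat in examinee_positions):
            targets.append({"neck": neck, "angles": pose_angles(kps)})
    return targets
```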
The second stage: target matching
1. Obtaining the joint-point information (X, Y) and corresponding confidence scores of all persons in the next frame, and calculating the 5 included angles (αi1 … αi5) of each person according to step 3 of the first stage;
2. Comparing each person's included-angle set (αi1 … αi5) with the previously stored set (α1 … α5), and screening the persons for whom the changes of all 5 included angles are smaller than the threshold t1, i.e. satisfying:
αn − t1 < αin < αn + t1 (αn − t1 > 0, 0 < n < 6)
acquiring the neck-point position information (xi, yi) of the screened persons, and calculating in turn the Euclidean distance to the neck point (x, y) of the tracking target:
dist = √((xi − x)² + (yi − y)²)
3. Taking the minimum distance Min(dist): the corresponding neck point and its associated joint points are the joint points of the target, and the circumscribed rectangle formed by all these joint points is the tracking target;
4. Updating the target's included-angle information and neck-point position information: let αn = αin and (x, y) = (xi, yi); a code sketch of this second stage follows.
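Continuing the same assumptions, stage two might be sketched as follows; the value t1 = 20° is illustrative, not from the patent.

```python
def match_target(target, people, t1=20.0):
    """Match one tracked invigilator against all persons of the next frame."""
    # Screening: keep only persons whose five included angles each changed
    # by less than t1 relative to the stored angles.
    survivors = []
    for kps in people:
        angles = pose_angles(kps)
        if all(abs(a_new - a_old) < t1
               for a_new, a_old in zip(angles, target["angles"])):
            survivors.append((kps, angles))
    if not survivors:
        return None  # the target was not found in this frame
    # Among survivors, the nearest neck point wins (Min(dist)).
    kps, angles = min(survivors,
                      key=lambda s: math.dist(s[0][0], target["neck"]))
    # Update the stored state: αn = αin, (x, y) = (xi, yi).
    target["angles"], target["neck"] = angles, kps[0]
    return kps
```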
The third stage: adaptive tracking
1. Judging whether the number of targets matched in each frame equals the initial number of tracking targets; if so, setting the missing-frame count to 0 (Num = 0); if not, incrementing it (Num = Num + 1);
2. When the missing-frame count reaches the threshold Q (Q = 100 in this embodiment), reinitializing the tracking targets and resetting the missing-frame count to 0; this adaptive loop is sketched below.
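Putting the pieces together, the adaptive stage could be driven as follows (Q = 100 as in this embodiment; init_targets and match_target are the hypothetical sketches above, and the per-frame data layout is assumed accordingly):

```python
Q = 100  # missing-frame threshold of this embodiment

def adaptive_track(frames_people, examinee_positions):
    """frames_people: one list of keypoint sets (all persons) per video frame."""
    targets = init_targets(frames_people[0], examinee_positions)
    num = 0  # missing-frame counter Num
    for people in frames_people[1:]:
        matches = [match_target(t, people) for t in targets]
        if all(m is not None for m in matches):
            num = 0                    # every target matched: Num = 0
        else:
            num += 1                   # a target is missing: Num = Num + 1
            if num >= Q:               # reinitialize and reset the counter
                targets = init_targets(people, examinee_positions)
                num = 0
    return targets
```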
In the invention, the distance threshold D, the included-angle-change threshold t1 and the missing-frame threshold Q can be set manually according to actual requirements to suit different tracking environments.
The working process of the invention is as follows: initialize the tracking targets by collecting the joint-point information of all persons and detecting the invigilators; match the tracking targets by comparing the similarity of features extracted from the invigilators in two consecutive frames, taking the candidate with the maximum similarity as the tracking target; and perform adaptive tracking, handling conditions such as target loss and mistracking in time so that lost targets are found quickly and accurately. With this method, the position of the invigilator in the examination room can be acquired accurately, providing a reference for subsequent invigilator action recognition.
The technical idea of the present invention is described in the above technical solutions, and the protection scope of the present invention is not limited thereto, and any changes and modifications made to the above technical solutions according to the technical essence of the present invention belong to the protection scope of the technical solutions of the present invention.

Claims (4)

1. A posture-based invigilator tracking method, characterized by comprising the following steps:
step 1, acquiring the position information of the 18 joint points of every person in each frame of the video through the OpenPose model;
step 2, initializing a tracking target: acquiring neck point position information and posture included angle information of all invigilators according to all joint point data of the current frame;
step 3, calculating the posture included-angle information of each candidate target from all the joint-point data of the next frame;
step 4, performing similarity calculation between the initialized tracking targets and the candidate targets: comparing the posture included-angle information of each candidate target with that of all the invigilators, and screening out the persons whose included-angle changes are smaller than the threshold t1;
step 5, matching the tracking target according to the similarity and updating the posture included angle information and the neck point position information of the initialized tracking target;
step 6, counting whether the number of successful matches equals the number of initialized tracking targets; if so, setting the missing-frame count Frame to 0; if not, setting Frame = Frame + 1;
and step 7, repeating steps 3, 4, 5 and 6 until all data have been processed.
2. The posture-based invigilator tracking method according to claim 1, wherein, while step 6 is being repeated, when the missing-frame count Frame equals the threshold, the tracking targets are reinitialized and the posture included-angle information of the invigilators is recalculated.
3. The posture-based invigilator tracking method according to claim 1, wherein the specific operation of step (2) is as follows:
a. traversing all the joint points of the current frame; for any neck point (xi, yi) of the current frame and any examinee position (xs, ys), calculating the Euclidean distance:
dist = √((xi − xs)² + (yi − ys)²)
neck points whose distances dist to every examinee position are larger than the threshold are determined to be tracking targets;
b. calculating the included angle α1 between the line from the neck point to the left shoulder point and the line from the neck point to the right shoulder point; the included angle α2 between the line from the neck point to the left shoulder point and the line from the left shoulder point to the left elbow point; the included angle α3 between the line from the left shoulder point to the left elbow point and the line from the left elbow point to the left wrist point; the included angle α4 between the line from the neck point to the right shoulder point and the line from the right shoulder point to the right elbow point; and the included angle α5 between the line from the right shoulder point to the right elbow point and the line from the right elbow point to the right wrist point;
c. saving the tracking target's included-angle information set (α1 … α5) and the target neck-point position information (x, y).
4. The posture-based invigilator tracking method according to claim 3, wherein the similarity calculation and matching method for the initialized tracking target and the candidate targets is as follows:
a. obtaining the five included-angle information sets (αi1 … αi5) of each candidate target according to step (3), comparing them with the included-angle information set (α1 … α5) of step (2), and screening the persons for whom the changes of all five included angles are smaller than the threshold t1, i.e. satisfying:
αn − t1 < αin < αn + t1 (αn − t1 > 0, 0 < n < 6);
acquiring the neck-point position information (xi, yi) of the screened persons, and calculating in turn the Euclidean distance to the neck point (x, y) of the tracking target:
dist = √((xi − x)² + (yi − y)²)
b. taking the minimum distance Min(dist): the corresponding neck point and its associated joint points are the joint points of the target, and the circumscribed rectangle formed by all these joint points is the tracking target.
CN201910336880.2A 2019-04-25 2019-04-25 Invigilator tracking method based on posture Active CN110176025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910336880.2A CN110176025B (en) 2019-04-25 2019-04-25 Invigilator tracking method based on posture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910336880.2A CN110176025B (en) 2019-04-25 2019-04-25 Invigilator tracking method based on posture

Publications (2)

Publication Number Publication Date
CN110176025A CN110176025A (en) 2019-08-27
CN110176025B true CN110176025B (en) 2021-06-18

Family

ID=67690078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910336880.2A Active CN110176025B (en) 2019-04-25 2019-04-25 Invigilator tracking method based on posture

Country Status (1)

Country Link
CN (1) CN110176025B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781763B (en) * 2019-09-30 2022-06-17 沈阳图为科技有限公司 Human body looking-at motion detection method based on posture
CN110751062B (en) * 2019-09-30 2022-04-05 沈阳图为科技有限公司 Examinee attitude sequence generation method based on attitude voting
CN112818796B (en) * 2021-01-26 2023-10-24 厦门大学 Intelligent gesture distinguishing method and storage device suitable for online prison scene
CN112990137B (en) * 2021-04-29 2021-09-21 长沙鹏阳信息技术有限公司 Classroom student sitting posture analysis method based on template matching
CN113239791B (en) * 2021-05-11 2022-08-23 青岛以萨数据技术有限公司 Examiner abnormal behavior monitoring method and system based on neural network and target tracking

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184541A (en) * 2011-05-04 2011-09-14 西安电子科技大学 Multi-objective optimized human body motion tracking method
CN106355603A (en) * 2016-08-29 2017-01-25 深圳市商汤科技有限公司 Method and device for human tracking

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184541A (en) * 2011-05-04 2011-09-14 西安电子科技大学 Multi-objective optimized human body motion tracking method
CN106355603A (en) * 2016-08-29 2017-01-25 深圳市商汤科技有限公司 Method and device for human tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Social Grouping for Multi-Taget Tracking and Head Pose Estimation in video";Zhen Qin;《IEEE Transaction on Pattern Analysis and Machine Intelligence》;20151103;第38卷(第10期);第1-14页 *
"一种结合区域特征的人体标记跟踪方法";石祥滨;《沈阳航空航天大学学报》;20140430;第31卷(第2期);第59-64页 *

Also Published As

Publication number Publication date
CN110176025A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
CN110176025B (en) Invigilator tracking method based on posture
CN110852219B (en) Multi-pedestrian cross-camera online tracking system
CN108447080B (en) Target tracking method, system and storage medium based on hierarchical data association and convolutional neural network
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
CN111611905B (en) Visible light and infrared fused target identification method
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN110135282B (en) Examinee return plagiarism cheating detection method based on deep convolutional neural network model
CN109298785A (en) A kind of man-machine joint control system and method for monitoring device
CN109190544B (en) Human identity recognition method based on sequence depth image
KR102132722B1 (en) Tracking method and system multi-object in video
CN109544523B (en) Method and device for evaluating quality of face image based on multi-attribute face comparison
CN109815816B (en) Deep learning-based examinee examination room abnormal behavior analysis method
CN109858457A (en) Cheating movement based on OpenPose assists in identifying method and system
CN107564035B (en) Video tracking method based on important area identification and matching
CN103593679A (en) Visual human-hand tracking method based on online machine learning
CN112434599B (en) Pedestrian re-identification method based on random occlusion recovery of noise channel
CN112926522B (en) Behavior recognition method based on skeleton gesture and space-time diagram convolution network
CN108537829A (en) A kind of monitor video personnel state recognition methods
CN112287753A (en) System for improving face recognition precision based on machine learning and algorithm thereof
CN113963399A (en) Personnel trajectory retrieval method and device based on multi-algorithm fusion application
CN111860117A (en) Human behavior recognition method based on deep learning
Almajai et al. Anomaly detection and knowledge transfer in automatic sports video annotation
CN105335695A (en) Glasses detection based eye positioning method
CN105631410B (en) A kind of classroom detection method based on intelligent video processing technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201102

Address after: Room d09-629, international software park, No. 863-9, shangshengou village, Hunnan District, Shenyang City, Liaoning Province

Applicant after: Shenyang Tuwei Technology Co., Ltd

Address before: 110136, Liaoning, Shenyang, Shenbei New Area moral South Avenue No. 37

Applicant before: SHENYANG AEROSPACE University

GR01 Patent grant
GR01 Patent grant