CN110781769A - Method for rapidly detecting and tracking pedestrians - Google Patents

Method for rapidly detecting and tracking pedestrians

Info

Publication number
CN110781769A
CN110781769A (application CN201910947245.8A)
Authority
CN
China
Prior art keywords: pedestrian, frame, template, image, tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910947245.8A
Other languages
Chinese (zh)
Inventor
王朗 (Wang Lang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Institute of Technology of ZJU
Original Assignee
Ningbo Institute of Technology of ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Institute of Technology of ZJU filed Critical Ningbo Institute of Technology of ZJU
Priority to CN201910947245.8A priority Critical patent/CN110781769A/en
Publication of CN110781769A publication Critical patent/CN110781769A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method for rapid pedestrian detection and tracking based on a Vibe scheme fused with HOG-SVM, comprising the following steps: 1) locating the region where a moving target is situated using foreground detection; 2) performing pedestrian detection only within the region where moving targets are located; if a pedestrian is detected, recording its position and proceeding to step 3); if not, repeating steps 1) and 2); 3) searching for the pedestrian in the region around the recorded position using a template matching algorithm; during tracking, if HOG + SVM fails to detect the pedestrian, the template matching algorithm tracks the most probable pedestrian position according to the last detection template. The method offers good precision and real-time performance.

Description

Method for rapidly detecting and tracking pedestrians
Technical Field
The invention relates to the technical field of pedestrian detection and tracking, in particular to a rapid pedestrian detection and tracking method.
Background
At present, the main task of pedestrian detection is to find dynamic pedestrians in video sequences. With the development of computer vision, pedestrian detection is widely applied in fields such as intelligent driver assistance, intelligent surveillance, pedestrian analysis and intelligent robotics. However, due to the complexity of real-world backgrounds, the diversity of pedestrian postures and the variety of shooting angles, extracting pedestrians from video quickly and effectively remains a great challenge, and pedestrian detection has therefore long been a hot topic in computer vision research. Current pedestrian detection methods fall mainly into two types: conventional methods and machine-learning-based methods. Machine learning is currently the mainstream approach; it describes the pedestrian region mainly using image features such as edges, shape and color in static images. Several features detect pedestrians well, such as Haar wavelet features, HOG features, Edgelet features, Shapelet features and shape contour template features.
Object tracking is the real-time localization of one or more specific objects of interest and the acquisition of their accurate motion states. The appearance, contour, position and motion state of a moving object are stable and similar across adjacent video frames, and the object differs somewhat in appearance from its surrounding background. Based on these basic conditions, tracking algorithms extract features that describe the target's appearance, or build a model of the target that distinguishes it from the background. Target tracking algorithms can be broadly classified into active-contour-based, feature-based, region-based and model-based tracking, among others. Compared with machine-learning-based methods, they have their own advantages: they are typically computationally efficient and require no large collections of pedestrian and non-pedestrian samples.
Histogram of Oriented Gradients (HOG) features, proposed by Dalal and Triggs in 2005, were first used for target identification. Dalal combined HOG features with an SVM classifier, achieving a breakthrough in the field of pedestrian detection. The HOG method densely extracts local histograms of oriented gradients within the image window and can fully capture pedestrian shape and appearance information. It has excellent discriminative ability and can distinguish pedestrians from other objects. However, computing HOG features requires dense and complex scans, which results in high computational complexity and reduced real-time performance.
In summary, pedestrian detection and tracking methods face a trade-off between precision (accuracy) and real-time performance, and the applicant's aim is to provide, through further research, a method for rapidly detecting and tracking pedestrians that achieves both.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method for rapidly detecting and tracking pedestrians based on a Vibe scheme fused with HOG-SVM, with both high precision and real-time performance.
In order to solve the technical problem, the invention provides a method for rapidly detecting and tracking pedestrians, which comprises the following steps:
1) finding the region where the moving target is located using foreground detection, this step comprising: deleting target shadows with an HSV color space algorithm, and extracting the region where the moving target is located from the video using the Vibe method, erosion and dilation, a four-neighborhood pedestrian contour algorithm, and boundary expansion;
2) performing pedestrian detection only in the regions where moving targets are located, the detection comprising: computing and extracting HOG features of the region and sending them to an SVM classifier to detect whether a pedestrian is present in the region where the moving target is located; if a pedestrian is detected, recording its position and proceeding to step 3); if no pedestrian is detected, repeating steps 1) and 2);
3) searching for the pedestrian in the region around the recorded position using a template matching algorithm; during tracking, if HOG + SVM fails to detect the pedestrian, the template matching algorithm tracks the most probable pedestrian position according to the last detection template; a timer is set during matching, and when template matching has continued for more than n seconds the pedestrian is regarded as lost and the template matching as failed, in which case pedestrian tracking is reinitialized on the current whole video frame using the Vibe + HOG + SVM algorithm.
With the above scheme, compared with the prior art, the invention has the following advantages. The method improves on existing approaches according to the characteristics of video surveillance. Step 1) uses the Vibe method to extract foreground object regions from the video; because object shadows reduce pedestrian detection efficiency, the shadow in each frame is first removed with an HSV color space method before the Vibe method is applied. Erosion and dilation, a four-neighborhood search algorithm, and boundary expansion are then performed to further refine the extracted foreground. Step 2) computes Histogram of Oriented Gradients (HOG) features of the extracted region and sends them to a Support Vector Machine (SVM) classifier to determine the pedestrian position. Step 3) further tracks the detected pedestrian and handles tracking loss using template matching, with a timer function set to guarantee the real-time requirement of the algorithm. Experimental results show that the method outperforms the traditional HOG + SVM and GMM + HOG + SVM algorithms in recognition accuracy and processing speed, with better accuracy and real-time performance.
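The three steps above can be sketched as a control loop. The following is an illustrative Python skeleton, not the patented implementation: the `detect_motion`, `detect_pedestrian` and `match_template` callables stand in for the Vibe, HOG + SVM and template-matching components, and their names are ours.

```python
import time

def track_pedestrians(frames, detect_motion, detect_pedestrian,
                      match_template, timeout_s=2.5):
    """Control loop sketched from steps 1)-3); all callables are stand-ins."""
    template = None        # last confirmed pedestrian patch
    roi = None             # region believed to contain the pedestrian
    last_detect = None     # time of the last successful HOG+SVM detection
    results = []
    for frame in frames:
        # Step 1: foreground detection limits the search area.
        search = roi if roi is not None else detect_motion(frame)
        # Step 2: HOG+SVM runs only inside the moving region.
        hit = detect_pedestrian(frame, search) if search is not None else None
        if hit is not None:
            roi, template = hit, hit          # record position, refresh template
            last_detect = time.monotonic()
        elif template is not None:
            # Step 3: fall back to template matching around the last position.
            roi = match_template(frame, template)
            if last_detect is not None and time.monotonic() - last_detect > timeout_s:
                # Pedestrian considered lost: re-initialise on the full frame.
                roi = template = last_detect = None
        results.append(roi)
    return results
```

With stub components, the loop detects on the first frame and falls back to template matching afterwards; a negative timeout forces the "pedestrian lost" reset path.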
As an improvement, in step 1), the features of normal pedestrian posture and shape are combined, and the foreground extraction boundary is divided according to a set of rules for expanding moving-region boundaries in pedestrian detection. The rules are: (1) when one frame is very small, the other frame is very large, the large frame is directly above the small frame, and the two frames are very close, the two frames are combined; (2) when one frame is very small, the other frame is very large, the small frame is directly above the large frame, and the two frames are very close, the two frames are combined; (3) when one frame is very small, the other frame is very large, and the large frame completely contains the small frame, the small frame is deleted. Whether two frames are close to each other is judged by the minimum distance between their borders, which is limited by a pixel-value threshold taking a natural number from 3 to 8 inclusive; for example, with a threshold of 3, two frames are judged close if the minimum distance between their borders is no more than 3 pixels. Allowing the threshold to range over 3-8 helps the method adapt to different image qualities, further improving precision and real-time performance.
As an improvement, the matching uses a normalized squared difference matching method, in which the matching degree is determined by the normalized sum of squared gray-value differences between the template image and the image to be matched; this favors improvements in both precision and real-time performance.
The template image M, of size n×m, is superimposed on the image P to be matched, of size W×H, and translated over it.
The region of the searched image covered by the template image, i.e. the overlap region, is denoted S(i, j), where (i, j) are the coordinates on the image P of the lower-left corner of S(i, j), and the search range is 1 ≤ i ≤ W − n, 1 ≤ j ≤ H − m. The template image is then translated across and compared with the image to be searched, and the matching score of the overlap region under the template image is denoted R(i, j), where (i, j) is the coordinate position of the overlap region S(i, j) in the image P to be matched;
the normalized squared error of the matching method is defined as follows:
R(i, j) = \frac{\sum_{s=1}^{n} \sum_{t=1}^{m} \left[ S_{i,j}(s, t) - M(s, t) \right]^{2}}{\sqrt{\sum_{s=1}^{n} \sum_{t=1}^{m} S_{i,j}(s, t)^{2} \cdot \sum_{s=1}^{n} \sum_{t=1}^{m} M(s, t)^{2}}}
The smaller the value of R(i, j) in the above formula, the more similar the overlap region is to the template. A value of 0 means the best match is found at that position; in a real system, however, the template is almost never exactly identical to the overlap region beneath it, so a value of 0 is very unlikely.
As an improvement, the position where R(i, j) attains its minimum value is regarded as the target position, which further favors the balance of accuracy and real-time performance.
Drawings
Fig. 1 is a flow chart of a method for rapid pedestrian detection and tracking according to the present invention.
FIG. 2 is a schematic diagram of deleting a target shadow.
Fig. 3 is a schematic diagram of a pedestrian contour searching algorithm based on four neighborhoods.
Fig. 4 is a diagram illustrating expansion of the moving region boundary in pedestrian detection.
Fig. 5 is a schematic diagram of the pedestrian tracking detection result in scene one using the HOG + SVM method.
Fig. 6 is a schematic diagram of the pedestrian tracking detection result in scene one using the GMM + HOG + SVM method.
Fig. 7 is a schematic diagram of the pedestrian tracking detection result in scene one using the method of the present invention.
Fig. 8 is a schematic diagram of the pedestrian tracking detection result in scene two using the HOG + SVM method.
Fig. 9 is a schematic diagram of the pedestrian tracking detection result in scene two using the GMM + HOG + SVM method.
Fig. 10 is a schematic diagram of the pedestrian tracking detection result in scene two using the method of the present invention.
Fig. 11 is a schematic diagram of the double-pedestrian tracking detection result in scene one using the HOG + SVM method.
Fig. 12 is a schematic diagram of the double-pedestrian tracking detection result in scene one using the GMM + HOG + SVM method.
Fig. 13 is a schematic diagram of the double-pedestrian tracking detection result in scene one using the method of the present invention.
Detailed Description
The invention is described in further detail below:
the technical scheme adopted by the invention comprises the following main steps:
the method comprises the following steps of 1) removing the influence of shadow effect, removing the shadow of a video image, and extracting a moving region from the video by using a Vibe method, erosion and expansion, a four-neighbor domain search algorithm and boundary expansion.
Imperfect foreground extraction with the four-neighborhood search algorithm (small pixel interference excluded) can separate the head from the body, which is detrimental to target detection. To prevent this situation from deteriorating, the invention combines the features of normal pedestrian posture and shape and sets a set of rules for merging or deleting frames: (1) when one frame is very small, the other frame is very large, the large frame is directly above the small frame, and the two frames are very close, the two frames are combined; (2) when one frame is very small, the other frame is very large, the small frame is directly above the large frame, and the two frames are very close, the two frames are combined; (3) when one frame is very small, the other frame is very large, and the large frame completely contains the small frame, the small frame is deleted. In this way, the invention divides the boundary more accurately.
Whether two frames are close to each other is judged by the minimum distance between their borders; this minimum distance is limited by a pixel-value threshold, whose value ranges over the natural numbers from 3 to 8 inclusive, and 3 is used in this example.
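A minimal sketch of the merge/delete rules and the border-distance threshold. The text does not define "very small", so the size criterion is taken here as an area ratio (an assumption), and the border distance is read as the smallest axis gap between the two rectangles:

```python
def box_gap(a, b):
    """Smallest axis gap between two (x, y, w, h) boxes; 0 if they touch or
    overlap. One reasonable reading of the patent's 'minimum border distance'."""
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    dx = max(ax - (bx + bw), bx - (ax + aw), 0)
    dy = max(ay - (by + bh), by - (ay + ah), 0)
    return max(dx, dy)

def contains(big, small):
    bx, by, bw, bh = big; sx, sy, sw, sh = small
    return bx <= sx and by <= sy and sx + sw <= bx + bw and sy + sh <= by + bh

def merge(a, b):
    x = min(a[0], b[0]); y = min(a[1], b[1])
    return (x, y, max(a[0] + a[2], b[0] + b[2]) - x,
               max(a[1] + a[3], b[1] + b[3]) - y)

def combine_pair(a, b, ratio=4.0, thresh=3):
    """Return the merged box, the surviving box, or None (keep both).
    Rules (1)/(2): a very small box close to a very large one is merged in;
    rule (3): a fully contained small box is dropped. `ratio` (what counts as
    'very small') is our assumption; `thresh` follows the patent's 3-8 range."""
    area = lambda r: r[2] * r[3]
    big, small = (a, b) if area(a) >= area(b) else (b, a)
    if area(big) < ratio * area(small):
        return None                       # sizes comparable: keep both
    if contains(big, small):
        return big                        # rule (3): delete the small box
    if box_gap(big, small) <= thresh:
        return merge(big, small)          # rules (1)/(2): combine
    return None
```

Applying `combine_pair` to every pair of foreground boxes reattaches a detached head or foot box to the body box before detection.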
It should also be considered that the extracted foreground of a moving region may not contain the whole person; some small moving regions, such as the head or feet, may be lost, in which case the HOG + SVM algorithm cannot effectively detect pedestrians in the moving-object region. The method of the present invention therefore appropriately adjusts the size of the frame; after this operation, the entire moving object lies within the boundary.
Step 2) performs the pedestrian detection process only in these moving regions, thereby avoiding an exhaustive sliding window search over the entire sequence of test frames.
The HOG features of the extracted region are computed and then sent to the SVM classifier. Once a pedestrian is detected, the template matching method in step 3 is used to track the detected pedestrian. After finding the pedestrian, its position is recorded and the area of interest is calculated around it. Then, we detect pedestrians in the region of interest instead of the full image, which can significantly speed up the detection process.
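As a rough illustration of the HOG features computed for each extracted region, the following is a simplified sketch without block normalization; a real system would use a complete HOG implementation of the kind Dalal and Triggs describe:

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Minimal unsigned-gradient HOG sketch: per-cell histograms of gradient
    orientation weighted by magnitude. Block normalization is omitted for
    brevity, so this is illustrative only."""
    g = gray.astype(float)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # centred [-1, 0, 1] derivative
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = g.shape
    cy, cx = h // cell, w // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist[i, j] = np.bincount(bin_idx[sl].ravel(),
                                     weights=mag[sl].ravel(), minlength=bins)
    return hist.ravel()
```

The flattened histograms are the vectors that would be fed to the SVM classifier; on a purely vertical edge, all gradient energy lands in the 0-degree bin.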
The target tracking algorithm based on template matching, namely the template matching algorithm, mainly comprises three steps: template establishment, matching tracking and template updating. The input of the algorithm is the image in the video, and the output is the tracking result of the input image. The template establishment belongs to an initialization stage, the main body of the algorithm is matching and tracking, and the template updating is the link for maintaining the whole target tracking process.
In the invention, the normalized squared difference matching method is selected; the matching degree is determined by the normalized sum of squared gray-value differences between the template image and the image to be matched. The template image M, of size n×m, is superimposed on the image P to be matched, of size W×H, and translated over it.
The region of the searched image covered by the template image, i.e. the overlap region, is denoted S(i, j), where (i, j) are the coordinates on the image P of the lower-left corner of S(i, j), and the search range is 1 ≤ i ≤ W − n, 1 ≤ j ≤ H − m. The template image is then translated across and compared with the image to be searched, and the matching score of the overlap region under the template image is denoted R(i, j), where (i, j) is the coordinate position of the overlap region S(i, j) in the image P to be matched;
the normalized squared error of the matching method is defined as follows:
R(i, j) = \frac{\sum_{s=1}^{n} \sum_{t=1}^{m} \left[ S_{i,j}(s, t) - M(s, t) \right]^{2}}{\sqrt{\sum_{s=1}^{n} \sum_{t=1}^{m} S_{i,j}(s, t)^{2} \cdot \sum_{s=1}^{n} \sum_{t=1}^{m} M(s, t)^{2}}}
The smaller the value of R(i, j) in the above formula, the more similar the overlap region is to the template. A value of 0 means the best match is found at that position; in a real system, however, the template is almost never exactly identical to the overlap region beneath it, so a value of 0 is very unlikely. As an improvement, the position where R(i, j) attains its minimum value is regarded as the target position, which further favors the balance of accuracy and real-time performance.
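The normalized squared-difference score and the minimum-position rule can be sketched directly in NumPy. This is a brute-force version for clarity, and the row/column ordering of (i, j) is our convention:

```python
import numpy as np

def norm_sqdiff_map(image, template):
    """Normalized squared-difference score R(i, j) for every placement of the
    template over the image; smaller is better, 0 means a perfect match."""
    H, W = image.shape
    m, n = template.shape                 # m rows, n columns
    tpl = template.astype(float)
    t2 = float((tpl ** 2).sum())
    img = image.astype(float)
    R = np.empty((H - m + 1, W - n + 1))
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            s = img[i:i + m, j:j + n]     # overlap region S(i, j)
            denom = np.sqrt((s ** 2).sum() * t2)
            R[i, j] = ((s - tpl) ** 2).sum() / max(denom, 1e-12)
    return R

def best_match(image, template):
    """Target position = placement minimizing R, as the text proposes."""
    R = norm_sqdiff_map(image, template)
    return np.unravel_index(np.argmin(R), R.shape)
```

Cutting a patch out of an image and searching for it should recover the cut position with a score of exactly 0 there.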
If no pedestrian is detected in the region of interest, the following steps are repeated: shadow removal, the Vibe method, erosion and dilation, the four-neighborhood search algorithm and boundary expansion, and detection of the presence of pedestrians.
If the HOG + SVM algorithm fails to identify the pedestrian, such as when the tracked objects in the video rotate their bodies by a certain angle, the template matching algorithm tracks the most probable position of the pedestrian according to the last detected pedestrian template.
To ensure real-time identification and tracking, a timer is set. When the duration of template matching exceeds n seconds, the tracked pedestrian is considered lost; n = 2.5, for example, works very well.
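The timer can be isolated in a small helper. The injectable clock is our addition for testability, and n = 2.5 s follows the value reported in the text:

```python
import time

class MatchTimer:
    """Tracks how long tracking has relied on template matching alone,
    without an HOG+SVM confirmation."""
    def __init__(self, n_seconds=2.5, clock=time.monotonic):
        self.n = n_seconds
        self.clock = clock     # injectable clock, for testing
        self.start = None

    def confirmed(self):
        """Call when HOG+SVM re-detects the pedestrian: the timer resets."""
        self.start = None

    def matching(self):
        """Call once per template-matching-only frame; True means the
        pedestrian is regarded as lost and tracking must be re-initialized."""
        now = self.clock()
        if self.start is None:
            self.start = now
        return now - self.start > self.n
```

When `matching()` returns True, the caller falls back to the full-frame Vibe + HOG + SVM re-initialization described below.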
If template matching fails, pedestrian tracking is reinitialized on the current whole frame using the slower Vibe + HOG + SVM algorithm, which reduces the false alarm rate and improves recognition accuracy; that is, steps 1) and 2) are repeated on the current whole frame to find the most probable position of the pedestrian.
To verify the differences in accuracy and real-time performance between the proposed method and HOG-based SVM target detection, comparison experiments were conducted on single and double pedestrians in two scenes, as shown in Figs. 5 to 13, comparing the proposed method with the HOG + SVM and GMM + HOG + SVM algorithms. The results show that pedestrian detection with the HOG + SVM algorithm suffers both false detections and missed detections. False detections arise mainly because non-pedestrian regions in the video scene may resemble pedestrian regions, and since HOG + SVM detection must scan the whole image, its false alarm rate is high. The GMM + HOG + SVM algorithm largely eliminates the false detections of HOG + SVM, but missed detections remain. The proposed method detects the front, back and side of pedestrians well and has much lower false positive and false negative rates than either the original HOG + SVM or the GMM + HOG + SVM algorithm.
In the present invention, six videos were processed with three different algorithms, using accuracy and false alarm rate as the evaluation indexes of the system, as shown in Table 1. As can be seen from Table 1, the average accuracy of the proposed method is 90.84%, whereas the HOG + SVM and GMM + HOG + SVM algorithms reach only about 80%. In terms of false alarm rate, the proposed method achieves 0%, against 5.39% for the GMM + HOG + SVM algorithm and 48.08% for the HOG + SVM algorithm. Compared with HOG + SVM and GMM + HOG + SVM, the method therefore greatly improves stability and accuracy.
Table 1 shows a comparison of the experimental results of the method of the present invention with the HOG + SVM and GMM + HOG + SVM detection algorithms.
Method            Average accuracy    False alarm rate
HOG + SVM         about 80%           48.08%
GMM + HOG + SVM   about 80%           5.39%
Proposed method   90.84%              0%
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the appended claims.

Claims (5)

1. A method for rapid pedestrian detection and tracking, comprising the steps of:
1) finding the region where the moving target is located using foreground detection, this step comprising: deleting target shadows with an HSV color space algorithm, and extracting the region where the moving target is located from the video using the Vibe method, erosion and dilation, a four-neighborhood pedestrian contour algorithm, and boundary expansion;
2) performing pedestrian detection only in the regions where moving targets are located, the detection comprising: computing and extracting HOG features of the region and sending them to an SVM classifier to detect whether a pedestrian is present in the region where the moving target is located; if a pedestrian is detected, recording its position and proceeding to step 3); if no pedestrian is detected, repeating steps 1) and 2);
3) searching for the pedestrian in the region around the recorded position using a template matching algorithm; during tracking, if HOG + SVM fails to detect the pedestrian, the template matching algorithm tracks the most probable pedestrian position according to the last detection template; a timer is set during matching, and when template matching has continued for more than n seconds the pedestrian is regarded as lost and the template matching as failed, in which case pedestrian tracking is reinitialized on the current whole video frame using the Vibe + HOG + SVM algorithm.
2. The method for rapid pedestrian detection and tracking according to claim 1, wherein in step 1), the features of normal pedestrian posture and shape are combined, and the foreground extraction boundary is divided according to a set of rules for expanding moving-region boundaries in pedestrian detection, the rules being: (1) when one frame is very small, the other frame is very large, the large frame is directly above the small frame, and the two frames are very close, the two frames are combined; (2) when one frame is very small, the other frame is very large, the small frame is directly above the large frame, and the two frames are very close, the two frames are combined; (3) when one frame is very small, the other frame is very large, and the large frame completely contains the small frame, the small frame is deleted;
whether two frames are close to each other is judged by the minimum distance between their borders, the minimum distance being limited by a pixel-value threshold whose value ranges over the natural numbers from 3 to 8 inclusive.
3. The method for rapid pedestrian detection and tracking according to claim 1, wherein the matching uses a normalized squared difference matching method, the matching degree being determined by the normalized sum of squared gray-value differences between the template image and the image to be matched.
4. The method for rapid pedestrian detection and tracking according to claim 3, wherein the template image M, of size n×m, is superimposed on the image P to be matched, of size W×H, and translated over it,
the region of the searched image covered by the template image, i.e. the overlap region, is denoted S(i, j), where (i, j) are the coordinates on the image P of the lower-left corner of S(i, j), and the search range is 1 ≤ i ≤ W − n, 1 ≤ j ≤ H − m; the template image is then translated across and compared with the image to be searched, and the matching score of the overlap region under the template image is denoted R(i, j), where (i, j) is the coordinate position of the overlap region S(i, j) in the image P to be matched;
the normalized squared error of the matching method is defined as follows:
R(i, j) = \frac{\sum_{s=1}^{n} \sum_{t=1}^{m} \left[ S_{i,j}(s, t) - M(s, t) \right]^{2}}{\sqrt{\sum_{s=1}^{n} \sum_{t=1}^{m} S_{i,j}(s, t)^{2} \cdot \sum_{s=1}^{n} \sum_{t=1}^{m} M(s, t)^{2}}}
the smaller the value of R (i, j) in the above formula, the more similar it is to the template.
5. The method of rapid pedestrian detection and tracking according to claim 4, wherein the location where the value is the minimum value is considered as the target location.
CN201910947245.8A 2019-10-01 2019-10-01 Method for rapidly detecting and tracking pedestrians Pending CN110781769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910947245.8A CN110781769A (en) 2019-10-01 2019-10-01 Method for rapidly detecting and tracking pedestrians

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910947245.8A CN110781769A (en) 2019-10-01 2019-10-01 Method for rapidly detecting and tracking pedestrians

Publications (1)

Publication Number Publication Date
CN110781769A true CN110781769A (en) 2020-02-11

Family

ID=69384797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910947245.8A Pending CN110781769A (en) 2019-10-01 2019-10-01 Method for rapidly detecting and tracking pedestrians

Country Status (1)

Country Link
CN (1) CN110781769A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529938A (en) * 2020-12-08 2021-03-19 郭金朋 Intelligent classroom monitoring method and system based on video understanding

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216885A (en) * 2008-01-04 2008-07-09 中山大学 Passerby face detection and tracing algorithm based on video
US20150063628A1 (en) * 2013-09-04 2015-03-05 Xerox Corporation Robust and computationally efficient video-based object tracking in regularized motion environments
CN107291910A (en) * 2017-06-26 2017-10-24 图麟信息科技(深圳)有限公司 A kind of video segment structuralized query method, device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
归加琪 (Gui Jiaqi): "Research on Foreground Extraction and Pedestrian Detection and Tracking Algorithms in Surveillance Video", Wanfang Data Knowledge Service Platform *


Similar Documents

Publication Publication Date Title
JP5726125B2 (en) Method and system for detecting an object in a depth image
Ogale A survey of techniques for human detection from video
KR101653278B1 (en) Face tracking system using colar-based face detection method
CN110569785B (en) Face recognition method integrating tracking technology
CN110766720A (en) Multi-camera vehicle tracking system based on deep learning
CN106682641A (en) Pedestrian identification method based on image with FHOG- LBPH feature
CN105893957B (en) View-based access control model lake surface ship detection recognition and tracking method
García-Martín et al. Robust real time moving people detection in surveillance scenarios
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
Xiao et al. Vehicle and person tracking in aerial videos
Kheirkhah et al. A hybrid face detection approach in color images with complex background
CN108319961B (en) Image ROI rapid detection method based on local feature points
CN110555867B (en) Multi-target object tracking method integrating object capturing and identifying technology
Guo et al. Vehicle fingerprinting for reacquisition & tracking in videos
Jun et al. LIDAR and vision based pedestrian detection and tracking system
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN110781769A (en) Method for rapidly detecting and tracking pedestrians
Ahlvers et al. Model-free face detection and head tracking with morphological hole mapping
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN112380966B (en) Monocular iris matching method based on feature point re-projection
Kassir et al. A region based CAMShift tracking with a moving camera
Xu et al. Car detection using deformable part models with composite features
Le et al. Pedestrian lane detection in unstructured environments for assistive navigation
Lin et al. Optimal threshold and LoG based feature identification and tracking of bat flapping flight
Campadelli et al. A color based method for face detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200211