CN106355154A - Method for detecting frequent pedestrian passing in surveillance video - Google Patents
Info
- Publication number
- CN106355154A (application CN201610793181.7A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian passing
- passer-by
- facial image
- surveillance video
- described step
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Abstract
The invention relates to a method for detecting frequent pedestrian passing in a surveillance video. The method comprises the following steps: S1, loading the video stream acquired by a surveillance camera, acquiring face images of passing pedestrians, and generating pedestrian passing records; S2, extracting portrait features from the face images, and storing the portrait features and passing records in a data storage module; S3, retrieving the data storage module by portrait feature descriptor and exporting the number of times each pedestrian passes through the surveillance camera's field of view within a set period. Compared with the prior art, the invention provides a mature method for matching the faces of passing pedestrians in video, with high matching accuracy: on a face database with millions of entries, the hit rate can reach 60% or above while the false alarm rate stays at 0.1% or below.
Description
Technical field
The present invention relates to the field of video security, and in particular to a method for detecting frequent passers-by in a surveillance video.
Background technology
Cameras are now installed in many places, but the video they capture is generally used only for after-the-fact queries, because preventive, in-advance video analysis is usually performed manually. For the analysis of frequent passers-by in particular, the complexity of video scenes and the heavy computation involved challenge both the accuracy and the speed of algorithms, and suitable methods have been lacking. Moreover, the prior art offers no technique for mining recurring passers-by within a video, where a passer-by is a person who passes through the area monitored by a surveillance camera.
Summary of the invention
The purpose of the present invention is to overcome the above defects of the prior art by providing a method for detecting frequent passers-by in a surveillance video.
The purpose of the present invention can be achieved through the following technical solutions:
A method for detecting frequent passers-by in a surveillance video, comprising the steps of:
S1: loading the video stream captured by a surveillance camera, acquiring face images of passing pedestrians, and generating passer-by records;
S2: extracting portrait features from the face images, and storing the portrait features and passer-by records in a data storage module;
S3: retrieving the data storage module by portrait feature descriptor, and exporting the number of times each passer-by passes through the area monitored by this camera within a set time period.
In said step S1, an AdaBoost classifier is specifically used to acquire the face images of passing pedestrians.
When extracting portrait features from a face image in said step S2, 35 feature points are extracted in total.
Said step S3 specifically comprises the steps of:
S31: performing similarity matching on the passer-by records stored in the data storage module, and grouping records with similar portrait features;
S32: filtering the grouped passer-by records by preset attribute characteristics;
S33: exporting the number of times each passer-by passes through the area monitored by this camera within the set time period.
In said step S32, the preset attribute characteristics include: mask, sunglasses, age and sex.
In said step S33, after the number of times each passer-by passes through the monitored area within the set time period is exported, the face images of passers-by whose number of occurrences within a unit time exceeds a set count are also output.
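The counting and thresholding of step S33 can be sketched as a sliding-window tally over stored passer-by records. A minimal Python illustration, assuming each record is a (person id, timestamp) pair produced by the matching stage; the function name, window and count threshold are illustrative, not specified by the patent:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def frequent_passers(records, window, min_count):
    """records: list of (person_id, timestamp) pass records.
    Returns ids that occur more than min_count times within any
    sliding time window of the given length."""
    by_person = defaultdict(list)
    for pid, ts in records:
        by_person[pid].append(ts)
    frequent = set()
    for pid, times in by_person.items():
        times.sort()
        lo = 0
        for hi, t in enumerate(times):
            # shrink the window from the left until it spans <= `window`
            while t - times[lo] > window:
                lo += 1
            if hi - lo + 1 > min_count:
                frequent.add(pid)
    return frequent

# Person "a" passes four times in fifteen minutes; "b" passes once.
recs = [("a", datetime(2016, 8, 31, 9, m)) for m in (0, 5, 10, 15)] + \
       [("b", datetime(2016, 8, 31, 9, 0))]
print(frequent_passers(recs, timedelta(hours=1), 3))  # {'a'}
```

With a one-hour window and a threshold of 3, the person recorded four times within fifteen minutes is reported as a frequent passer-by.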
Compared with the prior art, the present invention has the following advantages:
1) It provides a mature method for matching the faces of passing pedestrians in video, with high matching accuracy: on a face database with millions of entries it achieves a hit rate above 60% while keeping the false alarm rate below 0.1%.
2) It is fast: each passer-by's matching records can be obtained within 3 seconds of the person appearing.
3) It is robust and can be used in different scenes.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the main steps of the present invention.
Specific embodiments
The present invention is described in detail below with reference to the accompanying drawing and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and a specific operating process, but the protection scope of the present invention is not limited to the following embodiment.
A method for detecting frequent passers-by in a surveillance video, as shown in Fig. 1, comprises the steps of:
S1: loading the video stream captured by a surveillance camera, acquiring face images of passing pedestrians (specifically with an AdaBoost classifier), and generating passer-by records;
S2: extracting portrait features from the face images, with 35 feature points extracted in total, and storing the portrait features and passer-by records in a data storage module;
S3: retrieving the data storage module by portrait feature descriptor and exporting the number of times each passer-by passes through the monitored area within a set time period, which specifically comprises:
S31: performing similarity matching on the stored passer-by records and grouping records with similar portrait features;
S32: filtering the grouped records by preset attribute characteristics, including mask, sunglasses, age and sex;
S33: exporting the number of times each passer-by passes through the monitored area within the set time period, and outputting the face images of passers-by whose number of occurrences within a unit time exceeds a set count.
In the present technique, the input is a video stream and the output is the records of frequent passers-by.
Implementation: the software comprises the following five processes (modules).
1. Portrait detection and tracking module: faces are detected in the input video stream using a general AdaBoost classifier, and tracked using the optical flow method. This module borrows the concepts of "key frame" and "auxiliary frame" from video coding: instead of performing the full computation on every frame, the computational load of the algorithm is reduced by more than 80%. Detection and tracking are also jointly optimized: when invoking the detection and tracking algorithms, computation is restricted to the local regions already obtained, which provides a further speed-up.
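The key-frame/auxiliary-frame scheme of module 1 can be sketched as a scheduling loop: full AdaBoost detection runs only on key frames, while auxiliary frames get a cheap tracking update of the boxes already found. A toy Python sketch with injected `detect`/`track` callables; the interval, box format and stub functions are illustrative, not fixed by the patent:

```python
def process_stream(frames, detect, track, key_every=10):
    """Key-frame scheduling: run the expensive full-frame detector only
    every `key_every` frames; on auxiliary frames, update the existing
    boxes with a cheap tracker (e.g. optical flow in the real system)."""
    boxes, out = [], []
    for i, frame in enumerate(frames):
        if i % key_every == 0:            # key frame: full detection
            boxes = detect(frame)
        else:                             # auxiliary frame: local update
            boxes = [track(frame, b) for b in boxes]
        out.append(list(boxes))
    return out

# Toy demo: "frames" are just indices; the stub detector places one box
# at the frame index, the stub tracker moves a box to the new index.
detect = lambda f: [(f, f, 20, 20)]
track = lambda f, b: (f, f, b[2], b[3])
result = process_stream(range(5), detect, track, key_every=3)
print(result[-1])  # [(4, 4, 20, 20)]
```

Frames 0 and 3 trigger detection; frames 1, 2 and 4 only update the tracked box.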
2. Portrait feature extraction module: for each passer-by in the video, the face size, facial feature positions and face pose are obtained to judge whether the face is suitable for comparison. A dynamic scheme ensures that features are extracted from at least n frames for each passer-by. For the portrait features, multiple feature operators such as LBP, SIFT and neural networks are chosen so that the facial features are expressed as fully as possible.
3. Portrait storage module: provides consistent portrait storage across multiple machines. For each passer-by, the camera, time, position in the video, portrait features and face snapshot are saved; the data can be accessed through an interface and is also supplied directly to the retrieval module.
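A minimal sketch of the per-passer-by record the storage module keeps, using Python's dataclasses; all field names and types here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PassRecord:
    """One passer-by record as module 3 stores it: camera, time, position
    in the video, compact feature vector, and a face-snapshot reference."""
    camera_id: str
    timestamp: float       # seconds since epoch
    frame_pos: int         # frame index within the recording
    feature: np.ndarray    # ~100-dim portrait feature vector
    face_crop: str         # path or blob key of the saved face snapshot

store = []
store.append(PassRecord("cam-01", 1472601600.0, 1200,
                        np.zeros(100), "crops/cam-01/000001.jpg"))
print(store[0].camera_id)  # cam-01
```

In the real system the list would be a distributed store that both the query interface and the retrieval module read from.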
4. Portrait retrieval module: based on a person-similarity model obtained by offline training, each passer-by is matched for similarity against the historical records, producing a one-to-many similarity list. To increase retrieval speed, a kmeans-like clustering algorithm is used for preprocessing, so that a single retrieval stays within 1 s even at the scale of ten million records.
5. Frequent passer-by post-processing module: to improve the hit rate, the common practice of search engines is borrowed and query expansion is strategically performed one or more times; to reduce the false alarm rate, face attributes such as age, sex, pose and sunglasses are extracted and used to filter the types of faces that are more prone to false reports.
In more detail:
1. Portrait feature extraction module: this module first locates key points on the face (35 feature points in total), then densely extracts features of more than 100,000 dimensions at the key points with different feature operators (LBP, SIFT, neural networks), and reduces them to about 100 dimensions, yielding a compact feature vector.
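The dense-feature-then-reduce step of item 1 can be illustrated with an SVD-based PCA in NumPy. The raw dimensionality below is shrunk from the patent's 100,000+ to 1,000 purely to keep the sketch fast; the 100-dimension target matches the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_reduce(features, out_dim=100):
    """Project descriptors onto the top out_dim principal directions."""
    centered = features - features.mean(0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:out_dim].T

# Stand-in descriptors: 200 faces x 1,000 raw dimensions.
raw = rng.normal(size=(200, 1000))
compact = pca_reduce(raw, out_dim=100)
print(compact.shape)  # (200, 100)
```

The compact vectors are what the storage and retrieval modules then work with.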
2. Portrait retrieval module: the similarity between two features is computed with the L2 distance. To speed up the computation, an index is built in advance over the portrait features; the index consists of class centers obtained by kmeans, and to guarantee the recall rate, multiple class centers are obtained by randomization. After this processing, the retrieval speed-up ratio can exceed 30x.
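Item 2's cluster-center index can be sketched in NumPy: feature vectors are clustered with k-means, a query is compared against the class centers first, and then only against the members of the nearest cluster(s), which is what yields the speed-up. The randomized multi-center refinement mentioned in the text is omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means; the patent only says a 'kmeans-like' step is used."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    # final assignment against the converged centers
    labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    return centers, labels

def search(query, X, centers, labels, n_probe=1):
    """Compare against class centers first, then only against the members
    of the n_probe nearest clusters -- far fewer L2 distances than a scan."""
    near = ((centers - query) ** 2).sum(-1).argsort()[:n_probe]
    cand = np.where(np.isin(labels, near))[0]
    return cand[((X[cand] - query) ** 2).sum(-1).argmin()]

X = rng.normal(size=(1000, 8))
centers, labels = kmeans(X, 16)
q = X[42] + 0.001            # slightly perturbed copy of record 42
print(search(q, X, centers, labels))  # 42
```

With 16 clusters over 1,000 vectors, a single-probe query touches roughly 1/16 of the database, mirroring the claimed 30x-class speed-up at larger scales.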
3. Frequent passer-by post-processing module: this module comprises two sub-modules. The first performs query expansion on the preliminarily obtained list of similar persons; since this can introduce false reports while improving the hit rate, a strong restriction is imposed, e.g. the similarity score must exceed a high threshold before query expansion is performed. The second filters the commonly occurring false-report types, such as the elderly, children, identical hair styles and masks; the filtering method is to determine by attribute classification whether a face belongs to one of these types, and then to cut off false reports with a higher score threshold.
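The two post-processing sub-modules can be sketched as a thresholded query expansion followed by an attribute filter. All thresholds, attribute names and the hit-record layout below are illustrative assumptions, not values from the patent:

```python
def expand_and_filter(hits, db, expand_thr=0.9, report_thr=0.75,
                      blocked=frozenset({"mask", "child", "elderly"})):
    """Sub-module 1: only hits above a high threshold seed a second-round
    lookup (query expansion), limiting the false reports expansion can add.
    Sub-module 2: drop records whose attribute classes commonly cause
    false matches, then re-apply a score cutoff."""
    seeds = [h for h in hits if h["score"] >= expand_thr]
    expanded = list(hits)
    for s in seeds:
        expanded += db.get(s["id"], [])      # second-round retrieval
    return [h for h in expanded
            if h["score"] >= report_thr and not (h["attrs"] & blocked)]

hits = [{"id": "p1", "score": 0.95, "attrs": set()},
        {"id": "p2", "score": 0.80, "attrs": {"mask"}}]
db = {"p1": [{"id": "p1", "score": 0.78, "attrs": set()}]}
print([h["score"] for h in expand_and_filter(hits, db)])  # [0.95, 0.78]
```

The high-scoring hit seeds an expansion that recovers a second sighting of the same person, while the masked-face hit is filtered out as a likely false report.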
Claims (6)
1. A method for detecting frequent passers-by in a surveillance video, characterized by comprising the steps of:
S1: loading the video stream captured by a surveillance camera, acquiring face images of passing pedestrians, and generating passer-by records;
S2: extracting portrait features from the face images, and storing the portrait features and passer-by records in a data storage module;
S3: retrieving the data storage module by portrait feature descriptor, and exporting the number of times each passer-by passes through the area monitored by this camera within a set time period.
2. The method for detecting frequent passers-by in a surveillance video according to claim 1, characterized in that in said step S1, an AdaBoost classifier is specifically used to acquire the face images of passing pedestrians.
3. The method for detecting frequent passers-by in a surveillance video according to claim 1, characterized in that when extracting portrait features from a face image in said step S2, 35 feature points are extracted in total.
4. The method for detecting frequent passers-by in a surveillance video according to claim 1, characterized in that said step S3 specifically comprises the steps of:
S31: performing similarity matching on the passer-by records stored in the data storage module, and grouping records with similar portrait features;
S32: filtering the grouped passer-by records by preset attribute characteristics;
S33: exporting the number of times each passer-by passes through the area monitored by this camera within the set time period.
5. The method for detecting frequent passers-by in a surveillance video according to claim 4, characterized in that in said step S32, the preset attribute characteristics include: mask, sunglasses, age and sex.
6. The method for detecting frequent passers-by in a surveillance video according to claim 4, characterized in that in said step S33, after the number of times each passer-by passes through the monitored area within the set time period is exported, the face images of passers-by whose number of occurrences within a unit time exceeds a set count are also output.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610793181.7A CN106355154B (en) | 2016-08-31 | 2016-08-31 | Method for detecting frequent passing of people in surveillance video |
PCT/CN2016/106672 WO2018040306A1 (en) | 2016-08-31 | 2016-11-21 | Method for detecting frequent passers-by in monitoring video |
SG11201806418TA SG11201806418TA (en) | 2016-08-31 | 2016-11-21 | Method for detecting frequent passer-passing in monitoring video |
PH12018501518A PH12018501518A1 (en) | 2016-08-31 | 2018-07-13 | Method for detecting frequent passer-passing in monitoring video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610793181.7A CN106355154B (en) | 2016-08-31 | 2016-08-31 | Method for detecting frequent passing of people in surveillance video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106355154A true CN106355154A (en) | 2017-01-25 |
CN106355154B CN106355154B (en) | 2020-09-11 |
Family
ID=57858174
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610793181.7A Active CN106355154B (en) | 2016-08-31 | 2016-08-31 | Method for detecting frequent passing of people in surveillance video |
Country Status (4)
Country | Link |
---|---|
CN (1) | CN106355154B (en) |
PH (1) | PH12018501518A1 (en) |
SG (1) | SG11201806418TA (en) |
WO (1) | WO2018040306A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897460A (en) * | 2017-03-14 | 2017-06-27 | 华平智慧信息技术(深圳)有限公司 | The method and device of data classification in safety monitoring |
WO2018165863A1 (en) * | 2017-03-14 | 2018-09-20 | 华平智慧信息技术(深圳)有限公司 | Data classification method and apparatus in safety and protection monitoring |
CN109492604A (en) * | 2018-11-23 | 2019-03-19 | 北京嘉华科盈信息系统有限公司 | Faceform's characteristic statistics analysis system |
CN110019963A (en) * | 2017-12-11 | 2019-07-16 | 罗普特(厦门)科技集团有限公司 | The searching method of suspect relationship personnel |
CN110134812A (en) * | 2018-02-09 | 2019-08-16 | 杭州海康威视数字技术股份有限公司 | A kind of face searching method and its device |
CN111143594A (en) * | 2019-12-26 | 2020-05-12 | 北京橘拍科技有限公司 | Portrait searching method, server, storage medium, video processing method and system |
CN111401315A (en) * | 2020-04-10 | 2020-07-10 | 浙江大华技术股份有限公司 | Face recognition method, recognition device and storage device based on video |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111552681A (en) * | 2020-04-30 | 2020-08-18 | 山东众志电子有限公司 | Dynamic large data technology-based place access frequency abnormity calculation method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101618542A (en) * | 2009-07-24 | 2010-01-06 | 塔米智能科技(北京)有限公司 | System and method for welcoming guest by intelligent robot |
CN104376679A (en) * | 2014-11-24 | 2015-02-25 | 苏州立瓷电子技术有限公司 | Intelligent household pre-warning method |
KR20150071920A (en) * | 2013-12-19 | 2015-06-29 | 한국전자통신연구원 | Apparatus and method for counting person number using face identification |
CN105160319A (en) * | 2015-08-31 | 2015-12-16 | 电子科技大学 | Method for realizing pedestrian re-identification in monitor video |
CN105357496A (en) * | 2015-12-09 | 2016-02-24 | 武汉大学 | Multisource big data fused video monitoring pedestrian identity identification method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101847218A (en) * | 2009-03-25 | 2010-09-29 | 微星科技股份有限公司 | People stream counting system and method thereof |
CN102176746A (en) * | 2009-09-17 | 2011-09-07 | 广东中大讯通信息有限公司 | Intelligent monitoring system used for safe access of local cell region and realization method thereof |
CN103971103A (en) * | 2014-05-23 | 2014-08-06 | 西安电子科技大学宁波信息技术研究院 | People counting system |
2016
- 2016-08-31 CN CN201610793181.7A patent/CN106355154B/en active Active
- 2016-11-21 WO PCT/CN2016/106672 patent/WO2018040306A1/en active Application Filing
- 2016-11-21 SG SG11201806418TA patent/SG11201806418TA/en unknown
2018
- 2018-07-13 PH PH12018501518A patent/PH12018501518A1/en unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101618542A (en) * | 2009-07-24 | 2010-01-06 | 塔米智能科技(北京)有限公司 | System and method for welcoming guest by intelligent robot |
KR20150071920A (en) * | 2013-12-19 | 2015-06-29 | 한국전자통신연구원 | Apparatus and method for counting person number using face identification |
CN104376679A (en) * | 2014-11-24 | 2015-02-25 | 苏州立瓷电子技术有限公司 | Intelligent household pre-warning method |
CN105160319A (en) * | 2015-08-31 | 2015-12-16 | 电子科技大学 | Method for realizing pedestrian re-identification in monitor video |
CN105357496A (en) * | 2015-12-09 | 2016-02-24 | 武汉大学 | Multisource big data fused video monitoring pedestrian identity identification method |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897460A (en) * | 2017-03-14 | 2017-06-27 | 华平智慧信息技术(深圳)有限公司 | The method and device of data classification in safety monitoring |
WO2018165863A1 (en) * | 2017-03-14 | 2018-09-20 | 华平智慧信息技术(深圳)有限公司 | Data classification method and apparatus in safety and protection monitoring |
CN110019963A (en) * | 2017-12-11 | 2019-07-16 | 罗普特(厦门)科技集团有限公司 | The searching method of suspect relationship personnel |
CN110134812A (en) * | 2018-02-09 | 2019-08-16 | 杭州海康威视数字技术股份有限公司 | A kind of face searching method and its device |
CN109492604A (en) * | 2018-11-23 | 2019-03-19 | 北京嘉华科盈信息系统有限公司 | Faceform's characteristic statistics analysis system |
CN111143594A (en) * | 2019-12-26 | 2020-05-12 | 北京橘拍科技有限公司 | Portrait searching method, server, storage medium, video processing method and system |
CN111401315A (en) * | 2020-04-10 | 2020-07-10 | 浙江大华技术股份有限公司 | Face recognition method, recognition device and storage device based on video |
CN111401315B (en) * | 2020-04-10 | 2023-08-22 | 浙江大华技术股份有限公司 | Face recognition method based on video, recognition device and storage device |
Also Published As
Publication number | Publication date |
---|---|
SG11201806418TA (en) | 2018-08-30 |
CN106355154B (en) | 2020-09-11 |
PH12018501518A1 (en) | 2019-03-18 |
WO2018040306A1 (en) | 2018-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106355154A (en) | Method for detecting frequent pedestrian passing in surveillance video | |
CN108549846B (en) | Pedestrian detection and statistics method combining motion characteristics and head-shoulder structure | |
CN107145862A (en) | A kind of multiple features matching multi-object tracking method based on Hough forest | |
CN107688830B (en) | Generation method of vision information correlation layer for case serial-parallel | |
Zakaria et al. | Face detection using combination of Neural Network and Adaboost | |
Ye et al. | Jersey number detection in sports video for athlete identification | |
CN110826390A (en) | Video data processing method based on face vector characteristics | |
Liu et al. | Video content analysis for compliance audit in finance and security industry | |
Tseng et al. | Person retrieval in video surveillance using deep learning–based instance segmentation | |
Kanna et al. | Deep learning based video analytics for person tracking | |
CN110287369A (en) | A kind of semantic-based video retrieval method and system | |
Brindha et al. | Bridging semantic gap between high-level and low-level features in content-based video retrieval using multi-stage ESN–SVM classifier | |
Johnson et al. | Person re-identification with fusion of hand-crafted and deep pose-based body region features | |
CN106295523A (en) | A kind of public arena based on SVM Pedestrian flow detection method | |
Razalli et al. | Real-time face tracking application with embedded facial age range estimation algorithm | |
Wang et al. | Human interaction recognition based on sparse representation of feature covariance matrices | |
Yang et al. | An improved system for real-time scene text recognition | |
Zhong et al. | Fast and robust text detection in MOOCs videos | |
CN113657169A (en) | Gait recognition method, device, system and computer readable storage medium | |
Archana et al. | Tracking based event detection of singles broadcast tennis video | |
Sasireka et al. | Comparative Analysis on Image Retrieval Technique using Machine Learning | |
Ho et al. | A scene text-based image retrieval system | |
CN112304435A (en) | Human body thermal imaging temperature measurement method combining face recognition | |
Archana et al. | Face recognition using Deep Neural Network and Support Based Anchor Boxes Methods in video frames | |
Shrestha et al. | Temporal querying of faces in videos using bitmap index |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |