WO2011097795A1 - 人流量统计的方法及系统 (Method and System for People Flow Statistics) - Google Patents

人流量统计的方法及系统 (Method and System for People Flow Statistics)

Info

Publication number
WO2011097795A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
detection
human
classifier
human head
Prior art date
Application number
PCT/CN2010/070607
Other languages
English (en)
French (fr)
Inventor
呼志刚
朱勇
任烨
蔡巍伟
贾永华
胡扬忠
邬伟琪
Original Assignee
杭州海康威视软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视软件有限公司 filed Critical 杭州海康威视软件有限公司
Priority to PCT/CN2010/070607 priority Critical patent/WO2011097795A1/zh
Priority to EP10845469.5A priority patent/EP2535843A4/en
Priority to US13/578,324 priority patent/US8798327B2/en
Publication of WO2011097795A1 publication Critical patent/WO2011097795A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Definitions

  • the present invention relates to the field of video surveillance and image processing and analysis technologies, and in particular, to a method and system for human traffic statistics.
  • The first method is based on feature point tracking: it tracks a number of moving feature points and then performs cluster analysis on their trajectories to obtain people flow information.
  • The disadvantage of this method is that the feature points themselves are difficult to track stably, so counting accuracy is poor.
  • the second method is based on human body segmentation and tracking.
  • This method first extracts moving target blocks, then segments each block to obtain individual human targets, and finally tracks each human target's trajectory to count the flow of people.
  • Its disadvantage is that when human bodies occlude one another, the accuracy of body segmentation is hard to guarantee, which degrades statistical accuracy.
  • The third method is based on detection and tracking of the head, or of the head and shoulders: heads (or heads and shoulders) are detected in the video and tracked to count the flow of people.
  • When the camera angle is suitable, heads are rarely occluded, so the head-detection-based method is more accurate than the first two.
  • Some companies have proposed people-counting methods based on head detection. For example, in the method of patent application No. 200910076256.X by Beijing Zhongxing Microelectronics (Vimicro), the moving foreground is first extracted, and two serial classifiers trained on haar features then search the foreground for heads of a predetermined size; a haar feature is a rectangular feature whose size and combination can be varied to describe the shape and grayscale information of a target.
  • The classifier used in that method detects only one class of target and cannot detect different classes simultaneously; for example, it cannot detect both dark-haired heads (including dark hats) and light-haired heads (including light-colored hats) at the same time, so the head count is not comprehensive.
  • the present invention provides a method and system for human traffic statistics to solve the problem that the existing human traffic statistics scheme is not comprehensive.
  • the embodiment of the present invention adopts the following technical solutions:
  • A method for human traffic statistics comprises: performing head detection on the current image with parallel multi-class classifiers to determine each head in the current image; tracking the determined heads to form head target motion trajectories; and counting the flow of people according to the directions of the head target motion trajectories.
  • The method further includes: performing edge-feature fine screening on the heads detected by the parallel multi-class classifiers.
  • Performing edge-feature fine screening on the heads detected by the parallel multi-class classifiers includes: calculating the degree of fit between the edge features inside a rectangle judged by the classifiers to be a head and a preset upper semi-elliptical arc; if the degree of fit is greater than a threshold, the rectangle is confirmed as a head, otherwise it is removed from the target list.
  • The method further includes: performing scene calibration on the detection area in the image, thereby dividing the detection area into several sub-areas; the head detection by the parallel multi-class classifiers is carried out within these sub-areas.
  • Performing scene calibration on the detection area in the image comprises: selecting calibration boxes; calculating the depth variation coefficient of the scene; calculating the range of head target sizes within the detection area; and dividing the detection area into several sub-areas according to that size range.
  • After the head target motion trajectories are formed and before counting, the method further includes: performing smoothness analysis on the head target motion trajectories.
  • The smoothness analysis of a head target motion trajectory comprises: determining the smoothness of the trajectory and judging whether it satisfies a threshold; if so, the trajectory is retained, otherwise it is discarded.
  • Performing head detection on the image with the parallel multi-class classifiers includes: setting a detection order for the classifiers and applying each classifier to the current image in that order until a head is determined, wherein the parallel multi-class classifiers are formed by connecting at least two classes of classifiers in parallel.
  • The parallel multi-class classifiers are formed by connecting in parallel any two or more of a dark-hair general classifier, a light-hair classifier, a hat classifier, and an extended classifier.
  • a system for human traffic statistics comprising: a human head detection module, configured to perform a head detection on a current image by using a parallel multi-class classifier to determine each human head in the current image; a head target tracking module, configured to determine each The head is tracked to form a human head target motion trajectory; the human flow counting module is configured to perform human flow counting in the direction of the head target motion trajectory.
  • The head detection module further includes a fine screening sub-module, configured to perform edge-feature fine screening on the heads detected by the parallel multi-class classifiers.
  • the method further includes: a scene calibration module, configured to perform scene calibration on the detection area in the image, thereby dividing the detection area into a plurality of sub-areas.
  • the method further includes: a human head target motion trajectory analysis module, configured to calculate a smoothness of the human head target motion trajectory, determine whether the smoothness satisfies a threshold value, and if yes, retain the human head target motion trajectory; otherwise, discard the human head target motion trajectory.
  • The head detection module includes a coarse detection sub-module, configured to set a detection order for the classifiers and apply each classifier to the current image in that order until a head is determined, wherein the parallel multi-class classifiers are formed by connecting at least two classes of classifiers in parallel.
  • The parallel multi-class classifiers in the head detection module are formed by connecting in parallel any two or more of a dark-hair general classifier, a light-hair classifier, a hat classifier, and an extended classifier. It can be seen that, by using multiple classifiers in parallel, the present invention can simultaneously detect multiple classes of head targets, such as dark hair, light hair, and hats of various colors, ensuring more comprehensive statistics. Further, the present invention also provides an extended classifier that can be trained on samples collected for a special environment to detect heads of a specified color or hat, such as the work caps of a factory or warehouse.
  • Further, on the basis of coarse detection by the multiple parallel classifiers, edge features are used to finely screen the coarse detection results, finally yielding the real head targets and making detection more accurate.
  • the invention automatically selects the size of the detection window by scene calibration before the detection, so that the invention can adapt to various camera angles and broaden the application range.
  • In addition, through smoothness analysis of the head target trajectories, false targets can be removed, further improving detection accuracy.
  • FIG. 1 is a flowchart of a method for human traffic statistics according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for collecting human traffic statistics according to another embodiment of the present invention.
  • FIG. 3 is a flow chart of scene calibration of a preferred embodiment of the present invention.
  • FIG. 4 is a structural block diagram of a human head detection module according to a preferred embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a cascading classification process of various classifiers according to a preferred embodiment of the present invention.
  • FIG. 6 is a flow chart of particle filter tracking in a preferred embodiment of the present invention.
  • FIG. 7 is a flow chart of analyzing a smoothness of a motion trajectory according to a preferred embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of the system for human traffic statistics according to the present invention.
  • Referring to FIG. 1, a flowchart of an embodiment of the present invention is shown, including: S101: performing head detection on the current image with parallel multi-class classifiers to determine each head in the current image;
  • S102: tracking the determined heads to form head target motion trajectories;
  • S103: counting the flow of people according to the directions of the head target motion trajectories.
  • The specific process of head detection with the parallel multi-class classifiers is as follows: set a detection order for the classifiers and apply each classifier to the current image in that order until a head is determined, wherein the parallel multi-class classifiers are formed by connecting at least two classes of classifiers in parallel.
  • One example of the parallel multi-class classifiers is formed by connecting in parallel any two or more of a dark-hair general classifier, a light-hair classifier, a hat classifier, and an extended classifier.
  • FIG. 2 is a flowchart of another embodiment of the present invention, including:
  • S201: Scene calibration; scene calibration refers to calibrating the detection area in the image, thereby dividing the detection area into several sub-areas.
  • S202: Head detection;
  • head detection further comprises two steps, coarse detection by the parallel classifiers and edge-feature fine screening, to determine each head in the current image.
  • S203: Head target tracking; the determined heads are tracked to form head target motion trajectories.
  • S204: performing smoothness analysis on the head target motion trajectories; specifically, this comprises determining the smoothness of a head target motion trajectory and judging whether it satisfies a threshold; if so, the trajectory is retained, otherwise it is discarded.
  • S205: People flow statistics: the flow of people is counted according to the directions of the head target motion trajectories.
  • Since the cameras used for people flow statistics are generally fixed and the scene varies little, the scene calibration module only needs to be enabled before the head target is detected in the first frame; when detecting heads in subsequent frames, the first frame's calibration result can be reused.
  • If the scene changes, scene calibration needs to be enabled again.
  • In the absence of camera rotation, the depth variation of the scene can be approximated as a linear change along the image y coordinate, i.e.:
  • w(x, y) = f · y + c (1)
  • where w(x, y) denotes the width of the circumscribed rectangle of a head target centered at image coordinate (x, y), f is the scene depth coefficient, and c is a constant.
  • The purpose of scene calibration is to determine the values of f and c from the calibration boxes; the size of the circumscribed rectangle of a head target at any coordinate in the image can then be found from equation (1).
  • The present invention selects 4 to 6 calibration boxes to solve for the two unknowns f and c in equation (1), and then substitutes the coordinates of the upper and lower edges of the detection area into equation (1) to obtain the minimum head size w_min and the maximum head size w_max.
  • According to this range of head sizes, the detection area is divided into several sub-areas, each corresponding to a narrower head-size range; in the head detection module, each sub-area is then searched for candidate rectangles with windows of the corresponding sizes.
  • The block diagram of the scene calibration step is shown in FIG. 3.
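  • The calibration step above can be sketched in code (a sketch under assumptions: the patent states only that 4 to 6 calibration boxes are used to solve for the two unknowns f and c in equation (1); the least-squares solver, the function names, and the fixed band count are illustrative choices):

```python
import numpy as np

def calibrate_scene(boxes):
    """Least-squares fit of w(y) = f*y + c from calibration boxes.

    Each box is (y_center, width): the image y coordinate of a manually
    drawn head rectangle and its width in pixels.
    """
    ys = np.array([b[0] for b in boxes], dtype=float)
    ws = np.array([b[1] for b in boxes], dtype=float)
    A = np.stack([ys, np.ones_like(ys)], axis=1)
    (f, c), *_ = np.linalg.lstsq(A, ws, rcond=None)
    return float(f), float(c)

def split_detection_area(y_top, y_bottom, f, c, n_sub=4):
    """Divide the detection area into n_sub horizontal bands; each band
    gets its own head-size search range from equation (1)."""
    edges = np.linspace(y_top, y_bottom, n_sub + 1)
    return [(float(edges[i]), float(edges[i + 1]),
             f * float(edges[i]) + c, f * float(edges[i + 1]) + c)
            for i in range(n_sub)]
```

With boxes drawn around heads at different depths, w_min and w_max then follow by substituting the top and bottom edges of the detection area into the fitted line.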
  • Head detection in the present invention is divided into two steps: coarse detection by the parallel classifiers and fine screening by edge features. In the coarse detection step, most non-head regions are excluded by the pre-trained classifiers, leaving the head targets and some falsely detected regions; the edge-feature fine screening step then removes most of the false detections and retains the real head targets.
  • the block diagram of the human head detection module is shown in Figure 4.
  • According to the Adaboost algorithm, the invention uses haar features to train multiple classifiers: a dark-hair general classifier covering both frontal and back views, a dark-hair front branch classifier, a dark-hair back branch classifier, a light-hair classifier, a hat classifier, and extended classifiers tailored to specific environments.
  • the combination of multiple classifiers is shown in the coarse detection section of Figure 4.
  • The dark-hair general classifier is combined with the front branch classifier and the back branch classifier into a tree structure, which is then connected in parallel with the light-hair classifier, the hat classifier, and the extended classifier; the detection results of the classifiers enter the head edge-feature fine screening, finally yielding the real head targets.
  • Coarse detection by the parallel classifiers: the classifiers must be trained in advance with a large number of positive and negative samples.
  • The present invention trains the classifiers using the haar features and the Adaboost algorithm also used in face detection.
  • the Haar feature consists of two or three rectangles of different sizes.
  • the shape and grayscale information of a specific target can be described by changing the size, combination, and angle of the rectangle.
  • The Adaboost algorithm combines several weak classifiers into a strong classifier: each weak classifier selects one or several haar features to classify the samples, and the weak classifiers are combined by the Adaboost algorithm into one stage of a strong classifier.
  • the various types of classifiers described in the present invention are all cascaded by several levels of strong classifiers.
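  • The cascade structure described above can be sketched as follows (illustrative only: the weak learners, weights, and thresholds come from offline Adaboost training, which is not reproduced here, and the function names are assumptions):

```python
def strong_stage(x, weak_learners, alphas, thresh):
    """One stage of a strong classifier: an Adaboost-style weighted vote
    of weak classifiers, each voting +1 (head) or -1 (non-head) based on
    one or a few haar features."""
    score = sum(a * h(x) for h, a in zip(weak_learners, alphas))
    return score >= thresh

def cascade_classify(x, stages):
    """A class-specific classifier is a cascade of strong stages; a
    candidate rectangle counts as a head only if every stage accepts it,
    so most non-head regions are rejected early."""
    return all(strong_stage(x, w, a, t) for (w, a, t) in stages)
```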
  • the invention searches for the head target candidate rectangle in an exhaustive manner according to the size of the head target obtained by the scene calibration module.
  • The candidate rectangles are input to the dark-hair general classifier, the light-hair classifier, the hat classifier, and the extended classifier for classification. If a candidate is classified as a head, it is output as a detected head target and judgment continues with the next candidate rectangle; otherwise the candidate is discarded and the next candidate rectangle is processed.
  • For a candidate rectangle to be classified as a head target by a classifier, it must pass every cascaded stage of that classifier; otherwise it is classified as a non-head target.
  • the process diagram is shown in FIG. 5.
  • The classifier tried first can be adjusted to the actual application. In general scenes, dark hair is most probable, so the dark-hair classifier is tried first; in special scenes, such as monitoring a warehouse door, the extended classifier trained on work-cap samples can be tried first to speed up detection.
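  • The priority ordering amounts to a simple short-circuit loop over the parallel classifiers (an illustrative sketch; `detect_head` and the classifier callables are hypothetical names, not from the patent):

```python
def detect_head(candidate, classifiers):
    """Try the parallel classifiers in the configured priority order
    (e.g. dark hair first in general scenes, the work-cap extended
    classifier first at a warehouse door); accept the candidate as a
    head as soon as any one classifier accepts it."""
    for clf in classifiers:
        if clf(candidate):
            return True
    return False
```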
  • Edge-feature fine screening: coarse detection by the parallel classifiers excludes most non-head rectangles, leaving only real head rectangles and rectangles misdetected as heads by the classifiers.
  • Edge-feature fine screening extracts the edge features inside each rectangle to remove most of the misdetected rectangles while retaining the real head targets.
  • The invention adopts the upper half-arc of an ellipse as the head model; edge-feature fine screening calculates the degree of fit between the edge features inside a rectangle judged by the classifiers to be a head target and the upper semi-elliptical arc. If the degree of fit is greater than the judgment threshold, the rectangle is a real head rectangle; otherwise it is a misdetection and is removed from the target list.
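  • One plausible way to compute such a degree of fit is the fraction of edge points lying near the upper elliptical arc (an assumption: the patent does not define the exact fit formula, and the tolerance `tol` is an invented parameter):

```python
def ellipse_fit_degree(edge_points, rect, tol=0.2):
    """Fraction of extracted edge points that lie close to the upper
    half-arc of the ellipse inscribed in the candidate rectangle.

    rect is (x0, y0, w, h) with (x0, y0) the top-left corner; image y
    grows downward, so the upper arc is the set of points with y <= cy.
    """
    x0, y0, w, h = rect
    cx, cy = x0 + w / 2.0, y0 + h / 2.0   # ellipse centre
    a, b = w / 2.0, h / 2.0               # semi-axes
    hits = 0
    for (x, y) in edge_points:
        if y > cy:                        # below the centre: not on the upper arc
            continue
        # normalised deviation from the ellipse boundary
        d = abs((x - cx) ** 2 / a ** 2 + (y - cy) ** 2 / b ** 2 - 1.0)
        if d < tol:
            hits += 1
    return hits / max(len(edge_points), 1)
```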
  • the target tracking module of the present invention uses a particle filter algorithm to track the head target.
  • Step 601: Particle initialization;
  • Step 602: Particle resampling;
  • During tracking, the particles suffer from "degeneracy": the weights of the few particles close to the real head rectangle grow large, while the weights of most particles far from it become very small, and much computation is wasted on these near-zero-weight particles.
  • The particles are therefore resampled each time the particle weights are updated: particles with larger weights are preserved and copied, particles with smaller weights are eliminated, and the weighted particle set is mapped to an equally weighted set that continues predictive tracking. When a tracker is newly created, its particles already have equal weights, so no resampling is needed.
  • Step 603: Particle propagation; particle propagation, that is, the state transition of the particles, refers to the process of updating the particle states over time.
  • the state of the particles refers to the position and size of the target rectangle represented by the particles.
  • Particle propagation is achieved by a random motion process, where the current state of the particle is obtained from the previous state plus a random quantity.
  • each of the current particles represents a possible position and size of the head target in the current frame.
  • Step 604: Update the particle weights according to the observations; propagation only yields possible positions and sizes of the head target in the current frame, and observations of the current image are needed to determine which particles are most likely to be head rectangles.
  • The haar features and edge features extracted from the image rectangle corresponding to each particle are used as the observation to update that particle's weight.
  • Step 605: Update the target motion trajectory; sort the particles by weight and take the particle with the largest weight, then compute the overlap between its rectangle and each detected head target rectangle. The detected head whose overlap is largest and exceeds a set threshold is taken as the position, in the current frame, of the head target represented by this tracker; the tracker's motion trajectory is updated with that position, the detected head replaces the largest-weight particle, and tracking proceeds to the next frame.
  • If the largest-weight particle overlaps none of the heads detected in the current frame, or the overlap is smaller than the threshold, the head target represented by this tracker is considered not found in the current frame; the tracker's trajectory is then updated with the particle's position and tracking proceeds to the next frame. If no corresponding head target can be found for N (N > 2) consecutive frames, the head target represented by the tracker is deemed to have disappeared and the tracker is eliminated.
  • In this way, head targets are associated from frame to frame, forming the motion trajectory of each head target.
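  • Steps 602 and 603 can be sketched as follows (a minimal sketch: the Gaussian noise scales and the three-component state are illustrative assumptions, not values from the patent):

```python
import random

def propagate(particles, sigma=3.0):
    """Random-walk state transition: each particle's state (x, y, w),
    i.e. centre position and head-rectangle width, is the previous
    state plus a random quantity."""
    return [(x + random.gauss(0.0, sigma),
             y + random.gauss(0.0, sigma),
             w + random.gauss(0.0, 0.2 * sigma)) for (x, y, w) in particles]

def resample(particles, weights):
    """Counter degeneracy: preserve and copy heavy particles, drop light
    ones, and return an equally weighted particle set."""
    n = len(particles)
    chosen = random.choices(particles, weights=weights, k=n)
    return chosen, [1.0 / n] * n
```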
  • Trajectory smoothness analysis module
  • The movement of a real head target is relatively smooth, while falsely detected targets may move erratically; the present invention therefore further improves detection accuracy by removing false detections through smoothness analysis of the target motion trajectories.
  • The target motion trajectories generated by the tracking module are analyzed and the smoothing coefficient of each trajectory is calculated; if the coefficient is greater than the set smoothing threshold, the trajectory is retained, otherwise it is eliminated.
  • The flow of the trajectory smoothness analysis module is shown in FIG. 7.
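  • A minimal stand-in for this smoothness test might look like the following (assumption: the patent does not define its smoothing coefficient in this text, so net displacement over path length is used purely for illustration):

```python
import math

def smoothness(track):
    """Illustrative smoothness coefficient: net displacement divided by
    total path length (1.0 means perfectly straight). The patent's exact
    formula is not given, so this is only a stand-in measure."""
    if len(track) < 2:
        return 1.0
    path = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    net = math.dist(track[0], track[-1])
    return net / path if path > 0 else 1.0

def keep_track(track, threshold=0.7):
    """Retain a trajectory only if its smoothness meets the threshold."""
    return smoothness(track) >= threshold
```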
  • The invention counts the flow of people by the direction of the head target motion trajectories.
  • The invention judges whether the direction of a target trajectory within the detection area is consistent with the configured "people entering" direction: if consistent, the "entered" count is incremented by one, otherwise the "left" count is incremented by one. After counting, the target is marked as "counted", leaving its trajectory in an invalid state and avoiding repeated counting of the same target.
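  • The counting rule can be sketched as follows (illustrative: the dot-product direction test and the counter names are assumptions consistent with, but not specified by, the text):

```python
def update_counts(track, enter_dir, counts):
    """Compare a finished trajectory's overall direction with the
    configured 'people entering' direction via a dot-product sign and
    increment the matching counter exactly once."""
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if dx * enter_dir[0] + dy * enter_dir[1] >= 0:
        counts["entered"] += 1
    else:
        counts["left"] += 1
    return counts
```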
  • the present invention also provides a system for human traffic statistics, which can be implemented by software, hardware or a combination of software and hardware.
  • The system includes: a head detection module 801, configured to perform head detection on the current image with parallel multi-class classifiers and determine each head in the current image; a head target tracking module 802, configured to track the heads determined by the head detection module 801 to form head target motion trajectories; and a human flow counting module 803, configured to count the flow of people according to the directions of the head target motion trajectories determined by the head target tracking module 802.
  • The head detection module 801 includes a coarse detection sub-module, configured to set a detection order for the classifiers and apply each classifier to the current image in that order until a head is determined, wherein the parallel multi-class classifiers are formed by connecting at least two classes of classifiers in parallel.
  • The parallel multi-class classifiers in the head detection module 801 are formed by connecting in parallel any two or more of a dark-hair general classifier, a light-hair classifier, a hat classifier, and an extended classifier.
  • The head detection module 801 further includes a fine screening sub-module, configured to perform edge-feature fine screening on the heads detected by the parallel multi-class classifiers.
  • the system further includes: a scene calibration module 804, configured to perform scene calibration on the detection area in the image, thereby dividing the detection area into a plurality of sub-areas.
  • The purpose of the scene calibration module 804 is to obtain the depth coefficient of the scene; from this coefficient, the size of a head target at each position in the image can be calculated, providing detection sizes for the head detection module.
  • the human head detection module 801 searches for a human head target in a specified number of sub-areas according to the size provided by the scene calibration module 804.
  • the system further includes: a head target motion trajectory analysis module 805, configured to calculate a smoothness of the head target motion trajectory, determine whether the smoothness meets a threshold, and if so, retain the head target motion trajectory, otherwise, discard The head target movement trajectory.
  • The human flow counting module 803 counts the flow of people according to the directions of the head target motion trajectories retained by the head target motion trajectory analysis module 805.
  • the present invention uses the haar feature to train multiple parallel classifiers for rough head detection based on the Adaboost algorithm, and then uses edge features to finely select the rough detection results, and finally obtains a real head target.
  • a plurality of classifiers are used in parallel, and a plurality of types of head targets such as dark hair, light hair, and various color hats can be simultaneously detected.
  • The present invention also provides an extended classifier, which can be trained on samples collected for a special environment to detect heads of a specified color or hat, such as the work caps of a factory or warehouse.
  • the present invention automatically selects the size of the detection window by scene calibration before the detection, so that the present invention can adapt to various camera angles and broaden the application range.
  • In addition, through smoothness analysis of the head target trajectories, false targets can be removed and detection accuracy improved.

Description

Method and System for People Flow Statistics. TECHNICAL FIELD The present invention relates to the field of video surveillance and image processing and analysis technologies, and in particular to a method and system for people flow statistics.
BACKGROUND With the continuous progress of society, video surveillance systems are applied ever more widely. Surveillance cameras are commonly installed at the entrances and exits of supermarkets, shopping malls, stadiums, airports and railway stations, so that security personnel and managers can monitor them. On the other hand, the flow of people entering and leaving such places is of great significance to their operators and managers. Here, people flow refers to the number of people moving in a given direction; in this document it specifically means the number of people moving in the entering and leaving directions. In existing video surveillance, people flow statistics are mainly obtained by manual counting by monitoring personnel. Manual counting is fairly reliable when the monitoring period is short and the flow is sparse, but because of the biological limits of human vision, accuracy drops greatly when monitoring lasts long and the flow is dense, and manual counting also consumes considerable labor. People flow statistics based on video analysis can count automatically and avoid the problems of manual counting. At present, video-analysis-based flow statistics methods fall into three main categories:
The first is based on feature point tracking: a number of moving feature points are tracked and their trajectories are cluster-analyzed to obtain people flow information. Its disadvantage is that the feature points themselves are difficult to track stably, so counting accuracy is poor. The second is based on human body segmentation and tracking: moving target blocks are first extracted, then segmented into individual human targets, and finally each human target is tracked to count the flow. Its disadvantage is that when bodies occlude one another, segmentation accuracy is hard to guarantee, which hurts statistical accuracy. The third is based on detection and tracking of the head, or of the head and shoulders: heads (or heads and shoulders) are detected in the video and tracked to count the flow. When the camera angle is suitable, heads are rarely occluded, so head-detection-based methods are more accurate than the first two. Some companies have proposed people-counting methods based on head detection; for example, in the method of patent application No. 200910076256.X by Beijing Zhongxing Microelectronics (Vimicro), the moving foreground is first extracted, and two serial classifiers trained on haar features then search the foreground for heads of a predetermined size. A haar feature is a rectangular feature; by varying the size and combination of the rectangles, the shape and grayscale information of a target can be described. The classifier used in that method detects only one class of target and cannot detect different classes simultaneously; for example, it cannot detect both dark-haired heads (including dark hats) and light-haired heads (including light-colored hats) at the same time, so the head count is not comprehensive.
SUMMARY OF THE INVENTION
In view of this, the present invention provides a method and system for people flow statistics to solve the problem that existing people flow statistics schemes are not comprehensive. To this end, embodiments of the present invention adopt the following technical solutions:
A method for people flow statistics comprises: performing head detection on the current image with parallel multi-class classifiers to determine each head in the current image; tracking the determined heads to form head target motion trajectories; and counting the flow of people according to the directions of the head target motion trajectories.
After performing head detection on the current image with the parallel multi-class classifiers and before determining each head in the current image, the method further includes: performing edge-feature fine screening on the heads detected by the parallel multi-class classifiers.
The edge-feature fine screening of the heads detected by the parallel multi-class classifiers includes: calculating the degree of fit between the edge features inside a rectangle judged by the classifiers to be a head and a preset upper semi-elliptical arc; if the degree of fit is greater than a threshold, the rectangle is confirmed as a head, otherwise it is removed from the target list.
Before performing head detection on the current image with the parallel multi-class classifiers, the method further includes: performing scene calibration on the detection area in the image, thereby dividing the detection area into several sub-areas; the head detection by the parallel multi-class classifiers is carried out within these sub-areas. Performing scene calibration on the detection area in the image includes: selecting calibration boxes; calculating the depth variation coefficient of the scene; calculating the range of head target sizes within the detection area; and dividing the detection area into several sub-areas according to that range.
After forming the head target motion trajectories and before counting the flow of people according to their directions, the method further includes: performing smoothness analysis on the head target motion trajectories.
The smoothness analysis of a head target motion trajectory includes: determining the smoothness of the trajectory and judging whether it satisfies a threshold; if so, the trajectory is retained, otherwise it is discarded.
Performing head detection on the image with the parallel multi-class classifiers includes: setting a detection order for the classifiers and applying each classifier to the current image in that order until a head is determined, wherein the parallel multi-class classifiers are formed by connecting at least two classes of classifiers in parallel.
The parallel multi-class classifiers are formed by connecting in parallel any two or more of a dark-hair general classifier, a light-hair classifier, a hat classifier, and an extended classifier.
一种人流量统计的系统, 包括: 人头检测模块, 用于采用并联的多类分类 器对当前图像进行人头检测, 确定当前图像中的各人头; 人头目标跟踪模块, 用于对确定出的各人头进行跟踪, 形成人头目标运动轨迹; 人流量计数模块, 用于在人头目标运动轨迹方向进行人流量计数。
所述人头检测模块还包括细筛选子模块, 用于对并联的多类分类器检测到的人头进行边缘特征细筛选处理。
还包括: 场景标定模块, 用于对图像中的检测区域进行场景标定, 从而将 检测区域划分为若干个子区域。
还包括: 人头目标运动轨迹分析模块, 用于计算人头目标运动轨迹的平滑度, 判断所述平滑度是否满足阈值, 若是, 保留该人头目标运动轨迹, 否则, 丢弃该人头目标运动轨迹。
所述人头检测模块包括粗检测子模块, 用于设置各类分类器的检测顺序, 按照检测顺序依次采用各个分类器对当前图像进行人头检测, 直到确定出人 头, 其中, 所述并联的多类分类器由至少两类分类器并联而成。
所述人头检测模块中的所述并联的多类分类器由深色头发通用分类器、浅 色头发分类器、 帽子分类器和扩展分类器中的任意两种或多种并联而成。 可见, 本发明中将多个分类器并联使用, 能同时检测深色头发、 浅色头发 以及各种颜色帽子等多类人头目标, 确保统计更加全面。 进一步, 本发明还设 置了一个扩展分类器, 可以根据特殊环境的应用, 采集样本训练, 检测指定颜 色或帽子的人头, 比如工厂或仓库的工作帽等。 进一步, 在多个并联的分类器 作为人头粗检测的基础上,再利用边缘特征对粗检测结果进行细筛选, 最后得 到真正的人头目标, 使得检测更加准确。 另外, 本发明在检测前通过场景标定 自动选择检测窗口的尺寸,使本发明能自适应各种摄像机角度,拓宽了应用范 围。 并且, 通过对人头目标轨迹的平滑度分析可以去除虚假目标, 可进一步提 高检测准确率。
附图说明 图 1为本发明一实施例人流量统计的方法流程图;
图 2为本发明另一实施例人流量统计的方法流程图;
图 3为本发明较优实施例场景标定流程图;
图 4为本发明较优实施例人头检测模块结构框图;
图 5为本发明较优实施例各类分类器级联分类过程示意图;
图 6为本发明较优实施例粒子滤波跟踪的流程图;
图 7为本发明较优实施例运动轨迹平滑度分析流程图;
图 8为本发明人流量统计的系统结构示意图。
具体实施方式 现有的基于人头检测确定人流量的方案中,是采用单类分类器进行的,这 种方案常常导致漏检, 例如, 无法同时检测深色头发(包括戴深色帽子)的人 头与浅色头发(包括戴浅色帽子)的人头, 为了解决目前检测不全面、 不准确 的问题, 本发明提出一种人流量统计的方法, 请参见图 1 , 为本发明一实施例 流程图, 包括: S101 : 采用并联的多类分类器对当前图像进行人头检测, 确定当前图像 中的各人头;
S102: 对确定出的各人头进行跟踪, 形成人头目标运动轨迹; S103: 根据人头目标运动轨迹方向进行人流量计数。 其中, 采用并联的多类分类器对图像进行人头检测的具体过程为: 设置各 类分类器的检测顺序,按照检测顺序依次采用各个分类器对当前图像进行人头 检测, 直到确定出人头, 其中, 所述并联的多类分类器由至少两类分类器并联 而成, 并联的多类分类器的一个实例是, 由深色头发通用分类器、 浅色头发分 类器、 帽子分类器和扩展分类器中的任意两种或多种并联而成。 可见, 本发明在人头检测中, 采用多类分类器并联的方式, 对各类人头进 行检测, 从而扩大了检测范围, 提高人流量统计的准确性。 为了进一步提高人流量统计的准确性, 在图 1所示的方案基础上, 可进一 步进行优化, 包括, 场景标定、 对并联分类器粗检测的人头进行边缘特征细筛 选, 以及对人头目标运动轨迹进行平滑度分析等, 请参见图 2, 为本发明另一 实施例流程图, 包括:
S201 : 场景标定;
具体地, 场景标定是指对图像中的检测区域进行场景标定,从而将检测区 域划分为若干个子区域。
S202: 人头检测; 人头检测进一步包括并联分类器粗检测以及边缘特征细筛选两个步骤,从 而确定当前图像中的各人头。
S203: 人头目标跟踪; 通过对确定出的各人头进行跟踪, 形成人头目标运动轨迹。
S204: 对人头目标运动轨迹进行平滑度分析; 具体地, 对人头目标运动轨迹进行平滑度分析包括: 确定人头目标运动轨迹的平滑度, 判断所述平滑度是否满足阈值, 若是, 保留该人头目标运动轨迹, 否则, 丢弃该人头目标运动轨迹。 S205: 人流量统计: 通过人头目标运动轨迹方向对人流量进行计数。 需要说明的是, 上述场景标定、对并联分类器粗检测的人头进行边缘特征细筛选, 以及对人头目标运动轨迹进行平滑度分析等改进点, 既可以单独使用, 也可以结合使用。 下面对包含所有改进点的本发明最优实施例进行详细分析。 1、 场景标定 由于用于人流量统计的摄像机一般都是固定安装的, 场景变化性较小, 因此场景标定模块只需要在第一帧检测人头目标前启用, 之后各帧检测人头时均采用第一帧标定的结果即可。如果场景发生变化, 则需要再次启用场景标定。 在摄像机无旋转的情况下, 场景的深度变化可以近似为沿图像 y 坐标成线性变化, 即:
w(x, y) = f × y + c    (1)

其中, w(x, y) 表示中心图像坐标为 (x, y) 的人头目标外接矩形的宽度, f 为场景深度系数, c 为常数。场景标定的目的就是通过标定框确定 f 和 c 的值, 从而通过式(1)求出图像中任意坐标处人头目标外接矩形的尺寸。
本发明通过选择 4~6 个标定框计算式(1)中的两个未知量 f 和 c, 从而得到场景的深度变化系数, 然后将检测区域外接矩形的上边缘和下边缘坐标代入式(1)中, 得到检测区域内最小人头尺寸 wmin 和最大人头尺寸 wmax, 最后, 根据人头尺寸变化范围将检测区域分为若干个子区域, 每个子区域对应一个变化较小的人头尺寸范围, 在接下来的人头检测模块中, 每个子区域用不同尺寸窗口搜索候选矩形。 场景标定步骤框图如图 3 所示, 包括: S301: 选择标定框; S302: 计算场景深度变化系数; S303: 计算检测区域内人头目标尺寸变化范围; S304: 根据人头目标尺寸变化范围将检测区域划分为若干个子区域。 至此, 场景标定结束。接下来开始在每一帧图像中进行人头的检测、跟踪和计数。
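上述标定过程可用如下 Python 片段示意: 先由标定框用最小二乘拟合式(1)中的 f 和 c, 再按人头尺寸变化范围将检测区域划分为子区域。此处仅为按式(1)思路给出的简化实现, 函数名与子区域的等分方式均为示例假设, 并非本发明的原始代码。

```python
# 简化示意: 由标定框拟合 w(y) = f*y + c, 并划分检测子区域(示例实现)。

def fit_depth_model(calib_boxes):
    """calib_boxes: [(y, w), ...] 标定框中心纵坐标及其宽度, 最小二乘返回 (f, c)"""
    n = len(calib_boxes)
    sum_y = sum(y for y, _ in calib_boxes)
    sum_w = sum(w for _, w in calib_boxes)
    sum_yy = sum(y * y for y, _ in calib_boxes)
    sum_yw = sum(y * w for y, w in calib_boxes)
    f = (n * sum_yw - sum_y * sum_w) / (n * sum_yy - sum_y ** 2)
    c = (sum_w - f * sum_y) / n
    return f, c

def split_regions(f, c, y_top, y_bottom, num_regions):
    """按 y 坐标等分检测区域, 每个子区域附带对应的人头尺寸范围 (w0, w1)"""
    step = (y_bottom - y_top) / num_regions
    regions = []
    for i in range(num_regions):
        y0, y1 = y_top + i * step, y_top + (i + 1) * step
        regions.append((y0, y1, f * y0 + c, f * y1 + c))
    return regions
```

这样, 每个子区域内人头尺寸变化较小, 后续人头检测即可在各子区域内使用接近固定尺寸的搜索窗口。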
2、 人头检测 本发明中的人头检测分为并联分类器粗检测和边缘特征细筛选两个环节。 并联分类器粗检测环节中通过预先训练好的分类器将大部分非人头区域 排除, 剩下人头目标和部分误检区域, 然后再通过边缘特征细筛选环节去除大 部分误检, 保留真实人头目标。 人头检测模块结构框图如图 4所示。 本发明采用 haar特征基于 Adaboost算法分别训练包含正面人头和背面人 头的深色头发通用分类器、深色头发正面分支分类器、深色头发背面分支分类 器、 浅色头发分类器、帽子分类器以及为适应特定环境专门设置的扩展分类器 等多个分类器。 多个分类器的组合方式如图 4粗检测环节所示: 深色头发通用 分类器与正面分支分类器、 背面分支分类器组合成树形结构, 然后与浅色头发 分类器、帽子分类器以及扩展分类器形成并联, 分类器检测结果进入人头边缘 细筛选环节, 最后得到真实的人头目标。 2.1、 并联分类器粗检测环节 分类器需要预先用大量正样本和负样本进行训练,本发明采用人脸检测中 使用的 haar特征加 Adaboost算法训练识别器。
Haar特征由两个或三个不同尺寸的矩形构成。 通过改变矩形的尺寸、 组 合方式和角度可以描述特定目标的形状和灰度信息。 Adaboost算法是一种能 将若干弱分类器组合成强分类器的方法。每一个弱分类器选择一个或几个 haar 特征来对样本进行分类, 若干个弱分类器通过 Adaboost算法组合成一级强分 类器。 本发明中所述的各类分类器, 均由若干级强分类器级联而成。
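为便于理解 haar 矩形特征的计算方式, 以下给出一个基于积分图的简化示意(积分图加速矩形求和是此类特征的常用实现手段; 具体的矩形布局与接口均为本文示例的假设, 并非本发明所训练分类器的实际特征集):

```python
import numpy as np

def integral_image(img):
    """积分图: 每个位置存放其左上方所有像素之和"""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """利用积分图在 O(1) 时间内求矩形 [x, x+w) x [y, y+h) 的像素和"""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, x, y, w, h):
    """两矩形 haar 特征的一种: 左右两半矩形像素和之差, 描述水平灰度变化"""
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w // 2, h)
    return left - right
```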
本发明在检测区域内, 根据场景标定模块得到的人头目标尺寸, 采用穷举的方式搜索人头目标候选矩形。将候选矩形分别输入到深色头发通用分类器、浅色头发分类器、帽子分类器以及扩展分类器中进行分类, 如果被分类为人头, 则该候选矩形被检测为人头目标输出, 继续判断下一个候选矩形, 否则, 将该候选矩形丢弃, 继续判断下一个候选矩形。 在上述过程中, 一个候选矩形被分类器分类为人头目标需要逐级通过级联分类器的各级强分类器, 否则被分类为非人头目标, 其过程示意图如图 5 所示。 另外, 上述分类器检测过程中, 优先选择的分类器可以根据实际应用调整。一般的应用场景中深色头发的概率最大, 因此优先选择深色头发分类器检测; 在特定场景, 比如检测仓库门口, 可优先选择工作帽样本训练得到的扩展分类器检测, 以加快检测速度。
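上述"按设定顺序依次调用各并联分类器, 任一分类器接受即判为人头"的流程可示意如下(分类器以可调用对象表示, 其接口为示例假设):

```python
# 简化示意: 并联多类分类器的粗检测循环(示例接口, 非原始实现)。

def detect_head(candidate, classifiers):
    """classifiers: [(名称, 分类器)] 按优先级排列; 返回命中的分类器名或 None"""
    for name, clf in classifiers:
        if clf(candidate):       # 候选矩形逐级通过该类分类器的各级强分类器
            return name          # 被任一类分类器接受, 即判为人头目标
    return None                  # 所有分类器均拒绝, 丢弃该候选矩形
```

调整 classifiers 列表的排列顺序, 即对应正文中"根据实际应用调整优先选择的分类器"的做法。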
2.2、 边缘特征细筛选环节 通过并联分类器粗检测环节, 大部分非人头矩形被排除了, 只留下真实人头矩形和被分类器误检为人头的矩形。边缘特征细筛选环节则能通过提取矩形内的边缘特征去除大部分误检矩形, 保留真实人头目标。 本发明采用椭圆上半圆弧作为人头模型, 边缘特征细筛选就是计算被分类器判断为人头目标的矩形内边缘特征与上半椭圆弧的拟合度。如果拟合度大于判断阈值, 则该矩形为真实人头矩形, 否则为误检人头矩形, 将该矩形从目标列表中去除。
3、 人头跟踪
人头目标检测出来后需要进行跟踪, 形成目标运动轨迹, 以避免同一个 目标重复计数。 本发明的目标跟踪模块采用粒子滤波算法对人头目标进行跟 踪。
粒子滤波跟踪的流程如图 6所示, 具体过程如下: 步骤 601 : 粒子初始化;
当新检测到的人头目标没有已有的粒子对应时, 则新生成一个粒子跟踪 器, 并用新检测到的目标初始化跟踪器中各个粒子的位置和尺寸, 并赋给各粒 子相等的权重值。 步骤 602: 粒子重采样;
在跟踪过程中, 粒子经过几次权重更新后会出现"退化现象", 即接近真实人头矩形的少数粒子的权重会变得较大, 而远离人头矩形的大部分粒子的权重变得很小, 大量的计算会浪费在这些权重很小的粒子上。为了解决"退化现象", 每次粒子权重更新后应该对粒子进行重采样。 粒子重采样就是保留和复制权重较大的粒子, 剔除权重较小的粒子, 使原来带权重的粒子映射为等权重的粒子继续预测跟踪。跟踪器新生成时, 跟踪器中各粒子的权重相等, 因此不需要再进行重采样。 步骤 603: 粒子的传播; 粒子的传播, 也即粒子的状态转移, 是指粒子的状态随时间的更新过程。本发明中, 粒子的状态是指粒子所代表的目标矩形的位置和尺寸。粒子的传播采用一种随机运动过程实现, 即粒子的当前状态由上一个状态加上一个随机量得到。这样, 当前的每一个粒子都代表着人头目标在当前帧中的一个可能位置和尺寸。 步骤 604: 根据观测值更新粒子权重; 粒子通过传播方式只是得到了人头目标在当前帧中的可能位置和尺寸, 还需要利用当前图像的观测值来确定哪些粒子最有可能是人头矩形。本发明中提取粒子对应图像矩形的 haar 特征和边缘特征作为观测值更新粒子的权重。粒子的观测值与真实人头越接近, 则该粒子对应的矩形越可能是人头矩形, 粒子的权重增大; 否则, 粒子的权重减小。 步骤 605: 更新目标运动轨迹; 将粒子按权重大小排序, 取出权重最大的粒子, 计算权重最大的粒子对应的矩形与检测得到的所有人头目标矩形的重叠面积, 重叠面积最大且大于设定阈值的人头目标即是该粒子所在跟踪器代表的人头目标在当前帧中对应的人头, 则用该人头目标的位置更新跟踪器的目标运动轨迹, 并用该人头目标代替权重最大的粒子, 进入下一帧跟踪; 如果权重最大的粒子与当前帧中检测出来的所有人头目标均不重叠或重叠面积小于阈值, 则认为该粒子所在跟踪器代表的人头目标在当前帧中没有找到对应的人头, 则用该粒子的位置更新跟踪器的目标运动轨迹, 并进入下一帧跟踪。如果权重最大的粒子连续 N (N > 2) 帧找不到对应人头目标, 则说明该粒子所在的跟踪器代表的人头目标已经消失, 剔除该跟踪器。
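上述粒子滤波中的重采样、传播与权重更新三个核心步骤可用如下 Python 片段示意。为简洁起见, 粒子状态仅以二维位置表示(省略尺寸), 观测似然采用距离的负指数, 这些具体形式均为示例假设, 并非本发明中以 haar 特征和边缘特征作观测值的原始实现:

```python
import math
import random

def resample(particles, weights):
    """按权重保留/复制粒子, 映射为等权重粒子(多项式重采样的简化示意)"""
    n = len(particles)
    new = random.choices(particles, weights=weights, k=n)
    return new, [1.0 / n] * n

def propagate(particles, sigma):
    """随机游走状态转移: 当前状态 = 上一状态 + 随机量"""
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in particles]

def update_weights(particles, observation, weights):
    """粒子状态越接近观测值, 权重越大(以距离负指数作似然的示意), 并归一化"""
    ox, oy = observation
    new_w = [w * math.exp(-((x - ox) ** 2 + (y - oy) ** 2))
             for (x, y), w in zip(particles, weights)]
    s = sum(new_w) or 1.0
    return [w / s for w in new_w]
```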
经过上述五个步骤, 帧与帧之间的人头目标便关联起来形成了人头目标 的运动轨迹。 4、 轨迹平滑度分析模块
一般来说, 真实人头目标的运动比较平滑, 而误检目标则可能会呈现出 杂乱的运动, 因此, 本发明通过对目标运动轨迹的平滑度分析去除误检, 进一 步提高检测准确性。
对跟踪模块生成的目标运动轨迹进行分析, 计算目标轨迹的平滑系数, 如果平滑系数大于设定的平滑阈值, 则保留该轨迹; 否则, 剔除该轨迹。 轨迹 平滑度分析模块流程如图 7所示, 包括:
S701 : 获取目标运动轨迹;
S702: 确定人头目标运动轨迹的平滑度;
S703: 判断平滑度是否满足预置的平滑度阈值要求, 若是, 执行 S704, 否则, 执行 S705;
S704: 保留该目标运动轨迹;
S705: 丢弃该目标运动轨迹;
S706: 输出目标运动轨迹。
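S701~S706 的平滑度分析流程可用如下片段示意。此处将平滑系数取为首末位移与路径总长之比(直线运动时接近 1, 杂乱运动时接近 0), 这一具体度量为常见做法之一, 属于本文示例的假设:

```python
import math

def trajectory_smoothness(points):
    """平滑系数的一种度量: 首末位移 / 路径总长, 取值 [0, 1]"""
    if len(points) < 2:
        return 0.0
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    disp = math.dist(points[0], points[-1])
    return disp / path if path > 0 else 0.0

def filter_tracks(tracks, threshold=0.8):
    """保留平滑度满足阈值的轨迹, 丢弃其余轨迹(对应 S703~S705)"""
    return [t for t in tracks if trajectory_smoothness(t) >= threshold]
```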
5、 人流量计数模块
本发明通过人头目标运动轨迹方向对人流量进行计数。本发明在检测区域 内判断该目标轨迹的方向与设定的"人流进入"方向是否一致,如果一致,则 "进 入人数 "计数加一, 否则 "离开人数"计数加一。 计数完成后将该目标标记为 "已 计数", 使轨迹处于无效状态, 避免同一个目标重复计数。
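按轨迹方向对"进入/离开"分别计数的逻辑可示意如下。此处以轨迹整体位移与设定"人流进入"方向向量的点积判断方向是否一致, 这一判别方式及接口为示例假设:

```python
# 简化示意: 按轨迹方向统计进入/离开人数(示例实现)。

def count_flow(tracks, enter_direction=(0, 1)):
    """tracks: 有效轨迹列表, 每条为 [(x, y), ...];
    位移与"进入"方向点积为正计入进入人数, 否则计入离开人数。
    实际系统中计数后还应将轨迹标记为"已计数", 避免重复计数。"""
    entered = left = 0
    for t in tracks:
        dx = t[-1][0] - t[0][0]
        dy = t[-1][1] - t[0][1]
        if dx * enter_direction[0] + dy * enter_direction[1] > 0:
            entered += 1
        else:
            left += 1
    return entered, left
```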
至此, 通过场景标定、人头检测、人头目标跟踪、人头目标运动轨迹分析和人流量统计这五大步骤, 即完成了对人流量的全面、准确统计。 与上述方法相对应, 本发明还提供一种人流量统计的系统, 该系统可通过软件、硬件或软硬件结合实现。 参考图 8, 该系统包括: 人头检测模块 801, 用于采用并联的多类分类器对当前图像进行人头检测, 确定当前图像中的各人头; 人头目标跟踪模块 802, 用于对人头检测模块 801 确定出的各人头进行跟踪, 形成人头目标运动轨迹; 人流量计数模块 803, 用于在人头目标跟踪模块 802 确定的人头目标运动轨迹方向进行人流量计数。 其中, 人头检测模块 801 包括粗检测子模块, 用于设置各类分类器的检测顺序, 按照检测顺序依次采用各个分类器对当前图像进行人头检测, 直到确定出人头, 其中, 所述并联的多类分类器由至少两类分类器并联而成。人头检测模块 801 中的所述并联的多类分类器由深色头发通用分类器、浅色头发分类器、帽子分类器和扩展分类器中的任意两种或多种并联而成。优选地, 该人头检测模块 801 还包括细筛选子模块, 用于对并联的多类分类器检测到的人头进行边缘特征细筛选处理。 优选地, 该系统还包括: 场景标定模块 804, 用于对图像中的检测区域进行场景标定, 从而将检测区域划分为若干个子区域。其中, 场景标定模块 804 的目的是获得场景的深度系数, 根据场景深度系数可以计算出图像中各个位置的人头目标的大小, 为人头目标检测模块提供检测尺寸。此时, 人头检测模块 801 根据场景标定模块 804 提供的尺寸, 在指定的若干个子区域内搜索人头目标。 优选地, 该系统还包括: 人头目标运动轨迹分析模块 805, 用于计算人头目标运动轨迹的平滑度, 判断所述平滑度是否满足阈值, 若是, 保留该人头目标运动轨迹, 否则, 丢弃该人头目标运动轨迹。此时, 人流量统计模块 803 是在人头目标运动轨迹分析模块 805 的基础上, 根据运动轨迹方向的人头进行统计的。 上述系统的具体实现请参见方法实施例, 不作赘述。 可见, 本发明采用 haar 特征基于 Adaboost 算法训练多个并联的分类器进行人头粗检测, 再利用边缘特征对粗检测结果进行细筛选, 最后得到真正的人头目标。本发明中将多个分类器并联使用, 能同时检测深色头发、浅色头发以及各种颜色帽子等多类人头目标。本发明还设置了一个扩展分类器, 可以根据特殊环境的应用, 采集样本训练, 检测指定颜色或帽子的人头, 比如工厂或仓库的工作帽等。另外, 本发明在检测前通过场景标定自动选择检测窗口的尺寸, 使本发明能自适应各种摄像机角度, 拓宽了应用范围。并且, 通过对人头目标轨迹的平滑度分析可以去除虚假目标, 可提高检测准确率。
以上所述仅是本发明的优选实施方式,应当指出,对于本技术领域的普通 技术人员来说, 在不脱离本发明原理的前提下, 还可以做出若干改进和润饰, 这些改进和润饰也应视为本发明的保护范围。

Claims

权 利 要 求
1、 一种人流量统计的方法, 其特征在于, 包括:
采用并联的多类分类器对当前图像进行人头检测,确定当前图像中的各人 头;
对确定出的各人头进行跟踪, 形成人头目标运动轨迹;
根据人头目标运动轨迹方向进行人流量计数。
2、 根据权利要求 1所述方法, 其特征在于, 在采用并联的多类分类器对 当前图像进行人头检测之后、 确定当前图像中的各人头之前, 还包括:
对并联的多类分类器检测到的人头进行边缘特征细筛选处理。
3、 根据权利要求 2所述方法, 其特征在于, 所述对并联的多类分类器检测到的人头进行边缘特征细筛选处理包括:
计算所述分类器判断为人头目标的矩形内边缘特征与预置的上半椭圆弧 的拟合度, 如果拟合度大于阈值, 则将该矩形确定为人头, 否则将该矩形从目 标列表中去除。
4、根据权利要求 1所述方法, 其特征在于, 在采用并联的多类分类器对当 前图像进行人头检测之前, 还包括:
对图像中的检测区域进行场景标定, 从而将检测区域划分为若干个子区 域;
所述并联的多类分类器进行人头检测是在所述若干个子区域内进行的。
5、根据权利要求 4所述方法, 其特征在于, 所述对图像中的检测区域进行 场景标定包括:
选择标定框;
计算场景深度变化系数;
计算检测区域内人头目标尺寸变化范围;
根据人头目标尺寸变化范围将检测区域划分为若干个子区域。
6、 根据权利要求 1所述方法, 其特征在于, 在形成人头目标运动轨迹之 后、 根据人头目标运动轨迹方向进行人流量计数之前, 还包括:
对人头目标运动轨迹进行平滑度分析。
7、 根据权利要求 6所述方法, 其特征在于, 所述对人头目标运动轨迹进 行平滑度分析包括:
确定人头目标运动轨迹的平滑度, 判断所述平滑度是否满足阈值, 若是, 保留该人头目标运动轨迹, 否则, 丢弃该人头目标运动轨迹。
8、 根据权利要求 1至 7任一项所述方法, 其特征在于, 所述采用并联的 多类分类器对图像进行人头检测包括:
设置各类分类器的检测顺序,按照检测顺序依次采用各个分类器对当前图 像进行人头检测, 直到确定出人头, 其中, 所述并联的多类分类器由至少两类 分类器并联而成。
9、 根据权利要求 1至 7任一项所述方法, 其特征在于, 所述并联的多类 分类器由深色头发通用分类器、 浅色头发分类器、 帽子分类器和扩展分类器中 的任意两种或多种并联而成。
10、 一种人流量统计的系统, 其特征在于, 包括:
人头检测模块, 用于采用并联的多类分类器对当前图像进行人头检测, 确 定当前图像中的各人头;
人头目标跟踪模块, 用于对确定出的各人头进行跟踪, 形成人头目标运动 轨迹;
人流量计数模块, 用于在人头目标运动轨迹方向进行人流量计数。
11、 根据权利要求 10所述系统, 其特征在于, 所述人头检测模块还包括 细筛选子模块,用于对并联的多类分类器检测到的人头进行边缘特征细筛选处 理。
12、 根据权利要求 10所述系统, 其特征在于, 还包括:
场景标定模块, 用于对图像中的检测区域进行场景标定,从而将检测区域 划分为若干个子区域。
13、 根据权利要求 10所述系统, 其特征在于, 还包括:
人头目标运动轨迹分析模块, 用于计算人头目标运动轨迹的平滑度, 判断所述平滑度是否满足阈值, 若是, 保留该人头目标运动轨迹, 否则, 丢弃该人头目标运动轨迹。
14、 根据权利要求 10、 11、 12或 13所述系统, 其特征在于, 所述人头检测模块包括粗检测子模块, 用于设置各类分类器的检测顺序, 按照检测顺序依次采用各个分类器对当前图像进行人头检测, 直到确定出人头, 其中, 所述并联的多类分类器由至少两类分类器并联而成。
15、 根据权利要求 10、 11、 12或 13所述系统, 其特征在于, 所述人头 检测模块中的所述并联的多类分类器由深色头发通用分类器、 浅色头发分类 器、 帽子分类器和扩展分类器中的任意两种或多种并联而成。
PCT/CN2010/070607 2010-02-10 2010-02-10 人流量统计的方法及系统 WO2011097795A1 (zh)



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1471053A (zh) * 2002-07-26 2004-01-28 佳能株式会社 图象处理方法和设备、图象处理系统以及存储介质
CN101178773A (zh) * 2007-12-13 2008-05-14 北京中星微电子有限公司 基于特征提取和分类器的图像识别系统及方法
US20080219517A1 (en) * 2007-03-05 2008-09-11 Fotonation Vision Limited Illumination Detection Using Classifier Chains
CN101464946A (zh) * 2009-01-08 2009-06-24 上海交通大学 基于头部识别和跟踪特征的检测方法
CN101477641A (zh) * 2009-01-07 2009-07-08 北京中星微电子有限公司 基于视频监控的人数统计方法和系统

Also Published As

Publication number Publication date
EP2535843A1 (en) 2012-12-19
EP2535843A4 (en) 2016-12-21
US8798327B2 (en) 2014-08-05
US20130070969A1 (en) 2013-03-21

