WO2015131734A1 - Pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario - Google Patents

Pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario

Info

Publication number
WO2015131734A1
WO2015131734A1 (PCT/CN2015/072048)
Authority
WO
WIPO (PCT)
Prior art keywords
head
area
region
detection
pedestrian
Prior art date
Application number
PCT/CN2015/072048
Other languages
English (en)
French (fr)
Inventor
陆平
罗圣美
孙健
金立左
武文静
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2015131734A1

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit

Definitions

  • the present invention relates to video surveillance technologies, and in particular, to a pedestrian counting method, apparatus, and storage medium in a forward-looking surveillance scenario.
  • Pedestrian counting technology can play different roles in different settings. In parks, scenic areas, and similar places it can monitor pedestrian flow and, when the flow exceeds an alert level, the flow can be restricted to prevent safety hazards caused by crowding. In shopping malls and similar places, the temporal and spatial distribution of customer flow can be determined so that staff can be increased or reduced accordingly, improving service levels. In important facilities such as power plants, access can be controlled, anomalies monitored, and alarms raised promptly.
  • A.N. Marana et al. used a feature-extraction approach to solve the density estimation problem. Based on the idea of background subtraction, T.W.S. Chow et al. used the crowd foreground area and edge features to estimate crowd density levels.
  • The Chinese patent with application number 200910076256.X (publication number CN101477641) introduces a method and system for people counting based on video surveillance.
  • That invention uses Haar features for human head detection and counts people by tracking heads and estimating their motion.
  • Its shortcoming is that the accuracy of head detection is not high and the false detection rate is high.
  • The Chinese patent with application number 201010114819.2 (publication number CN101877058A) introduces a method and system for people flow statistics that requires training a large number of different classifiers, for example for light hair, dark hair, and hats, and then performing head curve fitting on these regions; the training is cumbersome and produces many false detections.
  • To solve these existing technical problems, embodiments of the present invention are expected to provide a pedestrian counting method, apparatus, and storage medium in a forward-looking surveillance scenario.
  • An embodiment of the present invention provides a pedestrian counting method in a forward-looking surveillance scenario, where the method includes:
  • the monitoring device sets the detection area and the positions of the detection lines, performs head-region and head-shoulder-region detection on the motion mask image within the detection area, and determines the exact head region according to the positional relationship between the head region and the head-shoulder region;
  • the motion trajectory of the determined head region is tracked, and pedestrians are counted by direction according to the motion trajectory and the detection line positions.
  • An embodiment of the present invention provides a pedestrian counting device in a forward-looking surveillance scenario, where the device includes: a detection setting module, a head detection module, a head tracking module, and a head counting module;
  • the detection setting module is configured to set the detection area and the positions of the detection lines;
  • the head detection module is configured to perform head-region and head-shoulder-region detection on the motion mask image within the detection area, and to determine the exact head region according to the positional relationship between the head region and the head-shoulder region;
  • the head tracking module is configured to track the motion trajectory of the head region;
  • the head counting module is configured to count pedestrians by direction according to the motion trajectory and the detection line positions.
  • Embodiments of the present invention provide a computer storage medium in which a computer program is stored; the computer program is used to perform the pedestrian counting method in the forward-looking surveillance scenario described above.
  • Embodiments of the present invention provide a pedestrian counting method, apparatus, and storage medium in a forward-looking surveillance scenario: the detection area and the positions of the detection lines are set; head-region and head-shoulder-region detection is performed on the motion mask image within the detection area; the exact head region is determined according to the positional relationship between the head region and the head-shoulder region; the motion trajectory of the determined head region is tracked; and pedestrians are counted by direction according to the motion trajectory and the detection line positions. In this way, the moving direction and trajectory of each pedestrian can be recorded accurately within the detection area, the number of pedestrians in the surveillance area can be recorded precisely, the occlusion problem that arises when there are many pedestrians is largely avoided, and the computational complexity is reduced.
  • FIG. 1 is a schematic flowchart of a pedestrian counting method in a forward-looking surveillance scenario according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of training the head classifier and the head-shoulder classifier according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of the training process of the Adaboost cascade classifier according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of extracting a motion mask image according to an embodiment of the present invention;
  • FIG. 5 is a schematic flowchart of implementing head region detection according to an embodiment of the present invention;
  • FIG. 6 is a schematic flowchart of implementing head region tracking according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of the effect of pedestrian counting according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a pedestrian counting device in a forward-looking surveillance scenario according to an embodiment of the present invention.
  • In the embodiments of the present invention, the detection area and the positions of the detection lines are set; head-region and head-shoulder-region detection is performed on the motion mask image within the detection area; the exact head region is determined according to the positional relationship between the head region and the head-shoulder region; the motion trajectory of the determined head region is tracked; and pedestrians are counted by direction according to the motion trajectory and the detection line positions.
  • The embodiment of the invention implements a pedestrian counting method in a forward-looking surveillance scenario. As shown in FIG. 1, the method includes the following steps:
  • Step 101: The monitoring device sets the positions of the detection area and the detection lines;
  • Since the surveillance scene covers a large range, the monitoring device sets a detection area on the scene to calibrate the detection range, which shrinks the region to be processed, and sets the positions of two detection lines within the detection area according to the pedestrians' entry and exit directions. The direction of a pedestrian can then be determined from the direction in which the motion trajectory of the pedestrian's head crosses the two detection lines. Setting the detection area and detection lines effectively reduces the computation required for detection and improves detection performance.
  • Step 102: The monitoring device performs head-region and head-shoulder-region detection on the motion mask image within the detection area, and determines the exact head region according to the positional relationship between the head region and the head-shoulder region;
  • Specifically, the monitoring device performs mixed Gaussian modeling on the surveillance scene, extracts the moving foreground area, and builds a motion mask image from the foreground area; a head classifier and a head-shoulder classifier are then used to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and the exact head region of the pedestrian is determined according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian.
  • Step 103: The monitoring device tracks the motion trajectory of the determined head region;
  • Specifically, the monitoring device maintains a preset tracking sequence used to store the head regions of different pedestrians, and determines whether the head region determined in step 102 already exists in the tracking sequence. If it does, the current position and the tracking template of that head region are updated; if it does not, the head region is added to the tracking sequence, its current position is recorded, and a tracking template for the head region is created; the tracking template is used to record the motion trajectory of the head region.
  • Step 104: The monitoring device counts pedestrians by direction according to the motion trajectory and the detection line positions.
  • In step 102 above, the head classifier and the head-shoulder classifier need to be trained in advance; the specific steps are shown in FIG. 2:
  • Step 201: Prepare positive samples and negative samples;
  • Specifically, a large number of head images and head-shoulder images are collected as positive samples, while non-head-shoulder images, non-head images, and background images are used as negative samples; the positive and negative samples are normalized to a uniform size, and the positive samples are masked.
  • Step 202: Extract the HOG features of the positive and negative samples, and normalize the extracted HOG features;
  • Step 203: Perform cascade classifier training on the normalized HOG features of the positive and negative samples to obtain the head classifier and the head-shoulder classifier;
  • Specifically, the normalized HOG features of the positive and negative samples obtained in step 202 are used for Adaboost cascade classifier training, yielding the head-shoulder classifier and the head classifier, which are used to detect the head region and the head-shoulder region respectively. The Adaboost cascade classifier training process is shown in FIG. 3: classifiers h1, h2, and h3 are cascaded; there are n negative samples, and m negative samples (equal in number to the positive samples) are randomly drawn from the negative sample library to train each stage; negative samples misclassified by a stage have their weights increased and are added to the training of the next stage, until the configured number of stages is reached, producing the head-shoulder classifier and the head classifier.
  • Here, the number of negative samples must be large enough, so sufficient negative sample images should be prepared; negative samples can be generated randomly in sliding-window fashion.
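  • As an illustration of steps 201 to 203, the following is a minimal training sketch, not the patent's own implementation: scikit-image's hog and scikit-learn's AdaBoostClassifier stand in for the integral-histogram HOG features and the multi-stage Adaboost cascade, and the normalized sample patches (head_pos, head_neg, and so on) are assumed to be loaded elsewhere.

```python
import numpy as np
from skimage.feature import hog               # HOG descriptor
from sklearn.ensemble import AdaBoostClassifier

def extract_hog(patches):
    # 9 orientation bins, matching the nine gradient directions of step 504
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), block_norm='L2-Hys')
                     for p in patches])

def train_classifier(pos_patches, neg_patches):
    # one boosted classifier standing in for the cascaded stages h1, h2, h3
    X = np.vstack([extract_hog(pos_patches), extract_hog(neg_patches)])
    y = np.r_[np.ones(len(pos_patches)), np.zeros(len(neg_patches))]
    return AdaBoostClassifier(n_estimators=200).fit(X, y)

# head_clf = train_classifier(head_pos, head_neg)   # head classifier
# hs_clf = train_classifier(hs_pos, hs_neg)         # head-shoulder classifier
```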
  • In step 102 above, the monitoring device performing mixed Gaussian modeling on the surveillance scene, extracting the moving foreground area, and building a motion mask image from the foreground area may specifically be as follows:
  • Motion mask extraction is divided into two parts: background modeling and foreground extraction. Background modeling uses a mixed Gaussian model, and the moving foreground is extracted by the background subtraction method.
  • The mixed Gaussian model uses online learning to update its parameters automatically, introduces an adaptive mechanism for the mixture model, and dynamically updates the number of Gaussian components. When a pixel belongs to one of the B background models, the pixel is recorded as background; otherwise it is recorded as foreground, thereby obtaining a binary image of the entire frame.
  • As shown in FIG. 4, the moving foreground area is extracted from the binary image: the foreground pixel values in the motion region are retained, while the background points are set to a uniform value to form the motion mask image. Note that the background value must not be similar to the edge pixels of the foreground area, as this would weaken the foreground edge features and hinder further detection and tracking.
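  • A minimal sketch of the motion mask extraction, assuming OpenCV's MOG2 background subtractor as a stand-in for the adaptive mixed Gaussian model described above; per the caution above, the uniform background value should be chosen far from typical foreground edge intensities.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def motion_mask(frame, bg_value=255):
    fg = subtractor.apply(frame)                      # binary foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                          np.ones((3, 3), np.uint8))  # suppress speckle noise
    mask = np.full_like(frame, bg_value)              # uniform background value
    mask[fg > 0] = frame[fg > 0]                      # keep moving foreground pixels
    return mask
```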
  • In step 102 above, using the head classifier and the head-shoulder classifier to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and determining the pedestrian's head region according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian, may specifically be as follows:
  • On the basis of the motion mask image, an image pyramid is built. On each layer of the pyramid, a fixed-size window is slid over the image with a set step size, the offline-trained head classifier and head-shoulder classifier determine whether the area in the sliding window is a head or a head-shoulder region respectively, and the detection results are fused to determine the pedestrian's head region.
  • The main steps are shown in FIG. 5:
  • Step 501: Build an image pyramid on the motion mask image;
  • In this step, since the size of pedestrian heads in surveillance video is not fixed, a multi-scale image pyramid is built on the image to accommodate multi-scale head detection; that is, the image is scaled repeatedly by a set scaling factor to form multiple layers of different sizes.
  • Step 502: Perform pixel gray-space compression on each layer of the image pyramid;
  • Here, gamma correction is used to compress the pixel gray space of each layer of the image pyramid, reducing local shadows and illumination variation in the image. The gamma compression formula is I(x, y) = I(x, y)^gamma, where I(x, y) is the pixel value at coordinates (x, y); here gamma = 1/2 is used.
  • Step 503: Compute the image gradient of each layer after pixel gray-space compression;
  • Gradient computation is performed on the gamma-corrected images obtained in step 502. When computing gradients on a color image, the gradient and direction are computed separately for each channel, and the gradient and direction of the pixel in the channel with the largest gradient magnitude are taken as the result for that pixel.
  • Step 504: Form an integral histogram from the image gradient of each layer;
  • Specifically, the gradient direction of every pixel of each layer obtained in step 503 is quantized into nine directions, so that each pixel forms a gradient orientation histogram, and an integral histogram is then formed using the integral image technique.
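  • A minimal sketch of steps 502 to 504 for one grayscale pyramid layer (assumptions: 8-bit input and nine unsigned orientation bins; the per-channel maximum-gradient rule for color images from step 503 is omitted for brevity).

```python
import numpy as np

def integral_orientation_histogram(img, bins=9, gamma=0.5):
    img = np.power(img.astype(np.float64) / 255.0, gamma)  # gamma compression (step 502)
    gy, gx = np.gradient(img)                              # image gradients (step 503)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)    # unsigned gradient direction
    idx = np.minimum((ang * bins / 180.0).astype(int), bins - 1)
    hist = np.zeros(img.shape + (bins,))
    rows, cols = np.indices(img.shape)
    hist[rows, cols, idx] = mag                            # per-pixel orientation histogram
    return hist.cumsum(axis=0).cumsum(axis=1)              # integral histogram (step 504)

# The orientation histogram of any rectangle then takes four lookups per bin:
# h = ih[y2, x2] - ih[y1, x2] - ih[y2, x1] + ih[y1, x1]
```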
  • Step 505: Perform HOG feature extraction on the integral histogram of each layer;
  • Specifically, on the integral histogram of each layer obtained in step 504, windows sized for the head-shoulder region and the head region respectively are slid with a set step size, and the HOG features within each sliding window are computed.
  • Step 506: Use the head-shoulder classifier and the head classifier to classify the head-shoulder regions and head regions of each layer;
  • Specifically, the head-shoulder HOG features and head HOG features obtained in step 505 are classified with the head-shoulder classifier and the head classifier respectively, yielding the head-shoulder regions and head regions of each layer.
  • Step 507: Fuse the head-shoulder regions and head regions of each layer to obtain the exact head regions of that layer;
  • Since the head region of a person must lie within that person's head-shoulder region, it can be determined whether a containment relationship exists between the head-shoulder regions and head regions obtained in step 506; head-shoulder and head regions with a containment relationship are fused to obtain the final exact head regions, and head-shoulder and head regions without a containment relationship are filtered out.
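  • A minimal sketch of the containment check of step 507, assuming detections are (x, y, w, h) rectangles in the coordinates of one pyramid layer.

```python
def contains(outer, inner):
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ox <= ix and oy <= iy and
            ix + iw <= ox + ow and iy + ih <= oy + oh)

def fuse(heads, head_shoulders):
    # keep only head detections that lie inside some head-shoulder detection
    return [h for h in heads if any(contains(hs, h) for hs in head_shoulders)]
```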
  • Step 508: Fuse the exact head regions of all layers to obtain the exact head regions of the motion mask image;
  • Specifically, multi-scale merging is performed according to the size and position of the head regions of each layer to obtain the exact head regions of the final motion mask image.
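  • A minimal sketch of the multi-scale merge of step 508, with cv2.groupRectangles standing in for the size- and position-based merging; boxes from each layer are first rescaled to base-image coordinates, and duplicating the list lets isolated detections survive groupThreshold=1.

```python
import cv2

def merge_scales(boxes_per_layer, scale_per_layer):
    rescaled = [[int(v * s) for v in box]   # map (x, y, w, h) back to base scale
                for boxes, s in zip(boxes_per_layer, scale_per_layer)
                for box in boxes]
    merged, _weights = cv2.groupRectangles(rescaled * 2, groupThreshold=1, eps=0.3)
    return merged
```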
  • To achieve real-time performance and good tracking results, detection is performed once every 5 frames and the remaining frames are tracked; whether the current frame is to be detected is determined from its frame number. A frame on which detection is performed is called a detection frame, and a frame on which tracking is performed is called a tracking frame.
  • the above step 103 is specifically as shown in FIG. 6:
  • Step 601: Acquire the frame number and image information of the current frame;
  • Step 602: Determine from the frame number whether the current frame is a detection frame or a tracking frame; for a detection frame, perform step 603, and for a tracking frame, perform step 607;
  • Step 603: Acquire the detected head regions;
  • Step 604: Determine whether a detected head region already exists in the tracking sequence; if it does, perform step 605; if not, perform step 606;
  • Since the head size and position change little between two frames, the size and position of a detected head region can be compared with those of the head regions already in the tracking sequence; if the difference is smaller than a decision threshold, the region already exists in the tracking sequence, otherwise it does not.
  • Step 605: Update the current position and the tracking template of the head region, and end the flow;
  • Step 606: Add the head region to the tracking sequence, record its current position, and create a tracking template for the head region, the tracking template being used to record the motion trajectory of the head region; end the flow;
  • Step 607: Open a search window near the head region of the previous frame, and use the three-step search method to match against the tracking template, obtaining the best matching position and its matching value;
  • Step 608: Compare the matching value with a threshold;
  • The threshold is preset and may be a percentage of the matching degree.
  • Step 609: If the matching value is greater than the threshold, end the flow;
  • Step 610: If the matching value is less than the threshold, take the position as the final tracking result;
  • Step 611: Determine the position of the head region in the current frame, and update the current position and the tracking template of the head region.
  • In step 607, if no head region is matched when matching against the tracking template, the current position of the head region is predicted from its tracking position in the previous frame.
  • If a head region in the tracking sequence is not detected again within the detection area for a long time, it is considered a false target and is deleted from the tracking sequence.
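  • A minimal sketch of the three-step search matching of step 607, assuming the matching value is a sum-of-absolute-differences distance (smaller is better, consistent with the accept-if-below-threshold rule of steps 609 and 610).

```python
import numpy as np

def sad(patch, template):
    # sum of absolute differences between a candidate patch and the template
    return float(np.abs(patch.astype(np.int32) - template.astype(np.int32)).sum())

def three_step_search(frame, template, cx, cy, step=4):
    th, tw = template.shape[:2]
    best = (cx, cy, sad(frame[cy:cy + th, cx:cx + tw], template))
    while step >= 1:
        cx, cy, _ = best
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = cx + dx, cy + dy
                if x < 0 or y < 0:
                    continue                  # candidate outside the frame
                patch = frame[y:y + th, x:x + tw]
                if patch.shape[:2] != (th, tw):
                    continue                  # window clipped at the border
                cost = sad(patch, template)
                if cost < best[2]:
                    best = (x, y, cost)
        step //= 2                            # 4 -> 2 -> 1: the three steps
    return best                               # (x, y, matching value)
```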
  • Step 104 may specifically be as follows:
  • When counting pedestrians, the counter is first cleared in the first counted frame; then the direction of each pedestrian's movement is judged from the motion trajectory of the pedestrian's head region and, as shown in FIG. 7, pedestrians passing the detection lines in different motion directions are counted within the detection area.
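  • A minimal sketch of the directional count of step 104, assuming two horizontal detection lines y = line_a < y = line_b inside the detection area and a completed trajectory given as a list of head-center (x, y) positions.

```python
def count_trajectory(trajectory, line_a, line_b, counters):
    ys = [y for _, y in trajectory]
    if min(ys) < line_a and max(ys) > line_b:   # the head crossed both detection lines
        if ys[0] < ys[-1]:
            counters['down'] += 1               # trajectory moved downward through the lines
        else:
            counters['up'] += 1                 # trajectory moved upward through the lines
    return counters

# counters = {'up': 0, 'down': 0}   # cleared in the first counted frame
```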
  • To implement the above method, an embodiment of the present invention provides a pedestrian counting device in a forward-looking surveillance scenario.
  • As shown in FIG. 8, the device includes: a detection setting module 801, a head detection module 802, a head tracking module 803, and a head counting module 804; where,
  • the detection setting module 801 may be implemented by a human-machine interface display and is configured to set the detection area and the positions of the detection lines;
  • the head detection module 802 may be implemented by an image processor and is configured to perform head-region and head-shoulder-region detection on the motion mask image within the detection area, and to determine the exact head region according to the positional relationship between the head region and the head-shoulder region;
  • the head tracking module 803 may be implemented by an image processor and is configured to track the motion trajectory of the determined head region;
  • the head counting module 804 may be implemented by a counter and is configured to count pedestrians by direction according to the motion trajectory and the detection line positions.
  • The device further includes: a motion mask extraction module 805 configured to perform mixed Gaussian modeling on the surveillance scene, extract the moving foreground area, and build a motion mask image from the foreground area;
  • The device further includes: a classifier training module 806 configured to obtain the head-shoulder classifier and the head classifier by training sample images of head-shoulders and heads with a cascade classifier;
  • the head detection module 802 is specifically configured to use the head classifier and the head-shoulder classifier to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and to determine the exact head region of the pedestrian according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian;
  • the head tracking module 803 is specifically configured to determine whether the determined head region already exists in the tracking sequence; if it does, to update the current position and the tracking template of the head region; if it does not, to add the head region to the tracking sequence, record its current position, and create a tracking template for the head region, the tracking template being used to record the motion trajectory of the head region.
  • If implemented in the form of a software function module and sold or used as a stand-alone product, the pedestrian counting method in the forward-looking surveillance scenario described above may also be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • Accordingly, an embodiment of the present invention further provides a computer storage medium in which a computer program is stored; the computer program is used to execute the pedestrian counting method in the forward-looking surveillance scenario according to the embodiments of the present invention.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Provided is a pedestrian counting method in a forward-looking surveillance scenario, comprising: setting the detection area and the positions of the detection lines (101); performing head-region and head-shoulder-region detection on the motion mask image within the detection area, and determining the exact head region according to the positional relationship between the head region and the head-shoulder region (102); tracking the motion trajectory of the determined head region (103); and counting pedestrians by direction according to the motion trajectory and the detection line positions (104). A pedestrian counting device in a forward-looking surveillance scenario and a storage medium are also provided.

Description

Pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario
Technical Field
The present invention relates to video surveillance technologies, and in particular to a pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario.
Background Art
With the development of video surveillance technology, vision-based pedestrian counting has received increasing attention and is applied in many settings. Pedestrian counting technology can play different roles in different settings: in parks, scenic areas, and similar places it can monitor pedestrian flow and, when the flow exceeds an alert level, the flow can be restricted to prevent safety hazards caused by crowding; in shopping malls and similar places, the temporal and spatial distribution of customer flow can be determined so that staff can be increased or reduced accordingly, improving service levels; and in important facilities such as power plants, access can be controlled, anomalies monitored, and alarms raised promptly.
Regarding pedestrian counting algorithms, foreign research institutions such as Carnegie Mellon University and TI have tended to focus on crowd density, where the mature and effective methods include dynamic texture models, perspective methods, and methods based on mathematical morphology. However, the dynamic texture method has high computational complexity, and the perspective method requires pattern recognition to identify pedestrians, which makes both methods difficult to apply to real-time video surveillance. Methods based on mathematical morphology can satisfy real-time requirements, but their application scenario is based on a downward-looking overhead camera, which limits extensibility. To count, over a time series, the number of pedestrians passing a fixed position, O. Sidla, Y. Lypetskyy et al. adopted a method combining shape recognition with target tracking to identify pedestrians and count them, reaching a processing speed of 15 frames per second. For the dense crowds of railway and subway stations, A.N. Marana et al. used a feature-extraction approach to solve the density estimation problem. Based on the idea of background subtraction, T.W.S. Chow et al. used the crowd foreground area and edge features to estimate crowd density levels.
The Chinese patent with application number 200910076256.X (publication number CN101477641) introduces a method and system for people counting based on video surveillance; that invention uses Haar features for human head detection and counts people by tracking heads and estimating their motion. Its shortcoming is that the accuracy of head detection is not high and the false detection rate is high. The Chinese patent with application number 201010114819.2 (publication number CN101877058A) introduces a method and system for people flow statistics that requires training a large number of different classifiers, for example for light hair, dark hair, and hats, and then performing head curve fitting on these regions; the training is cumbersome and produces many false detections.
Summary of the Invention
To solve the existing technical problems, embodiments of the present invention are expected to provide a pedestrian counting method, apparatus, and storage medium in a forward-looking surveillance scenario.
The technical solution of the embodiments of the present invention is implemented as follows:
An embodiment of the present invention provides a pedestrian counting method in a forward-looking surveillance scenario, the method comprising:
the monitoring device sets the detection area and the positions of the detection lines, performs head-region and head-shoulder-region detection on the motion mask image within the detection area, and determines the exact head region according to the positional relationship between the head region and the head-shoulder region; the motion trajectory of the determined head region is tracked, and pedestrians are counted by direction according to the motion trajectory and the detection line positions.
An embodiment of the present invention provides a pedestrian counting device in a forward-looking surveillance scenario, the device comprising: a detection setting module, a head detection module, a head tracking module, and a head counting module; where,
the detection setting module is configured to set the detection area and the positions of the detection lines;
the head detection module is configured to perform head-region and head-shoulder-region detection on the motion mask image within the detection area, and to determine the exact head region according to the positional relationship between the head region and the head-shoulder region;
the head tracking module is configured to track the motion trajectory of the head region;
the head counting module is configured to count pedestrians by direction according to the motion trajectory and the detection line positions.
An embodiment of the present invention provides a computer storage medium in which a computer program is stored; the computer program is used to perform the pedestrian counting method in the forward-looking surveillance scenario described above.
Embodiments of the present invention provide a pedestrian counting method, apparatus, and storage medium in a forward-looking surveillance scenario: the detection area and the positions of the detection lines are set; head-region and head-shoulder-region detection is performed on the motion mask image within the detection area; the exact head region is determined according to the positional relationship between the head region and the head-shoulder region; the motion trajectory of the determined head region is tracked; and pedestrians are counted by direction according to the motion trajectory and the detection line positions. In this way, the moving direction and trajectory of each pedestrian can be recorded accurately within the detection area, the number of pedestrians in the surveillance area can be recorded precisely, the occlusion problem that arises when there are many pedestrians is largely avoided, and the computational complexity is reduced.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a pedestrian counting method in a forward-looking surveillance scenario according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of training the head classifier and the head-shoulder classifier according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the training process of the Adaboost cascade classifier according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of extracting a motion mask image according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of implementing head region detection according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of implementing head region tracking according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the effect of pedestrian counting according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a pedestrian counting device in a forward-looking surveillance scenario according to an embodiment of the present invention.
Detailed Description
In the embodiments of the present invention, the detection area and the positions of the detection lines are set; head-region and head-shoulder-region detection is performed on the motion mask image within the detection area; the exact head region is determined according to the positional relationship between the head region and the head-shoulder region; the motion trajectory of the determined head region is tracked; and pedestrians are counted by direction according to the motion trajectory and the detection line positions.
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention implements a pedestrian counting method in a forward-looking surveillance scenario. As shown in FIG. 1, the method includes the following steps:
Step 101: The monitoring device sets the positions of the detection area and the detection lines;
Since the surveillance scene covers a large range, the monitoring device sets a detection area on the scene to calibrate the detection range, which shrinks the region to be processed, and sets the positions of two detection lines within the detection area according to the pedestrians' entry and exit directions. The direction of a pedestrian can be determined from the direction in which the motion trajectory of the pedestrian's head crosses the two detection lines. Setting the detection area and detection lines effectively reduces the computation required for detection and improves detection performance.
Step 102: The monitoring device performs head-region and head-shoulder-region detection on the motion mask image within the detection area, and determines the exact head region according to the positional relationship between the head region and the head-shoulder region;
Specifically, the monitoring device performs mixed Gaussian modeling on the surveillance scene, extracts the moving foreground area, and builds a motion mask image from the foreground area; a head classifier and a head-shoulder classifier are then used to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and the exact head region of the pedestrian is determined according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian.
Step 103: The monitoring device tracks the motion trajectory of the determined head region;
Specifically, the monitoring device maintains a preset tracking sequence used to store the head regions of different pedestrians, and determines whether the head region determined in step 102 already exists in the tracking sequence. If it does, the current position and the tracking template of that head region are updated; if it does not, the head region is added to the tracking sequence, its current position is recorded, and a tracking template for the head region is created; the tracking template is used to record the motion trajectory of the head region.
Step 104: The monitoring device counts pedestrians by direction according to the motion trajectory and the detection line positions.
In step 102 above, the head classifier and the head-shoulder classifier need to be trained in advance; the specific steps are shown in FIG. 2:
Step 201: Prepare positive samples and negative samples;
Specifically, a large number of head images and head-shoulder images are collected as positive samples, while non-head-shoulder images, non-head images, and background images are used as negative samples; the positive and negative samples are normalized to a uniform size, and the positive samples are masked.
Step 202: Extract the HOG features of the positive and negative samples, and normalize the extracted HOG features;
Step 203: Perform cascade classifier training on the normalized HOG features of the positive and negative samples to obtain the head classifier and the head-shoulder classifier;
Specifically, the normalized HOG features of the positive and negative samples obtained in step 202 are used for Adaboost cascade classifier training, yielding the head-shoulder classifier and the head classifier, which are used to detect the head region and the head-shoulder region respectively. The Adaboost cascade classifier training process is shown in FIG. 3: classifiers h1, h2, and h3 are cascaded; there are n negative samples, and m negative samples (equal in number to the positive samples) are randomly drawn from the negative sample library to train each stage; negative samples misclassified by a stage have their weights increased and are added to the training of the next stage, until the configured number of stages is reached, producing the head-shoulder classifier and the head classifier. Here, the number of negative samples must be large enough, so sufficient negative sample images should be prepared; negative samples can be generated randomly in sliding-window fashion.
In step 102 above, the monitoring device performing mixed Gaussian modeling on the surveillance scene, extracting the moving foreground area, and building a motion mask image from the foreground area may specifically be as follows:
Motion mask extraction is divided into two parts: background modeling and foreground extraction. Background modeling uses a mixed Gaussian model, and the moving foreground is extracted by the background subtraction method.
The mathematical description of the mixed Gaussian model is as follows: the value of an arbitrary pixel x at time t can be written as $\vec{x}^{(t)}$. For the time series set $X_T = \{\vec{x}^{(t)}, \ldots, \vec{x}^{(t-T)}\}$, since $X_T$ contains both background and foreground pixels, the kernel density estimate derived from $X_T$ can be expressed as Eq. (1):

$$\hat{p}(\vec{x} \mid X_T, \mathrm{BG}+\mathrm{FG}) = \sum_{m=1}^{M} \hat{\pi}_m \, \mathcal{N}(\vec{x}; \hat{\mu}_m, \hat{\sigma}_m^2 I) \tag{1}$$

where $\hat{p}(\vec{x} \mid X_T, \mathrm{BG}+\mathrm{FG})$ is the mixed Gaussian distribution estimate obtained from $X_T$ over a period of time and is composed of M single Gaussian models; $\hat{\mu}_m$, $\hat{\sigma}_m^2$, and $\hat{\pi}_m$ denote, respectively, the mean estimate, variance estimate, and weight estimate of each single Gaussian in the mixture; and $\mathcal{N}$ denotes the Gaussian kernel.
The mixed Gaussian model uses online learning to update its parameters automatically, introduces an adaptive mechanism for the mixture model, and dynamically updates the number of Gaussian components. When a pixel belongs to one of the B background models, the pixel is marked as background; otherwise it is marked as foreground, thereby obtaining a binary image of the entire frame:

$$F(x, y) = \begin{cases} 1, & (x, y) \text{ is a foreground point} \\ 0, & (x, y) \text{ is a background point} \end{cases} \tag{2}$$

As shown in FIG. 4, the moving foreground area is extracted from the binary image: the foreground pixel values in the motion region are retained, while the background points are set to a uniform value to form the motion mask image. Note that the background value must not be similar to the edge pixels of the foreground area, as this would weaken the foreground edge features and hinder further detection and tracking.
In step 102 above, using the head classifier and the head-shoulder classifier to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and determining the pedestrian's head region according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian, may specifically be as follows:
On the basis of the motion mask image, an image pyramid is built. On each layer of the pyramid, a fixed-size window is slid over the image with a set step size, the offline-trained head classifier and head-shoulder classifier determine whether the area in the sliding window is a head or a head-shoulder region respectively, and the detection results are fused to determine the pedestrian's head region. The main steps are shown in FIG. 5:
Step 501: Build an image pyramid on the motion mask image;
In this step, since the size of pedestrian heads in surveillance video is not fixed, a multi-scale image pyramid is built on the image to accommodate multi-scale head detection; that is, the image is scaled repeatedly by a set scaling factor to form multiple layers of different sizes.
Step 502: Perform pixel gray-space compression on each layer of the image pyramid;
Here, gamma correction is used to compress the pixel gray space of each layer of the image pyramid, reducing local shadows and illumination variation in the image. The gamma compression formula is:

$$I(x, y) = I(x, y)^{\gamma} \tag{3}$$

where $I(x, y)$ denotes the pixel value of the image at coordinates $(x, y)$; here $\gamma = 1/2$ is used.
Step 503: Compute the image gradient of each layer after pixel gray-space compression;
Gradient computation is performed on the gamma-corrected images obtained in step 502. When computing gradients on a color image, the gradient and direction are computed separately for each channel, and the gradient and direction of the pixel in the channel with the largest gradient magnitude are taken as the result for that pixel.
Step 504: Form an integral histogram from the image gradient of each layer;
Specifically, the gradient direction of every pixel of each layer obtained in step 503 is quantized into nine directions, so that each pixel forms a gradient orientation histogram, and an integral histogram is then formed using the integral image technique.
Step 505: Perform HOG feature extraction on the integral histogram of each layer;
Specifically, on the integral histogram of each layer obtained in step 504, windows sized for the head-shoulder region and the head region respectively are slid with a set step size, and the HOG features within each sliding window are computed.
Step 506: Use the head-shoulder classifier and the head classifier to classify the head-shoulder regions and head regions of each layer;
Specifically, the head-shoulder HOG features and head HOG features obtained in step 505 are classified with the head-shoulder classifier and the head classifier respectively, yielding the head-shoulder regions and head regions of each layer.
Step 507: Fuse the head-shoulder regions and head regions of each layer to obtain the exact head regions of that layer;
Since the head region of a person must lie within that person's head-shoulder region, it can be determined whether a containment relationship exists between the head-shoulder regions and head regions obtained in step 506; head-shoulder and head regions with a containment relationship are fused to obtain the final exact head regions, and head-shoulder and head regions without a containment relationship are filtered out.
Step 508: Fuse the exact head regions of all layers to obtain the exact head regions of the motion mask image;
Specifically, multi-scale merging is performed according to the size and position of the head regions of each layer to obtain the exact head regions of the final motion mask image.
To achieve real-time performance and good tracking results, detection is performed once every 5 frames and the remaining frames are tracked; whether the current frame is to be detected is determined from its frame number. A frame on which detection is performed is called a detection frame, and a frame on which tracking is performed is called a tracking frame. The above step 103 is specifically shown in FIG. 6:
Step 601: Acquire the frame number and image information of the current frame;
Step 602: Determine from the frame number whether the current frame is a detection frame or a tracking frame; for a detection frame, perform step 603, and for a tracking frame, perform step 607;
Step 603: Acquire the detected head regions;
Step 604: Determine whether a detected head region already exists in the tracking sequence; if it does, perform step 605; if not, perform step 606;
Since the head size and position change little between two frames, the size and position of a detected head region can be compared with those of the head regions already in the tracking sequence; if the difference is smaller than a decision threshold, the region already exists in the tracking sequence, otherwise it does not.
Step 605: Update the current position and the tracking template of the head region, and end the flow;
Step 606: Add the head region to the tracking sequence, record its current position, and create a tracking template for the head region, the tracking template being used to record the motion trajectory of the head region; end the flow;
Step 607: Open a search window near the head region of the previous frame, and use the three-step search method to match against the tracking template, obtaining the best matching position and its matching value;
Step 608: Compare the matching value with a threshold;
The threshold is preset and may be a percentage of the matching degree.
Step 609: If the matching value is greater than the threshold, end the flow;
Step 610: If the matching value is less than the threshold, take the position as the final tracking result;
Step 611: Determine the position of the head region in the current frame, and update the current position and the tracking template of the head region.
In step 607, if no head region is matched when matching against the tracking template, the current position of the head region is predicted from its tracking position in the previous frame.
If a head region in the tracking sequence is not detected again within the detection area for a long time, it is considered a false target and is deleted from the tracking sequence.
Step 104 may specifically be as follows:
When counting pedestrians, the counter is first cleared in the first counted frame; then the direction of each pedestrian's movement is judged from the motion trajectory of the pedestrian's head region and, as shown in FIG. 7, pedestrians passing the detection lines in different motion directions are counted within the detection area.
To implement the above method, an embodiment of the present invention provides a pedestrian counting device in a forward-looking surveillance scenario. As shown in FIG. 8, the device includes: a detection setting module 801, a head detection module 802, a head tracking module 803, and a head counting module 804; where,
the detection setting module 801 may be implemented by a human-machine interface display and is configured to set the detection area and the positions of the detection lines;
the head detection module 802 may be implemented by an image processor and is configured to perform head-region and head-shoulder-region detection on the motion mask image within the detection area, and to determine the exact head region according to the positional relationship between the head region and the head-shoulder region;
the head tracking module 803 may be implemented by an image processor and is configured to track the motion trajectory of the determined head region;
the head counting module 804 may be implemented by a counter and is configured to count pedestrians by direction according to the motion trajectory and the detection line positions.
The device further includes: a motion mask extraction module 805 configured to perform mixed Gaussian modeling on the surveillance scene, extract the moving foreground area, and build a motion mask image from the foreground area;
The device further includes: a classifier training module 806 configured to obtain the head-shoulder classifier and the head classifier by training sample images of head-shoulders and heads with a cascade classifier;
the head detection module 802 is specifically configured to use the head classifier and the head-shoulder classifier to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and to determine the exact head region of the pedestrian according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian;
the head tracking module 803 is specifically configured to determine whether the determined head region already exists in the tracking sequence; if it does, to update the current position and the tracking template of the head region; if it does not, to add the head region to the tracking sequence, record its current position, and create a tracking template for the head region, the tracking template being used to record the motion trajectory of the head region.
If implemented in the form of a software function module and sold or used as a stand-alone product, the pedestrian counting method in the forward-looking surveillance scenario described in the embodiments of the present invention may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Accordingly, an embodiment of the present invention further provides a computer storage medium in which a computer program is stored; the computer program is used to execute the pedestrian counting method in the forward-looking surveillance scenario of the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Industrial Applicability
Summarizing the above embodiments of the present invention, the exact head region of each pedestrian can be further determined within the detection area from the positional relationship between the detected head region and head-shoulder region, the moving direction and trajectory of each pedestrian can be recorded accurately, the number of pedestrians in the surveillance area is recorded more precisely, the occlusion problem that arises when there are many pedestrians is largely avoided, and the computational complexity is reduced.

Claims (11)

  1. A pedestrian counting method in a forward-looking surveillance scenario, the method comprising:
    a monitoring device setting the detection area and the positions of the detection lines, performing head-region and head-shoulder-region detection on the motion mask image within the detection area, and determining the exact head region according to the positional relationship between the head region and the head-shoulder region; tracking the motion trajectory of the determined head region, and counting pedestrians by direction according to the motion trajectory and the detection line positions.
  2. The pedestrian counting method according to claim 1, wherein setting the detection area and the positions of the detection lines comprises: setting, on the surveillance scene, a detection area for calibrating the detection range, and setting the positions of two detection lines within the detection area according to the pedestrians' entry and exit directions.
  3. The pedestrian counting method according to claim 1, wherein performing head-region and head-shoulder-region detection on the motion mask image within the detection area and determining the exact head region according to the positional relationship between the head region and the head-shoulder region comprises: performing mixed Gaussian modeling on the surveillance scene, extracting the moving foreground area, building a motion mask image from the foreground area, using a head classifier and a head-shoulder classifier to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and determining the exact head region of the pedestrian according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian.
  4. The pedestrian counting method according to claim 3, wherein tracking the motion trajectory of the determined head region comprises: presetting a tracking sequence, and determining whether the determined head region already exists in the tracking sequence; if it does, updating the current position and the tracking template of the head region; if it does not, adding the head region to the tracking sequence, recording its current position, and creating a tracking template for the head region, the tracking template being used to record the motion trajectory of the head region.
  5. The pedestrian counting method according to claim 3, further comprising: obtaining the head-shoulder classifier and the head classifier by training sample images of head-shoulders and heads with a cascade classifier.
  6. A pedestrian counting device in a forward-looking surveillance scenario, the device comprising: a detection setting module, a head detection module, a head tracking module, and a head counting module; wherein,
    the detection setting module is configured to set the detection area and the positions of the detection lines;
    the head detection module is configured to perform head-region and head-shoulder-region detection on the motion mask image within the detection area, and to determine the exact head region according to the positional relationship between the head region and the head-shoulder region;
    the head tracking module is configured to track the motion trajectory of the head region;
    the head counting module is configured to count pedestrians by direction according to the motion trajectory and the detection line positions.
  7. The pedestrian counting device according to claim 6, wherein the device further comprises: a motion mask extraction module configured to perform mixed Gaussian modeling on the surveillance scene, extract the moving foreground area, and build a motion mask image from the foreground area.
  8. The pedestrian counting device according to claim 7, wherein the device further comprises: a classifier training module configured to obtain the head-shoulder classifier and the head classifier by training sample images of head-shoulders and heads with a cascade classifier.
  9. The pedestrian counting device according to claim 8, wherein the head detection module is configured to use the head classifier and the head-shoulder classifier to detect, respectively, the head region and the head-shoulder region of the same pedestrian in the motion mask image within the detection area, and to determine the exact head region of the pedestrian according to the geometric position constraint between the head region and the head-shoulder region of the same pedestrian.
  10. The pedestrian counting device according to claim 9, wherein the head tracking module is configured to determine whether the determined head region already exists in the tracking sequence; if it does, to update the current position and the tracking template of the head region; if it does not, to add the head region to the tracking sequence, record its current position, and create a tracking template for the head region, the tracking template being used to record the motion trajectory of the head region.
  11. A computer storage medium in which a computer program is stored, the computer program being used to execute the pedestrian counting method in the forward-looking surveillance scenario according to any one of claims 1 to 5.
PCT/CN2015/072048 2014-07-25 2015-01-30 Pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario WO2015131734A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410360781.5A CN105303191A (zh) 2014-07-25 2014-07-25 Pedestrian counting method and device in a forward-looking surveillance scenario
CN201410360781.5 2014-07-25

Publications (1)

Publication Number Publication Date
WO2015131734A1 true WO2015131734A1 (zh) 2015-09-11

Family

ID=54054576

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/072048 WO2015131734A1 (zh) 2014-07-25 2015-01-30 Pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario

Country Status (2)

Country Link
CN (1) CN105303191A (zh)
WO (1) WO2015131734A1 (zh)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509896A (zh) * 2018-03-28 2018-09-07 腾讯科技(深圳)有限公司 一种轨迹跟踪方法、装置和存储介质
CN110210302A (zh) * 2019-04-26 2019-09-06 平安科技(深圳)有限公司 多目标跟踪方法、装置、计算机设备及存储介质
CN110245556A (zh) * 2019-05-06 2019-09-17 深圳耄耋看护科技有限公司 一种在家离家状态的检测方法、装置、系统及存储介质
CN110276778A (zh) * 2019-05-08 2019-09-24 西藏民族大学 动物进圈轨迹提取、统计模型构建、统计方法及装置
CN110889339A (zh) * 2019-11-12 2020-03-17 南京甄视智能科技有限公司 基于头肩检测的危险区域分级预警方法与系统
CN111160203A (zh) * 2019-12-23 2020-05-15 中电科新型智慧城市研究院有限公司 一种基于头肩模型和iou跟踪的徘徊逗留行为分析方法
CN111652900A (zh) * 2020-05-29 2020-09-11 浙江大华技术股份有限公司 基于场景流的客流量的计数方法、系统及设备、存储装置
CN111680569A (zh) * 2020-05-13 2020-09-18 北京中广上洋科技股份有限公司 基于图像分析的出勤率检测方法、装置、设备及存储介质
CN111723664A (zh) * 2020-05-19 2020-09-29 烟台市广智微芯智能科技有限责任公司 一种用于开放式区域的行人计数方法及系统
CN112052838A (zh) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 一种对象流量数据监控方法、装置以及可读存储介质
CN113052019A (zh) * 2021-03-10 2021-06-29 南京创维信息技术研究院有限公司 目标跟踪方法及装置、智能设备和计算机存储介质
CN113674303A (zh) * 2021-08-31 2021-11-19 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及存储介质
CN113807185A (zh) * 2021-08-18 2021-12-17 苏州涟漪信息科技有限公司 一种数据处理方法和装置
CN114882404A (zh) * 2022-05-06 2022-08-09 安徽工业大学 一种基于深度相机的人数进出实时计数方法及系统
CN117132942A (zh) * 2023-10-20 2023-11-28 山东科技大学 一种基于区域分割的室内人员实时分布监测方法

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092915B (zh) * 2016-02-18 2021-03-02 中国移动通信集团浙江有限公司 一种检测人群密度的方法和装置
CN106326851B (zh) * 2016-08-19 2019-08-13 杭州智诺科技股份有限公司 一种人头检测的方法
CN107403137B (zh) * 2017-06-29 2020-01-31 山东师范大学 基于视频的密集人群流量计算方法和装置
CN108197579B (zh) * 2018-01-09 2022-05-20 杭州智诺科技股份有限公司 防护舱中人数的检测方法
CN108280427B (zh) * 2018-01-24 2021-11-09 广州盖盟达工业品有限公司 一种基于人流量的大数据处理方法
CN108345842B (zh) * 2018-01-24 2022-03-04 中电长城圣非凡信息系统有限公司 一种基于大数据的处理方法
CN110490030B (zh) * 2018-05-15 2023-07-14 保定市天河电子技术有限公司 一种基于雷达的通道人数统计方法及系统
CN111091529A (zh) * 2018-10-24 2020-05-01 株式会社理光 一种人数统计方法及人数统计系统
CN111797652A (zh) * 2019-04-09 2020-10-20 佳能株式会社 对象跟踪方法、设备及存储介质
CN110705408A (zh) * 2019-09-23 2020-01-17 东南大学 基于混合高斯人数分布学习的室内人数统计方法及系统
CN112084959B (zh) * 2020-09-11 2024-04-16 腾讯科技(深圳)有限公司 一种人群图像处理方法及装置
CN112232210B (zh) * 2020-10-16 2024-06-28 京东方科技集团股份有限公司 一种人员流量分析方法和系统、电子设备和可读存储介质
CN112446340B (zh) * 2020-12-07 2024-06-28 深圳市信义科技有限公司 结合行人局部特征和服饰属性分类的行人搜索方法、系统及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103021059A (zh) * 2012-12-12 2013-04-03 天津大学 一种基于视频监控的公交客流计数方法
CN103425967A (zh) * 2013-07-21 2013-12-04 浙江大学 一种基于行人检测和跟踪的人流监控方法
CN103871082A (zh) * 2014-03-31 2014-06-18 百年金海科技有限公司 一种基于安防视频图像的人流量统计方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577812B (zh) * 2009-03-06 2014-07-30 北京中星微电子有限公司 一种岗位监测的方法和系统
CN101872422B (zh) * 2010-02-10 2012-11-21 杭州海康威视数字技术股份有限公司 可精确辨别目标的人流量统计的方法及系统
CN101847206B (zh) * 2010-04-21 2012-08-08 北京交通大学 基于交通监控设施的行人流量统计方法与系统
US9117147B2 (en) * 2011-04-29 2015-08-25 Siemens Aktiengesellschaft Marginal space learning for multi-person tracking over mega pixel imagery
CN102568005B (zh) * 2011-12-28 2014-10-22 江苏大学 一种基于混合高斯模型的运动目标检测方法
CN102799935B (zh) * 2012-06-21 2015-03-04 武汉烽火众智数字技术有限责任公司 一种基于视频分析技术的人流量统计方法

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509896B (zh) * 2018-03-28 2020-10-13 腾讯科技(深圳)有限公司 一种轨迹跟踪方法、装置和存储介质
CN108509896A (zh) * 2018-03-28 2018-09-07 腾讯科技(深圳)有限公司 一种轨迹跟踪方法、装置和存储介质
US11087476B2 (en) 2018-03-28 2021-08-10 Tencent Technology (Shenzhen) Company Limited Trajectory tracking method and apparatus, computer device, and storage medium
CN110210302B (zh) * 2019-04-26 2023-06-20 平安科技(深圳)有限公司 多目标跟踪方法、装置、计算机设备及存储介质
CN110210302A (zh) * 2019-04-26 2019-09-06 平安科技(深圳)有限公司 多目标跟踪方法、装置、计算机设备及存储介质
CN110245556A (zh) * 2019-05-06 2019-09-17 深圳耄耋看护科技有限公司 一种在家离家状态的检测方法、装置、系统及存储介质
CN110276778B (zh) * 2019-05-08 2022-10-28 西藏民族大学 动物进圈轨迹提取、统计模型构建、统计方法及装置
CN110276778A (zh) * 2019-05-08 2019-09-24 西藏民族大学 动物进圈轨迹提取、统计模型构建、统计方法及装置
CN110889339A (zh) * 2019-11-12 2020-03-17 南京甄视智能科技有限公司 基于头肩检测的危险区域分级预警方法与系统
CN111160203A (zh) * 2019-12-23 2020-05-15 中电科新型智慧城市研究院有限公司 一种基于头肩模型和iou跟踪的徘徊逗留行为分析方法
CN111160203B (zh) * 2019-12-23 2023-05-16 中电科新型智慧城市研究院有限公司 一种基于头肩模型和iou跟踪的徘徊逗留行为分析方法
CN111680569B (zh) * 2020-05-13 2024-04-19 北京中广上洋科技股份有限公司 基于图像分析的出勤率检测方法、装置、设备及存储介质
CN111680569A (zh) * 2020-05-13 2020-09-18 北京中广上洋科技股份有限公司 基于图像分析的出勤率检测方法、装置、设备及存储介质
CN111723664A (zh) * 2020-05-19 2020-09-29 烟台市广智微芯智能科技有限责任公司 一种用于开放式区域的行人计数方法及系统
CN111652900B (zh) * 2020-05-29 2023-09-29 浙江大华技术股份有限公司 基于场景流的客流量的计数方法、系统及设备、存储介质
CN111652900A (zh) * 2020-05-29 2020-09-11 浙江大华技术股份有限公司 基于场景流的客流量的计数方法、系统及设备、存储装置
CN112052838A (zh) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 一种对象流量数据监控方法、装置以及可读存储介质
CN113052019A (zh) * 2021-03-10 2021-06-29 南京创维信息技术研究院有限公司 目标跟踪方法及装置、智能设备和计算机存储介质
CN113807185A (zh) * 2021-08-18 2021-12-17 苏州涟漪信息科技有限公司 一种数据处理方法和装置
CN113807185B (zh) * 2021-08-18 2024-02-27 苏州涟漪信息科技有限公司 一种数据处理方法和装置
CN113674303A (zh) * 2021-08-31 2021-11-19 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及存储介质
CN114882404A (zh) * 2022-05-06 2022-08-09 安徽工业大学 一种基于深度相机的人数进出实时计数方法及系统
CN114882404B (zh) * 2022-05-06 2024-09-03 安徽工业大学 一种基于深度相机的人数进出实时计数方法及系统
CN117132942A (zh) * 2023-10-20 2023-11-28 山东科技大学 一种基于区域分割的室内人员实时分布监测方法
CN117132942B (zh) * 2023-10-20 2024-01-26 山东科技大学 一种基于区域分割的室内人员实时分布监测方法

Also Published As

Publication number Publication date
CN105303191A (zh) 2016-02-03

Similar Documents

Publication Publication Date Title
WO2015131734A1 (zh) Pedestrian counting method, apparatus and storage medium in a forward-looking surveillance scenario
Yang et al. Online learned discriminative part-based appearance models for multi-human tracking
Sun et al. Benchmark data and method for real-time people counting in cluttered scenes using depth sensors
CN108009473B (zh) 基于目标行为属性视频结构化处理方法、系统及存储装置
CN108052859B (zh) 一种基于聚类光流特征的异常行为检测方法、系统及装置
CN105844234B (zh) 一种基于头肩检测的人数统计的方法及设备
Li et al. Anomaly detection and localization in crowded scenes
CN106203513B (zh) 一种基于行人头肩多目标检测及跟踪的统计方法
Mukherjee et al. A novel framework for automatic passenger counting
CN102663452B (zh) 基于视频分析的可疑行为检测方法
Lee et al. Smoke detection using spatial and temporal analyses
CN111191667B (zh) 基于多尺度生成对抗网络的人群计数方法
Avgerinakis et al. Recognition of activities of daily living for smart home environments
Hsu et al. Passenger flow counting in buses based on deep learning using surveillance video
CN105139425A (zh) 一种人数统计方法及装置
Ferryman et al. Performance evaluation of crowd image analysis using the PETS2009 dataset
CN103235944A (zh) 人群流分割及人群流异常行为识别方法
WO2022078134A1 (zh) 一种人员流量分析方法和系统、电子设备和可读存储介质
CN111353338A (zh) 一种基于营业厅视频监控的能效改进方法
Yang et al. A method of pedestrians counting based on deep learning
Pervaiz et al. Artificial neural network for human object interaction system over Aerial images
Kokul et al. Online multi-person tracking-by-detection method using ACF and particle filter
Heili et al. Parameter estimation and contextual adaptation for a multi-object tracking CRF model
Wang et al. Beyond pedestrians: A hybrid approach of tracking multiple articulating humans
Gade et al. Automatic analysis of activities in sports arenas using thermal cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15758110

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15758110

Country of ref document: EP

Kind code of ref document: A1