CN103345792B - Passenger flow counting device and method based on sensor depth images - Google Patents

Passenger flow counting device and method based on sensor depth images

Info

Publication number
CN103345792B
CN103345792B (application CN201310279375.1A)
Authority
CN
China
Prior art keywords
depth
depth map
map
coordinate
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310279375.1A
Other languages
Chinese (zh)
Other versions
CN103345792A (en)
Inventor
顾国华
尹章芹
李娇
陈海欣
周玉蛟
钱惟贤
陈钱
路东明
任侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201310279375.1A priority Critical patent/CN103345792B/en
Publication of CN103345792A publication Critical patent/CN103345792A/en
Application granted granted Critical
Publication of CN103345792B publication Critical patent/CN103345792B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a passenger flow counting device and method based on sensor depth images. The device comprises a sensor base, two depth sensors, a USB data cable, a data processing and control unit, a display, a mode selection switch, and a power supply. The lenses of both depth sensors look down at the ground with their central axes perpendicular to the ground plane; the signal outputs of the two sensors are connected to the data processing and control unit through the USB data cable; the data processing and control unit is connected to the display through a video cable; the mode selection switch is connected to the data processing and control unit; and the output of the power supply is connected to the power inputs of the other components. The method is as follows: transform the two depth images into the same physical coordinate system and stitch them; traverse the stitched depth image to find and mark the regions of minimum gray level; extract each marked region and determine the position of the head; establish the head's motion track and count the number of people entering and leaving. The device of the invention has the advantages of low cost, high accuracy, and strong stability, and has broad application prospects.

Description

Passenger flow counting device and method based on sensor depth images

1. Technical Field

The invention belongs to the field of digital image processing and pattern recognition, and in particular relates to a passenger flow counting device and method based on sensor depth images.

2. Background Art

With the continuous development of statistical analysis and computer technology, timely and reliable passenger flow information has become attainable, and its importance is increasingly evident: for shopping venues such as large malls and supermarkets, public transportation systems such as railways, subways, and buses, and places with high public-safety requirements such as exhibition halls, stadiums, libraries, and airports, the value of real-time, accurate passenger flow information is self-evident.

Traditional people-counting methods rely on manual counting or counting triggered by electronic devices, which waste manpower and material resources and are inefficient. The rapid development of computer vision and image processing has made their application to people counting feasible. In crowds at peak passenger flow, passengers occlude one another and the human body cannot be effectively modeled; only the passenger's head appears relatively complete in the image, so machine-vision passenger counting methods locate and track passengers mainly by extracting head features. Automatic passenger counting based on color video streams relies on feature recognition and pattern matching in the image, but it is sensitive to lighting changes in the counting environment, cannot eliminate the interference of shadows or the mutual occlusion of people, is suitable only for simple backgrounds, and must process a large amount of data, so the algorithm is subject to many restrictions and is not universal. Automatic passenger counting based on stereo vision uses two sensors to extract the depth information of moving targets and detects targets from that depth information; experimental results prove the method effective, but it requires synchronized acquisition by the two sensors, and the hardware cost is too high for large-scale deployment.

3. Summary of the Invention

The purpose of the present invention is to provide a passenger flow counting device and method based on sensor depth images with high accuracy and strong stability, which solves the problems of crowding, occlusion, and shadows in passenger flow counting and counts passenger flow efficiently in real time.

The technical solution realizing the object of the present invention is a passenger flow counting device based on sensor depth images, comprising a sensor base, a first depth sensor, a second depth sensor, a USB data cable, a data processing and control unit, a display, a mode selection switch, and a power supply. The first depth sensor and the second depth sensor are identical and mounted in parallel on the same sensor base; the lenses of both sensors look down at the ground with their central axes perpendicular to the ground plane. The signal outputs of the first and second depth sensors are connected to the data processing and control unit through the USB data cable; the signal output of the data processing and control unit is connected to the display through a video cable; the mode selection switch is connected to the control signal input of the data processing and control unit; and the outputs of the power supply are connected to the power inputs of the first depth sensor, the second depth sensor, the data processing and control unit, and the display.

After the power supply powers the device on, the depth images captured by the first and second depth sensors are fed to the data processing and control unit through the USB data cable; the data processing and control unit applies digital image processing to the captured depth images and outputs the result to the display through the video cable.

A passenger flow counting method based on sensor depth images comprises the following steps:

Step 1: after the power supply powers the device on, the data processing and control unit acquires through the USB data cable the first depth map captured by the first depth sensor and the second depth map captured by the second depth sensor.

Step 2: from the first frame of the first depth map or the first frame of the second depth map, calibrate the relationship between the distance H from a target to the ground and the corresponding gray level G in the depth map.

Step 3: determine the working mode through the mode selection switch. If the single-depth-sensor mode is selected, take the first depth map or the second depth map as the third depth map A3 and go to step 6; if the dual-depth-sensor mode is selected, go to the next step.

Step 4: project both the first depth map and the second depth map into the three-dimensional physical coordinate system.

Step 5: stitch the first depth map A1 and the second depth map A2, both transformed into the three-dimensional physical coordinate system, to obtain the third depth map A3.

Step 6: segment the third depth map A3 into regions, find the pixel coordinates corresponding to the gray-level minimum in each region, and mark the found pixel coordinates.

Step 7: based on the marked pixel coordinates from the region segmentation, perform head recognition to obtain head information.

Step 8: track and count the recognized head information.

Step 9: process each captured frame of the depth map in turn following steps 4 to 8, and every minute output the accumulated entry/exit counts as a file to the display through the video cable.

Compared with the prior art, the present invention has the following significant advantages: (1) the system can use a single sensor for passenger flow counting, or use multiple sensors to enlarge the field of view and count passenger flow over multiple fields of view; (2) the video acquisition of the two sensors is synchronized, which solves the sensor synchronization problem of stereo-vision approaches, and the depth map output by a single sensor contains three-dimensional information of the human body, which solves the crowding, occlusion, and shadow problems of video-stream-based passenger counting; (3) the system detects and counts passenger flow in real time with strong stability and low cost, running at 30 frames per second with an accuracy above 96%.

4. Brief Description of the Drawings

Figure 1 is a schematic diagram of the structure of the passenger flow counting device based on sensor depth maps of the present invention.

Figure 2 is a flow chart of the passenger flow counting method based on sensor depth maps of the present invention.

Figure 3 shows the spatial physical coordinate relationship of the two depth sensors: (a) the coordinate system of the first depth sensor, (b) the coordinate system of the second depth sensor, (c) the actual physical coordinate system.

Figure 4 is a schematic diagram of the 3×3 grid used in the region segmentation of the present invention.

Figure 5 shows the original depth maps in the embodiment of the present invention, where (a) is the first depth map and (b) is the second depth map.

Figure 6 is the third depth map formed by stitching the first and second depth maps in the embodiment of the present invention.

Figure 7 shows the head tracking and recognition results on the third depth map in the embodiment of the present invention.

5. Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

With reference to Figure 1, the passenger flow counting device based on sensor depth images of the present invention comprises a sensor base 1, a first depth sensor 2, a second depth sensor 3, a USB data cable 7, a data processing and control unit 8, a display 9, a mode selection switch 10, and a power supply 11. The first depth sensor 2 and the second depth sensor 3 are identical and mounted in parallel on the same sensor base 1; the lenses of both sensors look down at the ground with their central axes perpendicular to the ground plane. The signal outputs of the first depth sensor 2 and the second depth sensor 3 are connected to the data processing and control unit 8 through the USB data cable 7; the signal output of the data processing and control unit 8 is connected to the display 9 through a video cable 12; the mode selection switch 10 is connected to the control signal input of the data processing and control unit 8; and the outputs of the power supply 11 are connected to the power inputs of the first depth sensor 2, the second depth sensor 3, the data processing and control unit 8, and the display 9.

After the power supply 11 powers the device on, the working mode is determined by the mode selection switch 10. The depth images captured by the first depth sensor 2 and the second depth sensor 3 are fed to the data processing and control unit 8 through the USB data cable 7; the data processing and control unit 8 applies digital image processing to the captured depth images and outputs the result to the display 9 through the video cable 12.

The horizontal distance WIDTH between the first depth sensor 2 and the second depth sensor 3 satisfies the following condition:

$$\frac{H_0 - H_4}{\tfrac{1}{2}\,\mathrm{WIDTH}} = \frac{1}{\tan\frac{\nu}{2}}$$

where H_0 is the distance from the first depth sensor 2 and the second depth sensor 3 to the ground, 3000 mm < H_0 < 3700 mm; ν is the lateral field angle of the first depth sensor 2 and the second depth sensor 3; and H_4 (2100 mm < H_4 < H_0) is the distance from the tangent point of the two fields of view to the ground.
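The spacing condition can be checked numerically. The following Python sketch solves the condition above for WIDTH; the sample values are assumed for illustration and lie within the stated ranges, they are not taken from a specific embodiment:

```python
import math

def sensor_spacing_mm(h0_mm, h4_mm, nu_deg):
    # Rearranging (H0 - H4) / (WIDTH / 2) = 1 / tan(nu / 2)
    # gives WIDTH = 2 * (H0 - H4) * tan(nu / 2).
    return 2.0 * (h0_mm - h4_mm) * math.tan(math.radians(nu_deg) / 2.0)

# Assumed sample values: H0 = 3300 mm, H4 = 2400 mm, nu = 58 degrees.
print(sensor_spacing_mm(3300, 2400, 58))  # about 998 mm
```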

Both the first depth sensor 2 and the second depth sensor 3 are ASUS Xtion PRO units (the Xtion sensor produced by ASUS); the grayscale image output by this sensor is a depth map containing three-dimensional information of the human body. The mode selection switch 10 provides a single-depth-sensor working mode and a dual-depth-sensor working mode. On the ground within the fields of view of the first depth sensor 2 and the second depth sensor 3, an entry direction mark 4, an exit direction mark 5, and an entry/exit judgment line 6 are also provided as the criteria for judging entering and exiting passenger flow.

With reference to Figure 2, the passenger flow counting method based on sensor depth images of the present invention comprises the following steps:

Step 1: after the power supply 11 powers the device on, the data processing and control unit 8 acquires through the USB data cable 7 the first depth map captured by the first depth sensor 2 and the second depth map captured by the second depth sensor 3.

Step 2: from the first frame of the first depth map or the first frame of the second depth map, calibrate the relationship between the distance H from a target to the ground and the corresponding gray level G in the depth map.

A target at height H_1 above the ground corresponds to gray level G_1, a target at height H_2 corresponds to gray level G_2, and a target at height H_3 corresponds to gray level G_3, where 100 mm < H_1 < 200 mm, 500 mm < H_2 < 800 mm, and 2300 mm < H_3 < H_0; H_0 is the distance from the sensor to the ground with 3000 mm < H_0 < 3700 mm, and G_0 is the gray level of the ground in the depth map when nobody is within the field of view. Then:

$$H_0 = \beta G_0,\qquad H_0 - H_1 = \beta G_1,\qquad H_0 - H_2 = \beta G_2,\qquad H_0 - H_3 = \beta G_3$$

where distances are calibrated in millimeters and β is the scale factor between distance and gray level, with 10 < β < 20.
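As a worked illustration of this calibration, β can be estimated from the empty-scene ground gray level and then used to recover target heights; the numeric values below are assumed, not taken from the embodiment:

```python
def calibrate_beta(h0_mm, g0):
    # H0 = beta * G0, so beta follows from the ground gray level
    # observed when nobody is in the field of view.
    return h0_mm / g0

def height_above_ground_mm(gray, beta, h0_mm):
    # Inverting H0 - H = beta * G gives the target height above the ground.
    return h0_mm - beta * gray

beta = calibrate_beta(3300.0, 220.0)               # 15.0, within 10 < beta < 20
print(height_above_ground_mm(110.0, beta, 3300.0)) # 1650.0 mm, a head-like height
```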

Step 3: determine the working mode through the mode selection switch 10. If the single-depth-sensor mode is selected, take the first depth map or the second depth map as the third depth map A3 and go to step 6; if the dual-depth-sensor mode is selected, go to the next step.

Step 4: project both the first depth map and the second depth map into the three-dimensional physical coordinate system.

Transform the first depth map from two-dimensional image coordinates (i_1, j_1) to three-dimensional physical coordinates (x_1, y_1, z_1) to obtain a new first depth map A1, and transform the second depth map from two-dimensional image coordinates (i_2, j_2) to three-dimensional physical coordinates (x_2, y_2, z_2) to obtain a new second depth map A2. Let the pixel resolution of the two-dimensional depth map be I*J, with the center of the vertical axis at I_m and the center of the horizontal axis at J_m; μ is the longitudinal field angle of the sensor and ν is its lateral field angle. The projection is then:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} k_i & 0 & 0 \\ 0 & k_j & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} i \\ j \\ G(i,j) \end{bmatrix}$$

where k_i = sin θ_i · G(i,j), with θ_i the field angle subtended between ordinate i and I_m in the image coordinates (i,j); k_j = sin θ_j · G(i,j), with θ_j the field angle subtended between abscissa j and the center coordinate J_m; and G(i,j) is the gray level at image coordinates (i,j).
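A literal reading of this projection can be sketched as follows; interpolating the per-pixel angles θ_i and θ_j linearly from the image center across the field angles μ and ν is an assumption about the patent's angle definition rather than a statement of it:

```python
import numpy as np

def project_to_3d(depth, mu_deg=45.0, nu_deg=58.0):
    # x = k_i * i, y = k_j * j, z = G(i, j), with
    # k_i = sin(theta_i) * G(i, j) and k_j = sin(theta_j) * G(i, j).
    I, J = depth.shape
    i_m, j_m = I / 2.0, J / 2.0
    i_idx, j_idx = np.indices(depth.shape, dtype=np.float64)
    theta_i = np.radians(mu_deg) * (i_idx - i_m) / I   # longitudinal angle
    theta_j = np.radians(nu_deg) * (j_idx - j_m) / J   # lateral angle
    g = depth.astype(np.float64)
    x = np.sin(theta_i) * g * i_idx
    y = np.sin(theta_j) * g * j_idx
    return x, y, g

# depth = a 480 x 640 depth map; x, y, z = project_to_3d(depth)
```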

Step 5: stitch the first depth map A1 and the second depth map A2 transformed into the three-dimensional physical coordinate system. With reference to Figure 3, coordinate systems (a) and (b) are the physical coordinate systems of the first depth map A1 and the second depth map A2, respectively; a displacement-and-rotation relationship R|T exists between the two physical coordinate systems (a) and (b), and R|T is used to bring both into the same coordinate system (c).

Step (5.1): determine the transformation R|T between the coordinates (x_2, y_2, z_2) of the second depth map A2 and the coordinates (x_1, y_1, z_1) of the first depth map A1 using the multi-viewpoint measurement-data method:

$$(R|T)\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}$$

Take the coordinates P_L^a of S points in the first depth map A1 and the corresponding coordinates P_R^a of those S points in the second depth map A2, with a = 0, 1, 2, ..., S-1 and S > 2, and determine the rotation and translation of the second depth map A2 relative to the first depth map A1:

$$P_L^a = R \times T \times P_R^a$$

R is the rotation of the second depth map A2 relative to the first depth map A1, and T is the displacement of the second depth map A2 relative to the first depth map A1; then:

$$R = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \Delta x_0 & \Delta y_0 & \Delta z_0 & 1 \end{bmatrix}$$

where θ is the rotation angle of the second depth map A2 relative to the first depth map A1, and (Δx_0, Δy_0, Δz_0) is the displacement of the coordinates (x_2, y_2, z_2) of the second depth map A2 relative to the coordinates (x_1, y_1, z_1) of the first depth map A1.
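The 4×4 matrices above place the translation in the bottom row, which corresponds to a row-vector convention for homogeneous points. A minimal sketch of building R and T and mapping one point, with all numeric values assumed:

```python
import numpy as np

def rt_matrices(theta_rad, dx, dy, dz):
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    R = np.array([[ c,  s, 0, 0],
                  [-s,  c, 0, 0],
                  [ 0,  0, 1, 0],
                  [ 0,  0, 0, 1]])
    T = np.eye(4)
    T[3, :3] = (dx, dy, dz)   # translation in the last row
    return R, T

R, T = rt_matrices(np.deg2rad(2.0), 950.0, 0.0, 0.0)  # assumed calibration values
p_r = np.array([100.0, 200.0, 3000.0, 1.0])  # homogeneous point in the A2 frame
p_l = p_r @ R @ T                            # P_L = P_R * R * T, row-vector form
```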

Step (5.2): create a third depth map A3 with pixel resolution C*D and initial value 0, where I < C < 2I and J < D < 2J. Convert the coordinates (x_1, y_1, z_1) of the first depth map A1 into the coordinates (x_2, y_2, z_2) of the second depth map A2, then stitch the first depth map A1 and the second depth map A2 and store the result in the third depth map A3. If a gray level G(x,y) > G_1, set G(x,y) = G_1; in the overlapping region of the stitched first depth map A1 and second depth map A2, the gray level is the mean of the gray levels at the corresponding coordinates of A1 and A2.

Step 6: segment the third depth map A3 into regions, find the pixel coordinates corresponding to the gray-level minimum in each region, and mark the found pixel coordinates.

Step (6.1): using an A*A-pixel window, with 20 < A < 70, divide the third depth map A3 into (C/A)*(D/A) regions, with C/A and D/A both positive integers. Traverse all regions of the third depth map A3 with a 3×3 grid; Figure 4 is a schematic of the grid, each cell of which corresponds to one region, the middle cell being the center window. Find the center-window regions of the grid that satisfy both of the following conditions: (1) the center window's region contains pixels whose gray level is less than G_2 and greater than G_3; (2) the mean gray level of the center-window region is less than the mean gray levels of the other cells of the grid. Mark each region found as N_t, with t = 1, 2, 3, ..., T and T < (C/A)*(D/A), and record the minimum gray level Min_t of each region.

Step (6.2): expand the four sides of region N_t outward by a factor of two, find all pixel coordinates in the expanded region whose gray level satisfies G_t(x,y) - Min_t < λ (4 < λ < 15), and mark them as K_t; T markers are generated in total.
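A sketch of steps (6.1) and (6.2) follows. The outward doubling of N_t is approximated here by the surrounding 3×3 block neighborhood, and the parameter values in the usage comment are assumed:

```python
import numpy as np

def mark_minima(a3, A, g2, g3, lam):
    # Tile A3 into A x A blocks, keep blocks containing gray levels in
    # (G3, G2) whose mean is below the means of all eight neighbors,
    # then mark pixels within lam of the minimum of the expanded region.
    C, D = a3.shape
    rows, cols = C // A, D // A
    means = a3[:rows * A, :cols * A].reshape(rows, A, cols, A).mean(axis=(1, 3))
    markers = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            block = a3[r * A:(r + 1) * A, c * A:(c + 1) * A]
            if not ((block > g3) & (block < g2)).any():
                continue
            neigh = means[r - 1:r + 2, c - 1:c + 2].copy()
            neigh[1, 1] = np.inf                 # ignore the center itself
            if means[r, c] >= neigh.min():       # must be darker than all eight
                continue
            region = a3[(r - 1) * A:(r + 2) * A, (c - 1) * A:(c + 2) * A]
            ys, xs = np.nonzero(region - region.min() < lam)
            markers.append(np.stack([ys + (r - 1) * A, xs + (c - 1) * A], axis=1))
    return markers

# e.g. markers = mark_minima(a3, A=40, g2=180, g3=70, lam=6)  # assumed values
```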

Step 7: based on the marked pixel coordinates K_t from the region segmentation, perform head recognition. The specific steps are as follows:

Step (7.1): create T binary images with initial value 0, numbered 1 to T, each with pixel resolution C*D. Extract all pixel coordinates of marker K_t and set the gray value at the corresponding pixel coordinates in the t-th binary image to 255.

Step (7.2): determine the area Area_t of the connected component in each binary image and the mean gray level Ḡ_t of that connected component in the third depth map A3. Find the connected components with the following characteristics: (1) the area Area_t is greater than Area, with Area ≥ 500; (2) the connected component in the binary image is roughly elliptical; (3) the head width w′ and head height h′, converted by the following formulas, are both less than 25 and greater than 15:

$$h' = \alpha\,(255.0 - \bar{G}_t)\,h, \qquad w' = \alpha\,(255.0 - \bar{G}_t)\,w$$

where h′ is the actual height of the human head, w′ is the actual width of the human head, h is the height of the connected component in the binary image, w is its width, and α is a scale factor with 0 < α < 1.

Record the starting coordinates, width, height, and region mean Ḡ_t of every connected component found, i.e., the head information.
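Condition (3) can be sketched as the size test below; α = 0.007 is the value used later in Example 1, the ellipse test of condition (2) is omitted for brevity, and the sample blob is invented for illustration:

```python
def is_head_sized(area_t, w, h, g_mean, alpha=0.007, min_area=500):
    # Scale the component's pixel width/height by alpha * (255 - mean gray)
    # and accept only head-sized results.
    scale = alpha * (255.0 - g_mean)
    w_prime, h_prime = scale * w, scale * h
    return area_t > min_area and 15 < w_prime < 25 and 15 < h_prime < 25

# A 60 x 55 pixel blob with mean gray 205: scale = 0.35, w' = 21.0, h' = 19.25.
print(is_head_sized(2800, 60, 55, 205.0))  # True
```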

Step 8: track the recognized head information.

Step (8.1): within the field of view, take the center point of the head information as the plot; the gate is rectangular with size Q pixels, 10 < Q < 100. The plot (x, y) and the gate (Δx, Δy) determine a spatial search region centered on the predicted target plot (x̂, ŷ), as follows:

$$|x - \hat{x}| \le \Delta x, \qquad |y - \hat{y}| \le \Delta y$$

Step (8.2): first, hypothesize a track. The head target plot is first recorded at frame M, and the target plot's velocity is estimated from the coordinates obtained at frame M+1; if the estimated velocity lies within the search region, a tentative track is generated. Next, the position of the target plot is predicted from the coordinates obtained at frame M+2, and a correlation region is determined centered on the predicted position; any plot falling inside the correlation region extends the tentative track. The velocity is then re-estimated, the next frame's position is predicted from it, and a new correlation region is established; any plot falling inside the correlation region generates a new track. Finally, every generated track is fitted with a quadratic curve; if the error between the points on the track and the fitted curve satisfies δ < 0.1, the track is confirmed, otherwise the track is deleted.
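The final quadratic-fit check can be sketched as follows; parameterizing the column coordinate as a quadratic function of the row coordinate is an assumption, since the patent does not state the fit variable:

```python
import numpy as np

def confirm_track(rows, cols, delta=0.1):
    # Fit a quadratic to the track's plots and confirm the track only if
    # every plot lies within delta of the fitted curve.
    coeffs = np.polyfit(rows, cols, deg=2)
    residuals = np.abs(np.polyval(coeffs, rows) - cols)
    return bool(np.all(residuals < delta))

rows = np.array([250.0, 330.0, 380.0, 420.0])
cols = 0.0002 * rows**2 - 0.05 * rows + 80.0   # plots lying exactly on a quadratic
print(confirm_track(rows, cols))               # True
```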

Step (8.3): after the target's motion track has been determined, draw two judgment lines at rows l_1 and l_2 of the third depth map A3, with 10 < l_1 < l_2 < 470 pixels. If the current target plot's row coordinate x is less than l_1 but the track contains a plot with row coordinate greater than l_2, the entering count is incremented by 1; conversely, if the current target plot's row coordinate x is greater than l_2 but the track contains a plot with row coordinate less than l_1, the exiting count is incremented by 1, and the track is deleted.
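The two-line crossing logic can be sketched as follows, with a track kept as the list of row coordinates of its plots; l1 = 300 and l2 = 400 match the values used in Example 1 below:

```python
def update_counts(track_rows, l1, l2, counts):
    # Entering: the current plot is above line l1 while the track has history
    # beyond l2; exiting is the mirror case. Returns True when the track has
    # been counted and should be deleted.
    current = track_rows[-1]
    if current < l1 and any(r > l2 for r in track_rows[:-1]):
        counts["in"] += 1
        return True
    if current > l2 and any(r < l1 for r in track_rows[:-1]):
        counts["out"] += 1
        return True
    return False

counts = {"in": 0, "out": 0}
update_counts([420, 380, 330, 250], 300, 400, counts)
print(counts)  # {'in': 1, 'out': 0}
```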

Step 9: process each captured frame of the depth map in turn following steps 4 to 8, and every minute output the accumulated entry/exit counts as a file to the display 9 through the video cable 12.

Example 1

The specific parameters of the passenger flow counting device based on sensor depth images of the present invention are as follows:

(1) Both the first depth sensor 2 and the second depth sensor 3 are ASUS Xtion PRO units (the Xtion sensor produced by ASUS). Their longitudinal field angle μ is 45° and their lateral field angle ν is 58°; the pixel resolution I*J of the output two-dimensional depth map is 480*640, so the center coordinate I_m of the vertical axis is 240 and the center coordinate J_m of the horizontal axis is 320.

(2) Take S = 6 points in each of the first depth map A1 and the second depth map A2, and determine the rotation parameter R and displacement parameter T between the two depth maps A1 and A2 obtained after projection into the spatial physical coordinate system. Create a third depth map A3 with pixel resolution C*D = 600*1200 and initial value 0, and store the stitched first depth map A1 and second depth map A2 in the 600*1200 third depth map A3. In Figure 5, (a) is the first depth map A1 and (b) is the second depth map A2; Figure 6 is the third depth map A3 formed by stitching the first and second depth maps. The stitched third depth map A3 is seamless and enlarges the field of view.

(3) Using an A*A = 40*40-pixel window, divide the 600*1200 third depth map A3 into (C/A)*(D/A) = (600/40)*(1200/40) = 15*30 regions.

(4) Expand the four sides of region N_t outward by a factor of two, find all pixel coordinates in the expanded region whose gray level satisfies G_t(x,y) - Min_t < λ with λ = 6, and mark them as K_t; T markers are generated in total.

(5) After the target motion track is determined, draw two judgment lines at rows l_1 = 300 and l_2 = 400 of the third depth map A3. The head width w′ and head height h′ are converted by:

$$h' = \alpha\,(255.0 - \bar{G}_t)\,h, \qquad w' = \alpha\,(255.0 - \bar{G}_t)\,w$$

where α is the scale factor and α = 0.007.

Figure 7 shows the head tracking and recognition results on the third depth map A3: the target's position is boxed, the target's motion track is established, two judgment lines are drawn in the depth map, and the entry/exit statistics are displayed, which also allows passenger flow to be monitored on screen in real time.

(6) The system's statistics were written to a data text file and compared with the actual results, as shown in Table 1. The table shows that, compared with the actual numbers of people entering and exiting, the statistics of the present invention have an average accuracy above 96%.

Table 1

Time | In/Out (counted) | In/Out (actual) | Accuracy
9:00-9:15 | 37/17 | 35/17 | 0.9714
11:30-12:00 | 110/93 | 117/92 | 0.9647
14:00-14:15 | 79/64 | 81/60 | 0.9543
17:00-17:15 | 80/39 | 84/41 | 0.9517
Average | | | 0.9605

In summary, with the passenger flow counting device and method based on sensor depth images of the present invention, a single sensor can be selected for passenger flow counting, or multiple sensors can be used to enlarge the field of view and count passenger flow over multiple fields of view. The video acquisition of the two sensors is synchronized, which solves the sensor synchronization problem of stereo-vision approaches; the depth map output by a single sensor contains three-dimensional information of the human body, which solves the crowding, occlusion, and shadow problems of video-stream-based passenger counting. The system is stable and low-cost, runs at 30 frames per second with an accuracy above 96%, and has broad application prospects.

Claims (7)

1. A passenger flow counting device based on sensor depth images, characterized in that it comprises a sensor base (1), a first depth sensor (2), a second depth sensor (3), a USB data cable (7), a data processing and control unit (8), a display (9), a mode selection switch (10), and a power supply (11); the first depth sensor (2) and the second depth sensor (3) are identical and mounted in parallel on the same sensor base (1); the lenses of the first depth sensor (2) and the second depth sensor (3) both look down at the ground with their central axes perpendicular to the ground plane; the signal outputs of the first depth sensor (2) and the second depth sensor (3) are both connected to the data processing and control unit (8) through the USB data cable (7); the signal output of the data processing and control unit (8) is connected to the display (9) through a video cable (12); the mode selection switch (10) is connected to the control signal input of the data processing and control unit (8); and the outputs of the power supply (11) are connected respectively to the power inputs of the first depth sensor (2), the second depth sensor (3), the data processing and control unit (8), and the display (9);
after the power supply (11) powers the device on, the depth images captured by the first depth sensor (2) and the second depth sensor (3) are input to the data processing and control unit (8) through the USB data cable (7); the data processing and control unit (8) applies digital image processing to the captured depth images and outputs the result to the display (9) through the video cable (12); the horizontal distance WIDTH between the first depth sensor (2) and the second depth sensor (3) satisfies the following condition:

$$\frac{H_0 - H_4}{\tfrac{1}{2}\,\mathrm{WIDTH}} = \frac{1}{\tan\frac{\nu}{2}}$$
where H_0 is the distance from the first depth sensor (2) and the second depth sensor (3) to the ground, 3000 mm < H_0 < 3700 mm; ν is the lateral field angle of the first depth sensor (2) and the second depth sensor (3); and H_4 (2100 mm < H_4 < H_0) is the distance from the tangent point of the two fields of view to the ground.
2. The passenger flow counting device based on sensor depth images according to claim 1, characterized in that the first depth sensor (2) and the second depth sensor (3) are both ASUS Xtion PRO units.
3. A passenger flow counting method based on sensor depth images, characterized in that the steps are as follows:
Step 1: after the power supply (11) powers the device on, the data processing and control unit (8) acquires through the USB data cable (7) the first depth map captured by the first depth sensor (2) and the second depth map captured by the second depth sensor (3);
Step 2: from the first frame of the first depth map or the first frame of the second depth map, calibrate the relationship between the distance H from a target to the ground and the corresponding gray level G in the depth map;
Step 3: determine the working mode through the mode selection switch (10): if the single-depth-sensor working mode is selected, take the first depth map or the second depth map as the third depth map A3 and go to step 6; if the dual-depth-sensor working mode is selected, go to the next step;
Step 4: project both the first depth map and the second depth map into the three-dimensional physical coordinate system;
Step 5: stitch the first depth map A1 and the second depth map A2 transformed into the three-dimensional physical coordinate system to obtain the third depth map A3;
Step 6: segment the third depth map A3 into regions, find the pixel coordinates corresponding to the gray-level minimum in each region, and mark the found pixel coordinates;
Step 7: based on the marked pixel coordinates from the region segmentation, perform head recognition to obtain head information; the specific steps of head recognition are as follows:
Step (7.1): create T binary images with initial value 0, numbered 1 to T, each with pixel resolution C*D; extract all pixel coordinates of marker K_t and set the gray value at the corresponding pixel coordinates in the t-th binary image to 255;
Step (7.2): determine the area Area_t of the connected component in each binary image and the mean gray level Ḡ_t of that connected component in the third depth map A3; find the connected components with the following characteristics: (1) the area Area_t is greater than Area, with Area ≥ 500; (2) the connected component in the binary image is roughly elliptical; (3) the head width w′ and head height h′, converted by the following formulas, are both less than 25 and greater than 15:

$$h' = \alpha\,(255.0 - \bar{G}_t)\,h, \qquad w' = \alpha\,(255.0 - \bar{G}_t)\,w$$
where h′ is the actual height of the human head, w′ is the actual width of the human head, h is the height of the connected component in the binary image, w is its width, and α is a scale factor with 0 < α < 1;
record the starting coordinates, width, height, and region mean Ḡ_t of every connected component found, i.e., the head information;
Step 8: track and count the recognized head information; the head tracking and counting process is as follows:
Step (8.1): within the field of view, take the center point of the head information as the plot; the gate is rectangular with size Q pixels, 10 < Q < 100; the plot (x, y) and the gate (Δx, Δy) determine a spatial search region centered on the predicted target plot (x̂, ŷ), as follows:

$$|x - \hat{x}| \le \Delta x, \qquad |y - \hat{y}| \le \Delta y$$
Step (8.2): first, hypothesize a track: the head target plot is first recorded at frame M, and the target plot's velocity is estimated from the coordinates obtained at frame M+1; if the estimated velocity lies within the search region, a tentative track is generated; next, the position of the target plot is predicted from the coordinates obtained at frame M+2, and a correlation region is determined centered on the predicted position; any plot falling inside the correlation region extends the tentative track; the velocity is re-estimated, the next frame's position is predicted from it, and a correlation region is established; any plot falling inside the correlation region generates a new track; finally, every generated track is fitted with a quadratic curve; if the error between the points on the track and the fitted curve satisfies δ < 0.1, the track is confirmed; otherwise the track is deleted;
Step (8.3): after the target's motion track has been determined, draw two judgment lines at rows l_1 and l_2 of the third depth map A3, with 10 < l_1 < l_2 < 470 pixels; if the current target plot's row coordinate x is less than l_1 but the track contains a plot with row coordinate greater than l_2, the entering count is incremented by 1; conversely, if the current target plot's row coordinate x is greater than l_2 but the track contains a plot with row coordinate less than l_1, the exiting count is incremented by 1 and the track is deleted;
Step 9: process each captured frame of the depth map in turn following steps 4 to 8, and every minute output the accumulated entry/exit counts as a file to the display (9) through the video cable (12).
4. The passenger flow counting method based on sensor depth images according to claim 3, characterized in that the relationship between the distance H from a target to the ground and the corresponding gray level G in the depth map described in step 2 is calibrated as follows:
a target at height H_1 above the ground corresponds to gray level G_1, a target at height H_2 corresponds to gray level G_2, and a target at height H_3 corresponds to gray level G_3, where 100 mm < H_1 < 200 mm, 500 mm < H_2 < 800 mm, and 2300 mm < H_3 < H_0; H_0 is the distance from the sensor to the ground with 3000 mm < H_0 < 3700 mm, and G_0 is the gray level of the ground in the depth map when nobody is within the field of view; then:

$$H_0 = \beta G_0,\qquad H_0 - H_1 = \beta G_1,\qquad H_0 - H_2 = \beta G_2,\qquad H_0 - H_3 = \beta G_3$$
where distances are calibrated in millimeters and β is the scale factor between distance and gray level, with 10 < β < 20.
5. The passenger flow counting method based on sensor depth images according to claim 3, characterized in that both the first depth map and the second depth map are projected into the three-dimensional physical coordinate system in step 4 as follows:
transform the first depth map from two-dimensional image coordinates (i_1, j_1) to three-dimensional physical coordinates (x_1, y_1, z_1) to obtain a new first depth map A1, and transform the second depth map from two-dimensional image coordinates (i_2, j_2) to three-dimensional physical coordinates (x_2, y_2, z_2) to obtain a new second depth map A2; let the pixel resolution of the two-dimensional depth map be I*J, with the center of the vertical axis at I_m and the center of the horizontal axis at J_m; μ is the longitudinal field angle of the sensor and ν is its lateral field angle; the projection is then:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} k_i & 0 & 0 \\ 0 & k_j & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} i \\ j \\ G(i,j) \end{bmatrix}$$
where k_i = sin θ_i · G(i,j), with θ_i the field angle subtended between ordinate i and I_m in the image coordinates (i,j); k_j = sin θ_j · G(i,j), with θ_j the field angle subtended between abscissa j and the center coordinate J_m; and G(i,j) is the gray level at image coordinates (i,j).
6. The passenger flow counting method based on sensor depth images according to claim 3, characterized in that the first depth map A1 and the second depth map A2 transformed into the three-dimensional physical coordinate system are stitched in step 5 as follows:
Step (5.1): determine the transformation R|T between the coordinates (x_2, y_2, z_2) of the second depth map A2 and the coordinates (x_1, y_1, z_1) of the first depth map A1 using the multi-viewpoint measurement-data method:

$$(R|T)\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}$$
take the coordinates P_L^a of S points in the first depth map A1 and the corresponding coordinates P_R^a of those S points in the second depth map A2, with a = 0, 1, 2, ..., S-1 and S > 2, and determine the rotation and translation of the second depth map A2 relative to the first depth map A1:

$$P_L^a = R \times T \times P_R^a$$
R is the rotation of the second depth map A2 relative to the first depth map A1, and T is the displacement of the second depth map A2 relative to the first depth map A1; then:

$$R = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \Delta x_0 & \Delta y_0 & \Delta z_0 & 1 \end{bmatrix}$$
where θ is the rotation angle of the second depth map A2 relative to the first depth map A1, and (Δx_0, Δy_0, Δz_0) is the displacement of the coordinates (x_2, y_2, z_2) of the second depth map A2 relative to the coordinates (x_1, y_1, z_1) of the first depth map A1;
Step (5.2): create a third depth map A3 with pixel resolution C*D and initial value 0, where I < C < 2I and J < D < 2J; convert the coordinates (x_1, y_1, z_1) of the first depth map A1 into the coordinates (x_2, y_2, z_2) of the second depth map A2, then stitch the first depth map A1 and the second depth map A2 and store the result in the third depth map A3; if a gray level G(x,y) > G_1, set G(x,y) = G_1; in the overlapping region of the stitched first depth map A1 and second depth map A2, the gray level is the mean of the gray levels at the corresponding coordinates of A1 and A2.
7. The passenger flow counting method based on sensor depth images according to claim 3, characterized in that the pixel coordinate marking process described in step 6 is as follows:
Step (6.1): using an A*A-pixel window, with 20 < A < 70, divide the third depth map A3 into (C/A)*(D/A) regions, with C/A and D/A both positive integers; traverse all regions of the third depth map A3 with a 3×3 grid and find the center-window regions of the grid satisfying both of the following conditions: (1) the center window's region contains pixels whose gray level is less than G_2 and greater than G_3; (2) the mean gray level of the center-window region is less than the mean gray levels of the other cells of the grid; mark each region found as N_t, with t = 1, 2, 3, ..., T and T < (C/A)*(D/A), and record the minimum gray level Min_t of each region;
Step (6.2): expand the four sides of region N_t outward by a factor of two, find all pixel coordinates in the expanded region whose gray level satisfies G_t(x,y) - Min_t < λ, with 4 < λ < 15, and mark them as K_t; T markers are generated in total.
CN201310279375.1A 2013-07-04 2013-07-04 Passenger flow counting device and method based on sensor depth images Expired - Fee Related CN103345792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310279375.1A CN103345792B (en) 2013-07-04 2013-07-04 Passenger flow counting device and method based on sensor depth images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310279375.1A CN103345792B (en) 2013-07-04 2013-07-04 Passenger flow counting device and method based on sensor depth images

Publications (2)

Publication Number Publication Date
CN103345792A CN103345792A (en) 2013-10-09
CN103345792B true CN103345792B (en) 2016-03-02

Family

ID=49280585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310279375.1A Expired - Fee Related CN103345792B (en) 2013-07-04 2013-07-04 Passenger flow counting device and method based on sensor depth images

Country Status (1)

Country Link
CN (1) CN103345792B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123776B (en) * 2014-07-31 2017-03-01 上海汇纳信息科技股份有限公司 A kind of object statistical method based on image and system
CN104268506B (en) * 2014-09-15 2017-12-15 郑州天迈科技股份有限公司 Passenger flow counting detection method based on depth image
CN104809787B (en) * 2015-04-23 2017-11-17 中电科安(北京)科技股份有限公司 A kind of intelligent passenger volume statistic device based on camera
CN106548451A (en) * 2016-10-14 2017-03-29 青岛海信网络科技股份有限公司 A kind of car passenger flow crowding computational methods and device
CN113536985B (en) * 2021-06-29 2024-05-31 中国铁道科学研究院集团有限公司电子计算技术研究所 Passenger flow distribution statistical method and device based on depth-of-field attention network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731456A (en) * 2005-08-04 2006-02-08 浙江大学 Bus Passenger Flow Statistics Method and System Based on Stereo Vision
CN1950722A (en) * 2004-07-30 2007-04-18 松下电工株式会社 Individual detector and accompanying detection device
CN101587605A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Passenger-flow data management system
CN201845367U (en) * 2010-08-04 2011-05-25 杨克虎 Passenger flow volume statistic device based on distance measurement sensor
CN102890791A (en) * 2012-08-31 2013-01-23 浙江捷尚视觉科技有限公司 Depth information clustering-based complex scene people counting method
CN203503023U (en) * 2013-07-04 2014-03-26 南京理工大学 Passenger flow counting device based on sensor depth image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5065744B2 (en) * 2007-04-20 2012-11-07 パナソニック株式会社 Individual detector

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1950722A (en) * 2004-07-30 2007-04-18 松下电工株式会社 Individual detector and accompanying detection device
CN1731456A (en) * 2005-08-04 2006-02-08 浙江大学 Bus Passenger Flow Statistics Method and System Based on Stereo Vision
CN101587605A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Passenger-flow data management system
CN201845367U (en) * 2010-08-04 2011-05-25 杨克虎 Passenger flow volume statistic device based on distance measurement sensor
CN102890791A (en) * 2012-08-31 2013-01-23 浙江捷尚视觉科技有限公司 Depth information clustering-based complex scene people counting method
CN203503023U (en) * 2013-07-04 2014-03-26 南京理工大学 Passenger flow counting device based on sensor depth image

Also Published As

Publication number Publication date
CN103345792A (en) 2013-10-09

Similar Documents

Publication Publication Date Title
Serna et al. Urban accessibility diagnosis from mobile laser scanning data
CN103150559B (en) Head recognition and tracking method based on Kinect three-dimensional depth image
Wang et al. Window detection from mobile LiDAR data
CN103345792B (en) Passenger flow counting device and method based on sensor depth images
US12092479B2 (en) Map feature identification using motion data and surfel data
Tavasoli et al. Real-time autonomous indoor navigation and vision-based damage assessment of reinforced concrete structures using low-cost nano aerial vehicles
CN106156752B (en) A kind of model recognizing method based on inverse projection three-view diagram
Chong et al. Integrated real-time vision-based preceding vehicle detection in urban roads
Famouri et al. A novel motion plane-based approach to vehicle speed estimation
CN105136064A (en) Moving object three-dimensional size detection system and method
Zhang et al. Longitudinal-scanline-based arterial traffic video analytics with coordinate transformation assisted by 3D infrastructure data
Park et al. Vision-based surveillance system for monitoring traffic conditions
CN103679172B (en) Method for detecting long-distance ground moving object via rotary infrared detector
Hara et al. Exploring early solutions for automatically identifying inaccessible sidewalks in the physical world using google street view
CN203503023U (en) Passenger flow counting device based on sensor depth image
CN104331708B (en) A kind of zebra crossing automatic detection analysis method and system
JP2019174910A (en) Information acquisition device and information aggregation system and information aggregation device
CN102622582B (en) Road pedestrian event detection method based on video
CN106997685A (en) A kind of roadside parking space detection device based on microcomputerized visual
Wang et al. Measuring driving behaviors from live video
Karaki et al. A comprehensive survey of the vehicle motion detection and tracking methods for aerial surveillance videos
Murayama et al. Deep pedestrian density estimation for smart city monitoring
Tan et al. Vehicle speed measurement for accident scene investigation
CN114973147A (en) Distributed surveillance camera positioning method and system based on lidar mapping
Nguyen et al. Deep Learning Based Vehicle Speed Estimation on Highways

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160302

Termination date: 20180704

CF01 Termination of patent right due to non-payment of annual fee