CN102289805A - Video-based method for detecting crowd density in subway - Google Patents
- Publication number
- CN102289805A CN102289805A CN2011101562708A CN201110156270A CN102289805A CN 102289805 A CN102289805 A CN 102289805A CN 2011101562708 A CN2011101562708 A CN 2011101562708A CN 201110156270 A CN201110156270 A CN 201110156270A CN 102289805 A CN102289805 A CN 102289805A
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- crowd
- subway
- crowd density
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention relates to a video-based method for detecting crowd density in a subway. The method comprises the following steps: first, inputting the corresponding four-point coordinates for subway video images of different scenes to mark the areas with human activity and the unmanned areas; judging whether each frame of the video is an error frame and, if so, discarding it; otherwise, extracting frames containing no people one by one to perform Gaussian background modeling; obtaining the foreground by background subtraction using the resulting Gaussian background image and the frames containing the crowd; marking the resulting foreground image and setting the pixel values of the unmanned areas to 0; binarizing the resulting image so that the background is black and the crowd is white; performing connected-domain labeling on the resulting binary image; and detecting the crowd density in the subway from the overall crowd density and the local crowd density.
Description
Technical Field
The invention relates to crowd density detection, and in particular to a video-based subway crowd density detection method.
Background
Subway traffic in major cities is developing rapidly and brings great convenience to people's lives. Detecting crowd density is therefore very important: when passenger density becomes too high, many safety hazards can arise.
The Chinese patent application with publication number CN101431664A introduces a method and system for automatically detecting passenger-flow density based on video images. It processes the collected video images with computer-vision algorithms: when the crowd density is low, it generates the background using pixel-count calculation and time-axis information; when the crowd density is high, it analyses the crowd image from multiple angles by wavelet-packet decomposition, extracts the box-counting dimensions of the wavelet-packet coefficient matrix as features, and classifies them with a classifier to obtain the crowd density grade.
US patent publication No. US2007/0031005A1 describes video-based real-time crowd density estimation: an MRF (Markov Random Field) method is used to segment the image, and geometric rules are then used to estimate the crowd density of the scene. However, the method's accuracy is poor when estimating the density of high-density crowds.
Chinese patent publication No. CN102034243A describes a method for obtaining a crowd density map in a video image. It uses the Harris algorithm to obtain corner points in the detection region, performs density diffusion on each corner point to obtain the density diffusion value of each pixel corresponding to that corner point, and accumulates the density diffusion values that each pixel receives from different corner points to obtain the density value of the pixel; finally, a crowd density map of the detection region is drawn from the pixel density values. The method has high computational complexity and does not consider black-screen, snow and displacement phenomena in the video, which reduces the accuracy of the overall crowd density calculation. Its practical significance for subway crowd density detection is therefore limited.
Therefore, there is an urgent need to improve on conventional crowd density detection.
Disclosure of Invention
The invention aims to solve the above problems and provides a video-based subway crowd density detection method that achieves real-time, accurate crowd density detection.
In order to achieve the above object, the present invention provides a video-based method for detecting subway crowd density, characterized in that the method comprises the following steps:
step 1): inputting the corresponding four-point coordinates, based on the subway video images of different scenes, to mark the area with human activity and the unmanned area;
step 2): judging, according to the unmanned area obtained in step 1), whether each frame in the video is an error frame; if so, discarding the error frame; if not, going to step 3);
step 3): extracting video frames containing no people to perform Gaussian background modeling;
step 4): obtaining the foreground by background subtraction from the Gaussian background image obtained in step 3) and the video frames containing the crowd;
step 5): marking the foreground image obtained in step 4) according to the unmanned area obtained in step 1), and setting the pixel values of the unmanned area to 0;
step 6): binarizing the image obtained in step 5) so that the background is black and the crowd is white, and performing connected-domain labeling on the resulting binary image;
step 7): judging whether i is less than or equal to N; if i <= N, going to step 8); if i > N, going to step 10); where N is the number of connected domains, i is the index of a connected domain, 0 < i <= N, and i is a positive integer;
step 8): if the number of pixels in connected domain i is not within the set range, not counting the pixels of connected domain i towards the overall crowd density; otherwise, going to step 9); where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd, and Num_mem is the number of pixels contained in the minimum circumscribed rectangle of that connected domain;
step 9): counting the pixels of connected domain i towards the overall crowd density, incrementing i by 1, and repeating steps 7), 8) and 9) until i equals N;
step 10): calculating the overall crowd density of one frame of image according to formula (1):
Density = Σi Num_obj(i) / (Height × Width − Num_label)   (1)
where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd and Σi Num_obj(i) is the sum over all connected domains judged to be crowd; Height × Width is the total number of pixels of one frame of image; and Num_label is the total number of pixels of the unmanned area marked by the four-point coordinates.
The error frames in step 2) comprise: black screen, snow and displacement.
The black-screen detection method is as follows: after graying the video frame, compute the mean and variance of the unmanned area marked by the four points; if the mean is less than a threshold th and the variance is less than a threshold zh, the frame is judged to be a black screen.
The threshold th is taken as 60, and the threshold zh as 100.
The snow and displacement detection method is as follows: after graying the video frame, subtract the background pixel values of the unmanned area from the corresponding pixel values of the current frame and compute the sum of squared differences; if this value is greater than a threshold ch, it is judged that the camera has shifted or the picture shows snow.
The threshold ch is taken as 100.
The method has the following advantages. It marks the unmanned area and the area with human activity, and only the activity area is processed when detecting crowd density, which reduces computational complexity and lays a foundation for real-time operation. When calculating crowd density, the black-screen, snow and displacement phenomena of the camera picture are also considered, which improves the overall accuracy of crowd density detection. Finally, the ratio of the number of pixels in the connected domains of the activity area to the number of pixels of the activity area is used as the overall crowd density value, which has low computational complexity and high accuracy in measuring the overall crowd density, especially for scenes with high crowd density.
Drawings
FIG. 1 is a flow chart of a video-based subway crowd density detection method of the present invention;
FIG. 2 is a diagram illustrating the definition of a connected domain i according to the present invention;
fig. 3 is a calibration diagram of the unmanned area of the present invention.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings and the detailed description.
As shown in fig. 1, fig. 1 is a flowchart of a subway crowd density detection method based on video according to the present invention. The method comprises the following steps:
step 1): inputting the corresponding four-point coordinates, based on the subway video images of different scenes, to mark the area with human activity and the unmanned area;
step 2): judging, according to the unmanned area obtained in step 1), whether each frame in the video is an error frame; if so, discarding the error frame; if not, going to step 3);
step 3): extracting video frames of the scene containing no people to perform Gaussian background modeling;
step 4): obtaining the foreground by background subtraction from the Gaussian background image obtained in step 3) and the video frames containing the crowd;
step 5): marking the foreground image obtained in step 4) according to the unmanned area obtained in step 1), and setting the pixel values of the unmanned area to 0;
step 6): binarizing the image obtained in step 5) so that the background is black and the crowd is white, and performing connected-domain labeling on the resulting binary image;
step 7): judging whether i is less than or equal to N; if i <= N, going to step 8); if i > N, going to step 10); where N is the number of connected domains, i is the index of a connected domain, 0 < i <= N, and i is a positive integer;
step 8): if the number of pixels in connected domain i is not within the set range, not counting the pixels of connected domain i towards the overall crowd density; otherwise, going to step 9); where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd, and Num_mem is the number of pixels contained in the minimum circumscribed rectangle of that connected domain;
step 9): counting the pixels of connected domain i towards the overall crowd density, incrementing i by 1, and repeating steps 7), 8) and 9) until i equals N;
step 10): calculating the overall crowd density of one frame of image according to formula (1):
Density = Σi Num_obj(i) / (Height × Width − Num_label)   (1)
where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd and Σi Num_obj(i) is the sum over all connected domains judged to be crowd; Height × Width is the total number of pixels of one frame of image; and Num_label is the total number of pixels of the unmanned area marked by the four-point coordinates.
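Steps 4) to 10) can be sketched in code as follows. This is an illustrative sketch, not the patented implementation: the function names, the pure-Python connected-domain labeling, and the parameter values (difference threshold and pixel-count range) are assumptions for illustration.

```python
import numpy as np

def label_components(binary):
    """4-connectivity connected-domain labeling (step 6) via flood fill."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    n = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                n += 1
                stack = [(y, x)]
                labels[y, x] = n
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n
                            stack.append((ny, nx))
    return labels, n

def overall_density(frame, background, activity_mask,
                    diff_thresh=30, min_pixels=5, max_pixels=10000):
    """Overall crowd density of one frame, following steps 4)-10).

    frame, background: 2-D grayscale arrays; activity_mask: True where
    human activity is possible. diff_thresh and the pixel-count range
    are illustrative values, not taken from the patent.
    """
    # Step 4: background subtraction yields the foreground.
    fg = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
    # Step 5: zero out the unmanned area.
    fg &= activity_mask
    # Step 6: fg is already binary; label its connected domains.
    labels, n = label_components(fg)
    # Steps 7)-9): accumulate Num_obj(i) for domains inside the set range.
    crowd_pixels = 0
    for i in range(1, n + 1):
        num_obj = int((labels == i).sum())
        if min_pixels <= num_obj <= max_pixels:
            crowd_pixels += num_obj
    # Step 10: formula (1) -- crowd pixels over the person-accessible pixels.
    height, width = frame.shape
    num_label = int((~activity_mask).sum())
    return crowd_pixels / (height * width - num_label)
```

In a real deployment the labeling would typically be delegated to an optimized routine; the loop version here only makes the connected-domain counting of steps 7)-9) explicit.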
1. Definition of overall and local crowd densities
The overall crowd density of one frame is Density = Σi Num_obj(i) / (Height × Width − Num_label), where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd and the sum runs over all connected domains judged to be crowd; Height × Width is the total number of pixels of one frame of image; and Num_label is the total number of pixels of the unmanned area marked by the four-point coordinates.
As shown in fig. 2, fig. 2 is a definition diagram of connected domain i of the present invention. The calculation formula of the local crowd density is: ρ(i) = Num_obj(i) / Num_mem, where Num_obj(i) is the number of pixels in connected domain i and Num_mem is the number of pixels contained in the minimum circumscribed rectangle of that connected domain.
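Given a label image such as the one produced by connected-domain labeling, the local density can be sketched directly from this definition (the function name and input layout are assumptions):

```python
import numpy as np

def local_density(labels, i):
    """Local crowd density of connected domain i:
    Num_obj(i) divided by the pixel count Num_mem of the domain's
    minimum circumscribed (axis-aligned bounding) rectangle."""
    ys, xs = np.nonzero(labels == i)
    num_obj = ys.size                                           # Num_obj(i)
    num_mem = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)  # Num_mem
    return num_obj / num_mem
```

A solid rectangular domain has local density 1.0; a sparse, scattered domain has a value well below 1, which is what step 8) screens against.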
2. Gaussian background modeling
Traditional background modeling often uses a mixture-of-Gaussians model. In the subway environment, however, the illumination intensity shows no obvious change, and during the period in which the background is built the effective duration of a moving foreground in any area is far shorter than the duration for which that area is empty, so the mixture-of-Gaussians model can be simplified: only one Gaussian model per pixel is used for background modeling. The parameters of the Gaussian model are the mean and the variance. The simplified Gaussian background modeling is likewise divided into two steps.
Establishing the background: since only a single Gaussian model is used, multiple frames of images are used to build the Gaussian background in order to improve its correctness, and one frame is extracted at intervals to avoid interference from pedestrians walking through or lingering.
The specific process is as follows: first, one frame is extracted arbitrarily, and the mean of the Gaussian model of each pixel is set to the current value of that pixel. Then one frame is extracted at fixed intervals and the mean bg_model.mean(i, j) of each pixel's Gaussian model is updated, N frames being extracted in total. Finally, the initial variance of each pixel's Gaussian model is set to Variance_ini (an appropriate compromise between the variance of the N frames in the unmanned area and that in the activity area).
Updating the background: for each subsequent frame, a condition judgment is performed for each pixel, where pixel is the current pixel value and temp = pixel − bg_model.mean(i, j);
If |temp| / bg_model.variance(i, j) <= threshold1, the pixel is judged to be background and the two parameters of its Gaussian model are updated as follows:
bg_model.mean(i, j) = bg_model.mean(i, j) + learn_rate × temp;
bg_model.variance(i, j) = (1 − learn_rate) × bg_model.variance(i, j) + learn_rate × temp²;
where learn_rate is the learning rate.
If |temp| / bg_model.variance(i, j) > threshold1, the pixel is considered foreground and is not processed.
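The update rule above can be sketched as a small per-pixel model. The names mirror the text's bg_model.mean, bg_model.variance and learn_rate; the class shape is an assumption, the N-frame establishment is collapsed to initialising the mean from one empty frame, and the background test deliberately divides temp by the variance because that is what the text states (a more conventional test would divide by the standard deviation).

```python
import numpy as np

class GaussianBackground:
    """Per-pixel single-Gaussian background model (simplified modeling)."""

    def __init__(self, empty_frame, variance_ini=100.0,
                 learn_rate=0.05, threshold1=2.5):
        self.mean = empty_frame.astype(float)        # bg_model.mean(i, j)
        self.variance = np.full(empty_frame.shape,   # bg_model.variance(i, j)
                                float(variance_ini))
        self.learn_rate = learn_rate
        self.threshold1 = threshold1

    def update(self, frame):
        """Classify each pixel and update the background parameters.

        Returns a boolean foreground mask; foreground pixels are left
        unprocessed, exactly as in the text."""
        temp = frame.astype(float) - self.mean
        # Background test as written in the text: |temp| over the variance.
        bg = np.abs(temp) / self.variance <= self.threshold1
        # Update mean and variance only at background pixels.
        self.mean[bg] += self.learn_rate * temp[bg]
        self.variance[bg] = ((1 - self.learn_rate) * self.variance[bg]
                             + self.learn_rate * temp[bg] ** 2)
        return ~bg
```

With learn_rate = 0.05, the mean drifts slowly towards new observations, so a briefly stationary pedestrian does not get absorbed into the background.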
For this design, the simplified Gaussian modeling method models the background well.
Considerations for the number of frames extracted for Gaussian background modeling and how to extract them:
1. The extraction interval is chosen so that the scene does not change drastically between samples. With a camera frequency of 25 frames per second, one frame is extracted every 4 seconds; if a person is moving in the background, he will have moved out of the area he occupied in the previous frame by the next sample, so the background is fitted relatively accurately.
2. The Gaussian model is built from 100 frames. Experiments show that the fitting effect hardly improves beyond 100 frames, while taking more frames increases the computation and fitting time; moreover, as more passengers gradually enter the station over a longer fitting period, the fit degrades. Taking too few frames makes the model inaccurate (ghosting of people in the image). Therefore 100 frames are used.
Parameter setting in the simplified Gaussian background modeling: threshold1 is chosen as a compromise between the unmanned area and the crowd-activity area, and is taken as 2.5 in this design; learn_rate is set to the commonly used value 0.05.
3. Unmanned area calibration
As shown in fig. 3, fig. 3 is a calibration diagram of the unmanned area of the present invention. There are places where no person can be present, such as above the electronic clock and the spaces above the two side columns. By observing the passenger flow in the video frames, it was found that the unmanned area can be calibrated by three straight lines through four points, with coordinates: x0 = 210, y0 = 1; x1 = 80, y1 = 150; x2 = 80, y2 = 210; x3 = 210, y3 = 352. Any other point (x, y) in the image is judged to lie in the unmanned area if x < x1, or (x1 − x0)·y + (x − x1)·y0 − (x − x0)·y1 >= 0, or (x3 − x2)·y + (x − x3)·y2 − (x − x2)·y3 >= 0. The four coordinate points can be input by the user, i.e., the user can calibrate the unmanned area.
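With the calibrated four points, the point test follows directly from the three line conditions (the function name and the tuple layout are assumptions; the sign convention mirrors the conditions stated above):

```python
def in_unmanned_area(x, y, pts):
    """True if image point (x, y) lies in the unmanned area.

    pts = ((x0, y0), (x1, y1), (x2, y2), (x3, y3)), the four
    user-calibrated coordinates that define the three boundary lines."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = pts
    # Signed tests against the two slanted boundary lines.
    line01 = (x1 - x0) * y + (x - x1) * y0 - (x - x0) * y1
    line23 = (x3 - x2) * y + (x - x3) * y2 - (x - x2) * y3
    # Left of x1, or on the unmanned side of either line.
    return x < x1 or line01 >= 0 or line23 >= 0
```

With the coordinates given in the text, a point such as (50, 100) lies left of x1 = 80 and is classified as unmanned area.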
4. Error frame detection thought
Black screen: a black screen is characterized by the whole picture being substantially the same. Read a video frame, convert it to a grey-scale image, then compute the mean and variance of the whole image; if the mean is smaller than threshold 1 and the variance is smaller than threshold 2, the frame is judged to be a black screen. To reduce the amount of calculation, this check can be run on one frame extracted per second.
Judging snow and displacement: with the coordinates of the unmanned area calibrated, after a video frame is read and converted to a grey-scale image, the pixels of its unmanned area are subtracted from the corresponding background pixels and the sum of squared differences is computed; if this value is larger than a set threshold 3, the camera is considered to have shifted or the picture to show snow. (Since no video with the relevant frame errors was available for experiment, the thresholds are provisionally set as: threshold 1 = 60, threshold 2 = 100, threshold 3 = 100.) Black screen, snow and displacement are all video-error alarms and can all be judged in the unmanned area, so their judgments can be combined; since this part only needs to be computed over the unmanned area, the amount of calculation is small.
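The two checks above can be combined into one test over the unmanned area, as the text suggests. This is a sketch: the function name is an assumption, and the thresholds are the provisional values from the text.

```python
import numpy as np

def is_error_frame(gray, background, unmanned_mask,
                   threshold1=60, threshold2=100, threshold3=100):
    """Combined black-screen / snow / displacement check.

    gray, background: grayscale frames as 2-D arrays;
    unmanned_mask: True over the calibrated unmanned area, so only
    that region is examined."""
    region = gray[unmanned_mask].astype(float)
    # Black screen: the picture is uniformly dark.
    if region.mean() < threshold1 and region.var() < threshold2:
        return True
    # Snow or camera shift: the static unmanned area no longer
    # matches the background.
    diff = region - background[unmanned_mask].astype(float)
    return bool((diff ** 2).sum() > threshold3)
```

Frames flagged by this check are discarded in step 2) before any density calculation, which is what protects the overall density figure from video faults.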
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (6)
1. A video-based subway crowd density detection method, characterized by comprising the following steps:
step 1): inputting the corresponding four-point coordinates, based on the subway video images of different scenes, to mark the area with human activity and the unmanned area;
step 2): judging, according to the unmanned area obtained in step 1), whether each frame in the video is an error frame; if so, discarding the error frame; if not, going to step 3);
step 3): extracting video frames containing no people to perform Gaussian background modeling;
step 4): obtaining the foreground by background subtraction from the Gaussian background image obtained in step 3) and the video frames containing the crowd;
step 5): marking the foreground image obtained in step 4) according to the unmanned area obtained in step 1), and setting the pixel values of the unmanned area to 0;
step 6): binarizing the image obtained in step 5) so that the background is black and the crowd is white, and performing connected-domain labeling on the resulting binary image;
step 7): judging whether i is less than or equal to N; if i <= N, going to step 8); if i > N, going to step 10); where N is the number of connected domains, i is the index of a connected domain, 0 < i <= N, and i is a positive integer;
step 8): if the number of pixels in connected domain i is not within the set range, not counting the pixels of connected domain i towards the overall crowd density; otherwise, going to step 9); where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd, and Num_mem is the number of pixels contained in the minimum circumscribed rectangle of that connected domain;
step 9): counting the pixels of connected domain i towards the overall crowd density, incrementing i by 1, and repeating steps 7), 8) and 9) until i equals N;
step 10): calculating the overall crowd density of one frame of image according to formula (1):
Density = Σi Num_obj(i) / (Height × Width − Num_label)   (1)
where Num_obj(i) is the number of pixels in the ith connected domain judged to be crowd and Σi Num_obj(i) is the sum over all connected domains judged to be crowd; Height × Width is the total number of pixels of one frame of image; and Num_label is the total number of pixels of the unmanned area marked by the four-point coordinates.
2. A video-based subway crowd density detection method as claimed in claim 1, wherein the error frames in step 2) comprise: black screen, snow and displacement.
3. A video-based subway crowd density detection method as claimed in claim 2, wherein the black-screen detection method comprises: after graying the video frame, computing the mean and variance of the unmanned area marked by the four points, and judging the frame to be a black screen if the mean is less than a threshold th and the variance is less than a threshold zh.
4. A video-based subway crowd density detection method as claimed in claim 3, wherein the threshold th is taken as 60 and the threshold zh as 100.
5. A video-based subway crowd density detection method as claimed in claim 2, wherein the snow and displacement detection method comprises: after graying the video frame, subtracting the background pixel values of the unmanned area from the corresponding pixel values of the current frame, computing the sum of squared differences, and judging that the camera has shifted or the picture shows snow if this value is greater than a threshold ch.
6. A video-based subway crowd density detection method as claimed in claim 5, wherein the threshold ch is taken as 100.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011101562708A CN102289805A (en) | 2011-06-10 | 2011-06-10 | Video-based method for detecting crowd density in subway |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011101562708A CN102289805A (en) | 2011-06-10 | 2011-06-10 | Video-based method for detecting crowd density in subway |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102289805A true CN102289805A (en) | 2011-12-21 |
Family
ID=45336195
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011101562708A Pending CN102289805A (en) | 2011-06-10 | 2011-06-10 | Video-based method for detecting crowd density in subway |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102289805A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708565A (en) * | 2012-05-07 | 2012-10-03 | 深圳市贝尔信智能系统有限公司 | Foreground detection method, device and system |
CN102750710A (en) * | 2012-05-31 | 2012-10-24 | 信帧电子技术(北京)有限公司 | Method and device for counting motion targets in images |
CN103347156A (en) * | 2013-06-24 | 2013-10-09 | 清华大学 | Method for estimating number of wild animal populations based on infrared camera data |
CN103489012A (en) * | 2013-09-30 | 2014-01-01 | 深圳市捷顺科技实业股份有限公司 | Crowd density detecting method and system based on support vector machine |
CN104135910A (en) * | 2012-02-20 | 2014-11-05 | 佳能株式会社 | Image forming apparatus and image forming method |
CN105574499A (en) * | 2015-12-15 | 2016-05-11 | 东华大学 | Method and system for detecting and counting number of people based on SOC |
CN106407901A (en) * | 2016-08-31 | 2017-02-15 | 重庆云库房物联科技有限公司 | Infrared array-based induced abortion behavior analysis system and human body identification method |
CN107784258A (en) * | 2016-08-31 | 2018-03-09 | 南京三宝科技股份有限公司 | Subway density of stream of people method of real-time |
CN110287929A (en) * | 2019-07-01 | 2019-09-27 | 腾讯科技(深圳)有限公司 | The quantity of target determines method, apparatus, equipment and storage medium in group region |
CN110285858A (en) * | 2019-07-18 | 2019-09-27 | 郑州铁路职业技术学院 | A kind of railroad train operating environment monitoring device based on wireless sensor |
CN110390266A (en) * | 2019-06-24 | 2019-10-29 | 黄燕 | A kind of system and its measurement method of the measurement scenic spot flow of the people based on area algorithm |
CN110490103A (en) * | 2019-08-05 | 2019-11-22 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | Track traffic for passenger flow Density Detection method and device |
CN111680547A (en) * | 2020-04-27 | 2020-09-18 | 北京百度网讯科技有限公司 | Method and device for identifying traffic countdown board, electronic equipment and storage medium |
CN114030907A (en) * | 2022-01-10 | 2022-02-11 | 安徽高哲信息技术有限公司 | Feeding system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040091153A1 (en) * | 2002-11-08 | 2004-05-13 | Minolta Co., Ltd. | Method for detecting object formed of regions from image |
CN101188743A (en) * | 2007-09-17 | 2008-05-28 | 深圳先进技术研究院 | An intelligent digital system based on video and its processing method |
CN101325690A (en) * | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
- 2011-06-10 CN CN2011101562708A patent/CN102289805A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040091153A1 (en) * | 2002-11-08 | 2004-05-13 | Minolta Co., Ltd. | Method for detecting object formed of regions from image |
CN101325690A (en) * | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
CN101188743A (en) * | 2007-09-17 | 2008-05-28 | 深圳先进技术研究院 | An intelligent digital system based on video and its processing method |
Non-Patent Citations (3)
Title |
---|
Wang Erdan et al.: "Crowd density estimation method based on multi-scale analysis and fractals", Computer Engineering and Applications (《计算机工程与应用》), no. 29, 11 October 2005 (2005-10-11), pages 35 - 38 *
Jiang Xutuan et al.: "Crowd density detection based on real-time background extraction", Computer Knowledge and Technology (《电脑知识与技术》), vol. 6, no. 15, 25 May 2010 (2010-05-25), pages 3996 - 3998 *
Guo Jun: "Design and development of a cable-TV video image fault monitoring system", China Masters' Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库 信息科技辑》), no. 07, 15 July 2006 (2006-07-15), pages 136 - 165 *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104135910A (en) * | 2012-02-20 | 2014-11-05 | 佳能株式会社 | Image forming apparatus and image forming method |
US10791920B2 (en) | 2012-02-20 | 2020-10-06 | Canon Kabushiki Kaisha | Image forming apparatus and image forming method |
CN104135910B (en) * | 2012-02-20 | 2017-05-24 | 佳能株式会社 | Image forming apparatus and image forming method |
CN102708565A (en) * | 2012-05-07 | 2012-10-03 | 深圳市贝尔信智能系统有限公司 | Foreground detection method, device and system |
CN102750710A (en) * | 2012-05-31 | 2012-10-24 | 信帧电子技术(北京)有限公司 | Method and device for counting motion targets in images |
CN103347156A (en) * | 2013-06-24 | 2013-10-09 | 清华大学 | Method for estimating number of wild animal populations based on infrared camera data |
CN103347156B (en) * | 2013-06-24 | 2016-03-30 | 清华大学 | A kind of method based on infrared camera data estimation wild animal population quantity |
CN103489012A (en) * | 2013-09-30 | 2014-01-01 | 深圳市捷顺科技实业股份有限公司 | Crowd density detecting method and system based on support vector machine |
CN103489012B (en) * | 2013-09-30 | 2017-05-24 | 深圳市捷顺科技实业股份有限公司 | Crowd density detecting method and system based on support vector machine |
CN105574499B (en) * | 2015-12-15 | 2019-08-20 | 东华大学 | A kind of number detection statistics method and system based on SOC |
CN105574499A (en) * | 2015-12-15 | 2016-05-11 | 东华大学 | Method and system for detecting and counting number of people based on SOC |
CN107784258A (en) * | 2016-08-31 | 2018-03-09 | 南京三宝科技股份有限公司 | Subway density of stream of people method of real-time |
CN106407901A (en) * | 2016-08-31 | 2017-02-15 | 重庆云库房物联科技有限公司 | Infrared array-based induced abortion behavior analysis system and human body identification method |
CN110390266A (en) * | 2019-06-24 | 2019-10-29 | 黄燕 | A kind of system and its measurement method of the measurement scenic spot flow of the people based on area algorithm |
CN110287929A (en) * | 2019-07-01 | 2019-09-27 | 腾讯科技(深圳)有限公司 | The quantity of target determines method, apparatus, equipment and storage medium in group region |
CN110287929B (en) * | 2019-07-01 | 2023-09-05 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for determining number of targets in group area |
CN110285858A (en) * | 2019-07-18 | 2019-09-27 | 郑州铁路职业技术学院 | A kind of railroad train operating environment monitoring device based on wireless sensor |
CN110490103A (en) * | 2019-08-05 | 2019-11-22 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | Track traffic for passenger flow Density Detection method and device |
CN110490103B (en) * | 2019-08-05 | 2022-06-14 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | Method and device for detecting passenger flow density of rail transit |
CN111680547A (en) * | 2020-04-27 | 2020-09-18 | 北京百度网讯科技有限公司 | Method and device for identifying traffic countdown board, electronic equipment and storage medium |
CN111680547B (en) * | 2020-04-27 | 2024-01-12 | 阿波罗智能技术(北京)有限公司 | Traffic countdown sign recognition method and device, electronic equipment and storage medium |
CN114030907A (en) * | 2022-01-10 | 2022-02-11 | 安徽高哲信息技术有限公司 | Feeding system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102289805A (en) | Video-based method for detecting crowd density in subway | |
WO2021208275A1 (en) | Traffic video background modelling method and system | |
Shin et al. | A superparticle filter for lane detection | |
EP1836683B1 (en) | Method for tracking moving object in video acquired of scene with camera | |
US8019157B2 (en) | Method of vehicle segmentation and counting for nighttime video frames | |
CN103295016B (en) | Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics | |
CN104978567B (en) | Vehicle checking method based on scene classification | |
CN104200485A (en) | Video-monitoring-oriented human body tracking method | |
CN101882217B (en) | Target classification method of video image and device | |
CN103413444A (en) | Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video | |
CN108564120B (en) | Feature point extraction method based on deep neural network | |
CN101976504B (en) | Multi-vehicle video tracking method based on color space information | |
CN103218816A (en) | Crowd density estimation method and pedestrian volume statistical method based on video analysis | |
KR101285106B1 (en) | Obstacle detection method using image data fusion and apparatus | |
CN103996198A (en) | Method for detecting region of interest in complicated natural environment | |
CN109919053A (en) | A kind of deep learning vehicle parking detection method based on monitor video | |
CN107346547B (en) | Monocular platform-based real-time foreground extraction method and device | |
CN106204594A (en) | A kind of direction detection method of dispersivity moving object based on video image | |
Lian et al. | A novel method on moving-objects detection based on background subtraction and three frames differencing | |
CN109711256B (en) | Low-altitude complex background unmanned aerial vehicle target detection method | |
CN106991686A (en) | A kind of level set contour tracing method based on super-pixel optical flow field | |
CN102663778B (en) | A kind of method for tracking target based on multi-view point video and system | |
CN104766065A (en) | Robustness prospect detection method based on multi-view learning | |
US20170103536A1 (en) | Counting apparatus and method for moving objects | |
Lin et al. | Improved traffic sign recognition for in-car cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20111221 |