CN109344690B - People counting method based on depth camera - Google Patents


Info

Publication number: CN109344690B
Application number: CN201810903897.7A
Authority: CN (China)
Prior art keywords: target, pedestrian, moving, depth, frame
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109344690A (en)
Inventors: 王海宽, 戚谢鑫, 孙浩翔, 李仲秋
Current Assignee: Shanghai Qingshi Intelligent Technology Co ltd
Original Assignee: Shanghai Qingshi Intelligent Technology Co ltd
Application filed by Shanghai Qingshi Intelligent Technology Co ltd
Publication of application: CN109344690A
Application granted; publication of grant: CN109344690B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53: Recognition of crowd images, e.g. recognition of crowd congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a people counting method based on a depth camera. First, the depth camera acquires a depth image of the monitored area in real time and a three-dimensional background model is constructed. Moving targets at different distances are then identified, and overlapped moving targets are segmented, by a layered search of the depth information; vertical and horizontal projections locate each moving target and split the same depth layer into multiple moving targets. Next, according to the mapping relation between a moving target's depth and its height, the morphological density proportion of the moving target in different depth regions is calculated to judge whether it is a pedestrian. Finally, people are counted by searching and matching the head region in the current frame, using the detected center coordinate of the pedestrian's head feature region in the latest frame and the pedestrian's speed.

Description

People counting method based on depth camera
Technical Field
The invention relates to the technical field of real-time video image processing and recognition, in particular to a people counting method based on a depth camera.
Background
At present, with the rising level of information management, people-counting data such as real-time estimates, passenger flow distribution analysis and congestion estimation for places with heavy foot traffic (supermarkets, shopping malls, stations, banks and the like) have become an effective source of first-hand data for public-area management. Current passenger flow detection and monitoring mainly adopts three types of technology: infrared detection, pressure detection and image processing.
(1) Infrared detection is a mature technology, widely applied with manual auxiliary monitoring in places with frequent pedestrian movement such as stations, docks, shops and bookstores. It can accurately and effectively count people passing at a certain spacing, but performs poorly on crowds packed closely front-to-back or passing in both directions at once.
(2) Pressure detection senses the presence of a person by detecting body weight. For passenger flow detection the common approach is a pedal-type pressure sensor: when a passenger steps on the pedal, the strain gauge inside the sensor deforms and changes the current in the sensor, and sampling this current change yields the passenger flow statistics. Pressure monitoring presupposes that every person steps on a pedal, and it is difficult to detect reliably when several feet step on one pedal simultaneously or people step on it in quick succession.
(3) Among image processing methods, the most commonly used is people counting based on two-dimensional image processing, as disclosed in Chinese patent CN102855466A, "A people counting method based on video image processing". That method, however, is easily disturbed by illumination changes and the shadows they cast, so pedestrian sub-regions are segmented inaccurately and the pedestrian-flow statistics suffer; occlusion and deformation of the target also have a large influence. Such counting technology therefore cannot meet the demand of counting constantly changing pedestrian flow in the varied environments of public areas.
In view of the above, there is a need to improve the people counting method in public areas in the prior art to solve the above problems.
Disclosure of Invention
Aiming at the problems in the background art, the invention provides a people counting method based on a depth camera, solving the technical problems that prior-art people counting methods for public areas have poor robustness to the application environment and low tracking accuracy for moving targets.
In order to achieve the purpose, the invention provides the following technical scheme: a depth camera based people counting method, the method comprising the steps of:
s1: a depth camera machine vision platform is built, and a depth camera is used for acquiring a depth image of a monitored area in real time;
s2: acquiring a depth image of an application place in an open space by using a depth camera, and constructing a three-dimensional space background model by using a multi-frame image averaging method;
s3: obtaining a foreground image of the moving target by using a background difference method, and carrying out binarization processing on the foreground image to obtain a binary image of the moving target;
s4: identifying moving targets at different distances and segmenting overlapped moving targets by a layered search of the depth information; then locating the moving targets with vertical and horizontal projections and splitting multiple moving targets within the same depth layer, thereby accurately dividing out each small region containing a moving object and further segmenting occluded targets;
s5: according to the mapping relation between the depth and the height of the moving target, dividing areas with different depths are respectively processed, the morphological density proportion of the moving target in areas with different depths is calculated, and then a template matching method is combined to judge the pedestrian;
s6: searching and matching the head region in the current frame according to the center coordinates of the head region of the pedestrian of the latest frame detected in S5 and the speed of the pedestrian calculated from the previous frames;
s7: and counting the number of people according to the motion trail of the matched people.
As a preferred embodiment of the present invention, the step S4 includes the following steps:
step S41: layering the monitoring area according to the distance, and searching moving targets located at different depth layers;
step S42: respectively carrying out horizontal projection and vertical projection on the moving target detected in each depth layer to obtain the upper and lower boundaries of the moving target of each depth layer; and segmenting a plurality of moving targets existing in the same distance layer according to the peak-valley points.
As a preferable aspect of the present invention, the algorithm for determining a pedestrian in step S5 includes the steps of:
step S51: assuming that the detected moving object is a pedestrian, respectively processing the moving objects with different depths according to the mapping relation between the depth of the moving object and the height in the image: calculating the morphological density proportion of the moving target in different depth areas, and performing first pedestrian judgment on the moving target according to the proportion result; recording a moving target meeting the morphological density proportion as a target A, and recording a moving target not meeting the morphological density proportion as a target B;
step S52: performing second morphological density judgment on the moving object A in the step S51, and finally judging the moving object A to be a pedestrian if the morphological density proportion meets the requirement; and judging the pedestrian by adopting a template matching mode for the moving target B, and finally judging the pedestrian if the template matching result meets the requirement.
As a preferable technical solution of the present invention, in the step S6: the search and matching of the pedestrian head feature region in the current frame are performed in conjunction with the center coordinates of the pedestrian head feature region of the latest frame detected in S5 and the speed of the pedestrian calculated from the previous frames, which includes the following calculation procedures:
in a video image, the time interval between two adjacent frames is short, and the motion state of a pedestrian between the two adjacent frames does not change too fast, so that the motion of the same pedestrian between the two adjacent frames can be considered to be uniform, and the upper limit of the motion distance of the center of the head characteristic region of the same pedestrian between the two frames is recorded as Th;
let a set of image sequences captured by a depth camera be P ═ (P) 1 ,p 2 ,...p n ...) in which the image p is in the nth frame n Detecting m targets s 1,n ,s 2,n ,...,s m,n (ii) a Recording moving objects s i,n (1 < i < m) head center coordinates of
Figure BDA0001760136900000041
The n-1 th frame detects k targets m 1,n-1 ,m 2,n-1 ,m 3,n- 1,...m k,n-1 Moving object m i,n-1 (1 < i < k) head center coordinates of
Figure BDA0001760136900000042
Is provided with
Figure BDA0001760136900000043
T i For the motion situation of k detected objects in the (n-1) th frame, wherein v x ,v y Represents the velocity of the ith target in the (n-1) th frame, which is calculated from the positions detected twice by the target point;
the matching algorithm process is as follows:
(1) predicting the position of the pedestrian target in the current frame according to the position, speed and other information of the center coordinate of the head region of the target detected in the latest frame; taking target m_{j,n-1} in the (n-1)-th frame (1 ≤ j ≤ k) as an example, with state
T_j = (x_{j,n-1}, y_{j,n-1}, v_x, v_y)
its predicted position in the n-th frame, S_p = (x_p, y_p), is:
x_p = x_{j,n-1} + v_x
y_p = y_{j,n-1} + v_y
(2) calculating d = ||S_{i,n} - S_p||;
(3) for all targets T_j (j = 1, 2, ..., k) in the target chain, calculating the measurement vector D = (d_1, d_2, ..., d_k) and obtaining d_j = min(D); if d_j < Th, the matching is successful, s_{i,n} is determined to be the new position of T_j in the n-th frame, and (4) is executed; otherwise s_{i,n} is a new target and (5) is executed;
(4) updating T_j:
T_j = (x', y', v'_x, v'_y)
where (x', y') is the head center of s_{i,n}, and the velocities v'_x and v'_y are calculated from the center coordinates of s_{i,n} and the center coordinates of T_j before the update;
(5) adding target T_{k+1} at the tail of the target chain:
T_{k+1} = (x', y', 0, 0)
where (x', y') is the head center of s_{i,n}; the initial velocity of the new target is set to 0.
Compared with the prior art, the invention has the following beneficial effects: the method segments the located moving objects according to depth-information projections, screens them by the head-proportion characteristic of the human body, and tracks the screened objects as targets. Compared with traditional people counting based on two-dimensional images, it is more stable and accurate, is not disturbed by illumination changes or the shadows they cast, and effectively improves the efficiency and accuracy of real-time people counting in public areas.
Drawings
FIG. 1 is a flow chart of a depth camera based people counting method of the present invention;
FIG. 2 is a schematic diagram of a depth camera installation and collection space model for a depth camera-based people counting method of the present invention;
FIG. 3 is a schematic diagram illustrating pedestrian discrimination according to the people counting method based on the depth camera.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a people counting method based on a depth camera according to an embodiment of the invention.
In this embodiment, the people counting method includes the steps of:
s1: a depth camera machine vision platform is built, and a depth camera is used for acquiring a depth image of a monitored area in real time;
referring to fig. 2, the people counting method based on the depth camera of the present invention is based on the fact that the camera is tilted downward by 30 degrees for shooting and is suitable for outdoor and indoor situations. In the present embodiment, the step S1 specifically includes: a video image of a monitoring area 03 is obtained through the depth camera 01 to serve as an input image, the monitoring area is located obliquely below the camera, 02 is an entrance, and pedestrians enter and exit the door in the direction indicated by 04.
Specifically, the depth camera is mounted directly above the doorway area, so that the monitoring area captured by the camera completely covers the entire doorway region.
S2: acquiring a depth image of an application place in an open state by using a depth camera, and constructing a three-dimensional space background model by using a multi-frame image averaging method;
and calculating an initial three-dimensional space background model D (x, y) by adopting a multi-frame image averaging method according to the following formula.
Figure BDA0001760136900000061
Wherein d is k (x, y, k) represents the depth value of the k frame image at point (x, y), and N is the number of statistical frames. In this embodiment, N is taken to be 3.
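The averaging step above can be sketched in a few lines of Python; the 2x2 frames and millimetre depth values are purely illustrative:

```python
import numpy as np

def build_background(depth_frames):
    """Average N depth frames into an initial 3-D background model D(x, y)."""
    stack = np.stack(depth_frames).astype(np.float64)  # shape (N, H, W)
    return stack.mean(axis=0)

# Illustrative N = 3 frames of a flat scene, depth values in millimetres
frames = [np.full((2, 2), 3000.0),
          np.full((2, 2), 3003.0),
          np.full((2, 2), 2997.0)]
D = build_background(frames)
print(D[0, 0])  # 3000.0
```

With N as small as 3 the model is cheap to rebuild, at the cost of more sensitivity to sensor noise in any single frame.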
S3: obtaining a difference image of the moving target by using a background difference method, and performing binarization processing on the difference image to obtain a binary image of the moving target;
the background difference method and the binarization process for the difference image are expressed by the following formulas:
B(x,y,i)=d(x,y,i)-D(x,y)
Figure BDA0001760136900000071
wherein d (x, y, i) is an ith frame original image in the monitored video, B (x, y, i) is a difference image between the ith frame image and the background template image in the monitored video, and T (x, y, i) is a binarized target image.
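A minimal sketch of the background difference and binarization; the 100 mm threshold is an assumed value, since the patent does not specify the segmentation threshold:

```python
import numpy as np

def segment_foreground(depth_frame, background, thresh=100.0):
    """Background difference B(x, y) = d(x, y) - D(x, y), binarized to 1
    where |B| exceeds the segmentation threshold, else 0. The 100 mm
    default is an assumed value, not taken from the patent."""
    diff = depth_frame.astype(np.float64) - background
    return (np.abs(diff) > thresh).astype(np.uint8)

background = np.full((2, 3), 3000.0)   # flat background model
frame = background.copy()
frame[0, 1] = 1700.0                   # one pixel 1.3 m nearer: a moving object
binary = segment_foreground(frame, background)
print(int(binary.sum()))  # 1
```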
S4: positioning and segmenting a target image to mark out regions which may be people;
and identifying moving targets with different distances and segmenting overlapped moving targets by utilizing the depth information hierarchical search. Then utilizing vertical and horizontal projection to position the moving target and divide a plurality of moving targets in the same depth layer, thereby accurately dividing each small region containing the moving object;
step S5, calculating the morphological density proportion of the moving object in different depth areas, and determining the pedestrian by combining a template matching method, wherein the specific method can be divided into two steps S51 and S52:
step S51: assuming that the detected dynamic object is a pedestrian as shown in fig. 3, according to experience, in a depth range of 2m to 6m, a region with an upper length of 15 pixel points is a human head region, such as a region 05 outlined by a red line in fig. 3.
The number of black dots in the area 05 is counted, and is the assumed head area of the person. And calculating the whole area of the area 05 to obtain the proportion of the head area of the person to the whole small area.
According to the empirical value, if the proportion is between 0.2 and 0.5, the first condition of pedestrian judgment is considered to be met, the second morphological density judgment needs to be carried out on the first condition, and the motion target is marked as a target A. If the proportion does not meet the range of 0.2-0.5, the pedestrian may wear a hat or hold an umbrella, and the like, at this time, the moving target is judged to temporarily fail to meet the pedestrian condition, template matching judgment is needed again, and the moving target is marked as a target B.
Step S52 performs a second determination on the region screened out after the first determination.
For moving object A, region 05 is divided width-wise into three equal blocks, each of area 15 × (width/3), shown as regions 051, 052 and 053 in fig. 3. The ratio of the number of black pixels in each of regions 051, 052 and 053 to the area of that region is computed and denoted z1, z2, z3. With thresholds t1 = 0.21, t2 = 0.75 and t3 = 0.21, the dynamic object is judged to be a pedestrian when ((z1 <= t1) or (z3 <= t3)) and (z2 >= t2) holds.
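A sketch of this second morphological-density judgment, using the thresholds t1 = 0.21, t2 = 0.75, t3 = 0.21 given above; the head mask below is synthetic:

```python
import numpy as np

def is_pedestrian(head_region, t1=0.21, t2=0.75, t3=0.21):
    """Split the 15-pixel-high head strip width-wise into three equal
    blocks, compute each block's foreground-pixel ratio z1, z2, z3, and
    accept when ((z1 <= t1) or (z3 <= t3)) and (z2 >= t2)."""
    thirds = np.array_split(head_region, 3, axis=1)
    z = [blk.sum() / blk.size for blk in thirds]
    return bool((z[0] <= t1 or z[2] <= t3) and z[1] >= t2)

# Synthetic head-like mask: dense middle third, nearly empty side thirds
head = np.zeros((15, 30), dtype=np.uint8)
head[:, 10:20] = 1      # middle block fully covered (z2 = 1.0)
head[7, 0] = 1          # one stray foreground pixel on the left
print(is_pedestrian(head))  # True
```

A mask that is uniformly filled fails the test, since both side ratios exceed t1 and t3; this is what rejects wide non-head shapes.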
For moving object B, the upper half of the object is cropped and judged by a human-shape template matching method. If the cropped upper half satisfies a certain shape proportion and the moving object is in a normal motion state, the dynamic object is judged to be a pedestrian.
S6: searching and matching the head region in the current frame according to the center coordinates of the head feature region of the pedestrian in the latest frame detected in S5 and the speed of the pedestrian calculated from the previous frames;
since the time interval between two adjacent frames is short and the moving state of the pedestrian between the two adjacent frames does not change too fast in the video image, it can be considered that the motion of the same pedestrian between the two adjacent frames is uniform, and the upper limit of the moving distance between the two frames of the head feature region center of the same pedestrian is recorded as Th.
Let the image sequence captured by the depth camera be P = (p_1, p_2, ..., p_n, ...). In the n-th frame image p_n, m targets s_{1,n}, s_{2,n}, ..., s_{m,n} are detected; the head center coordinate of moving target s_{i,n} (1 ≤ i ≤ m) is recorded as
S_{i,n} = (x_{i,n}, y_{i,n})
The (n-1)-th frame detects k targets m_{1,n-1}, m_{2,n-1}, ..., m_{k,n-1}; the head center coordinate of moving target m_{i,n-1} (1 ≤ i ≤ k) is recorded as
M_{i,n-1} = (x_{i,n-1}, y_{i,n-1})
Let
T_i = (x_{i,n-1}, y_{i,n-1}, v_x, v_y)
describe the motion situation of the i-th of the k targets detected in the (n-1)-th frame, where v_x, v_y represent the velocity of the i-th target in the (n-1)-th frame, calculated from the two positions at which the target point was detected.
The matching algorithm process is as follows:
(1) Predict the position of the target in the current frame according to the position, speed and other information of the center coordinates of the head region of the pedestrian target detected in the latest frame. Taking target m_{j,n-1} in the (n-1)-th frame (1 ≤ j ≤ k) as an example, with state
T_j = (x_{j,n-1}, y_{j,n-1}, v_x, v_y)
its predicted position in the n-th frame, S_p = (x_p, y_p), is:
x_p = x_{j,n-1} + v_x
y_p = y_{j,n-1} + v_y
(2) Calculate d = ||S_{i,n} - S_p||.
(3) For all targets T_j (j = 1, 2, ..., k) in the target chain, compute the measurement vector D = (d_1, d_2, ..., d_k) and obtain d_j = min(D). If d_j < Th, the matching is successful, s_{i,n} is determined to be the new position of T_j in the n-th frame, and (4) is executed; otherwise s_{i,n} is a new target and (5) is executed.
(4) Update T_j:
T_j = (x', y', v'_x, v'_y)
where (x', y') is the head center of s_{i,n}, and the velocities v'_x and v'_y are calculated from the center coordinates of s_{i,n} and the center coordinates of T_j before the update.
(5) Append a target T_{k+1} at the tail of the target chain:
T_{k+1} = (x', y', 0, 0)
where (x', y') is the head center of s_{i,n}; the initial velocity of the new target is set to 0.
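The matching steps above can be sketched as a nearest-neighbor update of a target chain. The dict-based state and the Th value of 30 pixels are assumptions made for illustration; the patent leaves Th and the data layout open:

```python
import math

Th = 30.0  # assumed upper bound (pixels) on per-frame head-center motion

def match_heads(targets, detections):
    """One step of the head-matching scheme: each target is a dict with
    position (x, y) and velocity (vx, vy). Predict (x+vx, y+vy), find the
    nearest prediction for each detection, match if the distance is below
    Th, otherwise append the detection as a new target with zero velocity."""
    for (dx, dy) in detections:
        best_j, best_d = None, float("inf")
        for j, t in enumerate(targets):
            xp, yp = t["x"] + t["vx"], t["y"] + t["vy"]   # step (1): predict
            d = math.hypot(dx - xp, dy - yp)              # step (2): distance
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d < Th:            # step (3): match
            t = targets[best_j]                           # step (4): update
            t["vx"], t["vy"] = dx - t["x"], dy - t["y"]
            t["x"], t["y"] = dx, dy
        else:                                             # step (5): new target
            targets.append({"x": dx, "y": dy, "vx": 0.0, "vy": 0.0})
    return targets

chain = [{"x": 100.0, "y": 50.0, "vx": 5.0, "vy": 0.0}]
chain = match_heads(chain, [(106.0, 51.0), (300.0, 200.0)])
print(len(chain))  # 2: one matched update, one new target
```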
S7: the number of people is counted according to the motion trajectories of the matched pedestrians. The counting rule is: if a person is detected moving indoors, the indoor head count is increased by one; if a person is detected moving outdoors, the indoor head count is decreased by one.
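The counting rule can be sketched as follows, assuming a horizontal door line at image row door_y and trajectories stored as (x, y) head centers; both the door-line representation and the "increasing y means indoors" convention are assumptions for illustration:

```python
def update_counts(trajectory, door_y, indoor_count):
    """Compare the first and last head-center y of a completed trajectory
    with the door line door_y: crossing inward increments the indoor
    count, crossing outward decrements it, anything else leaves it."""
    y_start, y_end = trajectory[0][1], trajectory[-1][1]
    if y_start < door_y <= y_end:     # entered (moved indoors)
        return indoor_count + 1
    if y_end < door_y <= y_start:     # left (moved outdoors)
        return indoor_count - 1
    return indoor_count

count = update_counts([(50, 10), (52, 40), (55, 80)], door_y=60, indoor_count=0)
print(count)  # 1
```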
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. A people counting method based on a depth camera is characterized by comprising the following steps:
s1: a depth camera machine vision platform is built, and a depth camera is used for acquiring a depth image of a monitored area in real time;
s2: acquiring a depth image of an application place in an open space by using a depth camera, and constructing a three-dimensional space background model by using a multi-frame image averaging method;
s3: obtaining a foreground image of the moving target by using a background difference method, and performing binarization processing on the foreground image to obtain a binary image of the moving target;
s4: identifying moving targets at different distances and segmenting overlapped moving targets by a layered search of the depth information; then locating the moving targets with vertical and horizontal projections and splitting multiple moving targets within the same depth layer, thereby accurately dividing out each small region containing a moving object and further segmenting occluded targets;
s5: according to the mapping relation between the depth and the height of the moving target, dividing areas with different depths are respectively processed, the morphological density proportion of the moving target in areas with different depths is calculated, and then a template matching method is combined to judge the pedestrian;
s6: searching and matching the head region in the current frame according to the center coordinates of the head region of the pedestrian of the latest frame detected in S5 and the speed of the pedestrian calculated from the previous frames;
s7: counting the number of people according to the motion trail of the matched people;
in the step S5, the morphological density proportion is obtained as follows: the topmost strip of the pixel region within the given depth range is taken as the head region, the number of black (foreground) pixels within that region is counted as the head area, the ratio of the head area to the area of the region is calculated, and whether the target is a pedestrian is judged by comparing this ratio with that of a normal pedestrian;
in the step S6, the search and matching of the pedestrian head feature region in the current frame are performed in conjunction with the center coordinates of the pedestrian head feature region of the latest frame detected in S5 and the speed of the pedestrian calculated from the previous frames, which includes the following calculation process: in a video image, the time interval between two adjacent frames is short and the motion state of a pedestrian between the two adjacent frames does not change too fast, so the motion of the same pedestrian between two adjacent frames can be considered to be uniform, and the upper limit of the motion distance of the center of the head feature region of the same pedestrian between two frames is recorded as Th; let the image sequence captured by the depth camera be P = (p_1, p_2, ..., p_n, ...); in the n-th frame image p_n, m targets s_{1,n}, s_{2,n}, ..., s_{m,n} are detected; the head center coordinate of moving target s_{i,n}, 1 ≤ i ≤ m, is recorded as S_{i,n} = (x_{i,n}, y_{i,n}); the (n-1)-th frame detects k targets m_{1,n-1}, m_{2,n-1}, ..., m_{k,n-1}, and the head center coordinate of moving target m_{i,n-1}, 1 ≤ i ≤ k, is recorded as M_{i,n-1} = (x_{i,n-1}, y_{i,n-1}); let T_i = (x_{i,n-1}, y_{i,n-1}, v_x, v_y) describe the motion situation of the k detected targets in the (n-1)-th frame, where v_x, v_y represent the velocity of the i-th target in the (n-1)-th frame, calculated from the two positions at which the target point was detected;
the matching algorithm process is as follows:
(1) predicting the position of the target in the current frame according to the position and speed information of the center coordinates of the head region of the pedestrian target detected in the latest frame; taking target m_{j,n-1} in the (n-1)-th frame as an example, 1 ≤ j ≤ k, with state T_j = (x_{j,n-1}, y_{j,n-1}, v_x, v_y), its predicted position in the n-th frame, S_p = (x_p, y_p), is:
x_p = x_{j,n-1} + v_x
y_p = y_{j,n-1} + v_y
(2) calculating d = ||S_{i,n} - S_p||;
(3) for all targets T_j, j = 1, 2, ..., k, in the target chain, calculating the measurement vector D = (d_1, d_2, ..., d_k) of the targets and obtaining d_j = min(D); if d_j < Th, the matching is successful, s_{i,n} is determined to be the new position of T_j in the n-th frame, and (4) is executed; otherwise s_{i,n} is a new target and (5) is executed;
(4) updating T_j = (x', y', v'_x, v'_y), where (x', y') is the head center of s_{i,n}, and the velocities v'_x and v'_y are calculated from the center coordinates of s_{i,n} and the center coordinates of T_j before the update;
(5) adding target T_{k+1} = (x', y', 0, 0) at the tail of the target chain, where (x', y') is the head center of s_{i,n}, and the initial velocity of the new target is set to 0.
2. The depth camera-based people counting method of claim 1, wherein: the step S4 includes the steps of:
step S41: layering the monitoring area according to the distance, and searching moving targets located at different depth layers;
step S42: respectively carrying out horizontal projection and vertical projection on the moving target detected in each depth layer to obtain the upper and lower boundaries of the moving target of each depth layer; and according to the peak and valley points, segmenting a plurality of moving targets in the same distance layer.
3. The depth camera-based people counting method of claim 1, wherein: the algorithm for judging the pedestrian in the step S5 includes the following steps:
step S51: assuming that the detected moving objects are pedestrians, respectively processing the moving objects with different depths according to the mapping relation between the depth of the moving objects and the height in the image: calculating the morphological density proportion of the moving target in different depth areas, and performing first pedestrian judgment on the moving target according to the proportion result; recording a moving target meeting the morphological density proportion as a target A, and recording a moving target not meeting the morphological density proportion as a target B;
step S52: performing second morphological density judgment on the moving object A in the step S51, and finally judging the moving object A to be a pedestrian if the morphological density proportion meets the requirement; and judging the pedestrian by adopting a template matching mode for the moving target B, and finally judging the pedestrian if the template matching result meets the requirement.
CN201810903897.7A 2018-08-09 2018-08-09 People counting method based on depth camera Active CN109344690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810903897.7A CN109344690B (en) 2018-08-09 2018-08-09 People counting method based on depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810903897.7A CN109344690B (en) 2018-08-09 2018-08-09 People counting method based on depth camera

Publications (2)

Publication Number Publication Date
CN109344690A CN109344690A (en) 2019-02-15
CN109344690B true CN109344690B (en) 2022-09-23

Family

ID=65291465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810903897.7A Active CN109344690B (en) 2018-08-09 2018-08-09 People counting method based on depth camera

Country Status (1)

Country Link
CN (1) CN109344690B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674672B (en) * 2019-07-10 2020-10-27 Beijing Deepexi Technology Co., Ltd. Multi-scene people counting method based on TOF camera
CN110930411B (en) * 2019-11-20 2023-04-28 Zhejiang Guangpo Intelligent Technology Co., Ltd. Human body segmentation method and system based on depth camera
CN111310567B (en) * 2020-01-16 2023-06-23 China Construction Bank Corporation Face recognition method and device for multi-person scenes
CN111881843B (en) * 2020-07-30 2023-12-29 Henan Tianmai Technology Co., Ltd. Taxi passenger counting method based on face detection
CN112509184A (en) * 2020-12-02 2021-03-16 Hainan Huasheng Ruibo Technology Co., Ltd. Method, system and storage medium for monitoring the entry and exit of specific groups of people
CN112819835A (en) * 2021-01-21 2021-05-18 Boyun Vision Technology (Qingdao) Co., Ltd. Passenger flow counting method based on 3D depth video
CN113034544A (en) * 2021-03-19 2021-06-25 Orbbec Technology Group Co., Ltd. People flow analysis method and device based on depth camera
CN113965701B (en) * 2021-09-10 2023-11-14 Suzhou Leigete Intelligent Equipment Co., Ltd. Multi-target spatial coordinate binding method based on two depth cameras
CN114913550A (en) * 2022-05-30 2022-08-16 Harbin Institute of Technology (Shenzhen) Method and system for identifying the wounded in casualty-gathering scenes based on deep learning

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872422A (en) * 2010-02-10 2010-10-27 Hangzhou Hikvision Software Co., Ltd. People flow counting method and system capable of precisely identifying targets
KR101448392B1 (en) * 2013-06-21 2014-10-13 Hoseo University Industry-Academic Cooperation Foundation People counting method
CN104517095A (en) * 2013-10-08 2015-04-15 Nanjing University of Science and Technology Head segmentation method based on depth image
CN104751491A (en) * 2015-04-10 2015-07-01 Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences Method and device for tracking crowds and counting pedestrian flow
CN104835147A (en) * 2015-04-15 2015-08-12 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Method for detecting crowded people flow in real time based on three-dimensional depth map data
KR20160119597A (en) * 2015-04-06 2016-10-14 KT Corporation Method and device for detecting humans using multiple depth cameras
CN106548163A (en) * 2016-11-25 2017-03-29 Qingdao University Passenger flow counting method based on TOF depth camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012023639A1 (en) * 2010-08-17 2012-02-23 LG Electronics Inc. Method for counting objects and apparatus using a plurality of sensors
US9338409B2 (en) * 2012-01-17 2016-05-10 Avigilon Fortress Corporation System and method for home health care monitoring
US10009579B2 (en) * 2012-11-21 2018-06-26 Pelco, Inc. Method and system for counting people using depth sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于RGB-D相机的实时人数统计方法";张华 等;《计算机工程与应用》;20140918(第23期);第156-162页 *
"基于SVM的公交人数统计方法研究";张文涛 等;《中国科技论文》;20180123;第13卷(第1期);第143-148页 *

Also Published As

Publication number Publication date
CN109344690A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109344690B (en) People counting method based on depth camera
CN104751491B (en) Crowd tracking and people flow counting method and device
CN108320510B (en) Traffic information statistical method and system based on aerial video shot by unmanned aerial vehicle
Hu et al. Principal axis-based correspondence between multiple cameras for people tracking
Chen et al. A hierarchical model incorporating segmented regions and pixel descriptors for video background subtraction
CN109076190B (en) Apparatus and method for detecting abnormal condition
CN105574501B (en) Pedestrian flow video detection and analysis system
CN104978567B (en) Vehicle detection method based on scene classification
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN106203513B (en) Counting method based on pedestrian head-and-shoulder multi-target detection and tracking
CN108596129A (en) Vehicle line-crossing detection method based on intelligent video analysis technology
JP2019505866A (en) Passerby head identification method and system
CN109325404A (en) People counting method for public transport scenes
CN106845325B (en) Information detection method and device
CN106570490B (en) Real-time pedestrian tracking method based on fast clustering
CN109614948B (en) Abnormal behavior detection method, device, equipment and storage medium
CN111209781B (en) Method and device for counting indoor people
Jiang et al. Multiple pedestrian tracking using colour and motion models
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
TWI415032B (en) Object tracking method
CN109919053A (en) Deep-learning vehicle parking detection method based on surveillance video
CN103413149B (en) Method for detecting and identifying static targets in complex backgrounds
CN109145696B (en) Fall detection method and system for the elderly based on deep learning
CN110111362A (en) Target tracking method based on local feature block similarity matching
CN104063692A (en) Method and system for pedestrian positioning detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant