CN109344690A - People counting method based on a depth camera - Google Patents

People counting method based on a depth camera

Info

Publication number
CN109344690A
Authority
CN
China
Prior art keywords
target
depth
moving target
pedestrian
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810903897.7A
Other languages
Chinese (zh)
Other versions
CN109344690B (en)
Inventor
王海宽
戚谢鑫
孙浩翔
李仲秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qingzhi Intelligent Technology Co Ltd
Original Assignee
Shanghai Qingzhi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qingzhi Intelligent Technology Co Ltd filed Critical Shanghai Qingzhi Intelligent Technology Co Ltd
Priority to CN201810903897.7A priority Critical patent/CN109344690B/en
Publication of CN109344690A publication Critical patent/CN109344690A/en
Application granted granted Critical
Publication of CN109344690B publication Critical patent/CN109344690B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion

Abstract

The invention discloses a people counting method based on a depth camera. First, a depth camera acquires depth images of the monitored area in real time and a three-dimensional spatial background model is constructed. A layered search over the depth information then identifies moving targets at different distances and separates overlapping moving targets; vertical and horizontal projections are used to locate the moving targets and to split apart multiple moving targets lying in the same depth layer. Next, according to the mapping between a moving target's depth and its height, the morphological density ratio of the moving targets in each depth zone is calculated to determine which targets are pedestrians. Finally, using the centre coordinates of the pedestrian head feature regions detected in the most recent frame and the pedestrian velocities, the head regions in the current frame are searched and matched, and people are counted. Compared with conventional people counting methods, the present invention is more stable and accurate, is not disturbed by illumination changes or the shadows they produce, and can effectively improve the efficiency and accuracy of real-time people counting in indoor and outdoor public areas.

Description

People counting method based on a depth camera
Technical field
The present invention relates to the field of real-time video image processing and recognition, and in particular to a people counting method based on a depth camera.
Background technique
Today, with the increasing level of information-based management, real-time headcount estimation, pedestrian-flow analysis and crowding estimation for places with heavy pedestrian flow, such as supermarkets, shopping malls, stations and banks, have become effective ways to provide first-hand data for public-area management. Current passenger-flow monitoring mainly uses three types of technology: infrared detection, pressure sensing and image processing.
(1) Infrared detection is relatively mature and widely used in places with frequent movement of people and human-assisted monitoring, such as stations, ports, shops and bookstores. It can make correct and effective judgments about pedestrians separated by a certain distance, but it performs poorly when the flow of people is crowded front-to-back or when people follow one another closely.
(2) Pressure detection perceives the presence of a person by detecting the weight of the human body. For passenger-flow detection, what is counted is the number of people, and the common practice is to use pedal pressure sensors: when a passenger steps on the pedal, the strain gauge inside the sensor deforms, which changes the current in the sensor; the current change is sampled to obtain the passenger-flow statistics. The premise of pressure monitoring is that every person must step on each pedal, so cases where several feet step on the pedal at the same time, or where one person is already standing on the pedal while another steps onto it, are difficult to detect effectively.
(3) Among image-processing methods, people counting based on two-dimensional image processing is currently the most widely applied. Chinese patent CN102855466A, entitled "A people counting method based on video image processing", discloses a people counting method based on two-dimensional video images. However, that method is easily disturbed by illumination changes and the shadows produced by illumination, so the pedestrian sub-regions are divided inaccurately and the counting result is affected; it is also strongly affected by occlusion and deformation of objects. Such counting techniques therefore cannot meet the demand for counting the continuously changing flow of people in the varied environments of public areas.
In view of this, it is necessary to improve the prior-art people counting methods for public areas to solve the above problems.
Summary of the invention
To address the problems of the background art, the present invention provides a people counting method based on a depth camera, which solves the technical problems of prior-art people counting methods for public areas, namely poor robustness to the application environment and low accuracy of moving-target tracking.
To achieve the above object, the invention provides the following technical scheme: a people counting method based on a depth camera, the method comprising the following steps:
S1: build a depth-camera machine-vision platform and use the depth camera to acquire depth images of the monitored area in real time;
S2: acquire depth images with the depth camera while the application site is empty, and construct a three-dimensional spatial background model using the multi-frame averaging method;
S3: use background subtraction to obtain the foreground image of the moving targets, and binarize the foreground image to obtain a binary image of the moving targets;
S4: use a layered search over the depth information to identify moving targets at different distances and to separate overlapping moving targets; then use vertical and horizontal projections to locate the moving targets and to split apart multiple moving targets lying in the same depth layer, thereby accurately delimiting each small region that contains a moving object and further segmenting occluded targets;
S5: according to the mapping between a moving target's depth and its height, process the regions of different depths separately, calculate the morphological density ratio of the moving targets in each depth zone, and determine pedestrians in combination with template matching;
S6: using the centre coordinates of the pedestrian head regions detected in the most recent frame in S5 and the pedestrian velocities calculated from the previous frames, search and match the head regions in the current frame;
S7: count people according to the motion trajectories of the matched persons.
As a preferred technical solution of the present invention, step S4 comprises the following steps:
Step S41: layer the monitored region according to distance and search out the moving targets located in the different depth layers;
Step S42: perform horizontal projection and vertical projection on the moving targets detected in each depth layer to obtain the upper and lower boundaries of the moving targets in that layer, and split apart multiple moving targets present in the same layer according to the peak and trough points.
As a preferred technical solution of the present invention, the algorithm for judging pedestrians in step S5 comprises the following steps:
Step S51: assume that a detected moving target is a pedestrian; according to the mapping between the moving target's depth and its height in the image, process the moving targets of different depths separately: calculate the morphological density ratio of the moving targets in each depth zone and make a first pedestrian judgment on each moving target according to the ratio; a moving target that satisfies the morphological density ratio is denoted target A, and a moving target that does not satisfy it is denoted target B;
Step S52: make a second morphological density judgment on moving target A from step S51; if the morphological density ratio meets the requirement, the target is finally judged to be a pedestrian; make a pedestrian judgment on moving target B by template matching; if the template-matching result meets the requirement, that target is also finally judged to be a pedestrian.
As a preferred technical solution of the present invention, in step S6: combining the centre coordinates of the pedestrian head feature regions detected in the most recent frame in S5 with the pedestrian velocities calculated from the previous frames, the pedestrian head feature regions in the current frame are searched and matched, which comprises the following calculation process:
In a video sequence the time interval between two adjacent frames is very short and a pedestrian's motion state changes little between adjacent frames, so the motion of the same pedestrian between two adjacent frames can be regarded as uniform; the upper limit of the distance that the centre of the same pedestrian's head feature region can move between two frames is denoted Th;
Let the image sequence captured by the depth camera be P = (p_1, p_2, ..., p_n, ...), where m targets s_{1,n}, s_{2,n}, ..., s_{m,n} are detected in the n-th frame image p_n. Denote the head centre coordinate of moving target s_{i,n} (1 ≤ i ≤ m) as S_{i,n} = (x_{i,n}, y_{i,n}). In frame n-1, k targets m_{1,n-1}, m_{2,n-1}, ..., m_{k,n-1} were detected, and the head centre coordinate of moving target m_{i,n-1} (1 ≤ i ≤ k) is (x_{i,n-1}, y_{i,n-1}). Let T_i = (x_{i,n-1}, y_{i,n-1}, v_x, v_y) describe the motion state of the i-th of the k targets detected in frame n-1, where (v_x, v_y) is the velocity of the i-th target, calculated from the positions at which that target was detected in two successive frames;
The matching algorithm proceeds as follows:
(1) Position prediction: the position of a target in the current frame is predicted from the centre coordinate and velocity of the head region of the pedestrian target detected in the most recent frame. Taking target m_{j,n-1} in frame n-1 (1 ≤ j ≤ k) as an example, with state description T_j = (x_{j,n-1}, y_{j,n-1}, v_x, v_y), its predicted position in frame n is S_p = (x_{j,n-1} + v_x, y_{j,n-1} + v_y);
(2) Compute d = ||S_{i,n} - S_p||;
(3) For all targets T_j (j = 1, 2, ..., k) in the target chain, compute the distance measures D = (d_1, d_2, ..., d_k) and find d_j = min(D); if d_j < Th, the match succeeds, s_{i,n} is determined to be the new position of T_j in frame n, and (4) is executed; otherwise s_{i,n} is a new target and (5) is executed;
(4) Update T_j: T_j = (x', y', v'_x, v'_y), where (x', y') is the centre coordinate of s_{i,n} and the velocities v'_x and v'_y are calculated from the centre coordinate of s_{i,n} and the centre coordinate of T_j before the update;
(5) Append a new target T_{k+1} to the end of the target chain: T_{k+1} = (x', y', v'_x, v'_y), where (x', y') is the centre coordinate of s_{i,n} and the velocities v'_x and v'_y are the initial velocity of the new target, set to 0.
Compared with the prior art, the beneficial effects of the present invention are: the located moving objects are segmented by projecting the depth information and are screened according to the proportion characteristics of the human head, and the objects that pass the screening become the targets to be tracked. Compared with traditional counting methods based on two-dimensional images, the method is more stable and accurate, is not disturbed by illumination changes or the shadows produced by illumination, and can effectively improve the efficiency and accuracy of real-time people counting in public areas.
Detailed description of the invention
Fig. 1 is a flow chart of the people counting method based on a depth camera of the present invention;
Fig. 2 is a schematic diagram of the installation of the depth camera and the acquisition of the spatial model in the people counting method based on a depth camera of the present invention;
Fig. 3 is a schematic diagram of pedestrian discrimination in the people counting method based on a depth camera of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flow diagram of a specific embodiment of the people counting method based on a depth camera of the present invention.
In the present embodiment, the people counting method comprises the following steps:
S1: build a depth-camera machine-vision platform and use the depth camera to acquire depth images of the monitored area in real time;
Referring to Fig. 2, the people counting method based on a depth camera of the present invention shoots with the camera tilted downward by 30 degrees and is suitable for both outdoor and indoor situations. In the present embodiment, step S1 is specifically: the video image of the monitored area 03 is acquired by the depth camera 01 as the input image; the monitored area is located obliquely below the camera; 02 is the entrance/exit, and pedestrians pass through the doorway in the direction indicated by 04.
Specifically, the depth camera is installed directly above and near the entrance, so that the monitored area captured by the camera completely covers the whole region of the entrance.
S2: acquire depth images with the depth camera while the application site is empty, and construct a three-dimensional spatial background model using the multi-frame averaging method;
The initial three-dimensional spatial background model D(x, y) is calculated with the multi-frame averaging method by the following formula:
D(x, y) = (1/N) Σ_{k=1}^{N} d_k(x, y, k)
where d_k(x, y, k) denotes the depth value of the k-th frame image at the point (x, y), and N is the number of frames averaged. In the present embodiment, N = 3.
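As a rough illustration of the multi-frame averaging in S2 (a minimal sketch, not the patented implementation; the array shapes and the use of numpy are assumptions), the background model could be computed as follows:

```python
import numpy as np

def build_background_model(depth_frames):
    """Average N depth frames of an empty scene into a background model D(x, y).

    depth_frames: list of 2-D numpy arrays of equal shape, captured while the
    monitored area is empty (the embodiment uses N = 3 frames).
    """
    stack = np.stack([f.astype(np.float32) for f in depth_frames], axis=0)
    return stack.mean(axis=0)  # D(x, y) = (1/N) * sum over k of d_k(x, y)
```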
S3: use background subtraction to obtain the difference image of the moving targets, and binarize the difference image to obtain the binary image of the moving targets;
Background subtraction and the binarization of the difference image are expressed by the following relation:
B(x, y, i) = d(x, y, i) - D(x, y)
where d(x, y, i) is the i-th original frame of the surveillance video, B(x, y, i) is the difference image between the i-th frame of the surveillance video and the background template image, and T(x, y, i) is the target image obtained by binarizing (thresholding) B(x, y, i).
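A minimal sketch of the background subtraction and binarization in S3 is shown below; the patent does not state a binarization threshold, so the diff_threshold value here is an illustrative assumption:

```python
import numpy as np

def foreground_mask(depth_frame, background, diff_threshold=100.0):
    """Return the binary target image T(x, y, i) of moving objects.

    depth_frame    : current depth image d(x, y, i)
    background     : background model D(x, y) from build_background_model()
    diff_threshold : minimum |d - D| (in sensor depth units) treated as foreground;
                     this value is illustrative, the patent does not state one.
    """
    diff = depth_frame.astype(np.float32) - background   # B(x, y, i) = d - D
    return (np.abs(diff) > diff_threshold).astype(np.uint8)  # 1 marks moving-target pixels
```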
S4: locate and segment the target image, delimiting the regions that may contain people;
Using a layered search over the depth information, moving targets at different distances are identified and overlapping moving targets are separated. Vertical and horizontal projections are then used to locate the moving targets and to split apart multiple moving targets lying in the same depth layer, thereby accurately delimiting each small region that contains a moving object;
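The sketch below illustrates the idea behind the layered search and projection-based splitting in S4; the depth-layer boundaries and the minimum segment width are assumed values, and for brevity only the column (vertical) projection is used to find the trough points between targets:

```python
import numpy as np

def split_targets_by_depth_and_projection(binary_mask, depth_frame,
                                          layer_edges=(2000, 3000, 4000, 5000, 6000),
                                          min_width=10):
    """Return (layer_index, col_start, col_end) tuples for candidate target regions.

    binary_mask : T(x, y, i) from foreground_mask()
    depth_frame : current depth image in millimetres
    layer_edges : depth-layer boundaries in mm (assumed; the patent only mentions
                  a working range of roughly 2 m to 6 m)
    min_width   : minimum segment width in pixels to keep (assumed)
    """
    regions = []
    for li in range(len(layer_edges) - 1):
        near, far = layer_edges[li], layer_edges[li + 1]
        layer = (binary_mask > 0) & (depth_frame >= near) & (depth_frame < far)
        col_proj = layer.sum(axis=0)                     # vertical projection of the layer
        in_seg, start = False, 0
        for c, v in enumerate(np.append(col_proj, 0)):   # trailing zero closes the last segment
            if v > 0 and not in_seg:
                in_seg, start = True, c
            elif v == 0 and in_seg:                      # projection trough: segment boundary
                in_seg = False
                if c - start >= min_width:
                    regions.append((li, start, c))
    return regions
```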
Step S5: calculate the morphological density ratio of the moving targets in each depth zone and determine pedestrians in combination with template matching; the specific method can be divided into two main steps, S51 and S52:
Step S51: assume that the detected moving object is a pedestrian, as shown in Fig. 3. Empirically, within the depth range of 2 m to 6 m, the strip of 15 pixel rows at the top of the target region is the head region of a person, i.e. region 05 outlined by the red line in Fig. 3.
Count the number of black (foreground) points in region 05; this is taken as the head area of the assumed person. Then calculate the total area of region 05 and obtain the ratio of the head area to the total area of this small region.
Based on empirical values, if the ratio lies between 0.2 and 0.5, the first condition for the pedestrian judgment is considered satisfied and a second morphological density judgment is required; the moving target here is denoted target A. If the ratio does not lie between 0.2 and 0.5, the pedestrian may, for example, be wearing a hat or holding an umbrella; in that case the moving target is judged not to meet the pedestrian condition for the moment and a further template-matching judgment is needed; the moving target here is denoted target B.
Step S52: a second judgment is made on the regions screened out in the first step.
For moving target A, region 05 is divided into three blocks of equal width, each block having an area of 15 × (width / 3), e.g. regions 051, 052 and 053 in Fig. 3. The numbers of black points in regions 051, 052 and 053 are counted, and their ratios to the areas of regions 051, 052 and 053 are denoted z1, z2 and z3 respectively. Thresholds t1 = 0.21, t2 = 0.75 and t3 = 0.21 are set; when ((z1 ≤ t1) || (z3 ≤ t3)) && (z2 ≥ t2) is satisfied, the moving object is determined to be a pedestrian.
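A minimal sketch of the two density checks of S51/S52, using the 0.2 to 0.5 ratio band and the thresholds t1, t2, t3 stated above; treating the black points as the foreground pixels of the binary head region is an interpretation of the text, not something the patent spells out:

```python
import numpy as np

def first_density_check(head_region):
    """S51: ratio of foreground ('black') points to the area of the 15-row head region 05."""
    ratio = head_region.sum() / head_region.size
    return 0.2 <= ratio <= 0.5, ratio

def second_density_check(head_region, t1=0.21, t2=0.75, t3=0.21):
    """S52 for target A: split region 05 into equal-width blocks 051/052/053 and
    test ((z1 <= t1) or (z3 <= t3)) and (z2 >= t2)."""
    blocks = np.array_split(head_region, 3, axis=1)
    z1, z2, z3 = (b.sum() / b.size for b in blocks)
    return ((z1 <= t1) or (z3 <= t3)) and (z2 >= t2)
```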
For moving object B, its upper half is cropped and judged with a humanoid template-matching method. If the cropped upper half conforms to a certain shape proportion and the moving object is in a normal walking posture, the moving object is judged to be a pedestrian.
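The patent does not specify the humanoid template or the similarity measure used for target B; one possible sketch, in which the head-shoulder template, the resizing step and the score threshold are all assumptions, might look like this:

```python
import cv2
import numpy as np

def template_check(target_mask, template_mask, score_threshold=0.6):
    """S52 for target B: compare the upper half of the target's binary mask with a
    humanoid (head-shoulder) template. The template, the resizing step and the
    0.6 threshold are illustrative assumptions, not values from the patent."""
    upper = target_mask[: target_mask.shape[0] // 2].astype(np.float32)
    tmpl = cv2.resize(template_mask.astype(np.float32),
                      (upper.shape[1], upper.shape[0]))
    score = cv2.matchTemplate(upper, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= score_threshold
```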
S6: using the centre coordinates of the pedestrian head feature regions detected in the most recent frame in S5 and the pedestrian velocities calculated from the previous frames, search and match the head regions in the current frame;
Because in a video sequence the time interval between two adjacent frames is very short and a pedestrian's motion state changes little between adjacent frames, the motion of the same pedestrian between two adjacent frames can be regarded as uniform; the upper limit of the distance that the centre of the same pedestrian's head feature region can move between two frames is denoted Th.
Let the image sequence captured by the depth camera be P = (p_1, p_2, ..., p_n, ...), where m targets s_{1,n}, s_{2,n}, ..., s_{m,n} are detected in the n-th frame image p_n. Denote the head centre coordinate of moving target s_{i,n} (1 ≤ i ≤ m) as S_{i,n} = (x_{i,n}, y_{i,n}). In frame n-1, k targets m_{1,n-1}, m_{2,n-1}, ..., m_{k,n-1} were detected, and the head centre coordinate of moving target m_{i,n-1} (1 ≤ i ≤ k) is (x_{i,n-1}, y_{i,n-1}). Let T_i = (x_{i,n-1}, y_{i,n-1}, v_x, v_y) describe the motion state of the i-th of the k targets detected in frame n-1, where (v_x, v_y) is the velocity of the i-th target, calculated from the positions at which that target was detected in two successive frames.
The matching algorithm proceeds as follows:
(1) Position prediction: the position of a target in the current frame is predicted from the centre coordinate and velocity of the head region of the pedestrian target detected in the most recent frame. Taking target m_{j,n-1} in frame n-1 (1 ≤ j ≤ k) as an example, with state description T_j = (x_{j,n-1}, y_{j,n-1}, v_x, v_y), its predicted position in frame n is S_p = (x_{j,n-1} + v_x, y_{j,n-1} + v_y).
(2) Compute d = ||S_{i,n} - S_p||.
(3) For all targets T_j (j = 1, 2, ..., k) in the target chain, compute the distance measures D = (d_1, d_2, ..., d_k) and find d_j = min(D); if d_j < Th, the match succeeds, s_{i,n} is determined to be the new position of T_j in frame n, and (4) is executed; otherwise s_{i,n} is a new target and (5) is executed.
(4) Update T_j: T_j = (x', y', v'_x, v'_y), where (x', y') is the centre coordinate of s_{i,n} and the velocities v'_x and v'_y are calculated from the centre coordinate of s_{i,n} and the centre coordinate of T_j before the update.
(5) Append a new target T_{k+1} to the end of the target chain: T_{k+1} = (x', y', v'_x, v'_y), where (x', y') is the centre coordinate of s_{i,n} and the velocities v'_x and v'_y are the initial velocity of the new target, set to 0.
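A compact sketch of the S6 matching loop described in steps (1) to (5) above; the Track structure and the numeric value of the movement limit Th are illustrative assumptions:

```python
import math

class Track:
    """One entry T_j of the target chain: head centre (x, y) and velocity (vx, vy)."""
    def __init__(self, x, y, vx=0.0, vy=0.0):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def match_heads(tracks, detections, move_limit_th=40.0):
    """Match head centres S_{i,n} detected in frame n against tracks from frame n-1.

    tracks        : list of Track objects carried over from frame n-1
    detections    : list of (x, y) head-centre coordinates from frame n
    move_limit_th : Th, the per-frame upper limit of head movement (assumed value)
    """
    for (x, y) in detections:
        best_j, best_d = -1, float("inf")
        for j, t in enumerate(tracks):
            px, py = t.x + t.vx, t.y + t.vy            # (1) predicted position S_p
            d = math.hypot(x - px, y - py)             # (2) d = ||S_{i,n} - S_p||
            if d < best_d:
                best_j, best_d = j, d                  # (3) keep d_j = min(D)
        if best_j >= 0 and best_d < move_limit_th:     # match succeeded
            t = tracks[best_j]
            t.vx, t.vy = x - t.x, y - t.y              # (4) velocity from the old centre
            t.x, t.y = x, y                            #     and move T_j to the new centre
        else:
            tracks.append(Track(x, y))                 # (5) append new target, zero velocity
    return tracks
```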
S7: count people according to the motion trajectories of the matched persons. The method of counting people according to the persons and their trajectories is: if a person is detected moving into the room, the occupancy count is incremented by one; if a person is detected moving out of the room, the occupancy count is decremented by one.
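For completeness, a sketch of the S7 counting rule applied to one finished trajectory; the door-line position and the direction convention are assumptions about the camera layout, not stated in the patent:

```python
def update_occupancy(occupancy, track_start_y, track_end_y, door_line_y=240):
    """Apply the S7 rule to one finished trajectory: +1 when a person enters, -1 when
    a person leaves. The door-line position and the 'increasing y means entering'
    convention are illustrative assumptions about the camera layout."""
    if track_start_y < door_line_y <= track_end_y:      # trajectory crossed inward
        return occupancy + 1
    if track_start_y >= door_line_y > track_end_y:      # trajectory crossed outward
        return occupancy - 1
    return occupancy
```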
The above is only a preferred embodiment of the present invention and is not intended to limit the invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (4)

1. A people counting method based on a depth camera, characterized in that the method comprises the following steps:
S1: building a depth-camera machine-vision platform and using the depth camera to acquire depth images of the monitored area in real time;
S2: acquiring depth images with the depth camera while the application site is empty, and constructing a three-dimensional spatial background model using the multi-frame averaging method;
S3: using background subtraction to obtain the foreground image of the moving targets, and binarizing the foreground image to obtain a binary image of the moving targets;
S4: using a layered search over the depth information to identify moving targets at different distances and to separate overlapping moving targets; then using vertical and horizontal projections to locate the moving targets and to split apart multiple moving targets lying in the same depth layer, thereby accurately delimiting each small region that contains a moving object and further segmenting occluded targets;
S5: according to the mapping between a moving target's depth and its height, processing the regions of different depths separately, calculating the morphological density ratio of the moving targets in each depth zone, and determining pedestrians in combination with template matching;
S6: using the centre coordinates of the pedestrian head regions detected in the most recent frame in S5 and the pedestrian velocities calculated from the previous frames, searching and matching the head regions in the current frame;
S7: counting people according to the motion trajectories of the matched persons.
2. The people counting method based on a depth camera according to claim 1, characterized in that step S4 comprises the following steps:
Step S41: layering the monitored region according to distance and searching out the moving targets located in the different depth layers;
Step S42: performing horizontal projection and vertical projection on the moving targets detected in each depth layer to obtain the upper and lower boundaries of the moving targets in each layer, and splitting apart multiple moving targets present in the same layer according to the peak and trough points.
3. The people counting method based on a depth camera according to claim 1, characterized in that the algorithm for judging pedestrians in step S5 comprises the following steps:
Step S51: assuming that a detected moving target is a pedestrian; according to the mapping between the moving target's depth and its height in the image, processing the moving targets of different depths separately: calculating the morphological density ratio of the moving targets in each depth zone and making a first pedestrian judgment on each moving target according to the ratio; a moving target that satisfies the morphological density ratio is denoted target A, and a moving target that does not satisfy it is denoted target B;
Step S52: making a second morphological density judgment on moving target A from step S51; if the morphological density ratio meets the requirement, the target is finally judged to be a pedestrian; making a pedestrian judgment on moving target B by template matching; if the template-matching result meets the requirement, that target is also finally judged to be a pedestrian.
4. The people counting method based on a depth camera according to claim 1, characterized in that in step S6: the centre coordinates of the pedestrian head feature regions detected in the most recent frame in S5 are combined with the pedestrian velocities calculated from the previous frames, and the pedestrian head feature regions in the current frame are searched and matched, comprising the following calculation process:
in a video sequence the time interval between two adjacent frames is very short and a pedestrian's motion state changes little between adjacent frames, so the motion of the same pedestrian between two adjacent frames can be regarded as uniform; the upper limit of the distance that the centre of the same pedestrian's head feature region can move between two frames is denoted Th;
let the image sequence captured by the depth camera be P = (p_1, p_2, ..., p_n, ...), where m targets s_{1,n}, s_{2,n}, ..., s_{m,n} are detected in the n-th frame image p_n; denote the head centre coordinate of moving target s_{i,n} (1 ≤ i ≤ m) as S_{i,n} = (x_{i,n}, y_{i,n}); in frame n-1, k targets m_{1,n-1}, m_{2,n-1}, ..., m_{k,n-1} are detected, and the head centre coordinate of moving target m_{i,n-1} (1 ≤ i ≤ k) is (x_{i,n-1}, y_{i,n-1}); let T_i = (x_{i,n-1}, y_{i,n-1}, v_x, v_y) describe the motion state of the i-th of the k targets detected in frame n-1, where (v_x, v_y) is the velocity of the i-th target, calculated from the positions at which that target was detected in two successive frames;
the matching algorithm proceeds as follows:
(1) position prediction: the position of a target in the current frame is predicted from the centre coordinate and velocity of the head region of the pedestrian target detected in the most recent frame; taking target m_{j,n-1} in frame n-1 (1 ≤ j ≤ k) as an example, with state description T_j = (x_{j,n-1}, y_{j,n-1}, v_x, v_y), its predicted position in frame n is S_p = (x_{j,n-1} + v_x, y_{j,n-1} + v_y);
(2) compute d = ||S_{i,n} - S_p||;
(3) for all targets T_j (j = 1, 2, ..., k) in the target chain, compute the distance measures D = (d_1, d_2, ..., d_k) and find d_j = min(D); if d_j < Th, the match succeeds, s_{i,n} is determined to be the new position of T_j in frame n, and (4) is executed; otherwise s_{i,n} is a new target and (5) is executed;
(4) update T_j: T_j = (x', y', v'_x, v'_y), where (x', y') is the centre coordinate of s_{i,n} and the velocities v'_x and v'_y are calculated from the centre coordinate of s_{i,n} and the centre coordinate of T_j before the update;
(5) append a new target T_{k+1} to the end of the target chain: T_{k+1} = (x', y', v'_x, v'_y), where (x', y') is the centre coordinate of s_{i,n} and the velocities v'_x and v'_y are the initial velocity of the new target, set to 0.
CN201810903897.7A 2018-08-09 2018-08-09 People counting method based on depth camera Active CN109344690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810903897.7A CN109344690B (en) 2018-08-09 2018-08-09 People counting method based on depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810903897.7A CN109344690B (en) 2018-08-09 2018-08-09 People counting method based on depth camera

Publications (2)

Publication Number Publication Date
CN109344690A true CN109344690A (en) 2019-02-15
CN109344690B CN109344690B (en) 2022-09-23

Family

ID=65291465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810903897.7A Active CN109344690B (en) 2018-08-09 2018-08-09 People counting method based on depth camera

Country Status (1)

Country Link
CN (1) CN109344690B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674672A (en) * 2019-07-10 2020-01-10 北京滴普科技有限公司 Multi-scene people counting method based on tof camera
CN110930411A (en) * 2019-11-20 2020-03-27 杭州光珀智能科技有限公司 Human body segmentation method and system based on depth camera
CN111310567A (en) * 2020-01-16 2020-06-19 中国建设银行股份有限公司 Face recognition method and device under multi-person scene
CN111881843A (en) * 2020-07-30 2020-11-03 河南天迈科技有限公司 Taxi passenger carrying number counting method based on face detection
CN112509184A (en) * 2020-12-02 2021-03-16 海南华晟瑞博科技有限公司 Method and system for monitoring house entrance and exit of specific crowd and storage medium
CN112819835A (en) * 2021-01-21 2021-05-18 博云视觉科技(青岛)有限公司 Passenger flow counting method based on 3D depth video
CN113034544A (en) * 2021-03-19 2021-06-25 奥比中光科技集团股份有限公司 People flow analysis method and device based on depth camera
CN113965701A (en) * 2021-09-10 2022-01-21 苏州雷格特智能设备股份有限公司 Multi-target space coordinate corresponding binding method based on two depth cameras
WO2023231290A1 (en) * 2022-05-30 2023-12-07 哈尔滨工业大学(深圳) Casualty recognition method and system based on deep learning in casualty gathering place scene

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872422A (en) * 2010-02-10 2010-10-27 杭州海康威视软件有限公司 People flow rate statistical method and system capable of precisely identifying targets
US20130136307A1 (en) * 2010-08-17 2013-05-30 Jaeshin Yu Method for counting objects and apparatus using a plurality of sensors
US20140139633A1 (en) * 2012-11-21 2014-05-22 Pelco, Inc. Method and System for Counting People Using Depth Sensor
KR101448392B1 (en) * 2013-06-21 2014-10-13 호서대학교 산학협력단 People counting method
CN104517095A (en) * 2013-10-08 2015-04-15 南京理工大学 Head division method based on depth image
CN104751491A (en) * 2015-04-10 2015-07-01 中国科学院宁波材料技术与工程研究所 Method and device for tracking crowds and counting pedestrian flow
CN104835147A (en) * 2015-04-15 2015-08-12 中国科学院上海微系统与信息技术研究所 Method for detecting crowded people flow in real time based on three-dimensional depth map data
US20160140397A1 (en) * 2012-01-17 2016-05-19 Avigilon Fortress Corporation System and method for video content analysis using depth sensing
KR20160119597A (en) * 2015-04-06 2016-10-14 주식회사 케이티 Method for detecting human using plural depth camera and device
CN106548163A (en) * 2016-11-25 2017-03-29 青岛大学 Method based on TOF depth camera passenger flow countings

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872422A (en) * 2010-02-10 2010-10-27 杭州海康威视软件有限公司 People flow rate statistical method and system capable of precisely identifying targets
US20130136307A1 (en) * 2010-08-17 2013-05-30 Jaeshin Yu Method for counting objects and apparatus using a plurality of sensors
US20160140397A1 (en) * 2012-01-17 2016-05-19 Avigilon Fortress Corporation System and method for video content analysis using depth sensing
US20140139633A1 (en) * 2012-11-21 2014-05-22 Pelco, Inc. Method and System for Counting People Using Depth Sensor
KR101448392B1 (en) * 2013-06-21 2014-10-13 호서대학교 산학협력단 People counting method
CN104517095A (en) * 2013-10-08 2015-04-15 南京理工大学 Head division method based on depth image
KR20160119597A (en) * 2015-04-06 2016-10-14 주식회사 케이티 Method for detecting human using plural depth camera and device
CN104751491A (en) * 2015-04-10 2015-07-01 中国科学院宁波材料技术与工程研究所 Method and device for tracking crowds and counting pedestrian flow
CN104835147A (en) * 2015-04-15 2015-08-12 中国科学院上海微系统与信息技术研究所 Method for detecting crowded people flow in real time based on three-dimensional depth map data
CN106548163A (en) * 2016-11-25 2017-03-29 青岛大学 Method based on TOF depth camera passenger flow countings

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张华 et al.: "Real-time people counting method based on an RGB-D camera", Computer Engineering and Applications *
张文涛 et al.: "Research on a bus passenger counting method based on SVM", China Sciencepaper *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674672B (en) * 2019-07-10 2020-10-27 北京滴普科技有限公司 Multi-scene people counting method based on tof camera
CN110674672A (en) * 2019-07-10 2020-01-10 北京滴普科技有限公司 Multi-scene people counting method based on tof camera
CN110930411B (en) * 2019-11-20 2023-04-28 浙江光珀智能科技有限公司 Human body segmentation method and system based on depth camera
CN110930411A (en) * 2019-11-20 2020-03-27 杭州光珀智能科技有限公司 Human body segmentation method and system based on depth camera
CN111310567A (en) * 2020-01-16 2020-06-19 中国建设银行股份有限公司 Face recognition method and device under multi-person scene
CN111310567B (en) * 2020-01-16 2023-06-23 中国建设银行股份有限公司 Face recognition method and device in multi-person scene
CN111881843A (en) * 2020-07-30 2020-11-03 河南天迈科技有限公司 Taxi passenger carrying number counting method based on face detection
CN111881843B (en) * 2020-07-30 2023-12-29 河南天迈科技有限公司 Face detection-based taxi passenger carrying number counting method
CN112509184A (en) * 2020-12-02 2021-03-16 海南华晟瑞博科技有限公司 Method and system for monitoring house entrance and exit of specific crowd and storage medium
CN112819835A (en) * 2021-01-21 2021-05-18 博云视觉科技(青岛)有限公司 Passenger flow counting method based on 3D depth video
CN113034544A (en) * 2021-03-19 2021-06-25 奥比中光科技集团股份有限公司 People flow analysis method and device based on depth camera
CN113965701A (en) * 2021-09-10 2022-01-21 苏州雷格特智能设备股份有限公司 Multi-target space coordinate corresponding binding method based on two depth cameras
CN113965701B (en) * 2021-09-10 2023-11-14 苏州雷格特智能设备股份有限公司 Multi-target space coordinate corresponding binding method based on two depth cameras
WO2023231290A1 (en) * 2022-05-30 2023-12-07 哈尔滨工业大学(深圳) Casualty recognition method and system based on deep learning in casualty gathering place scene

Also Published As

Publication number Publication date
CN109344690B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN109344690A (en) A kind of demographic method based on depth camera
CN108320510B (en) Traffic information statistical method and system based on aerial video shot by unmanned aerial vehicle
CN110425005B (en) Safety monitoring and early warning method for man-machine interaction behavior of belt transport personnel under mine
CN103049787B (en) A kind of demographic method based on head shoulder feature and system
US8213679B2 (en) Method for moving targets tracking and number counting
Boltes et al. T-junction: Experiments, trajectory collection, and analysis
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN103473554B (en) Artificial abortion&#39;s statistical system and method
US9361520B2 (en) Method and system for tracking objects
CN103150549B (en) A kind of road tunnel fire detection method based on the early stage motion feature of smog
CN104517095B (en) A kind of number of people dividing method based on depth image
CN104268598B (en) Human leg detection method based on two-dimensional scanning lasers
CN104183142B (en) A kind of statistical method of traffic flow based on image vision treatment technology
CN108209926A (en) Human Height measuring system based on depth image
TW201324383A (en) Method and apparatus for video analytics based object counting
CN102750527A (en) Long-time stable human face detection and tracking method in bank scene and long-time stable human face detection and tracking device in bank scene
CN102289948A (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN109614948B (en) Abnormal behavior detection method, device, equipment and storage medium
CN103325115B (en) A kind of method of monitoring people counting based on overhead camera head
CN103150559A (en) Kinect three-dimensional depth image-based head identification and tracking method
CN103164711A (en) Regional people stream density estimation method based on pixels and support vector machine (SVM)
CN104778727A (en) Floating car counting method based on video monitoring processing technology
CN104599291B (en) Infrared motion target detection method based on structural similarity and significance analysis
CN108471497A (en) A kind of ship target real-time detection method based on monopod video camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant