CN110633671A - Bus passenger flow real-time statistical method based on depth image


Info

Publication number
CN110633671A
CN110633671A
Authority
CN
China
Prior art keywords
track
depth
passenger flow
camera
extreme
Prior art date
Legal status
Pending
Application number
CN201910869462.XA
Other languages
Chinese (zh)
Inventor
李梁燕
靳展
王鹏
李广
Current Assignee
Tianjin Card Intelligent Network Polytron Technologies Inc
Original Assignee
Tianjin Card Intelligent Network Polytron Technologies Inc
Priority date
Filing date
Publication date
Application filed by Tianjin Card Intelligent Network Polytron Technologies Inc
Priority to CN201910869462.XA
Publication of CN110633671A
Legal status: Pending

Classifications

    • G06T7/11 Region-based segmentation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20032 Median filtering
    • G06T2207/20224 Image subtraction
    • G06T2207/30196 Human being; Person
    • G06T2207/30241 Trajectory
    • G06T2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a real-time bus passenger flow statistical method based on depth image information. The statistical method comprises the following steps: depth data acquisition, capturing depth video of passengers boarding and alighting at the front and rear doors; preprocessing of the depth video, converting the depth information format and smoothing and denoising the images; background modeling of the depth video to extract foreground frames; extreme point detection; screening of the extreme points that correspond to human heads; tracking of the detected targets and extraction of trajectory information; determination of the target trajectory direction; and passenger flow counting. The invention overcomes the shortcomings of conventional passenger flow counters, whose accuracy drops under heavy passenger flow and poor illumination, as well as the strong data dependence and high hardware requirements of learning-based counters.

Description

Bus passenger flow real-time statistical method based on depth image
Technical field:
The invention relates to the technical field of image pattern recognition, and in particular to a real-time bus passenger flow statistical method based on depth images.
Background art:
Most intelligent video passenger flow statistical methods currently on the market involve video preprocessing, image processing, pattern recognition, machine learning, deep learning and related technologies: a vertically downward camera detects, recognizes and tracks the head or shoulders of each person in the video, and the resulting motion trajectories are used to determine the passenger flow.
At present, domestic bus passenger flow statistical methods mainly comprise the following. The bus IC card statistical method: because IC card charging modes differ, the count produced by the passenger flow counter deviates considerably from the actual passenger flow. The bus pedal contact method: when many people board or alight at once, several passengers stepping on the pedal simultaneously cause large errors, the contact detection equipment is prone to failure, and maintenance costs are high. The infrared sensing method: the miss rate for boarding and alighting passengers is high, calibration is complex, and infrared sensing is easily affected by illumination. The video image analysis method: different algorithms are used depending on the camera type.
When a traditional two-dimensional camera is used to capture the video in a top-down view, poor illumination degrades the video quality and greatly reduces the accuracy of the passenger flow counter. RGB-D three-dimensional data, in turn, suffer from large video volume, high resource consumption and high hardware requirements for real-time processing. Machine learning and deep learning methods require large amounts of training data for detection and recognition, port poorly between scenarios, and need strong computing resources such as GPUs, at a correspondingly higher cost.
Detection approaches for passenger flow statistics fall into three main categories: methods based on feature points, methods based on human body segmentation and tracking, and methods based on deep learning; all three have shortcomings. The accuracy of the first two still needs improvement, while the third, although highly accurate, cannot meet the real-time and hardware-cost requirements for widespread deployment.
Content of the invention:
The invention aims to provide a bus passenger detection, tracking and counting method based on depth data, which can count bus passenger flow in real time with high accuracy while keeping hardware and computation costs low. The specific technical scheme is as follows:
The depth data used by the method are depth passenger flow videos captured from above the front-door and rear-door areas of the bus by depth cameras mounted vertically over the boarding step areas. The passenger flow statistics steps are as follows:
Step 1: depth data acquisition:
A calibrated depth camera is installed vertically above the bus door and its intrinsic parameters are obtained; depth video data of passengers boarding and alighting are then acquired.
Step 2: depth data preprocessing:
The raw depth data contain abnormal depth values that strongly disturb depth image processing, so the abnormal data are rationalized in the preprocessing stage: using the camera's height above the floor as a threshold, every depth value greater than the threshold is set to 0.
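A minimal sketch of this preprocessing rule (in Python with NumPy; the function name and the assumption that depth arrives as 16-bit millimetre values are illustrative, not taken from the patent):

    import numpy as np

    def preprocess_depth(frame_mm: np.ndarray, camera_height_mm: int) -> np.ndarray:
        """Rationalize abnormal readings in a 16-bit depth frame: any value
        farther than the camera-to-floor distance is physically implausible
        and is set to 0 (invalid)."""
        cleaned = frame_mm.copy()
        cleaned[cleaned > camera_height_mm] = 0
        return cleaned

With a camera mounted just under 3 m above the floor, preprocess_depth(frame, 3000) reproduces the 3000 threshold used in the embodiment below.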
Step 3: depth data background modeling:
Background modeling is performed on the depth video with a computationally simple frame difference method, so that only the extracted foreground is processed further. The background modeling is computed directly on the 16-bit depth image, and its threshold is adjusted according to the camera installation height.
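The sketch below shows one reading of this step, differencing each frame against a reference frame of the empty doorway; whether the patent differences against such a fixed reference or against the previous frame is not specified, and the names and millimetre units are assumptions:

    import numpy as np

    def foreground_mask(frame_mm: np.ndarray, background_mm: np.ndarray,
                        diff_threshold_mm: int) -> np.ndarray:
        """Frame-difference foreground extraction directly on 16-bit depth:
        pixels whose depth differs from the reference by more than a
        threshold (tuned to the installation height) are foreground."""
        diff = np.abs(frame_mm.astype(np.int32) - background_mm.astype(np.int32))
        return (diff > diff_threshold_mm).astype(np.uint8)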
Step 4: extreme point detection and screening:
The foreground frame obtained from background modeling is an original 16-bit depth image, which is converted into an 8-bit (0-255) gray-scale image. The gray-scale image is then downscaled, maximum points are detected with a neighborhood maximum detection method, and the detected maxima are corrected, judged and screened.
The gray-scale map of the foreground frame is 640x400 and is scaled down to 80x50; at that size the recall rate of heads as extreme points is 100% and different heads can still be distinguished. Gaussian filtering and median filtering are first applied in sequence to the scaled image, and holes in the image are filled with the median filtering result; maximum detection is then performed on the hole-filled image with a 3x3 kernel.
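A sketch of the conversion, scaling, filtering and 3x3 maximum detection follows (standard OpenCV calls; the inversion that makes heads the brightest pixels and the exact 16-to-8-bit scaling are assumptions, since the patent does not spell them out):

    import cv2
    import numpy as np

    def detect_head_maxima(foreground_mm: np.ndarray, max_depth_mm: int):
        """Convert 16-bit foreground to 8-bit grey, shrink 640x400 -> 80x50,
        smooth, fill holes, and return local-maximum pixels as (x, y)."""
        # Invert during the 8-bit conversion so near (head) pixels are bright.
        scaled = foreground_mm.astype(np.int32) * 255 // max_depth_mm
        grey = np.where(foreground_mm > 0,
                        255 - np.clip(scaled, 0, 255), 0).astype(np.uint8)
        small = cv2.resize(grey, (80, 50))
        gauss = cv2.GaussianBlur(small, (3, 3), 0)    # Gaussian first,
        median = cv2.medianBlur(gauss, 3)             # then median
        filled = np.where(gauss == 0, median, gauss)  # fill holes from median
        # A pixel is a local maximum if a 3x3 dilation leaves it unchanged.
        dilated = cv2.dilate(filled, np.ones((3, 3), np.uint8))
        ys, xs = np.nonzero((filled == dilated) & (filled > 0))
        return list(zip(xs.tolist(), ys.tolist()))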
Step 5: screening the extreme points that correspond to human heads:
The pixels in the neighborhood of the extreme points obtained in the previous step are analyzed to screen out the extreme points that correspond to human heads, and a head detection box is derived from the actual installation height of the camera.
Step 6: head trajectory tracking:
The detection box information is tracked with a Kalman filter, and the center points of matched detection boxes in successive frames are connected to form the trajectory of a target.
The Kalman parameters are adjusted so that the tracking speed matches a person's walking speed. With the parameters set, a trajectory is assigned an ID once it has been matched for n consecutive frames, and tracking of a trajectory ends once no detection has been matched for n consecutive frames; each ID is then evaluated against the boarding/alighting counting rule. (For example, with n = 3, tracking ends after 3 consecutive undetected frames; when passenger flow is later counted from the trajectory, the information of those last 3 prediction-only frames is excluded, so the trajectory can be tracked continuously while noise is prevented from corrupting the count and the direction judgment.)
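A minimal single-track sketch of this tracking scheme (constant-velocity Kalman model via OpenCV; the class layout and the n_confirm/n_miss parameters are illustrative stand-ins for the patent's "n consecutive frames" rule, and detection-to-track association is assumed to happen elsewhere):

    import cv2
    import numpy as np

    class HeadTrack:
        """One Kalman-filtered head trajectory with state (x, y, vx, vy)."""
        _next_id = 0

        def __init__(self, cx: float, cy: float):
            kf = cv2.KalmanFilter(4, 2)
            kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                            [0, 1, 0, 1],
                                            [0, 0, 1, 0],
                                            [0, 0, 0, 1]], np.float32)
            kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
            kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
            kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
            kf.errorCovPost = np.eye(4, dtype=np.float32)
            kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
            self.kf = kf
            self.points = [(cx, cy)]          # center points of matched boxes
            self.hits, self.misses, self.id = 1, 0, None

        def step(self, detection=None, n_confirm=3, n_miss=3) -> bool:
            """Advance one frame; returns False once the track should end."""
            pred = self.kf.predict()
            if detection is not None:                     # matched this frame
                cx, cy = detection
                self.kf.correct(np.array([[cx], [cy]], np.float32))
                self.points.append((cx, cy))
                self.hits, self.misses = self.hits + 1, 0
                if self.id is None and self.hits >= n_confirm:
                    HeadTrack._next_id += 1               # n consecutive
                    self.id = HeadTrack._next_id          # matches: assign ID
            else:                                         # missed this frame
                self.misses += 1
                self.points.append((float(pred[0, 0]), float(pred[1, 0])))
            return self.misses < n_miss                   # n misses: track ends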
Step 7: trajectory direction determination:
The boarding or alighting direction of a target is judged from the orientation of the camera's world coordinate system and the start and end information of the trajectory: the sign of the difference between the trajectory's end and start points along the y axis is the basis for judging the trajectory direction. (For the reason given in the previous step, the last n frames of the trajectory are not considered when computing the direction.)
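As a small sketch (the last n points are dropped as described above; which sign of the y displacement corresponds to boarding depends on the camera mounting and is assumed here):

    def track_direction(points, n_tail=3):
        """Judge boarding vs. alighting from the net y displacement of a
        trajectory, excluding its prediction-only tail."""
        usable = points[:-n_tail] if len(points) > n_tail else points
        dy = usable[-1][1] - usable[0][1]
        return "boarding" if dy > 0 else "alighting"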
Step 8: passenger flow counting:
The numbers of boarding and alighting passengers in the video are counted according to the number of trajectories and their directions.
Preferably, step 5 further includes the following subsequent processing:
The non-zero proportion of the 15x15 neighborhood of each extreme point is calculated, and a proportion threshold is set to screen out abrupt, spurious extreme points. Extreme points that lie close together are then merged into one, the largest being kept as the final extreme point and the smaller ones deleted. The distance between extreme points is the cosine distance, calculated as follows:
d(p1, p2) = 1 - (p1 · p2) / (‖p1‖ ‖p2‖)
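A sketch of this screening and merging (the extreme points are read as pixel-coordinate vectors for the cosine distance, which is one interpretation of the formula above; the two threshold values are illustrative):

    import numpy as np

    def screen_and_merge(grey: np.ndarray, extrema, ratio_thr=0.6, dist_thr=0.02):
        """Drop isolated spikes by the non-zero ratio of their 15x15
        neighborhood, then merge near-duplicate maxima, keeping the
        brightest of each cluster."""
        def cos_dist(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            return 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

        kept = []
        for x, y in extrema:
            patch = grey[max(0, y - 7):y + 8, max(0, x - 7):x + 8]  # 15x15
            if np.count_nonzero(patch) / patch.size >= ratio_thr:
                kept.append((x, y))
        kept.sort(key=lambda p: int(grey[p[1], p[0]]), reverse=True)
        merged = []
        for p in kept:                  # brightest first, so smaller twins
            if all(cos_dist(p, q) >= dist_thr for q in merged):
                merged.append(p)        # near-duplicates are absorbed/deleted
        return merged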
Further preferably, step 5 also includes the following subsequent processing:
A detection area is drawn in the original pixel image, the three-dimensional extent of the detection area is calculated from the camera intrinsic parameters, and an extreme point is regarded as a valid head extreme point only when it lies inside this three-dimensional area. The formula for converting pixel coordinates into world coordinates is as follows:
wx = (xc - cu) · zc / fx,  wy = (yc - cv) · zc / fy,  wz = zc
[wx, wy, wz]^T is the world coordinate, [xc, yc, zc]^T is the pixel coordinate (with zc the depth value at the pixel), and cu, cv, fx and fy are the camera intrinsic parameters. After the world coordinates of the extreme points are obtained, the size of the detection box is derived from the actual installation height of the camera and the actual size of a human head.
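A sketch of the back-projection and the height-dependent box sizing (the back-projection is the standard pinhole model implied by the listed intrinsics; head_width and the box rule are illustrative assumptions):

    import numpy as np

    def pixel_to_world(xc, yc, zc, fx, fy, cu, cv):
        """Back-project pixel (xc, yc) with depth zc into world coordinates."""
        return np.array([(xc - cu) * zc / fx,
                         (yc - cv) * zc / fy,
                         zc])

    def head_box_side_px(fx, zc, head_width=0.25):
        """Side of the head detection box in pixels: a head of physical width
        head_width at depth zc projects to about fx * head_width / zc pixels,
        so heads nearer the camera get larger boxes."""
        return fx * head_width / zc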
In a preferred embodiment, step 8 comprises the following specific process:
The numbers of boarding and alighting passengers in the video are counted according to the number of trajectories and their directions. Because the camera hangs inside the bus, long-term driving, jolting and vibration can slightly change the camera intrinsics. A boarding/alighting reference line would have to be placed by computing its world coordinate from those intrinsics, so a slight change in the intrinsics would shift the reference line. In this design, therefore, the number of trajectory points inside the detection area and the net displacement of the trajectory along the y direction are used as the judgment rule instead. The last few points of a trajectory do not participate in the calculation: they are produced by tracking alone, with no detections involved, so their accuracy cannot be guaranteed. For example, a large change in a person's direction or speed just before the detections disappear can cause large deviations in the predicted trajectory points.
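A counting sketch under these rules (min_points and min_dy are illustrative placeholders for the patent's in-area point count and y-displacement thresholds):

    def count_passengers(finished_tracks, n_tail=3, min_points=5, min_dy=0.3):
        """Count boarding/alighting from finished trajectories, given per-track
        lists of (x, y) world coordinates in metres. The prediction-only tail
        is dropped, short tracks are discarded as noise, and the sign of the
        net y displacement decides the direction."""
        boarding = alighting = 0
        for pts in finished_tracks:
            pts = pts[:-n_tail] if len(pts) > n_tail else pts
            if len(pts) < min_points:
                continue                       # too few in-area points: noise
            dy = pts[-1][1] - pts[0][1]
            if dy > min_dy:
                boarding += 1
            elif dy < -min_dy:
                alighting += 1
        return boarding, alighting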
Compared with the prior art, the invention has the advantages that:
the invention overcomes the defects of large passenger flow volume and low accuracy of poor illumination (such as heavy rain, heavy snow, heavy fog and night) of the traditional passenger flow instrument.
(II) It also overcomes the strong data dependence and high hardware requirements of passenger flow counters based on artificial intelligence (machine learning or deep learning). An AI-based passenger flow counter must continually train a feature model, different application scenarios require models trained on different data, and such models place high demands on hardware, especially when the video stream must be processed in real time. The present passenger flow counter is computationally simple and undemanding on hardware.
(III) The detection box in the passenger flow counter changes with the actual size of the human head: at different actual distances from the camera the head detection box differs in size, and the closer a head is to the camera, the larger its projection on the depth image and the larger the detection box. The accurate detection box size ensures the continuity of each head ID during tracking.
(IV) The head detection part applied to the depth image in the passenger flow counter can conveniently be integrated with head re-identification, so as to obtain specific information on the stop at which each passenger boards or alights, providing accurate input for bus passenger flow scheduling and the like.
Description of the drawings:
Fig. 1 is a schematic flow chart of the bus passenger flow statistics of this patent.
Detailed description:
Embodiment:
A real-time bus passenger flow statistical method, in which the video data are passenger flow videos captured by cameras from above the front-door and rear-door areas of a bus. The statistical method comprises the following steps:
Step 1: depth data acquisition:
A calibrated depth camera is installed vertically above the bus door and its intrinsic parameters are obtained; depth video data of passengers boarding and alighting are then acquired.
Step 2: depth data preprocessing:
The raw depth data contain abnormal depth values that strongly disturb depth image processing, so the abnormal data are rationalized in the preprocessing stage. The camera is generally installed less than 3 m above the floor, so in this embodiment every depth value greater than 3000 is set to 0.
Step 3: depth data background modeling:
Background modeling is performed on the depth video with a computationally simple frame difference method, so that only the extracted foreground is processed further. The frame-difference background modeling is computed directly on the 16-bit depth image, with the threshold adjusted according to the camera installation height.
Step 4: extreme point detection and screening:
The foreground frame obtained from background modeling is an original 16-bit depth image, which is converted into an 8-bit (0-255) gray-scale image. The gray-scale image is then downscaled, maximum points are detected with a neighborhood maximum detection method, and the detected maxima are corrected, judged and screened.
The gray-scale map of the foreground frame is 640x400 and is scaled down to 80x50; at that size the recall rate of heads as extreme points is 100% and different heads can still be distinguished. Gaussian filtering and median filtering are first applied in sequence to the scaled image, and holes in the image are filled with the median filtering result; maximum detection is then performed on the hole-filled image with a 3x3 kernel.
Step 5: screening the extreme points that correspond to human heads:
The pixels in the neighborhood of the extreme points obtained in the previous step are analyzed to screen out the extreme points that correspond to human heads, and a head detection box is derived from the actual installation height of the camera.
The recall rate of heads among the detected extreme points is 100%, but the precision is low, so the non-zero proportion of the 15x15 neighborhood of each extreme point is calculated and a proportion threshold is set to screen out abrupt, spurious extreme points. Extreme points that lie close together are merged into one, the largest being kept as the final extreme point and the smaller ones deleted. The distance between extreme points is the cosine distance, calculated as follows:
d(p1, p2) = 1 - (p1 · p2) / (‖p1‖ ‖p2‖)
In practice, a detection area is drawn in the original pixel image, the three-dimensional extent of the detection area is calculated from the camera intrinsic parameters, and an extreme point is regarded as a valid head extreme point only when it lies inside this three-dimensional area. The formula for converting pixel coordinates into world coordinates is as follows:
wx = (xc - cu) · zc / fx,  wy = (yc - cv) · zc / fy,  wz = zc
[wx, wy, wz]^T is the world coordinate, [xc, yc, zc]^T is the pixel coordinate, and cu, cv, fx and fy are the camera intrinsic parameters. After the world coordinates of the extreme points are obtained, the size of the detection box is derived from the actual installation height of the camera and the actual size of a human head.
Step 6: head trajectory tracking:
The detection box information is tracked with a Kalman filter, and the center points of matched detection boxes in successive frames are connected to form the trajectory of a target.
The Kalman parameters are adjusted so that the tracking speed matches a person's walking speed. With the parameters set, a trajectory is assigned an ID once it has been matched for n consecutive frames, and tracking of a trajectory ends once no detection has been matched for n consecutive frames; each ID is then evaluated against the boarding/alighting counting rule. (For example, with n = 3, tracking ends after 3 consecutive undetected frames; when passenger flow is later counted from the trajectory, the information of those last 3 prediction-only frames is excluded, so the trajectory can be tracked continuously while noise is prevented from corrupting the count and the direction judgment.)
Step 7: trajectory direction determination:
The boarding or alighting direction of a target is judged from the orientation of the camera's world coordinate system and the start and end information of the trajectory: the sign of the difference between the trajectory's end and start points along the y axis is the basis for judging the trajectory direction. (For the reason given in the previous step, the last n frames of the trajectory are not considered when computing the direction.)
Step 8: passenger flow counting:
The numbers of boarding and alighting passengers in the video are counted according to the number of trajectories and their directions. Because the camera hangs inside the bus, long-term operation slightly changes the camera intrinsics, and since a boarding/alighting reference line would have to be placed by computing its world coordinate from those intrinsics, the reference line would drift. In this design, therefore, the number of trajectory points inside the detection area and the net displacement of the trajectory along the y direction are used as the judgment rule instead. The last few points of a trajectory do not participate in the calculation: they are produced by tracking alone, with no detections involved, so their accuracy cannot be guaranteed. For example, a large change in a person's direction or speed just before the detections disappear can cause large deviations in the predicted trajectory points.

Claims (4)

1. A real-time bus passenger flow statistical method based on depth images, characterized in that the video data of the method are passenger flow videos captured by depth cameras from above the front-door and rear-door areas of a bus, and that the bus passenger flow statistical method comprises the following steps:
Step 1: depth data acquisition:
installing a calibrated depth camera vertically above the bus door and obtaining the intrinsic parameters of the camera; acquiring depth video data of passengers boarding and alighting;
Step 2: depth data preprocessing:
using the distance between the camera and the floor as a threshold, setting every depth value greater than the threshold to 0;
Step 3: depth data background modeling:
performing background modeling on the depth video data with a frame difference method, and further processing only the extracted foreground; the frame-difference background modeling is computed directly on the 16-bit depth image, with the threshold adjusted according to the camera installation height;
Step 4: extreme point detection and screening:
the foreground frame obtained from background modeling is an original 16-bit depth image, which is converted into an 8-bit gray-scale image, i.e. a 0-255 gray-scale image; the gray-scale image is downscaled, maximum points are detected with a neighborhood maximum detection method, and the detected maxima are corrected, judged and screened;
the gray-scale map of the foreground frame is 640x400 and is scaled down to 80x50, at which size the recall rate of heads as extreme points is 100% and different heads can still be distinguished; Gaussian filtering and median filtering are first applied in sequence to the scaled image, and holes in the image are filled with the median filtering result; maximum detection is then performed on the hole-filled image with a 3x3 kernel;
Step 5: screening the extreme points that correspond to human heads:
analyzing the pixels in the neighborhood of the extreme points obtained in the previous step to screen out the extreme points that correspond to human heads, and deriving a head detection box from the actual installation height of the camera;
Step 6: head trajectory tracking:
tracking the detection box information with a Kalman filter, and connecting the center points of matched detection boxes in successive frames to form the trajectory of a target;
the Kalman parameters are adjusted so that the tracking speed matches a person's walking speed; with the parameters set, a trajectory is assigned an ID once it has been matched for n consecutive frames, and tracking of a trajectory ends once no detection has been matched for n consecutive frames, after which each ID is evaluated against the boarding/alighting counting rule (for example, with n = 3, tracking ends after 3 consecutive undetected frames; when passenger flow is later counted from the trajectory, the information of those last 3 prediction-only frames is excluded, so the trajectory can be tracked continuously while noise is prevented from corrupting the count and the direction judgment);
Step 7: trajectory direction determination:
judging the boarding or alighting direction of a target from the orientation of the camera's world coordinate system and the start and end information of the trajectory; the sign of the difference between the trajectory's end and start points along the y axis is the basis for judging the trajectory direction (for the reason given above, the last n frames of the trajectory are not considered when computing the direction);
Step 8: passenger flow counting:
counting the numbers of boarding and alighting passengers in the video according to the number of trajectories and their directions.
2. The real-time bus passenger flow statistical method based on depth images according to claim 1, characterized in that step 5 further comprises the following subsequent processing:
the non-zero proportion of the 15x15 neighborhood of each extreme point is calculated, and a proportion threshold is set to screen out abrupt, spurious extreme points; extreme points that lie close together are merged into one, the largest being kept as the final extreme point and the smaller ones deleted; the distance between extreme points is the cosine distance, calculated as follows:
d(p1, p2) = 1 - (p1 · p2) / (‖p1‖ ‖p2‖)
3. The real-time bus passenger flow statistical method based on depth images according to claim 2, characterized in that step 5 further comprises the following subsequent processing:
a detection area is drawn in the original pixel image, the three-dimensional extent of the detection area is calculated from the camera intrinsic parameters, and an extreme point is regarded as a valid head extreme point only when it lies inside this three-dimensional area; the formula for converting pixel coordinates into world coordinates is as follows:
wx = (xc - cu) · zc / fx,  wy = (yc - cv) · zc / fy,  wz = zc
[wx, wy, wz]^T is the world coordinate, [xc, yc, zc]^T is the pixel coordinate, and cu, cv, fx and fy are the camera intrinsic parameters; after the world coordinates of the extreme points are obtained, the size of the detection box is derived from the actual installation height of the camera and the actual size of a human head.
4. The real-time bus passenger flow statistical method based on depth images according to any one of claims 1-3, characterized in that step 8 comprises the following specific process:
using the number of trajectory points inside the detection area and the net displacement of the trajectory along the y direction as the judgment rule; the last few points of a trajectory do not participate in the calculation.
CN201910869462.XA 2019-09-16 2019-09-16 Bus passenger flow real-time statistical method based on depth image Pending CN110633671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910869462.XA CN110633671A (en) 2019-09-16 2019-09-16 Bus passenger flow real-time statistical method based on depth image


Publications (1)

Publication Number Publication Date
CN110633671A 2019-12-31

Family

ID=68972597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910869462.XA Pending CN110633671A (en) 2019-09-16 2019-09-16 Bus passenger flow real-time statistical method based on depth image

Country Status (1)

Country Link
CN (1) CN110633671A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385690A (en) * 2010-09-01 2012-03-21 汉王科技股份有限公司 Target tracking method and system based on video image
CN103646253A (en) * 2013-12-16 2014-03-19 重庆大学 Bus passenger flow statistics method based on multi-motion passenger behavior analysis
CN104268506A (en) * 2014-09-15 2015-01-07 郑州天迈科技股份有限公司 Passenger flow counting detection method based on depth images
CN105512720A (en) * 2015-12-15 2016-04-20 广州通达汽车电气股份有限公司 Public transport vehicle passenger flow statistical method and system
US20170286780A1 (en) * 2016-03-18 2017-10-05 Shenzhen University Method and system for calculating passenger crowdedness degree
CN107563347A (en) * 2017-09-20 2018-01-09 南京行者易智能交通科技有限公司 A kind of passenger flow counting method and apparatus based on TOF camera

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222468A (en) * 2020-01-08 2020-06-02 浙江光珀智能科技有限公司 People stream detection method and system based on deep learning
CN112580633A (en) * 2020-12-25 2021-03-30 博大视野(厦门)科技有限公司 Public transport passenger flow statistical device and method
CN112580633B (en) * 2020-12-25 2024-03-01 博大视野(厦门)科技有限公司 Public transport passenger flow statistics device and method based on deep learning
CN112819835A (en) * 2021-01-21 2021-05-18 博云视觉科技(青岛)有限公司 Passenger flow counting method based on 3D depth video
CN114332184A (en) * 2021-11-30 2022-04-12 南京行者易智能交通科技有限公司 Passenger statistical identification method and device based on monocular depth estimation
CN114926422A (en) * 2022-05-11 2022-08-19 西南交通大学 Method and system for detecting boarding and alighting passenger flow
CN116071710A (en) * 2023-04-06 2023-05-05 南京运享通信息科技有限公司 Passenger flow volume statistics method based on intelligent stadium monitoring video
CN116503789A (en) * 2023-06-25 2023-07-28 南京理工大学 Bus passenger flow detection method, system and equipment integrating track and scale
CN116503789B (en) * 2023-06-25 2023-09-05 南京理工大学 Bus passenger flow detection method, system and equipment integrating track and scale


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20191231