WO2017156772A1 - Method and system for calculating passenger crowding degree - Google Patents

Method and system for calculating passenger crowding degree

Info

Publication number
WO2017156772A1
WO2017156772A1 (PCT/CN2016/076740)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
passengers
passenger
getting
human head
Prior art date
Application number
PCT/CN2016/076740
Other languages
English (en)
French (fr)
Inventor
张勇
刘磊
赵东宁
李岩山
陈剑勇
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Priority to PCT/CN2016/076740 priority Critical patent/WO2017156772A1/zh
Priority to JP2018502630A priority patent/JP6570731B2/ja
Priority to US15/628,605 priority patent/US10223597B2/en
Publication of WO2017156772A1 publication Critical patent/WO2017156772A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7747 Organisation of the process, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking

Definitions

  • the present invention relates to the field of public transportation, and in particular to a method and system for calculating passenger crowding degree.
  • bus intelligent dispatching counts, through a terminal installed on each bus, the number of passengers getting on, the number of passengers getting off and the total number of passengers on every bus of each route, so that the passenger load of each current bus route in each time period can be monitored.
  • bus intelligent dispatching can also perform deep data mining on historical data of the flows of people getting on and off at each station, providing a basis for formulating reasonable and efficient bus routes. Therefore, as the most important part of an intelligent bus dispatching system, accurate counting of bus passengers is the key to realizing such a system.
  • the traditional methods of passenger flow statistics rely on manual counting, or on contact devices such as coin boxes and card readers.
  • although manual counting can obtain passenger flow data with acceptable accuracy, it requires a great deal of manpower and financial resources; it is costly and its real-time performance is poor.
  • infrared detection systems can count the numbers of people getting on and off at the same time, but infrared devices are easily disturbed by external factors, such as people passing continuously or lingering for a long time, which can cause counting errors; they therefore cannot meet the accuracy requirements that a bus intelligent dispatching system places on passenger counting.
  • since an infrared system can only count the number of people passing through a door and cannot judge the direction of a passenger's movement, it cannot count passenger flow through a single door in both directions, and so cannot be applied to bus rapid transit (BRT) systems that do not distinguish boarding doors from alighting doors.
  • as bus rapid transit systems become more and more widely used, the applicability of infrared-based passenger counting becomes lower and lower.
  • an object of the present invention is to provide a method for calculating passenger crowding degree and a system thereof, aiming to solve the problem that the accuracy of passenger flow statistics in the prior art is not high.
  • the invention provides a method for calculating passenger crowding degree, the method comprising: establishing a video data collection environment and starting to collect video data of passengers getting on and off the vehicle; reading the collected video data and preprocessing consecutive multi-frame images of it; and performing human head recognition according to the preprocessing result, with the detected human head taken as the target object to be tracked by the mean shift.
  • the passenger's getting-off behavior and boarding behavior are determined in the area where the target object is located, and the crowdedness of the passengers in the vehicle is determined according to the number of passengers getting on and off.
  • the step of reading the collected video data of the passengers getting on and off the vehicle and performing preprocessing of the continuous multi-frame image on the video data specifically includes:
  • reading the frame format of the video data and determining the number of frames; establishing a single Gaussian model for each pixel of the initial frame; analyzing the changes of pixels over consecutive frames and determining whether each pixel in the frame image is a static background or a dynamic foreground; modifying the pixel values of static-background pixels while leaving dynamic-foreground pixels unmodified; and recording how many times each pixel is consecutively judged to be a static background.
  • the step of performing the human head recognition according to the result of the pre-processing and using the detected human head as the target object to be tracked by the mean shift specifically includes:
  • making a cascade classifier for judging whether a region is the top of a human head from collected positive samples containing human heads and negative samples that do not; limiting the size range of its detection window; and performing human head recognition with the size-limited cascade classifier;
  • the detected human head is used as the target object to be tracked by the mean shift.
  • the step of determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determining the crowding degree of the passengers in the vehicle according to the number of passengers getting on and off, specifically includes:
  • Two detection lines are arranged in the shooting area of the camera, and if the centroid of the target object to be tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle;
  • the total number of passengers in actual operation is determined by calculating the number of passengers getting off and the number of passengers boarding, and the ratio of the total number of passengers to the maximum passenger capacity in the vehicle is used to measure the crowdedness of passengers in the vehicle.
  • the present invention also provides a system for calculating passenger crowding degree, the system comprising:
  • a data acquisition module for establishing a video data collection environment and starting to collect video data of passengers getting on and off the vehicle
  • a pre-processing module configured to read video data of the collected passengers on and off the vehicle, and perform pre-processing of the continuous multi-frame image on the video data;
  • An object determining module configured to perform human head recognition according to the result of the preprocessing, and use the detected human head as a target object to be tracked by the mean shift;
  • the congestion determination module is configured to determine a passenger's getting off behavior and a boarding behavior in an area where the target object is located, and determine a passenger crowding degree in the vehicle according to the number of passengers getting on and off.
  • the preprocessing module comprises:
  • the read frame sub-module is configured to read a frame format of the collected video data of the passengers getting on and off the vehicle and determine the number of frames;
  • a modeling sub-module for establishing a single Gaussian model for each pixel in the initial frame
  • a state sub-module configured to analyze a change of pixel points of consecutive image frames and determine that each pixel in the frame image is a static background or a dynamic foreground;
  • the update submodule is configured to record the number of times each pixel in the frame image is consecutively determined to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively determined to be a static background, the count is restarted.
  • the object determining module comprises:
  • An identification submodule for performing human head recognition according to a cascade classifier that is limited in size of the detection window
  • the target sub-module is used to take the detected human head as the target object to be tracked by the mean shift.
  • the congestion determination module includes:
  • a first determining sub-module configured to set two detection lines in a shooting area of the camera, and if the centroid of the target object to be tracked by the mean shift crosses the above two detection lines, determine that the passenger is getting off or getting on the vehicle;
  • a second determining sub-module configured to determine the total number of passengers in actual operation by calculating the number of passengers getting off and the number of passengers getting on, and to measure the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
  • the technical solution provided by the invention uses preprocessing that removes the static background, which effectively overcomes interference from illumination changes and the like with recognizing head tops in frame images; the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.
  • FIG. 1 is a flow chart of a method for calculating a passenger congestion degree according to an embodiment of the present invention
  • FIG. 2 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention;
  • FIG. 3 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention;
  • FIG. 4 is a detailed flowchart of step S14 shown in FIG. 1 according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram showing the internal structure of a passenger congestion degree calculation system 10 according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing the internal structure of the preprocessing module 12 shown in FIG. 5 according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram showing the internal structure of the object determining module 13 shown in FIG. 5 according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing the internal structure of the congestion determination module 14 shown in FIG. 5 according to an embodiment of the present invention.
  • a specific embodiment of the present invention provides a method for calculating a passenger crowding degree, and the method mainly includes the following steps: S11, establishing a video data collection environment and starting to collect video data of passengers getting on and off; S12, reading the collected video data and preprocessing consecutive multi-frame images of it;
  • S13, performing human head recognition according to the result of the preprocessing, and using the detected human head as a target object to be tracked by the mean shift; S14, determining the passenger's getting-off and boarding behaviors in the area where the target object is located, and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
  • the calculation method of the passenger crowding degree provided by the invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.
  • FIG. 1 is a flowchart of a method for calculating a passenger congestion degree according to an embodiment of the present invention.
  • in step S11, a video data collection environment is established, and collection of video data of passengers getting on and off begins.
  • a video data collection environment is established by constructing an embedded vehicle system, and the embedded vehicle system includes hardware modules: two cameras, an embedded device main control module, a video storage module, and an onboard hardware power supply module.
  • the two cameras are placed at the top of the front and rear doors respectively, and the camera is 90 degrees to the ground.
  • the shooting range covers: the steps of getting on and off the car, part of the road outside the door and part of the car body space inside the door.
  • the in-vehicle embedded device collects the video data of the passengers getting on and off by controlling the cameras installed at the tops of the front and rear doors of the bus, and temporarily stores the video data recorded by the two cameras in the video storage module so that it can be retrieved for video processing.
  • the video data temporary storage method can effectively reduce the requirements on the embedded hardware, thereby reducing the cost of the device.
  • step S12 the collected video data of the passengers getting on and off is read, and the video data is subjected to preprocessing of consecutive multi-frame images.
  • the step S12 of reading the collected video data of the passengers getting on and off the vehicle and performing the preprocessing of the continuous multi-frame image on the video data specifically includes S121-S125, as shown in FIG. 2 .
  • FIG. 2 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention.
  • step S121 the frame format of the collected video data of the passengers getting on and off is read and the number of frames is determined.
  • the embedded device main control module reads the frame format of the video data, determines the number of frames, and determines the size m×n of the frame image, where m represents the number of rows of the frame image and n represents the number of columns of the frame image.
  • step S122 a single Gaussian model is established for each pixel in the initial frame.
  • a single Gaussian model is established for each pixel in the initial frame; the mean of the initial Gaussian model is the RGB value of the pixel and its variance is an initial constant V, after which a mixture-of-Gaussians density function is gradually constructed according to the changes of the pixel:

    $$P(x_t) = \sum_{i=1}^{K} W_{i,t}\,\eta(x_t, \mu_{i,t}, \tau_{i,t}), \qquad \eta(x_t, \mu_{i,t}, \tau_{i,t}) = \frac{\exp\bigl(-\tfrac{1}{2}(x_t-\mu_{i,t})^{\mathsf T}\tau_{i,t}^{-1}(x_t-\mu_{i,t})\bigr)}{(2\pi)^{3/2}\,\lvert\tau_{i,t}\rvert^{1/2}}, \qquad \tau_{i,t} = \sigma_{i,t}^{2} I,$$

    where η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, I is the three-dimensional identity matrix, K is the total number of Gaussian distribution modes, and μ_{i,t}, W_{i,t}, τ_{i,t}, σ_{i,t} represent the mean, weight, covariance matrix and variance of each Gaussian function, respectively.
  • step S123 the change of pixel points of consecutive image frames is analyzed and it is determined that each pixel in the frame image is a static background or a dynamic foreground.
  • the single Gaussian model of the initial frame gradually changes into the mixture-of-Gaussians model through the changes of the image pixels over consecutive frames; each pixel has at least K (K ≥ 1) Gaussian functions, and each pixel in the frame image is determined to be a static background or a dynamic foreground from the changes between frames.
  • the RGB values of the m×n pixels of the following frame image are matched in turn against the K (K ≥ 1) Gaussian functions of the corresponding pixels of the previous frame image.
  • if the match succeeds with one or several of the K functions, at least one of the K Gaussian functions can describe the change of the current pixel's RGB value; the weights W_{i,t} of the successfully matched Gaussian functions are increased, the means and variances of the remaining (K−1) unmatched Gaussian functions are updated, and the RGB value of the current pixel is determined to belong to the static background;
  • if the match is unsuccessful, none of the K Gaussian functions can describe the change of the current pixel's RGB value; the Gaussian function with the lowest weight is deleted, a new Gaussian function is created with the current pixel's RGB value as its mean and V as its variance, and the RGB value of the current pixel is determined to belong to the dynamic foreground.
  • step S124 the pixel value of the pixel determined as the static background is modified, and the pixel value of the pixel determined as the dynamic foreground is not modified.
  • the pixel values of pixels determined to be static background are modified to RGB_s, where RGB_s represents the color with the lowest rate of occurrence in daily life, while pixels determined to be dynamic foreground keep their pixel values.
  • the color modification made in this step removes static-background interference from the human head detection and trajectory tracking of the subsequent steps, greatly improving their success rate.
  • in step S125, the number of times each pixel point in the frame image is consecutively determined to be a static background is recorded; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively determined to be a static background, the count is restarted.
  • since mixture-of-Gaussians background modeling takes time, and in real scenes passengers may get on and off in less time than the modeling requires, a weight growth mechanism is added for the Gaussian functions of the original mixture-of-Gaussians modeling algorithm: the number of times each pixel in the frame image is consecutively judged to be a static background is recorded, and once this count is greater than or equal to the preset threshold K, the RGB value of the pixel is updated into the background immediately, without waiting for the Gaussian function to reach the threshold of the original algorithm; if the pixel is not consecutively judged to be a static background, the count is restarted.
  • each frame image has a total of m×n pixels, and the number K of Gaussian density functions belonging to each pixel should not exceed four; if it exceeds four, the Gaussian function with the lowest weight is deleted.
  • the small and isolated points produced by image preprocessing are eliminated by the image opening and closing operations.
  • step S13 human head recognition is performed according to the result of the pre-processing, and the detected human head is used as the target object to be tracked by the mean shift.
  • the step S13 of performing the human head recognition based on the result of the preprocessing and using the detected human head as the target object to be tracked by the mean shift specifically includes S131-S134, as shown in FIG.
  • FIG. 3 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention.
  • in step S131, a cascade classifier for judging whether a region is the top of a human head is made using the collected positive samples containing human heads and negative samples not containing human heads.
  • iterative AdaBoost training based on LBP features is performed on the collected positive samples containing human heads and negative samples (size 20×20) not containing human heads, producing a cascade classifier that judges whether a region belongs to the top of a human head.
  • in step S132, the size range of the detection window with which the cascade classifier detects head tops is limited.
  • the detection window size w of the cascade classifier is set to lie between W_min and W_max, which are determined according to the size of the bus; the position of the human head is detected by moving the window w over the frame image, the movement detection rule adopting the integral image method; if no human head is detected, the detection window is enlarged by a factor of 1.5, without exceeding the range from W_min to W_max.
  • step S133 human head recognition is performed in accordance with the cascade classifier that is limited in the size of the detection window.
  • head samples that have been judged to be head tops by step S131 but do not satisfy the size range condition of step S132 are deleted; the detection result of the step S131 cascade classifier alone is not accepted.
  • only head-top samples that satisfy both steps S131 and S132 are accepted by the subsequent steps.
  • step S134 the detected human head is used as the target object to be tracked by the mean shift (Meanshift).
  • step S14 the passenger's getting off behavior and the boarding behavior are determined in the area where the target object is located, and the in-vehicle passenger crowding degree is determined according to the number of passengers getting on and off.
  • the step S14 of determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off, specifically includes S141-S142, as shown in FIG. 4.
  • FIG. 4 is a detailed flowchart of step S14 shown in FIG. 1 according to an embodiment of the present invention.
  • step S141 two detection lines are disposed in the imaging area of the camera, and if the centroid of the target object to be tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle.
  • two detection lines are set in the camera's shooting area, placed respectively on the road surface at a certain distance outside the vehicle door and on the vehicle floor at a certain distance inside the door; the detected human head is taken as the target object to be tracked by mean shift (Meanshift), and the probability density estimate {q_u}, u = 1...m (where u is the color index of the histogram), the estimated target center position y_0 and the kernel window width h are computed for that region.
  • the region histogram of the current frame is computed with the estimated center position y_0 of the target object in frame (n−1) as the center coordinate of the search window; the similarity between the histograms of the target template and the candidate-region template is computed with the Bhattacharyya (BH) coefficient, a larger BH coefficient indicating higher similarity, and the position of the maximum BH coefficient is the new target position.
  • the centroid coordinates of the target object are computed in every frame, and when the centroid crosses the upper and lower detection lines, the passenger's getting-off or boarding behavior is determined.
  • in step S142, the total number of passengers during actual operation is determined by calculating the numbers of passengers getting off and getting on, and the in-vehicle passenger crowding degree is measured by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
  • the number of passengers getting on minus the number getting off gives the total number of passengers in the vehicle; dividing the total number of passengers during actual operation by the maximum passenger capacity of the bus yields a crowding factor describing the in-vehicle congestion, and the higher the factor, the more crowded the vehicle, and vice versa.
  • the calculation method of the passenger crowding degree provided by the invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.
  • the embodiment of the present invention further provides a calculation system 10 for passenger crowding degree, which mainly includes:
  • the data collection module 11 is configured to establish a video data collection environment, and start collecting video data of passengers getting on and off the vehicle;
  • the pre-processing module 12 is configured to read video data of the collected passengers on and off the vehicle, and perform pre-processing of the continuous multi-frame image on the video data;
  • the object determining module 13 is configured to perform human head recognition according to the result of the preprocessing, and use the detected human head as a target object to be tracked by the mean shift;
  • the congestion determination module 14 is configured to determine the passenger's getting off behavior and the boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
  • the calculation system 10 for passenger crowding provided by the present invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.
  • the calculation system 10 for passenger crowding degree mainly includes a data acquisition module 11, a preprocessing module 12, an object determination module 13, and a congestion determination module 14.
  • the data collection module 11 is configured to establish a video data collection environment and start collecting video data of passengers getting on and off the vehicle.
  • the specific collection method of the video data is described in detail in the foregoing step S11, and the description is not repeated here.
  • the pre-processing module 12 is configured to read video data of the collected passengers on and off the vehicle, and perform pre-processing of the continuous multi-frame image on the video data.
  • the pre-processing module 12 specifically includes a read frame sub-module 121, a modeling sub-module 122, a status sub-module 123, a modification sub-module 124 and an update sub-module 125, as shown in FIG. 6.
  • FIG. 6 is a schematic diagram showing the internal structure of the pre-processing module 12 shown in FIG. 5 according to an embodiment of the present invention.
  • the read frame sub-module 121 is configured to read the frame format of the collected video data of the passengers getting on and off and determine the number of frames.
  • the modeling sub-module 122 is configured to establish a single Gaussian model for each pixel in the initial frame.
  • the specific method for establishing the Gaussian model is described in detail in the foregoing step S122, and the description is not repeated here.
  • the status sub-module 123 is configured to analyze the change of pixel points of consecutive image frames and determine that each pixel point in the frame image is a static background or a dynamic foreground. In the embodiment, the specific method of the determination is described in detail in the foregoing step S123, and the description is not repeated here.
  • the modification sub-module 124 is configured to modify the pixel value of the pixel determined as the static background, and the pixel value of the pixel determined as the dynamic foreground is not modified.
  • the update sub-module 125 is configured to record the number of times each pixel in the frame image is consecutively determined to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively determined to be a static background, the count is restarted. In this embodiment, the specific method of updating is described in detail in the foregoing step S125 and is not repeated here.
  • the object determining module 13 is configured to perform human head recognition according to the result of the pre-processing, and use the detected human head as a target object to be tracked by the mean shift.
  • the object determining module 13 specifically includes a production sub-module 131, a defining sub-module 132, an identifying sub-module 133, and a target sub-module 134, as shown in FIG.
  • FIG. 7 is a schematic diagram showing the internal structure of the object determining module 13 shown in FIG. 5 according to an embodiment of the present invention.
  • the production sub-module 131 is configured to make a cascade classifier for judging whether a region is the top of a human head, using the collected positive samples containing human heads and negative samples not containing human heads.
  • iterative AdaBoost training based on LBP features is performed on the collected positive samples containing human heads and the negative samples (size 20×20) not containing human heads, producing a cascade classifier that judges whether a region belongs to the top of a human head.
  • the defining sub-module 132 is configured to define a size range of the detection window in the cascade classifier for detecting the top of the human head.
  • the detection window size w of the cascade classifier is set to lie between W_min and W_max, which are determined according to the size of the bus; the position of the human head is detected by moving the window w over the frame image, the movement detection rule adopting the integral image method; if no human head is detected, the detection window is enlarged by a factor of 1.5, without exceeding the range from W_min to W_max.
  • the identification sub-module 133 is configured to perform human head recognition according to a cascade classifier that is limited in size of the detection window.
  • the target sub-module 134 is configured to use the detected human head as a target object to be tracked by the mean shift.
  • the congestion determination module 14 is configured to determine the passenger's getting off behavior and the boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
  • the congestion determination module 14 specifically includes a first determination sub-module 141 and a second determination sub-module 142, as shown in FIG.
  • FIG. 8 is a schematic diagram showing the internal structure of the congestion determination module 14 shown in FIG. 5 according to an embodiment of the present invention.
  • the first determining sub-module 141 is configured to set two detection lines in the shooting area of the camera; if the centroid of the target object to be tracked by the mean shift passes through the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle.
  • two detection lines are set in the camera's shooting area, placed respectively on the road surface at a certain distance outside the vehicle door and on the vehicle floor at a certain distance inside the door; the detected human head is taken as the target object to be tracked by mean shift (Meanshift), and the probability density estimate {q_u}, u = 1...m (where u is the color index of the histogram), the estimated target center position y_0 and the kernel window width h are computed for that region.
  • the region histogram of the current frame is computed with the estimated center position y_0 of the target object in frame (n−1) as the center coordinate of the search window; the similarity between the histograms of the target template and the candidate-region template is computed with the Bhattacharyya (BH) coefficient, a larger BH coefficient indicating higher similarity, and the position of the maximum BH coefficient is the new target position.
  • the centroid coordinates of the target object are computed in every frame, and when the centroid crosses the upper and lower detection lines, the passenger's getting-off or boarding behavior is determined.
  • the second determining sub-module 142 is configured to determine the total number of passengers during actual operation by calculating the numbers of passengers getting off and getting on, and to measure the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
  • the number of passengers getting on minus the number getting off gives the total number of passengers in the vehicle; dividing this total by the maximum passenger capacity of the bus yields a crowding factor describing the in-vehicle congestion, and the higher the factor, the more crowded the vehicle, and vice versa.
  • the calculation system 10 for passenger crowding provided by the present invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.
  • the units included above are divided only according to functional logic, but are not limited to this division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from one another and are not intended to limit the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for calculating passenger crowding degree, comprising: establishing a video data collection environment, and starting to collect video data of passengers getting on and off the vehicle (S11); reading the collected video data of passengers getting on and off, and performing preprocessing of consecutive multi-frame images on the video data (S12); performing human head recognition according to the result of the preprocessing, and taking the detected human head as the target object to be tracked by the mean shift (S13); determining the passenger's getting-off behavior and boarding behavior in the area of the target object, and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off (S14). A system for calculating passenger crowding degree is also provided.

Description

Method and system for calculating passenger crowding degree

Technical Field

The present invention relates to the field of public transportation, and in particular to a method and system for calculating passenger crowding degree.

Background Art

With continuing urbanization, urban populations keep growing, and the traffic pressure faced by large and medium-sized Chinese cities is becoming increasingly severe. In some large cities, peak-hour traffic congestion has become a serious obstacle to sustainable urban development. An effective way to relieve peak-hour congestion is to encourage urban residents to travel by bus, but the insufficient capacity of some bus routes during peak hours dampens residents' enthusiasm for taking buses. The best remedy for insufficient peak-hour capacity is to increase capacity, but simply adding buses to popular routes produces excess capacity during off-peak hours, which is neither economical nor environmentally friendly; hence the concept of intelligent bus dispatching. Intelligent bus dispatching uses a terminal installed on every bus to count the number of passengers getting on, the number getting off and the total number of passengers on each bus of every route, so as to monitor the passenger load of each current bus route in each time period. In addition, intelligent bus dispatching can perform deep data mining on historical data of the flows of people getting on and off at each station in each time period, providing a basis for formulating reasonable and efficient bus routes. Therefore, as the most important part of an intelligent bus dispatching system, accurate counting of bus passengers is the key to realizing such a system.

Traditional passenger flow statistics rely on manual counting, or on contact devices such as coin boxes and card readers. Although manual counting can obtain passenger flow data with acceptable accuracy, it consumes a great deal of manpower and financial resources, is costly and has poor real-time performance. Next come infrared detection systems, which can count the numbers of people getting on and off at the same time. However, infrared devices are easily disturbed by external factors: people passing continuously or lingering for a long time may cause counting errors, so such devices cannot meet the accuracy requirements that an intelligent bus dispatching system places on passenger counting. Moreover, because an infrared system can only count the number of people passing through a door and cannot judge the direction of passenger movement, it cannot count passenger flow through a single door in both directions, and therefore cannot be applied to bus rapid transit (BRT) systems that do not distinguish boarding doors from alighting doors. As BRT systems cover more and more routes today, the applicability of infrared-based bus passenger counting keeps decreasing.
Summary of the Invention

In view of this, an object of the present invention is to provide a method and system for calculating passenger crowding degree, aiming to solve the problem that the accuracy of passenger flow statistics in the prior art is not high.

The present invention provides a method for calculating passenger crowding degree, the method comprising:

establishing a video data collection environment, and starting to collect video data of passengers getting on and off the vehicle;

reading the collected video data of passengers getting on and off, and performing preprocessing of consecutive multi-frame images on the video data;

performing human head recognition according to the result of the preprocessing, and taking the detected human head as the target object to be tracked by the mean shift;

determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off.

Preferably, the step of reading the collected video data of passengers getting on and off and performing preprocessing of consecutive multi-frame images on the video data specifically includes:

reading the frame format of the collected video data of passengers getting on and off and determining the number of frames;

establishing a single Gaussian model for each pixel in the initial frame;

analyzing the changes of the pixels of several consecutive image frames and determining whether each pixel in the frame image is a static background or a dynamic foreground;

modifying the pixel values of pixels determined to be static background, and leaving the pixel values of pixels determined to be dynamic foreground unmodified;

recording the number of times each pixel in the frame image is consecutively judged to be a static background; if the count is greater than or equal to a preset threshold, immediately updating the RGB value of the pixel into the background, and if the pixel is not consecutively judged to be a static background, restarting the count.

Preferably, the step of performing human head recognition according to the result of the preprocessing and taking the detected human head as the target object to be tracked by the mean shift specifically includes:

making a cascade classifier for judging whether a region is the top of a human head, using a plurality of collected positive samples containing human heads and negative samples not containing human heads;

limiting the size range of the detection window with which the cascade classifier detects head tops;

performing human head recognition with the cascade classifier whose detection window size is limited;

taking the detected human head as the target object to be tracked by the mean shift.

Preferably, the step of determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off specifically includes:

setting two detection lines in the area captured by the camera, and if the centroid of the target object tracked by the mean shift crosses the above two detection lines, determining that the passenger is getting off or getting on the vehicle;

determining the total number of passengers during actual operation by calculating the numbers of passengers getting off and getting on, and measuring the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
In another aspect, the present invention further provides a system for calculating passenger crowding degree, the system comprising:

a data collection module, configured to establish a video data collection environment and start collecting video data of passengers getting on and off the vehicle;

a preprocessing module, configured to read the collected video data of passengers getting on and off, and perform preprocessing of consecutive multi-frame images on the video data;

an object determination module, configured to perform human head recognition according to the result of the preprocessing, and take the detected human head as the target object to be tracked by the mean shift;

a crowding determination module, configured to determine the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.

Preferably, the preprocessing module includes:

a frame reading sub-module, configured to read the frame format of the collected video data of passengers getting on and off and determine the number of frames;

a modeling sub-module, configured to establish a single Gaussian model for each pixel in the initial frame;

a state sub-module, configured to analyze the changes of the pixels of several consecutive image frames and determine whether each pixel in the frame image is a static background or a dynamic foreground;

a modification sub-module, configured to modify the pixel values of pixels determined to be static background, and leave the pixel values of pixels determined to be dynamic foreground unmodified;

an update sub-module, configured to record the number of times each pixel in the frame image is consecutively judged to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively judged to be a static background, the count is restarted.

Preferably, the object determination module includes:

a production sub-module, configured to make a cascade classifier for judging whether a region is the top of a human head, using a plurality of collected positive samples containing human heads and negative samples not containing human heads;

a limiting sub-module, configured to limit the size range of the detection window with which the cascade classifier detects head tops;

a recognition sub-module, configured to perform human head recognition with the cascade classifier whose detection window size is limited;

a target sub-module, configured to take the detected human head as the target object to be tracked by the mean shift.

Preferably, the crowding determination module includes:

a first determination sub-module, configured to set two detection lines in the area captured by the camera; if the centroid of the target object tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle;

a second determination sub-module, configured to determine the total number of passengers during actual operation by calculating the numbers of passengers getting off and getting on, and to measure the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.

The technical solution provided by the present invention uses preprocessing that removes the static background, which effectively overcomes interference from illumination changes and the like with recognizing head tops in frame images; the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.
Brief Description of the Drawings

FIG. 1 is a flowchart of a method for calculating passenger crowding degree in an embodiment of the present invention;

FIG. 2 is a detailed flowchart of step S12 shown in FIG. 1 in an embodiment of the present invention;

FIG. 3 is a detailed flowchart of step S13 shown in FIG. 1 in an embodiment of the present invention;

FIG. 4 is a detailed flowchart of step S14 shown in FIG. 1 in an embodiment of the present invention;

FIG. 5 is a schematic diagram of the internal structure of a system 10 for calculating passenger crowding degree in an embodiment of the present invention;

FIG. 6 is a schematic diagram of the internal structure of the preprocessing module 12 shown in FIG. 5 in an embodiment of the present invention;

FIG. 7 is a schematic diagram of the internal structure of the object determination module 13 shown in FIG. 5 in an embodiment of the present invention;

FIG. 8 is a schematic diagram of the internal structure of the crowding determination module 14 shown in FIG. 5 in an embodiment of the present invention.
Detailed Description of the Embodiments

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

A specific embodiment of the present invention provides a method for calculating passenger crowding degree, the method mainly comprising the following steps:

S11. Establish a video data collection environment, and start collecting video data of passengers getting on and off the vehicle;

S12. Read the collected video data of passengers getting on and off, and perform preprocessing of consecutive multi-frame images on the video data;

S13. Perform human head recognition according to the result of the preprocessing, and take the detected human head as the target object to be tracked by the mean shift;

S14. Determine the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.

The method for calculating passenger crowding degree provided by the present invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.

The method for calculating passenger crowding degree provided by the present invention is described in detail below.

Please refer to FIG. 1, a flowchart of a method for calculating passenger crowding degree in an embodiment of the present invention.

In step S11, a video data collection environment is established, and collection of video data of passengers getting on and off begins.

In this embodiment, the video data collection environment is established by building an embedded in-vehicle system. The embedded in-vehicle system includes the following hardware modules: two cameras, an embedded-device main control module, a video storage module and an in-vehicle hardware power supply module. The two cameras are mounted at the tops of the front and rear doors respectively, each camera pointing at 90 degrees to the ground, and their shooting range covers the boarding and alighting steps, part of the road outside the door and part of the vehicle interior inside the door. In this embodiment, the in-vehicle embedded device collects video data of passengers getting on and off by controlling the cameras installed at the tops of the front and rear doors of the bus, and temporarily stores the video recorded by the two cameras in the video storage module so that it can be retrieved for video processing. In this embodiment, this temporary-storage scheme effectively reduces the requirements on the embedded hardware and thus lowers the cost of the device.
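As an illustration only, and not part of the patent, the collection loop of step S11 might look like the following Python/OpenCV sketch; the camera indices, codec, resolution, frame rate and recording duration are all assumptions, and the dedicated on-board hardware is abstracted into ordinary capture devices and files.

```python
import cv2

def record_door_cameras(cam_ids=(0, 1),
                        out_paths=("front_door.avi", "rear_door.avi"),
                        fps=15.0, frame_size=(640, 480), seconds=60):
    """Record both door cameras to the on-board video store (step S11)."""
    caps = [cv2.VideoCapture(i) for i in cam_ids]
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    writers = [cv2.VideoWriter(p, fourcc, fps, frame_size) for p in out_paths]
    for _ in range(int(fps * seconds)):
        for cap, writer in zip(caps, writers):
            ok, frame = cap.read()
            if ok:
                writer.write(cv2.resize(frame, frame_size))
    for cap in caps:
        cap.release()
    for writer in writers:
        writer.release()
```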
In step S12, the collected video data of passengers getting on and off is read, and preprocessing of consecutive multi-frame images is performed on the video data.

In this embodiment, the step S12 of reading the collected video data of passengers getting on and off and performing preprocessing of consecutive multi-frame images on the video data specifically includes S121 to S125, as shown in FIG. 2.

Please refer to FIG. 2, a detailed flowchart of step S12 shown in FIG. 1 in an embodiment of the present invention.

In step S121, the frame format of the collected video data of passengers getting on and off is read and the number of frames is determined.

In this embodiment, the embedded-device main control module reads the frame format of the video data, determines the number of frames, and determines the size m×n of the frame image, where m represents the number of rows of the frame image and n the number of columns.

In step S122, a single Gaussian model is established for each pixel in the initial frame.

In this embodiment, a single Gaussian model is established for each pixel of the initial frame; the mean of the initial single Gaussian model is the RGB value of the pixel and its variance is the initial constant V. A mixture-of-Gaussians density function is then gradually constructed according to the changes of the pixel:

$$P(x_t) = \sum_{i=1}^{K} W_{i,t}\,\eta(x_t, \mu_{i,t}, \tau_{i,t})$$

where

$$\eta(x_t, \mu_{i,t}, \tau_{i,t}) = \frac{\exp\bigl(-\tfrac{1}{2}(x_t-\mu_{i,t})^{\mathsf T}\tau_{i,t}^{-1}(x_t-\mu_{i,t})\bigr)}{(2\pi)^{3/2}\,\lvert\tau_{i,t}\rvert^{1/2}}, \qquad \tau_{i,t} = \sigma_{i,t}^{2} I,$$

η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, I is the three-dimensional identity matrix, K is the total number of Gaussian distribution modes, and μ_{i,t}, W_{i,t}, τ_{i,t} and σ_{i,t} represent the mean, weight, covariance matrix and variance of each Gaussian function, respectively.
In step S123, the changes of the pixels of several consecutive image frames are analyzed, and each pixel in the frame image is determined to be a static background or a dynamic foreground.

In this embodiment, through the changes of the image pixels over several consecutive frames, the single Gaussian model of the initial frame gradually evolves into a mixture-of-Gaussians model; each pixel has at least K (K≥1) Gaussian functions, and each pixel in the frame image is judged to be a static background or a dynamic foreground from the changes between frames.

In this embodiment, following the order of the video stream, the RGB values of the m×n pixels of the following frame are matched in turn against the K (K≥1) Gaussian functions of the corresponding pixels of the previous frame:

If the match succeeds with one or several of the K functions, at least one of the K Gaussian functions can describe the change of the current pixel's RGB value; the weights W_{i,t} of the successfully matched Gaussian functions are increased, the means and variances of the remaining (K−1) unmatched Gaussian functions are updated, and the RGB value of the current pixel is determined to belong to the static background;

If the match fails, none of the K Gaussian functions can describe the change of the current pixel's RGB value; the Gaussian function with the lowest weight is deleted, a new Gaussian function is created with the current pixel's RGB value as its mean and V as its variance, and the RGB value of the current pixel is determined to belong to the dynamic foreground.
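To make the match-and-update rule of step S123 concrete, here is a minimal NumPy sketch for a single pixel, assuming its Gaussian list was initialized as in step S122. The 2.5-standard-deviation match test and the learning rate are common conventions in mixture-of-Gaussians background modeling rather than values fixed by the patent; note that, following the patent text above, it is the unmatched functions whose means and variances are updated.

```python
import numpy as np

V = 15.0 ** 2   # initial variance constant V (illustrative value)
ALPHA = 0.05    # learning rate (assumed; the patent does not fix one)

def init_pixel(rgb):
    """Step S122: one Gaussian per pixel, mean = RGB value, variance = V."""
    return [{"mean": np.asarray(rgb, dtype=float), "var": V, "weight": 1.0}]

def update_pixel(gaussians, rgb):
    """One step-S123 match-and-update for a single pixel.

    Returns True if the pixel is judged static background, False if it is
    dynamic foreground.
    """
    x = np.asarray(rgb, dtype=float)
    for g in gaussians:
        # Match test: within 2.5 standard deviations of the mean.
        if np.linalg.norm(x - g["mean"]) < 2.5 * np.sqrt(g["var"]):
            g["weight"] += ALPHA * (1.0 - g["weight"])  # grow matched weight W_{i,t}
            for other in gaussians:
                if other is not g:                      # update the unmatched
                    diff = x - other["mean"]            # (K-1) functions' mean
                    other["mean"] = other["mean"] + ALPHA * diff       # and
                    other["var"] = other["var"] + ALPHA * (diff @ diff - other["var"])
            return True                                 # static background
    # No match: cap K at four (per the patent), deleting the lowest-weight
    # Gaussian if needed, then create a new one with mean = RGB and var = V.
    if len(gaussians) >= 4:
        gaussians.sort(key=lambda g: g["weight"])
        gaussians.pop(0)
    gaussians.append({"mean": x.copy(), "var": V, "weight": ALPHA})
    return False                                        # dynamic foreground
```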
In step S124, the pixel values of pixels determined to be static background are modified, and the pixel values of pixels determined to be dynamic foreground are not modified.

In this embodiment, the pixel values of pixels determined to be static background are modified to RGB_s, where RGB_s represents the color with the lowest rate of occurrence in daily life, while pixels determined to be dynamic foreground keep their pixel values. In this embodiment, the color modification made in this step removes static-background interference from the human head detection and trajectory tracking of the subsequent steps, greatly improving their success rate.

In step S125, the number of times each pixel in the frame image is consecutively judged to be a static background is recorded; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively judged to be a static background, the count is restarted.

In this embodiment, considering that mixture-of-Gaussians background modeling takes time, and that in real scenes passengers may get on and off in less time than the modeling requires, a weight growth mechanism is added for the Gaussian functions of the original mixture-of-Gaussians modeling algorithm: the number of times each pixel in the frame image is consecutively judged to be a static background is recorded, and once this count is greater than or equal to the preset threshold K, the RGB value of the pixel is updated into the background immediately, without waiting for the Gaussian function to reach the threshold of the original algorithm; if the pixel is not consecutively judged to be a static background, the count is restarted.

In this embodiment, each frame image has a total of m×n pixels, and the number K of Gaussian density functions belonging to each pixel should not exceed four; if it exceeds four, the Gaussian function with the lowest weight is deleted. The small, isolated points produced by the image preprocessing are eliminated by morphological opening and closing operations on the image.
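A compact approximation of the whole S12 preprocessing stage can be written with OpenCV's built-in mixture-of-Gaussians subtractor standing in for the custom model described above; the consecutive-static counter, the RGB_s repaint and the opening/closing cleanup follow the patent text, while the RGB_s color, the count threshold and the model history length are assumed values.

```python
import cv2
import numpy as np

RGB_S = (255, 0, 255)    # stand-in for the rarest everyday color (BGR order)
COUNT_THRESHOLD = 30     # assumed preset threshold for the consecutive-static count

class Preprocessor:
    """Approximation of step S12 applied to consecutive frames of door video."""

    def __init__(self):
        self.bg = cv2.createBackgroundSubtractorMOG2(history=200)
        self.static_count = None   # per-pixel consecutive-static counter (S125)

    def process(self, frame_bgr):
        fg_mask = self.bg.apply(frame_bgr)      # nonzero where dynamic foreground
        foreground = fg_mask > 0
        if self.static_count is None:
            self.static_count = np.zeros(fg_mask.shape, dtype=np.int32)
        # S125 weight-growth mechanism: count consecutive static judgements and
        # restart the count wherever a pixel turns dynamic again. In the patent,
        # reaching the threshold forces the pixel's RGB value into the background
        # at once; MOG2 updates its own model, so here the counter is bookkeeping.
        self.static_count[~foreground] += 1
        self.static_count[foreground] = 0
        # S124: repaint static background with RGB_s, keep foreground pixels.
        out = frame_bgr.copy()
        out[~foreground] = RGB_S
        # Opening and closing remove the small isolated points that remain.
        kernel = np.ones((3, 3), np.uint8)
        mask = foreground.astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        out[mask == 0] = RGB_S
        return out, mask
```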
Please continue to refer to FIG. 1. In step S13, human head recognition is performed according to the result of the preprocessing, and the detected human head is taken as the target object to be tracked by the mean shift.

In this embodiment, the step S13 of performing human head recognition according to the result of the preprocessing and taking the detected human head as the target object to be tracked by the mean shift specifically includes S131 to S134, as shown in FIG. 3.

Please refer to FIG. 3, a detailed flowchart of step S13 shown in FIG. 1 in an embodiment of the present invention.

In step S131, a cascade classifier for judging whether a region is the top of a human head is made using a plurality of collected positive samples containing human heads and negative samples not containing human heads.

In this embodiment, iterative AdaBoost training based on LBP features is performed on the collected positive samples containing human heads and negative samples (size 20×20) not containing human heads, producing a cascade classifier that judges whether a region belongs to the top of a human head.

In step S132, the size range of the detection window with which the cascade classifier detects head tops is limited.

In this embodiment, the detection window size w of the cascade classifier is set to lie between W_min and W_max, which are determined according to the size of the bus. The position of the human head is detected by moving the window w over the frame image, the movement detection rule adopting the integral image method; if no human head is detected, the detection window is enlarged by a factor of 1.5, without exceeding the range from W_min to W_max.

In step S133, human head recognition is performed with the cascade classifier whose detection window size is limited.

In this embodiment, head samples that have been judged to be head tops by step S131 but do not satisfy the size range condition of step S132 are deleted; the detection result of the step S131 cascade classifier alone is not accepted, and only head-top samples that satisfy both steps S131 and S132 are accepted by the subsequent steps.

In step S134, the detected human head is taken as the target object to be tracked by the mean shift (Meanshift).
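The size constraint and the 1.5x window growth of steps S131 to S133 map naturally onto OpenCV's cascade interface, sketched below: detectMultiScale scans the image with an integral-image-based cascade, enlarging the scan window by scaleFactor on each pass, and minSize/maxSize play the roles of W_min and W_max. The cascade file name and the window bounds are placeholders; a real deployment would first train an LBP cascade on 20x20 head-top samples, for example with OpenCV's opencv_traincascade tool.

```python
import cv2

# Placeholders: W_MIN and W_MAX depend on the bus and camera geometry (step S132).
W_MIN, W_MAX = 40, 160

# An LBP cascade trained on 20x20 head-top positives and negatives (step S131);
# the file name is hypothetical.
head_cascade = cv2.CascadeClassifier("lbp_head_top.xml")

def detect_head_tops(preprocessed_gray):
    """Step S133: cascade detection restricted to the [W_MIN, W_MAX] window range."""
    return head_cascade.detectMultiScale(
        preprocessed_gray,
        scaleFactor=1.5,           # enlarge the scan window by 1.5x per pass
        minNeighbors=3,
        minSize=(W_MIN, W_MIN),    # reject windows smaller than W_min
        maxSize=(W_MAX, W_MAX),    # or larger than W_max (step S132)
    )
```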
Please continue to refer to FIG. 1. In step S14, the passenger's getting-off behavior and boarding behavior are determined in the area where the target object is located, and the in-vehicle passenger crowding degree is determined according to the number of passengers getting on and off.

In this embodiment, the step S14 of determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off specifically includes S141 to S142, as shown in FIG. 4.

Please refer to FIG. 4, a detailed flowchart of step S14 shown in FIG. 1 in an embodiment of the present invention.

In step S141, two detection lines are set in the area captured by the camera, and if the centroid of the target object tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle.

In this embodiment, two detection lines are set in the area captured by the camera, placed respectively on the road surface at a certain distance outside the vehicle door and on the vehicle floor at a certain distance inside the door. The detected human head is taken as the target object to be tracked by the mean shift (Meanshift), and the probability density estimate {q_u}, u = 1...m (where u is the color index of the histogram), the estimated target center position y_0 and the kernel window width h are computed for that region.

In this embodiment, the region histogram of the current frame is computed with the estimated center position y_0 of the target object in frame (n−1) as the center coordinate of the search window; the similarity between the histograms of the target template and the candidate-region template is computed with the Bhattacharyya (BH) coefficient, a larger BH coefficient indicating higher similarity, and the position of the maximum BH coefficient is the new target position. The centroid coordinates of the target object are computed in every frame, and when the centroid crosses the two detection lines, the passenger's getting-off or boarding behavior is determined.
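The tracking loop of step S141 can be sketched with OpenCV's meanShift on a histogram back-projection, as below. OpenCV climbs the back-projected density directly rather than maximizing the Bhattacharyya coefficient explicitly, so this is a stand-in for, not a reproduction of, the BH-coefficient search described above; the 16-bin hue histogram and the termination criteria are assumptions.

```python
import cv2

def track_head(frames, head_box):
    """Track one detected head across frames; yield its centroid per frame.

    head_box is the (x, y, w, h) window from the cascade detector; the first
    frame is used to build the target histogram {q_u} (here a 16-bin hue
    histogram, an assumed choice).
    """
    x, y, w, h = head_box
    hsv_roi = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    q = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])  # target model {q_u}
    cv2.normalize(q, q, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], q, [0, 180], 1)
        # The search starts from the window estimated in the previous frame (y0).
        _, window = cv2.meanShift(backproj, window, term)
        wx, wy, ww, wh = window
        yield (wx + ww // 2, wy + wh // 2)  # centroid tested against the lines
```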
In step S142, the total number of passengers during actual operation is determined by calculating the numbers of passengers getting off and getting on, and the in-vehicle passenger crowding degree is measured by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.

In this embodiment, the number of passengers getting on minus the number getting off gives the total number of passengers in the vehicle; dividing the total number of passengers during actual operation by the maximum passenger capacity of the bus yields a crowding factor describing the in-vehicle congestion. The higher the factor, the more crowded the vehicle; the lower the factor, the sparser.
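The bookkeeping of steps S141 to S142 then reduces to watching each tracked centroid relative to the two detection lines and forming the ratio against capacity. In the sketch below, the line positions, the image orientation (vehicle interior toward smaller y) and the capacity in the worked example are illustrative.

```python
# Illustrative line-crossing counter and crowding factor (steps S141-S142).
INNER_LINE_Y = 120   # detection line on the vehicle floor, inside the door
OUTER_LINE_Y = 360   # detection line on the road surface, outside the door

def classify_crossing(centroid_trace):
    """Classify one head's centroid trace (a list of (x, y)) as a crossing."""
    ys = [cy for _, cy in centroid_trace]
    if not ys:
        return None
    if ys[0] < INNER_LINE_Y and ys[-1] > OUTER_LINE_Y:
        return "off"   # started inside the bus, crossed both lines outward
    if ys[0] > OUTER_LINE_Y and ys[-1] < INNER_LINE_Y:
        return "on"    # started on the road, crossed both lines inward
    return None

def crowding_factor(boarded, alighted, max_capacity):
    """Total on board = boarded - alighted; the factor is total / capacity."""
    return (boarded - alighted) / max_capacity

# Worked example: 42 boardings and 15 alightings on a bus with a maximum
# capacity of 90 give a crowding factor of (42 - 15) / 90 = 0.3.
```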
The method for calculating passenger crowding degree provided by the present invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.

A specific embodiment of the present invention further provides a system 10 for calculating passenger crowding degree, mainly comprising:

a data collection module 11, configured to establish a video data collection environment and start collecting video data of passengers getting on and off the vehicle;

a preprocessing module 12, configured to read the collected video data of passengers getting on and off, and perform preprocessing of consecutive multi-frame images on the video data;

an object determination module 13, configured to perform human head recognition according to the result of the preprocessing, and take the detected human head as the target object to be tracked by the mean shift;

a crowding determination module 14, configured to determine the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.

The system 10 for calculating passenger crowding degree provided by the present invention uses preprocessing that removes the static background to effectively overcome interference from illumination changes and the like with recognizing head tops in frame images, and the size limit on the detection window effectively reduces false detections, missed detections and incorrect detections of head tops.

Please refer to FIG. 5, a schematic diagram of the structure of the system 10 for calculating passenger crowding degree in an embodiment of the present invention. In this embodiment, the system 10 mainly includes a data collection module 11, a preprocessing module 12, an object determination module 13 and a crowding determination module 14.

The data collection module 11 is configured to establish a video data collection environment and start collecting video data of passengers getting on and off. In this embodiment, the specific collection method of the video data is described in detail in the foregoing step S11 and is not repeated here.

The preprocessing module 12 is configured to read the collected video data of passengers getting on and off, and perform preprocessing of consecutive multi-frame images on the video data.

In this embodiment, the preprocessing module 12 specifically includes a frame reading sub-module 121, a modeling sub-module 122, a state sub-module 123, a modification sub-module 124 and an update sub-module 125, as shown in FIG. 6.
Please refer to FIG. 6, a schematic diagram of the internal structure of the preprocessing module 12 shown in FIG. 5 in an embodiment of the present invention.

The frame reading sub-module 121 is configured to read the frame format of the collected video data of passengers getting on and off and determine the number of frames.

The modeling sub-module 122 is configured to establish a single Gaussian model for each pixel in the initial frame. In this embodiment, the specific method of establishing the Gaussian model is described in detail in the foregoing step S122 and is not repeated here.

The state sub-module 123 is configured to analyze the changes of the pixels of several consecutive image frames and determine whether each pixel in the frame image is a static background or a dynamic foreground. In this embodiment, the specific method of this determination is described in detail in the foregoing step S123 and is not repeated here.

The modification sub-module 124 is configured to modify the pixel values of pixels determined to be static background, and leave the pixel values of pixels determined to be dynamic foreground unmodified.

The update sub-module 125 is configured to record the number of times each pixel in the frame image is consecutively judged to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively judged to be a static background, the count is restarted. In this embodiment, the specific method of updating is described in detail in the foregoing step S125 and is not repeated here.

Please continue to refer to FIG. 5. The object determination module 13 is configured to perform human head recognition according to the result of the preprocessing, and take the detected human head as the target object to be tracked by the mean shift.

In this embodiment, the object determination module 13 specifically includes a production sub-module 131, a limiting sub-module 132, a recognition sub-module 133 and a target sub-module 134, as shown in FIG. 7.

Please refer to FIG. 7, a schematic diagram of the internal structure of the object determination module 13 shown in FIG. 5 in an embodiment of the present invention.

The production sub-module 131 is configured to make a cascade classifier for judging whether a region is the top of a human head, using a plurality of collected positive samples containing human heads and negative samples not containing human heads.

In this embodiment, iterative AdaBoost training based on LBP features is performed on the collected positive samples containing human heads and negative samples (size 20×20) not containing human heads, producing a cascade classifier that judges whether a region belongs to the top of a human head.

The limiting sub-module 132 is configured to limit the size range of the detection window with which the cascade classifier detects head tops.

In this embodiment, the detection window size w of the cascade classifier is set to lie between W_min and W_max, which are determined according to the size of the bus. The position of the human head is detected by moving the window w over the frame image, the movement detection rule adopting the integral image method; if no human head is detected, the detection window is enlarged by a factor of 1.5, without exceeding the range from W_min to W_max.

The recognition sub-module 133 is configured to perform human head recognition with the cascade classifier whose detection window size is limited.

The target sub-module 134 is configured to take the detected human head as the target object to be tracked by the mean shift.
Please continue to refer to FIG. 5. The crowding determination module 14 is configured to determine the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.

In this embodiment, the crowding determination module 14 specifically includes a first determination sub-module 141 and a second determination sub-module 142, as shown in FIG. 8.

Please refer to FIG. 8, a schematic diagram of the internal structure of the crowding determination module 14 shown in FIG. 5 in an embodiment of the present invention.

The first determination sub-module 141 is configured to set two detection lines in the area captured by the camera; if the centroid of the target object tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle.

In this embodiment, two detection lines are set in the area captured by the camera, placed respectively on the road surface at a certain distance outside the vehicle door and on the vehicle floor at a certain distance inside the door. The detected human head is taken as the target object to be tracked by the mean shift (Meanshift), and the probability density estimate {q_u}, u = 1...m (where u is the color index of the histogram), the estimated target center position y_0 and the kernel window width h are computed for that region.

In this embodiment, the region histogram of the current frame is computed with the estimated center position y_0 of the target object in frame (n−1) as the center coordinate of the search window; the similarity between the histograms of the target template and the candidate-region template is computed with the Bhattacharyya (BH) coefficient, a larger BH coefficient indicating higher similarity, and the position of the maximum BH coefficient is the new target position. The centroid coordinates of the target object are computed in every frame, and when the centroid crosses the two detection lines, the passenger's getting-off or boarding behavior is determined.

The second determination sub-module 142 is configured to determine the total number of passengers during actual operation by calculating the numbers of passengers getting off and getting on, and to measure the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.

In this embodiment, the number of passengers getting on minus the number getting off gives the total number of passengers in the vehicle; dividing the total number of passengers during actual operation by the maximum passenger capacity of the bus yields a crowding factor describing the in-vehicle congestion. The higher the factor, the more crowded the vehicle; the lower the factor, the sparser.

It is worth noting that, in the above embodiments, the included units are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from one another and are not intended to limit the scope of protection of the present invention.

In addition, those of ordinary skill in the art will understand that all or part of the steps in the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the corresponding program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc.

The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (8)

  1. A method for calculating passenger crowding degree, characterized in that the method comprises:
    establishing a video data collection environment, and starting to collect video data of passengers getting on and off the vehicle;
    reading the collected video data of passengers getting on and off, and performing preprocessing of consecutive multi-frame images on the video data;
    performing human head recognition according to the result of the preprocessing, and taking the detected human head as the target object to be tracked by the mean shift;
    determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
  2. The method for calculating passenger crowding degree according to claim 1, characterized in that the step of reading the collected video data of passengers getting on and off and performing preprocessing of consecutive multi-frame images on the video data specifically comprises:
    reading the frame format of the collected video data of passengers getting on and off and determining the number of frames;
    establishing a single Gaussian model for each pixel in the initial frame;
    analyzing the changes of the pixels of several consecutive image frames and determining whether each pixel in the frame image is a static background or a dynamic foreground;
    modifying the pixel values of pixels determined to be static background, and leaving the pixel values of pixels determined to be dynamic foreground unmodified;
    recording the number of times each pixel in the frame image is consecutively judged to be a static background; if the count is greater than or equal to a preset threshold, immediately updating the RGB value of the pixel into the background, and if the pixel is not consecutively judged to be a static background, restarting the count.
  3. The method for calculating passenger crowding degree according to claim 2, characterized in that the step of performing human head recognition according to the result of the preprocessing and taking the detected human head as the target object to be tracked by the mean shift specifically comprises:
    making a cascade classifier for judging whether a region is the top of a human head, using a plurality of collected positive samples containing human heads and negative samples not containing human heads;
    limiting the size range of the detection window with which the cascade classifier detects head tops;
    performing human head recognition with the cascade classifier whose detection window size is limited;
    taking the detected human head as the target object to be tracked by the mean shift.
  4. The method for calculating passenger crowding degree according to claim 3, characterized in that the step of determining the passenger's getting-off behavior and boarding behavior in the area where the target object is located and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off specifically comprises:
    setting two detection lines in the area captured by the camera, and if the centroid of the target object tracked by the mean shift crosses the above two detection lines, determining that the passenger is getting off or getting on the vehicle;
    determining the total number of passengers during actual operation by calculating the numbers of passengers getting off and getting on, and measuring the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
  5. A system for calculating passenger crowding degree, characterized in that the system comprises:
    a data collection module, configured to establish a video data collection environment and start collecting video data of passengers getting on and off the vehicle;
    a preprocessing module, configured to read the collected video data of passengers getting on and off, and perform preprocessing of consecutive multi-frame images on the video data;
    an object determination module, configured to perform human head recognition according to the result of the preprocessing, and take the detected human head as the target object to be tracked by the mean shift;
    a crowding determination module, configured to determine the passenger's getting-off behavior and boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
  6. The system for calculating passenger crowding degree according to claim 5, characterized in that the preprocessing module comprises:
    a frame reading sub-module, configured to read the frame format of the collected video data of passengers getting on and off and determine the number of frames;
    a modeling sub-module, configured to establish a single Gaussian model for each pixel in the initial frame;
    a state sub-module, configured to analyze the changes of the pixels of several consecutive image frames and determine whether each pixel in the frame image is a static background or a dynamic foreground;
    a modification sub-module, configured to modify the pixel values of pixels determined to be static background, and leave the pixel values of pixels determined to be dynamic foreground unmodified;
    an update sub-module, configured to record the number of times each pixel in the frame image is consecutively judged to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not consecutively judged to be a static background, the count is restarted.
  7. The system for calculating passenger crowding degree according to claim 6, characterized in that the object determination module comprises:
    a production sub-module, configured to make a cascade classifier for judging whether a region is the top of a human head, using a plurality of collected positive samples containing human heads and negative samples not containing human heads;
    a limiting sub-module, configured to limit the size range of the detection window with which the cascade classifier detects head tops;
    a recognition sub-module, configured to perform human head recognition with the cascade classifier whose detection window size is limited;
    a target sub-module, configured to take the detected human head as the target object to be tracked by the mean shift.
  8. The system for calculating passenger crowding degree according to claim 7, characterized in that the crowding determination module comprises:
    a first determination sub-module, configured to set two detection lines in the area captured by the camera; if the centroid of the target object tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle;
    a second determination sub-module, configured to determine the total number of passengers during actual operation by calculating the numbers of passengers getting off and getting on, and to measure the in-vehicle passenger crowding degree by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
PCT/CN2016/076740 2016-03-18 2016-03-18 Method and system for calculating passenger crowding degree WO2017156772A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2016/076740 WO2017156772A1 (zh) 2016-03-18 2016-03-18 Method and system for calculating passenger crowding degree
JP2018502630A JP6570731B2 (ja) 2016-03-18 2016-03-18 Method for calculating passenger congestion degree and system therefor
US15/628,605 US10223597B2 (en) 2016-03-18 2017-06-20 Method and system for calculating passenger crowdedness degree

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/076740 WO2017156772A1 (zh) 2016-03-18 2016-03-18 Method and system for calculating passenger crowding degree

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/628,605 Continuation US10223597B2 (en) 2016-03-18 2017-06-20 Method and system for calculating passenger crowdedness degree

Publications (1)

Publication Number Publication Date
WO2017156772A1 true WO2017156772A1 (zh) 2017-09-21

Family

ID=59852232

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076740 WO2017156772A1 (zh) 2016-03-18 2016-03-18 一种乘客拥挤度的计算方法及其系统

Country Status (3)

Country Link
US (1) US10223597B2 (zh)
JP (1) JP6570731B2 (zh)
WO (1) WO2017156772A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345878A (zh) * 2018-04-16 2018-07-31 泰华智慧产业集团股份有限公司 基于视频的公共交通工具客流量监测方法及系统
CN111046788A (zh) * 2019-12-10 2020-04-21 北京文安智能技术股份有限公司 一种停留人员的检测方法、装置及系统
CN112183469A (zh) * 2020-10-27 2021-01-05 华侨大学 一种公共交通的拥挤度识别及自适应调整方法、系统、设备及计算机可读存储介质
CN112434566A (zh) * 2020-11-04 2021-03-02 深圳云天励飞技术股份有限公司 客流统计方法、装置、电子设备及存储介质
CN112580633A (zh) * 2020-12-25 2021-03-30 博大视野(厦门)科技有限公司 一种公共交通客流统计装置及方法
CN112785462A (zh) * 2020-02-27 2021-05-11 吴秋琴 基于大数据的景区客流量统计评估系统
CN113255480A (zh) * 2021-05-11 2021-08-13 中国联合网络通信集团有限公司 公交车内拥挤程度识别方法、系统、计算机设备及介质
CN113420693A (zh) * 2021-06-30 2021-09-21 成都新潮传媒集团有限公司 一种门状态检测方法、装置、轿厢乘客流量统计方法及设备
CN114596537A (zh) * 2022-05-10 2022-06-07 深圳市海清视讯科技有限公司 区域人流数据确定方法、装置、设备及存储介质

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10460198B2 (en) * 2015-12-23 2019-10-29 Fotonation Limited Image processing system
US10699572B2 (en) 2018-04-20 2020-06-30 Carrier Corporation Passenger counting for a transportation system
CN108960133B (zh) * 2018-07-02 2022-01-11 京东方科技集团股份有限公司 乘客流量监控的方法、电子设备、系统以及存储介质
CN110895662A (zh) * 2018-09-12 2020-03-20 杭州海康威视数字技术股份有限公司 车辆超载报警方法、装置、电子设备及存储介质
CN109684998A (zh) * 2018-12-21 2019-04-26 佛山科学技术学院 一种基于图像处理的地铁人流管理的系统和方法
CN110490110A (zh) * 2019-01-29 2019-11-22 王馨悦 基于人体工程学特征检测的客流计数装置及方法
JP7200715B2 (ja) 2019-02-05 2023-01-10 トヨタ自動車株式会社 情報処理システム、プログラム、及び車両
CN111079488B (zh) * 2019-05-27 2023-09-26 广东快通信息科技有限公司 一种基于深度学习的公交客流检测系统及方法
CN112026686B (zh) * 2019-06-04 2022-04-12 上海汽车集团股份有限公司 一种自动调节车辆座椅位置的方法及装置
CN112347814A (zh) * 2019-08-07 2021-02-09 中兴通讯股份有限公司 客流估计与展示方法、系统及计算机可读存储介质
CN110633671A (zh) * 2019-09-16 2019-12-31 天津通卡智能网络科技股份有限公司 基于深度图像的公交车客流实时统计方法
CN112541374B (zh) * 2019-09-20 2024-04-30 南京行者易智能交通科技有限公司 一种基于深度学习的乘客属性的获取方法、装置及模型训练方法
CN110852155A (zh) * 2019-09-29 2020-02-28 深圳市深网视界科技有限公司 公交乘客拥挤度检测方法、系统、装置及存储介质
CN110930432A (zh) * 2019-11-19 2020-03-27 北京文安智能技术股份有限公司 一种视频分析方法、装置及系统
KR102470251B1 (ko) * 2019-12-18 2022-11-23 한국교통대학교 산학협력단 플랫폼 대기승객 화상정보를 기반으로 한 기계학습을 이용한 스크린도어 개폐 시간 제어 방법 및 이를 위한 컴퓨팅장치
CN111079696B (zh) * 2019-12-30 2023-03-24 深圳市昊岳电子有限公司 一种基于车辆监控人员拥挤度的检测方法
CN111611951A (zh) * 2020-05-27 2020-09-01 中航信移动科技有限公司 一种基于机器视觉的安检人流量实时监控系统及方法
CN112241688A (zh) * 2020-09-24 2021-01-19 厦门卫星定位应用股份有限公司 车厢拥挤度检测方法及系统
CN112565715A (zh) * 2020-12-30 2021-03-26 浙江大华技术股份有限公司 一种景点客流量监控方法、装置、电子设备及存储介质
CN112766950A (zh) * 2020-12-31 2021-05-07 广州广电运通智能科技有限公司 一种动态路径费用确定方法、装置、设备及介质
CN113470222A (zh) * 2021-05-24 2021-10-01 支付宝(杭州)信息技术有限公司 公共交通车辆客流统计系统和方法
CN114299746B (zh) * 2021-12-30 2023-04-14 武汉长飞智慧网络技术有限公司 基于图像识别的公交车辆调度方法、设备及介质
CN115072510B (zh) * 2022-06-08 2023-04-18 上海交通大学 基于门开关的电梯轿厢乘客智能识别与分析方法及系统
CN115829210B (zh) * 2023-02-20 2023-04-28 安徽阿瑞特汽车电子科技有限公司 一种汽车行驶过程智能监测管理方法、系统及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324121A (zh) * 2011-04-29 2012-01-18 重庆市科学技术研究院 一种公交车内拥挤程度检测方法
CN102622578A (zh) * 2012-02-06 2012-08-01 中山大学 一种乘客计数系统及方法
CN104504377A (zh) * 2014-12-25 2015-04-08 中邮科通信技术股份有限公司 一种公交车乘客拥挤程度识别系统及方法
CN104724566A (zh) * 2013-12-24 2015-06-24 株式会社日立制作所 具备图像识别功能的电梯

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3121190B2 (ja) * 1993-12-15 2000-12-25 松下電器産業株式会社 通過人数検知センサおよびその連動装置
JP2001258016A (ja) * 2000-03-14 2001-09-21 Japan Radio Co Ltd 監視カメラシステム及びその制御方法
US20090041297A1 (en) * 2005-05-31 2009-02-12 Objectvideo, Inc. Human detection and tracking for security applications
CN101464946B (zh) * 2009-01-08 2011-05-18 上海交通大学 基于头部识别和跟踪特征的检测方法
RU2484531C2 (ru) * 2009-01-22 2013-06-10 Государственное научное учреждение центральный научно-исследовательский и опытно-конструкторский институт робототехники и технической кибернетики (ЦНИИ РТК) Устройство обработки видеоинформации системы охранной сигнализации
US8659643B2 (en) * 2011-01-18 2014-02-25 Disney Enterprises, Inc. Counting system for vehicle riders
US20130128050A1 (en) * 2011-11-22 2013-05-23 Farzin Aghdasi Geographic map based control
CN103854273B (zh) * 2012-11-28 2017-08-25 天佑科技股份有限公司 一种近正向俯视监控视频行人跟踪计数方法和装置
JP5674857B2 (ja) * 2013-05-10 2015-02-25 技研トラステム株式会社 乗降者計数装置
JP6264181B2 (ja) * 2014-05-07 2018-01-24 富士通株式会社 画像処理装置、画像処理方法及び画像処理プログラム
EP2975561B1 (en) * 2014-07-14 2023-12-13 Gerrit Böhm Capacity prediction for public transport vehicles
CN104268506B (zh) * 2014-09-15 2017-12-15 郑州天迈科技股份有限公司 基于深度图像的客流计数检测方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324121A (zh) * 2011-04-29 2012-01-18 重庆市科学技术研究院 一种公交车内拥挤程度检测方法
CN102622578A (zh) * 2012-02-06 2012-08-01 中山大学 一种乘客计数系统及方法
CN104724566A (zh) * 2013-12-24 2015-06-24 株式会社日立制作所 具备图像识别功能的电梯
CN104504377A (zh) * 2014-12-25 2015-04-08 中邮科通信技术股份有限公司 一种公交车乘客拥挤程度识别系统及方法

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345878B (zh) * 2018-04-16 2020-03-24 泰华智慧产业集团股份有限公司 基于视频的公共交通工具客流量监测方法及系统
CN108345878A (zh) * 2018-04-16 2018-07-31 泰华智慧产业集团股份有限公司 基于视频的公共交通工具客流量监测方法及系统
CN111046788A (zh) * 2019-12-10 2020-04-21 北京文安智能技术股份有限公司 一种停留人员的检测方法、装置及系统
CN112785462A (zh) * 2020-02-27 2021-05-11 吴秋琴 基于大数据的景区客流量统计评估系统
CN112183469A (zh) * 2020-10-27 2021-01-05 华侨大学 一种公共交通的拥挤度识别及自适应调整方法、系统、设备及计算机可读存储介质
CN112183469B (zh) * 2020-10-27 2023-07-28 华侨大学 一种公共交通的拥挤度识别及自适应调整方法
CN112434566A (zh) * 2020-11-04 2021-03-02 深圳云天励飞技术股份有限公司 客流统计方法、装置、电子设备及存储介质
CN112434566B (zh) * 2020-11-04 2024-05-07 深圳云天励飞技术股份有限公司 客流统计方法、装置、电子设备及存储介质
CN112580633A (zh) * 2020-12-25 2021-03-30 博大视野(厦门)科技有限公司 一种公共交通客流统计装置及方法
CN112580633B (zh) * 2020-12-25 2024-03-01 博大视野(厦门)科技有限公司 一种基于深度学习的公共交通客流统计装置及方法
CN113255480A (zh) * 2021-05-11 2021-08-13 中国联合网络通信集团有限公司 公交车内拥挤程度识别方法、系统、计算机设备及介质
CN113420693A (zh) * 2021-06-30 2021-09-21 成都新潮传媒集团有限公司 一种门状态检测方法、装置、轿厢乘客流量统计方法及设备
CN113420693B (zh) * 2021-06-30 2022-04-15 成都新潮传媒集团有限公司 一种门状态检测方法、装置、轿厢乘客流量统计方法及设备
CN114596537A (zh) * 2022-05-10 2022-06-07 深圳市海清视讯科技有限公司 区域人流数据确定方法、装置、设备及存储介质

Also Published As

Publication number Publication date
JP6570731B2 (ja) 2019-09-04
JP2018523234A (ja) 2018-08-16
US10223597B2 (en) 2019-03-05
US20170286780A1 (en) 2017-10-05

Similar Documents

Publication Publication Date Title
WO2017156772A1 (zh) 一种乘客拥挤度的计算方法及其系统
CN111553387B (zh) 一种基于Yolov3的人员目标检测方法
CN105844229B (zh) 一种乘客拥挤度的计算方法及其系统
CN111444821A (zh) 一种城市道路标志自动识别方法
Li et al. Robust people counting in video surveillance: Dataset and system
CN104978567A (zh) 基于场景分类的车辆检测方法
CN104239867B (zh) 车牌定位方法及系统
CN103985182B (zh) 一种公交客流自动计数方法及自动计数系统
WO2023109099A1 (zh) 基于非侵入式检测的充电负荷概率预测系统及方法
CN102254428B (zh) 一种基于视频处理的交通拥塞检测方法
CN103268489A (zh) 基于滑窗搜索的机动车号牌识别方法
CN104239905A (zh) 运动目标识别方法及具有该功能的电梯智能计费系统
CN103886089B (zh) 基于学习的行车记录视频浓缩方法
CN106127812A (zh) 一种基于视频监控的客运站非出入口区域的客流统计方法
CN106710228B (zh) 一种客货分道交通参数监测系统的实现方法
CN114170580A (zh) 一种面向高速公路的异常事件检测方法
CN114049572A (zh) 识别小目标的检测方法
CN104123714A (zh) 一种人流量统计中最优目标检测尺度的生成方法
Orozco et al. Vehicular detection and classification for intelligent transportation system: A deep learning approach using faster R-CNN model
CN111695545A (zh) 一种基于多目标追踪的单车道逆行检测方法
CN106384089A (zh) 基于终生学习的人体可靠检测方法
CN117292322A (zh) 基于深度学习的人员流量检测方法及系统
CN115880620B (zh) 一种应用在推车预警系统中的人员计数方法
CN116311166A (zh) 交通障碍物识别方法、装置及电子设备
Li et al. An efficient self-learning people counting system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018502630

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16893937

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/01/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16893937

Country of ref document: EP

Kind code of ref document: A1