CN107563347B - Passenger flow counting method and device based on TOF camera - Google Patents


Info

Publication number
CN107563347B
CN107563347B (granted publication of application CN201710850191.4A)
Authority
CN
China
Prior art keywords: passenger, head, image, tracking, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710850191.4A
Other languages
Chinese (zh)
Other versions
CN107563347A (en)
Inventor
李军
林坚
曹文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Walker Intelligent Traffic Technology Co Ltd
Original Assignee
Nanjing Walker Intelligent Traffic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Walker Intelligent Traffic Technology Co Ltd
Priority: CN201710850191.4A
Publication of CN107563347A
Application granted
Publication of CN107563347B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a passenger flow counting method and device based on a TOF camera. The method comprises the following steps: step 1, acquiring a depth image; step 2, screening the detection area with a height threshold to obtain a mask image; step 3, morphologically processing the mask image; step 4, obtaining head detection candidate areas in the color image according to the mask image; step 5, locating passengers' heads in the candidate areas with a deep neural network model; step 6, tracking with KCF combined with Kalman filtering; and step 7, counting passenger flow. The invention solves the problem that methods using only depth images lack pattern recognition capability; invalid areas in the images are filtered out through the depth image, which increases both detection speed and detection accuracy.

Description

Passenger flow counting method and device based on TOF camera
Technical Field
The invention relates to the field of computer vision, in particular to a passenger flow counting method and device based on a TOF camera.
Background
Traditional bus passenger flow counting methods rely on infrared devices and pressure sensors; their statistical results carry large errors, and they have gradually been abandoned. Current bus passenger flow counting methods have developed within the field of image processing. One line of work performs bus passenger flow statistics on ordinary color or grayscale images, generally combined with a deep neural network. Its advantage is that, thanks to pattern recognition, it can handle complex scenes such as crowds of people or luggage and bags; its disadvantages are high computational complexity and over- or under-counting when pattern recognition fails. Another line of work performs bus passenger flow statistics on depth images. Its advantages are low computational complexity and, since depth information is unaffected by lighting, high accuracy for single targets or simple scenes. Its disadvantages are that a depth image loses too much texture information, so pattern recognition cannot be applied to it directly, making over- and under-counting likely in complex scenes; moreover, some depth cameras measure depth from infrared reflection, and smooth surfaces (for example human hair or a mobile phone) reflect infrared specularly, so no distance can be measured, which affects the counting result.
Disclosure of the Invention
Aiming at the defects of the above methods, the invention provides a passenger flow counting method and device based on a TOF camera, which combine the depth image and the 2D color image of the TOF camera with a pattern recognition method, and which achieve high accuracy, low computational complexity and high computing speed.
In order to solve the technical problems, the invention adopts the following technical scheme: a TOF camera based passenger flow counting method, the method comprising the steps of:
(1) acquiring a depth image: the TOF camera is installed directly above the front and rear doors of a bus, with the shooting direction perpendicular to the floor of the bus compartment; a door opening/closing signal is connected and the power supply is switched on; when the bus door opens, the TOF camera captures a depth image inside the bus, and a color image is obtained through a CMOS (complementary metal oxide semiconductor) module in the TOF camera;
(2) filtering out all pixels in the depth image that are below a height threshold H and setting their value to 0, while retaining the pixels above the height threshold H and setting their value to 255, thereby obtaining a mask image;
(3) Performing morphological processing on the mask image, wherein the morphological processing comprises erosion and expansion operations;
(4) carrying out contour extraction operation on the mask image, and taking out an external rectangle for each contour, wherein an image area corresponding to the external rectangle area in the color image corresponding to the depth image is called a detection candidate area;
(5) detecting each detection candidate area of the color image with a passenger head detection model trained by a deep neural network, and locating the passengers' heads, wherein the passenger head detection model is a model trained by a deep neural network algorithm, and the training samples comprise a certain number of color pictures captured by cameras installed directly above the front and rear doors of a bus, with the shooting direction perpendicular to the floor of the bus compartment;
(6) tracking each detected passenger's head: for the current color frame, if the passenger head detection model does not detect a head near the position tracked by the KCF tracking algorithm, the KCF tracking result is used as the passenger head position in the current frame; if the passenger head detection model does detect a head near the position tracked by the KCF tracking algorithm, the position obtained by the KCF tracking algorithm is taken as the predicted value, the position detected by the passenger head detection model is taken as the observed value, and a Kalman filtering algorithm combines the predicted and observed values to determine the passenger head position in the current frame;
(7) counting passengers getting on and off the vehicle to realize passenger flow counting: during tracking, if no observation appears for m consecutive frames of a passenger head target, tracking of that target is abandoned; when tracking of a passenger head ends, if its duration exceeds n frames and the positional offset between its start and end exceeds a preset threshold, a valid count is formed; whether the passenger got on or off the vehicle is determined from the start and end positions, realizing passenger flow counting.
The invention also provides a passenger flow counting device based on the TOF camera, which comprises an image acquisition device, an image processing device, a target positioning device and a tracking and counting device which are sequentially and electrically connected, and is characterized in that:
the image acquisition device is a TOF camera, the TOF camera is installed right above a front door and a rear door of a bus, the shooting direction is perpendicular to the bottom surface of a bus compartment, a door opening and closing signal of the bus is accessed, a power supply is switched on, when the bus door is opened, the TOF camera shoots a depth image in the bus, and a color image is acquired through a CMOS module in the camera;
one end of the image processing device is electrically connected with the image acquisition device, and the other end of the image processing device is electrically connected with the target positioning device; the image processing device comprises a height screening module, a morphological image processing module and a detection candidate area screening module which are electrically connected in sequence;
The height screening module is used for filtering all pixels with the height lower than a threshold value H in the depth image, setting the value of the filtered pixels to be 0, reserving the pixels with the height higher than the threshold value H, and setting the value of the reserved pixels to be 255 to obtain a mask image;
The morphological image processing module is used for performing morphological processing, comprising erosion and dilation, on the mask image obtained by the height screening module;
The detection candidate region screening module extracts human head detection candidate regions in the color image corresponding to the depth image: first, it performs contour extraction on the mask image processed by the morphological image processing module and extracts a circumscribed rectangle for each contour; then it takes the color image corresponding to the depth image acquired by the image acquisition device, and the region of the color image corresponding to each circumscribed rectangle is called a detection candidate region;
One end of the target positioning device is electrically connected with the image processing device, and the other end with the tracking and counting device; a passenger head detection model trained by a deep neural network detects each detection candidate area in the color image and locates the passenger head positions; the passenger head detection model is obtained through deep neural network algorithm training, and the training samples comprise a certain number of color pictures captured by cameras installed directly above the front and rear doors of a bus, with the shooting direction perpendicular to the floor of the bus compartment;
One end of the tracking and counting device is electrically connected with the target positioning device, and it comprises a tracking module and a counting module. The tracking module tracks each detected passenger head: for the current color image frame, if the passenger head detection model does not detect a head near the position tracked by the KCF tracking algorithm, the KCF tracking result is used as the passenger head position in the current frame; if the passenger head detection model does detect a head near the position tracked by the KCF tracking algorithm, the position obtained by the KCF tracking algorithm is taken as the predicted value, the position detected by the passenger head detection model as the observed value, and a Kalman filtering algorithm combines the predicted and observed values to determine the passenger head position in the current frame;
The counting module judges passengers getting on and off to realize passenger flow counting: during tracking, if no observation appears for m consecutive frames of a passenger head target, tracking of that target is abandoned; when tracking of a passenger head ends, if its duration exceeds n frames and the positional offset between its start and end exceeds a preset threshold, a valid count is formed; whether the passenger got on or off the vehicle is determined from the start and end positions, finally realizing the passenger flow counting function.
The invention has the following technical effects and advantages:
1. The problem that methods using only the depth image lack pattern recognition capability is solved: because the depth image is not an ordinary grayscale image and loses too much texture information, pattern recognition cannot be applied to it directly, and processing the depth image directly with image segmentation and similar methods for passenger flow statistics yields low statistical accuracy.
2. Invalid regions in the images are filtered out through the depth image, which greatly reduces the range of regions that need to be detected and effectively alleviates the slow detection speed of the deep neural network: the deep neural network detection algorithm has high computational complexity, and directly filtering out regions of unsuitable height and other invalid regions both reduces false detections of the deep neural network model and speeds up detection.
3. Combining KCF with a Kalman filtering tracking algorithm solves the possible missed counts of the deep neural network: KCF is fast but lacks scale estimation, and combined with the Kalman filtering algorithm the tracking can be both fast and accurate.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Figure 2 is a schematic view of the apparatus of the present invention.
FIG. 3 is a depth image obtained by a TOF depth camera.
Fig. 4 is a color image corresponding to a depth image acquired by a CMOS module of a TOF depth camera.
FIG. 5 is a mask diagram after the height threshold screening of the method of the present invention.
FIG. 6 is a mask image after morphological processing by the method of the present invention.
FIG. 7 is a circumscribed rectangle extracted by the inventive method for each contour in the processed mask image.
FIG. 8 is a color image corresponding to the circumscribed rectangular area of the present invention.
FIG. 9 is a graph showing the results of the detection by the method of the present invention.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, the TOF camera based passenger flow counting method according to the invention includes the following steps:
Step 1, acquiring a depth image: the TOF camera is installed directly above the front and rear doors of a bus, with the shooting direction perpendicular to the floor of the bus compartment; a door opening/closing signal is connected and the power supply is switched on; when the bus door opens, the TOF camera captures a depth image inside the bus, and a color image is obtained through a CMOS module in the camera. Preferably, the depth image resolution is 320 × 240.
Step 2, extracting the area above the height threshold in the depth image by means of a height threshold:
The TOF camera converts the captured scene into depth image information: the value of each pixel in the obtained depth image records the relative distance between that pixel and the camera, and the real distance of each pixel can be recovered from the true distance measured at installation. For convenient viewing and calculation, the value of each pixel in the depth image lies between 0 and 255, where 0 represents the farthest distance (infinity) and 255 the nearest;
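As a hypothetical illustration of the mapping just described: the linear scale, the function names, and the 2.2 m mounting height below are assumptions for the sketch, not values from the patent, which only states that pixel values lie in [0, 255] and that the true distance is measured at installation.

```python
import numpy as np

CAMERA_HEIGHT_M = 2.2   # assumed mounting height above the bus floor


def depth_to_distance(depth_pixels, camera_height=CAMERA_HEIGHT_M):
    """Convert 8-bit depth values to metres from the camera (linear model).

    255 maps to 0 m (nearest reading), 0 maps to camera_height (floor /
    out of range), per the 0-far / 255-near convention in the text.
    """
    d = depth_pixels.astype(np.float32)
    return (1.0 - d / 255.0) * camera_height


def pixel_height(depth_pixels, camera_height=CAMERA_HEIGHT_M):
    """Height of the imaged surface above the floor, in metres."""
    return camera_height - depth_to_distance(depth_pixels, camera_height)
```

A per-pixel height map like this is what the height threshold H of step 2 would then be compared against.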
A height threshold H is set; all pixels in the depth image below H are filtered out and set to 0, while pixels at height H and above are retained and set to 255, yielding a mask image that screens out the candidate regions where passengers' heads may exist; the mask image is shown in FIG. 5;
Preferably, the height threshold H is 1.3 meters.
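Step 2's thresholding can be sketched in a few lines of NumPy, assuming a per-pixel map of heights above the floor has already been derived from the depth image (the function name is ours, not the patent's):

```python
import numpy as np

HEIGHT_THRESHOLD_M = 1.3  # the patent's preferred threshold H


def height_mask(height_map, threshold=HEIGHT_THRESHOLD_M):
    """Keep pixels whose height above the floor exceeds H.

    height_map -- float array of per-pixel heights above the bus floor (m).
    Returns a uint8 mask: 255 where a head candidate may be, 0 elsewhere.
    """
    return np.where(height_map > threshold, 255, 0).astype(np.uint8)
```

For example, a 2 × 2 height map [[0.9, 1.6], [1.4, 1.1]] yields the mask [[0, 255], [255, 0]].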
Step 3, performing morphological processing on the mask image, comprising erosion and dilation operations, with the following specific steps:
Step 31, perform an erosion operation on the image: centered on the coordinate point (x, y) of each pixel, compute the minimum value in a 5 × 5 window, with the formula:

g(x, y) = min{ f(x + x', y + y') : (x', y') ∈ W }

where f is the mask image, g the eroded image, and W the 5 × 5 window centered on the origin. The erosion operation removes isolated noise points from the mask image, and the 5 × 5 window size can be changed according to the actual situation;
Step 32, perform a dilation operation on the image: centered on each pixel, compute the maximum value in a 5 × 5 window, with the formula:

g(x, y) = max{ f(x + x', y + y') : (x', y') ∈ W }

where (x, y) is a coordinate point and (x', y') a coordinate offset within the 5 × 5 window W centered on (x, y). The dilation operation connects adjacent elements in the mask image; the 5 × 5 window size can be changed according to actual conditions. The mask image after morphological processing is shown in fig. 6.
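Steps 31-32 can be sketched with SciPy's min/max filters; using SciPy in place of OpenCV's erode/dilate is our assumption for the sketch, while the 5 × 5 window follows the text:

```python
from scipy.ndimage import maximum_filter, minimum_filter


def morph_open(mask, window=5):
    """Erosion (5x5 minimum) to drop isolated noise points, followed by
    dilation (5x5 maximum) to reconnect neighbouring regions."""
    eroded = minimum_filter(mask, size=window)   # step 31
    return maximum_filter(eroded, size=window)   # step 32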
Step 4, extracting the screened human head detection candidate areas in the color image, with the following specific steps:
Step 41, perform contour extraction on the mask image morphologically processed in step 3, and extract a circumscribed rectangle for each contour; the mask image with the circumscribed rectangles drawn is shown in fig. 7;
Step 42, take the color image acquired at the same moment by the CMOS module of the TOF depth camera and corresponding to the depth image; a region of the color image corresponding to a circumscribed rectangle obtained in step 41 is called a head detection candidate region. The original color image is shown in fig. 4, and an example color image containing only the head detection candidate regions is shown in fig. 8.
It should be noted that some rectangular frames obviously cannot contain a human head, for example because their area is too small; such rectangles can be excluded in advance according to a preset threshold. It should also be noted that the color image of step 42 containing only the head detection candidate regions (fig. 8) is merely an example made for better understanding of the invention and does not imply that the invention includes this operation.
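Step 4's contour-to-rectangle extraction can be approximated with connected-component labelling. SciPy's `label`/`find_objects` stand in here for OpenCV's `findContours`/`boundingRect`, and the minimum-area threshold of 100 px is an assumed value echoing the note above about discarding rectangles too small for a head:

```python
from scipy.ndimage import find_objects, label

MIN_AREA_PX = 100  # assumed lower bound for a plausible head region


def candidate_boxes(mask, min_area=MIN_AREA_PX):
    """Bounding rectangles (x, y, w, h) of connected regions in the mask,
    with regions obviously too small for a head discarded in advance."""
    labels, _ = label(mask > 0)
    boxes = []
    for sl in find_objects(labels):
        ys, xs = sl
        w, h = xs.stop - xs.start, ys.stop - ys.start
        if w * h >= min_area:
            boxes.append((xs.start, ys.start, w, h))
    return boxes
```

Each returned box indexes the corresponding colour frame, e.g. `roi = color[y:y+h, x:x+w]`, giving the detection candidate region handed to the head detector in step 5.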
Through the operations of steps 2-4, the head detection candidate regions can be preliminarily extracted from the depth image, so that the area the deep neural network model needs to examine is markedly reduced, detection speed is markedly increased, and the potential false detection rate of the deep neural network model is lowered.
Step 5, detect each detection candidate area in the color image with a pre-trained deep neural network passenger head detection model, and locate the passenger head positions;
The passenger head detection model is a deep neural network model trained in advance. Its training samples comprise a certain number of color pictures captured by cameras installed directly above the front and rear doors of a bus, with the shooting direction perpendicular to the floor of the bus compartment. For the sample set, the position of each passenger's head is manually marked with a rectangular frame, after which the passenger head detection model can be trained with a deep neural network algorithm. Preferably, the number of training samples exceeds 10,000.
Step 6, track each detected passenger head, with the following specific steps:
Step 61, for the current color image frame, if the deep neural network detection model does not detect a passenger head near the position tracked by the KCF tracking algorithm, use the KCF tracking result as the passenger head position in the current frame;
Step 62, if the deep neural network detection model does detect a passenger head near the position tracked by the KCF tracking algorithm, take the position obtained by the KCF tracking algorithm as the predicted value and the position detected by the model as the observed value, and determine the passenger head position in the current frame with a Kalman filtering algorithm combining the predicted and observed values.
Since the deep neural network itself may miss detections, step 61 solves that problem. KCF is used for head tracking in step 6 because it is fast; however, KCF lacks target scale estimation and tracks poorly when the target scale changes significantly, so in step 62 the head position is determined in combination with the Kalman filtering algorithm. The tracking of step 6 is therefore both fast and accurate, and the missed detections of the deep neural network in step 5 are compensated.
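The fusion in steps 61-62 can be sketched per coordinate with a constant-position Kalman filter, the KCF output serving as the prediction and the detector's box position as the observation. The class name, interface, and noise variances below are illustrative assumptions, not from the patent:

```python
class HeadKalman1D:
    """One coordinate of a tracked head; run one instance for x, one for y."""

    def __init__(self, x0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = float(x0), p0, q, r

    def step(self, kcf_pred, detection=None):
        # Predict: adopt the KCF position, inflate the uncertainty.
        self.x, self.p = float(kcf_pred), self.p + self.q
        if detection is not None:
            # Update (step 62): fuse with the detector's observation.
            k = self.p / (self.p + self.r)        # Kalman gain
            self.x += k * (detection - self.x)
            self.p *= (1.0 - k)
        return self.x
```

With no detection the KCF prediction passes through unchanged (step 61); with a detection the estimate moves toward the observation in proportion to the gain (step 62).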
Step 7, count boarding and alighting to realize passenger flow counting, with the following specific steps:
Step 71, during tracking, if no observation appears for m consecutive frames of a passenger head target, i.e. for m consecutive frames the deep neural network detection model does not detect a head near the position predicted by the KCF tracking algorithm, tracking of that target is abandoned;
Step 72, for a passenger head whose tracking has ended, if its duration exceeds n frames and the positional offset between its start and end exceeds a preset threshold, a valid count is formed;
Step 73, determine whether the passenger got on or off the vehicle from the start and end positions, finally realizing the passenger flow counting function.
The values of m and n are set according to the actual situation of the video; preferably, m is 5 and n is 10.
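A minimal pure-Python sketch of these counting rules follows. The displacement threshold of 60 px, the image-coordinate direction convention (larger y = moving into the bus), and the track representation are all assumptions for illustration:

```python
M_MISSES, N_FRAMES, DISPLACEMENT_PX = 5, 10, 60


def classify_track(track, misses):
    """track: list of (x, y) head positions over the track's lifetime.
    misses: consecutive frames with no detector observation at the end.
    Returns 'boarding', 'alighting', or None (track discarded)."""
    if misses >= M_MISSES:
        return None                   # step 71: lost for m frames, give up
    if len(track) <= N_FRAMES:
        return None                   # step 72: too short to count
    (x0, y0), (x1, y1) = track[0], track[-1]
    if abs(y1 - y0) < DISPLACEMENT_PX:
        return None                   # step 72: did not cross the doorway
    # step 73: direction of travel decides boarding vs alighting
    return "boarding" if y1 > y0 else "alighting"
```

A track lasting 12 frames that moves 100 px inward counts as "boarding"; the same track with 5 trailing missed observations is discarded.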
The invention also provides a passenger flow counting device based on the TOF camera, which, as shown in fig. 2, comprises an image acquisition device, an image processing device, a target positioning device and a tracking and counting device, electrically connected in sequence.
the image acquisition device is a TOF camera, the TOF camera is installed right above a front door and a rear door of a bus, the shooting direction is perpendicular to the bottom surface of a bus compartment, a door opening and closing signal of the bus is accessed, the power is switched on, when the bus door is opened, the TOF camera shoots a depth image in the bus, and a color image is obtained through a CMOS module in the camera. Preferably, the depth image resolution is 320 × 240.
One end of the image processing device is electrically connected with the image acquisition device, and the other end of the image processing device is electrically connected with the target positioning device. The image processing device comprises a height screening module, a morphological image processing module and a detection candidate area screening module which are sequentially and electrically connected.
In the depth image acquired by the image acquisition device, the value of each pixel records the relative distance between that pixel and the camera, and the real distance of each pixel point can be recovered from the true distance measured at installation. For convenient viewing and calculation, the value of each pixel lies between 0 and 255, where 0 represents the farthest distance (infinity) and 255 the nearest.
The height screening module filters all pixels with the height lower than a threshold value H (the size of H is set according to actual conditions) in the depth image, sets the value of the pixels to be 0, reserves the pixels with the height above H, sets the value of the pixels to be 255, and finally obtains a mask image of a candidate area where the head of a passenger possibly exists. Preferably, the height threshold H has a value of 1.3 meters.
The morphology processing module performs morphological processing on the mask image obtained by the height screening module, comprising erosion and dilation. Erosion, centered on the coordinate point (x, y) of each pixel, computes the minimum value in a 5 × 5 window:

g(x, y) = min{ f(x + x', y + y') : (x', y') ∈ W }

Dilation, centered on each pixel, computes the maximum value in a 5 × 5 window:

g(x, y) = max{ f(x + x', y + y') : (x', y') ∈ W }

where f is the mask image and (x', y') a coordinate offset within the 5 × 5 window W centered on (x, y). It should be understood that the 5 × 5 window size may be changed according to actual conditions; the morphologically processed mask image is finally obtained.
The detection candidate region screening module extracts human head detection candidate regions in the color image corresponding to the depth image: first, it performs contour extraction on the mask image processed by the morphology processing module and extracts a circumscribed rectangle for each contour; then it takes the color image corresponding to the depth image acquired by the image acquisition device, and the region of the color image corresponding to each circumscribed rectangle is a detection candidate region.
One end of the target positioning device is electrically connected with the image processing device, and the other end with the tracking and counting device. A pre-trained deep neural network passenger head detection model detects each detection candidate area in the color image and locates the passenger head positions (i.e. the targets). The passenger head detection model is obtained through deep neural network algorithm training; the training samples comprise a certain number of color pictures captured by cameras installed directly above the front and rear doors of the bus, with the shooting direction perpendicular to the floor of the bus compartment. For the sample set, the position of each passenger's head is manually marked with a rectangular box. Preferably, the number of training samples exceeds 10,000.
One end of the tracking and counting device is electrically connected with the target positioning device, and it comprises a tracking module and a counting module. The tracking module tracks each detected passenger head, specifically: for the current color image frame, if the deep neural network detection model does not detect a passenger head near the position tracked by the KCF tracking algorithm, the KCF tracking result is used as the head position in the current frame; if the deep neural network detection model does detect a passenger head near the position tracked by the KCF tracking algorithm, the position obtained by the KCF tracking algorithm is taken as the predicted value, the position detected by the model as the observed value, and a Kalman filtering algorithm combines the predicted and observed values to determine the head position in the current frame.
The counting module judges passengers getting on and off to realize passenger flow counting, specifically: during tracking, if no observation appears for m consecutive frames of a passenger head target, i.e. for m consecutive frames the deep neural network detection model does not detect a head near the position predicted by the KCF tracking algorithm, tracking of that target is abandoned; when tracking of a passenger head ends, if its duration exceeds n frames and the positional offset between its start and end exceeds a preset threshold, a valid count is formed; whether the passenger got on or off is determined from the start and end positions, finally realizing the passenger flow counting function. The values of m and n are set according to the actual situation of the video; preferably, m is 5 and n is 10.
The passenger flow counting method and device based on the TOF camera have the following advantages:
1. The problem that methods using only the depth image lack pattern recognition capability is solved: because the depth image is not an ordinary grayscale image and loses too much texture information, pattern recognition cannot be applied to it directly, and processing the depth image directly with image segmentation and similar methods for passenger flow statistics yields low statistical accuracy.
2. Invalid regions in the images are filtered out through the depth image, which greatly reduces the range of regions that need to be detected and effectively alleviates the slow detection speed of the deep neural network: the deep neural network detection algorithm has high computational complexity, and directly filtering out regions of unsuitable height and other invalid regions both reduces false detections of the deep neural network model and speeds up detection.
3. Combining KCF with a Kalman filtering tracking algorithm solves the possible missed counts of the deep neural network: KCF is fast but lacks scale estimation, and combined with the Kalman filtering algorithm the tracking can be both fast and accurate.

Claims (7)

1. A passenger flow counting method based on a TOF camera is characterized by comprising the following steps:
(1) acquiring a depth image: the TOF camera is installed directly above the front and rear doors of a bus, with the shooting direction perpendicular to the floor of the bus compartment; a door opening/closing signal is connected and the power supply is switched on; when the bus door opens, the TOF camera captures a depth image inside the bus, and a color image is obtained through a CMOS (complementary metal oxide semiconductor) module in the TOF camera;
(2) Filtering all pixels in the depth image, which are lower than a height threshold value H, setting the value of the filtered pixels to be 0, reserving the pixels which are higher than the height threshold value H, and setting the value of the reserved pixels to be 255, so that a mask image is obtained;
(3) performing morphological processing on the mask image, wherein the morphological processing comprises erosion and expansion operations;
(4) carrying out contour extraction operation on the mask image, and taking out an external rectangle for each contour, wherein an image area corresponding to the external rectangle area in the color image corresponding to the depth image is called a detection candidate area;
(5) detecting each detection candidate area of the color image by using a passenger head detection model trained by a deep neural network, and positioning the head of the passenger, wherein the passenger head detection model is a model trained by a deep neural network algorithm, the training samples comprise a certain number of color pictures obtained by installing cameras right above front and rear doors of a bus, and the shooting direction is vertical to the bottom surface of the bus compartment;
(6) Tracking the head of each detected passenger, and for the current color frame, if the head of the passenger is not detected by the passenger head detection model near the position tracked by using the KCF tracking algorithm, using the tracking result of the KCF as the position of the head of the passenger of the current frame, if the head of the passenger is detected by the passenger head detection model near the position tracked by using the KCF tracking algorithm, using the position obtained by the KCF tracking algorithm as a predicted value, using the position detected by the passenger head detection model as an observed value, and determining the position of the head of the passenger of the current frame by using a Kalman filtering algorithm in combination with the predicted value and the observed value;
(7) Counting the number of passengers getting on and off the vehicle to realize passenger flow counting, and in the tracking process, if no observation value appears in the continuous m frames of the passenger head target, giving up the tracking of the target; for the passenger head tracking end, if the duration exceeds n frames, and the position offset of the start and the end exceeds the preset threshold value, a valid count is formed; and determining whether the passenger gets on or off the vehicle according to the starting position and the ending position to realize passenger flow counting.
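The track-validation rules of step (7) can be sketched as follows. Only m = 5 and n = 10 are fixed by the patent (claim 3); the offset threshold, the track representation as (frame, y-position) pairs, and the convention that increasing y means boarding are all illustrative assumptions.

```python
def classify_track(track, n=10, min_offset=40):
    """Decide whether a finished head track forms a valid count and, if so,
    whether the passenger boarded or alighted.
    `track` is a list of (frame_index, y_position) pairs for one head."""
    duration = track[-1][0] - track[0][0]
    offset = track[-1][1] - track[0][1]
    if duration <= n or abs(offset) <= min_offset:
        return None  # too short or hardly moved: not a valid count
    # Assumed convention: motion toward the interior (increasing y) = boarding.
    return "boarding" if offset > 0 else "alighting"
```

A track that loses its observation for m consecutive frames would be closed by the tracker before ever reaching this classification step.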
2. A TOF camera based passenger flow counting method according to claim 1, wherein the erosion operation, centered on the coordinate point (x, y) of each pixel, calculates the minimum value in a 5 × 5 window, by the formula:
dst(x, y) = min{ src(x + x', y + y') : −2 ≤ x' ≤ 2, −2 ≤ y' ≤ 2 }
and the dilation operation, centered on the coordinate point (x, y) of each pixel, calculates the maximum value in a 5 × 5 window, by the formula:
dst(x, y) = max{ src(x + x', y + y') : −2 ≤ x' ≤ 2, −2 ≤ y' ≤ 2 }
where (x, y) is a coordinate point and (x', y') is a coordinate offset with (x, y) as the central origin.
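The windowed minimum and maximum of claim 2 can be written directly in numpy. This is a naive reference sketch (an optimized implementation would use a library morphology routine); edge replication at the border is an assumption the claim does not specify.

```python
import numpy as np

def erode_5x5(img: np.ndarray) -> np.ndarray:
    """Minimum over the 5x5 window centred on each pixel (erosion)."""
    pad = np.pad(img, 2, mode='edge')  # border handling is an assumption
    return np.array([[pad[y:y + 5, x:x + 5].min()
                      for x in range(img.shape[1])]
                     for y in range(img.shape[0])])

def dilate_5x5(img: np.ndarray) -> np.ndarray:
    """Maximum over the 5x5 window centred on each pixel (dilation)."""
    pad = np.pad(img, 2, mode='edge')
    return np.array([[pad[y:y + 5, x:x + 5].max()
                      for x in range(img.shape[1])]
                     for y in range(img.shape[0])])

# A single isolated 255 pixel is removed by erosion and grown by dilation.
spot = np.zeros((7, 7), dtype=np.uint8)
spot[3, 3] = 255
```

On the mask image, erosion removes speckle noise and dilation restores the remaining head blobs to roughly their original size.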
3. A TOF camera based passenger flow counting method according to claim 1, wherein m is 5 and n is 10.
4. A TOF camera based passenger flow counting method according to any one of claims 1 to 3, wherein the circumscribed rectangle extracted for each contour in step (4) is first screened according to its size.
5. A passenger flow counting device based on a TOF camera, comprising an image acquisition device, an image processing device, a target positioning device and a tracking counting device which are electrically connected in sequence, characterized in that:
the image acquisition device is a TOF camera, the TOF camera is installed directly above the front and rear doors of a bus with its shooting direction perpendicular to the floor of the bus; the door open/close signal is connected and power is supplied; when the bus door opens, the TOF camera captures a depth image of the bus interior, and a color image is acquired through the CMOS module in the camera;
One end of the image processing device is electrically connected with the image acquisition device, and the other end of the image processing device is electrically connected with the target positioning device; the image processing device comprises a height screening module, a morphological image processing module and a detection candidate area screening module which are electrically connected in sequence;
The height screening module is used for filtering all pixels with the height lower than a threshold value H in the depth image, setting the value of the filtered pixels to be 0, reserving the pixels with the height higher than the threshold value H, and setting the value of the reserved pixels to be 255 to obtain a mask image;
The morphological image processing module is used for performing morphological processing, including erosion and dilation, on the mask image obtained by the height screening module;
the detection candidate region screening module extracts human head detection candidate regions from the color image corresponding to the depth image: it first performs contour extraction on the mask image processed by the morphological image processing module and extracts the circumscribed rectangle of each contour, then obtains the color image corresponding to the depth image acquired by the image acquisition device; the region in the color image corresponding to the circumscribed rectangle is called a detection candidate region;
one end of the target positioning device is electrically connected with the image processing device, and the other end is electrically connected with the tracking counting device; it detects each detection candidate region in the color image with a passenger head detection model trained by a deep neural network and locates the passenger head position; the passenger head detection model is obtained through deep neural network algorithm training, and the training samples comprise a certain number of color pictures captured by cameras installed directly above the front and rear doors of a bus with the shooting direction perpendicular to the bus floor;
one end of the tracking counting device is electrically connected with the target positioning device, and it comprises a tracking module and a counting module; the tracking module tracks each detected passenger head: for the current color frame, if the passenger head detection model does not detect a head near the position tracked by the KCF tracking algorithm, the KCF tracking result is used as the passenger head position in the current frame; if the passenger head detection model does detect a head near the KCF-tracked position, the position from the KCF tracking algorithm is taken as the predicted value, the position detected by the passenger head detection model as the observed value, and the passenger head position in the current frame is determined by a Kalman filtering algorithm combining the predicted value and the observed value;
the counting module judges passengers getting on and off the vehicle to realize passenger flow counting: during tracking, if no observed value appears for m consecutive frames of a passenger head target, tracking of that target is abandoned; when tracking of a passenger head ends, if the track lasted more than n frames and the positional offset between its start and end exceeds a preset threshold, a valid count is formed; whether the passenger got on or off the vehicle is determined from the start and end positions, finally realizing the passenger flow counting function.
6. A TOF camera based passenger flow counting apparatus as claimed in claim 5, wherein the erosion operation, centered on the coordinate point (x, y) of each pixel, calculates the minimum value in a 5 × 5 window, by the formula:
dst(x, y) = min{ src(x + x', y + y') : −2 ≤ x' ≤ 2, −2 ≤ y' ≤ 2 }
and the dilation operation, centered on the coordinate point (x, y) of each pixel, calculates the maximum value in a 5 × 5 window, by the formula:
dst(x, y) = max{ src(x + x', y + y') : −2 ≤ x' ≤ 2, −2 ≤ y' ≤ 2 }
where (x, y) is a coordinate point and (x', y') is a coordinate offset with (x, y) as the central origin.
7. a TOF camera based passenger flow counting apparatus as claimed in claim 5 or claim 6 wherein said m is 5 and said n is 10.
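The prediction/observation fusion of step (6) can be illustrated with a single scalar Kalman update. This is a deliberately minimal sketch: the patent does not disclose its state model or noise parameters, so the variances used here are hypothetical, and a real tracker would run a full 2-D (or constant-velocity) filter per head.

```python
def kalman_fuse(x_pred: float, p_pred: float, z_obs: float, r_obs: float):
    """One scalar Kalman update: fuse the KCF-predicted position `x_pred`
    (variance `p_pred`) with the detector's observed position `z_obs`
    (variance `r_obs`). Noise values are illustrative, not from the patent."""
    k = p_pred / (p_pred + r_obs)           # Kalman gain
    x_post = x_pred + k * (z_obs - x_pred)  # fused position estimate
    p_post = (1.0 - k) * p_pred             # reduced uncertainty after fusion
    return x_post, p_post
```

With equal prediction and observation variances the fused position lands halfway between the KCF prediction and the detection, which matches the intended behaviour: the detector corrects KCF drift while KCF bridges frames where the detector misses.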
CN201710850191.4A 2017-09-20 2017-09-20 Passenger flow counting method and device based on TOF camera Active CN107563347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710850191.4A CN107563347B (en) 2017-09-20 2017-09-20 Passenger flow counting method and device based on TOF camera


Publications (2)

Publication Number Publication Date
CN107563347A CN107563347A (en) 2018-01-09
CN107563347B true CN107563347B (en) 2019-12-13

Family

ID=60981675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710850191.4A Active CN107563347B (en) 2017-09-20 2017-09-20 Passenger flow counting method and device based on TOF camera

Country Status (1)

Country Link
CN (1) CN107563347B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108279421B (en) * 2018-01-28 2021-09-28 深圳新亮智能技术有限公司 Time-of-flight camera with high resolution color images
CN108509914B (en) * 2018-04-03 2022-03-11 华录智达科技有限公司 Bus passenger flow statistical analysis system and method based on TOF camera
CN109325963B (en) * 2018-08-07 2021-05-18 长安大学 SVM-based three-dimensional trajectory classification method for bus passengers
CN109285376B (en) * 2018-08-09 2022-04-19 同济大学 Bus passenger flow statistical analysis system based on deep learning
CN109059797B (en) * 2018-08-22 2020-12-18 Oppo广东移动通信有限公司 Time-of-flight module, control method thereof, controller and electronic device
CN109035841B (en) * 2018-09-30 2020-10-09 上海交通大学 Parking lot vehicle positioning system and method
CN109448027B (en) * 2018-10-19 2022-03-29 成都睿码科技有限责任公司 Adaptive and persistent moving target identification method based on algorithm fusion
CN109615647A (en) * 2018-10-24 2019-04-12 北京升哲科技有限公司 Object detection method and device
CN109584297A (en) * 2018-10-24 2019-04-05 北京升哲科技有限公司 Object detection method and device
CN111797661B (en) * 2019-05-07 2021-03-30 湖南琴海数码股份有限公司 Wireless notification method based on big data analysis
CN110837769B (en) * 2019-08-13 2023-08-29 中山市三卓智能科技有限公司 Image processing and deep learning embedded far infrared pedestrian detection method
CN112417939A (en) * 2019-08-21 2021-02-26 南京行者易智能交通科技有限公司 Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method
CN110516602A (en) * 2019-08-28 2019-11-29 杭州律橙电子科技有限公司 A kind of public traffice passenger flow statistical method based on monocular camera and depth learning technology
CN110633671A (en) * 2019-09-16 2019-12-31 天津通卡智能网络科技股份有限公司 Bus passenger flow real-time statistical method based on depth image
CN110717408B (en) * 2019-09-20 2022-04-12 台州智必安科技有限责任公司 People flow counting method based on TOF camera
CN110728227B (en) * 2019-10-09 2022-12-06 北京百度网讯科技有限公司 Image processing method and device
CN112085767B (en) * 2020-08-28 2023-04-18 安徽清新互联信息科技有限公司 Passenger flow statistical method and system based on deep optical flow tracking
CN112767375B (en) * 2021-01-27 2022-03-08 深圳技术大学 OCT image classification method, system and equipment based on computer vision characteristics
CN113034544A (en) * 2021-03-19 2021-06-25 奥比中光科技集团股份有限公司 People flow analysis method and device based on depth camera
CN113012136A (en) * 2021-03-24 2021-06-22 中国民航大学 Airport luggage counting method and counting system based on target detection
JP7342054B2 (en) * 2021-03-25 2023-09-11 矢崎総業株式会社 Boarding and alighting person counting system, boarding and alighting person counting method, and boarding and alighting person counting program
CN113963318B (en) * 2021-12-22 2022-03-25 北京的卢深视科技有限公司 People flow statistical method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777188A (en) * 2010-03-12 2010-07-14 华中科技大学 Real-time bus passenger flow volume statistical method
CN106548163A (en) * 2016-11-25 2017-03-29 青岛大学 Method based on TOF depth camera passenger flow countings
CN106886994A (en) * 2017-02-08 2017-06-23 青岛大学 A kind of flow of the people intelligent detection device and detection method based on depth camera


Also Published As

Publication number Publication date
CN107563347A (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN107563347B (en) Passenger flow counting method and device based on TOF camera
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
JP6549797B2 (en) Method and system for identifying head of passerby
Davis et al. Robust Background-Subtraction for Person Detection in Thermal Imagery.
AU2009295350B2 (en) Detection of vehicles in an image
CN105574515B (en) A kind of pedestrian recognition methods again under non-overlapping visual field
CN110633671A (en) Bus passenger flow real-time statistical method based on depth image
Ha et al. License Plate Automatic Recognition based on edge detection
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN106056078B (en) Crowd density estimation method based on multi-feature regression type ensemble learning
Surkutlawar et al. Shadow suppression using RGB and HSV color space in moving object detection
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN112861797A (en) Method and device for identifying authenticity of license plate and related equipment
Lee et al. Real-time automatic vehicle management system using vehicle tracking and car plate number identification
CN107122732A (en) The quick license plate locating method of high robust under a kind of monitoring scene
CN109784229B (en) Composite identification method for ground building data fusion
CN109306834B (en) Vision-based automobile electric tail gate opening method
Wang et al. A robust algorithm for shadow removal of foreground detection in video surveillance
CN110321828B (en) Front vehicle detection method based on binocular camera and vehicle bottom shadow
Wen et al. People tracking and counting for applications in video surveillance system
CN110502968A (en) The detection method of infrared small dim moving target based on tracing point space-time consistency
Fazli et al. Multiple object tracking using improved GMM-based motion segmentation
JP2995813B2 (en) Traffic flow speed measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant