CN108764017B - Bus passenger flow statistical method, device and system - Google Patents

Bus passenger flow statistical method, device and system

Info

Publication number
CN108764017B
Authority
CN
China
Prior art keywords
head position
tracking
current
image
position value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810288942.2A
Other languages
Chinese (zh)
Other versions
CN108764017A (en)
Inventor
邢映彪
黄海涛
Current Assignee
Guangzhou Tongda Auto Electric Co Ltd
Original Assignee
Guangzhou Tongda Auto Electric Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Tongda Auto Electric Co Ltd filed Critical Guangzhou Tongda Auto Electric Co Ltd
Priority to CN201810288942.2A priority Critical patent/CN108764017B/en
Publication of CN108764017A publication Critical patent/CN108764017A/en
Application granted granted Critical
Publication of CN108764017B publication Critical patent/CN108764017B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/30: Scenes; scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06Q50/40

Abstract

The invention relates to a bus passenger flow statistics method, device and system. The method comprises: sequentially performing similarity matching between the detection objects in a detection queue and each first tracked object in a tracking queue to obtain a matching degree for each result; for each result matching degree greater than or equal to a preset matching degree, acquiring the current first head position value of the corresponding first tracked object, updating it to the detected head position value of the matched detection object, and recording the number of updates of the current first head position value; processing each second tracked object in the tracking queue with a tracking algorithm and, according to the tracking result, updating the current second head position value of the second tracked object to the tracked head position value; and processing each current first head position value and each current second head position value to obtain a passenger flow statistical result. The method improves the running speed of passenger flow statistics and reduces the false detection rate.

Description

Bus passenger flow statistical method, device and system
Technical Field
The invention relates to the technical field of data statistics, in particular to a bus passenger flow statistical method, device and system.
Background
With social development and accelerating urbanization, town populations keep growing and the number of motor vehicles keeps increasing, which easily leads to road congestion. The bus is one of the most important means of public transport for large urban passenger flows, and its level of intelligent management directly affects urban traffic conditions. Collecting and analysing bus passenger flow statistics helps schedule buses reasonably and improves bus intelligence. At present, bus passenger flow statistics are mainly collected manually.
In the implementation process, the inventors found that the conventional technology has at least the following problems: traditional bus passenger flow statistics involve a large detection workload, are difficult to cover comprehensively, and have a high false detection rate.
Disclosure of Invention
Therefore, it is necessary to provide a bus passenger flow statistics method, device and system that address the high false detection rate of conventional bus passenger flow statistics.
In order to achieve the above object, an embodiment of the present invention provides a bus passenger flow statistics method, comprising the following steps:
sequentially performing similarity matching between the detection objects in a detection queue and each first tracked object in a tracking queue to obtain a matching degree for each result; a detection object is a current head position image obtained by processing the current door image with a head detection training model; a first tracked object is an initial head position image obtained by processing the current door image with the head detection training model;
acquiring the current first head position value of each first tracked object whose result matching degree is greater than or equal to a preset matching degree, updating the current first head position value to the detected head position value of the matched detection object, and recording the number of updates of the current first head position value;
processing each second tracked object in the tracking queue with a tracking algorithm, and updating the current second head position value of each second tracked object to the tracked head position value given by the tracking result; a second tracked object is a first tracked object whose head position value has not been updated in the current frame and whose number of updates is greater than a preset number;
and processing each current first head position value and each current second head position value to obtain a passenger flow statistical result.
In one embodiment, the step of processing each current first head position value and each current second head position value to obtain the passenger flow statistics comprises:
when the initial position value of the first tracked object is smaller than the boarding line threshold and its current first head position value is larger than the boarding line threshold, or the initial position value of the second tracked object is smaller than the boarding line threshold and its current second head position value is larger than the boarding line threshold, accumulating the boarding passenger flow data to obtain a boarding passenger flow statistical result;
and when the initial position value of the first tracked object is larger than the alighting line threshold and its current first head position value is smaller than the alighting line threshold, or the initial position value of the second tracked object is larger than the alighting line threshold and its current second head position value is smaller than the alighting line threshold, accumulating the alighting passenger flow data to obtain an alighting passenger flow statistical result.
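The boarding and alighting rules above can be sketched as a line-crossing check on head row coordinates (a minimal illustration only; the names `count_crossings`, `board_line` and `alight_line` are assumptions, and rows are taken to grow toward the cabin side):

```python
def count_crossings(tracks, board_line, alight_line):
    """Count boarding/alighting events from (initial_y, current_y) head positions.

    A head whose initial position is outside the boarding line and whose
    current position has crossed it counts as boarding; the symmetric case
    for the alighting line counts as alighting.
    """
    boarded = alighted = 0
    for initial_y, current_y in tracks:
        if initial_y < board_line and current_y > board_line:
            boarded += 1          # crossed the boarding line inward
        elif initial_y > alight_line and current_y < alight_line:
            alighted += 1         # crossed the alighting line outward
    return boarded, alighted
```

For example, with `board_line = 180` and `alight_line = 20`, the tracks `[(10, 200), (220, 5), (100, 110)]` yield one boarding, one alighting, and one undecided head.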
In one embodiment, the step of sequentially performing similarity matching between the detection objects in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree includes:
performing grayscale processing on the acquired original door image to obtain a grayscale image; and performing Gamma correction on the grayscale image to obtain the current door image.
In one embodiment, the step of performing Gamma correction on the grayscale image to obtain the current door image is followed by:
performing size transformation on the current door image to obtain a plurality of transformed images;
sequentially processing each transformed image with the HOG feature algorithm through a preset detection window to obtain a plurality of sets of image feature data;
and processing the image feature data with an SVM classifier to obtain a plurality of detection objects and a plurality of first tracked objects.
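A minimal sketch of the classification stage: the HOG feature vector from each detection window is scored by a trained linear SVM's decision function, and windows with positive scores are kept as head candidates (the weights and bias here are assumed placeholders, not trained values):

```python
def linear_svm_score(features, weights, bias):
    """Decision value of a trained linear SVM on one HOG feature vector:
    positive means head, negative means background."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def classify_windows(windows, weights, bias):
    """windows: list of (window_id, hog_features) pairs.
    Keep only the detection windows the SVM scores as heads."""
    return [wid for wid, feats in windows
            if linear_svm_score(feats, weights, bias) > 0]
```

In practice the weights come from training the head detection model; the same decision rule is applied at every scale and window position.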
In one embodiment, the step of processing each current first head position value and each current second head position value to obtain the passenger flow statistics comprises:
processing each third tracked object in the tracking queue with the tracking algorithm, and updating the current third head position value of each third tracked object to the tracked head position value given by the tracking result; a third tracked object is a first tracked object whose head position value has not been updated in the current frame and whose head position difference is greater than a preset displacement value; the head position difference is obtained by subtracting the detected head position value from the current first head position value.
In one embodiment, the method further comprises the following step:
deleting from the tracking queue every first tracked object whose number of updates is less than or equal to the preset number and whose head position difference is less than or equal to the preset displacement value.
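Taken together, the rules for a first tracked object that was not matched this frame reduce to a triage function (a sketch only; `min_updates` and `min_disp` stand for the preset number and the preset displacement value named in the text):

```python
def triage_stale_track(update_count, displacement, min_updates, min_disp):
    """Decide what to do with a track whose head position was not matched
    in the current frame: keep tracking it if it has either a solid
    detection history or significant recent motion; otherwise drop it.
    """
    if update_count > min_updates or displacement > min_disp:
        return "track"   # hand over to the tracking algorithm
    return "delete"      # likely a false detection; remove from the queue
```

This is how unnecessary tracking work is avoided: only tracks with evidence of being a real moving head are given to the (more expensive) tracker.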
In one embodiment, the step of processing each second tracked object with the tracking algorithm comprises:
processing each second tracked object with the KCF tracking algorithm.
In another aspect, an embodiment of the invention provides a bus passenger flow statistics device, comprising:
an object matching unit, configured to sequentially perform similarity matching between the detection objects in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree; a detection object is a current head position image obtained by processing the current door image with a head detection training model; a first tracked object is an initial head position image obtained by processing the current door image with the head detection training model;
a first head position updating unit, configured to acquire the current first head position value of each first tracked object whose result matching degree is greater than or equal to the preset matching degree, update the current first head position value to the detected head position value of the matched detection object, and record the number of updates of the current first head position value;
a second head position updating unit, configured to process each second tracked object in the tracking queue with a tracking algorithm and update the current second head position value of each second tracked object to the tracked head position value given by the tracking result; a second tracked object is a first tracked object whose head position value has not been updated in the current frame and whose number of updates is greater than the preset number;
and a passenger flow statistics unit, configured to process each current first head position value and each current second head position value to obtain a passenger flow statistical result.
In another aspect, an embodiment of the invention provides a bus passenger flow statistics system comprising a memory and a processor, the memory storing a computer program; when executing the program, the processor implements the steps of the bus passenger flow statistics method above.
In one embodiment, the system further comprises a camera and a display connected to the processor.
In another aspect, an embodiment of the invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the bus passenger flow statistics method above.
The above technical solutions have the following advantages and beneficial effects:
similarity matching is performed sequentially between the detection objects in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree; for each result matching degree greater than or equal to the preset matching degree, the current first head position value of the corresponding first tracked object is acquired and updated to the detected head position value of the matched detection object, and the number of updates is recorded; each second tracked object in the tracking queue is processed with a tracking algorithm, and its current second head position value is updated to the tracked head position value given by the tracking result; each current first head position value and each current second head position value is then processed to obtain the passenger flow statistical result. The method relies mainly on the detection algorithm, with the tracking algorithm as an auxiliary that is started only when a target cannot be detected; this reduces the computation required for passenger flow statistics, increases the running speed, and reduces the false detection rate.
Drawings
FIG. 1 is a diagram of an application environment for the bus passenger flow statistics method in one embodiment;
FIG. 2 is a schematic flow chart diagram of a bus passenger flow statistics method in one embodiment;
FIG. 3 is a flow chart illustrating a passenger flow statistics step in one embodiment;
FIG. 4 is a schematic flow chart of the image pre-processing step in one embodiment;
FIG. 5 is a schematic flow chart of a human head image acquisition step in one embodiment;
FIG. 6 is a flowchart illustrating a head image obtaining step according to an embodiment;
FIG. 7 is a schematic flow chart diagram of a bus passenger flow statistics method in another embodiment;
FIG. 8 is a flow chart diagram of a bus passenger flow statistics method in one embodiment;
FIG. 9 is a flowchart of the tracking decision step in one embodiment;
FIG. 10 is a flowchart of the steps of the tracking process in one embodiment;
FIG. 11 is a schematic view of an original image at a door of an embodiment;
FIG. 12 is a block diagram of a bus passenger flow statistics apparatus in one embodiment;
FIG. 13 is an internal block diagram of a bus passenger flow statistics system in one embodiment;
FIG. 14 is a schematic structural diagram of a bus passenger flow statistics system in another embodiment.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The bus passenger flow statistics method provided by this application can be applied in the environment shown in FIG. 1. The server 102 is connected to the bus camera 104 through a network. The server 102 sequentially performs similarity matching between the detection objects in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree; acquires the current first head position value of each first tracked object whose result matching degree is greater than or equal to the preset matching degree, updates it to the detected head position value of the matched detection object, and records the number of updates; processes each second tracked object in the tracking queue with a tracking algorithm, updating its current second head position value to the tracked head position value given by the tracking result; and processes each current first head position value and each current second head position value to obtain the passenger flow statistical result. The server 102 may be implemented as a stand-alone server or as a server cluster composed of multiple servers, and may be, but is not limited to, various personal computers and notebook computers. The bus camera 104 may be a wired or wireless camera.
Conventional bus passenger flow statistics are mainly obtained in three ways:
1. Manual statistics: an operator remotely monitors the camera feed and records the number of passengers getting on or off. Disadvantages: coverage cannot be complete, so only sampled records are possible, and considerable labour cost is consumed.
2. Hough circle transform and Kalman prediction: passengers' heads are detected with the Hough circle transform, tracked with Kalman prediction, and the boarding and alighting counts are judged from the trajectories. Disadvantages: circle-like regions cause false detections, and the many parameter settings cannot cope with complex environments.
3. Pedestrian detection: pedestrian targets are extracted by background subtraction and their motion trajectories analysed to obtain passenger flow data. Disadvantages: under occlusion and congestion the false detection rate is high.
In addition, running the detection algorithm and the tracking algorithm in parallel requires a large amount of computation, making real-time processing on hardware difficult and increasing hardware cost.
In the embodiments of the invention, bus passenger flow statistics are driven mainly by the detection algorithm, with the tracking algorithm as an auxiliary. The tracking algorithm is started only when an object cannot be detected, to track that undetected object; this reduces meaningless computation, increases the running speed, and reduces the false detection rate.
In one embodiment, as shown in FIG. 2, a bus passenger flow statistics method is provided. Taking its application to the server in FIG. 1 as an example, the method includes the following steps:
Step S210: sequentially perform similarity matching between the detection objects in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree; a detection object is a current head position image obtained by processing the current door image with a head detection training model; a first tracked object is an initial head position image obtained by processing the current door image with the head detection training model.
The detection queue is a buffer storing the currently acquired detection objects; it stores the current head position images obtained by processing the current door image. The tracking queue is a buffer storing the first tracked objects; it stores the initial head position images obtained by processing the current door image. The result matching degree is the similarity between a detection object and a first tracked object. The head detection training model is a model obtained by training a head detection model. A current head position image may be any of the head position images obtained by processing the current door image, or one that does not appear for the first time among them; an initial head position image is a head position image that appears for the first time among them.
Specifically, the current door image is processed with the head detection training model to obtain each current head position image and each initial head position image. Each current head position image is stored in the detection queue as a detection object, and each initial head position image is stored in the tracking queue as a tracked object. Similarity matching is then performed sequentially between the detection objects in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree.
Preferably, pixel overlap-ratio matching is performed sequentially between the detection object in the detection queue and each first tracked object in the tracking queue to obtain each result matching degree.
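The pixel overlap-ratio matching can be approximated with intersection-over-union of the two head bounding boxes (the IoU formulation is an assumption; the text only specifies a pixel overlap ratio compared against a preset matching degree):

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) head boxes, used as a
    stand-in for the pixel overlap-ratio matching degree."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def best_match(detection, tracks, threshold):
    """Return the index of the tracked object best matching `detection`,
    or None if no overlap reaches the preset matching degree."""
    scores = [overlap_ratio(detection, t) for t in tracks]
    if not scores:
        return None
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None
```

A matched pair (score at or above the threshold) triggers the position update of step S220; an unmatched track falls through to the tracking branch of step S230.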
Step S220: acquire the current first head position value of each first tracked object whose result matching degree is greater than or equal to the preset matching degree, update the current first head position value to the detected head position value of the matched detection object, and record the number of updates of the current first head position value.
The preset matching degree is a preset similarity threshold, which may be a preset threshold on the number of overlapping pixels. The current first head position value is the head position value of the first tracked object; the detected head position value is the head position value of the detection object.
Specifically, when a result matching degree is greater than or equal to the preset matching degree, the current first head position value of the corresponding first tracked object is acquired, updated to the detected head position value of the matched detection object, and the number of updates of the current first head position value is recorded.
Preferably, a two-dimensional coordinate system is established for the current door image, and the coordinates of the centre pixel of the first tracked object's head are used as the current first head position value.
Step S230: process each second tracked object in the tracking queue with a tracking algorithm, and update the current second head position value of each second tracked object to the tracked head position value given by the tracking result; a second tracked object is a first tracked object whose head position value has not been updated in the current frame and whose number of updates is greater than the preset number.
A second tracked object is a first tracked object in the tracking queue that meets the tracking condition, i.e. its head position value has not been updated in the current frame and its number of updates is greater than the preset number. The current second head position value is the head position value of the second tracked object; the tracked head position value is the head position value given by the tracking result.
Specifically, every tracked object in the tracking queue whose head position value has not been updated in the current frame and whose number of updates is greater than the preset number is taken as a second tracked object. Each second tracked object is processed with the tracking algorithm, and its current second head position value is updated to the tracked head position value given by the tracking result.
Preferably, each second tracked object in the tracking queue is processed with the tracking algorithm, the head position value of the head image with the highest response is selected as the tracked head position value, and the current second head position value of the second tracked object is updated to it.
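Selecting the tracker output with the highest response reduces to an argmax over candidate positions (the response values would come from the correlation filter's response map, e.g. KCF; here they are assumed inputs):

```python
def pick_tracked_position(candidates):
    """candidates: list of (response, (x, y)) pairs from the tracker's
    search window. Return the position with the highest response, which
    becomes the new tracked head position value."""
    response, position = max(candidates, key=lambda c: c[0])
    return position
```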
Step S240: process each current first head position value and each current second head position value to obtain the passenger flow statistical result.
Specifically, each current first head position value and each current second head position value is compared with the boarding and alighting line thresholds to obtain the bus passenger flow statistical result.
In the above embodiment, when a result matching degree is greater than or equal to the preset matching degree, the current first head position value of the corresponding first tracked object is acquired and updated to the detected head position value of the matched detection object. Each first tracked object whose head position value has not been updated in the current frame and whose number of updates is greater than the preset number (i.e. each second tracked object) is processed with the tracking algorithm, and its current second head position value is updated to the tracked head position value given by the tracking result. Each current first head position value and each current second head position value is then processed to obtain the passenger flow statistical result. The method relies mainly on the detection algorithm, with the tracking algorithm as an auxiliary that is started only when a target cannot be detected; this reduces meaningless computation, increases the running speed, and reduces the false detection rate.
In one embodiment, as shown in FIG. 3, a flow chart of the passenger flow statistics step is given. Step S240 includes:
Step S310: when the initial position value of the first tracked object is smaller than the boarding line threshold and its current first head position value is larger than the boarding line threshold, or the initial position value of the second tracked object is smaller than the boarding line threshold and its current second head position value is larger than the boarding line threshold, accumulate the boarding passenger flow data to obtain the boarding passenger flow statistical result.
Here, the initial position value is the initial head position value of a tracked object (first or second tracked object); it may be the coordinates of the initial centre pixel of the tracked head.
Specifically, the current door image is divided into a number of rows, and the nth row is set as the boarding line threshold. When the initial position value of a tracked object (first or second) is smaller than the boarding line threshold and its current head position value is larger than the boarding line threshold, the tracked object is judged to be boarding, and the boarding passenger flow data is accumulated to obtain the boarding passenger flow statistical result.
Preferably, the boarding line threshold is the total number of rows of the current door image divided by 1.3.
Step S320: when the initial position value of the first tracked object is larger than the alighting line threshold and its current first head position value is smaller than the alighting line threshold, or the initial position value of the second tracked object is larger than the alighting line threshold and its current second head position value is smaller than the alighting line threshold, accumulate the alighting passenger flow data to obtain the alighting passenger flow statistical result.
Specifically, the current door image is divided into a number of rows, and the mth row is set as the alighting line threshold. When the initial position value of a tracked object (first or second) is larger than the alighting line threshold and its current head position value is smaller than the alighting line threshold, the tracked object is judged to be alighting, and the alighting passenger flow data is accumulated to obtain the alighting passenger flow statistical result.
Preferably, the alighting line threshold is the total number of rows of the current door image divided by 15.
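With the constants stated above, both line thresholds follow directly from the image height in rows (the constants 1.3 and 15 are from the text; returning fractional row values rather than rounding is an assumption):

```python
def line_thresholds(total_rows):
    """Boarding line at rows / 1.3 (near the cabin side) and alighting
    line at rows / 15 (near the door side), per the stated constants."""
    board_line = total_rows / 1.3
    alight_line = total_rows / 15
    return board_line, alight_line
```

For a 480-row door image this places the boarding line near row 369 and the alighting line at row 32, so a head must traverse most of the frame to be counted.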
In one embodiment, when a tracked object (first or second) is judged to be boarding or alighting, it is deleted from the tracking queue. This prevents boarded or alighted objects from being tracked and counted repeatedly, and frees storage space in the tracking queue.
In one embodiment, when passenger flow statistics for the current door image are complete, the detection queue is emptied. This prevents repeated counting in the next round of statistics on the current door image and releases the storage space of the detection queue.
In one embodiment, as shown in FIG. 4, a flow chart of the image pre-processing step is shown. Step S210 includes:
Step S410, performing grayscale processing on the acquired original door image to obtain a grayscale image.
The original door image refers to the original image at the vehicle door captured by the camera. Specifically, the original door image is converted to grayscale to obtain the grayscale image.
Step S420, performing Gamma correction on the grayscale image to obtain the current door image.
Specifically, the grayscale image is corrected through Gamma correction to obtain the current door image, which reduces the influence of lighting on the detected head images.
In this embodiment, preprocessing the original door image improves the accuracy of the passenger flow statistical processing.
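A minimal sketch of the Gamma-correction step, using the Y = I^γ form with γ = 0.5 given later in the text; it operates on single normalized intensities, whereas a real implementation would process full image arrays:

```python
# Gamma correction Y = I**gamma on a normalized [0, 1] grayscale intensity.
# gamma = 0.5 brightens dark regions, reducing the influence of poor lighting.
def gamma_correct(intensity, gamma=0.5):
    return intensity ** gamma

row = [0.0, 0.25, 1.0]
corrected = [gamma_correct(p) for p in row]   # mid-tones are lifted
```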
In one embodiment, as shown in fig. 5, a schematic flow chart of the human head image acquisition step is shown. Step S420 is followed by:
Step S510, transforming the size of the current door image to obtain a plurality of transformed images.
Specifically, the current door image is enlarged by preset enlargement factors to obtain enlarged transformed images, and reduced by preset reduction factors to obtain reduced transformed images.
Step S520, sequentially performing HOG feature processing on each transformed image through a preset detection window to obtain a plurality of pieces of image feature data.
Specifically, a preset detection window is initialized, and each transformed image is traversed in turn by the preset detection window for HOG (Histogram of Oriented Gradients) feature processing, obtaining the feature data of each image.
Step S530, processing the image feature data through an SVM classifier to obtain a plurality of detection objects and a plurality of first tracking objects.
Specifically, each piece of image feature data is passed to the SVM (Support Vector Machine) classifier and processed to obtain the detection objects and the first tracking objects.
In the above embodiment, the size of the current door image is transformed to obtain a plurality of transformed images. Continuously changing the image size through the scaling factor allows the detector to adapt to the changing size of the target during motion.
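The size-transformation step (S510) can be sketched as a simple schedule of enlarged and reduced image sizes. The 1.2 scaling factor and two levels in each direction are assumptions for illustration, not values from the patent:

```python
# Sketch of step S510: generate reduced, original, and enlarged image sizes
# so that detection can adapt to heads of different apparent sizes.
def pyramid_sizes(width, height, factor=1.2, levels_up=2, levels_down=2):
    sizes = []
    for k in range(-levels_down, levels_up + 1):
        scale = factor ** k
        sizes.append((round(width * scale), round(height * scale)))
    return sizes

sizes = pyramid_sizes(640, 480)   # five sizes, smallest to largest
```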
In one embodiment, as shown in FIG. 6, a flow chart diagram of the human head image acquisition step is shown. The workflow of the human head image acquisition step is as follows:
The input original door image is converted into a grayscale image, and Gamma correction is applied to the grayscale image to obtain the current door image. The Gamma correction formula is:
Y(x,y) = I(x,y)^γ
where γ is the correction coefficient, Y(x,y) is the current door image, and I(x,y) is the grayscale image. Preferably, γ = 0.5.
A preset detection window is generated, and HOG feature processing is performed on the current door image through the window to obtain image feature data. Each piece of image feature data is passed to the head detection training model, processed by its SVM classifier, and the resulting head images are placed in the detection queue according to the classification result. Whether the input image has been fully scanned is then checked; if not, the preset detection window is moved in the horizontal and vertical directions and HOG feature processing continues until the entire current door image has been scanned. The size of the original door image is then adjusted by the scaling coefficient and the head image acquisition process is repeated on the resized image, yielding the detection objects and the first tracking objects.
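The window traversal above can be sketched as follows, assuming a 48×48 window (matching the training sample size given later) and a hypothetical stride of 8 pixels; a real implementation would compute HOG features for each window and classify them with the SVM:

```python
# Sketch of the sliding-window traversal: move a fixed window across the
# image in the horizontal and vertical directions, collecting positions.
def sliding_windows(img_w, img_h, win=48, stride=8):
    coords = []
    for y in range(0, img_h - win + 1, stride):
        for x in range(0, img_w - win + 1, stride):
            coords.append((x, y))
    return coords

windows = sliding_windows(128, 96)   # all 48x48 window positions at stride 8
```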
In one embodiment, step S240 is preceded by the following step:
processing each third tracking object in the tracking queue through a tracking algorithm, and updating the current third head position value of each third tracked object to the tracked head position value given by the tracking result; the third tracking object is a first tracking object in the tracking queue whose head position value has not been updated in the current frame and whose head position difference is greater than the preset displacement value; the head position difference is obtained by a difference operation between the current first head position value and the detected head position value.
The third tracked object refers to a first tracked object in the tracking queue that meets the tracking condition: one whose head position value has not been updated in the current frame and whose head position difference is greater than the preset displacement value. The current third head position value refers to the head position value of the third tracked object.
Specifically, the tracking objects in the tracking queue that have not been updated in the current frame and whose head position difference is greater than the preset displacement value are obtained and taken as third tracking objects. Each third tracked object is processed by the tracking algorithm, and its current third head position value is updated to the tracked head position value given by the tracking result, which improves the accuracy of the bus passenger flow statistics and reduces the false detection rate.
In one embodiment, the bus passenger flow statistical method further comprises the following steps:
The first tracking objects in the tracking queue whose update count is less than or equal to the preset count and whose head position difference is less than or equal to the preset displacement value are deleted.
Specifically, such first tracking objects are obtained from the tracking queue and deleted. Whether an image is a real head image is judged from the update count and the head position displacement; if it is a false head image, tracking is not started, which reduces the false detection rate.
Preferably, a tracked object detected fewer than 3 times and with a head position difference of less than 15 pixels is treated as a false head image.
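The pruning rule can be sketched as follows; the thresholds (3 updates, 15 pixels) come from the text, while the dictionary field names are assumptions:

```python
# Sketch of false-head pruning: delete tracks whose update count and head
# displacement both stay at or below the thresholds; keep the rest.
def prune_tracks(tracks, max_updates=3, max_shift=15):
    return [t for t in tracks
            if t["updates"] > max_updates or t["shift"] > max_shift]

tracks = [
    {"id": 1, "updates": 2, "shift": 10},   # false head: pruned
    {"id": 2, "updates": 5, "shift": 4},    # updated often enough: kept
    {"id": 3, "updates": 1, "shift": 30},   # moved far enough: kept
]
kept_ids = [t["id"] for t in prune_tracks(tracks)]   # [2, 3]
```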
In one embodiment, the step of processing each second tracked object by the tracking algorithm comprises:
Each second tracked object is processed by the KCF (Kernelized Correlation Filters) tracking algorithm.
Specifically, the KCF algorithm collects positive and negative samples using a circulant matrix of the region around the target, trains a target detector using ridge regression, and uses the diagonalizability of circulant matrices in Fourier space to convert matrix operations into the Hadamard product (elementwise multiplication) of vectors, which greatly reduces the amount of computation, increases the running speed, and allows the algorithm to meet real-time requirements.
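A toy numerical check of the circulant-matrix property KCF exploits: a circulant matrix is diagonalized by the DFT, so multiplying by it equals an elementwise product in Fourier space. The O(n²) pure-Python DFT below is for illustration only; real implementations use an FFT:

```python
import cmath

def dft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(v):
    n = len(v)
    return [sum(v[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circulant_matvec(c, x):
    # Product with the circulant matrix whose first column is c
    # (i.e. the circular convolution of c and x).
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

c = [4.0, 1.0, 0.0, 1.0]
x = [1.0, 2.0, 3.0, 4.0]
direct = circulant_matvec(c, x)                                  # O(n^2) way
via_fourier = [v.real for v in idft([a * b for a, b in zip(dft(c), dft(x))])]
# direct and via_fourier agree: matrix products become Hadamard products.
```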
In one embodiment, the tracking algorithm used to process each second tracked object may instead be the CN (Adaptive Color Attributes for Real-time Visual Tracking) tracking algorithm or the Mean Shift tracking algorithm.
In one embodiment, as shown in fig. 7, a flow chart of a bus passenger flow statistics method in another embodiment is shown, and the method includes the following steps:
Step S710, similarity matching is performed between the detection objects in the detection queue and each first tracking object in the tracking queue in sequence to obtain each result matching degree; the detection object is a current head position image obtained by processing the current door image through the head detection training model; the first tracking object is an initial head position image obtained by processing the current door image through the head detection training model.
Step S720, obtaining a current first human head position value of the first tracked object corresponding to the result matching degree which is greater than or equal to the preset matching degree, and updating the current first human head position value into a detection human head position value of the matched detection object; and recording the updating times of the current first head position value.
Step S730, processing each second tracking object in the tracking queue through a tracking algorithm; updating the current second human head position value of the second tracked object into the tracked human head position value of the tracking processing result according to the tracking processing result; the second tracking object is the first tracking object which does not update the head position value currently in the tracking queue and the updating times are more than the preset times.
Step S740, processing each third tracking object in the tracking queue through a tracking algorithm; updating the current third human head position value of the third tracked object into the tracked human head position value of the tracking processing result according to the tracking processing result; the third tracking object is the first tracking object which does not update the head position value currently in the tracking queue and the head position difference value is larger than the preset displacement value; the human head position difference is obtained by processing the current first human head position value and the detected human head position value through difference operation.
And step S750, processing each current first head position value, each current second head position value and each current third head position value to obtain a passenger flow statistical result.
In this embodiment, the detection algorithm serves as the main algorithm and the tracking algorithm as the auxiliary algorithm; the tracking algorithm is started to track a target only when the target cannot be detected, which reduces unnecessary computation, increases the running speed, and reduces the false detection rate.
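The similarity-matching step (S710) matches by pixel-point coincidence between a detection and each tracked head; an intersection-over-union of bounding boxes is one common way to compute such a matching degree, sketched here under that assumption (the (x, y, w, h) box format is also an assumption):

```python
# Hedged sketch of step S710's matching degree as a bounding-box overlap
# ratio; a detection whose degree exceeds the preset matching degree would
# update the corresponding tracked object's head position value.
def overlap_degree(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

degree = overlap_degree((0, 0, 10, 10), (5, 0, 10, 10))   # 50/150 = 1/3
```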
In one embodiment, the specific workflow for generating the human head detection training model is as follows:
The acquired positive head sample data and negative head sample data (non-head photos) are normalized to a size of 48 × 48; the samples are converted into grayscale images, and Gamma correction is applied to the grayscale images to obtain corrected images. The horizontal and vertical gradient components of each corrected image are obtained and processed to obtain the gradient magnitude and gradient direction of each pixel of the corrected image. The specific process is as follows:
the gradient component of the corrected image in the x direction (horizontal direction) is calculated by the following formula:
Gx(x,y) = H(x+1,y) − H(x−1,y)
where Gx(x,y) is the horizontal gradient at pixel (x,y) in the corrected image, and H(x,y) is the pixel value at pixel (x,y) in the corrected image.
The gradient component of the corrected image in the y direction (vertical direction) is calculated by the following formula:
Gy(x,y) = H(x,y+1) − H(x,y−1)
where Gy(x,y) is the vertical gradient at pixel (x,y) in the corrected image.
The gradient magnitude G(x,y) at pixel (x,y) is:
G(x,y) = √(Gx(x,y)² + Gy(x,y)²)
The gradient direction α(x,y) at pixel (x,y) is:
α(x,y) = arctan(Gy(x,y) / Gx(x,y))
A bin width of 2π/9 is chosen for the 2π range of gradient directions, i.e., the histogram has 9 bins. The minimum unit (cell) of the corrected image is defined as 8 × 8 pixels, and an image block of the corrected image is 16 × 16 pixels. The direction histogram of each cell is converted into a one-dimensional vector, i.e., the gradient directions are binned according to the preset bin width, giving 9 features per cell, so each block contains 4 × 9 = 36 features. A 48 × 48 corrected image therefore contains 5 × 5 = 25 block positions (a 16 × 16 block sliding one cell at a time), yielding 25 × 36 = 900 features. The SVM kernel function is set to CvSVM::LINEAR, the SVM type is set to CvSVM::C_SVC (the C-support vector classifier), and the obtained features are input into the SVM classifier for training to obtain the head detection training model.
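The feature count above can be verified arithmetically; this sketch assumes the 48×48 sample, 8×8 cells, 16×16 blocks sliding one cell at a time, and 9 orientation bins stated in the text:

```python
# Arithmetic check of the HOG feature count: 5x5 block positions, 4 cells
# per block, 9 bins per cell -> 25 * 36 = 900 features per 48x48 sample.
def hog_feature_count(img=48, cell=8, block=16, bins=9):
    blocks_per_side = (img - block) // cell + 1       # 5 block positions
    cells_per_block = (block // cell) ** 2            # 4 cells per block
    return blocks_per_side ** 2 * cells_per_block * bins

count = hog_feature_count()   # 900
```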
In one embodiment, as shown in fig. 8, a flow structure diagram of a bus passenger flow statistics method is shown. The overall working process of the bus passenger flow comprises the following steps:
According to the bus door-opening signal, the camera captures the current frame image (such as the original door image shown in fig. 11) and transmits it to the server. The server preprocesses the frame to obtain the current door image, then performs HOG-feature head detection on it to obtain the detection objects and the first tracking objects. The tracking management mechanism runs and judges whether there are objects in the tracking queue that need tracking; if not, the current first head position value of each matched first tracking object is updated to the detected head position value of the matching detection object; if so, the tracking algorithm processes the tracked objects, and the current second head position value of each second tracking object is updated to the tracked head position value given by the tracking result. Each current first head position value and each current second head position value are processed to obtain the passenger flow statistics, the detection queue is emptied, and detection of the next frame begins.
It should be noted that fig. 11 is a current frame of the door image captured by the camera, where "57640" may indicate the serial number of the current frame; "2017-06-23 12:46:12" may represent the capture time of the frame; "22KM/H" is the bus speed at the capture time; "U:00" means the counted number of boarding passengers is currently 0; and "D:01" means the counted number of alighting passengers is currently 1. The transverse line close to the door in the image is the alighting line, and the transverse line parallel to it is the boarding line.
In an embodiment, as shown in fig. 9, a flowchart of the tracking determination step is as follows:
The detection objects are taken out of the detection queue in sequence, and whether each detection object exists in the tracking queue is judged; if it does, the information of the corresponding tracking object is updated, such as the update count, the displacement distance, and a flag recording that it was detected in the current frame. The tracking queue is then traversed for tracking objects whose head position value has not been updated in the current frame and whose detection count is less than 3, and the tracking algorithm is started to process them.
In one embodiment, as shown in fig. 10, a flowchart of the tracking processing step is as follows:
1. Fast training of the tracking algorithm:
The tracked object to be tracked is extracted from the tracking queue, and its head position box is enlarged by a factor of 2.5 to obtain the region around the target; the training sample set is obtained by cyclic shifts of this region. The training sample set consists of the target region and its shifted copies; the label assigned to each sample follows the criterion that the smaller the shift, the higher the probability of a positive sample, and the ridge regression function is trained on this sample set to obtain the ridge regression training parameters. The ridge regression objective is:
min over ω of Σᵢ (f(xᵢ) − yᵢ)² + λ‖ω‖², with f(x) = ω^T x
where X is the training sample set obtained by cyclic shifts of the region around the target and is a circulant matrix; λ is a regularization parameter; yᵢ is the label in the label set Y corresponding to sample xᵢ; and ω is the coefficient vector of the regression function fitted to the points formed by the training sample set X and its corresponding label set Y.
A non-linear mapping function φ is introduced so that the mapped samples are linearly separable in the kernel space, and ω is expressed as a vector in the space spanned by the mapped samples:
ω = Σᵢ αᵢ φ(xᵢ)
The ridge regression training parameter α is then obtained by setting the derivative of the objective with respect to the column vector α to 0; using the diagonalization property of the circulant matrix in Fourier space, the solution is:
α̂ = ŷ / (k̂^xx + λ)
where α̂ is the discrete Fourier transform of α, k̂^xx is the discrete Fourier transform of the first row of the kernel matrix K = φ(X)φ(X)^T, ŷ is the discrete Fourier transform of y, and the division is elementwise.
2. Fast detection of the tracking algorithm:
The sample set to be detected is obtained by cyclic shifts of the region around the predicted target position: the candidate samples are z_j = P^j z, where z is the image patch at the predicted region and P is the cyclic shift operator. The candidate with the largest response
f(z_j) = α^T φ(X) φ(z_j)
is selected as the new target region detected from z_j, which determines the target's new position; here α is the training parameter vector and φ is the non-linear mapping function.
Finally, it can be derived that, in the Fourier domain:
f̂(z) = k̂^xz ⊙ α̂
where k̂^xz is the discrete Fourier transform of K_xz, the first row of the matrix (K^z)^T; K^z is the kernel matrix in the kernel space between the test samples and the training samples; and ⊙ denotes elementwise multiplication. The responses of the prediction sample set are calculated by this formula, and the sample with the maximum response is selected as the current head position value of the tracked object.
It should be understood that although the steps in the flowcharts of figs. 2 to 5 and 7 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least a portion of the steps in figs. 2 to 5 and 7 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, a bus passenger flow statistics apparatus is provided, the apparatus comprises an object matching unit 110, a first head position updating unit 120, a second head position updating unit 130 and a passenger flow statistics unit 140, wherein:
an object matching unit 110, configured to perform similarity matching on the detection objects in the detection queue and each first tracking object in the tracking queue in sequence to obtain a matching degree of each result; the detection object is a current head position image obtained by processing a current car door image through a head detection training model; the first tracking object is an initial human head position image obtained by processing the current vehicle door image through a human head detection training model.
A first human head position updating unit 120, configured to obtain a current first human head position value of the first tracked object corresponding to a result matching degree that is greater than or equal to a preset matching degree, and update the current first human head position value to a detected human head position value of the matched detected object; and recording the updating times of the current first head position value.
A second head position updating unit 130 for processing each second tracked object in the tracking queue by a tracking algorithm; updating the current second human head position value of the second tracked object into the tracked human head position value of the tracking processing result according to the tracking processing result; the second tracking object is the first tracking object which does not update the head position value currently in the tracking queue and the updating times are more than the preset times.
The passenger flow statistics unit 140 is configured to process each current first head position value and each current second head position value to obtain a passenger flow statistics result.
It will be understood by those skilled in the art that the structure shown in fig. 12 is a block diagram of only a portion of the structure relevant to the present disclosure, and does not constitute a limitation on the bus traffic statistics system to which the present disclosure is applied, and a particular bus traffic statistics system may include more or less components than those shown in the figure, or combine certain components, or have a different arrangement of components.
In one embodiment, a bus passenger flow statistics system is provided, which may be a server, and the internal structure thereof may be as shown in fig. 13. The system includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the system is configured to provide computing and control capabilities. The memory of the system comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the system is used for storing the data of bus passenger flow statistical processing. The network interface of the system is used for communicating with an external terminal through network connection. The computer program is executed by a processor to implement a method of bus passenger flow statistics.
Those skilled in the art will appreciate that the structure shown in fig. 13 is a block diagram of only a portion of the structure associated with the present application and does not constitute a limitation on the bus traffic statistics system to which the present application is applied, and that a particular bus traffic statistics system may include more or less components than those shown, or some components may be combined, or have a different arrangement of components.
In one embodiment, a bus passenger flow statistics system is provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the following steps when executing the computer program:
carrying out similarity matching on the detection objects in the detection queue and each first tracking object in the tracking queue in sequence to obtain each result matching degree; the detection object is a current head position image obtained by processing a current car door image through a head detection training model; the first tracking object is an initial human head position image obtained by processing the current vehicle door image through a human head detection training model;
acquiring a current first human head position value of the first tracked object corresponding to the result matching degree which is greater than or equal to a preset matching degree, and updating the current first human head position value into a detection human head position value of the matched detection object; recording the updating times of the current first head position value;
processing each second tracking object in the tracking queue through a tracking algorithm; updating the current second human head position value of the second tracked object to the tracked human head position value of the tracking processing result according to the tracking processing result; the second tracking object is a first tracking object which is not updated with the numerical value of the head position currently in the tracking queue, and the updating times are more than the preset times;
and processing each current first head position value and each current second head position value to obtain a passenger flow statistical result.
In one embodiment, as shown in fig. 14, the bus passenger flow statistics system further comprises a camera 12 and a display 14 connected to the processor 10.
Specifically, the camera 12 transmits the original door image to the processor 10, and the processor 10 processes the original door image to obtain the detection object and the tracking object. The processor 10 tracks and counts the head images to be tracked based on the detection algorithm as the main algorithm and the tracking algorithm as the auxiliary algorithm, and obtains the bus passenger flow statistical result. The processor 10 may transmit the bus passenger flow statistics to the display 14, and display the bus passenger flow statistics through the display 14.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
carrying out similarity matching on the detection objects in the detection queue and each first tracking object in the tracking queue in sequence to obtain each result matching degree; the detection object is a current head position image obtained by processing a current car door image through a head detection training model; the first tracking object is an initial human head position image obtained by processing the current vehicle door image through a human head detection training model;
acquiring a current first human head position value of the first tracked object corresponding to the result matching degree which is greater than or equal to the preset matching degree, and updating the current first human head position value into a detection human head position value of the matched detection object; recording the updating times of the current first head position value;
processing each second tracking object in the tracking queue through a tracking algorithm; updating the current second human head position value of the second tracked object into the tracked human head position value of the tracking processing result according to the tracking processing result; the second tracking object is a first tracking object which does not update the head position value currently in the tracking queue and the updating times are more than the preset times;
and processing each current first person head position value and each current second person head position value to obtain a passenger flow statistical result.
For the specific way in which the computer program stored in the computer-readable storage medium realizes its functions when executed by the processor, reference may be made to the above description of the bus passenger flow statistical method, which is not repeated here. The modules in the computer-readable storage medium may be implemented in whole or in part by software, hardware, or a combination thereof.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In one embodiment, a storage medium is further provided, on which a computer program is stored; when executed by a processor, the program implements the bus passenger flow statistical method of any of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
By implementing the flows of the embodiments of the bus passenger flow statistical method described above, the computer storage medium and the computer program stored in it can shorten the operation period and greatly improve the speed and efficiency of the statistical processing.
The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A bus passenger flow statistical method is characterized by comprising the following steps:
carrying out similarity matching on the detection objects in the detection queue and each first tracking object in the tracking queue in sequence to obtain each result matching degree; the detection object is a current head position image obtained by processing the current door image through the head detection training model; the first tracking object is an initial head position image obtained by processing the current door image through the head detection training model; each result matching degree is obtained by sequentially matching the degree of pixel-point coincidence between the detection object in the detection queue and each first tracking object in the tracking queue; the current head position image is either a whole head position image obtained by processing the current door image or a head position image that does not appear for the first time among the whole head position images, and the initial head position image is a head position image that appears for the first time among the whole head position images obtained by processing the current door image;
acquiring a current first head position value of the first tracking object whose result matching degree is greater than or equal to a preset matching degree, and updating the current first head position value to the detected head position value of the matched detection object; and recording the number of updates of the current first head position value;
processing each second tracking object in the tracking queue through a tracking algorithm, and updating a current second head position value of the second tracking object to the tracked head position value given by the tracking processing result; the second tracking object is a first tracking object whose head position value has not been updated in the current round and whose number of updates is greater than a preset number;
and processing each current first head position value and each current second head position value to obtain a passenger flow statistical result.
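The pixel-coincidence matching of claim 1 can be sketched as follows. This is a minimal sketch under stated assumptions: the claim does not fix the coincidence formula, so intersection-over-union of head bounding boxes is used here as an assumed metric, and the 0.5 preset matching degree in `match_detections` is an illustrative value.

```python
def match_degree(box_a, box_b):
    """Pixel-coincidence matching degree of two head boxes (x, y, w, h),
    computed here as intersection-over-union -- an assumed metric."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def match_detections(detections, tracks, threshold=0.5):
    """Pair each detection object with the first tracking object whose
    coincidence is highest and at least the preset matching degree."""
    pairs = []
    for d in detections:
        best = max(tracks, key=lambda t: match_degree(d, t), default=None)
        if best is not None and match_degree(d, best) >= threshold:
            pairs.append((d, best))
    return pairs
```

Each returned pair would then drive the position update of claim 1: the matched track's current first head position value is replaced by the detection's head position value.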
2. The method according to claim 1, wherein the step of processing each of the current first head position values and each of the current second head position values to obtain the passenger flow statistics result comprises:
when an initial position value of the first tracking object is smaller than a boarding line threshold and the current first head position value is larger than the boarding line threshold, or an initial position value of the second tracking object is smaller than the boarding line threshold and the current second head position value is larger than the boarding line threshold, accumulating boarding passenger flow data to obtain a boarding passenger flow statistical result;
and when the initial position value of the first tracking object is larger than a getting-off line threshold and the current first head position value is smaller than the getting-off line threshold, or the initial position value of the second tracking object is larger than the getting-off line threshold and the current second head position value is smaller than the getting-off line threshold, accumulating getting-off passenger flow data to obtain a getting-off passenger flow statistical result.
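The line-crossing logic of claim 2 can be sketched as below. Assumptions are labeled: head positions are single image-row coordinates, each track is reduced to an `(initial_y, current_y)` pair, and passing both thresholds as equal values is only an example; actual boarding and getting-off line thresholds depend on camera installation.

```python
def count_crossings(tracks, board_line, alight_line):
    """Accumulate boarding / getting-off counts from tracked head positions.
    A head that starts below the boarding line threshold and is now above it
    counts as boarding; the opposite crossing of the getting-off line
    threshold counts as getting off."""
    boarded = alighted = 0
    for initial_y, current_y in tracks:
        if initial_y < board_line and current_y > board_line:
            boarded += 1
        elif initial_y > alight_line and current_y < alight_line:
            alighted += 1
    return boarded, alighted
```

Comparing the initial value with the current value, rather than adjacent frames, makes the count robust to a head that hovers around the threshold for several frames.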
3. The bus passenger flow statistical method according to claim 1, wherein before the step of carrying out similarity matching on the detection object in the detection queue and each first tracking object in the tracking queue in sequence, the method comprises the steps of:
carrying out gray-scale processing on an acquired original door image to obtain a gray-scale image; and carrying out Gamma correction on the gray-scale image to obtain the current door image.
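The preprocessing of claim 3 can be sketched in pure Python on a nested-list image. This is a sketch under assumptions: BT.601 luma weights for the gray-scale step and gamma = 0.5 are example choices the patent does not specify.

```python
def to_gray(rgb_image):
    """Gray-scale processing with luma weights (ITU-R BT.601)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_image]

def gamma_correct(gray_image, gamma=0.5):
    """Gamma correction via a 256-entry lookup table; gamma < 1 brightens
    dark regions such as a shadowed door area. gamma=0.5 is illustrative."""
    lut = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [[lut[v] for v in row] for row in gray_image]
```

Precomputing the lookup table keeps the per-pixel cost to a single index, which matters when every door frame must be corrected in real time.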
4. The method according to claim 3, wherein after the step of carrying out Gamma correction on the gray-scale image to obtain the current door image, the method comprises the steps of:
carrying out size conversion on the current door image to obtain a plurality of converted images;
carrying out HOG feature algorithm processing on each converted image in sequence through a preset detection window to obtain a plurality of image feature data;
and processing the image feature data through an SVM classifier to obtain a plurality of detection objects and a plurality of first tracking objects.
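The size-conversion and detection-window scan of claim 4 amount to a classic image pyramid. The sketch below shows only that geometry; the HOG descriptor and SVM classifier themselves are omitted, and `scale`, `min_size`, the 64-pixel window, and the 16-pixel stride are assumed values.

```python
def pyramid_sizes(width, height, scale=1.25, min_size=64):
    """Successive size conversions of the door image so that a fixed
    detection window can find heads at different distances from the camera."""
    sizes = []
    while width >= min_size and height >= min_size:
        sizes.append((int(width), int(height)))
        width /= scale
        height /= scale
    return sizes

def sliding_windows(width, height, win=64, stride=16):
    """Top-left corners of the preset detection window over one scale;
    each window would be fed to the HOG descriptor and SVM classifier."""
    return [(x, y)
            for y in range(0, height - win + 1, stride)
            for x in range(0, width - win + 1, stride)]
```

Scanning a fixed window over shrinking images is equivalent to scanning growing windows over the original image, but lets one trained classifier serve all head sizes.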
5. The method according to claim 1, wherein before the step of processing each current first head position value and each current second head position value to obtain the passenger flow statistical result, the method further comprises:
processing each third tracking object in the tracking queue through a tracking algorithm, and updating a current third head position value of the third tracking object to the tracked head position value given by the tracking processing result; the third tracking object is a first tracking object whose head position value has not been updated in the current round and whose head position difference is larger than a preset displacement value; the head position difference is obtained by carrying out a difference operation on the current first head position value and the detected head position value.
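The displacement test that singles out a third tracking object in claim 5 reduces to one comparison. The function name and the 50-pixel example threshold below are illustrative, not from the patent:

```python
def needs_retracking(current_value, detected_value, preset_displacement):
    """A matched track whose head position jumped by more than the preset
    displacement is treated as an unreliable match and handed back to the
    tracking algorithm instead of being updated from the detection."""
    return abs(current_value - detected_value) > preset_displacement
```

This guards against identity switches: a detection that lands far from the track's last position more likely belongs to a different passenger.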
6. The bus passenger flow statistical method according to claim 5, further comprising the steps of:
deleting, from the tracking queue, any first tracking object whose number of updates is less than or equal to the preset number and whose head position difference is less than or equal to the preset displacement value.
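The pruning rule of claim 6 can be sketched as a filter over the tracking queue. The dictionary field names `updates` and `displacement` are assumed names for the quantities the claims record:

```python
def prune_tracks(tracks, preset_count, preset_displacement):
    """Drop tracked objects that are both stale and stationary: update
    count at or below the preset number AND head position difference at
    or below the preset displacement value."""
    return [t for t in tracks
            if not (t["updates"] <= preset_count
                    and abs(t["displacement"]) <= preset_displacement)]
```

Requiring both conditions keeps a track that is briefly occluded (few updates, but still moving) while discarding false detections that never move.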
7. The bus passenger flow statistical method according to any one of claims 1 to 6, wherein the step of processing each of the second tracked objects by a tracking algorithm comprises:
each of the second tracked objects is processed by a KCF tracking algorithm.
8. A bus passenger flow statistics device, characterized by comprising:
the object matching unit is used for carrying out similarity matching on a detection object in a detection queue and each first tracking object in a tracking queue in sequence to obtain respective result matching degrees; the detection object is a current head position image obtained by processing a current door image through a head detection training model; the first tracking object is an initial head position image obtained by processing the current door image through the head detection training model; each result matching degree is obtained by sequentially matching the pixel-point coincidence degree between the detection object in the detection queue and each first tracking object in the tracking queue; among all the head position images obtained by processing the current door image, the current head position image is any one of the head position images or a head position image that does not appear for the first time, and the initial head position image is a head position image that appears for the first time;
the first head position updating unit is used for acquiring a current first head position value of the first tracking object whose result matching degree is greater than or equal to a preset matching degree, updating the current first head position value to the detected head position value of the matched detection object, and recording the number of updates of the current first head position value;
the second head position updating unit is used for processing each second tracking object in the tracking queue through a tracking algorithm and updating a current second head position value of the second tracking object to the tracked head position value given by the tracking processing result; the second tracking object is a first tracking object whose head position value has not been updated in the current round and whose number of updates is greater than a preset number;
and the passenger flow statistical unit is used for processing each current first head position value and each current second head position value to obtain a passenger flow statistical result.
9. A bus passenger flow statistical system, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the bus passenger flow statistical method according to any one of claims 1 to 7 when executing the computer program.
10. The system of claim 9, further comprising a camera and a display connected to the processor.
CN201810288942.2A 2018-04-03 2018-04-03 Bus passenger flow statistical method, device and system Active CN108764017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810288942.2A CN108764017B (en) 2018-04-03 2018-04-03 Bus passenger flow statistical method, device and system

Publications (2)

Publication Number Publication Date
CN108764017A CN108764017A (en) 2018-11-06
CN108764017B true CN108764017B (en) 2020-01-07

Family

ID=63980800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810288942.2A Active CN108764017B (en) 2018-04-03 2018-04-03 Bus passenger flow statistical method, device and system

Country Status (1)

Country Link
CN (1) CN108764017B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776974B (en) * 2018-05-24 2019-05-10 南京行者易智能交通科技有限公司 A kind of real-time modeling method method suitable for public transport scene
CN111611974B (en) * 2020-06-03 2023-06-13 广州通达汽车电气股份有限公司 Vehicle-mounted face snapshot method and system
CN111860261B (en) * 2020-07-10 2023-11-03 北京猎户星空科技有限公司 Passenger flow value statistical method, device, equipment and medium
CN113160603B (en) * 2021-04-27 2022-07-08 华录智达科技股份有限公司 Intelligent bus management based on Internet of vehicles

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2800895B1 (en) * 1999-11-09 2001-12-28 Renault Vehicules Ind BUS PASSENGER COUNTING SYSTEM AND METHOD
CN101794382B (en) * 2010-03-12 2012-06-13 华中科技大学 Method for counting passenger flow of buses in real time
CN103577832B (en) * 2012-07-30 2016-05-25 华中科技大学 A kind of based on the contextual people flow rate statistical method of space-time
CN104021605A (en) * 2014-04-16 2014-09-03 湖州朗讯信息科技有限公司 Real-time statistics system and method for public transport passenger flow
CN106203513B (en) * 2016-07-08 2019-06-21 浙江工业大学 A kind of statistical method based on pedestrian's head and shoulder multi-target detection and tracking
CN106778656A (en) * 2016-12-27 2017-05-31 清华大学苏州汽车研究院(吴江) A kind of counting passenger flow of buses system based on ToF cameras

Also Published As

Publication number Publication date
CN108764017A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108764017B (en) Bus passenger flow statistical method, device and system
Zhang et al. Vehicle-damage-detection segmentation algorithm based on improved mask RCNN
CN111178245B (en) Lane line detection method, lane line detection device, computer equipment and storage medium
CN109800629B (en) Remote sensing image target detection method based on convolutional neural network
Cao et al. Vehicle detection and motion analysis in low-altitude airborne video under urban environment
US10607098B2 (en) System of a video frame detector for video content identification and method thereof
CN110569696A (en) Neural network system, method and apparatus for vehicle component identification
US9626599B2 (en) Reconfigurable clear path detection system
CN106919902B (en) Vehicle identification and track tracking method based on CNN
Tsintotas et al. DOSeqSLAM: Dynamic on-line sequence based loop closure detection algorithm for SLAM
CN106845458B (en) Rapid traffic sign detection method based on nuclear overrun learning machine
CN114418298A (en) Charging load probability prediction system and method based on non-invasive detection
CN113034378A (en) Method for distinguishing electric automobile from fuel automobile
Amrouche et al. Vehicle Detection and Tracking in Real-time using YOLOv4-tiny
Zhou et al. A novel object detection method in city aerial image based on deformable convolutional networks
Kalva et al. Smart Traffic Monitoring System using YOLO and Deep Learning Techniques
CN112465854A (en) Unmanned aerial vehicle tracking method based on anchor-free detection algorithm
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
Zhang et al. A front vehicle detection algorithm for intelligent vehicle based on improved gabor filter and SVM
CN111144220A (en) Personnel detection method, device, equipment and medium suitable for big data
Wu et al. Vehicle detection in high-resolution images using superpixel segmentation and CNN iteration strategy
Guo et al. ANMS: attention-based non-maximum suppression
Sankaranarayanan et al. Pre-processing framework with virtual mono-layer sequence of boxes for video based vehicle detection applications
Piniarski et al. Multi-branch classifiers for pedestrian detection from infrared night and day images
Saranya et al. The Proficient ML method for Vehicle Detection and Recognition in Video Sequence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant