WO2022126669A1 - Subway pedestrian flow network fusion method based on video pedestrian recognition, and pedestrian flow prediction method - Google Patents

Subway pedestrian flow network fusion method based on video pedestrian recognition, and pedestrian flow prediction method

Info

Publication number
WO2022126669A1
WO2022126669A1 PCT/CN2020/137804 CN2020137804W WO2022126669A1 WO 2022126669 A1 WO2022126669 A1 WO 2022126669A1 CN 2020137804 W CN2020137804 W CN 2020137804W WO 2022126669 A1 WO2022126669 A1 WO 2022126669A1
Authority
WO
WIPO (PCT)
Prior art keywords
pedestrian
trajectory
target
trajectories
subway
Prior art date
Application number
PCT/CN2020/137804
Other languages
French (fr)
Chinese (zh)
Inventor
徐超
高思斌
李少利
李永强
戴李杰
Original Assignee
中电海康集团有限公司
Priority date
Filing date
Publication date
Application filed by 中电海康集团有限公司
Publication of WO2022126669A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • The invention belongs to the technical field of smart cities, and in particular relates to a subway pedestrian flow network fusion method and a pedestrian flow prediction method based on video pedestrian recognition.
  • Video surveillance is increasingly used in the field of digital security, and counting people through video is becoming more and more important. In places such as stations, tourist attractions, exhibition areas and commercial streets, pedestrian flow statistics support staff deployment, resource allocation and better security.
  • Existing subway pedestrian flow prediction is generally based on the card-swiping data at the entrances and exits of each station, so the results can only predict the flow of people entering and leaving a station as a whole.
  • Generally, by analyzing the historical card-swiping data of subway stations and the road network map, a passenger flow prediction model of the subway stations is constructed to predict future changes in passenger flow, for example the number of passengers entering and leaving each station in 10-minute intervals from 00:00 to 24:00 of the following day.
  • Other crowd statistics schemes mainly use infrared sensors, cameras, communication data and the like to monitor the crowd density in subway cars in real time.
  • Image-based methods generally exploit the fact that crowds of different densities shade the light reaching the camera to different degrees: crowd distribution images at different densities are obtained, analyzed and compared over time, and the crowd density in the subway car is finally derived.
  • However, each station generally has many entrances and exits, and each entrance or exit can lead in several directions; current schemes cannot analyze the pedestrian flow entering and leaving a subway station by direction.
  • As a result, the above-ground and underground transportation networks cannot be deeply integrated, so the resources of each entrance and exit of a station cannot be allocated reasonably, and the impact of the flow of people entering and leaving a station on above-ground traffic cannot be obtained.
  • The purpose of the present invention is to provide a subway pedestrian flow network fusion method and a pedestrian flow prediction method based on video pedestrian recognition, which connect above-ground road routes with subway routes, refine the pedestrian flow at each station and the direction in which pedestrians leave or enter the station, and improve traffic prediction accuracy.
  • To achieve the above purpose, the technical scheme adopted by the present invention is as follows:
  • A subway pedestrian flow network fusion method based on video pedestrian recognition is provided for realizing fused statistics of subway and ground pedestrian flow to assist traffic early warning.
  • The subway pedestrian flow network fusion method based on video pedestrian recognition includes:
  • Step 1: receive monitoring images of the entrances and exits of the subway stations, the monitoring images being acquired by image acquisition devices arranged at each entrance and exit;
  • Step 2: extract pedestrian target coordinate frame information and pedestrian target feature information from the monitoring images, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and the direction in which the pedestrian exits or enters the station;
  • Step 3: based on the monitoring images of the same image acquisition device, perform a similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian;
  • Step 4: perform similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combine successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian;
  • Step 5: obtain the subway routes, the subway stations, the entrances and exits of each station in a designated area, and the ground traffic routes corresponding to the entrances and exits, and integrate them to construct a subway traffic network graph of the designated area;
  • Step 6: according to the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance and exit of each station;
  • Step 7: on the basis of the subway traffic network graph, superimpose the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic routes corresponding to each entrance and exit of each station to obtain a pedestrian flow network graph that fuses subway and ground traffic.
  • Several optional implementations are provided below for the above overall solution; each optional implementation can be implemented independently, or multiple optional implementations can be combined with one another.
  • Optionally, performing the similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain the pedestrian trajectory for each pedestrian includes:
  • Step 3.1: obtain the pedestrian target coordinate frame information and the pedestrian target feature information in the current monitoring image of the current image acquisition device;
  • Step 3.2: determine whether the tracking trajectory set corresponding to the image acquisition device is empty, the tracking trajectory set being used to save the pedestrian trajectories of pedestrians; if the tracking trajectory set is not empty, perform step 3.3; otherwise, directly add the pedestrian target coordinate frame information and pedestrian target feature information obtained this time to the tracking trajectory set and end;
  • Step 3.3: use unscented Kalman filtering to obtain estimated target coordinate frame information based on the pedestrian trajectories in the tracking trajectory set;
  • Step 3.4: according to the pedestrian target coordinate frame information and the estimated target coordinate frame information, calculate the coordinate frame similarity between each current pedestrian target and each saved pedestrian target one by one; according to the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, calculate the feature similarity between each current pedestrian target and each saved pedestrian target one by one; and obtain the similarity between the current pedestrian target and the saved pedestrian target as a weighted sum of the coordinate frame similarity and the feature similarity;
  • Step 3.5: based on the similarity between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time;
  • Step 3.6: if a pedestrian target is not successfully matched this time, directly add the pedestrian target coordinate frame information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update the pedestrian trajectory of that pedestrian according to the pedestrian target coordinate frame information and pedestrian target feature information corresponding to the pedestrian target; if a pedestrian trajectory in the tracking trajectory set has not been successfully matched for multiple consecutive frames, consider that the pedestrian target has left the monitoring range of the current image acquisition device and mark the pedestrian trajectory as a leaving trajectory; if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, consider the pedestrian trajectory complete and delete it from the tracking trajectory set.
  • Optionally, performing similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combining successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian, includes:
  • Step 4.1: take the tracking trajectory set corresponding to one image acquisition device, and calculate, one by one, the similarity between the pedestrian trajectories marked as new trajectories in that set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices;
  • Step 4.2: if the similarity is greater than a preset threshold, the two pedestrian trajectories are successfully matched;
  • Step 4.3: combine the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of the pedestrian, and replace the corresponding pedestrian trajectory in the tracking trajectory set where the new trajectory is located with the combined trajectory.
  • The present invention also provides a pedestrian flow prediction method for predicting pedestrian flow based on the fusion of subway and ground pedestrian flow to assist traffic early warning.
  • The pedestrian flow prediction method includes:
  • using the subway pedestrian flow network fusion method based on video pedestrian recognition to obtain the pedestrian flow network graph within a specified time period;
  • using a graph neural network to predict the total inbound and outbound pedestrian flow of each station within a specified future time period;
  • obtaining the mean inbound and outbound proportions of the traffic route corresponding to each entrance and exit of each station;
  • allocating the total predicted inbound and outbound pedestrian flow of each station to obtain the predicted inbound and outbound pedestrian flow on the traffic route corresponding to each entrance and exit of each station.
  • Using the subway pedestrian flow network fusion method based on video pedestrian recognition to obtain the pedestrian flow network graph within a specified time period includes:
  • Step 1: receive monitoring images of the entrances and exits of the subway stations, the monitoring images being acquired by image acquisition devices arranged at each entrance and exit;
  • Step 2: extract pedestrian target coordinate frame information and pedestrian target feature information from the monitoring images, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and the direction in which the pedestrian exits or enters the station;
  • Step 3: based on the monitoring images of the same image acquisition device, perform a similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian;
  • Step 4: perform similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combine successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian;
  • Step 5: obtain the subway routes, the subway stations, the entrances and exits of each station in a designated area, and the ground traffic routes corresponding to the entrances and exits, and integrate them to construct a subway traffic network graph of the designated area;
  • Step 6: according to the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance and exit of each station;
  • Step 7: on the basis of the subway traffic network graph, superimpose the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic routes corresponding to each entrance and exit of each station to obtain a pedestrian flow network graph that fuses subway and ground traffic.
  • Performing the similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain the pedestrian trajectory for each pedestrian includes:
  • Step 3.1: obtain the pedestrian target coordinate frame information and the pedestrian target feature information in the current monitoring image of the current image acquisition device;
  • Step 3.2: determine whether the tracking trajectory set corresponding to the image acquisition device is empty, the tracking trajectory set being used to save the pedestrian trajectories of pedestrians; if the tracking trajectory set is not empty, perform step 3.3; otherwise, directly add the pedestrian target coordinate frame information and pedestrian target feature information obtained this time to the tracking trajectory set and end;
  • Step 3.3: use unscented Kalman filtering to obtain estimated target coordinate frame information based on the pedestrian trajectories in the tracking trajectory set;
  • Step 3.4: according to the pedestrian target coordinate frame information and the estimated target coordinate frame information, calculate the coordinate frame similarity between each current pedestrian target and each saved pedestrian target one by one; according to the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, calculate the feature similarity between each current pedestrian target and each saved pedestrian target one by one; and obtain the similarity between the current pedestrian target and the saved pedestrian target as a weighted sum of the coordinate frame similarity and the feature similarity;
  • Step 3.5: based on the similarity between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time;
  • Step 3.6: if a pedestrian target is not successfully matched this time, directly add the pedestrian target coordinate frame information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update the pedestrian trajectory of that pedestrian according to the pedestrian target coordinate frame information and pedestrian target feature information corresponding to the pedestrian target; if a pedestrian trajectory in the tracking trajectory set has not been successfully matched for multiple consecutive frames, consider that the pedestrian target has left the monitoring range of the current image acquisition device and mark the pedestrian trajectory as a leaving trajectory; if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, consider the pedestrian trajectory complete and delete it from the tracking trajectory set.
  • Performing similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combining successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian, includes:
  • Step 4.1: take the tracking trajectory set corresponding to one image acquisition device, and calculate, one by one, the similarity between the pedestrian trajectories marked as new trajectories in that set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices;
  • Step 4.2: if the similarity is greater than a preset threshold, the two pedestrian trajectories are successfully matched;
  • Step 4.3: combine the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of the pedestrian, and replace the corresponding pedestrian trajectory in the tracking trajectory set where the new trajectory is located with the combined trajectory.
  • Predicting the total inbound and outbound pedestrian flow of each station within a specified future time period using a graph neural network based on the pedestrian flow network graph includes:
  • The pedestrian flow network graph takes the subway stations as vertices and the traffic routes corresponding to the entrances and exits of the stations as edges, and each vertex has a feature vector including the total inbound and outbound pedestrian flow. The model of the pedestrian flow network graph is constructed as:
  • G_t = (V_t, ε, W), where G_t is the pedestrian flow network graph at time t, V_t is the vector composed of the feature vectors of all vertices, ε is the set of edges between vertices, W is the weighted adjacency matrix, and t is the current moment;
  • a graph neural network is used to solve the pedestrian flow prediction target model, giving the total predicted inbound and outbound pedestrian flow of each station within the specified future time period.
  • The present invention provides a subway pedestrian flow network fusion method and a pedestrian flow prediction method based on video pedestrian recognition.
  • The video data of subway stations is used to statistically analyze the specific directions of pedestrians entering and leaving each station, and the above-ground road traffic routes are connected with the subway routes to form a measurable, complete pedestrian flow network covering both the ground and the underground. By fusing the subway network and the ground transportation network, crowd movement takes place in one large network: each subway station becomes a node of this network, and every subway line and above-ground road traffic line becomes an edge of this network.
  • A graph neural network is then used to infer the change of pedestrian flow in the entire transportation network, so as to analyze and predict, for each station, the amount of pedestrian flow and its direction.
  • Fig. 1 is a flowchart of the subway pedestrian flow network fusion method based on video pedestrian recognition of the present invention.
  • Fig. 2 is a training schematic diagram of the SSD target detection network of the present invention.
  • Fig. 3 is a training schematic diagram of the MobileNet neural network of the present invention.
  • Fig. 4 is a schematic diagram of the distillation operation performed on the neural network in the present invention.
  • Fig. 5 is a flowchart of the pedestrian trajectory tracking of the present invention.
  • Fig. 6 is a flowchart of the multi-factor fusion pedestrian target tracking method of the present invention.
  • Fig. 7 is a flowchart of the target tracking method based on pedestrian image features of the present invention.
  • Fig. 8 is a schematic diagram of an embodiment of the subway traffic network graph of the present invention.
  • Fig. 9 is a schematic diagram of problem modeling based on a spatiotemporal sequence in the present invention.
  • Fig. 10 is a schematic structural diagram of the STGCN framework of the present invention.
  • This embodiment provides a subway pedestrian flow network fusion method based on video pedestrian recognition, which establishes the connection between ground traffic routes and subway routes and analyzes the inbound and outbound pedestrian flow, together with its direction, at each entrance and exit of each subway station.
  • By correlating above-ground (that is, ground) and underground (that is, subway) pedestrian flow, the method overcomes the shortcoming of existing statistics that consider only a single above-ground or underground level and ignore the mutual influence between the two, which limits the accuracy of statistics and predictions.
  • The pedestrian flow statistics method based on the fusion of above-ground and underground networks of the present invention can not only support resource scheduling at each entrance and exit of each subway station, but can also combine the above-ground traffic network for early warning of subway station pedestrian flow and combine the underground subway network for early warning of ground traffic, improving the foresight and timeliness of traffic control.
  • This embodiment of the subway pedestrian flow network fusion method based on video pedestrian recognition includes the following steps:
  • Step 1: receive monitoring images of the entrances and exits of the subway stations, the monitoring images being acquired by image acquisition devices arranged at each entrance and exit.
  • The image acquisition device should be deployed so that its monitoring range covers the entire entrance or exit and the above-ground traffic route corresponding to it, which lays the foundation for identifying pedestrian entry/exit status and the direction in which pedestrians exit or enter the station.
  • The image acquisition device in this embodiment may be an optical camera, a binocular camera, a TOF camera, or the like, and each device has a unique device id to distinguish it from the others. Therefore, while the monitoring image is received, the device id of the corresponding image acquisition device and the corresponding timestamp are also obtained.
  • Installing one image acquisition device at each entrance and exit of a subway station meets the application requirements of the present invention, but the present invention is not limited to installing only one device per entrance and exit.
  • Multiple image acquisition devices may be installed at one entrance or exit to obtain video information more comprehensively, and devices may also be installed inside subway stations and along the traffic routes corresponding to the entrances and exits to expand the video capture range and obtain more comprehensive and complete pedestrian flow statistics or pedestrian trajectories.
  • Step 2: extract the pedestrian target coordinate frame information and pedestrian target feature information from the monitoring image, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and the direction in which the pedestrian exits or enters the station.
  • The pedestrian target coordinate frame information and the pedestrian target feature information are the basic information for identifying and locating pedestrian targets.
  • In this embodiment, the two kinds of information are extracted by neural networks.
  • This embodiment is described by taking, as an example, an SSD target detection network for extracting the pedestrian target coordinate frame information and a MobileNet neural network for extracting the pedestrian target feature information.
  • The training and application process of the SSD target detection network used as the target recognition algorithm is as follows.
  • The specific steps are as follows:
  • The dataset includes image data and annotation data.
  • The annotation data indicates the region of each pedestrian target in the corresponding image.
  • The anchor boxes in the target recognition network are set with n aspect ratios.
  • The training data is input into the SSD target detection network for target recognition, and the neural network outputs the coordinate frame information of the pedestrians in the image.
  • Duplicate coordinate frames are removed with the NMS (non-maximum suppression) method.
  • The trained SSD target detection network accepts an image as input and outputs pedestrian coordinate frames; a non-maximum suppression operation is performed on the coordinate frames to delete duplicate frames, and a confidence threshold is set, so that a coordinate frame is output as pedestrian target coordinate frame information only when its confidence is greater than the threshold.
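  • As an illustration only (not the patent's implementation), the confidence-threshold and non-maximum-suppression post-processing described above can be sketched in a few lines of NumPy; the 0.5 score threshold and 0.5 IoU threshold are assumed values.

```python
import numpy as np

def nms(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) detector confidences.
    Keeps high-confidence boxes and removes duplicates that overlap a kept box."""
    keep_mask = scores >= score_thr                    # confidence threshold
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)                        # highest confidence first
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou < iou_thr]               # drop duplicates of the kept box
    return boxes[kept]
```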
  • The training and application process of the MobileNet neural network used as the pedestrian re-identification algorithm is as follows.
  • The specific steps are as follows:
  • Three images are selected for each training step: two different images of pedestrian A and one image of another pedestrian. After image enhancement, the three images are input into the MobileNet neural network separately, and the pedestrian features are output; a sketch of such a triplet training step is given below.
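  • The following is a minimal sketch, under assumptions not stated in the patent (PyTorch, the torchvision MobileNetV2 backbone, a 128-dimensional feature head, and a triplet margin of 0.3), of one such training step on a triplet of pedestrian images.

```python
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.mobilenet_v2(num_classes=128)        # 128-D pedestrian feature; dimension is an assumption
criterion = nn.TripletMarginLoss(margin=0.3)           # margin value is illustrative
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(anchor_img, positive_img, negative_img):
    """anchor/positive: two augmented images of pedestrian A; negative: another pedestrian."""
    feats = [backbone(x) for x in (anchor_img, positive_img, negative_img)]
    feats = [nn.functional.normalize(f, dim=1) for f in feats]   # L2-normalise the features
    loss = criterion(*feats)                                     # pull A's images together, push the other away
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```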
  • The teacher model is generally a relatively large trained neural network model; it generally has high accuracy, but it has many parameters and runs slowly.
  • The student model is generally a model with a small number of parameters; if it is trained directly on the labeled data alone, it is often difficult to train well.
  • Neural network distillation lets the student model learn from the labeled data and from the teacher model at the same time, which often achieves better results.
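  • A minimal sketch of such a distillation loss, assuming a softmax temperature of 4 and a 0.7 weight on the soft targets (both values are illustrative, not taken from the patent):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: match the teacher's softened output distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard-target term: ordinary cross-entropy against the labeled pedestrian ids.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```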
  • The finally trained MobileNet neural network receives image data and pedestrian id data and outputs the pedestrian target feature information corresponding to each pedestrian.
  • The target recognition neural network, the feature recognition neural network and the corresponding training and application methods provided in this embodiment can extract the corresponding data from the monitoring image completely, comprehensively and accurately, and provide high-quality basic information for pedestrian flow statistics.
  • The multi-target tracking method in this embodiment has two parts. The first is a multi-factor fusion pedestrian target tracking method within the view of a single image acquisition device, which generates the trajectories of pedestrians within that device's monitoring view.
  • The second is a pedestrian-feature-based target tracking method across image acquisition devices, which is used to match the pedestrian trajectories of the same pedestrian under different devices.
  • The cross-device pedestrian target tracking method directly uses the pedestrian target feature information to calculate similarity; if the similarity is greater than a certain threshold, the two trajectories are judged to belong to the same pedestrian and are associated. Combining the two methods yields cross-region pedestrian trajectory data, enabling complete pedestrian trajectory tracking and improving the accuracy of pedestrian flow statistics.
  • Step 3: based on the monitoring images of the same image acquisition device, perform a similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian. This is the multi-factor fusion pedestrian target tracking method shown in Fig. 6, as follows:
  • Step 3.1: obtain the pedestrian target coordinate frame information and the pedestrian target feature information in the current monitoring image of the current image acquisition device.
  • Step 3.2: determine whether the tracking trajectory set corresponding to the image acquisition device is empty; the tracking trajectory set is used to save the pedestrian trajectories of pedestrians. If the tracking trajectory set is not empty, perform step 3.3; otherwise, directly add the pedestrian target coordinate frame information and pedestrian target feature information obtained this time to the tracking trajectory set and end.
  • Step 3.3: use unscented Kalman filtering to obtain estimated target coordinate frame information based on the pedestrian trajectories in the tracking trajectory set.
  • The unscented Kalman filter is developed from the Kalman filter and the unscented transform: it uses a lossless (unscented) transform to apply the Kalman filter, which assumes linearity, to nonlinear systems, and it tracks pedestrians better in scenes with many overlapping occlusions.
  • Here, the unscented Kalman filter is used to estimate the position of each existing pedestrian trajectory at the current moment.
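  • A minimal sketch of this prediction step, under assumptions not given in the patent (the filterpy library, a state vector of box centre, size and velocity, and a constant-velocity motion model):

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):                      # constant-velocity motion model (assumed)
    cx, cy, w, h, vx, vy = x
    return np.array([cx + vx * dt, cy + vy * dt, w, h, vx, vy])

def hx(x):                          # only the box centre and size are observed
    return x[:4]

points = MerweScaledSigmaPoints(n=6, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=4, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([320.0, 240.0, 60.0, 160.0, 0.0, 0.0])   # initialised from the first detection

ukf.predict()                        # estimated coordinate frame for the current frame
estimated_box = ukf.x[:4]
# after matching, the associated detection updates the trajectory:
ukf.update(np.array([324.0, 243.0, 61.0, 158.0]))
```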
  • Step 3.4: according to the pedestrian target coordinate frame information and the estimated target coordinate frame information, calculate the coordinate frame similarity between each current pedestrian target (that is, each pedestrian target obtained this time) and each saved pedestrian target one by one; according to the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, calculate the feature similarity between each current pedestrian target and each saved pedestrian target one by one; and obtain the similarity between the current pedestrian target and the saved pedestrian target as a weighted sum of the coordinate frame similarity and the feature similarity.
  • Specifically, the IOU (intersection over union) between the estimated coordinate frame of the existing trajectory and the coordinate frame of the pedestrian to be matched is calculated, the feature similarity of the pedestrian target feature information (for example the cosine similarity of the features) is calculated, and the two are weighted and summed to construct the similarity between the existing trajectory and the pedestrian to be matched.
  • The similarity between the current pedestrian target and the saved pedestrian target obtained in this embodiment thus combines coordinate frame similarity and feature similarity, matching the pedestrian target from multiple aspects and significantly improving the accuracy of the pedestrian trajectory. It should be noted that the calculation of coordinate frame similarity and feature similarity is a relatively mature technique in the field of pedestrian trajectory tracking and is not repeated in this embodiment.
  • The weights of the weighted summation can be set according to the focus of actual use.
  • The similarities can be stored in a similarity matrix, in which the rows correspond to the saved pedestrian targets, the columns correspond to the current pedestrian targets, and each entry is the similarity between the corresponding saved pedestrian target and current pedestrian target.
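  • A minimal sketch of building such a similarity matrix, assuming equal weights of 0.5 for the coordinate frame (IOU) similarity and the feature (cosine) similarity; the weight values are illustrative only:

```python
import numpy as np

def iou(box_a, box_b):
    """Boxes as [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cosine(f_a, f_b):
    return float(np.dot(f_a, f_b) / (np.linalg.norm(f_a) * np.linalg.norm(f_b) + 1e-9))

def similarity_matrix(tracks, detections, w_box=0.5, w_feat=0.5):
    """tracks: list of (estimated_box, feature); detections: list of (box, feature).
    Rows are saved pedestrian targets (trajectories), columns are current pedestrian targets."""
    sim = np.zeros((len(tracks), len(detections)))
    for i, (t_box, t_feat) in enumerate(tracks):
        for j, (d_box, d_feat) in enumerate(detections):
            sim[i, j] = w_box * iou(t_box, d_box) + w_feat * cosine(t_feat, d_feat)
    return sim
```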
  • Step 3.5: based on the similarity between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm (a minimal sketch is given below) to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time.
  • Step 3.6: if a pedestrian target is not successfully matched this time, directly add the pedestrian target coordinate frame information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update the pedestrian trajectory of that pedestrian according to the pedestrian target coordinate frame information and pedestrian target feature information corresponding to the pedestrian target; if a pedestrian trajectory in the tracking trajectory set has not been successfully matched for multiple consecutive frames, consider that the pedestrian target has left the monitoring range of the current image acquisition device and mark the pedestrian trajectory as a leaving trajectory; if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, consider the pedestrian trajectory complete and delete it from the tracking trajectory set.
  • In this way, the pedestrian trajectory of each pedestrian is updated in real time; when a new pedestrian enters the monitoring range, the newly added pedestrian is confirmed through consecutive successful matches to avoid false detections, and after a pedestrian trajectory fails to match for many consecutive frames it is deleted, which reduces storage and matching pressure and improves matching speed.
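  • The assignment of step 3.5 can be sketched as follows, assuming scipy's Hungarian solver and an illustrative acceptance threshold of 0.3 on the combined similarity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(sim, min_similarity=0.3):
    """sim: rows = saved trajectories, columns = current pedestrian targets.
    Returns (matched pairs, unmatched trajectory rows, unmatched detection columns)."""
    rows, cols = linear_sum_assignment(-sim)          # maximise total similarity
    matches, used_r, used_c = [], set(), set()
    for r, c in zip(rows, cols):
        if sim[r, c] >= min_similarity:               # reject weak assignments
            matches.append((r, c))
            used_r.add(r)
            used_c.add(c)
    unmatched_tracks = [r for r in range(sim.shape[0]) if r not in used_r]
    unmatched_dets = [c for c in range(sim.shape[1]) if c not in used_c]
    # Matched detections update their trajectories; unmatched detections start new
    # trajectories; trajectories unmatched for several consecutive frames are marked
    # as leaving and eventually deleted, as in step 3.6.
    return matches, unmatched_tracks, unmatched_dets
```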
  • Step 4: perform similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combine successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian. This is the target tracking method based on pedestrian image features shown in Fig. 7, as follows:
  • Step 4.1: take the tracking trajectory set corresponding to one image acquisition device, and calculate, one by one, the similarity between the pedestrian trajectories marked as new trajectories in that set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices.
  • Step 4.2: if the similarity is greater than a preset threshold, the two pedestrian trajectories are successfully matched.
  • The similarity is calculated from the pedestrian target feature information carried by the two pedestrian trajectories; the feature information used may be the pedestrian target feature information in the latest monitoring image of the trajectory, or the mean of the pedestrian target feature information over the latest several frames of monitoring images.
  • The similarity can be the cosine similarity, and the matching can be performed with the Hungarian algorithm.
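  • A minimal sketch of this cross-camera matching, assuming each trajectory is represented by the mean appearance feature of its last five frames and using an illustrative cosine-similarity threshold of 0.8:

```python
import numpy as np

def trajectory_feature(track_features, last_n=5):
    """track_features: list of per-frame appearance vectors for one trajectory."""
    return np.mean(np.asarray(track_features[-last_n:]), axis=0)

def match_across_cameras(new_track, leaving_tracks, threshold=0.8):
    """new_track: feature list of a trajectory marked 'new'; leaving_tracks: dict mapping
    trajectory id -> feature list of trajectories marked 'leaving' on other cameras."""
    f_new = trajectory_feature(new_track)
    best_id, best_score = None, threshold
    for tid, feats in leaving_tracks.items():
        f_old = trajectory_feature(feats)
        score = float(np.dot(f_new, f_old) /
                      (np.linalg.norm(f_new) * np.linalg.norm(f_old) + 1e-9))
        if score > best_score:
            best_id, best_score = tid, score
    # None means no match; otherwise the two trajectories are spliced in time order (step 4.3)
    return best_id
```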
  • Step 4.3: combine the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of the pedestrian, and replace the corresponding pedestrian trajectory in the tracking trajectory set where the new trajectory is located with the combined trajectory.
  • Here the two successfully matched trajectory segments are combined, preferably by splicing them in chronological order, to obtain a pedestrian trajectory that matches the pedestrian's real moving path. The combined cross-region pedestrian trajectory is placed in the tracking trajectory set where the new trajectory is located, and the leaving trajectory is also moved from its original tracking trajectory set into that set, realizing joint management of pedestrian trajectories.
  • Step 5: obtain the subway routes, the subway stations, the entrances and exits of each station in the designated area, and the ground traffic routes corresponding to the entrances and exits, and integrate them to construct a subway traffic network graph of the designated area.
  • The ground traffic route corresponding to each entrance or exit should be understood as the ground road on which the entrance or exit is located.
  • Associating each entrance or exit with the ground road it is located on is the basic operation of the above-ground and underground network fusion of the present invention. As with the installation of the image acquisition devices in step 1, if the image acquisition devices are not only installed at the entrances and exits but also extended to a preset range around the subway station, the corresponding ground traffic routes may also include other ground roads, covered by image acquisition devices, that extend from the road on which the entrance or exit is located.
  • The finally constructed subway traffic network graph is shown in Fig. 8, where the dots represent subway stations, the solid lines represent ground roads, and the dotted lines represent subway routes; the connection points on a subway station represent its entrances and exits.
  • In other embodiments, the subway traffic network graph may also include only the network graph with the subway stations as vertices and the ground roads as edges.
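  • As an illustration (the station names, line name and road name below are invented), the subway traffic network graph of Fig. 8 could be assembled with networkx, using a MultiGraph so that a subway route and a ground road can both connect the same pair of vertices:

```python
import networkx as nx

G = nx.MultiGraph()
G.add_node("Station A", entrances=["A1", "A2"], inbound=0, outbound=0)
G.add_node("Station B", entrances=["B1", "B2", "B3"], inbound=0, outbound=0)

G.add_edge("Station A", "Station B", kind="subway", line="Line 1")       # dotted line in Fig. 8
G.add_edge("Station A", "Station B", kind="ground", road="Main Street")  # solid line in Fig. 8

# Step 7 then superimposes the counted flows onto this graph, e.g.:
G.nodes["Station A"]["inbound"] = 1250
G.nodes["Station A"]["outbound"] = 980
```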
  • Step 6: according to the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance and exit of each station.
  • A pedestrian trajectory contains at least the pedestrian's moving path within the view of one image acquisition device, and the moving path has a direction; therefore, from the pedestrian trajectory it can be identified whether the pedestrian entered or exited the station, and the corresponding inbound and outbound pedestrian flow can be counted.
  • The pedestrian trajectory in the monitoring view covers the process of entering the view from a certain direction of the traffic route and then entering the station, or exiting the station and leaving the view in a certain direction of the traffic route, so the inbound and outbound pedestrian flow on the corresponding traffic route can be obtained, broken down by the direction in which pedestrians leave or approach the station.
  • For example, taking an exit whose traffic route only allows a left turn or a right turn, the outbound flow on that route includes the flow of people who turn left onto the route after exiting the station and the flow of people who turn right onto the route after exiting the station.
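  • A minimal counting sketch under assumed field names (station, entrance, in/out status, turning direction); it only illustrates the aggregation of step 6, not the patent's data structures:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Trajectory:
    station: str
    entrance: str      # e.g. "A2"
    status: str        # "in" (entered the station) or "out" (exited the station)
    direction: str     # e.g. "left" or "right" onto the ground traffic route

def count_flows(trajectories):
    station_totals = Counter()    # (station, status) -> count
    entrance_flows = Counter()    # (station, entrance, status, direction) -> count
    for t in trajectories:
        station_totals[(t.station, t.status)] += 1
        entrance_flows[(t.station, t.entrance, t.status, t.direction)] += 1
    return station_totals, entrance_flows

# Example: totals, per_entrance = count_flows(latest_trajectories_in_window)
```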
  • Step 7: on the basis of the subway traffic network graph, superimpose the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic routes corresponding to each entrance and exit of each station to obtain a pedestrian flow network graph that fuses subway and ground traffic.
  • The pedestrian flow network graph finally obtained by the present invention shows, for each entrance and exit of each subway station, the inbound and outbound pedestrian flow together with the direction of the outbound flow along the traffic route (for example, if the traffic route corresponding to an exit runs north-south, the flow of people heading south after exiting and the flow heading north after exiting) and the direction of the inbound flow from the traffic route (likewise, the inbound flow arriving from the south of the route and the inbound flow arriving from the north).
  • The above pedestrian flow directions are obtained by superimposing, in time order over multiple monitoring images, the pedestrian entry/exit status and the pedestrian exit or entry direction in each frame, that is, by generating the pedestrian trajectory. This makes it convenient to analyze possible congestion of the above-ground traffic routes caused by subway pedestrian flow, so that measures such as evacuation and early warning can be taken in time.
  • This embodiment further provides a pedestrian flow prediction method, which predicts pedestrian flow based on the fusion of subway and ground pedestrian flow to assist traffic early warning. The pedestrian flow prediction method includes:
  • obtaining the pedestrian flow network graph within a specified time period;
  • using a graph neural network to predict the total inbound and outbound pedestrian flow of each station within a specified future time period;
  • obtaining the mean inbound and outbound proportions of the traffic routes corresponding to each entrance and exit of each station;
  • allocating the total predicted inbound and outbound pedestrian flow of each station to obtain the predicted inbound and outbound pedestrian flow on the traffic route corresponding to each entrance and exit of each station.
  • Using the subway pedestrian flow network fusion method based on video pedestrian recognition to obtain the pedestrian flow network graph within a specified time period includes:
  • Step 1: receive monitoring images of the entrances and exits of the subway stations, the monitoring images being acquired by image acquisition devices arranged at each entrance and exit;
  • Step 2: extract pedestrian target coordinate frame information and pedestrian target feature information from the monitoring images, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and the direction in which the pedestrian exits or enters the station;
  • Step 3: based on the monitoring images of the same image acquisition device, perform a similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian;
  • Step 4: perform similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combine successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian;
  • Step 5: obtain the subway routes, the subway stations, the entrances and exits of each station in a designated area, and the ground traffic routes corresponding to the entrances and exits, and integrate them to construct a subway traffic network graph of the designated area;
  • Step 6: according to the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance and exit of each station;
  • Step 7: on the basis of the subway traffic network graph, superimpose the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic routes corresponding to each entrance and exit of each station to obtain a pedestrian flow network graph that fuses subway and ground traffic.
  • Performing the similarity calculation according to the pedestrian target coordinate frame information and the pedestrian target feature information to obtain the pedestrian trajectory for each pedestrian includes:
  • Step 3.1: obtain the pedestrian target coordinate frame information and the pedestrian target feature information in the current monitoring image of the current image acquisition device;
  • Step 3.2: determine whether the tracking trajectory set corresponding to the image acquisition device is empty, the tracking trajectory set being used to save the pedestrian trajectories of pedestrians; if the tracking trajectory set is not empty, perform step 3.3; otherwise, directly add the pedestrian target coordinate frame information and pedestrian target feature information obtained this time to the tracking trajectory set and end;
  • Step 3.3: use unscented Kalman filtering to obtain estimated target coordinate frame information based on the pedestrian trajectories in the tracking trajectory set;
  • Step 3.4: according to the pedestrian target coordinate frame information and the estimated target coordinate frame information, calculate the coordinate frame similarity between each current pedestrian target and each saved pedestrian target one by one; according to the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, calculate the feature similarity between each current pedestrian target and each saved pedestrian target one by one; and obtain the similarity between the current pedestrian target and the saved pedestrian target as a weighted sum of the coordinate frame similarity and the feature similarity;
  • Step 3.5: based on the similarity between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time;
  • Step 3.6: if a pedestrian target is not successfully matched this time, directly add the pedestrian target coordinate frame information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update the pedestrian trajectory of that pedestrian according to the pedestrian target coordinate frame information and pedestrian target feature information corresponding to the pedestrian target; if a pedestrian trajectory in the tracking trajectory set has not been successfully matched for multiple consecutive frames, consider that the pedestrian target has left the monitoring range of the current image acquisition device and mark the pedestrian trajectory as a leaving trajectory; if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, consider the pedestrian trajectory complete and delete it from the tracking trajectory set.
  • Performing similarity matching of pedestrian target feature information between the pedestrian trajectories corresponding to different image acquisition devices, and combining successfully matched pedestrian trajectories to update the pedestrian trajectory of the corresponding pedestrian, includes:
  • Step 4.1: take the tracking trajectory set corresponding to one image acquisition device, and calculate, one by one, the similarity between the pedestrian trajectories marked as new trajectories in that set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices;
  • Step 4.2: if the similarity is greater than a preset threshold, the two pedestrian trajectories are successfully matched;
  • Step 4.3: combine the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of the pedestrian, and replace the corresponding pedestrian trajectory in the tracking trajectory set where the new trajectory is located with the combined trajectory.
  • The construction of the underground and above-ground pedestrian flow network graph in this embodiment represents the subway network of the entire city as a graph in which the subway stations are the vertices and the subway lines, together with the ground traffic routes connected to the entrances and exits of the stations, are the edges (alternatively, only the ground traffic routes connected to the entrances and exits are taken as edges, since when an edge corresponds to a subway line there is no pedestrian flow data for that line unless, on the basis of the present invention, the movement of people inside the subway along the line is obtained by other means such as the existing card-swiping information of the subway stations).
  • Each vertex has a feature vector composed of the video-based pedestrian flow statistics, and an adjacency matrix can be defined to encode pairwise dependencies between vertices. Therefore, the subway network does not need to be represented on a grid, nor does it need a CNN to capture features; it can be described by a general network graph, and a graph convolutional network (GCN) can effectively capture the irregular spatiotemporal dependencies at the level of the subway network rather than at the grid level.
  • The problem modeling is the mathematical modeling of applying the underground and above-ground pedestrian flow statistics network to the prediction of pedestrian flow on all edges: the historical pedestrian flow values of the network are used to predict the pedestrian flow of the network over the coming period.
  • The problem can be modeled as the spatiotemporal sequence shown in Fig. 9, in which a city-wide subway network is defined on a graph and the focus is on the structured time series of passenger flow.
  • The model of the pedestrian flow network graph is constructed as G_t = (V_t, ε, W), where G_t is the pedestrian flow network graph at time t and is composed of multiple nodes; V_t is a finite set of vertices that monitor the pedestrian flow at each node, i.e. V_t is the vector composed of the pedestrian flow of all nodes; ε is the set of edges between vertices; and W is the weighted adjacency matrix describing the connectivity between vertices.
  • The target model for pedestrian flow prediction is constructed as:
  • v̂_{t+1}, ..., v̂_{t+H} = argmax_{v_{t+1}, ..., v_{t+H}} log P(v_{t+1}, ..., v_{t+H} | v_{t-M+1}, ..., v_t), where v_{t-M+1}, ..., v_t are the observed feature vectors of the previous M time steps and v̂_{t+1}, ..., v̂_{t+H} are the predicted feature vectors from time t+1 to time t+H (the hat is the notation used to distinguish predicted quantities).
  • If the input data is the total inbound pedestrian flow of a station from time t-M+1 to t, the predicted feature vectors are the total inbound flow in the specified future time period; likewise, if the input data is the total outbound flow of a station from time t-M+1 to t, the predicted feature vectors are the total outbound flow in the specified future time period.
  • A graph neural network is used to solve the pedestrian flow prediction target model, giving the total predicted inbound and outbound pedestrian flow of each station within the specified future time period.
  • The model framework used is the STGCN framework, which consists of multiple spatio-temporal convolution blocks, each structured like a sandwich (as shown in Fig. 10): two gated temporal convolution layers with a spatial graph convolution module in the middle.
  • The temporal gated convolution (Temporal Gated-Conv) captures temporal correlation and consists of a 1-D convolution and a gated linear unit (GLU); the spatial graph convolution (Spatial Graph-Conv) captures spatial correlation and is mainly composed of a Chebyshev graph convolution module.
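  • A greatly simplified sketch of one such sandwich block in PyTorch, assuming input tensors of shape (batch, channels, time, stations) and approximating the Chebyshev graph convolution by a first-order graph convolution for brevity (these simplifications are not from the patent):

```python
import torch
import torch.nn as nn

class TemporalGatedConv(nn.Module):
    def __init__(self, c_in, c_out, kt=3):
        super().__init__()
        # 2*c_out channels: half for the signal, half for the GLU gate
        self.conv = nn.Conv2d(c_in, 2 * c_out, kernel_size=(kt, 1))

    def forward(self, x):                       # x: (B, C, T, N)
        p, q = self.conv(x).chunk(2, dim=1)
        return p * torch.sigmoid(q)             # gated linear unit

class SpatialGraphConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.theta = nn.Linear(c_in, c_out)

    def forward(self, x, A_hat):                # A_hat: normalized adjacency (N, N)
        x = x.permute(0, 2, 3, 1)               # (B, T, N, C)
        x = torch.relu(self.theta(torch.einsum("nm,btmc->btnc", A_hat, x)))
        return x.permute(0, 3, 1, 2)            # back to (B, C, T, N)

class STConvBlock(nn.Module):
    """Sandwich block: temporal gated conv -> spatial graph conv -> temporal gated conv."""
    def __init__(self, c_in, c_hidden, c_out):
        super().__init__()
        self.t1 = TemporalGatedConv(c_in, c_hidden)
        self.s = SpatialGraphConv(c_hidden, c_hidden)
        self.t2 = TemporalGatedConv(c_hidden, c_out)

    def forward(self, x, A_hat):
        return self.t2(self.s(self.t1(x), A_hat))
```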
  • After the total predicted pedestrian flow of a station is obtained, it is allocated according to the mean proportion corresponding to each entrance and exit of that station.
  • The mean proportion of inbound flow at each entrance and exit of a station is calculated from the inbound flow counted at each entrance and exit during the corresponding time period; similarly, the mean proportion of outbound flow at each entrance and exit is calculated from the outbound flow counted at each entrance and exit during the corresponding time period.
  • Multiplying the total predicted inbound flow of the station by the mean inbound proportion of an entrance (that is, the mean of the proportion of inbound flow through that entrance) gives the predicted inbound flow at that entrance, and multiplying the total predicted outbound flow by the mean outbound proportion (that is, the mean of the proportion of outbound flow) gives the predicted outbound flow at that entrance.
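  • A minimal allocation sketch with made-up counts, showing how a station's total predicted inbound flow could be split across its entrances by the historical mean proportions:

```python
def mean_proportions(history):
    """history: list of dicts mapping entrance -> counted flow for past periods."""
    totals = {e: 0.0 for e in history[0]}
    for period in history:
        period_total = sum(period.values()) or 1.0
        for e, flow in period.items():
            totals[e] += flow / period_total
    return {e: s / len(history) for e, s in totals.items()}

def allocate(total_predicted, proportions):
    return {e: total_predicted * p for e, p in proportions.items()}

# Example with invented counts for one station's inbound flow:
history = [{"A1": 300, "A2": 700}, {"A1": 250, "A2": 750}]
props = mean_proportions(history)          # {'A1': 0.275, 'A2': 0.725}
print(allocate(1200, props))               # predicted inbound flow per entrance
```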

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

A subway pedestrian flow network fusion method based on video pedestrian recognition, and a pedestrian flow prediction method. Video data of subway stations is used to statistically analyze the specific directions in which pedestrian flows enter and exit the subway stations. By fusing the subway network and the ground traffic network, crowd movement takes place within one large network: each subway station is a node of the network, and each subway line and each on-ground road traffic line is an edge of the network. A graph neural network is used to infer the changes of pedestrian flow across the whole traffic network, so as to analyze and predict the amount and direction of pedestrian flow at each station. Deeper pedestrian flow analysis for the entrances of each subway station is achieved, resource scheduling for each station and each entrance is facilitated, and the mutual influence of the on-ground and underground pedestrian flows can be determined in advance, so that traffic early warning can be issued on the ground or underground in time to avoid traffic congestion, deploy station security measures in advance, and so on.

Description

一种基于视频行人识别的地铁人流网络融合方法及人流预测方法A subway pedestrian flow network fusion method and pedestrian flow prediction method based on video pedestrian recognition 技术领域technical field
本发明属于智慧城市技术领域,具体涉及一种基于视频行人识别的地铁人流网络融合方法及人流预测方法。The invention belongs to the technical field of smart cities, and in particular relates to a subway pedestrian flow network fusion method and a pedestrian flow prediction method based on video pedestrian recognition.
背景技术Background technique
视频监控的应用在数字安全防范领域得到越来越多的应用,通过视频进行人数统计也日益重要,例如在车站、旅游景点、展区、商业街等地点,利用人流统计的数据,可以有效的进行人员调动,资源配置以及提供更好的安全保障。The application of video surveillance is getting more and more applications in the field of digital security, and people counting through video is becoming more and more important. For example, in stations, tourist attractions, exhibition areas, commercial streets, etc. Staff mobility, resource allocation and better security.
Existing subway pedestrian flow prediction is generally based on the card-swiping data at the entrances and exits of each station, and the results obtained can only describe the flow of people entering and leaving that station. Typically, a passenger flow prediction model for subway stations is built by analysing historical card-swiping data and the road network map, and is used to predict future changes in passenger flow at each station, for example the number of passengers entering and leaving each station in 10-minute intervals from 00:00 to 24:00 of the next day. Other pedestrian counting schemes mainly use infrared sensors, cameras, communication data and the like to monitor the crowd density in subway carriages in real time. For example, image-based methods generally exploit the fact that crowds of different densities occlude the light reaching the camera to different degrees; crowd distribution images under different densities are obtained, analysed and compared over time, and the crowd density in the carriage is then estimated.
However, the existing schemes all share the following problems: each station generally has many entrances and exits, and each entrance leads in several possible directions, yet current schemes cannot analyse the flow of people entering and leaving a subway station by direction. As a result, the above-ground and underground traffic networks cannot be deeply integrated, so the resources of each entrance and exit of a station cannot be allocated reasonably, and the impact of the flow of people entering and leaving a station on above-ground traffic cannot be obtained.
Summary of the Invention
The purpose of the present invention is to provide a subway pedestrian flow network fusion method and a pedestrian flow prediction method based on video pedestrian recognition, which connect above-ground and underground traffic routes with the subway lines, refine the pedestrian volume at each station and the inbound or outbound direction of that flow, and improve the accuracy of traffic prediction.
To achieve the above purpose, the technical solution adopted by the present invention is as follows:
A subway pedestrian flow network fusion method based on video pedestrian recognition, used to realise fused statistics of subway and ground pedestrian flows to assist traffic early warning, the method comprising:
Step 1. Receive surveillance images of each entrance and exit of the subway stations, the surveillance images being captured by image acquisition devices installed at each entrance and exit;
Step 2. Extract pedestrian target bounding-box information and pedestrian target feature information from the surveillance images, the pedestrian target feature information including pedestrian appearance features, the pedestrian's entering/exiting status, and the pedestrian's exiting or entering direction;
Step 3. For the surveillance images of a single image acquisition device, compute similarities from the pedestrian target bounding-box information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian;
Step 4. Match the pedestrian target feature information of trajectories from different image acquisition devices, join successfully matched trajectories, and update the corresponding pedestrian's trajectory;
Step 5. Obtain the subway lines, subway stations, the entrances and exits of each station, and the above-ground traffic routes corresponding to each entrance and exit within a designated area, and fuse them to construct a subway traffic network graph for the area;
Step 6. From the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, as well as the inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station;
Step 7. Superimpose, onto the subway traffic network graph, the total inbound and outbound flow of each station and the inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station, obtaining a pedestrian-flow movement network graph that fuses subway and ground pedestrian flows.
Several optional implementations are also provided below. They are not additional limitations on the above overall solution, but merely further additions or preferences. Provided there is no technical or logical contradiction, each optional implementation may be combined with the overall solution individually, or several optional implementations may be combined with one another.
Preferably, computing similarities from the pedestrian target bounding-box information and the pedestrian target feature information for the surveillance images of a single image acquisition device to obtain a pedestrian trajectory for each pedestrian comprises:
Step 3.1. Obtain the pedestrian target bounding-box information and the pedestrian target feature information from the current surveillance image of the current image acquisition device;
Step 3.2. Determine whether the tracking trajectory set corresponding to this image acquisition device is empty, the tracking trajectory set storing the pedestrian trajectories; if it is not empty, go to step 3.3; otherwise, add the bounding-box information and feature information obtained this time directly to the tracking trajectory set and end;
Step 3.3. Apply unscented Kalman filtering to the pedestrian trajectories in the tracking trajectory set to obtain estimated target bounding-box information;
Step 3.4. From the pedestrian target bounding-box information and the estimated target bounding-box information, compute one by one the pairwise bounding-box similarity between the current pedestrian targets and the saved pedestrian targets; from the feature information stored in the trajectories and the feature information of the current surveillance image, compute one by one the pairwise feature similarity between the current pedestrian targets and the saved pedestrian targets; obtain the pairwise similarity between the current pedestrian targets and the saved pedestrian targets as a weighted sum of the bounding-box similarity and the feature similarity;
Step 3.5. Based on the pairwise similarities between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time;
Step 3.6. If a pedestrian target is not successfully matched this time, add its bounding-box information and feature information directly to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update that pedestrian's trajectory with the bounding-box information and feature information of the matched target; if a trajectory marked as new is successfully matched several consecutive times, remove its "new" mark; if a trajectory fails to match for several consecutive frames, the pedestrian is considered to have left the monitoring range of the current image acquisition device and the trajectory is marked as a leaving trajectory; if a trajectory marked as leaving is not matched within a specified time threshold, the trajectory is considered finished and is deleted from the tracking trajectory set.
Preferably, matching the pedestrian target feature information of trajectories from different image acquisition devices, joining successfully matched trajectories and updating the corresponding pedestrian's trajectory comprises:
Step 4.1. Take the tracking trajectory set of one image acquisition device and compute, one by one, the similarity between each trajectory marked as new in this set and each trajectory marked as a leaving trajectory in the tracking trajectory sets of the other image acquisition devices;
Step 4.2. If the similarity is greater than a preset threshold, the two pedestrian trajectories are considered successfully matched;
Step 4.3. Join the two successfully matched trajectories into a new trajectory for that pedestrian, and replace the corresponding trajectory in the tracking trajectory set that holds the new trajectory with the joined trajectory.
The present invention further provides a pedestrian flow prediction method, which performs pedestrian flow prediction based on the fusion of subway and ground pedestrian flows to assist traffic early warning, the pedestrian flow prediction method comprising:
obtaining the pedestrian-flow movement network graph for a specified time period using the subway pedestrian flow network fusion method based on video pedestrian recognition;
predicting, with a graph neural network on the pedestrian-flow movement network graph, the total inbound and outbound pedestrian flow of each station in a specified future time period;
computing, from the inbound and outbound flows on the traffic routes corresponding to the entrances and exits of each station in the pedestrian-flow movement network graph, the mean inbound and outbound proportions of the traffic route corresponding to each entrance and exit of each station;
allocating each station's predicted total inbound and outbound flow according to the mean inbound and outbound proportions, to obtain the predicted inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station.
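As a hedged illustration of the proportion-mean and allocation steps described above, the following Python sketch splits a station-level prediction across its entrances; the data layout, entrance names and counts are all hypothetical, not taken from the patent.

```python
from collections import defaultdict

def proportion_means(entrance_flows):
    """entrance_flows: {entrance_id: [flow in period 1, flow in period 2, ...]}.
    Returns the mean share of the station total attributed to each entrance."""
    periods = len(next(iter(entrance_flows.values())))
    shares = defaultdict(list)
    for p in range(periods):
        total = sum(flows[p] for flows in entrance_flows.values()) or 1
        for ent, flows in entrance_flows.items():
            shares[ent].append(flows[p] / total)
    return {ent: sum(s) / len(s) for ent, s in shares.items()}

def allocate(predicted_station_total, share_means):
    """Split a station-level predicted flow across entrances by mean share."""
    norm = sum(share_means.values()) or 1
    return {ent: predicted_station_total * share / norm
            for ent, share in share_means.items()}

# Hypothetical historical inbound counts for two entrances over three periods.
inbound = {"A": [120, 90, 150], "B": [60, 30, 50]}
means = proportion_means(inbound)
print(allocate(1000, means))  # a predicted station inbound of 1000, split by mean share
```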
Preferably, obtaining the pedestrian-flow movement network graph for a specified time period using the subway pedestrian flow network fusion method based on video pedestrian recognition comprises:
Step 1. Receive surveillance images of each entrance and exit of the subway stations, the surveillance images being captured by image acquisition devices installed at each entrance and exit;
Step 2. Extract pedestrian target bounding-box information and pedestrian target feature information from the surveillance images, the pedestrian target feature information including pedestrian appearance features, the pedestrian's entering/exiting status, and the pedestrian's exiting or entering direction;
Step 3. For the surveillance images of a single image acquisition device, compute similarities from the pedestrian target bounding-box information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian;
Step 4. Match the pedestrian target feature information of trajectories from different image acquisition devices, join successfully matched trajectories, and update the corresponding pedestrian's trajectory;
Step 5. Obtain the subway lines, subway stations, the entrances and exits of each station, and the above-ground traffic routes corresponding to each entrance and exit within a designated area, and fuse them to construct a subway traffic network graph for the area;
Step 6. From the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, as well as the inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station;
Step 7. Superimpose, onto the subway traffic network graph, the total inbound and outbound flow of each station and the inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station, obtaining a pedestrian-flow movement network graph that fuses subway and ground pedestrian flows.
Preferably, computing similarities from the pedestrian target bounding-box information and the pedestrian target feature information for the surveillance images of a single image acquisition device to obtain a pedestrian trajectory for each pedestrian comprises:
Step 3.1. Obtain the pedestrian target bounding-box information and the pedestrian target feature information from the current surveillance image of the current image acquisition device;
Step 3.2. Determine whether the tracking trajectory set corresponding to this image acquisition device is empty, the tracking trajectory set storing the pedestrian trajectories; if it is not empty, go to step 3.3; otherwise, add the bounding-box information and feature information obtained this time directly to the tracking trajectory set and end;
Step 3.3. Apply unscented Kalman filtering to the pedestrian trajectories in the tracking trajectory set to obtain estimated target bounding-box information;
Step 3.4. From the pedestrian target bounding-box information and the estimated target bounding-box information, compute one by one the pairwise bounding-box similarity between the current pedestrian targets and the saved pedestrian targets; from the feature information stored in the trajectories and the feature information of the current surveillance image, compute one by one the pairwise feature similarity between the current pedestrian targets and the saved pedestrian targets; obtain the pairwise similarity between the current pedestrian targets and the saved pedestrian targets as a weighted sum of the bounding-box similarity and the feature similarity;
Step 3.5. Based on the pairwise similarities between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time;
Step 3.6. If a pedestrian target is not successfully matched this time, add its bounding-box information and feature information directly to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update that pedestrian's trajectory with the bounding-box information and feature information of the matched target; if a trajectory marked as new is successfully matched several consecutive times, remove its "new" mark; if a trajectory fails to match for several consecutive frames, the pedestrian is considered to have left the monitoring range of the current image acquisition device and the trajectory is marked as a leaving trajectory; if a trajectory marked as leaving is not matched within a specified time threshold, the trajectory is considered finished and is deleted from the tracking trajectory set.
Preferably, matching the pedestrian target feature information of trajectories from different image acquisition devices, joining successfully matched trajectories and updating the corresponding pedestrian's trajectory comprises:
Step 4.1. Take the tracking trajectory set of one image acquisition device and compute, one by one, the similarity between each trajectory marked as new in this set and each trajectory marked as a leaving trajectory in the tracking trajectory sets of the other image acquisition devices;
Step 4.2. If the similarity is greater than a preset threshold, the two pedestrian trajectories are considered successfully matched;
Step 4.3. Join the two successfully matched trajectories into a new trajectory for that pedestrian, and replace the corresponding trajectory in the tracking trajectory set that holds the new trajectory with the joined trajectory.
Preferably, predicting, with a graph neural network on the pedestrian-flow movement network graph, the total inbound and outbound pedestrian flow of each station in a specified future time period comprises:
In the pedestrian-flow movement network graph, the subway stations are the vertices and the traffic routes corresponding to the entrances and exits of the stations are the edges; each vertex carries a feature vector containing its total inbound and outbound pedestrian flow. The model of the pedestrian-flow movement network graph is constructed as:
G_t = (V_t, ε, W)
where G_t is the pedestrian-flow movement network graph at time t, V_t is the vector composed of the feature vectors of all vertices, ε is the set of edges between vertices, W is the weighted adjacency matrix, and t is the current time;
When predicting the pedestrian flow at each vertex, the feature vectors of that vertex over the historical period from time t-M+1 to time t are used to predict its feature vectors over the future period from time t+1 to time t+H, where M and H are preset coefficients. The pedestrian flow prediction objective model is constructed as:
v̂_{t+1}, …, v̂_{t+H} = argmax_{v_{t+1}, …, v_{t+H}} log P(v_{t+1}, …, v_{t+H} | v_{t-M+1}, …, v_t)

where v̂_{t+1}, …, v̂_{t+H} are the predicted feature vectors for times t+1 through t+H, and v_{t-M+1}, …, v_t are the input feature vectors for times t-M+1 through t;
Based on the constructed pedestrian flow prediction objective model, a graph neural network is used to solve the model, yielding the predicted total inbound and outbound pedestrian flow of each station in the specified future time period.
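The patent does not spell out the network internals at this point. As a hedged illustration of how a graph neural network propagates station features over the adjacency matrix W, the following numpy sketch performs one symmetric-normalised graph-convolution step; the STGCN framework referenced in the drawings stacks such spatial layers with temporal convolutions. The array shapes, adjacency values and random weights are assumptions for illustration only.

```python
import numpy as np

def gcn_layer(features, adj, weight):
    """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # degree normalisation
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight
    return np.maximum(propagated, 0.0)                       # ReLU

# Assumed toy setup: 4 stations, 2 features per station (inbound, outbound flow).
rng = np.random.default_rng(0)
W_adj = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)   # weighted adjacency of the station graph
V_t = rng.random((4, 2))                         # node features at time t
hidden = gcn_layer(V_t, W_adj, rng.random((2, 8)))
print(hidden.shape)  # (4, 8): per-station hidden features after one spatial layer
```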
The subway pedestrian flow network fusion method and pedestrian flow prediction method based on video pedestrian recognition provided by the present invention use video data from subway stations to statistically analyse the specific directions of the pedestrian flows entering and exiting each station, and connect the above-ground road network with the subway lines to form a complete pedestrian-flow traffic network that can be measured both above and below ground. The fusion of the subway network and the ground traffic network means that crowd movement takes place on one large network, in which every subway station is a node and every subway line and ground road is an edge. A graph neural network is used to infer how pedestrian flow changes over the whole traffic network, so as to analyse and predict the pedestrian volume and flow direction at every station. This enables a deeper analysis of pedestrian flow at the entrances and exits of each subway station, which not only facilitates resource scheduling for each station and entrance, but also makes it possible to anticipate the mutual influence of above-ground and underground flows in time, so that traffic warnings can be issued above or below ground promptly, congestion can be avoided, and station security measures can be deployed in advance.
Brief Description of the Drawings
Fig. 1 is a flow chart of the subway pedestrian flow network fusion method based on video pedestrian recognition of the present invention;
Fig. 2 is a schematic diagram of the training of the SSD object detection network of the present invention;
Fig. 3 is a schematic diagram of the training of the MobileNet neural network of the present invention;
Fig. 4 is a schematic diagram of the neural network distillation operation of the present invention;
Fig. 5 is a flow chart of pedestrian trajectory tracking of the present invention;
Fig. 6 is a flow chart of the multi-factor fusion pedestrian target tracking method of the present invention;
Fig. 7 is a flow chart of the target tracking method based on pedestrian image features of the present invention;
Fig. 8 is a schematic diagram of an embodiment of the subway traffic network graph of the present invention;
Fig. 9 is a schematic diagram of problem modelling based on spatio-temporal sequences in the present invention;
Fig. 10 is a schematic structural diagram of the STGCN framework of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person skilled in the technical field to which the present invention belongs. The terms used in the description of the present invention are intended only to describe specific embodiments and are not intended to limit the invention.
In one embodiment, a subway pedestrian flow network fusion method based on video pedestrian recognition is provided. It establishes the pedestrian flow link between above-ground traffic routes and underground subway lines, and analyses, for every entrance and exit of every subway station, where the inbound flow comes from and where the outbound flow goes. Associating above-ground (street-level) and underground (subway) pedestrian flow overcomes the limitation of existing statistics, which consider only a single above-ground or underground level and ignore the mutual influence between the two, leading to insufficient accuracy of above-ground and underground statistics and predictions. The pedestrian counting method of the present invention, based on the fusion of the above-ground and underground networks, not only supports resource scheduling at every entrance and exit of every subway station, but can also combine the above-ground traffic network to warn of pedestrian flow at subway stations and combine the underground subway network to warn of pedestrian flow on above-ground traffic routes, improving the foresight and timeliness of traffic control.
As shown in Fig. 1, the subway pedestrian flow network fusion method based on video pedestrian recognition of this embodiment comprises the following steps:
Step 1. Receive surveillance images of each entrance and exit of the subway stations, the surveillance images being captured by image acquisition devices installed at each entrance and exit.
Since the above-ground and underground pedestrian flow data need to be associated, each image acquisition device should be deployed so that its field of view covers the entire entrance/exit as well as the above-ground traffic route corresponding to that entrance/exit, which lays the foundation for identifying whether a pedestrian is entering or exiting the station and in which direction.
The image acquisition device in this embodiment may be an optical camera, a binocular camera, a TOF camera, or the like, and each device has a unique device id so that every device can be distinguished. Therefore, when a surveillance image is received, the device id of the corresponding image acquisition device and the corresponding timestamp are also obtained.
It should be noted that installing one image acquisition device at each entrance/exit of a subway station is usually sufficient for the present invention, but the invention is not limited to a single device per entrance/exit. Where monitoring needs or accuracy requirements demand it, several devices may be installed at one entrance/exit to capture video information more comprehensively, and devices may also be installed inside the subway station or along the traffic routes corresponding to the entrances/exits, to extend the capture range and obtain more comprehensive and complete pedestrian statistics and trajectories.
Step 2. Extract the pedestrian target bounding-box information and the pedestrian target feature information from the surveillance images, the pedestrian target feature information including pedestrian appearance features, the pedestrian's entering/exiting status, and the pedestrian's exiting or entering direction.
The bounding-box information and the feature information are the basic information for recognising and locating pedestrian targets; in this embodiment both are extracted with neural networks. Many object detection networks and feature recognition networks are available, and this embodiment does not restrict which networks are used. For ease of understanding, this embodiment is described with an SSD object detection network extracting the bounding-box information and a MobileNet neural network extracting the feature information.
As shown in Fig. 2, the training and application procedure of the SSD object detection network used as the object recognition algorithm is as follows:
1. Build a pedestrian target data set comprising image data and annotation data, the annotations marking the regions of the pedestrian targets in the images.
2. Compute the aspect ratios of the pedestrian targets in the data set and cluster them to obtain n cluster centres, i.e. n aspect ratios; these n aspect ratios are used as the anchor-box ratios of the detection network (a clustering sketch is given after this list).
3. Augment the image data to obtain training data, for example by colour jittering, random cropping, scaling, rotation, and so on.
4. Feed the training data into the SSD object detection network for detection; the network outputs the coordinate boxes of the pedestrians in the image.
5. Apply non-maximum suppression (NMS) to the boxes output by the SSD network to remove duplicate boxes and obtain the final output boxes.
6. Among the output boxes, select those whose intersection-over-union (IoU) with an annotated box exceeds a larger threshold as positive samples and those whose IoU is below a smaller threshold as negative samples, then randomly select a certain proportion and number of positive and negative samples as training samples for the network.
7. Compute the loss function from the final training-sample boxes and the annotations, and adjust the network parameters by back-propagation.
8. After sufficient training, the final SSD object detection network is obtained.
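A minimal sketch of the aspect-ratio clustering in item 2 above, assuming Python with scikit-learn; the box list and the number of clusters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_aspect_ratios(boxes, n=5):
    """boxes: array of [width, height] for annotated pedestrian boxes.
    Returns n aspect ratios (w/h) to use as anchor-box ratios."""
    ratios = (boxes[:, 0] / boxes[:, 1]).reshape(-1, 1)
    km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(ratios)
    return sorted(km.cluster_centers_.ravel().tolist())

# Hypothetical annotated pedestrian boxes (width, height) in pixels.
boxes = np.array([[40, 110], [35, 100], [60, 170], [30, 95], [55, 160], [45, 120]])
print(anchor_aspect_ratios(boxes, n=3))
```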
When the SSD object detection network is applied, the trained network takes an image as input and outputs pedestrian coordinate boxes; non-maximum suppression is applied to the boxes to remove duplicates, and finally a confidence threshold is set so that only boxes whose confidence exceeds the threshold are output as pedestrian target bounding-box information.
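A minimal sketch of the confidence filtering and non-maximum suppression described above, in plain numpy; the box format [x1, y1, x2, y2] and the thresholds are assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def nms(boxes, scores, iou_thr=0.5, score_thr=0.4):
    """Drop low-confidence boxes, then suppress overlapping duplicates."""
    keep_mask = scores >= score_thr
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)          # highest confidence first
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return boxes[kept], scores[kept]
```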
As shown in Fig. 3, the training and application procedure of the MobileNet neural network used as the pedestrian re-identification algorithm is as follows:
1. Build a pedestrian re-identification data set comprising image data and the pedestrian id corresponding to each image.
2. For each training step, select three images: two different images of pedestrian A and one image of another pedestrian. After image augmentation, the three images are fed into the MobileNet network, which outputs pedestrian features.
3. Compute a triplet loss from the network outputs. This loss makes the features of images of the same pedestrian more similar and the features of different pedestrians less similar, so it can be used to train the re-identification algorithm. After computing the loss, the network is trained by back-propagation.
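A minimal sketch of the triplet loss in item 3, assuming PyTorch; the margin value and the embedding dimension are assumptions (PyTorch's built-in nn.TripletMarginLoss could be used equivalently).

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Pull two embeddings of the same pedestrian together and push a
    different pedestrian's embedding away by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)   # same person, different images
    d_neg = F.pairwise_distance(anchor, negative)   # different person
    return F.relu(d_pos - d_neg + margin).mean()

# Hypothetical 128-d embeddings produced by the re-identification backbone.
a = torch.randn(8, 128, requires_grad=True)
p = torch.randn(8, 128)
n = torch.randn(8, 128)
loss = triplet_loss(a, p, n)
loss.backward()  # in real training this would backpropagate into the backbone
```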
When deploying the object detection network, the neural network is distilled so that it runs faster, as shown in Fig. 4. The teacher model is generally a large, well-trained neural network with high accuracy but many parameters and a slow running time. The student model is generally a model with far fewer parameters, which is often hard to train directly from the annotated data alone; distillation lets the student learn simultaneously from the annotated data and from the teacher model, which usually yields better results.
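The patent does not give the exact distillation objective; as a hedged sketch of the commonly used form (a weighted mix of the hard-label loss and a softened teacher-matching term), the following PyTorch snippet is illustrative, with the temperature T and weight alpha as assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * cross-entropy on the ground truth + (1 - alpha) * KL divergence
    to the teacher's softened outputs (scaled by T^2, as is conventional)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

# Hypothetical logits for a batch of 4 samples and 10 classes.
s = torch.randn(4, 10, requires_grad=True)   # student outputs
t = torch.randn(4, 10)                       # teacher outputs
y = torch.randint(0, 10, (4,))               # ground-truth labels
distillation_loss(s, t, y).backward()
```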
When the MobileNet neural network is applied, the final trained network receives the image data and the pedestrian id data and outputs the pedestrian target feature information corresponding to each pedestrian.
The object recognition network, the feature recognition network, and the corresponding training and application methods provided in this embodiment can extract the relevant data from the surveillance images completely, comprehensively and accurately, providing high-quality basic information for pedestrian flow statistics.
Since a pedestrian's true trajectory cannot be identified accurately from the pedestrian's state in a single surveillance image, pedestrian trajectory tracking is required. As shown in Fig. 5, the multi-target tracking method in this embodiment consists of two parts. The first is a multi-factor fusion pedestrian target tracking method within the view of a single image acquisition device, which generates the pedestrian trajectories under that device. The second is a cross-device target tracking method based on pedestrian appearance features, which matches the trajectories of the same pedestrian under different image acquisition devices. The cross-device method computes similarity directly from the pedestrian target feature information: if the similarity exceeds a certain threshold, the trajectories are judged to belong to the same pedestrian and are associated. Combining the two methods yields cross-region pedestrian trajectory data, enabling complete trajectory tracking and improving the accuracy of pedestrian flow statistics.
Step 3. For the surveillance images of a single image acquisition device, compute similarities from the pedestrian target bounding-box information and the pedestrian target feature information to obtain a pedestrian trajectory for each pedestrian. This is the multi-factor fusion pedestrian target tracking method, shown in Fig. 6 and detailed as follows:
Step 3.1. Obtain the pedestrian target bounding-box information and the pedestrian target feature information from the current surveillance image of the current image acquisition device.
Step 3.2. Determine whether the tracking trajectory set corresponding to this image acquisition device is empty, the tracking trajectory set storing the pedestrian trajectories; if it is not empty, go to step 3.3; otherwise, add the bounding-box information and feature information obtained this time directly to the tracking trajectory set and end.
Step 3.3. Apply unscented Kalman filtering to the pedestrian trajectories in the tracking trajectory set to obtain estimated target bounding-box information.
The unscented Kalman filter is developed from the Kalman filter and the unscented transform: the lossless transform allows the Kalman filter, which assumes linearity, to be applied to nonlinear systems, which makes it better at tracking pedestrians when overlap and occlusion are frequent. The unscented Kalman filter is used to estimate the current position of every existing pedestrian trajectory.
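A minimal sketch of using an unscented Kalman filter to predict a trajectory's box centre under a constant-velocity model, assuming the third-party filterpy package; the state layout, noise settings and detection values are assumptions, since the patent only states that the UKF estimates each trajectory's current position.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    """Constant-velocity transition; state = [cx, cy, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    """Only the box centre (cx, cy) is observed."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([320.0, 240.0, 0.0, 0.0])   # initial centre from the first detection
ukf.P *= 50.0                                 # loose initial uncertainty

for cx, cy in [(322, 244), (325, 249), (329, 255)]:  # detected centres per frame
    ukf.predict()                             # estimated position for the current frame
    ukf.update(np.array([cx, cy], dtype=float))
print(ukf.x[:2])                              # filtered centre after the last frame
```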
Step 3.4. From the pedestrian target bounding-box information and the estimated target bounding-box information, compute one by one the pairwise bounding-box similarity between the current pedestrian targets (i.e. the targets obtained this time) and the saved pedestrian targets; from the feature information stored in the trajectories and the feature information of the current surveillance image, compute one by one the pairwise feature similarity between the current pedestrian targets and the saved pedestrian targets; obtain the pairwise similarity between the current pedestrian targets and the saved pedestrian targets as a weighted sum of the bounding-box similarity and the feature similarity.
Using the pedestrian target box estimated by the unscented Kalman filter for each trajectory in the set and the currently detected pedestrian target box, the IoU of the boxes, the distance between their centre points, the difference in box size and so on are computed; these indicators are combined with the feature similarity of the pedestrian target feature information (for example the cosine similarity of the features) in a weighted sum to construct the similarity between an existing trajectory and a pedestrian to be matched.
The final pairwise similarity between current and saved pedestrian targets thus combines bounding-box similarity and feature similarity, matching pedestrian targets from multiple perspectives and significantly improving trajectory accuracy. It should be noted that computing bounding-box similarity and feature similarity are mature techniques in pedestrian trajectory tracking and are not described further here; the weights of the weighted sum can be set according to the emphasis of the actual application.
To represent the pairwise similarities intuitively, they can be stored in a similarity matrix, whose rows correspond to the saved pedestrian targets and whose columns correspond to the current pedestrian targets, each entry being the similarity between the corresponding saved target and current target.
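A minimal sketch of building such a similarity matrix in Python; the weights, the distance normalisation by an image diagonal, and the track/detection dictionary layout are all assumptions for illustration.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def box_iou(b1, b2):
    """Boxes as [x1, y1, x2, y2]."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter + 1e-9)

def similarity_matrix(tracks, detections, w=(0.5, 0.2, 0.1, 0.2), img_diag=800.0):
    """Rows: saved trajectories (with UKF-estimated boxes); columns: current detections.
    Each entry is a weighted sum of IoU, centre-distance, size and feature terms."""
    S = np.zeros((len(tracks), len(detections)))
    for i, trk in enumerate(tracks):
        tb = trk["est_box"]
        for j, det in enumerate(detections):
            db = det["box"]
            iou_term = box_iou(tb, db)
            c1 = np.array([(tb[0] + tb[2]) / 2, (tb[1] + tb[3]) / 2])
            c2 = np.array([(db[0] + db[2]) / 2, (db[1] + db[3]) / 2])
            centre_term = 1.0 - min(np.linalg.norm(c1 - c2) / img_diag, 1.0)
            a1 = (tb[2] - tb[0]) * (tb[3] - tb[1])
            a2 = (db[2] - db[0]) * (db[3] - db[1])
            size_term = min(a1, a2) / (max(a1, a2) + 1e-9)
            feat_term = cosine(trk["feature"], det["feature"])
            S[i, j] = (w[0] * iou_term + w[1] * centre_term
                       + w[2] * size_term + w[3] * feat_term)
    return S
```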
Step 3.5. Based on the pairwise similarities between the current pedestrian targets and the saved pedestrian targets, use the Hungarian matching algorithm to match the pedestrian trajectories in the tracking trajectory set with the pedestrian targets obtained this time.
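A minimal sketch of step 3.5 using scipy's Hungarian solver on the similarity matrix from the previous sketch; the gating threshold is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(similarity, min_sim=0.3):
    """Maximise total similarity between trajectories (rows) and detections (cols).
    Returns matched (row, col) pairs plus the unmatched rows and columns."""
    rows, cols = linear_sum_assignment(-similarity)   # negate: the solver minimises cost
    pairs = [(r, c) for r, c in zip(rows, cols) if similarity[r, c] >= min_sim]
    unmatched_rows = set(range(similarity.shape[0])) - {r for r, _ in pairs}
    unmatched_cols = set(range(similarity.shape[1])) - {c for _, c in pairs}
    return pairs, unmatched_rows, unmatched_cols

S = np.array([[0.9, 0.10],
              [0.2, 0.05],
              [0.3, 0.80]])
print(match(S))  # trajectory 0 with detection 0, trajectory 2 with detection 1; trajectory 1 unmatched
```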
Step 3.6. If a pedestrian target is not successfully matched this time, add its bounding-box information and feature information directly to the tracking trajectory set and mark it as a new trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, update that pedestrian's trajectory with the bounding-box information and feature information of the matched target; if a trajectory marked as new is successfully matched several consecutive times, remove its "new" mark; if a trajectory fails to match for several consecutive frames, the pedestrian is considered to have left the monitoring range of the current image acquisition device and the trajectory is marked as a leaving trajectory; if a trajectory marked as leaving is not matched within a specified time threshold, the trajectory is considered finished and is deleted from the tracking trajectory set.
When generating pedestrian trajectories under a single image acquisition device, this embodiment updates each pedestrian's trajectory in real time. When a new pedestrian appears within the monitored area, the new pedestrian is confirmed only after several consecutive successful matches, avoiding false detections; and when a trajectory fails to match several consecutive times, it is deleted, reducing storage and matching load and increasing matching speed.
Step 4. Match the pedestrian target feature information of trajectories from different image acquisition devices, join successfully matched trajectories, and update the corresponding pedestrian's trajectory. This is the target tracking method based on pedestrian image features, shown in Fig. 7 and detailed as follows:
Step 4.1. Take the tracking trajectory set of one image acquisition device and compute, one by one, the similarity between each trajectory marked as new in this set and each trajectory marked as a leaving trajectory in the tracking trajectory sets of the other image acquisition devices.
Since, under normal circumstances, a pedestrian cannot appear in two camera views at the same time, this embodiment matches only against the leaving trajectories of the other image acquisition devices, which ensures that the matching result is consistent with normal pedestrian movement while also reducing the feature-matching load and increasing the cross-region matching speed.
Step 4.2. If the similarity is greater than a preset threshold, the two pedestrian trajectories are considered successfully matched. In this embodiment the similarity is computed from the pedestrian target feature information carried by the two trajectories; the feature information used may be that of the latest surveillance image in the trajectory, or the mean of the feature information over the latest several frames. The similarity may be the cosine similarity, and matching may again be carried out with the Hungarian algorithm.
Step 4.3. Join the two successfully matched trajectories into a new trajectory for that pedestrian, and replace the corresponding trajectory in the tracking trajectory set that holds the new trajectory with the joined trajectory.
This embodiment joins the two successfully matched trajectory segments, preferably by splicing them in chronological order, so as to obtain a trajectory that follows the pedestrian's true path. The joined cross-region trajectory is moved into the tracking trajectory set that holds the new trajectory, which also means the leaving trajectory is moved from its original tracking trajectory set into the set holding the new trajectory, achieving joint management of pedestrian trajectories.
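A minimal sketch of the cross-device steps 4.1 to 4.3: compare the mean appearance feature of a new trajectory with those of leaving trajectories from other cameras, and splice them chronologically when the cosine similarity clears the threshold. The dictionary layout, field names and threshold are assumptions.

```python
import numpy as np

def mean_feature(track, last_n=5):
    """Average the appearance features of the last few observations."""
    feats = np.array([obs["feature"] for obs in track["observations"][-last_n:]])
    return feats.mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def try_join(new_track, leaving_tracks, thr=0.6):
    """Match a 'new' trajectory against 'leaving' trajectories from other cameras
    and splice the best match in front of it, ordered by timestamp."""
    f_new = mean_feature(new_track)
    best, best_sim = None, thr
    for cand in leaving_tracks:
        sim = cosine(f_new, mean_feature(cand))
        if sim > best_sim:
            best, best_sim = cand, sim
    if best is None:
        return new_track                      # no cross-camera match found
    merged = sorted(best["observations"] + new_track["observations"],
                    key=lambda obs: obs["timestamp"])
    return {"id": new_track["id"], "observations": merged}
```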
Step 5. Obtain the subway lines, subway stations, the entrances and exits of each station, and the above-ground traffic routes corresponding to each entrance and exit within a designated area, and fuse them to construct a subway traffic network graph for the area.
In this embodiment, the above-ground traffic route corresponding to an entrance/exit should be understood as the street on which the entrance/exit is located. Taking the street at the entrance/exit is the basic operation for fusing the above-ground and underground networks; as with the device installation in step 1, if image acquisition devices are installed not only at the entrances/exits but also within a preset range around the station, the corresponding above-ground traffic routes may also include the other streets, covered by those devices, that extend from the street at the entrance/exit. The resulting subway traffic network graph is shown in Fig. 8, where the dots are subway stations, the solid lines are streets and the dashed lines are subway lines; that is, the subway stations are the vertices, the subway lines and traffic routes are the edges, and the junctions between traffic routes and stations indicate the entrances/exits. Since the present invention mainly counts the pedestrian flow at station entrances/exits and on the streets, the subway traffic network graph may also contain only the stations as vertices and the streets as edges.
Step 6. From the latest pedestrian trajectories within a preset time period, count the total inbound and outbound pedestrian flow of each station, as well as the inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station.
A pedestrian trajectory contains at least the pedestrian's movement path within one image acquisition device, and this path has a direction, so the trajectory can be used to identify whether the pedestrian is entering or exiting the station, from which the total inbound and outbound flow within the specified time period can be counted.
Moreover, because the field of view of the image acquisition device includes the above-ground traffic route, a trajectory captures a pedestrian entering the frame from a certain direction of the route and entering the station, or exiting the station and leaving the frame towards a certain direction of the route. Correspondingly, the inbound and outbound flow on each traffic route can be obtained, and this flow includes the pedestrian's exiting or entering direction (for example, if an exit allows only left and right turns, the outbound flow on the route consists of the flow that turns left into the route after exiting and the flow that turns right into the route after exiting).
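A minimal sketch of the counting in step 6, assuming each finished trajectory already carries the station, entrance, an 'in'/'out' status and the ground-route direction inferred from its start and end points; all field names and sample values are assumptions.

```python
from collections import Counter

def count_flows(trajectories):
    """Returns (per-station totals, per-entrance-and-direction counts)."""
    station_totals = Counter()     # key: (station, 'in' | 'out')
    entrance_flows = Counter()     # key: (station, entrance, 'in' | 'out', direction)
    for t in trajectories:
        station_totals[(t["station"], t["status"])] += 1
        entrance_flows[(t["station"], t["entrance"], t["status"], t["direction"])] += 1
    return station_totals, entrance_flows

# Hypothetical trajectories within one statistics window.
trajs = [
    {"station": "S1", "entrance": "A", "status": "in",  "direction": "from_north"},
    {"station": "S1", "entrance": "A", "status": "out", "direction": "turn_left"},
    {"station": "S1", "entrance": "B", "status": "out", "direction": "turn_right"},
]
totals, per_entrance = count_flows(trajs)
print(totals[("S1", "out")])                          # 2
print(per_entrance[("S1", "A", "out", "turn_left")])  # 1
```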
Step 7. Superimpose, onto the subway traffic network graph, the total inbound and outbound flow of each station and the inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station, obtaining a pedestrian-flow movement network graph that fuses subway and ground pedestrian flows.
The pedestrian-flow movement network graph finally obtained by the present invention represents the inbound and outbound flow at every entrance/exit of every subway station, including the inbound flow, the outbound flow, the direction in which the outbound flow enters the traffic route (for example, if the route at an entrance runs north-south, the flow heading south and the flow heading north after exiting), and the direction from which the inbound flow arrives on the traffic route (for example, for a north-south route, the inbound flow arriving from the south and the inbound flow arriving from the north).
The above flow directions are obtained from each pedestrian's entering/exiting status and exiting or entering direction in every frame, by superimposing multiple surveillance images in time order, i.e. once the pedestrian trajectory has been generated. This makes it easy to analyse, from the subway pedestrian flow, possible congestion on above-ground traffic routes, so that evacuation, early warning and other measures can be taken in time, which is particularly helpful for traffic early warning near scenic areas, urban arterial roads and venues for city events.
In another embodiment, a pedestrian flow prediction method is also provided, which predicts pedestrian flow based on the fusion of subway and ground pedestrian flows to assist traffic early warning, the pedestrian flow prediction method comprising:
obtaining the pedestrian-flow movement network graph for a specified time period using the subway pedestrian flow network fusion method based on video pedestrian recognition;
predicting, with a graph neural network on the pedestrian-flow movement network graph, the total inbound and outbound pedestrian flow of each station in a specified future time period;
computing, from the inbound and outbound flows on the traffic routes corresponding to the entrances and exits of each station in the graph, the mean inbound and outbound proportions of the traffic route corresponding to each entrance and exit of each station;
allocating each station's predicted total inbound and outbound flow according to the mean inbound and outbound proportions, to obtain the predicted inbound and outbound flow on the traffic route corresponding to each entrance and exit of each station.
In another embodiment, obtaining a pedestrian flow network map for a specified time period by using the subway pedestrian flow network fusion method based on video pedestrian recognition includes:
Step 1: receiving monitoring images of each entrance/exit of the subway station, the monitoring images being acquired by image acquisition devices arranged at each entrance/exit;
Step 2: extracting pedestrian target bounding-box information and pedestrian target feature information from the monitoring images, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and pedestrian exit or entry direction;
Step 3: performing, based on the monitoring images of the same image acquisition device, similarity calculation according to the pedestrian target bounding-box information and the pedestrian target feature information, to obtain a pedestrian trajectory for the same pedestrian;
Step 4: performing similarity matching of pedestrian target feature information based on the pedestrian trajectories corresponding to different image acquisition devices, joining the successfully matched pedestrian trajectories, and updating the pedestrian trajectory of the corresponding pedestrian;
Step 5: obtaining the subway lines, subway stations and entrances/exits of each station in a designated area, as well as the above-ground traffic routes corresponding to each entrance/exit, and fusing them to construct a subway traffic network map of the designated area;
Step 6: counting, according to the latest pedestrian trajectories within a preset time period, the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station;
Step 7: superimposing, on the subway traffic network map, the total inbound and outbound pedestrian flow of each subway station and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station, to obtain a pedestrian flow network map that fuses subway and above-ground pedestrian flows.
In another embodiment, performing, based on the monitoring images of the same image acquisition device, similarity calculation according to the pedestrian target bounding-box information and the pedestrian target feature information to obtain a pedestrian trajectory for the same pedestrian includes:
Step 3.1: obtaining the pedestrian target bounding-box information and pedestrian target feature information of the current image acquisition device in the current monitoring image;
Step 3.2: judging whether the tracking trajectory set corresponding to the image acquisition device is empty, the tracking trajectory set being used to store pedestrian trajectories; if the tracking trajectory set is not empty, executing step 3.3; otherwise, directly adding the currently obtained pedestrian target bounding-box information and pedestrian target feature information to the tracking trajectory set and ending;
Step 3.3: obtaining estimated target bounding-box information by unscented Kalman filtering based on the pedestrian trajectories in the tracking trajectory set;
Step 3.4: calculating, one by one according to the pedestrian target bounding-box information and the estimated target bounding-box information, the bounding-box similarity between each current pedestrian target and each stored pedestrian target; calculating, one by one based on the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, the feature similarity between each current pedestrian target and each stored pedestrian target; and obtaining the pairwise similarity between the current pedestrian targets and the stored pedestrian targets by a weighted sum of the bounding-box similarity and the feature similarity;
Step 3.5: matching, based on the pairwise similarity between the current pedestrian targets and the stored pedestrian targets, the pedestrian trajectories in the tracking trajectory set with the currently obtained pedestrian targets by the Hungarian matching algorithm;
Step 3.6: if a pedestrian target fails to be matched this time, directly adding the pedestrian target bounding-box information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and marking it as a newborn trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, updating the pedestrian trajectory of that pedestrian according to the pedestrian target bounding-box information and pedestrian target feature information of the pedestrian target; if a pedestrian trajectory marked as a newborn trajectory in the tracking trajectory set is successfully matched several consecutive times, removing the newborn mark from that pedestrian trajectory; if a pedestrian trajectory in the tracking trajectory set fails to be matched for several consecutive frames, considering that the pedestrian target has left the monitoring range of the current image acquisition device and marking that pedestrian trajectory as a leaving trajectory; and if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, considering the pedestrian trajectory finished and deleting it from the tracking trajectory set.
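By way of illustration of the data association in steps 3.4 to 3.6, the following sketch combines a bounding-box similarity (IoU) and an appearance-feature similarity (cosine) by weighted sum and solves the assignment with the Hungarian algorithm via scipy.optimize.linear_sum_assignment; the weight w_box and the threshold min_sim are assumed values, and the sketch is not the exact implementation of this application.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def match_detections(predicted_boxes, track_feats, det_boxes, det_feats,
                     w_box=0.5, min_sim=0.3):
    """Match stored trajectories to current detections.

    Returns (matched pairs, unmatched detection indices, unmatched track indices);
    unmatched detections become newborn trajectories, unmatched tracks are
    candidates for the "leaving" state. w_box and min_sim are illustrative values.
    """
    n_trk, n_det = len(predicted_boxes), len(det_boxes)
    sim = np.zeros((n_trk, n_det))
    for i in range(n_trk):
        for j in range(n_det):
            sim[i, j] = (w_box * iou(predicted_boxes[i], det_boxes[j])
                         + (1.0 - w_box) * cosine(track_feats[i], det_feats[j]))
    rows, cols = linear_sum_assignment(-sim)      # maximize total similarity
    matches = [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= min_sim]
    matched_trk = {i for i, _ in matches}
    matched_det = {j for _, j in matches}
    new_tracks = [j for j in range(n_det) if j not in matched_det]
    lost_tracks = [i for i in range(n_trk) if i not in matched_trk]
    return matches, new_tracks, lost_tracks
```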
In another embodiment, performing similarity matching of pedestrian target feature information based on the pedestrian trajectories corresponding to different image acquisition devices, joining the successfully matched pedestrian trajectories and updating the pedestrian trajectory of the corresponding pedestrian includes:
Step 4.1: taking the tracking trajectory set corresponding to one image acquisition device, and calculating, one by one, the similarity between the pedestrian trajectories marked as newborn trajectories in that tracking trajectory set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices;
Step 4.2: if the similarity is greater than a preset threshold, considering that the two pedestrian trajectories are successfully matched;
Step 4.3: joining the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of that pedestrian, and replacing the corresponding pedestrian trajectory in the tracking trajectory set containing the newborn trajectory with the new pedestrian trajectory.
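A minimal sketch of this cross-camera joining, assuming each trajectory stores a mean appearance feature vector and its point sequence; the field names and the similarity threshold are illustrative only.

```python
import numpy as np

def link_across_cameras(newborn_tracks, leaving_tracks, threshold=0.7):
    """Join newborn trajectories of one camera with leaving trajectories of other cameras.

    Each track is assumed to be a dict with a 'feature' vector (mean appearance
    descriptor) and a 'points' list; these names and the threshold are assumptions.
    """
    joined, used = [], set()
    for nb in newborn_tracks:
        best, best_sim = None, threshold
        for idx, lv in enumerate(leaving_tracks):
            if idx in used:
                continue
            f1, f2 = np.asarray(nb['feature']), np.asarray(lv['feature'])
            sim = float(np.dot(f1, f2) /
                        (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-9))
            if sim > best_sim:
                best, best_sim = idx, sim
        if best is not None:
            used.add(best)
            # Joint trajectory: the earlier camera's points followed by the new ones
            joined.append({'feature': nb['feature'],
                           'points': leaving_tracks[best]['points'] + nb['points']})
        else:
            joined.append(nb)
    return joined
```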
For the specific definition of obtaining a pedestrian flow network map for a specified time period by using the subway pedestrian flow network fusion method based on video pedestrian recognition in this embodiment, reference may be made to the above definition of the subway pedestrian flow network fusion method based on video pedestrian recognition, which is not repeated here. In this embodiment, the underground/above-ground pedestrian flow network map is constructed by representing the subway network of the whole city as a graph, in which the subway stations are the vertices, and the subway lines together with the above-ground traffic routes connecting the entrances/exits of the subway stations are the edges (alternatively, only the above-ground traffic routes connecting the entrances/exits may be taken as edges; if an edge corresponds to a subway line, the pedestrian flow on that line is recorded as having no data, or, on the basis of the present invention, the pedestrian movement inside the subway lines can be obtained by means such as existing fare-card records at subway stations). Each vertex has a feature vector composed of the pedestrian flow counted from video, and an adjacency matrix can be defined to encode the pairwise dependencies between vertices. Therefore, the subway network does not need to be represented by a grid of subway stations, nor does it need a CNN to capture features; it can be described by a general network graph, and a graph neural network (GCN) can effectively capture the irregular spatio-temporal dependencies at the level of the subway network rather than at the grid level.
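For illustration, such a graph could be encoded as a weighted adjacency matrix plus a vertex feature matrix as in the sketch below; the station identifiers, edge weights and flow counts are invented example data.

```python
import numpy as np

# Illustrative inputs: station vertices and above-ground route edges with weights
stations = ['S1', 'S2', 'S3']
edges = [('S1', 'S2', 1.0), ('S2', 'S3', 0.5)]               # (u, v, weight)
flow = {'S1': [120, 95], 'S2': [200, 180], 'S3': [60, 70]}    # [inbound, outbound] per station

index = {s: i for i, s in enumerate(stations)}
n = len(stations)

# Weighted adjacency matrix W encoding pairwise connectivity between vertices
W = np.zeros((n, n))
for u, v, w in edges:
    W[index[u], index[v]] = W[index[v], index[u]] = w

# Vertex feature matrix V_t: one row of flow counts per station at time t
V_t = np.array([flow[s] for s in stations], dtype=float)
print(W.shape, V_t.shape)   # (3, 3) (3, 2)
```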
The problem modeling is the mathematical modeling of applying the underground/above-ground pedestrian flow statistics network to flow prediction on each edge: the historical flow values of the network are used to predict the pedestrian flow of the network over the following days. Specifically, the problem can be modeled with the spatio-temporal sequence shown in Fig. 9: a city-wide subway network is defined on a graph, and the focus is on the structured time series of passenger flow. The pedestrian flow network map is modeled as follows:
G_t = (V_t, ε, W)
where G_t is the pedestrian flow network map at time t, i.e. a graph composed of multiple nodes; V_t is a finite set of nodes representing the vertices of the graph and monitoring the pedestrian flow at each node, i.e. V_t is the vector composed of the pedestrian flows of all nodes; ε is the set of edges between vertices; and W, which expresses the connectivity between vertices, is the weighted adjacency matrix.
When predicting the pedestrian flow at each vertex, the historical data of the previous t moments are given in order to predict one or more future moments. What is predicted here is the pedestrian flow value: given the historical data from time t−M+1 to time t, the pedestrian flow from time t+1 to time t+H is predicted, and the pedestrian flow prediction target model is constructed as follows:
v̂_{t+1}, …, v̂_{t+H} = argmax_{v_{t+1},…,v_{t+H}} log P(v_{t+1}, …, v_{t+H} | v_{t−M+1}, …, v_t)
where v̂_{t+1}, …, v̂_{t+H} are the predicted feature vectors (total inbound/outbound pedestrian flow) from time t+1 to time t+H, and v_{t−M+1}, …, v_t are the input feature vectors (total inbound/outbound pedestrian flow) from time t−M+1 to time t. It should be noted that v_{t+1}, …, v_{t+H} likewise denote the feature vectors from time t+1 to time t+H; the hat notation v̂ is merely the mathematical convention used to distinguish the predicted variables.
It is easy to understand that, if the input data are the total inbound pedestrian flow of a station from time t−M+1 to time t, the predicted feature vector obtained is likewise the total inbound pedestrian flow in the specified future time period; similarly, if the input data are the total outbound pedestrian flow of a station from time t−M+1 to time t, the predicted feature vector obtained is likewise the total outbound pedestrian flow in the specified future time period.
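For illustration, the historical counts can be arranged into (M past steps, H future steps) training pairs with a simple sliding window, as sketched below; the array shapes and example numbers are assumptions, not requirements of the application.

```python
import numpy as np

def make_windows(series, M, H):
    """Slice a (T, n_stations) flow series into (history, horizon) pairs.

    Returns X of shape (samples, M, n_stations) and Y of shape (samples, H, n_stations).
    """
    X, Y = [], []
    for t in range(M, len(series) - H + 1):
        X.append(series[t - M:t])
        Y.append(series[t:t + H])
    return np.asarray(X), np.asarray(Y)

# Example: 30 time steps of total inbound flow for 3 stations, M=12 history, H=3 horizon
series = np.random.randint(0, 500, size=(30, 3)).astype(float)
X, Y = make_windows(series, M=12, H=3)
print(X.shape, Y.shape)   # (16, 12, 3) (16, 3, 3)
```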
Based on the constructed pedestrian flow prediction target model, a graph neural network is used to solve the model, yielding the total predicted inbound and outbound pedestrian flow of each station in the specified future time period.
For the graph neural network model, high-order features are extracted directly from the graph-structured data in the spatial domain, using a Chebyshev polynomial approximation. The Chebyshev graph convolution formula is as follows:
Θ *_G x = Θ(L) x ≈ Σ_{k=0}^{K−1} θ_k T_k(L̃) x,  L̃ = 2L/λ_max − I_n
where T_k(·) is the Chebyshev polynomial of order k, L is the normalized graph Laplacian, λ_max is its largest eigenvalue, I_n is the identity matrix, and θ_k are the learnable polynomial coefficients.
The model framework used is the STGCN framework, which consists of multiple spatio-temporal convolution blocks; each block has a sandwich-like structure (as shown in Fig. 10) with two gated temporal convolution layers and one spatial graph convolution module in between. The temporal gated convolution (Temporal Gated-Conv), consisting of a 1-D convolution and a gated linear unit (GLU), captures temporal dependencies; the spatial graph convolution (Spatial Graph-Conv), mainly composed of the above Chebyshev graph convolution module, captures spatial dependencies.
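The following is a compact numerical sketch of a Chebyshev graph convolution of the kind referred to above, using the symmetric normalized Laplacian and order K = 3 as assumptions; it illustrates the spatial graph convolution only and is not the exact STGCN layer used here.

```python
import numpy as np

def cheb_graph_conv(X, W, theta, K=3):
    """Chebyshev graph convolution: sum_k T_k(L_tilde) @ X @ theta_k.

    X     : (n_nodes, in_dim) vertex features
    W     : (n_nodes, n_nodes) weighted adjacency matrix (symmetric)
    theta : (K, in_dim, out_dim) learnable coefficients
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # normalized Laplacian
    lam_max = np.max(np.linalg.eigvalsh(L))
    L_tilde = 2.0 * L / lam_max - np.eye(n)                        # rescaled Laplacian

    T_prev, T_curr = np.eye(n), L_tilde                            # T_0, T_1
    out = T_prev @ X @ theta[0] + T_curr @ X @ theta[1]
    for k in range(2, K):
        T_next = 2.0 * L_tilde @ T_curr - T_prev                   # Chebyshev recurrence
        out += T_next @ X @ theta[k]
        T_prev, T_curr = T_curr, T_next
    return out

# Tiny example: 4 stations on a path graph, 2 input features, 8 output channels
rng = np.random.default_rng(0)
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 2))
theta = rng.normal(size=(3, 2, 8))
print(cheb_graph_conv(X, W, theta).shape)   # (4, 8)
```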
After the total predicted inbound and outbound pedestrian flow of each station is obtained, the total predicted flow is distributed according to the mean proportions corresponding to the entrances/exits of the station. The mean proportion of inbound flow at each entrance/exit of a station is calculated from the inbound flow of each entrance/exit in the corresponding time period; similarly, the mean proportion of outbound flow at each entrance/exit is calculated from the outbound flow of each entrance/exit in the corresponding time period. When the total predicted flow is distributed, the inbound total is likewise distributed according to the mean inbound proportions and the outbound total according to the mean outbound proportions, so that a traceable prediction result is obtained and the prediction has practical application value.
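A short sketch of this proportional allocation; the entrance identifiers and counts are illustrative example values.

```python
def allocate_station_total(total_pred, historical_counts):
    """Split a station-level predicted total over its entrances by historical share.

    historical_counts: {entrance_id: flow observed in the corresponding period}.
    Returns {entrance_id: allocated predicted flow}.
    """
    total_hist = sum(historical_counts.values())
    if total_hist == 0:
        share = 1.0 / len(historical_counts)
        return {e: total_pred * share for e in historical_counts}
    return {e: total_pred * c / total_hist for e, c in historical_counts.items()}

# Example: predicted outbound total 900; exits A and B historically carried a 2:1 share
print(allocate_station_total(900, {'A': 400, 'B': 200}))   # {'A': 600.0, 'B': 300.0}
```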
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as such combinations are not contradictory, they should all be regarded as falling within the scope of this specification.
The above embodiments only express several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.

Claims (8)

  1. A subway pedestrian flow network fusion method based on video pedestrian recognition, used to realize fusion statistics of subway and above-ground pedestrian flows to assist traffic early warning, wherein the subway pedestrian flow network fusion method based on video pedestrian recognition comprises:
    Step 1: receiving monitoring images of each entrance/exit of the subway station, the monitoring images being acquired by image acquisition devices arranged at each entrance/exit;
    Step 2: extracting pedestrian target bounding-box information and pedestrian target feature information from the monitoring images, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and pedestrian exit or entry direction;
    Step 3: performing, based on the monitoring images of the same image acquisition device, similarity calculation according to the pedestrian target bounding-box information and the pedestrian target feature information, to obtain a pedestrian trajectory for the same pedestrian;
    Step 4: performing similarity matching of pedestrian target feature information based on the pedestrian trajectories corresponding to different image acquisition devices, joining the successfully matched pedestrian trajectories, and updating the pedestrian trajectory of the corresponding pedestrian;
    Step 5: obtaining the subway lines, subway stations and entrances/exits of each station in a designated area, as well as the above-ground traffic routes corresponding to each entrance/exit, and fusing them to construct a subway traffic network map of the designated area;
    Step 6: counting, according to the latest pedestrian trajectories within a preset time period, the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station;
    Step 7: superimposing, on the subway traffic network map, the total inbound and outbound pedestrian flow of each subway station and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station, to obtain a pedestrian flow network map that fuses subway and above-ground pedestrian flows.
  2. The subway pedestrian flow network fusion method based on video pedestrian recognition according to claim 1, wherein performing, based on the monitoring images of the same image acquisition device, similarity calculation according to the pedestrian target bounding-box information and the pedestrian target feature information to obtain a pedestrian trajectory for the same pedestrian comprises:
    Step 3.1: obtaining the pedestrian target bounding-box information and pedestrian target feature information of the current image acquisition device in the current monitoring image;
    Step 3.2: judging whether the tracking trajectory set corresponding to the image acquisition device is empty, the tracking trajectory set being used to store pedestrian trajectories; if the tracking trajectory set is not empty, executing step 3.3; otherwise, directly adding the currently obtained pedestrian target bounding-box information and pedestrian target feature information to the tracking trajectory set and ending;
    Step 3.3: obtaining estimated target bounding-box information by unscented Kalman filtering based on the pedestrian trajectories in the tracking trajectory set;
    Step 3.4: calculating, one by one according to the pedestrian target bounding-box information and the estimated target bounding-box information, the bounding-box similarity between each current pedestrian target and each stored pedestrian target; calculating, one by one based on the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, the feature similarity between each current pedestrian target and each stored pedestrian target; and obtaining the pairwise similarity between the current pedestrian targets and the stored pedestrian targets by a weighted sum of the bounding-box similarity and the feature similarity;
    Step 3.5: matching, based on the pairwise similarity between the current pedestrian targets and the stored pedestrian targets, the pedestrian trajectories in the tracking trajectory set with the currently obtained pedestrian targets by the Hungarian matching algorithm;
    Step 3.6: if a pedestrian target fails to be matched this time, directly adding the pedestrian target bounding-box information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and marking it as a newborn trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, updating the pedestrian trajectory of that pedestrian according to the pedestrian target bounding-box information and pedestrian target feature information of the pedestrian target; if a pedestrian trajectory marked as a newborn trajectory in the tracking trajectory set is successfully matched several consecutive times, removing the newborn mark from that pedestrian trajectory; if a pedestrian trajectory in the tracking trajectory set fails to be matched for several consecutive frames, considering that the pedestrian target has left the monitoring range of the current image acquisition device and marking that pedestrian trajectory as a leaving trajectory; and if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, considering the pedestrian trajectory finished and deleting it from the tracking trajectory set.
  3. The subway pedestrian flow network fusion method based on video pedestrian recognition according to claim 2, wherein performing similarity matching of pedestrian target feature information based on the pedestrian trajectories corresponding to different image acquisition devices, joining the successfully matched pedestrian trajectories and updating the pedestrian trajectory of the corresponding pedestrian comprises:
    Step 4.1: taking the tracking trajectory set corresponding to one image acquisition device, and calculating, one by one, the similarity between the pedestrian trajectories marked as newborn trajectories in that tracking trajectory set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices;
    Step 4.2: if the similarity is greater than a preset threshold, considering that the two pedestrian trajectories are successfully matched;
    Step 4.3: joining the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of that pedestrian, and replacing the corresponding pedestrian trajectory in the tracking trajectory set containing the newborn trajectory with the new pedestrian trajectory.
  4. A pedestrian flow prediction method for predicting pedestrian flow based on the fusion of subway and above-ground pedestrian flows to assist traffic early warning, wherein the pedestrian flow prediction method comprises:
    obtaining a pedestrian flow network map for a specified time period by using a subway pedestrian flow network fusion method based on video pedestrian recognition;
    predicting, based on the pedestrian flow network map and using a graph neural network, the total predicted inbound and outbound pedestrian flow of each station in a specified future time period;
    obtaining, based on the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station in the pedestrian flow network map, the mean inbound and outbound proportions of the traffic route corresponding to each entrance/exit of each station; and
    distributing the total predicted inbound and outbound pedestrian flow of each station according to the mean inbound and outbound proportions, to obtain the predicted inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station.
  5. The pedestrian flow prediction method according to claim 4, wherein obtaining a pedestrian flow network map for a specified time period by using the subway pedestrian flow network fusion method based on video pedestrian recognition comprises:
    Step 1: receiving monitoring images of each entrance/exit of the subway station, the monitoring images being acquired by image acquisition devices arranged at each entrance/exit;
    Step 2: extracting pedestrian target bounding-box information and pedestrian target feature information from the monitoring images, the pedestrian target feature information including pedestrian features, pedestrian entry/exit status, and pedestrian exit or entry direction;
    Step 3: performing, based on the monitoring images of the same image acquisition device, similarity calculation according to the pedestrian target bounding-box information and the pedestrian target feature information, to obtain a pedestrian trajectory for the same pedestrian;
    Step 4: performing similarity matching of pedestrian target feature information based on the pedestrian trajectories corresponding to different image acquisition devices, joining the successfully matched pedestrian trajectories, and updating the pedestrian trajectory of the corresponding pedestrian;
    Step 5: obtaining the subway lines, subway stations and entrances/exits of each station in a designated area, as well as the above-ground traffic routes corresponding to each entrance/exit, and fusing them to construct a subway traffic network map of the designated area;
    Step 6: counting, according to the latest pedestrian trajectories within a preset time period, the total inbound and outbound pedestrian flow of each station and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station;
    Step 7: superimposing, on the subway traffic network map, the total inbound and outbound pedestrian flow of each subway station and the inbound and outbound pedestrian flow on the traffic route corresponding to each entrance/exit of each station, to obtain a pedestrian flow network map that fuses subway and above-ground pedestrian flows.
  6. The pedestrian flow prediction method according to claim 5, wherein performing, based on the monitoring images of the same image acquisition device, similarity calculation according to the pedestrian target bounding-box information and the pedestrian target feature information to obtain a pedestrian trajectory for the same pedestrian comprises:
    Step 3.1: obtaining the pedestrian target bounding-box information and pedestrian target feature information of the current image acquisition device in the current monitoring image;
    Step 3.2: judging whether the tracking trajectory set corresponding to the image acquisition device is empty, the tracking trajectory set being used to store pedestrian trajectories; if the tracking trajectory set is not empty, executing step 3.3; otherwise, directly adding the currently obtained pedestrian target bounding-box information and pedestrian target feature information to the tracking trajectory set and ending;
    Step 3.3: obtaining estimated target bounding-box information by unscented Kalman filtering based on the pedestrian trajectories in the tracking trajectory set;
    Step 3.4: calculating, one by one according to the pedestrian target bounding-box information and the estimated target bounding-box information, the bounding-box similarity between each current pedestrian target and each stored pedestrian target; calculating, one by one based on the pedestrian target feature information of the pedestrian trajectories and the pedestrian target feature information in the current monitoring image, the feature similarity between each current pedestrian target and each stored pedestrian target; and obtaining the pairwise similarity between the current pedestrian targets and the stored pedestrian targets by a weighted sum of the bounding-box similarity and the feature similarity;
    Step 3.5: matching, based on the pairwise similarity between the current pedestrian targets and the stored pedestrian targets, the pedestrian trajectories in the tracking trajectory set with the currently obtained pedestrian targets by the Hungarian matching algorithm;
    Step 3.6: if a pedestrian target fails to be matched this time, directly adding the pedestrian target bounding-box information and pedestrian target feature information corresponding to that pedestrian target to the tracking trajectory set and marking it as a newborn trajectory; if a pedestrian trajectory and a pedestrian target are successfully matched, updating the pedestrian trajectory of that pedestrian according to the pedestrian target bounding-box information and pedestrian target feature information of the pedestrian target; if a pedestrian trajectory marked as a newborn trajectory in the tracking trajectory set is successfully matched several consecutive times, removing the newborn mark from that pedestrian trajectory; if a pedestrian trajectory in the tracking trajectory set fails to be matched for several consecutive frames, considering that the pedestrian target has left the monitoring range of the current image acquisition device and marking that pedestrian trajectory as a leaving trajectory; and if a pedestrian trajectory marked as a leaving trajectory is not successfully matched within a specified time threshold, considering the pedestrian trajectory finished and deleting it from the tracking trajectory set.
  7. The pedestrian flow prediction method according to claim 6, wherein performing similarity matching of pedestrian target feature information based on the pedestrian trajectories corresponding to different image acquisition devices, joining the successfully matched pedestrian trajectories and updating the pedestrian trajectory of the corresponding pedestrian comprises:
    Step 4.1: taking the tracking trajectory set corresponding to one image acquisition device, and calculating, one by one, the similarity between the pedestrian trajectories marked as newborn trajectories in that tracking trajectory set and the pedestrian trajectories marked as leaving trajectories in the tracking trajectory sets corresponding to the other image acquisition devices;
    Step 4.2: if the similarity is greater than a preset threshold, considering that the two pedestrian trajectories are successfully matched;
    Step 4.3: joining the two successfully matched pedestrian trajectories to obtain a new pedestrian trajectory of that pedestrian, and replacing the corresponding pedestrian trajectory in the tracking trajectory set containing the newborn trajectory with the new pedestrian trajectory.
  8. The pedestrian flow prediction method according to claim 4, wherein predicting, based on the pedestrian flow network map and using a graph neural network, the total predicted inbound and outbound pedestrian flow of each station in a specified future time period comprises:
    in the pedestrian flow network map, the subway stations are the vertices and the traffic routes corresponding to the entrances/exits of the stations are the edges, each vertex having a feature vector containing the total inbound and outbound pedestrian flow, and the pedestrian flow network map is modeled as follows:
    G_t = (V_t, ε, W)
    where G_t is the pedestrian flow network map at time t, V_t is the vector composed of the feature vectors of all vertices, ε is the set of edges between vertices, W is the weighted adjacency matrix, and t is the current time;
    when predicting the pedestrian flow at each vertex, the feature vectors of the specified future time period from time t+1 to time t+H are predicted based on the feature vectors of that vertex in the historical time period from time t−M+1 to time t, where M and H are preset coefficients, and the pedestrian flow prediction target model is constructed as follows:
    v̂_{t+1}, …, v̂_{t+H} = argmax_{v_{t+1},…,v_{t+H}} log P(v_{t+1}, …, v_{t+H} | v_{t−M+1}, …, v_t)
    where v̂_{t+1}, …, v̂_{t+H} are the predicted feature vectors from time t+1 to time t+H, and v_{t−M+1}, …, v_t are the input feature vectors from time t−M+1 to time t; and
    based on the constructed pedestrian flow prediction target model, a graph neural network is used to solve the pedestrian flow prediction target model, yielding the total predicted inbound and outbound pedestrian flow of each station in the specified future time period.
PCT/CN2020/137804 2020-12-16 2020-12-19 Subway pedestrian flow network fusion method based on video pedestrian recognition, and pedestrian flow prediction method WO2022126669A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011485904.XA CN112541440B (en) 2020-12-16 2020-12-16 Subway people stream network fusion method and people stream prediction method based on video pedestrian recognition
CN202011485904.X 2020-12-16

Publications (1)

Publication Number Publication Date
WO2022126669A1 true WO2022126669A1 (en) 2022-06-23

Family

ID=75018974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137804 WO2022126669A1 (en) 2020-12-16 2020-12-19 Subway pedestrian flow network fusion method based on video pedestrian recognition, and pedestrian flow prediction method

Country Status (2)

Country Link
CN (1) CN112541440B (en)
WO (1) WO2022126669A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705402A (en) * 2021-08-18 2021-11-26 中国科学院自动化研究所 Video behavior prediction method, system, electronic device and storage medium
CN113705470A (en) * 2021-08-30 2021-11-26 北京市商汤科技开发有限公司 Method and device for acquiring passenger flow information, computer equipment and storage medium
CN114119648A (en) * 2021-11-12 2022-03-01 史缔纳农业科技(广东)有限公司 Pig counting method for fixed channel
CN116977934A (en) * 2023-08-02 2023-10-31 无锡八英里电子科技有限公司 Cloud-edge combined people flow early warning control method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425967A (en) * 2013-07-21 2013-12-04 浙江大学 Pedestrian flow monitoring method based on pedestrian detection and tracking
CN109522854A (en) * 2018-11-22 2019-03-26 广州众聚智能科技有限公司 A kind of pedestrian traffic statistical method based on deep learning and multiple target tracking
US20200184229A1 (en) * 2018-12-07 2020-06-11 National Chiao Tung University People-flow analysis system and people-flow analysis method
CN111612206A (en) * 2020-03-30 2020-09-01 清华大学 Street pedestrian flow prediction method and system based on space-time graph convolutional neural network
CN111612281A (en) * 2020-06-23 2020-09-01 中国人民解放军国防科技大学 Method and device for predicting pedestrian flow peak value of subway station and computer equipment

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11868926B2 (en) * 2022-03-24 2024-01-09 Chengdu Qinchuan Iot Technology Co., Ltd. Systems and methods for managing public place in smart city
US20230306321A1 (en) * 2022-03-24 2023-09-28 Chengdu Qinchuan Iot Technology Co., Ltd. Systems and methods for managing public place in smart city
CN116095269B (en) * 2022-11-03 2023-10-20 南京戴尔塔智能制造研究院有限公司 Intelligent video security system and method thereof
CN116095269A (en) * 2022-11-03 2023-05-09 南京戴尔塔智能制造研究院有限公司 Intelligent video security system and method thereof
CN116012949B (en) * 2023-02-06 2023-11-17 南京智蓝芯联信息科技有限公司 People flow statistics and identification method and system under complex scene
CN116012949A (en) * 2023-02-06 2023-04-25 南京智蓝芯联信息科技有限公司 People flow statistics and identification method and system under complex scene
CN116631176B (en) * 2023-05-31 2023-12-15 河南海融软件有限公司 Control method and system for station passenger flow distribution state
CN116631176A (en) * 2023-05-31 2023-08-22 河南海融软件有限公司 Control method and system for station passenger flow distribution state
CN116456558B (en) * 2023-06-13 2023-09-01 广州新科佳都科技有限公司 Self-adaptive control method and system for lighting equipment in subway station
CN116456558A (en) * 2023-06-13 2023-07-18 广州新科佳都科技有限公司 Self-adaptive control method and system for lighting equipment in subway station
CN116935447A (en) * 2023-09-19 2023-10-24 华中科技大学 Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system
CN116935447B (en) * 2023-09-19 2023-12-26 华中科技大学 Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system
CN117058627A (en) * 2023-10-13 2023-11-14 阳光学院 Public place crowd safety distance monitoring method, medium and system
CN117058627B (en) * 2023-10-13 2023-12-26 阳光学院 Public place crowd safety distance monitoring method, medium and system
CN117273285A (en) * 2023-11-21 2023-12-22 北京市运输事业发展中心 Passenger transport data acquisition system based on large passenger flow station of rail transit
CN117273285B (en) * 2023-11-21 2024-02-02 北京市运输事业发展中心 Passenger transport data acquisition system based on large passenger flow station of rail transit
CN117275243A (en) * 2023-11-22 2023-12-22 上海随申行智慧交通科技有限公司 Regional flow control prediction and early warning method based on multi-source traffic trip data and application
CN117275243B (en) * 2023-11-22 2024-02-02 上海随申行智慧交通科技有限公司 Regional flow control prediction and early warning method based on multi-source traffic trip data and application
CN117435934A (en) * 2023-12-22 2024-01-23 中国科学院自动化研究所 Matching method, device and storage medium of moving target track based on bipartite graph
CN117746343A (en) * 2024-02-20 2024-03-22 济南格林信息科技有限公司 Personnel flow detection method and system based on contour map
CN117746343B (en) * 2024-02-20 2024-05-14 济南格林信息科技有限公司 Personnel flow detection method and system based on contour map

Also Published As

Publication number Publication date
CN112541440A (en) 2021-03-23
CN112541440B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
WO2022126669A1 (en) Subway pedestrian flow network fusion method based on video pedestrian recognition, and pedestrian flow prediction method
CN106874863B (en) Vehicle illegal parking and reverse running detection method based on deep convolutional neural network
CN106845424B (en) Pavement remnant detection method based on deep convolutional network
Hoogendoorn et al. Extracting microscopic pedestrian characteristics from video data
Jafari et al. Real-time water level monitoring using live cameras and computer vision techniques
CN116824859B (en) Intelligent traffic big data analysis system based on Internet of things
KR20200071799A (en) object recognition and counting method using deep learning artificial intelligence technology
US20150350608A1 (en) System and method for activity monitoring using video data
CN103986910A (en) Method and system for passenger flow statistics based on cameras with intelligent analysis function
Zuo et al. Reference-free video-to-real distance approximation-based urban social distancing analytics amid COVID-19 pandemic
CN104320617A (en) All-weather video monitoring method based on deep learning
Xu et al. Efficient CityCam-to-edge cooperative learning for vehicle counting in ITS
Basak et al. Developing an agent-based model for pilgrim evacuation using visual intelligence: A case study of Ratha Yatra at Puri
Tomar et al. Crowd analysis in video surveillance: A review
CN103646254A (en) High-density pedestrian detection method
Martani et al. Pedestrian monitoring techniques for crowd-flow prediction
CN114372503A (en) Cluster vehicle motion trail prediction method
CN116090333A (en) Urban public space disaster modeling and preventing system based on perception blind area estimation
CN114913447B (en) Police intelligent command room system and method based on scene recognition
Minnikhanov et al. Detection of traffic anomalies for a safety system of smart city
Basalamah et al. Deep learning framework for congestion detection at public places via learning from synthetic data
Shao et al. Computer vision-enabled smart traffic monitoring for sustainable transportation management
Rezaee et al. IoMT-assisted medical vehicle routing based on UAV-Borne human crowd sensing and deep learning in smart cities
Morozov et al. Prototype of Urban Transport Passenger Accounting System
Zheng et al. A method of detect traffic police in complex scenes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20965682

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20965682

Country of ref document: EP

Kind code of ref document: A1
