CN111680638A - Passenger path identification method and passenger flow clearing method based on same - Google Patents
- Publication number: CN111680638A
- Application number: CN202010527445.0A
- Authority: CN (China)
- Prior art keywords: subway, passenger, human body, pictures, face
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/172 — Human faces; classification, e.g. identification
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415 — Classification based on parametric or probabilistic models, e.g. likelihood ratio or false acceptance rate versus false rejection rate
- G06Q50/40 — Business processes related to the transportation industry
- G06V10/751 — Image or video pattern matching; comparing pixel or feature values having positional relevance, e.g. template matching
- G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06N3/045 — Neural network architectures; combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/08 — Neural network learning methods
Abstract
The invention discloses a passenger path identification method comprising the following steps: obtaining surveillance videos of all subway passengers in all subway monitoring areas over a time period; performing data cleaning on the surveillance videos to obtain processed videos; and applying a target detection algorithm to each frame of the processed videos to extract the face picture and human body picture of each subway passenger in that frame. The face pictures of all subway passengers across all frames of a given subway monitoring area form that area's face picture set, and the corresponding human body pictures form that area's human body picture set. The invention solves the technical problems of the existing manual account-splitting and clearing method, which requires manual processing and is therefore time-consuming, labor-intensive and inefficient.
Description
Technical Field
The invention belongs to the technical field of intelligent transportation, and particularly relates to a passenger path identification method and a passenger flow clearing method based on the same.
Background
Subways play an increasingly important role in modern urban traffic: millions of people ride them every day, and metro networks keep growing in complexity. In daily subway operation, different sections of a network often belong to different operators, and because a passenger's travel trajectory determines how revenue is shared among those operators, clearing (apportioning) passenger flow is a highly important and systematic task.
Existing passenger flow clearing methods fall into two main categories: the manual account-splitting method and the clearing method based on the passenger travel path. In the manual account-splitting method, an asset evaluation is performed for each rail transit line forming a network connection, using evaluation indices such as operating mileage, line alignment, investment amount and line quality; after evaluation, a clearing proportion is assigned for each origin-destination (O-D) pair of operating lines in the network according to each line's share of investment, and revenue is apportioned by that proportion. The travel-path-based clearing method analyzes passenger travel behavior, considers the factors influencing route choice, and builds a generalized travel cost function; on that basis it determines one or more probable paths between a passenger's origin and destination stations, and sets the fare clearing proportion according to the operating mileage borne by each relevant operator along those paths.
However, both clearing methods have non-negligible drawbacks. The manual account-splitting method requires manual processing, which is time-consuming, labor-intensive and inefficient. The travel-path-based method relies on a probabilistic statistical model whose parameters are usually obtained from passenger flow surveys; the data sample is small and the parameter adjustment cycle is long, so the travel path distribution computed by the model deviates from passengers' actual travel paths, which in turn degrades clearing accuracy.
Disclosure of Invention
The invention provides a passenger path identification method and a passenger flow clearing method based on it, aiming to solve two technical problems: the existing manual account-splitting method needs manual processing and is therefore time-consuming, labor-intensive and inefficient; and the existing travel-path-based clearing method, being built on a probabilistic statistical model with a small data sample and a long parameter adjustment cycle, produces a travel path distribution that deviates from passengers' actual travel paths and thus reduces clearing accuracy.
To achieve the above object, according to one aspect of the present invention, there is provided a passenger path identifying method including the steps of:
(1) acquiring monitoring videos of all subway passengers in all subway monitoring areas in a time period, wherein each frame of picture of the monitoring videos comprises time information and space information;
(2) performing data cleaning operation on the monitoring video obtained in the step (1) to obtain a processed monitoring video;
(3) performing target extraction on each frame of the surveillance video processed in step (2) using a target detection algorithm to obtain the face picture and human body picture of each subway passenger in that frame, wherein the face pictures of each subway passenger across all frames of each subway monitoring area form that area's face picture set, and the human body pictures across all frames of each area form that area's human body picture set; features extracted by a trained face recognition model from the face picture sets of all subway monitoring areas for all passengers form a face search library, and features extracted by a trained ReID model from the human body picture sets of all subway monitoring areas for all passengers form a human body search library;
(4) acquiring a monitoring video of a human face clear identification area and a monitoring video of a human body clear identification area from the monitoring video processed in the step (2), acquiring a human face picture set of each subway passenger in the human face clear identification area from the monitoring video of the human face clear identification area by using a target detection algorithm and a target tracking algorithm in sequence, and acquiring a human body picture set of each subway passenger in the human body clear identification area from the monitoring video of the human body clear identification area;
(5) for each subway passenger, randomly acquiring M target face pictures from the face picture set of the subway passenger in the face clear identification region obtained in the step (4), and randomly acquiring N target human body pictures from the human body picture set of the subway passenger in the human body clear identification region obtained in the step (4), wherein M and N are natural numbers;
(6) for each subway passenger, sequentially inputting the M target face pictures of the subway passenger obtained in the step (5) into a trained face recognition model so as to respectively obtain a set formed by the face pictures of the subway passenger in each subway monitoring area corresponding to each target face picture and the similarity between each face picture in the set and the target face picture from a face search library;
(7) for each subway passenger, taking all the sets of that passenger's face pictures in the i-th subway monitoring area corresponding to the M target face pictures obtained in step (6), and deleting every face picture whose similarity to its corresponding target face picture is less than or equal to a threshold, thereby obtaining the face pictures of the passenger in the i-th monitoring area that are similar to the M target face pictures; sorting all such face pictures across all P subway monitoring areas by their time information, and establishing the passenger's complete first riding path from the spatial information of the sorted face pictures, each point on the first riding path carrying spatial information and corresponding time information, where i ∈ {1, ..., P} and P is the total number of subway monitoring areas;
(8) for each subway passenger, sequentially inputting the N target human body pictures of the subway passenger obtained in the step (5) into a trained ReID model so as to respectively obtain a set which corresponds to each target human body picture and is formed by the human body pictures of the subway passenger in each subway monitoring area and the similarity between each human body picture in the set and the target human body picture from a human body search library;
(9) for each subway passenger, taking all the sets of that passenger's human body pictures in the i-th subway monitoring area corresponding to the N target human body pictures obtained in step (8), and deleting every human body picture whose similarity to its corresponding target picture is less than or equal to a threshold, thereby obtaining the human body pictures of the passenger in the i-th monitoring area that are similar to the N target pictures; sorting all such human body pictures across all P subway monitoring areas by their time information, and establishing the passenger's complete second riding path from the spatial information of the sorted pictures, each point on the second riding path carrying spatial information and corresponding time information;
(10) and performing union processing on all points on the first riding path and all points on the second riding path, and sequencing all points according to the time information corresponding to each point obtained after union processing, thereby obtaining the final riding path of the subway passenger.
Preferably, the subway monitoring areas include the station entrance, security inspection door, fare gates, transfer passages, platform waiting areas, the interior of subway carriages, and the like.
Preferably, the human face clear identification area refers to a subway monitoring area capable of clearly shooting the faces of subway passengers, and the human body clear identification area refers to a subway monitoring area capable of clearly shooting the human bodies of the subway passengers.
Preferably, the face recognition model is trained by the following process:
(a) acquiring a public face recognition data set, and dividing a training set, a verification set and a test set according to the requirements of the data set;
(b) inputting the pictures in the training set and the corresponding labels into a convolutional neural network for training;
(c) updating and optimizing the weight parameter and the bias parameter of each layer in the convolutional neural network by using an optimizer to obtain the updated weight parameter and bias parameter of each layer in the convolutional neural network;
(d) iteratively training the convolutional neural network updated in step (c), and iteratively validating it with the verification set until its loss function reaches a minimum, wherein the loss function may be Triplet Loss, Center Loss, L-Softmax or A-Softmax;
(e) testing the face recognition model with the test set, and selecting the best-performing model from the test results as the trained face recognition model.
Preferably, the ReID model may be a Multiple Granularity Network (MGN), a Part-based Convolutional Baseline (PCB), a Similarity-Guided Graph Neural Network (SGGNN), or Video Object Segmentation with Re-identification (VS-ReID).
Preferably, the ReID model is trained by the following procedure:
(a) acquiring a public pedestrian re-identification data set and a subway data set self-made for the scene, and dividing them into training, verification and test sets according to the data sets' requirements;
(b) training the ReID model with the pictures in the training set and their corresponding labels;
(c) updating and optimizing the weight and bias parameters of each layer in the model with an optimizer, obtaining the updated parameters of each layer;
(d) iteratively training the model updated in step (c), and iteratively validating it with the verification set until its loss function reaches a minimum;
(e) testing the ReID model with the test set, and selecting the best-performing model from the test results as the trained ReID model.
According to another aspect of the present invention, a passenger flow clearing method is provided, which is implemented based on the passenger path identification method, and the passenger flow clearing method includes:
(1) calculating the transport mileage or number of stations borne by each operator along the subway passenger's final riding path;
(2) clearing the passenger flow among the operators according to the transport mileage or number of stations.
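The clearing step above reduces to proportional allocation. A minimal sketch, with hypothetical operator names and mileages (the patent does not prescribe a specific formula beyond apportioning by mileage or station count):

```python
def clear_passenger_flow(segment_mileage):
    """Apportion one trip's revenue among operators in proportion to the
    transport mileage each operator carried on the final riding path.
    `segment_mileage` maps operator name -> mileage (km); the names and
    values used below are illustrative."""
    total = sum(segment_mileage.values())
    return {operator: km / total for operator, km in segment_mileage.items()}

# A trip covering 6 km on Operator A's section and 4 km on Operator B's:
shares = clear_passenger_flow({"Operator A": 6.0, "Operator B": 4.0})
```

The same function works for station counts by passing the number of stations per operator instead of kilometres.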
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) the path identification and passenger flow clearing processes are automated and intelligent, with no manual participation, solving the technical problems of the existing manual account-splitting method, which needs manual processing and is therefore time-consuming, labor-intensive and inefficient;
(2) through steps (6) to (10), the invention obtains the real riding path of subway passengers from the face recognition model and the ReID model, providing real data support for passenger flow clearing and subsequent traffic operation decisions; this solves the accuracy problem of the existing travel-path-based clearing method, whose probabilistic statistical model suffers from a small data sample and a long parameter adjustment cycle, so that the computed travel path distribution deviates from passengers' actual travel paths;
(3) According to the invention, the threshold values are set in the steps (7) and (9), and the face picture and the human body picture are screened according to the similarity, so that the high accuracy of the finally obtained real riding path can be ensured;
(4) according to the invention, the face recognition technology and the pedestrian re-recognition technology are combined, and the first riding path obtained by the face recognition technology and the second riding path obtained by the pedestrian re-recognition technology are subjected to union processing and sequencing processing in time and space, so that the final riding path is generated, and the accuracy of the riding path recognition is further improved.
Drawings
FIG. 1 is a flow chart of a passenger path identification method of the present invention;
fig. 2 is a flow chart of the passenger flow clearing method of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, according to a first aspect of the present invention, there is provided a passenger path identifying method including the steps of:
(1) acquiring monitoring videos of all subway passengers in all subway monitoring areas in a time period, wherein each frame of picture of the monitoring videos comprises time information and space information;
specifically, the time period may be freely set according to the demand of passenger flow clearing, for example: one year, one month, one week, or one day.
The subway monitoring areas mentioned in this step include, but are not limited to: the subway station comprises a subway station entrance, a security inspection door, a gate opening, a transfer passage, a platform waiting area, the interior of a subway carriage and the like.
(2) Performing data cleaning operation on the monitoring video obtained in the step (1) to obtain a processed monitoring video;
specifically, the data cleaning includes operations of extracting key video frames, removing blank video frames, removing damaged video frames, and the like.
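The data cleaning operations above can be sketched as a simple filter. This is an illustrative toy, not the patent's implementation: frames are modelled as flat lists of grayscale pixel values, damaged frames as `None`, and the stride and threshold values are assumptions.

```python
def clean_frames(frames, keyframe_stride=2, blank_threshold=1.0):
    """Data cleaning sketch: drop damaged frames (modelled as None) and
    near-blank frames (mean pixel intensity below the threshold), then
    keep every `keyframe_stride`-th surviving frame as a key frame."""
    valid = []
    for frame in frames:
        if frame is None:                              # damaged frame
            continue
        if sum(frame) / len(frame) < blank_threshold:  # near-blank frame
            continue
        valid.append(frame)
    return valid[::keyframe_stride]

# One damaged frame, one blank frame, three usable frames:
cleaned = clean_frames([None, [0, 0, 0], [10, 20, 30], [5, 5, 5], [9, 9, 9]])
```

In a real pipeline the same logic would run over decoded video frames (e.g. numpy arrays from a capture device) rather than lists.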
(3) Performing target extraction on each frame of the surveillance video processed in step (2) using a target detection algorithm to obtain the face picture and human body picture of each subway passenger in that frame, wherein the face pictures of each subway passenger across all frames of each subway monitoring area form that area's face picture set, and the human body pictures across all frames of each area form that area's human body picture set; features extracted by a trained face recognition model from the face picture sets of all subway monitoring areas for all passengers form a face search library, and features extracted by a trained ReID model from the human body picture sets of all subway monitoring areas for all passengers form a human body search library;
Specifically, the target detection method is a mainstream detection algorithm such as YOLOv3, RetinaNet, SSD (Single Shot MultiBox Detector) or Faster R-CNN.
(4) Acquiring a monitoring video of a human face clear identification area and a monitoring video of a human body clear identification area from the monitoring video processed in the step (2), acquiring a human face picture set of each subway passenger in the human face clear identification area from the monitoring video of the human face clear identification area by using a target detection algorithm and a target tracking algorithm in sequence, and acquiring a human body picture set of each subway passenger in the human body clear identification area from the monitoring video of the human body clear identification area;
specifically, the clear face recognition area refers to a subway monitoring area capable of clearly shooting the faces of subway passengers, such as a security inspection door or an entrance gate of a subway, or an exit gate of the subway; similarly, the region for clearly identifying the human body refers to a subway monitoring region capable of clearly shooting the human body of a subway passenger, such as a security inspection door or an entrance gate of a subway or an exit gate of the subway.
It should be noted that the target detection algorithm in this step is the same as in step (3); the target tracking algorithm may be, for example, a Kernelized Correlation Filter (KCF) tracker, a Deep SORT tracker, or a Siamese-FC tracker.
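Trackers such as those listed above associate detections across frames using box overlap. A self-contained sketch of the standard intersection-over-union measure (for illustration only; the patent does not specify the association metric):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2), the overlap measure commonly used when matching
    a tracked target's previous box against new detections."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A tracker would keep, for each passenger, the detection in the next frame with the highest IoU against the current box, yielding the per-passenger picture sets used in step (4).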
(5) For each subway passenger, randomly acquiring M target face pictures from the face picture set of the subway passenger in the face clear identification region obtained in the step (4), and randomly acquiring N target human body pictures from the human body picture set of the subway passenger in the human body clear identification region obtained in the step (4), wherein M and N are natural numbers;
(6) for each subway passenger, sequentially inputting the M target face pictures of the subway passenger obtained in the step (5) into a trained face recognition model so as to respectively obtain a set formed by the face pictures of the subway passenger in each subway monitoring area corresponding to each target face picture and the similarity between each face picture in the set and the target face picture from a face search library;
the face recognition model used in this step is obtained by training through the following process:
(a) acquiring a public face recognition data set (such as LFW, CASIA-Webface and the like), and dividing a training set, a verification set and a test set according to the requirements of the data set;
(b) inputting the pictures in the training set and the corresponding labels into a convolutional neural network for training;
(c) updating and optimizing the weight parameter and the bias parameter of each layer in the convolutional neural network by using an optimizer to obtain the updated weight parameter and bias parameter of each layer in the convolutional neural network;
in particular, the optimizers used are mainstream optimizers, such as Adam, SGD, etc.
(d) Iteratively training the convolutional neural network updated in step (c), and iteratively validating it with the verification set until its loss function reaches a minimum.
Specifically, the loss function of the convolutional neural network is Triplet Loss, but it is not limited to this; any loss function commonly used for face recognition may be used, such as Center Loss, L-Softmax or A-Softmax.
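Triplet Loss compares an anchor embedding against a same-identity (positive) and a different-identity (negative) embedding. A minimal sketch using squared Euclidean distance; the margin value is an illustrative assumption:

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet Loss on embedding vectors:
    max(d(anchor, positive) - d(anchor, negative) + margin, 0),
    where d is squared Euclidean distance. Minimizing it pulls
    same-identity pairs together and pushes different identities
    at least `margin` further apart."""
    def d(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)
```

In training, the anchor/positive/negative embeddings come from the convolutional network's output for three face pictures; frameworks provide batched equivalents (e.g. a triplet margin loss), but the scalar form above is the quantity being minimized.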
(e) Testing the face recognition model with the test set, and selecting the best-performing model from the test results as the trained face recognition model.
(7) For each subway passenger, taking all the sets of that passenger's face pictures in the i-th subway monitoring area (where i ∈ {1, ..., P} and P is the total number of subway monitoring areas) corresponding to the M target face pictures obtained in step (6), and deleting every face picture whose similarity to its corresponding target face picture is less than or equal to a threshold, thereby obtaining the face pictures of the passenger in the i-th monitoring area that are similar to the M target face pictures; sorting all such face pictures across all P subway monitoring areas by their time information, and establishing the passenger's complete first riding path from the spatial information of the sorted face pictures, each point on the first riding path carrying spatial information and corresponding time information.
the value range of the threshold in this step is between 0 and 1, and the preferable range is between 0.9 and 0.99.
One point on the first riding path may be, for example, (Science Park station entrance gate, 2020-05-01 12:20:20).
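The filter-then-sort logic of step (7) can be sketched as follows. Each match is a (similarity, timestamp, location) tuple as might come back from the face search library; all names, timestamps and scores below are invented for illustration.

```python
def build_riding_path(matches, threshold=0.95):
    """Sketch of step (7): keep only search-library hits whose similarity
    to a target picture exceeds the threshold (here 0.95, within the
    preferred 0.9-0.99 range), then sort survivors by time to form the
    riding path as (location, timestamp) points."""
    kept = [m for m in matches if m[0] > threshold]
    kept.sort(key=lambda m: m[1])  # ISO-style timestamps sort correctly as strings
    return [(location, timestamp) for _, timestamp, location in kept]

path = build_riding_path([
    (0.99, "2020-05-01 12:40:10", "platform waiting area"),
    (0.97, "2020-05-01 12:20:20", "Science Park entrance gate"),
    (0.60, "2020-05-01 12:25:00", "transfer passage"),  # below threshold, discarded
])
```

Step (9) applies the same filtering and sorting to human-body matches to build the second riding path.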
(8) For each subway passenger, sequentially inputting the N target human body pictures of the subway passenger obtained in the step (5) into a trained pedestrian Re-identification (ReID) model, so as to respectively obtain a set which corresponds to each target human body picture and is formed by the human body pictures of the subway passenger in each subway monitoring area and the similarity between each human body picture in the set and the target human body picture from a human body search library;
in this step, the ReID model may be a Multiple Granularity Network (MGN), a Part-based Convolutional Baseline (PCB), a Similarity-Guided Graph Neural Network (SGGNN), Video Object Segmentation with Re-identification (VS-ReID), or the like.
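The retrieval performed in steps (6) and (8) can be sketched as a cosine-similarity ranking over the features in the search library; the gallery contents, metadata strings, and feature dimensionality below are illustrative assumptions:

```python
import numpy as np

def search_gallery(query_feat, gallery_feats, gallery_meta):
    """Retrieval sketch: rank every picture in the search library by
    cosine similarity to the target picture's feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery item
    order = np.argsort(-sims)         # descending similarity
    return [(gallery_meta[i], float(sims[i])) for i in order]

# Hypothetical 3-picture gallery with 4-D features.
gallery = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.7, 0.7, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
meta = ["area-1/frame-12", "area-1/frame-40", "area-3/frame-7"]
ranked = search_gallery(np.array([1.0, 0.1, 0.0, 0.0]), gallery, meta)
print(ranked[0][0])  # 'area-1/frame-12'
```

In practice the similarity scores returned here are what the later thresholding steps (7) and (9) compare against.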
The ReID model used in this step is obtained by training through the following process:
(a) acquiring public pedestrian re-identification datasets (such as Market-1501, CUHK03, and DukeMTMC-reID) and a self-built subway dataset tailored to the actual scene, and dividing them into a training set, a verification set, and a test set according to the requirements of each dataset;
(b) training the ReID model with the pictures in the training set and their corresponding labels;
(c) updating and optimizing the weight and bias parameters of each layer in the model with an optimizer to obtain the updated weight and bias parameters of each layer.
Specifically, a mainstream optimizer such as Adam or SGD is used.
(d) iteratively training the model updated in step (c), and iteratively verifying the model during training with the verification set, until the loss function of the model reaches its minimum.
Specifically, when the present invention uses the MGN model, the loss functions used in training are a softmax function for classification and a triplet loss function (Triplet Loss) for metric learning.
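A minimal sketch of such a combined training loss, assuming a single classification branch and one triplet per batch; the weighting factor, margin, and sample values are hypothetical:

```python
import numpy as np

def softmax_ce(logits, label):
    """Classification branch: cross-entropy over softmax probabilities."""
    z = logits - logits.max()            # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def triplet(anchor, positive, negative, margin=0.3):
    """Metric-learning branch: hinge on the anchor-positive vs
    anchor-negative distance gap."""
    return max(np.linalg.norm(anchor - positive)
               - np.linalg.norm(anchor - negative) + margin, 0.0)

def mgn_loss(logits, label, anchor, positive, negative, w=1.0):
    """Total training loss: classification term plus weighted metric term."""
    return softmax_ce(logits, label) + w * triplet(anchor, positive, negative)

# Hypothetical 2-class logits and 2-D embeddings for one triplet.
total = mgn_loss(np.array([2.0, 0.5]), 0,
                 np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                 np.array([0.0, 1.0]))
print(round(total, 3))  # ~0.201 (the triplet term is zero here)
```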
(e) testing the ReID model with the test set, and selecting the model with the best performance on the test results as the trained ReID model.
(9) For each subway passenger, all the sets of human body pictures of the subway passenger in the ith subway monitoring area, corresponding to the N target human body pictures and obtained in step (8), are acquired, and the human body pictures in those sets whose similarity to the corresponding target human body picture is less than or equal to a threshold are deleted, thereby obtaining multiple human body pictures of the subway passenger in the ith subway monitoring area that are similar to the N target human body pictures; all the human body pictures of the subway passenger in all P subway monitoring areas that are similar to the N target human body pictures are then sorted by their time information, and a complete second riding path of the subway passenger is established from the spatial information of the sorted human body pictures, each point on the second riding path having spatial information and corresponding time information;
the threshold value in this step is in the range of 0 to 1, and the preferred range is 0.9 to 0.95.
(10) Taking the union of all points on the first riding path and all points on the second riding path, and sorting all the resulting points by their corresponding time information, thereby obtaining the final riding path of the subway passenger.
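The union-and-sort of step (10) can be sketched as follows, representing each path point as a (location, time) pair, since the description states each point carries spatial and temporal information; the sample data are hypothetical:

```python
from datetime import datetime

def merge_paths(face_path, body_path):
    """Step (10) sketch: take the union of the points on the face-based
    and body-based riding paths (duplicates collapse automatically),
    then re-sort all points by their time information."""
    merged = set(face_path) | set(body_path)
    return sorted(merged, key=lambda point: point[1])  # sort by time

face_path = [("gate entrance", datetime(2020, 5, 1, 12, 20, 20)),
             ("platform", datetime(2020, 5, 1, 12, 25, 3))]
body_path = [("platform", datetime(2020, 5, 1, 12, 25, 3)),  # duplicate point
             ("car interior", datetime(2020, 5, 1, 12, 27, 40))]
final = merge_paths(face_path, body_path)
print([loc for loc, _ in final])
# ['gate entrance', 'platform', 'car interior']
```

Combining both modalities this way fills gaps where only the face (or only the body) was clearly captured, which is the stated motivation for maintaining two paths.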
As shown in fig. 2, according to a second aspect of the present invention, a passenger flow clearing method is provided, which is implemented based on the above passenger path identification method and specifically includes: first, the transport mileage or number of stations borne by each operator along the final riding path of the subway passengers is calculated; then, the passenger flow of each operator is cleared according to the transport mileage or number of stations.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (7)
1. A passenger path identification method, comprising the steps of:
(1) acquiring monitoring videos of all subway passengers in all subway monitoring areas in a time period, wherein each frame of picture of the monitoring videos comprises time information and space information;
(2) performing data cleaning operation on the monitoring video obtained in the step (1) to obtain a processed monitoring video;
(3) performing target extraction on each frame of the monitoring video processed in step (2) using a target detection algorithm to obtain a face picture and a human body picture of each subway passenger in the frame, wherein the face pictures of each subway passenger in all frames corresponding to each subway monitoring area form the face picture set of that subway monitoring area, and the human body pictures of each subway passenger in all frames corresponding to each subway monitoring area form the human body picture set of that subway monitoring area; the features extracted by a trained face recognition model from the face picture sets of all subway monitoring areas of all subway passengers are used as a face search library, and the features extracted by a trained ReID model from the human body picture sets of all subway monitoring areas of all subway passengers are used as a human body search library;
(4) acquiring a monitoring video of a human face clear identification area and a monitoring video of a human body clear identification area from the monitoring video processed in the step (2), acquiring a human face picture set of each subway passenger in the human face clear identification area from the monitoring video of the human face clear identification area by using a target detection algorithm and a target tracking algorithm in sequence, and acquiring a human body picture set of each subway passenger in the human body clear identification area from the monitoring video of the human body clear identification area;
(5) for each subway passenger, randomly acquiring M target face pictures from the face picture set of the subway passenger in the face clear identification region obtained in the step (4), and randomly acquiring N target human body pictures from the human body picture set of the subway passenger in the human body clear identification region obtained in the step (4), wherein M and N are natural numbers;
(6) for each subway passenger, sequentially inputting the M target face pictures of the subway passenger obtained in the step (5) into a trained face recognition model so as to respectively obtain a set formed by the face pictures of the subway passenger in each subway monitoring area corresponding to each target face picture and the similarity between each face picture in the set and the target face picture from a face search library;
(7) for each subway passenger, acquiring all the sets of face pictures of the subway passenger in the ith subway monitoring area corresponding to the M target face pictures obtained in step (6), and deleting the face pictures in those sets whose similarity to the corresponding target face picture is less than or equal to a threshold, thereby obtaining a plurality of face pictures of the subway passenger in the ith subway monitoring area that are similar to the M target face pictures; sorting all the face pictures of the subway passenger in all P subway monitoring areas that are similar to the M target face pictures according to their time information, and establishing a complete first riding path of the subway passenger according to the spatial information of the sorted face pictures, wherein each point on the first riding path has spatial information and corresponding time information, i ∈ {1, ..., P}, and P represents the total number of subway monitoring areas;
(8) for each subway passenger, sequentially inputting the N target human body pictures of the subway passenger obtained in the step (5) into a trained ReID model so as to respectively obtain a set which corresponds to each target human body picture and is formed by the human body pictures of the subway passenger in each subway monitoring area and the similarity between each human body picture in the set and the target human body picture from a human body search library;
(9) for each subway passenger, acquiring all the sets of human body pictures of the subway passenger in the ith subway monitoring area corresponding to the N target human body pictures obtained in step (8), and deleting the human body pictures in those sets whose similarity to the corresponding target human body picture is less than or equal to a threshold, thereby obtaining a plurality of human body pictures of the subway passenger in the ith subway monitoring area that are similar to the N target human body pictures; sorting all the human body pictures of the subway passenger in all P subway monitoring areas that are similar to the N target human body pictures according to their time information, and establishing a complete second riding path of the subway passenger according to the spatial information of the sorted human body pictures, wherein each point on the second riding path has spatial information and corresponding time information;
(10) and performing union processing on all points on the first riding path and all points on the second riding path, and sequencing all points according to the time information corresponding to each point obtained after union processing, thereby obtaining the final riding path of the subway passenger.
2. The passenger path identifying method according to claim 1, wherein the subway monitoring area comprises an entrance of a subway, a security door, a gate opening, a transfer passage, a platform waiting area, an interior of a subway car, and the like.
3. The passenger path recognition method according to claim 1, wherein the human face clear recognition area refers to a subway monitoring area in which faces of subway passengers can be clearly photographed, and the human body clear recognition area refers to a subway monitoring area in which human bodies of subway passengers can be clearly photographed.
4. The passenger path recognition method according to claim 1, wherein the face recognition model is trained by the following process:
(a) acquiring a public face recognition data set, and dividing a training set, a verification set and a test set according to the requirements of the data set;
(b) inputting the pictures in the training set and the corresponding labels into a convolutional neural network for training;
(c) updating and optimizing the weight parameter and the bias parameter of each layer in the convolutional neural network by using an optimizer to obtain the updated weight parameter and bias parameter of each layer in the convolutional neural network;
(d) iteratively training the convolutional neural network updated in step (c), and iteratively verifying the convolutional neural network during training with the verification set until the loss function of the convolutional neural network reaches a minimum value, wherein the loss function of the convolutional neural network uses Triplet Loss, Center Loss, L-Softmax, or A-Softmax;
(e) testing the face recognition model with the test set, and selecting the face recognition model with the best performance on the test results.
5. The passenger path identification method according to claim 1, wherein the ReID model is a Multiple Granularity Network (MGN), a Part-based Convolutional Baseline (PCB), a Similarity-Guided Graph Neural Network (SGGNN), or Video Object Segmentation with Re-identification (VS-ReID).
6. The passenger path identification method according to claim 5, wherein the ReID model is trained by the following process:
(a) acquiring a public pedestrian re-identification data set and a self-built subway data set tailored to the scene, and dividing a training set, a verification set, and a test set according to the requirements of the data sets;
(b) training the ReID model with the pictures in the training set and their corresponding labels;
(c) updating and optimizing the weight and bias parameters of each layer in the model with an optimizer to obtain the updated weight and bias parameters of each layer;
(d) Performing iterative training on the updated model in the step (c), and performing iterative verification on the model in the iterative training by using a verification set until the loss function of the model reaches the minimum;
(e) testing the ReID model with the test set, and selecting the model with the best performance on the test results as the trained ReID model.
7. A passenger flow clearing method implemented based on the passenger path identification method according to any one of claims 1 to 6, characterized in that the passenger flow clearing method comprises:
(1) calculating the transport mileage or number of stations borne by each operator along the final riding path of the subway passengers;
(2) clearing the passenger flow of each operator according to the transport mileage or number of stations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010527445.0A CN111680638B (en) | 2020-06-11 | 2020-06-11 | Passenger path identification method and passenger flow clearing method based on same |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111680638A true CN111680638A (en) | 2020-09-18 |
CN111680638B CN111680638B (en) | 2020-12-29 |
Family
ID=72454605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010527445.0A Active CN111680638B (en) | 2020-06-11 | 2020-06-11 | Passenger path identification method and passenger flow clearing method based on same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111680638B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113536914A (en) * | 2021-06-09 | 2021-10-22 | 重庆中科云从科技有限公司 | Object tracking identification method, system, equipment and medium |
CN115580830A (en) * | 2022-12-07 | 2023-01-06 | 成都智元汇信息技术股份有限公司 | AP probe multipoint positioning-based passenger violation path detection method and device |
CN116721390A (en) * | 2023-08-09 | 2023-09-08 | 克伦斯(天津)轨道交通技术有限公司 | Subway train passenger state determining method and system based on data processing |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106934378A (en) * | 2017-03-16 | 2017-07-07 | 山东建筑大学 | A kind of dazzle light identifying system and method based on video depth study |
US20180285659A1 (en) * | 2017-03-31 | 2018-10-04 | Here Global B.V. | Method, apparatus, and system for a parametric representation of lane lines |
CN108725357A (en) * | 2018-05-15 | 2018-11-02 | 上海博泰悦臻网络技术服务有限公司 | Parameter control method, system based on recognition of face and cloud server |
CN108921483A (en) * | 2018-07-16 | 2018-11-30 | 深圳北斗应用技术研究院有限公司 | A kind of logistics route planing method, device and driver arrange an order according to class and grade dispatching method, device |
CN108932343A (en) * | 2018-07-24 | 2018-12-04 | 南京甄视智能科技有限公司 | The data set cleaning method and system of face image database |
CN109344787A (en) * | 2018-10-15 | 2019-02-15 | 浙江工业大学 | A kind of specific objective tracking identified again based on recognition of face and pedestrian |
CN109614984A (en) * | 2018-10-29 | 2019-04-12 | 深圳北斗应用技术研究院有限公司 | A kind of homologous image detecting method and system |
CN109753920A (en) * | 2018-12-29 | 2019-05-14 | 深圳市商汤科技有限公司 | A kind of pedestrian recognition method and device |
CN109886440A (en) * | 2019-01-08 | 2019-06-14 | 平安科技(深圳)有限公司 | Net about vehicle control method, device, computer equipment and storage medium |
CN110263219A (en) * | 2019-06-26 | 2019-09-20 | 银河水滴科技(北京)有限公司 | A kind of method and device that target object route is screened, shown |
US20190306463A1 (en) * | 2018-03-28 | 2019-10-03 | Gal Zuckerman | Analyzing past events by utilizing imagery data captured by a plurality of on-road vehicles |
CN110348301A (en) * | 2019-06-04 | 2019-10-18 | 平安科技(深圳)有限公司 | Implementation method of checking tickets, device, computer equipment and storage medium based on video |
CN110353958A (en) * | 2019-08-09 | 2019-10-22 | 朱原灏 | A kind of mancarried device and its method for assisting blind-man riding |
CN110516600A (en) * | 2019-08-28 | 2019-11-29 | 杭州律橙电子科技有限公司 | A kind of bus passenger flow detection method based on Face datection |
CN110796079A (en) * | 2019-10-29 | 2020-02-14 | 深圳龙岗智能视听研究院 | Multi-camera visitor identification method and system based on face depth features and human body local depth features |
CN111159475A (en) * | 2019-12-06 | 2020-05-15 | 中山大学 | Pedestrian re-identification path generation method based on multi-camera video image |
CN111241932A (en) * | 2019-12-30 | 2020-06-05 | 广州量视信息科技有限公司 | Automobile exhibition room passenger flow detection and analysis system, method and storage medium |
Non-Patent Citations (5)
Title |
---|
Ul Haq, Ejaz et al.: "A fast hybrid computer vision technique for real-time embedded bus passenger flow calculation through camera", Multimedia Tools and Applications *
Liu Shuoshan: "On the influence of face recognition on railway passenger transport", Technology Innovation and Application *
Zhou Chunjie: "An attendance system based on face recognition and pedestrian re-identification technology", Industrial Control Computer *
Jiang Hao: "Research on pedestrian detection and tracking in driver-assistance systems based on deep learning", China Master's Theses Full-text Database, Engineering Science and Technology II *
Wang Fei: "Research on bus passenger flow statistics technology based on video image processing", China Master's Theses Full-text Database, Engineering Science and Technology II *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111680638B (en) | Passenger path identification method and passenger flow clearing method based on same | |
CN110348445B (en) | Instance segmentation method fusing void convolution and edge information | |
CN110097755B (en) | Highway traffic flow state identification method based on deep neural network | |
CN113205176B (en) | Method, device and equipment for training defect classification detection model and storage medium | |
CN110751099B (en) | Unmanned aerial vehicle aerial video track high-precision extraction method based on deep learning | |
CN112330591B (en) | Steel rail surface defect detection method and device capable of achieving sample-less learning | |
CN111353395A (en) | Face changing video detection method based on long-term and short-term memory network | |
CN110969166A (en) | Small target identification method and system in inspection scene | |
CN110362557B (en) | Missing path repairing method based on machine learning and license plate recognition data | |
CN108197326A (en) | A kind of vehicle retrieval method and device, electronic equipment, storage medium | |
CN110120218A (en) | Expressway oversize vehicle recognition methods based on GMM-HMM | |
CN110543883A (en) | license plate recognition method based on deep learning | |
CN108830882B (en) | Video abnormal behavior real-time detection method | |
CN114049612A (en) | Highway vehicle charge auditing system based on graph searching technology and dual-obtaining and checking method for driving path | |
CN116168356B (en) | Vehicle damage judging method based on computer vision | |
CN112861840A (en) | Complex scene character recognition method and system based on multi-feature fusion convolutional network | |
CN112288700A (en) | Rail defect detection method | |
CN112419289A (en) | Intelligent detection method for urban subway rail fastener defects | |
CN110164136B (en) | Fake-licensed vehicle identification method | |
CN110164137B (en) | Method, system and medium for identifying fake-licensed vehicle based on driving time of bayonet pair | |
CN116524725B (en) | Intelligent driving traffic sign image data identification system | |
CN116564551B (en) | Data-knowledge driven urban rail transit risk identification method | |
CN110532904B (en) | Vehicle identification method | |
CN115359306B (en) | Intelligent identification method and system for high-definition images of railway freight inspection | |
Gao et al. | Anomaly detection of trackside equipment based on GPS and image matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||