CN110647855A - Subway station passenger flow statistical method based on face recognition - Google Patents


Info

Publication number
CN110647855A
CN110647855A
Authority
CN
China
Prior art keywords
image
face
passenger flow
station
value
Prior art date
Legal status
Granted
Application number
CN201910931510.3A
Other languages
Chinese (zh)
Other versions
CN110647855B (en)
Inventor
屈霞
乔鼎
刘行健
赵佳怡
陈宏宇
Current Assignee
Changzhou University
Original Assignee
Changzhou University
Priority date
Filing date
Publication date
Application filed by Changzhou University
Priority to CN201910931510.3A
Publication of CN110647855A
Application granted
Publication of CN110647855B
Active legal status
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G06Q50/265 Personal security, identity or safety

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Educational Administration (AREA)
  • Computer Security & Cryptography (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a subway station passenger flow statistical method based on face recognition, belonging to the technical field of rail transit operation management. A station control room processor analyzes images collected by image acquisition devices installed at each station entrance and exit and at the platform screen door openings, extracts face images, and performs face recognition, entrance/exit people counting, hash-based face tracking, automatic capture, and automatic storage; the station control room terminal then compiles station passenger flow statistics and draws a passenger flow time distribution map. The system solves the problem of real-time in-station passenger flow statistics and tracks face images to identify criminal suspects. The passenger flow time distribution map, drawn in real time, provides reference data for subway operators to improve safety monitoring, arrange operation plans, schedule trains, and plan and design lines. The method offers a high recognition rate, strong real-time performance, and good practicability, and can also be used for passenger flow statistics in other venues.

Description

Subway station passenger flow statistical method based on face recognition
Technical Field
The invention relates to the technical field of rail transit operation management, in particular to a subway station passenger flow statistical method based on face recognition.
Background
Subway stations are key nodes of urban rail transit networks, where passenger flow congestion often occurs. When a large passenger flow approaches or exceeds a station's carrying capacity, passenger dispersion is severely tested, and safety hazards such as retention, crowding, and even stampedes can arise. Station passenger volume is directly related to passenger safety, operational safety, and service quality.
Existing techniques for counting passenger flow in subway stations fall mainly into the following types. (1) Traditional manual counting on the platform; its main defects are that in-station passenger flow cannot be counted promptly and accurately and that it consumes considerable manpower and money. (2) AFC data analysis, i.e., statistics from the automatic fare collection system; its main defect is that it only counts passengers passing through the station gates and cannot count passengers passing through the platform screen doors or boarding and alighting at the train doors. (3) Video monitoring, i.e., intelligent image analysis; its main defects are a heavy video-processing workload and application requirements such as a wide field of view and a high mounting position. At present, stations in China install cameras only to monitor concourses, platforms, and carriages, without dedicated video passenger-counting equipment.
Counting in-station passenger flow in real time would provide reliable data for station attendants to compute the station's passenger flow indexes quickly and prepare measures against large passenger flows in advance; further, face recognition technology could support an online criminal-pursuit system and help reduce the crime rate. Real-time statistics are also significant for subway operators in improving safety monitoring, arranging operation plans, managing operations, scheduling trains, and planning and designing lines, and are therefore an urgent problem to solve.
Disclosure of Invention
The technical problems to be solved by the invention are as follows:
(1) accurately detect and count passenger flow at each station entrance and exit and at the platform screen door openings, realizing real-time in-station passenger flow statistics;
(2) compare faces detected in the images with the images of persons to be tracked, to identify criminal suspects;
(3) capture, collect, and store the faces detected in the images;
(4) compile the in-station passenger flow time distribution in real time, providing reference data for subway operators to improve safety monitoring, arrange operation plans, schedule trains, and plan and design lines.
The technical scheme adopted to solve the above problems is a subway station passenger flow statistical method based on face recognition. For clarity, the scheme is described as follows:
The statistical method mainly realizes people counting, face tracking, face capture, face storage, and automatic entry at the subway station entrances and exits and at the boarding and alighting openings of the train platform screen doors. The specific functions are as follows:
(1) people counting analyzes each extracted face to judge whether successive detections belong to the same person; once the person is identified, the count of people is updated;
(2) face tracking traverses the extracted faces, compares each with the image of the person to be tracked, and computes their similarity, thereby realizing the tracking function;
(3) face capture stores the extracted faces in real time;
(4) face storage automatically saves each frame of the image screen so that people can be queried quickly later;
(5) automatic entry stores the faces confirmed for counting in time order.
At least one image acquisition device is pre-installed at each station entrance and exit and at each platform screen door opening to acquire real-time images; cameras or video cameras may be used. Each device is connected to the station control room processor and data server through a wired or wireless network. The station control room processor provides an image control interface and displays and stores images. The data server numbers each entrance and exit acquisition device and assigns each entrance and exit an ID number in the face statistics table of the database.
The passenger flow statistical method comprises the following steps:
Step 1: collecting image information;
The image acquisition devices installed at the station entrances and exits and at the platform screen door openings acquire images of each opening in real time and transmit the image data to the station control room processor;
Preferably, passenger images are acquired as follows: the station control room processor opens the corresponding camera, grabs one frame, and loads a face feature library; specifically, an OpenCV Haar-feature face classifier may be used.
Step 2: extracting a face image;
The image is grayed, and all faces in it are searched using the face feature library object, giving each face's coordinates and size. Each face is marked in turn with a rectangle or circle, and the face count value is updated.
Preferably, the image is grayed using the formula Gray = 0.299R + 0.587G + 0.114B, where R, G, and B are the red, green, and blue luminance values; graying reduces the computational load. Specifically, face search is performed on the grayscale image with an OpenCV Haar-feature face classifier, enlarging the search window by 10% on each pass to detect the positions of all faces in the grayscale image.
Each face is marked in turn with a rectangle or circle, and the face count value is updated as follows:
All faces whose coordinates were located in the grayscale image are drawn in turn with a rectangular frame or circle; the face count value is updated first, and it is then judged whether to count people, whether to track, and whether to capture.
Step 3: function processing;
The station control room processor displays a control interface through which the various states are entered. The interface provides six picture buttons (open picture folder, save picture, capture face, start tracking, count people, and automatic entry), used to set six working states. To count the number of people, press the "count people" button on the control interface; to capture a face picture, press the "capture face" button; to track, press the "start tracking" button; to save the picture, press the "save picture" button; to enter image information automatically, press the "automatic entry" button.
The specific determination process of each functional state is as follows:
step 3.1: judging whether the number of people is in a people counting state, specifically, judging whether the number of people counting button is pressed, namely, the number of people is in the people counting state, detecting whether the lower edge of the current face meets the requirement, and whether the face is kept in 2s, if the condition of the extracted face information is met, counting the number of the face according to the extracted face information, and updating the number field data of the number of people at the exit or the number field data of the number of people at the entrance in the face counting table of the station control room database by using a new counting value according to the number of the entrance ID or the number of the exit ID, wherein the counting mode adopts a mode of adding the counting value by 1 continuously or subtracting continuously, and the counting mode of the embodiment adopts a mode of adding the counting value by 1; each entrance/exit has a unique ID number, and is distinguished according to the entrance ID number or the exit ID number. If the number of people is not in the counting state, the step 3.2 is carried out;
When the face-count information in the database table is updated, the face image itself must also be recorded and stored, either manually or automatically. To improve efficiency, this embodiment uses automatic entry: judge whether the "automatic entry" button has been pressed, and if so, capture the current face image and store it in the station control room processor by time.
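The counting gate of step 3.1 can be sketched as below, using the lower-edge band (360 to 420) and the 2 s hold given later in the embodiment; the class and method names are hypothetical, and the database update is left out.

```python
import time

# thresholds from the embodiment: lower edge of the face between 360 and 420,
# and the face held for at least 2 s before it is counted
EDGE_MIN, EDGE_MAX, HOLD_SECONDS = 360, 420, 2.0

class EntranceCounter:
    """Counts one passenger when a face stays in the band long enough (hypothetical helper)."""

    def __init__(self):
        self.count = 0
        self._first_seen = None  # time the face first entered the band

    def update(self, face_lower_edge, now=None):
        now = time.monotonic() if now is None else now
        if EDGE_MIN <= face_lower_edge <= EDGE_MAX:
            if self._first_seen is None:
                self._first_seen = now
            elif now - self._first_seen >= HOLD_SECONDS:
                self.count += 1          # count value incremented by 1
                self._first_seen = None  # re-arm for the next passenger
        else:
            self._first_seen = None      # face left the band: reset the timer
        return self.count
```

The new count would then overwrite the entering- or exiting-people field keyed by the gate's ID number.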
Step 3.2: judge whether the system is in the face-tracking state, i.e., whether the "start tracking" button has been pressed. If so, capture the current face image from the information extracted in step 2, compare it with the stored images of persons to be tracked in the station control room database, compute their similarity with a hash algorithm, and judge whether the face belongs to a person to be tracked; if it does, sound an alarm on the control interface, then go to step 3.3. If the system is not in the face-tracking state, go to step 3.3;
The similarity between the current face image and the image of the person to be tracked is computed with hash algorithms as follows: apply the mean hash algorithm to the current face image to obtain value a1 and to the image of the person to be tracked to obtain value a2, then compute the mean-hash similarity value an from a1 and a2; apply the difference hash algorithm to the two images to obtain values d1 and d2, then compute the difference-hash similarity value dn from d1 and d2. Whether the current face belongs to the person to be tracked is decided from an and dn.
Preferably, the mean hash algorithm first compresses the picture to i × j pixels to remove the influence of size and aspect ratio, and converts it to an i × j grayscale image gray, where i is the number of pixel rows and j the number of pixel columns. The grayscale image is traversed and accumulated with s += gray[i, j] to obtain the grayscale pixel sum, and the average gray value is computed as avg = s / (i × j). The i × j pixels are then traversed: when a pixel's gray value exceeds the average, i.e., gray[i, j] > avg, the corresponding bit of the mean-hash value (a1 for the current face image, a2 for the image of the person to be tracked) is recorded as 1; when gray[i, j] ≤ avg, it is recorded as 0. This yields an i × j bit string a1 or a2 of 1s and 0s for the picture.
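The mean hash step can be sketched with NumPy; the compression to i × j pixels (e.g. with cv2.resize) is assumed to have been done upstream.

```python
import numpy as np

def mean_hash(gray):
    """gray: 2-D uint8 array already compressed to i x j pixels.
    Returns the i*j bit string: 1 where pixel > average, 0 where pixel <= average."""
    avg = gray.mean()                             # avg = s / (i * j)
    return (gray > avg).astype(np.uint8).ravel()  # bits a1 or a2
```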
Preferably, the difference hash algorithm likewise compresses the picture to i × j pixels to remove the influence of size and aspect ratio, and converts it to an i × j grayscale image gray to simplify the colors. The pixels are traversed row by row: when a pixel is brighter than its right-hand neighbour, i.e., gray[i, j] > gray[i, j+1], the corresponding bit of the difference-hash value (d1 for the current face image, d2 for the image of the person to be tracked) is recorded as 1; when gray[i, j] ≤ gray[i, j+1], it is recorded as 0. This yields a bit string d1 or d2 of 1s and 0s for the picture.
Preferably, the similarity is computed by traversing the mean-hash values a1 of the current face image and a2 of the image of the person to be tracked bit by bit and counting the positions where a1 ≠ a2; the accumulated count is the similarity value an. Likewise, the difference-hash values d1 and d2 are traversed, the positions where d1 ≠ d2 are counted, and the accumulated count is the similarity value dn. Smaller values indicate greater similarity.
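The difference hash and the bit-count similarity can be sketched the same way. As above, the image is assumed already compressed and grayed; note that comparing each pixel with its right-hand neighbour yields i × (j − 1) bits per image.

```python
import numpy as np

def diff_hash(gray):
    """1 where a pixel is brighter than its right-hand neighbour, else 0 (bits d1 or d2)."""
    return (gray[:, :-1] > gray[:, 1:]).astype(np.uint8).ravel()

def similarity(h1, h2):
    """Count the positions where two hash bit strings disagree (an or dn);
    a smaller count means the two images are more alike."""
    return int(np.count_nonzero(h1 != h2))
```

With the thresholds given later in the embodiment, a face would be declared the tracked person when both similarity values are below 35.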
Step 3.3: judge whether the system is in the face-capture state, i.e., whether the "capture face" button has been pressed. If so, read the face picture from the current frame in the information extracted in step 2 and save it by time. In this embodiment, when in the face-capture state, the captured-face count is incremented by 1, and the current face image is cut out and saved under a file name containing the current time and the captured-face count. If the system is not in the face-capture state, go to step 4.
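A sketch of the time-plus-count file name used in step 3.3; the exact format string is an assumption, since the patent only says the name contains the current time and the captured-face count.

```python
import datetime
import itertools

_captured = itertools.count(1)  # running captured-face count

def capture_filename(now=None):
    """File name containing the current time and the captured-face count (format assumed)."""
    now = now or datetime.datetime.now()
    return f"face_{now:%Y%m%d_%H%M%S}_{next(_captured)}.jpg"
```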
Step 4: updating the display data;
The station control room processor repeats step 3 to process each face in the collected image, and then updates the control interface state and the face display;
the method for displaying the face image shot by the camera comprises the following steps:
After the camera image is grayed and all faces are detected, each face is displayed in turn with a rectangular frame or circle; the six button sprites managed by the sprite group are updated; the current face count value, the current statistics count value, and the current working state are refreshed on the display; and the grayed camera image with its rectangular face frames is displayed.
Step 5: counting passenger flow;
From the numbers of entering and exiting persons at each entrance and exit obtained in step 3, the station control room terminal computes the inbound and outbound passenger volumes of each entrance and exit for the time period to be counted, and obtains the station's passenger flow at a given moment or over a given period from the station's total inbound and outbound volumes in the same period.
In this embodiment, the passenger volume at each entrance and exit is computed per time period as follows: the time to be counted is divided into hourly segments between the station's first and last departures; when the top of each hour arrives, the entering- and exiting-people fields of each entrance and exit are read from the face statistics table in the data server according to the entrance and exit ID numbers.
Step 6: draw a passenger flow time distribution map from the statistics of step 5, and schedule and monitor trains according to the map.
The station passenger flow in each time period is obtained from the inbound and outbound passenger flows as follows: when the top of each hour arrives, the entering counts taken from all entrances are accumulated, the exiting counts taken from all exits are accumulated, and the accumulated entering count is added to the accumulated exiting count to give the station's passenger flow for that period.
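The hourly aggregation of steps 5 and 6 can be sketched as below; the nested-dict layout standing in for the face statistics table is an assumption.

```python
def station_flow_by_hour(in_counts, out_counts):
    """in_counts / out_counts: {hour: {gate_id: persons}} read at the top of each hour.
    Returns {hour: (inbound, outbound, total)} for the whole station."""
    flow = {}
    for hour in sorted(in_counts):
        inbound = sum(in_counts[hour].values())             # accumulate all entrances
        outbound = sum(out_counts.get(hour, {}).values())   # accumulate all exits
        flow[hour] = (inbound, outbound, inbound + outbound)  # station flow per period
    return flow
```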
The distribution map may cover a day, a week, a month, or a year, with different statistical periods chosen to analyze different problems. For example, the daily map reveals the daily peak hours, so operation can be managed by time of day according to the flow; the weekly map reveals the differences in flow between the working days of a week, so staffing, train scheduling, and route planning can be adjusted; the annual map reveals the differences in flow between months and seasons, so holiday-period work can be planned in advance to guarantee travel.
The daily passenger flow time distribution is plotted as a histogram with time on the abscissa and station passenger flow on the ordinate; the passenger volume and its proportion in each time period are marked, and a flow curve over the periods is drawn from the proportions.
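The daily distribution of fig. 4 could be plotted roughly as follows; matplotlib is not named in the patent and is an assumption here, as are the labels and file name.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def plot_daily_flow(hours, flow, path="daily_flow.png"):
    """Histogram of station flow per hour, annotated with volume and share,
    plus a flow curve over the same bins. Returns the total daily flow."""
    total = sum(flow)
    fig, ax = plt.subplots()
    ax.bar(hours, flow, color="steelblue")            # passenger-flow histogram
    for h, f in zip(hours, flow):
        ax.annotate(f"{f} ({f / total:.0%})", (h, f), ha="center", va="bottom")
    ax.plot(hours, flow, "r-o")                       # passenger-flow curve
    ax.set_xlabel("hour of day")
    ax.set_ylabel("station passenger flow")
    fig.savefig(path)
    plt.close(fig)
    return total
```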
The invention has the beneficial effects that:
(1) passenger flow at each station entrance and exit and at the platform screen door openings is counted, realizing in-station passenger flow statistics; this solves the current lack of real-time in-station statistics, with high real-time performance, accuracy, and safety.
(2) images of the faces of entering and exiting passengers are compared with the person to be tracked, identifying criminal suspects;
(3) the collected passenger faces are captured and stored;
(4) the passenger flow time distribution map, compiled in real time, provides reference data for subway operators to improve safety monitoring, arrange operation plans, schedule trains, and plan and design lines.
(5) the method is implemented in Python with OpenCV; the image acquisition devices and station control room equipment are connected through a network, and the station control room processor and server share data in B/S mode.
Drawings
The invention is further illustrated by the following figures and examples.
Fig. 1 is a schematic diagram of a station system of the present invention.
Fig. 2 is a schematic flow chart of a subway station passenger flow statistical method based on face recognition in the embodiment of the present invention.
Figure 3 is a station control room processor control interface.
Fig. 4 is a schematic diagram of the distribution of the daily passenger flow time.
Detailed Description
The present invention will now be described in detail with reference to the accompanying drawings. The figures are simplified schematics that illustrate only the basic structure of the invention, so they show only the parts relevant to it.
Fig. 1 is a schematic view of the station system, described taking one station as an example. The station shown has 5 entry gates and 5 exit gates; monocular cameras are mounted in advance at the entrances, the exits, and the boarding and alighting positions of the station's platform screen doors, each connected to its own processor. Each gate processor stores a txt text file holding the IP address of the station control room data server and connects independently to the station control room's MySQL data server over the station Wi-Fi via pymysql. Each subway station entrance and exit is numbered and assigned an ID number in the face statistics table of the station control room data server, and each gate processor initializes its entering- or exiting-people field in that table according to its entrance or exit ID number.
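The per-gate count fields could be initialized and updated roughly as below; sqlite3 stands in for the MySQL server reached via pymysql in the patent, and the table and column names are assumptions.

```python
import sqlite3  # stand-in for the station control room's MySQL server (pymysql in the patent)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE face_stats ("
             "gate_id INTEGER PRIMARY KEY, "
             "in_count INTEGER DEFAULT 0, "
             "out_count INTEGER DEFAULT 0)")
for gate_id in range(1, 11):  # 5 entry gates + 5 exit gates, each with a unique ID
    conn.execute("INSERT INTO face_stats (gate_id) VALUES (?)", (gate_id,))

def bump(gate_id, direction):
    """Increment the entering- or exiting-people field for one gate."""
    col = "in_count" if direction == "in" else "out_count"
    conn.execute(f"UPDATE face_stats SET {col} = {col} + 1 WHERE gate_id = ?", (gate_id,))
```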
It should be noted that the implementation does not limit the camera's installation position; for example, a head-on acquisition mode may be adopted, i.e., the camera is placed directly in front of or beside the target to acquire images of passing passengers at the subway entrances and exits in real time.
As shown in fig. 2, the subway station passenger flow statistical method based on face recognition of the invention comprises the following steps:
the passenger flow statistical method comprises the following steps:
step 1: collecting image information;
image acquisition equipment arranged at an entrance and an exit of a station and an entrance and an exit of a shielding door acquires images of each entrance and exit of the station in real time and transmits acquired image data to a station control room processor;
preferably, the method for acquiring the passenger images of the subway station by using the cameras at the entrances and exits comprises the following steps: and the station control room processor opens a corresponding camera, intercepts a frame of image and loads a face feature library, and specifically can adopt an OpenCV face recognition Haar feature classifier.
Step 2: extracting a face image;
and graying the image, and searching all the faces in the image by using the face recognition feature library object to obtain the coordinate and the size of each face. And (4) sequentially marking one face by using a rectangle or a circle, and updating the face count value.
Preferably, the image is grayed by using a formula Gray of 0.299R + 0.587G + 0.114B to reduce the operation intensity, wherein R, G and B represent a red luminance value, a green luminance value and a blue luminance value. Specifically, face search is carried out on the gray level image by using an OpenCV face recognition Haar feature classifier, 10% of a search window is expanded through each time, and the positions of all faces in the gray level image in the image are detected.
Each face is sequentially marked by a rectangle or a circle, and the method for updating the face count value comprises the following steps:
and (3) sequentially drawing all faces with coordinates in the gray level image by using a rectangular frame or a circle, wherein the number counting value of the faces is updated at first, and then whether the number of people is counted, whether the people is tracked or not and whether the people is captured or not are judged.
Step 3: function processing;
As shown in fig. 3, the station control room processor displays a control interface with six picture buttons (open picture folder, save picture, capture face, start tracking, count people, and automatic entry), which set six working states. The control interface monitors mouse events; an odd number of clicks on a button turns the corresponding working state on, i.e., opening the picture folder, saving pictures, capturing faces, tracking, counting people, or automatic entry. A sprite group is created to manage the drawing and updating of the six button sprites uniformly. The interface displays the current face count value, the current statistics count value, and the current working state in real time; for example, current_faceCount = 1, current_passer = 5, and current_state = "Start Statistics", i.e., people counting has started.
Each entrance/exit processor independently loads a face feature library using the OpenCV computer vision library and the Python programming language, opens the camera connected to it, captures one frame of image, converts it to grayscale, and detects the positions of all faces in the grayscale image.
Each entrance/exit processor performs the following processing on each face in the image. Each face is outlined with a rectangular frame or circle, and the face count value is incremented by 1. It is then determined whether people counting is enabled; if so, it is further checked whether the lower edge of the current face meets the requirement — illustratively, the lower edge lies between 360 and 420 and remains there for more than 2 s. If both conditions are met, the statistical count is incremented by 1, and, according to the entrance or exit ID number, the corresponding entrance-count or exit-count field of the face statistics table on the data server is updated with the new count. It is then determined whether automatic entry is enabled; if so, the current face is cropped and the face image is saved with the current time. When tracking is enabled, the image of the tracked suspect is read out, the current face image is cropped, and the 2 images are compared with a hash algorithm. In this embodiment, the hash comparison requires that the mean-hash similarity value an of the 2 images is less than 35 and the difference-hash similarity value dn of the 2 images is less than 35. When tracking is not enabled, it is determined whether face capture is enabled; if so, the captured-face count is incremented by 1, and the current face image is cropped and saved under a file name containing the current time and the captured-face count.
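A minimal sketch of the people-counting branch of this per-face logic, under the illustrative thresholds above (the data structures and names are assumptions; the database update and the tracking/capture branches are omitted):

```python
import time

def process_face(face, state, counts, now=None):
    """One pass of the per-face logic: always bump the face count; when
    people counting is enabled, bump the passenger count only if the face's
    lower edge sits in the 360-420 band and has been held there over 2 s."""
    now = time.time() if now is None else now
    counts["faces"] += 1                        # every detected face is counted
    if state.get("count_people"):
        in_band = 360 <= face["lower_edge"] <= 420
        held = now - face["band_since"] >= 2.0  # held in the band for over 2 s
        if in_band and held:
            counts["passengers"] += 1           # would also update the DB field
    return counts
```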
In this embodiment, both the mean hash algorithm and the difference hash algorithm compress the picture to 8 × 8 pixels, yielding 64 pixels, which eliminates the influence of differing sizes and aspect ratios. Color is then simplified by converting the image to 64 gray levels, so that every pixel takes one of 64 values in total.
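Assuming the input patch has already been resized to 8 × 8 (e.g. with cv2.resize, not shown), the 64-level quantization can be sketched as:

```python
def quantize_64(gray_8x8):
    """Map 0-255 gray values onto 64 levels (0-63) by integer division;
    the input is assumed to already be resized to 8 x 8 pixels."""
    return [[v // 4 for v in row] for row in gray_8x8]
```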
Step 4: updating the display data;
the station control room processor repeats step 3 to perform function processing on each face image in the collected images, then updates the state of the control interface and the face display;
in this embodiment, after each entrance/exit processor has converted the camera image to grayscale and outlined each face in turn with a rectangular frame, the 6 button sprites are updated, the displayed current face count value, current statistical count value, and current working state are refreshed, and the grayscale camera image with the rectangular frames is displayed.
Steps 5 and 6: counting passenger flow and drawing a distribution diagram;
specifically, the station control room data server stores the entrance counts and exit counts for each entrance/exit of the subway station, and the station control room terminal draws the daily passenger-flow time distribution map.
As shown in fig. 4, the daily passenger-flow time distribution diagram is displayed by the station control room terminal. Illustratively, with first-train and last-train departure times of 5:50 and 22:50, the period to be counted, from 5:30 to 23:30, is divided into 18 one-hour time periods. On the hour within each time period, the station control room terminal retrieves each entrance-count field and each exit-count field from the face statistics table on the station control room data server, indexed by the entrance and exit ID numbers. The entrance counts are summed, the exit counts are summed, and subtracting the summed exit count from the summed entrance count gives the station passenger flow for each of the 18 time periods. A passenger-flow histogram is drawn with time on the abscissa and station passenger flow on the ordinate, annotated with the passenger flow of each of the 18 periods and its proportion of the total passenger flow, and a passenger-flow curve over the periods is drawn from these proportions.
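The period split and the net-flow arithmetic can be sketched as follows (function names are illustrative; the histogram itself could then be drawn with, e.g., matplotlib's bar()):

```python
def hourly_periods(start="5:30", end="23:30"):
    """Number of one-hour periods in the counting window (18 for 5:30-23:30)."""
    sh, sm = map(int, start.split(":"))
    eh, em = map(int, end.split(":"))
    return ((eh * 60 + em) - (sh * 60 + sm)) // 60

def station_flow(entrance_counts, exit_counts):
    """Net station passenger flow for one period:
    summed entrance counts minus summed exit counts."""
    return sum(entrance_counts) - sum(exit_counts)
```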
In this embodiment, the station control room terminal transmits the station passenger-flow statistics to the control center over the subway's existing communication network, providing reference data for the control center to reasonably arrange operation plans, operation management, train dispatching, and the like.
In light of the foregoing description of preferred embodiments of the invention, numerous changes and modifications may be made by those skilled in the art without departing from the scope of the invention. The technical scope of the invention is not limited to the contents of the specification and must be determined by the scope of the claims.

Claims (8)

1. A subway station passenger flow statistical method based on face recognition, characterized in that the method comprises the following steps:
step 1: collecting image information;
image acquisition equipment arranged at the station entrances/exits and the platform screen-door openings acquires images of each entrance and exit of the station in real time and transmits the acquired image data to the station control room processor;
step 2: extracting a face image;
graying the image and searching for all faces in the image with a face-recognition feature library object to obtain the coordinates and size of each face; marking each face in turn with a rectangle or circle and updating the face count value;
step 3: function processing;
step 3.1: judging whether the people-counting state is active; if so, counting the face after confirming the current face image from the face information extracted in step 2, and updating the exit count or entrance count of the corresponding entrance/exit in the data table of the station control room data server; if the people-counting state is not active, proceeding to step 3.2;
step 3.2: judging whether the face-tracking state is active; if so, cropping the current face image from the face information extracted in step 2, comparing it with the image of the person to be tracked stored in the station control room database, calculating the similarity between the two images with a hash algorithm, and judging whether the face is the person to be tracked; if it is, sounding an alarm on the control interface; if the face-tracking state is not active, proceeding to step 3.3;
step 3.3: judging whether the face-capture state is active; if so, cropping the current face image from the face information extracted in step 2 and saving the face image by time; if the face-capture state is not active, proceeding to step 4;
step 4: updating the display data;
the station control room processor repeats step 3 to perform function processing on each face image in the collected images, then updates the state of the control interface and the face display;
step 5: counting passenger flow;
according to the exit count or entrance count of each entrance/exit obtained in step 3, the station control room terminal statistically calculates the entering passenger flow and exiting passenger flow of each entrance/exit over the time period to be counted; and the station passenger flow at a certain moment or within a certain period is obtained from the total entering passenger flow and total exiting passenger flow of the station in the same period.
2. A subway station passenger flow statistical method based on face recognition as claimed in claim 1, characterized in that: step 6: drawing a passenger-flow time distribution map according to the passenger flow statistics of step 5, and scheduling and monitoring trains according to the passenger-flow time distribution map.
3. A subway station passenger flow statistical method based on face recognition as claimed in claim 1, characterized in that the method of acquiring passenger images with the cameras at each entrance/exit and detecting faces in the images comprises: opening the camera and capturing one frame of image; loading the face feature library; converting the image to grayscale with the formula Gray = 0.299R + 0.587G + 0.114B to reduce the computational load; performing face search on the grayscale image with an OpenCV Haar-feature face-detection classifier, expanding the search window by 10% on each pass, and detecting the positions of all faces in the grayscale image;
each face is marked in turn with a rectangle or circle, and the face count value is updated as follows: all faces whose coordinates were obtained in the grayscale image are outlined in turn with a rectangular frame or circle; the face count value is updated first, and it is then determined whether people counting, person tracking, and face capture are enabled.
4. A subway station passenger flow statistical method based on face recognition as claimed in claim 1 or 2, characterized in that step 3.1 further comprises: judging whether automatic entry of face information is enabled, and if so, cropping the face image and saving it by time.
5. A subway station passenger flow statistical method based on face recognition as claimed in claim 1 or 2, characterized in that: in step 3.2, the similarity between the current face image and the image of the person to be tracked is calculated with a hash algorithm: a mean hash algorithm is applied to the current face image to obtain a value a1 and to the image of the person to be tracked to obtain a value a2, and a mean similarity value an is calculated from the mean-hash values a1 and a2; a difference hash algorithm is applied to the current face image to obtain a value d1 and to the image of the person to be tracked to obtain a value d2, and a difference similarity value dn is obtained from the difference-hash values d1 and d2; whether the current face image is the person to be tracked is determined from an and dn.
6. A subway station passenger flow statistical method based on face recognition as claimed in claim 5, characterized in that: the mean hash algorithm specifically comprises:
compressing the picture to i × j pixels and converting it to an i × j grayscale image gray, where i is the number of pixel rows and j the number of pixel columns; traversing the grayscale image and accumulating with s += gray[i, j] to obtain the sum of the gray pixels; calculating the average gray value avg = s / (i × j); traversing the i × j pixels: when a gray value is greater than the average, i.e. gray[i, j] > avg, the corresponding bit of the mean-hash value a1 (current face image) or a2 (image of the person to be tracked) is recorded as 1; when it is less than or equal to the average, i.e. gray[i, j] ≤ avg, the bit is recorded as 0; accumulation yields an i × j mean-hash value a1 or a2 consisting of 1s and 0s.
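A direct reading of this claim, on a nested-list grayscale image (the function name is an illustration, not from the patent):

```python
def mean_hash(gray):
    """Mean-hash bits for an i x j grayscale image (nested list of ints):
    1 where gray[i][j] > avg, else 0, exactly as in the claim."""
    i, j = len(gray), len(gray[0])
    s = sum(sum(row) for row in gray)   # s += gray[i, j] over all pixels
    avg = s / (i * j)                   # avg = s / (i * j)
    return [1 if px > avg else 0 for row in gray for px in row]
```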
7. A subway station passenger flow statistical method based on face recognition as claimed in claim 5, characterized in that: the difference hash algorithm specifically comprises:
compressing the picture to i × j pixels and converting it to an i × j grayscale image gray, where i is the number of pixel rows and j the number of pixel columns; traversing the pixels: when, within a row, a pixel is greater than the next one, i.e. gray[i, j] > gray[i, j + 1], the corresponding bit of the difference-hash value d1 (current face image) or d2 (image of the person to be tracked) is recorded as 1; when a pixel is less than or equal to the next one, i.e. gray[i, j] ≤ gray[i, j + 1], the bit is recorded as 0; accumulation yields an i × j difference-hash value d1 or d2 consisting of 1s and 0s.
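A sketch of the claim's difference hash on a nested-list grayscale image (note that comparing gray[i, j] with gray[i, j + 1] within each row yields i × (j − 1) bits; common dHash implementations resize to j + 1 columns so that i × j bits result — the function name and that remark are not from the patent):

```python
def diff_hash(gray):
    """Difference-hash bits: within each row, 1 where a pixel is greater
    than its right-hand neighbour, else 0."""
    return [1 if row[k] > row[k + 1] else 0
            for row in gray for k in range(len(row) - 1)]
```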
8. A subway station passenger flow statistical method based on face recognition as claimed in claim 5, characterized in that: the similarity calculation method specifically comprises the following steps:
traversing and comparing the mean-hash value a1 of the current face image with the mean-hash value a2 of the image of the person to be tracked, counting each position where a1 ≠ a2, the accumulated count being the similarity value an; likewise traversing and comparing the difference-hash values d1 and d2, counting each position where d1 ≠ d2, the accumulated count being the similarity value dn.
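The two similarity values are thus Hamming distances between hash bit strings; a sketch, using the embodiment's threshold of 35 (function names are illustrative):

```python
def similarity(bits_a, bits_b):
    """Count positions where the two hash values differ (Hamming distance);
    smaller means more alike."""
    return sum(1 for a, b in zip(bits_a, bits_b) if a != b)

def is_tracked_person(an, dn, threshold=35):
    """Per the embodiment, a match requires both an < 35 and dn < 35."""
    return an < threshold and dn < threshold
```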
CN201910931510.3A 2019-09-29 2019-09-29 Subway station passenger flow statistical method based on face recognition Active CN110647855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910931510.3A CN110647855B (en) 2019-09-29 2019-09-29 Subway station passenger flow statistical method based on face recognition

Publications (2)

Publication Number Publication Date
CN110647855A true CN110647855A (en) 2020-01-03
CN110647855B CN110647855B (en) 2023-04-18

Family

ID=68993079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910931510.3A Active CN110647855B (en) 2019-09-29 2019-09-29 Subway station passenger flow statistical method based on face recognition

Country Status (1)

Country Link
CN (1) CN110647855B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260140A (en) * 2020-01-19 2020-06-09 武汉中科通达高新技术股份有限公司 Method for predicting instantaneous return large passenger flow in subway station
CN111601091A (en) * 2020-06-08 2020-08-28 张世杰 Passenger flow monitoring device and method for subway station
CN111784750A (en) * 2020-06-22 2020-10-16 深圳日海物联技术有限公司 Method, device and equipment for tracking moving object in video image and storage medium
CN112308193A (en) * 2020-10-29 2021-02-02 山西大学 Station ticket checking entrance people flow data collection device
CN113129580A (en) * 2021-03-09 2021-07-16 北京航空航天大学 Vehicle dispatching system based on big dipper data and face identification
CN114004486A (en) * 2021-10-29 2022-02-01 广州广电运通智能科技有限公司 Rail transit passenger flow scheduling system, method, storage medium and equipment
CN114973680A (en) * 2022-07-01 2022-08-30 哈尔滨工业大学 Bus passenger flow obtaining system and method based on video processing
CN115257860A (en) * 2022-07-11 2022-11-01 深圳益实科技有限公司 Intelligent reminding system and method capable of providing subway arrival information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279482A (en) * 2015-09-28 2016-01-27 孙佩瑜 Face identification method used for subway monitoring
CN106296307A (en) * 2016-08-24 2017-01-04 郑州天迈科技股份有限公司 Electronic stop plate advertisement delivery effect based on recognition of face analyzes method
CN106778632A (en) * 2016-12-22 2017-05-31 东南大学 Track traffic large passenger flow recognizes early warning system and method
CN107992786A (en) * 2016-10-27 2018-05-04 中国科学院沈阳自动化研究所 A kind of people streams in public places amount statistical method and system based on face
CN108182403A (en) * 2017-12-28 2018-06-19 河南辉煌城轨科技有限公司 Subway train passenger flow statistical method based on image
CN108334642A (en) * 2018-03-23 2018-07-27 东华大学 A kind of similar head portrait searching system
CN108805111A (en) * 2018-09-07 2018-11-13 杭州善贾科技有限公司 A kind of detection of passenger flow system and its detection method based on recognition of face


Also Published As

Publication number Publication date
CN110647855B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110647855B (en) Subway station passenger flow statistical method based on face recognition
CN109886096B (en) Wisdom tourism supervision and safe emergency management linkage command system
EP4012606A1 (en) Method for estimating and presenting passenger flow, system, and computer readable storage medium
CN110826538A (en) Abnormal off-duty identification system for electric power business hall
CN108710827B (en) A kind of micro- police service inspection in community and information automatic analysis system and method
CN113033293A (en) Access & exit management equipment, system and method
CN105825350A (en) Video analysis-based intelligent tourism early warning decision-making system and use method thereof
CN116452379B (en) Intelligent campus management system based on big data
CN107483894A (en) Judge to realize the high ferro station video monitoring system of passenger transportation management based on scene
CN113269902A (en) Intelligent building worker direction and attendance management method and system
CN110473428A (en) A kind of intelligent parking method, apparatus and system
CN110796014A (en) Garbage throwing habit analysis method, system and device and storage medium
CN108009491A (en) A kind of object recognition methods solved in fast background movement and system
WO2023241595A1 (en) Parking space range processing method and computing device
CN111091047B (en) Living body detection method and device, server and face recognition equipment
CN112633249A (en) Embedded pedestrian flow detection method based on light deep learning framework
CN116486332A (en) Passenger flow monitoring method, device, equipment and storage medium
CN110046535B (en) Intelligent travel time prediction system, method and storage medium based on machine learning
CN114283386A (en) Analysis and adaptation intensive scene people stream real-time monitoring system based on big data
CN115116174A (en) System and method for automatically acquiring pass health code based on user authorization information
CN110796091B (en) Sales exhibition hall passenger flow batch statistics based on face recognition technology and assisted by manual correction
CN114882518A (en) Standardized management system of construction engineering drawing based on image recognition technology
CN113052058A (en) Vehicle-mounted passenger flow statistical method and device and storage medium
CN113033314A (en) Mobile scenic spot intelligent service system and service method for travel peak deployment
KR20120105795A (en) Method for guiding train density, system and apparatus therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant