CN112417939A - Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method - Google Patents

Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method

Info

Publication number
CN112417939A
CN112417939A
Authority
CN
China
Prior art keywords
passenger
getting
image
image recognition
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010809680.7A
Other languages
Chinese (zh)
Inventor
林坚
李军
周金明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xingzheyi Intelligent Transportation Technology Co ltd
Original Assignee
Nanjing Xingzheyi Intelligent Transportation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xingzheyi Intelligent Transportation Technology Co ltd filed Critical Nanjing Xingzheyi Intelligent Transportation Technology Co ltd
Publication of CN112417939A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a passenger flow OD data acquisition method based on image recognition, comprising the following steps: step 1, obtaining the image information of each passenger getting on or off the vehicle and marking the passenger area in each image; step 2, extracting the features of each passenger area as that passenger's unique characteristic attribute; step 3, performing feature matching between all boarding passengers and all alighting passengers, where the most closely matched pair of features is taken to be the same person. Image recognition can associate the boarding and alighting images of a given passenger and correlate each passenger's spatio-temporal information, so that complete boarding and alighting passenger flow OD information is obtained accurately for each passenger without requiring any cooperation from the passengers, providing effective data support for bus network optimization and intelligent scheduling.

Description

Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method
Technical Field
The invention relates to the field of image recognition and machine learning, in particular to a passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, a server and a model training method.
Background
In recent years, urban public transport has become one of the most important parts of urban development. Buses are an important component of public transport, and reasonable bus route planning is a problem that urban development urgently needs to solve. Effective acquisition of bus passenger flow OD (origin-destination) data is extremely important for bus network optimization, service organization, intelligent scheduling and station layout. Existing bus OD acquisition methods all require passenger cooperation, such as swiping a card or carrying a signal-receiving device, and therefore cannot obtain accurate passenger flow OD data.
Disclosure of Invention
Aiming at the problem in the prior art that acquisition of passenger flow OD data requires passenger cooperation, without which accurate data cannot be obtained, the invention captures images of passengers getting on and off through camera devices installed at the front and rear doors of a bus. Image recognition associates the boarding and alighting images of a given passenger and simultaneously correlates each passenger's spatio-temporal information (time, place and station information), so that complete boarding and alighting passenger flow OD information is acquired accurately for each passenger without any cooperation from the passengers, providing effective data support for bus network optimization and intelligent scheduling.
In a first aspect, a passenger flow OD data acquisition method based on image recognition is provided, comprising the following steps:
step 1, obtaining the image information of each passenger getting on or off the vehicle and marking the passenger area in the image; the boarding/alighting image information of each passenger is associated with the time, place and/or station at which that passenger gets on or off;
step 2, extracting the features of each passenger area as that passenger's unique characteristic attribute;
and step 3, performing feature matching between all boarding passengers and all alighting passengers, where the most closely matched pair of features is taken to be the same person.
Preferably, acquiring the image information of each passenger getting on or off and marking the passenger area in step 1 specifically comprises: acquiring a complete image track of each passenger getting on or off, and marking the target frame region containing the passenger in each image of the complete image track as the passenger area.
Further, in step 1, marking the target frame region containing the passenger in each image of the complete image track as the passenger area specifically comprises step 11: detecting the position of the passenger in the video frame image by an image target detection method and marking the target frame region containing the passenger; if the passenger target frame appears for the first time, assigning it a new passenger id number; otherwise, computing its similarity with the corresponding target frame of the previous frame, and assigning it the same passenger id number if the similarity exceeds a threshold alpha, or a new passenger id number otherwise.
Preferably, after marking the target frame region containing the passenger in each image of the complete image track as the passenger area, step 1 further comprises step 13: after extraction of a passenger's track points is finished, setting a displacement length threshold according to the position of the camera device; if the track displacement length of that passenger id is smaller than a threshold beta, the track is judged invalid and its result is discarded.
Preferably, after marking the target frame region containing the passenger in each image of the complete image track as the passenger area, step 1 further comprises step 12: completing the passenger target frame for video frame images in which the target detection method loses the target.
Preferably, the completion in step 12 is performed as follows: if a passenger target frame with a given id number was detected or tracked in the previous video frame image, no passenger target frame for that id number is detected in the current video frame image, and the passenger with that id number is detected more than n times in the m consecutive video frame images after the previous video frame, where 1 ≤ n < m, then an image tracking method performs tracking prediction on the current video frame image using the passenger target frame detected or tracked in the previous video frame image; the resulting tracking prediction frame serves as the passenger target frame for the id number lost by the detection method in the current frame, and the target frames of all frames for that id number are combined to form the passenger's complete boarding or alighting image track.
Preferably, extracting the features of each passenger area in step 2 specifically comprises: extracting features from all target frames in the complete image track of each boarding or alighting passenger within a user-defined time period, computing the average feature over all target frames as that passenger's unique boarding or alighting characteristic attribute, and storing the boarding and alighting characteristic attributes separately.
Further, the feature extraction of each passenger area in step 2 is performed with a deep learning twin network, where the twin network comprises two networks of the same or different types and two corresponding image input ends.
Preferably, performing feature matching between all boarding and alighting passengers in step 3 specifically comprises applying a feature matching method to the boarding and alighting characteristic attributes of all passengers within the user-defined time period; the most closely matched pair of features is taken to be the same person, i.e. the boarding and alighting characteristic attributes of the same passenger are associated.
Further, the feature matching method in step 3 specifically comprises computing the Euclidean distances between the features of all boarding passengers and the features of all alighting passengers, removing the Euclidean distance values for which the boarding time is after the alighting time, and then matching boarding and alighting passengers from the Euclidean distance data using an iterative algorithm.
In a second aspect, a passenger flow OD data acquisition device based on image recognition is provided, comprising an image acquisition module, a feature extraction module and a feature matching module, which are electrically connected in sequence;
the image acquisition module is used for executing the step 1 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation modes;
the feature extraction module is used for executing the step 2 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation modes;
the feature matching module is used for executing step 3 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation manners.
In a third aspect, a mobile terminal device is provided, where the device includes any one of all possible implementation manners of the device for acquiring the passenger flow OD data based on image recognition.
In a fourth aspect, a server is provided, where the server includes any one of all possible implementation manners of the image recognition-based passenger flow OD data obtaining apparatus.
In a fifth aspect, a passenger flow OD data model training method based on image recognition is provided, in which a deep learning twin network is used as the feature extraction model of the passenger flow OD, for extracting the features of each passenger area in step 2 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation manners; the deep learning twin network comprises two networks of the same or different types and two corresponding image input ends, and the training process comprises the following steps:
(1) constructing a training sample set
Acquiring image sequences of passengers getting on and off, constructing a training sample set using image preprocessing and image augmentation measures, and inputting the boarding image and the alighting image of the same passenger into the two image input ends respectively;
(2) model training
Training the twin network with the training sample set; training draws the boarding feature and the alighting feature of the same passenger closer together, while keeping the distance between the boarding and alighting features of different passengers unchanged or pushing them further apart. The network loss during training is the weighted sum of classification loss 1, the feature loss and classification loss 2;
(3) performing feature extraction
Using either of the two networks of the trained twin network, features are extracted from all target frames in the complete image track of each boarding or alighting passenger within a user-defined time period, and the average feature over all target frames serves as that passenger's unique boarding or alighting characteristic attribute; all boarding characteristic attributes are stored in a boarding feature storage module and all alighting characteristic attributes in an alighting feature storage module, forming the passengers' boarding and alighting feature libraries.
Preferably, the two networks of the same type comprised by the deep learning twin network are resnet50 networks.
Preferably, the feature loss is a contrastive loss, triplet loss, boundary mining loss, center loss or regularized face loss over the two networks' output features.
Compared with the prior art, one of the technical schemes has the following beneficial effects:
By acquiring the image tracks of passengers as they get on and off, image recognition can associate the boarding and alighting images of a given passenger, while correlating spatio-temporal information such as the time, place and station of boarding and alighting, so that the complete boarding and alighting passenger flow OD information of each passenger is acquired accurately. The invention achieves accurate and complete acquisition of passenger flow OD information, resolved to each individual passenger, with high data accuracy; meanwhile, the passenger flow OD information is obtained without passenger cooperation, reducing the influence of human factors; and effective data support is provided for bus network optimization and intelligent scheduling.
Drawings
Fig. 1 is a flow chart and a schematic diagram of a passenger flow OD information acquisition method and apparatus based on image recognition according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a deep learning twin network structure according to an embodiment of the present invention;
Detailed Description
To illustrate the technical solutions and working principles of the present invention, the following describes the invention in detail with reference to the accompanying drawings and specific embodiments. The described embodiments are obviously only a part of the embodiments of the present application rather than all of them; all other embodiments obtained by a person of ordinary skill in the art without inventive labor, based on the embodiments in the present application, fall within the protection scope of the present application.
The terms "step 1," "step 2," "step 3," "step 12," "step 13," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
The following takes a passenger flow OD scenario in the public transportation field as an exemplary application scenario of the embodiments of the application, which provide a passenger flow OD data acquisition method based on image recognition.
Fig. 1 is a flow chart and a schematic device diagram of a passenger flow OD information acquisition method based on image recognition according to an embodiment of the present invention, and in combination with the flow chart, the method mainly includes the following steps:
step 1, obtaining the image information of each passenger getting on or off the vehicle and marking the passenger area in the image; the boarding/alighting image information of each passenger is associated with the time, place and/or station of boarding or alighting (the associated information covers three cases: (1) time and place; (2) time and station; (3) time, place and station);
preferably, the step 1 of acquiring image information of each passenger getting on or off the vehicle and marking the passenger area in the image specifically includes: acquiring a complete image track of each passenger getting on or off the vehicle, and marking a target frame area containing the passenger of each image in the complete image track as a passenger area; further, a target frame area containing a passenger of each image in the complete image track is marked as a passenger area, specifically: and step 11, detecting the position of the passenger in the video frame image through an image target detection method, marking a target frame area containing the passenger, and adding a passenger id number to the passenger target frame when the passenger target frame appears for the first time. The detailed steps of the step 11 are as follows: the method comprises the steps that video streams of passengers getting on or off the bus at front and rear doors are obtained through original camera devices installed at the front and rear doors of the bus, the positions of the passengers in video frame images are detected through an image target detection method, and target frame areas containing the passengers are marked; if the passenger target frame appears for the first time, adding a passenger id number to the passenger target frame; when the passenger target frame of the id number is not detected in the continuous N (10-30) video frame images after the last video frame, the passenger track detection of the id number is finished, and the target frames of all the passengers of the id number are combined to form a complete image track point for the passengers to get on or off the vehicle.
In actual passenger flow there are cases where a passenger pauses at the door just after boarding, realizes the bus is wrong or asks the driver about the route, and then gets off through the same door before it closes; in addition, image recognition may misidentify objects left in the camera's field of view as people. To avoid the passenger counting errors these cases cause, step 1 (after marking the target frame region containing the passenger in each image of the complete image track as the passenger area) preferably further comprises step 13: after extraction of a passenger's track points is finished, a displacement length threshold is set according to the position of the camera device; if the track displacement length of that passenger id is smaller than a threshold beta, the track is judged invalid and its result discarded, where the passenger's displacement is the length of the vector from the track's starting point to its end point.
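The step-13 filter above can be sketched as follows. The threshold value `BETA` and the use of box centers as track points are assumptions for illustration; the patent only requires a camera-position-dependent displacement threshold beta.

```python
import math

BETA = 50.0  # displacement threshold beta in pixels, set per camera position (assumed value)

def center(box):
    """Center of a box (x1, y1, x2, y2), used here as the track point."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def is_valid_track(boxes, beta=BETA):
    """A track is valid only if the vector from its first track point
    to its last track point is at least `beta` long."""
    if len(boxes) < 2:
        return False
    (sx, sy), (ex, ey) = center(boxes[0]), center(boxes[-1])
    return math.hypot(ex - sx, ey - sy) >= beta
```

Tracks rejected by this test (a lingering passenger, or a static object misdetected as a person) are simply discarded before feature extraction.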
Preferably, to compensate for the reduced accuracy caused by the detection method's inherent miss rate, target completion is performed for video frame images in which the target detection method loses the target. After step 1 (marking the target frame region containing the passenger in each image of the complete image track as the passenger area), the method therefore further comprises step 12: completing the passenger target frame for video frame images lost by the target detection method. Specifically: if a passenger target frame with a given id number was detected or tracked in the previous video frame image, no passenger target frame for that id number is detected in the current video frame image, and the passenger with that id number is detected more than n times in the m (10-30) consecutive video frame images after the previous video frame, where 1 ≤ n < m, then an image tracking method performs tracking prediction on the current video frame image using the passenger target frame detected or tracked in the previous video frame image; the resulting tracking prediction frame serves as the passenger target frame for the id number lost in the current frame, and the target frames of all frames for that id number are combined to form the passenger's complete boarding or alighting image track;
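The step-12 completion rule can be sketched as below. The patent does not name the image tracking method; a constant-velocity box prediction stands in for it here, and the function names and default values of `m` and `n` are illustrative assumptions.

```python
def should_complete(detected_flags, i, m=20, n=3):
    """detected_flags[i] is True if the id number was detected in frame i.
    A missing frame i qualifies for completion when the id was present in
    the previous frame and reappears more than n times in the next m frames
    (patent: 1 <= n < m)."""
    if i == 0 or detected_flags[i] or not detected_flags[i - 1]:
        return False
    future = detected_flags[i + 1:i + 1 + m]
    return sum(future) > n

def predict_box(prev_box, prev_prev_box):
    """Constant-velocity prediction of the lost box, an illustrative
    stand-in for the patent's unspecified image tracking method."""
    return tuple(2 * p - q for p, q in zip(prev_box, prev_prev_box))
```

In practice a visual tracker seeded with the previous frame's box would replace `predict_box`; the predicted frame is then inserted into the track so the combined target frames form a gap-free image track.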
It should be noted that the order of step 12 and step 13 is interchangeable.
Step 2, extracting the characteristics of each passenger area as the unique characteristic attribute of the passenger;
preferably, the extracting the feature of each passenger region in step 2 specifically includes: respectively extracting features of all target frames in the complete image track of each passenger getting on or off the bus within a user-defined time period, taking the extracted features as the unique getting on or off characteristic attribute of a certain passenger, and respectively storing the getting on characteristic attribute and the getting off characteristic attribute of the passenger to form a getting on characteristic library and a getting off characteristic library of the passenger; the characteristic attribute is associated with time information; the custom time period can be selected from a time range of a shift, and can also be a time dimension of hours or days and the like.
Further, the feature extraction of each passenger area in step 2 is performed with a deep learning twin (Siamese) network, which comprises two networks of the same or different types and two corresponding image input ends, as shown in fig. 2. Preferably, the twin network comprises two resnet50 networks, and the deep learning twin network is trained as follows:
(1) constructing a training sample set
Image sequences of passengers getting on and off are collected, a training sample set is constructed using image preprocessing and image augmentation measures, and the boarding image and the alighting image of the same passenger are input into the two image input ends respectively.
(2) Model training
The twin network is trained with the training sample set; training draws the boarding feature and the alighting feature of the same passenger closer together, while keeping the distance between the boarding and alighting features of different passengers unchanged or pushing them further apart. The network loss during training is the weighted sum of classification loss 1, the feature loss and classification loss 2, with a preferred weight ratio of 1:3:1. The feature loss is the contrastive loss of the two networks' output features, but is not limited to contrastive loss; it may also be triplet loss, boundary mining loss (MSML), center loss, regularized face loss (NormFace loss), and the like.
(3) Performing feature extraction
Using either of the two networks of the trained twin network, features are extracted from all target frames in the complete image track of each boarding or alighting passenger within a user-defined time period, serving as that passenger's unique boarding or alighting characteristic attribute; all boarding characteristic attributes are stored in a boarding feature storage module and all alighting characteristic attributes in an alighting feature storage module, forming the passengers' boarding and alighting feature libraries.
Step 3, performing feature matching between all boarding passengers and all alighting passengers; the most closely matched pair of features is taken to be the same person.
Matching boarding and alighting passengers in step 3 associates the boarding and alighting characteristic attributes of the same passenger. Since each passenger's boarding and alighting image information is associated with the time, place and/or station of boarding and alighting, the result is the passenger flow OD data: when a given passenger boarded at which place and/or station, and alighted at which place and/or station.
Preferably, performing feature matching between all boarding and alighting passengers in step 3 specifically comprises applying a feature matching method to the boarding and alighting characteristic attributes of all passengers within the user-defined time period; the most closely matched pair of features is taken to be the same person, i.e. the boarding and alighting characteristic attributes of the same passenger are associated. Further, the feature matching method specifically comprises computing all distances between the features of all boarding passengers and the features of all alighting passengers, and removing the distance values for which the boarding time is after the alighting time: the corresponding entries in the table are set to null, or to a number or symbol that effectively excludes them, such as 1000. Boarding and alighting passengers are then matched from all the distance data by an iterative algorithm, yielding more accurate passenger flow OD data, i.e. when a passenger boards at which place and/or station and then alights at which place and/or station.
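The distance table and its masking can be sketched as follows. The greedy closest-pair loop is one plausible reading of the patent's unspecified "iterative algorithm" (the Hungarian algorithm via `scipy.optimize.linear_sum_assignment` would be a common alternative); the sentinel value 1000 follows the text, and `match_od` is an invented name.

```python
import numpy as np

INVALID = 1000.0   # sentinel that effectively excludes a pair (as in the text)

def match_od(on_feats, on_times, off_feats, off_times):
    """Return (boarding index, alighting index) pairs for matched passengers."""
    on = np.stack(on_feats)          # (B, d) boarding attributes
    off = np.stack(off_feats)        # (A, d) alighting attributes
    # Euclidean distance between every boarding and every alighting feature
    d = np.linalg.norm(on[:, None, :] - off[None, :, :], axis=2)
    # mask physically impossible pairs: boarding time after alighting time
    for i, t_on in enumerate(on_times):
        for j, t_off in enumerate(off_times):
            if t_on >= t_off:
                d[i, j] = INVALID
    pairs = []
    # iteratively take the globally closest remaining pair
    while np.min(d) < INVALID:
        i, j = np.unravel_index(np.argmin(d), d.shape)
        pairs.append((int(i), int(j)))
        d[i, :] = INVALID            # each passenger is matched at most once
        d[:, j] = INVALID
    return pairs
```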
The target detection, target tracking, feature extraction and feature matching methods are not limited to the algorithms mentioned above; any algorithm that achieves the stated purpose may be used, and such algorithms may be freely combined to obtain the technical solution of the invention and achieve its effect.
In a second aspect, a passenger flow OD data acquisition device based on image recognition is provided, comprising an image acquisition module, a feature extraction module and a feature matching module, which are electrically connected in sequence;
the image acquisition module is used for executing the step 1 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation modes;
the feature extraction module is used for executing the step 2 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation modes;
the feature matching module is configured to execute step 3 of the passenger flow OD data acquisition method based on image recognition in any one of all possible implementation manners.
In a third aspect, a mobile terminal device is provided, where the mobile terminal device includes any one of all possible implementation manners of the passenger flow OD data obtaining apparatus based on image recognition.
In a fourth aspect, a server is provided, where the server includes a passenger flow OD data acquisition device based on image recognition in any one of all possible implementation manners.
In a fifth aspect, a passenger flow OD data model training method based on image recognition is provided, wherein a deep learning twin network is used as a feature extraction model of a passenger flow OD, and is used for extracting features of each passenger region in step 2 in a passenger flow OD data acquisition method based on image recognition, the deep learning twin network comprises two networks of the same type or different types and two corresponding image input ends, and the training process comprises the following steps:
(1) constructing a training sample set
Acquiring an image sequence of getting on and off a passenger, constructing a training sample set by using image preprocessing and image amplification measures, and respectively inputting an getting-on image and a getting-off image of the same passenger into two image input ends;
(2) model training
Training the twin network with the training sample set; training draws the boarding feature and the alighting feature of the same passenger closer together, while keeping the distance between the boarding and alighting features of different passengers unchanged or pushing them further apart. The network loss during training is the weighted sum of classification loss 1, the feature loss and classification loss 2;
(3) performing feature extraction
Using either of the two networks of the trained twin network, features are extracted from all target frames in the complete image track of each boarding or alighting passenger within a user-defined time period, and the extracted features serve as that passenger's unique boarding or alighting characteristic attribute; all boarding characteristic attributes are stored in a boarding feature storage module and all alighting characteristic attributes in an alighting feature storage module, forming the passengers' boarding and alighting feature libraries.
Preferably, the two networks of the same type comprised by the deep learning twin network are resnet50 networks;
preferably, the network loss during training is the weighted sum of classification loss 1, the feature loss and classification loss 2, with a preferred weight ratio of 1:3:1. The feature loss is a contrastive loss over the two networks' output features, but it is not limited to the contrastive loss and may also be a triplet loss, a margin sample mining loss (MSML), a center loss, a normalized face loss (NormFace loss), or the like.
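For illustration only, the weighted training loss described above (classification loss 1, feature loss, classification loss 2 with the preferred 1:3:1 weights, and a contrastive feature loss) can be sketched in plain numpy. The function names, the margin value, and the use of softmax cross-entropy for the two classification heads are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss on a pair of feature vectors: same=1 pulls the
    getting-on and getting-off features together, same=0 pushes them
    apart up to `margin` (an assumed margin value)."""
    d = np.linalg.norm(f1 - f2)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def cross_entropy(logits, label):
    """Softmax cross-entropy of one branch's id-classification head."""
    z = logits - logits.max()          # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def twin_loss(logits1, logits2, f1, f2, label1, label2, w=(1.0, 3.0, 1.0)):
    """Weighted sum: classification loss 1 + feature loss + classification
    loss 2, with the preferred 1:3:1 weighting from the description."""
    same = int(label1 == label2)
    return (w[0] * cross_entropy(logits1, label1)
            + w[1] * contrastive_loss(f1, f2, same)
            + w[2] * cross_entropy(logits2, label2))
```

In an actual training run these quantities would be computed per batch inside a deep learning framework and backpropagated through both branches of the twin network.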
The invention has been described above by way of example. Obviously, the specific implementation of the invention is not limited to the manner described above. Various insubstantial modifications made using the method concepts and technical solutions of the invention, or direct application of those concepts and solutions to other occasions without improvement, all fall within the protection scope of the invention.

Claims (16)

1. The passenger flow OD data acquisition method based on image recognition is characterized by comprising the following steps of:
step 1, obtaining image information of getting on or off of each passenger and marking a passenger area in an image; the image information of each passenger getting on or off the vehicle is related to the time, the place and/or the station information of the passengers getting on or off the vehicle;
step 2, extracting the characteristics of each passenger area as the unique characteristic attribute of the passenger;
and 3, performing feature matching between all getting-on passengers and all getting-off passengers, wherein the closest feature match is taken as the same person.
2. The method for acquiring passenger flow OD data based on image recognition according to claim 1, wherein step 1 acquires the image information of each passenger getting on or off the vehicle and marks the passenger area in the image, specifically: acquiring the complete image track of each passenger getting on or off, and marking the target frame area containing the passenger in each image of the complete image track as the passenger area.
3. The method for acquiring passenger flow OD data based on image recognition according to claim 2, wherein step 1 marks the target frame area containing the passenger in each image of the complete image track as the passenger area, specifically: step 11, detecting the position of the passenger in the video frame image by an image target detection method and marking the target frame area containing the passenger; if the passenger target frame appears for the first time, assigning a new passenger id number to it; otherwise, performing a similarity calculation with the corresponding target frame of the previous frame; if the similarity calculation result is greater than a threshold value alpha, the target frame is regarded as having the same passenger id number; otherwise, a new passenger id number is assigned to the target frame.
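The id-assignment logic of claim 3 can be sketched as follows. The claim does not specify the similarity measure; using intersection-over-union (IoU) between consecutive-frame target frames is an illustrative assumption, as are all function names:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, used here
    as the frame-to-frame similarity measure (an assumed choice)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def assign_ids(prev_tracks, detections, alpha, next_id):
    """prev_tracks: {id: target frame in the previous video frame};
    detections: target frames detected in the current frame.
    A detection keeps the id of the previous-frame box it overlaps best,
    provided the similarity exceeds the threshold alpha; otherwise it is
    treated as a passenger appearing for the first time and given a new
    id. (Sketch only: it does not prevent two detections from claiming
    the same previous id.)"""
    tracks = {}
    for det in detections:
        best_id, best_sim = None, alpha
        for pid, box in prev_tracks.items():
            sim = iou(det, box)
            if sim > best_sim:
                best_id, best_sim = pid, sim
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        tracks[best_id] = det
    return tracks, next_id
```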
4. The method for acquiring passenger flow OD data based on image recognition according to any one of claims 2-3, wherein step 1 marks the target frame area containing the passenger in each image of the complete image track as the passenger area, and then further comprises step 13: after the extraction of a passenger's track points is finished, setting a displacement length threshold according to the position of the camera device; if the track displacement length of the passenger with that id is smaller than a threshold value beta, the track is judged invalid and its result is discarded.
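The validity check of claim 4 amounts to a single displacement comparison. In this sketch, "track displacement length" is interpreted as the straight-line start-to-end displacement of the target-frame centers, which is an assumption; the function name is likewise illustrative:

```python
import math

def track_is_valid(track_points, beta):
    """track_points: (x, y) centers of one id's target frames, in order.
    Keep the track only if its start-to-end displacement reaches the
    camera-dependent threshold beta; otherwise judge it invalid and
    discard it."""
    if len(track_points) < 2:
        return False
    (x0, y0), (x1, y1) = track_points[0], track_points[-1]
    return math.hypot(x1 - x0, y1 - y0) >= beta
```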
5. The method for acquiring passenger flow OD data based on image recognition according to any one of claims 2-3, wherein step 1 marks the target frame area containing the passenger in each image of the complete image track as the passenger area, and then further comprises step 12: completing the passenger target frame for video frame images missed by the target detection method.
6. The method for acquiring passenger flow OD data based on image recognition according to claim 5, wherein step 12 completes the passenger target frame for video frame images missed by the target detection method, specifically: if a passenger target frame with a certain id number was detected or tracked in a previous video frame image, the current video frame image contains no detected passenger target frame for that id number, and the passenger with that id number is detected more than n times in the m consecutive video frames following the previous video frame, where 1 ≤ n < m, then an image tracking method performs tracking prediction on the current video frame image using the passenger target frame detected or tracked in the previous video frame image; the resulting tracking prediction frame is taken as the passenger target frame for the id number lost by the detection method in the current frame, and the target frames of all frames for that id number are combined to form the complete image track points of the passenger getting on or off the vehicle.
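The gap-filling rule of claim 6 can be sketched as below. Carrying the previous frame's box forward stands in for the tracking-prediction step; a real implementation would run an image tracker (e.g. KCF or a Siamese tracker) instead, and the function name and the use of `>=` for "exceeds" are illustrative assumptions:

```python
def complete_track(detections, m, n):
    """detections: per-frame list of one id's target frame, with None
    where the detector missed it. A missed frame is filled only when the
    id is detected at least n times in the next m frames (1 <= n < m),
    so that brief detector misses are bridged but passengers who truly
    left the scene are not hallucinated back in."""
    boxes = list(detections)
    for t in range(1, len(boxes)):
        if boxes[t] is None and boxes[t - 1] is not None:
            future = boxes[t + 1 : t + 1 + m]
            if sum(b is not None for b in future) >= n:
                boxes[t] = boxes[t - 1]  # stand-in for tracking prediction
    return boxes
```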
7. The method for acquiring passenger flow OD data based on image recognition according to any one of claims 2-3 or 6, wherein the step 2 of extracting the features of each passenger region specifically comprises: respectively extracting features from all target frames in the complete image track of each passenger getting on or off the bus within a user-defined time period, calculating the average feature of all target frames as the unique getting-on or getting-off feature attribute of the passenger, and storing the getting-on feature attribute and the getting-off feature attribute of the passenger separately.
8. The method for acquiring passenger flow OD data based on image recognition according to claim 7, wherein the feature extraction of each passenger region in step 2 is performed by a deep learning twin network, the twin network comprising two networks of the same type or of different types and two corresponding image input ends.
9. The method for acquiring passenger flow OD data based on image recognition according to any one of claims 1-3, 6 or 8, wherein the feature matching of all getting-on and getting-off passengers in step 3 specifically comprises: within a user-defined time period, applying a feature matching method to the feature attributes of all getting-on and getting-off passengers; the closest feature match is taken as the same person, that is, the getting-on and getting-off feature attributes of the same passenger are associated.
10. The method for acquiring passenger flow OD data based on image recognition according to claim 9, wherein the feature matching method in step 3 specifically comprises: calculating all Euclidean distances between the features of all getting-on passengers and the features of all getting-off passengers, removing every Euclidean distance value whose getting-on time is later than its getting-off time, and then using an iterative algorithm on the remaining Euclidean distance data to match getting-on and getting-off passengers one to one.
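One possible reading of the "iterative algorithm" of claim 10 is a greedy nearest-pair elimination, sketched below; an optimal assignment (e.g. the Hungarian algorithm) could be substituted. The function name and greedy strategy are illustrative assumptions:

```python
import numpy as np

def match_od(on_feats, off_feats, on_times, off_times):
    """Greedy one-to-one OD matching. All pairwise Euclidean distances
    between getting-on and getting-off features are computed; pairs whose
    getting-on time is later than the getting-off time are removed (set
    to infinity); then the smallest remaining distance is matched first
    and its row and column are eliminated, repeating until nothing
    matchable remains."""
    d = np.linalg.norm(on_feats[:, None, :] - off_feats[None, :, :], axis=2)
    for i, t_on in enumerate(on_times):
        for j, t_off in enumerate(off_times):
            if t_on > t_off:        # boarding cannot happen after alighting
                d[i, j] = np.inf
    pairs = []
    while np.isfinite(d).any():
        i, j = np.unravel_index(np.argmin(d), d.shape)
        pairs.append((int(i), int(j)))
        d[i, :] = np.inf            # each passenger is matched at most once
        d[:, j] = np.inf
    return pairs
```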
11. The passenger flow OD data acquisition device based on image recognition is characterized by comprising an image acquisition module, a feature extraction module and a feature matching module, wherein the image acquisition module, the feature extraction module and the feature matching module are electrically connected in sequence;
the image acquisition module is used for executing the step 1 of the passenger flow OD data acquisition method based on image recognition according to any one of claims 1 to 10;
the feature extraction module is used for executing the step 2 of the image recognition-based passenger flow OD data acquisition method according to any one of claims 1 to 10;
the feature matching module is used for executing the step 3 of the image recognition-based passenger flow OD data acquisition method according to any one of claims 1 to 10.
12. A mobile terminal device, characterized in that the device comprises the passenger flow OD data acquisition device based on image recognition according to claim 11.
13. A server, characterized in that the server comprises the image recognition-based passenger flow OD data acquisition device of claim 11.
14. The passenger flow OD data model training method based on image recognition is characterized in that a deep learning twin network is adopted as the feature extraction model of the passenger flow OD and is used for extracting the features of each passenger region in step 2 of the passenger flow OD data acquisition method based on image recognition according to any one of claims 1 to 10; the deep learning twin network comprises two networks of the same type or of different types and two corresponding image input ends, and the training process comprises the following steps:
(1) constructing a training sample set
Acquiring image sequences of passengers getting on and off the vehicle, constructing a training sample set using image preprocessing and image augmentation measures, and inputting the getting-on image and the getting-off image of the same passenger into the two image input ends respectively;
(2) model training
Training the deep learning twin network with the training sample set: through training, the getting-on feature and the getting-off feature of the same passenger are pulled closer together, while the getting-on and getting-off features of different passengers are kept apart or pushed further apart. The network loss during training is the weighted sum of classification loss 1, the feature loss and classification loss 2;
(3) performing feature extraction
Using either of the two networks of the trained twin network, extracting features from all target frames in the complete image track points of each getting-on/getting-off passenger within a user-defined time period, calculating the average feature of all target frames as the unique getting-on or getting-off feature attribute of a given passenger, storing all getting-on feature attributes in a getting-on feature storage module and all getting-off feature attributes in a getting-off feature storage module, thereby forming the passenger getting-on and getting-off feature libraries.
15. The passenger flow OD data model training method based on image recognition according to claim 14, wherein the two networks of the same type comprised by the deep learning twin network are resnet50 networks.
16. The passenger flow OD data model training method based on image recognition according to any one of claims 14-15, wherein the feature loss over the two networks' output features is a contrastive loss, a triplet loss, a margin sample mining loss, a center loss, or a normalized face loss.
CN202010809680.7A 2019-08-21 2020-08-13 Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method Pending CN112417939A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019107715898 2019-08-21
CN201910771589 2019-08-21

Publications (1)

Publication Number Publication Date
CN112417939A 2021-02-26

Family

ID=74853896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809680.7A Pending CN112417939A (en) 2019-08-21 2020-08-13 Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method

Country Status (1)

Country Link
CN (1) CN112417939A (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201812378U (en) * 2010-08-18 2011-04-27 上海遥薇(集团)有限公司 Passenger information automatic processing device based on image identification, GPS and wireless communication
CN105913367A (en) * 2016-04-07 2016-08-31 北京晶众智慧交通科技股份有限公司 Public bus passenger flow volume detection system and method based on face identification and position positioning
CN106781451A (en) * 2016-11-23 2017-05-31 李雅琪 A kind of bus passenger flow OD data acquisition devices and method
CN107563347A (en) * 2017-09-20 2018-01-09 南京行者易智能交通科技有限公司 A kind of passenger flow counting method and apparatus based on TOF camera
CN108416780A (en) * 2018-03-27 2018-08-17 福州大学 A kind of object detection and matching process based on twin-area-of-interest pond model
CN108509914A (en) * 2018-04-03 2018-09-07 华录智达科技有限公司 Bus passenger flow statistical analysis system based on TOF camera and method
CN109285376A (en) * 2018-08-09 2019-01-29 同济大学 A kind of bus passenger flow statistical analysis system based on deep learning
CN109325404A (en) * 2018-08-07 2019-02-12 长安大学 A kind of demographic method under public transport scene
CN109376596A (en) * 2018-09-14 2019-02-22 广州杰赛科技股份有限公司 Face matching process, device, equipment and storage medium
CN109543559A (en) * 2018-10-31 2019-03-29 东南大学 Method for tracking target and system based on twin network and movement selection mechanism
CN109543534A (en) * 2018-10-22 2019-03-29 中国科学院自动化研究所南京人工智能芯片创新研究院 Target loses the method and device examined again in a kind of target following
CN109815882A (en) * 2019-01-21 2019-05-28 南京行者易智能交通科技有限公司 A kind of subway carriage intensity of passenger flow monitoring system and method based on image recognition
CN110009153A (en) * 2019-04-04 2019-07-12 南京行者易智能交通科技有限公司 A kind of public transport based on OD passenger flow is arranged an order according to class and grade optimization method and system
CN110110593A (en) * 2019-03-27 2019-08-09 广州杰赛科技股份有限公司 Face Work attendance method, device, equipment and storage medium based on self study

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408587A (en) * 2021-05-24 2021-09-17 支付宝(杭州)信息技术有限公司 Bus passenger OD matching method and device and electronic equipment
CN113255552A (en) * 2021-06-04 2021-08-13 深圳市城市交通规划设计研究中心股份有限公司 Bus-mounted video passenger OD (origin-destination) analysis system, method and device and storage medium
CN113255552B (en) * 2021-06-04 2024-03-26 深圳市城市交通规划设计研究中心股份有限公司 Method and device for analyzing OD (origin-destination) of bus-mounted video passengers and storage medium
CN113538814A (en) * 2021-06-22 2021-10-22 华录智达科技股份有限公司 Intelligent bus vehicle-mounted terminal supporting digital RMB payment
CN113870254A (en) * 2021-11-30 2021-12-31 中国科学院自动化研究所 Target object detection method and device, electronic equipment and storage medium
CN114973680A (en) * 2022-07-01 2022-08-30 哈尔滨工业大学 Bus passenger flow obtaining system and method based on video processing

Similar Documents

Publication Publication Date Title
CN112417939A (en) Passenger flow OD data acquisition method and device based on image recognition, mobile terminal equipment, server and model training method
CN110472467A (en) The detection method for transport hub critical object based on YOLO v3
CN109882019B (en) Automobile electric tail door opening method based on target detection and motion recognition
CN104239867B (en) License plate locating method and system
CN107491720A (en) A kind of model recognizing method based on modified convolutional neural networks
CN112487862B (en) Garage pedestrian detection method based on improved EfficientDet model
CN106707296A (en) Dual-aperture photoelectric imaging system-based unmanned aerial vehicle detection and recognition method
US20210327040A1 (en) Model training method and system for automatically determining damage level of each of vehicle parts on basis of deep learning
CN109934127B (en) Pedestrian identification and tracking method based on video image and wireless signal
CN110516633A (en) A kind of method for detecting lane lines and system based on deep learning
CN110569785B (en) Face recognition method integrating tracking technology
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
CN110543917B (en) Indoor map matching method by utilizing pedestrian inertial navigation track and video information
CN110210433B (en) Container number detection and identification method based on deep learning
CN111353487A (en) Equipment information extraction method for transformer substation
CN111241932A (en) Automobile exhibition room passenger flow detection and analysis system, method and storage medium
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN106935059A (en) One kind positioning looks for car system, positioning to look for car method and location positioning method
CN108345878B (en) Public transport passenger flow monitoring method and system based on video
CN110569819A (en) Bus passenger re-identification method
CN114022837A (en) Station left article detection method and device, electronic equipment and storage medium
CN103605960B (en) A kind of method for identifying traffic status merged based on different focal video image
CN117292322A (en) Deep learning-based personnel flow detection method and system
CN115690046B (en) Article carry-over detection and tracing method and system based on monocular depth estimation
You et al. Research on bus passenger flow statistics based on video images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210226)