WO2021159865A1 - A bus route prediction method and system based on data calibration - Google Patents

A bus route prediction method and system based on data calibration

Info

Publication number
WO2021159865A1
WO2021159865A1 (PCT/CN2020/139574, CN2020139574W)
Authority
WO
WIPO (PCT)
Prior art keywords
bus
getting
passengers
station
boarding
Prior art date
Application number
PCT/CN2020/139574
Other languages
English (en)
French (fr)
Inventor
苏松剑
连桄雷
苏松志
蔡国榕
陈延行
杨子扬
Original Assignee
罗普特科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 罗普特科技集团股份有限公司
Publication of WO2021159865A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the present disclosure relates to the field of intelligent transportation technology, and in particular to a method and system for bus route prediction based on data calibration.
  • Buses are among the most important means of public transportation. The planning of bus routes directly determines operational efficiency, which is reflected in road congestion and carriage crowding on different routes, people's travel time, and the number of transfers they must make.
  • On-board GPS information: can be used to track the vehicle trajectory, arrival times, driving speed, and route congestion.
  • Fare-card records: can be used to estimate the number of people boarding at each stop.
  • in bus route prediction based on face recognition, a perception-free face recognition scheme must cope with occluded faces, lowered heads, side profiles, and similar situations; frontal face photos cannot be obtained for everyone, so the number of recognised and matched people falls below the actual number of people boarding and alighting. This introduces a large deviation into bus operation statistics and makes it difficult to accurately predict ride demand and plan bus routes.
  • the present disclosure proposes the following technical solutions in view of the above-mentioned defects in the prior art.
  • a method for bus route prediction based on data calibration includes:
  • a passenger counting step: obtaining the stop at which the bus halts, counting the numbers of passengers boarding and alighting there, and determining the boarding and alighting stops of the on-board passengers through facial feature matching;
  • a sending step: sending the bus stop, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the on-board passengers to the bus operation server;
  • a transfer recognition step: determining through face matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determining that passenger's transfer stop; the transfer stop, the boarding stop, and the alighting stop together constitute all the stops of the passenger's journey;
  • a prediction step: the stops passed by all passengers recognised and matched within a first time threshold are counted, and the ride data for all bus stops are obtained and multiplied by a matching-count calibration coefficient α to obtain the predicted bus travel demand data.
  • the passenger counting step is performed by a vehicle-mounted processing terminal configured to: obtain the bus's stop via the GPS module; analyse the video stream of the boarding-door surveillance camera in real time, capture faces, extract the facial features of boarding passengers, and count the number of people boarding at the stop; analyse the video stream of the alighting-door surveillance camera in real time, capture faces, extract the facial features of alighting passengers, and count the number of people alighting at the stop; and recognise and match the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers on the current vehicle;
  • the vehicle-mounted processing terminal is further configured to upload the bus stop and the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the on-board passengers, and the captured facial feature information of boarding and alighting passengers to the bus operation server over the network.
  • the operation of identifying the boarding and alighting stops of passengers on the current vehicle is: for the facial features of each alighting passenger, calculate the similarity with the facial features of every passenger who boarded; the face feature pair with the greatest similarity that also exceeds a first threshold is taken as the recognition-matching result, determining that passenger's boarding and alighting stops.
  • the transfer recognition step is performed by the bus operation server, which is configured to: after receiving the facial features of passengers boarding and alighting at each stop of each bus, store them in a database; calculate the similarity between the facial features of passengers boarding each bus and the facial features from other buses; and take the face feature pair with the greatest similarity that also exceeds a second threshold as the recognition-matching result, determining the passenger's transfer stop.
  • in the transfer recognition step, matching is restricted to cases where the distance between the boarding stop and the alighting stop is less than a first distance threshold, and the facial features of boarding passengers are matched only against the facial features of passengers who alighted within a second time threshold.
  • the present disclosure also proposes a bus route prediction system based on data calibration.
  • the system includes on-board equipment and a background bus operation server.
  • the on-board equipment includes a vehicle-mounted processing terminal, a boarding-door surveillance camera, and an alighting-door surveillance camera.
  • the vehicle-mounted processing terminal is connected to the boarding-door and alighting-door surveillance cameras, and to the bus operation server through a wireless network;
  • the vehicle-mounted processing terminal is used to: obtain the stop at which the bus halts, count the numbers of passengers boarding and alighting, determine the boarding and alighting stops of on-board passengers through facial feature matching, and send the bus stop, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the on-board passengers to the bus operation server;
  • the bus operation server determines through face matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determines that passenger's transfer stop; the transfer stop, boarding stop, and alighting stop together constitute all the stops of the passenger's journey;
  • the bus operation server is used to count the stops passed by all passengers recognised and matched within the first time threshold, obtain the ride data for all bus stops, and multiply them by the matching-count calibration coefficient α to obtain the predicted bus travel demand data.
  • the vehicle-mounted processing terminal performs the following operations to determine passengers' boarding and alighting stops: obtain the bus's stop via the GPS module; analyse the boarding-door camera's video stream in real time, capture faces, extract the facial features of boarding passengers, and count the number boarding at the stop; analyse the alighting-door camera's video stream in real time, capture faces, extract the facial features of alighting passengers, and count the number alighting at the stop; and recognise and match the facial features of boarding and alighting passengers to identify their boarding and alighting stops on the current vehicle;
  • the vehicle-mounted processing terminal is further configured to upload the bus stop and the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the on-board passengers, and the captured facial feature information of boarding and alighting passengers to the bus operation server over the network.
  • the operation of identifying the boarding and alighting stops of passengers on the current vehicle is: for the facial features of each alighting passenger, calculate the similarity with the facial features of every passenger who boarded; the face feature pair with the greatest similarity that also exceeds the first threshold is taken as the recognition-matching result, determining that passenger's boarding and alighting stops.
  • the bus operation server performs the following operations to determine a passenger's transfer stop: receive the facial features of passengers boarding and alighting at each stop of each bus and store them in the database; calculate the similarity between the facial features of passengers boarding each bus and the facial features from other buses; and take the face feature pair with the greatest similarity that also exceeds the second threshold as the recognition-matching result, determining the passenger's transfer stop.
  • matching is restricted to cases where the distance between the boarding stop and the alighting stop is less than the first distance threshold, and the facial features of boarding passengers are matched only against the facial features of passengers who alighted within the second time threshold.
  • the technical effect of the present disclosure is as follows. The method obtains bus stops, counts the numbers of passengers boarding and alighting, and determines through facial feature matching the boarding and alighting stops of on-board passengers; it sends the bus stop, the corresponding passenger counts, and the boarding and alighting stops to the bus operation server; through face matching it determines whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determines the transfer stop; the transfer stop, boarding stop, and alighting stop constitute all the stops of the passenger's journey.
  • the method uses face recognition to capture and match the faces of boarding and alighting passengers, and calibrates the statistical data using the face-recognition match pairs, so that the complete ride route of each citizen through the bus network can be predicted more accurately. Once the patterns of citizens' travel and route demand are grasped, bus operators can rationally plan operating routes and departure schedules, improve carrying capacity, shorten travel times, reduce road congestion, and raise the operating efficiency of urban public transport.
  • Fig. 1 is a flowchart of a method for predicting a bus route based on data calibration according to an embodiment of the present disclosure.
  • Fig. 2 is a structural diagram of a bus route prediction system based on data calibration according to an embodiment of the present disclosure.
  • FIG. 1 shows a bus route prediction method based on data calibration of the present disclosure.
  • the method of the present disclosure is implemented by a bus route prediction system based on face recognition.
  • the system includes on-board equipment and a background bus operation server.
  • the on-board equipment is composed of a vehicle-mounted processing terminal and surveillance cameras at the boarding and alighting doors.
  • the vehicle-mounted processing terminal is connected to the surveillance cameras through the in-vehicle Ethernet network, and connects wirelessly to a mobile base station for Internet access, realising the connection between the on-board equipment and the bus operation server.
  • the vehicle-mounted processing terminal includes an NVIDIA Jetson Nano high-performance embedded computing module, a GPS module, an Ethernet module, and a communication module.
  • the surveillance cameras at the boarding and alighting doors transmit their video streams to the vehicle-mounted processing terminal over Ethernet for real-time video analysis and processing.
  • the Jetson Nano embedded module can process face capture, facial feature extraction, and face recognition comparison for two video streams in real time.
  • the vehicle-mounted processing terminal includes a 4G or 5G communication module, and may therefore be called a vehicle-mounted 4G or 5G processing terminal; it may of course also include communication modules such as Wi-Fi and Bluetooth.
  • the method includes the following steps.
  • in passenger counting step S101, the stop at which the bus halts is obtained, the numbers of passengers boarding and alighting at that stop are counted, and the boarding and alighting stops of on-board passengers are determined through facial feature matching.
  • step S101 is performed by the vehicle-mounted processing terminal, which: obtains the bus's stop via the GPS module; analyses the boarding-door camera's video stream in real time, captures faces, extracts the facial features of boarding passengers, and counts the number boarding at the stop; analyses the alighting-door camera's video stream in real time, captures faces, extracts the facial features of alighting passengers, and counts the number alighting at the stop; and recognises and matches the facial features of boarding and alighting passengers to identify their boarding and alighting stops on the current vehicle. The terminal also uploads the bus stop and the corresponding passenger counts, the boarding and alighting stops of on-board passengers, and the facial feature information of boarding and alighting passengers to the bus operation server over the network.
  • the boarding-door and alighting-door surveillance cameras collect video of passengers getting on and off and send it to the vehicle-mounted processing terminal over the Ethernet network.
  • the terminal analyses both video streams in real time, counts the passengers boarding and alighting, captures faces, and extracts the facial features of boarding passengers; it then matches the facial features of boarding and alighting passengers to identify each passenger's boarding and alighting stops. The stop position is obtained through the GPS module, though it can of course also be obtained through other positioning modules, such as a BeiDou navigation module.
  • the operation of identifying the boarding and alighting stops of passengers on the current vehicle is: for each alighting passenger's facial features, calculate the similarity with the facial features of every passenger who boarded, and take the face feature pair with the greatest similarity that also exceeds the first threshold as the recognition-matching result. That is, when a passenger boards, the boarding stop is obtained through the GPS module and the passenger's facial features are saved; when a passenger alights, those features are compared with the saved features of all passengers on board to find the most similar. To ensure recognition accuracy, the greatest similarity must exceed the first threshold (for example, 85% or 90%) before the pair is accepted as the corresponding alighting passenger. The alighting stop is then saved together with the corresponding boarding stop and facial features, and transmitted to the bus operation server.
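As a minimal sketch of the matching operation described above (cosine similarity over facial feature vectors and the 0.85 value are illustrative assumptions; the disclosure only requires a "similarity" measure and a first threshold such as 85% or 90%):

```python
import math

FIRST_THRESHOLD = 0.85  # illustrative value for the first threshold

def cosine_similarity(a, b):
    # Similarity between two facial feature vectors (assumed representation).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_alighting_passenger(alight_feat, onboard):
    """onboard is a list of (boarding_station, feature) pairs for passengers
    still on the vehicle. Returns (boarding_station, index) of the most
    similar on-board passenger, or None if no similarity exceeds the
    first threshold."""
    best_idx, best_sim = None, -1.0
    for i, (station, feat) in enumerate(onboard):
        sim = cosine_similarity(alight_feat, feat)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    if best_idx is not None and best_sim > FIRST_THRESHOLD:
        return onboard[best_idx][0], best_idx
    return None
```

Taking the single best match above a threshold, rather than any match above it, mirrors the "greatest similarity and greater than the first threshold" rule stated in the disclosure.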
  • in sending step S102, the bus stop, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of on-board passengers are sent to the bus operation server.
  • the vehicle-mounted terminal uploads, in real time, the bus stop and the corresponding passenger counts, the boarding and alighting stops of on-board passengers, and the captured facial feature information of boarding and alighting passengers to the bus operation server over the network; the purpose of the upload is to further determine whether a passenger has transferred, to determine the transfer stop, and to lay the foundation for subsequent data calibration.
  • note that counting the numbers of passengers boarding and alighting is a simple head count that does not rely on facial features, so these statistics are comparatively accurate; because passengers' faces may be occluded as they get on and off, the facial features are not always usable, some faces cannot be matched later, and data calibration is therefore required.
  • in transfer recognition step S103, it is determined through face matching whether a passenger boarding one bus is a passenger who alighted from another bus; if so, the passenger's transfer stop is determined, and the transfer stop, boarding stop, and alighting stop together constitute all the stops of the passenger's journey.
  • transfer recognition step S103 is performed by the bus operation server, which: receives the facial features of passengers boarding and alighting at each stop of each bus and stores them in the database; calculates the similarity between the facial features of passengers boarding each bus and the facial features from other buses; and takes the face feature pair with the greatest similarity that also exceeds the second threshold as the recognition-matching result, determining the passenger's transfer stop.
  • the second threshold may be equal to the first threshold, and the facial-feature similarity calculation in transfer recognition step S103 is the same as that in step S101.
  • matching is restricted to cases where the distance between the boarding stop and the alighting stop is less than the first distance threshold (for example, 0-1000 m) and the alighting occurred within the second time threshold (for example, 15-30 minutes) before the boarding, so that a transfer is determined more accurately; this is another important point of this disclosure.
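The transfer check above could be sketched as follows, under stated assumptions: flat-plane stop positions in metres, cosine similarity, and concrete threshold values at the upper ends of the example ranges (1000 m, 30 minutes, 0.85); the record layout (`time`, `pos`, `feat`, `station`) is hypothetical:

```python
import math

FIRST_DISTANCE_THRESHOLD_M = 1000.0   # example range: 0-1000 m
SECOND_TIME_THRESHOLD_S = 30 * 60     # example range: 15-30 minutes
SECOND_THRESHOLD = 0.85               # may equal the first threshold

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_transfer(boarding, alightings):
    """boarding and each alighting record: {'time': seconds, 'pos': (x, y),
    'feat': [...], 'station': str}. Returns the transfer station, or None."""
    best_station, best_sim = None, SECOND_THRESHOLD
    for rec in alightings:
        gap = boarding["time"] - rec["time"]
        if gap < 0 or gap > SECOND_TIME_THRESHOLD_S:
            continue  # must have alighted before boarding, within the window
        dist = math.hypot(boarding["pos"][0] - rec["pos"][0],
                          boarding["pos"][1] - rec["pos"][1])
        if dist >= FIRST_DISTANCE_THRESHOLD_M:
            continue  # stops too far apart to be a transfer
        sim = cosine_similarity(boarding["feat"], rec["feat"])
        if sim > best_sim:
            best_station, best_sim = rec["station"], sim
    return best_station
```

Filtering by distance and time before computing similarity both reduces the server's matching workload and suppresses false transfer matches, which is the point of the restriction described above.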
  • calibration step S104: because of occlusion and pose, the number of face-matched passengers is lower than the actual number of people boarding and alighting, whereas the head counts do not depend on facial features and are therefore more accurate. The matching-count calibration coefficient is computed from the total face-recognition match count and the total head count: the numbers of passengers boarding and alighting at all stops of all buses within the first time threshold are accumulated into a total count N, the numbers of matched boarding/alighting recognitions on each bus are accumulated into a match total P, and the calibration coefficient is α = N ÷ (P × 2). This coefficient is used in the subsequent bus travel data prediction and is another important point of this disclosure.
  • in prediction step S105, the stops passed by all passengers recognised and matched within the first time threshold are counted, and the ride data for all bus stops are obtained and multiplied by the matching-count calibration coefficient α to obtain the predicted bus travel demand data.
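Steps S104 and S105 can be sketched directly from the formula α = N ÷ (P × 2) given in this disclosure's abstract; the dict-based data shapes are an illustrative assumption:

```python
def calibration_coefficient(total_count_n, matched_pairs_p):
    # N: accumulated head count of all boardings and alightings within the
    # first time threshold; P: accumulated number of face-matched
    # boarding/alighting pairs. Each pair covers one boarding and one
    # alighting event, hence the factor of 2 (formula from the abstract).
    return total_count_n / (matched_pairs_p * 2)

def predict_demand(ride_data, alpha):
    # Scale the per-stop ride data recovered from face matching by alpha
    # to estimate the true travel demand at each stop.
    return {stop: count * alpha for stop, count in ride_data.items()}
```

With perfect matching, P × 2 equals N and α is 1; the more faces go unmatched, the larger α grows, scaling the matched-route statistics back up toward the true head counts.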
  • this method uses face recognition to capture and match the faces of boarding and alighting passengers, and calibrates the statistical data using the face-recognition match pairs, so that the complete ride route of each citizen through the bus network can be predicted more accurately. Once the patterns of citizens' travel and route demand are grasped, bus operators can rationally plan operating routes and departure schedules, improve carrying capacity, shorten travel times, reduce road congestion, and raise the operating efficiency of urban public transport.
  • FIG. 2 shows a bus route prediction system based on face recognition of the present disclosure.
  • the system includes a vehicle-mounted device 201 and a background bus operation server 202.
  • the vehicle-mounted device 201 includes a vehicle-mounted processing terminal 203, a boarding-door surveillance camera 204, and an alighting-door surveillance camera 205; terminal 203 is connected to cameras 204 and 205 via an Ethernet network, and to the bus operation server 202 via a wireless network.
  • the vehicle-mounted processing terminal 203 recognises passengers boarding and alighting through face recognition on the acquired video streams, determines each passenger's boarding and alighting stops, and counts the numbers boarding and alighting at the corresponding stop.
  • the vehicle-mounted processing terminal 203 includes an NVIDIA Jetson Nano high-performance embedded computing module, a GPS module, an Ethernet module, and a communication module.
  • the surveillance cameras at the boarding and alighting doors transmit their video streams to terminal 203 over Ethernet for real-time video analysis and processing.
  • the Jetson Nano embedded module can process face capture, facial feature extraction, and face recognition comparison for two video streams in real time.
  • the vehicle-mounted processing terminal 203 includes a 4G or 5G communication module, and may therefore be referred to as a vehicle-mounted 4G or 5G processing terminal; it may also include communication modules such as Wi-Fi and Bluetooth.
  • the vehicle-mounted processing terminal 203 performs the following operations to count, recognise, and match boarding and alighting passengers: obtain the bus's stop via the GPS module; analyse the boarding-door camera's video stream in real time, capture faces, extract the facial features of boarding passengers, and count the number boarding at the stop; analyse the alighting-door camera's video stream in real time, capture faces, extract the facial features of alighting passengers, and count the number alighting at the stop; and recognise and match the facial features of boarding and alighting passengers to identify their boarding and alighting stops on the current vehicle. Terminal 203 also uploads the bus stop and the corresponding passenger counts, the boarding and alighting stops of on-board passengers, and the facial feature information of boarding and alighting passengers to the bus operation server 202 over the network.
  • the boarding-door and alighting-door surveillance cameras collect video of passengers getting on and off and send it to terminal 203 over the Ethernet network.
  • the terminal analyses both video streams in real time, counts the passengers boarding and alighting, captures faces, and extracts the facial features of boarding passengers; it then matches the facial features of boarding and alighting passengers to identify each passenger's boarding and alighting stops. The stop position is obtained through the GPS module, though it can also be obtained through other positioning modules, such as a BeiDou navigation module.
  • the operation by which terminal 203 identifies the boarding and alighting stops of passengers on the current vehicle is: for each alighting passenger's facial features, calculate the similarity with the facial features of every passenger who boarded, and take the face feature pair with the greatest similarity that also exceeds the first threshold as the recognition-matching result. That is, when a passenger boards, the boarding stop is obtained through the GPS module and the passenger's facial features are saved; when a passenger alights, those features are compared with the saved features of all passengers on board to find the most similar. To ensure recognition accuracy, the greatest similarity must exceed the first threshold (for example, 85% or 90%) before the pair is accepted as the corresponding alighting passenger. The alighting stop is then saved together with the corresponding boarding stop and facial features and transmitted to the bus operation server 202. Matching passengers' boarding and alighting stops through face recognition lays a solid foundation for accurately identifying each passenger's ride route, which is an important point of this disclosure.
  • the vehicle-mounted terminal uploads, in real time, the bus stop and the corresponding passenger counts, the boarding and alighting stops of on-board passengers, and the captured facial feature information of boarding and alighting passengers to the bus operation server 202 over the network; the purpose of the upload is to further determine whether a passenger has transferred, to determine the transfer stop, and to provide a basis for subsequent data calibration.
  • the bus operation server 202 determines through face matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determines that passenger's transfer stop; the transfer stop, boarding stop, and alighting stop together constitute all the stops of the passenger's journey.
  • transfer recognition is implemented by the bus operation server 202, which: receives the facial features of passengers boarding and alighting at each stop of each bus and stores them in the database; calculates the similarity between the facial features of passengers boarding each bus and the facial features from other buses; and takes the face feature pair with the greatest similarity that also exceeds the second threshold as the recognition-matching result, determining the passenger's transfer stop.
  • the second threshold may be equal to the first threshold, and the facial-feature similarity calculation used in transfer recognition is the same as that used in the recognition performed by terminal 203.
  • matching is restricted to cases where the distance between the boarding stop and the alighting stop is less than the first distance threshold (e.g. 0-1000 m) and the alighting occurred within the second time threshold (e.g. 15-30 minutes) before the boarding, so that a transfer is determined more accurately.
  • the bus operation server 202 is used to count the stops passed by all passengers recognised and matched within the first time threshold, obtain the ride data for all bus stops, and multiply them by the matching-count calibration coefficient α to obtain the predicted bus travel demand data.
  • the system uses face recognition to capture and match the faces of boarding and alighting passengers, and calibrates the statistical data using the face-recognition match pairs, so that the complete ride route of each citizen through the bus network can be predicted more accurately. Once the patterns of citizens' travel and route demand are grasped, bus operators can rationally plan operating routes and departure schedules, improve carrying capacity, shorten travel times, reduce road congestion, and raise the operating efficiency of urban public transport.
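As a closing sketch, the complete ride route that the system reconstructs (boarding stop, transfer stop(s), final alighting stop) could be assembled from the recognised legs like so; reducing each leg to its two endpoint stops, with a shared stop marking a recognised transfer, is an illustrative simplification:

```python
def assemble_route(legs):
    """Each leg is a (boarding_stop, alighting_stop) pair recovered on one
    bus; consecutive legs joined by a recognised transfer share the transfer
    stop. Returns the ordered list of stops in the passenger's journey."""
    route = []
    for board, alight in legs:
        if not route or route[-1] != board:
            route.append(board)  # avoid duplicating a shared transfer stop
        route.append(alight)
    return route
```

For a passenger who boards at S1, alights at S3, and transfers there to a bus bound for S7, the legs `[("S1", "S3"), ("S3", "S7")]` yield the route S1 → S3 → S7, the kind of complete ride-route record that, once calibrated by α, feeds the demand prediction.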


Abstract

A data-calibration-based bus route prediction method and system. The method comprises: obtaining the stop at which a bus halts, counting the numbers of passengers boarding and alighting at that stop, and determining the boarding and alighting stops of the passengers on the bus by facial-feature matching (S101); sending all of the bus's information to a bus operation server (S102); determining by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determining that passenger's transfer stop (S103); accumulating the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously accumulating the numbers of matched boarding/alighting pairs on each individual bus to obtain a total number of matches P, the match-count calibration coefficient then being α = N ÷ (P × 2) (S104); and obtaining predicted bus travel-demand data based on α (S105). The method uses face-recognition technology to capture and match the faces of boarding and alighting passengers, and calibrates the statistics using the face-recognition match pairs.

Description

A data-calibration-based bus route prediction method and system
Related Application
This application claims priority to Chinese patent application No. 202010086978.X, filed on February 11, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of intelligent transportation, and in particular to a data-calibration-based bus route prediction method and system.
Background
Urban traffic is increasingly congested. Buses are the primary means of public transport, and a city's planned bus routes directly determine bus operating efficiency, as reflected in road and in-vehicle congestion on different routes, passengers' travel times, the number of transfers required, and so on.
To plan urban bus routes scientifically and rationally, it is necessary to understand citizens' travel patterns and to count ridership demand in different time periods, so as to achieve more accurate planning and prediction.
The main data currently collected in bus operations are: on-board GPS information, which can be used to track vehicle trajectories, arrival times, travel speeds, and route congestion; and fare-card records, which can be used to estimate the number of passengers boarding at each stop.
Head-counting passenger-flow analysis systems based on video-stream analysis already exist on buses and can count boarding and alighting passengers at each stop fairly accurately. Such systems, however, cannot associate boarding passengers with alighting passengers, so they cannot predict where passengers go, cannot predict passengers' boarding and alighting stops, and cannot predict citizens' complete ride routes; they therefore cannot provide adequate decision data for urban bus route planning.
In face-recognition-based bus route prediction, a non-intrusive face-recognition scheme must contend with occluded faces, lowered heads, profile views, and similar situations, so frontal face photos cannot be obtained for everyone. As a result, the number of recognized and matched passengers falls below the actual number of boardings and alightings, introducing a significant bias into bus operating statistics and making it difficult to accurately predict ridership demand and plan bus routes.
Summary of the Disclosure
To address the above deficiencies of the prior art, the present disclosure proposes the following technical solutions.
A data-calibration-based bus route prediction method, the method comprising:
a boarding/alighting counting step: obtaining the stop at which a bus halts, counting the numbers of passengers boarding and alighting at that stop, and determining the boarding and alighting stops of the passengers on the bus by facial-feature matching;
a sending step: sending the bus's stops, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the passengers on the bus to a bus operation server;
a transfer recognition step: determining by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determining that passenger's transfer stop, the transfer stop together with the boarding and alighting stops constituting all of the stops of the passenger's journey;
a calibration step: accumulating the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously accumulating the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P, the facial-feature match-count calibration coefficient then being α = N ÷ (P × 2);
a prediction step: collecting the stop information of all passengers matched within the first time threshold, obtaining ride data for all bus stops, and multiplying it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data.
Further, the boarding/alighting counting step is performed by an on-board processing terminal configured to perform the following operations: obtaining the bus's stop via a GPS module; analyzing the boarding-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of boarding passengers, and counting the number of boardings at the stop; analyzing the alighting-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of alighting passengers, and counting the number of alightings at the stop; and matching the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus;
the on-board processing terminal is further configured to upload the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial-feature information of boarding and alighting passengers to the bus operation server over the network.
Further, identifying the boarding and alighting stops of passengers on the current bus comprises: for the facial features of each alighting passenger, computing the similarity with the facial features of every boarding passenger, and taking the facial-feature pair with the highest similarity that also exceeds a first threshold as the matching result, thereby determining the boarding and alighting stops of the passengers on the current bus.
Further, the transfer recognition step is performed by the bus operation server, which is configured to perform the following operations: upon receiving the facial features of passengers boarding and alighting at each stop of each bus, storing the facial features in a database; computing the similarity between the facial features of passengers boarding each bus and the facial features of passengers alighting from other buses; and taking the facial-feature pair with the highest similarity that also exceeds a second threshold as the matching result, thereby determining the passenger's transfer stop.
Further, in the transfer recognition step, matching is restricted to cases in which the distance between the boarding stop and the alighting stop is less than a first distance threshold, and the facial features of boarding passengers are matched only against the facial features of passengers who alighted within a second time threshold.
The present disclosure also proposes a data-calibration-based bus route prediction system comprising on-board equipment and a back-end bus operation server. The on-board equipment comprises an on-board processing terminal, a boarding-door surveillance camera, and an alighting-door surveillance camera; the on-board processing terminal is connected to the boarding-door and alighting-door surveillance cameras and communicates with the bus operation server over a wireless network.
The on-board processing terminal is configured to: obtain the stop at which the bus halts, count the numbers of passengers boarding and alighting at that stop, determine the boarding and alighting stops of the passengers on the bus by facial-feature matching, and send the bus's stops, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the passengers on the bus to the bus operation server.
The bus operation server determines by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determines that passenger's transfer stop; the transfer stop together with the boarding and alighting stops constitutes all of the stops of the passenger's journey.
The bus operation server is configured to accumulate the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously to accumulate the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P; the facial-feature match-count calibration coefficient is then α = N ÷ (P × 2).
The bus operation server is configured to collect the stop information of all passengers matched within the first time threshold, obtain ride data for all bus stops, and multiply it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data.
Further, the on-board processing terminal is configured to perform the following operations to determine the passengers' boarding and alighting stops: obtaining the bus's stop via a GPS module; analyzing the boarding-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of boarding passengers, and counting the number of boardings at the stop; analyzing the alighting-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of alighting passengers, and counting the number of alightings at the stop; and matching the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus;
the on-board processing terminal is further configured to upload the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial-feature information of boarding and alighting passengers to the bus operation server over the network.
Further, identifying the boarding and alighting stops of passengers on the current bus comprises: for the facial features of each alighting passenger, computing the similarity with the facial features of every boarding passenger, and taking the facial-feature pair with the highest similarity that also exceeds a first threshold as the matching result, thereby determining the boarding and alighting stops of the passengers on the current bus.
Further, the bus operation server is configured to perform the following operations to determine the passenger's transfer stop: receiving the facial features of passengers boarding and alighting at each stop of each bus and storing the facial features in a database; computing the similarity between the facial features of passengers boarding each bus and the facial features of passengers alighting from other buses; and taking the facial-feature pair with the highest similarity that also exceeds a second threshold as the matching result, thereby determining the passenger's transfer stop.
Further, when determining the passenger's transfer stop, matching is restricted to cases in which the distance between the boarding stop and the alighting stop is less than a first distance threshold, and the facial features of boarding passengers are matched only against the facial features of passengers who alighted within a second time threshold.
The technical effect of the present disclosure is as follows. The data-calibration-based bus route prediction method of the present disclosure comprises: obtaining the stop at which a bus halts, counting the numbers of passengers boarding and alighting at that stop, and determining the boarding and alighting stops of the passengers on the bus by facial-feature matching; sending the bus's stops, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the passengers on the bus to a bus operation server; determining by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determining that passenger's transfer stop, the transfer stop together with the boarding and alighting stops constituting all of the stops of the passenger's journey; accumulating the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously accumulating the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P, the facial-feature match-count calibration coefficient then being α = N ÷ (P × 2); and collecting the stop information of all passengers matched within the first time threshold, obtaining ride data for all bus stops, and multiplying it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data. The method uses face-recognition technology to capture and match the faces of boarding and alighting passengers, and calibrates the statistics using the face-recognition match pairs, so as to predict more accurately every bus stop along citizens' complete travel routes. Once bus operators understand the patterns of citizens' travel and route demand, they can rationally plan bus routes and departure schedules, increasing bus carrying capacity, shortening citizens' travel times, reducing road congestion, and improving the operating efficiency of urban public transport.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings.
Fig. 1 is a flowchart of a data-calibration-based bus route prediction method according to an embodiment of the present disclosure.
Fig. 2 is a structural diagram of a data-calibration-based bus route prediction system according to an embodiment of the present disclosure.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant disclosure, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the disclosure.
It should be noted that, where no conflict arises, the embodiments of the present application and the features within them may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows a data-calibration-based bus route prediction method of the present disclosure. The method of the present disclosure is implemented by a face-recognition-based bus route prediction system comprising on-board equipment and a back-end bus operation server. The on-board equipment consists of an on-board processing terminal and surveillance cameras at the boarding door and the alighting door; the on-board processing terminal is connected to the cameras over an in-vehicle Ethernet network and accesses the Internet via a wireless link to a mobile base station, enabling communication between the on-board equipment and the bus operation server. The on-board processing terminal contains an NVIDIA Jetson Nano high-performance embedded computing module, a GPS module, an Ethernet module, and a communication module. The boarding-door and alighting-door cameras transmit their video streams to the on-board processing terminal over Ethernet for real-time video analysis. The Jetson Nano embedded module can perform face capture, facial-feature extraction, and face-recognition comparison on two video streams in real time. The on-board processing terminal includes a 4G or 5G communication module, and may therefore be called an on-board 4G or 5G processing terminal; it may of course also include WIFI, Bluetooth, or other communication modules. The method comprises the following steps.
Boarding/alighting counting step S101: obtain the stop at which the bus halts, count the numbers of passengers boarding and alighting at that stop, and determine the boarding and alighting stops of the passengers on the bus by facial-feature matching.
In one embodiment, the boarding/alighting counting step S101 is performed by the on-board processing terminal, which is configured to: obtain the bus's stop via the GPS module; analyze the boarding-door camera's video stream in real time, capture faces, extract the facial features of boarding passengers, and count the number of boardings at the stop; analyze the alighting-door camera's video stream in real time, capture faces, extract the facial features of alighting passengers, and count the number of alightings at the stop; and match the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus. The on-board processing terminal is further configured to upload the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial-feature information of boarding and alighting passengers to the bus operation server over the network.
The boarding-door and alighting-door cameras capture video of passengers getting on and off and send it over Ethernet to the on-board processing terminal. The terminal analyzes the two video streams in real time, counts boarding and alighting passengers, captures faces, and extracts the facial features of boarding passengers; it then matches the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus, thereby determining the stops at which passengers get on and off. Stop locations are obtained via the GPS module, though they may of course also be obtained through other positioning modules, such as a BeiDou navigation module.
In one embodiment, identifying the boarding and alighting stops of passengers on the current bus comprises: for the facial features of each alighting passenger, computing the similarity with the facial features of every boarding passenger, and taking the facial-feature pair with the highest similarity that also exceeds a first threshold as the matching result, thereby determining the boarding and alighting stops of the passengers on the current bus. That is, when a passenger boards, the boarding stop is obtained via the GPS module and stored together with the passenger's facial features; when a passenger alights, their facial features are compared with the stored facial features of all passengers on the bus to find the most similar one. To ensure recognition accuracy, this maximum similarity must also exceed the first threshold (e.g., 85% or 90%) before the person is accepted as the corresponding alighting passenger. The alighting stop is then stored in association with the corresponding boarding stop and facial features, and all three are transmitted to the bus operation server. The boarding/alighting counting step S101 thus precisely identifies the stops at which passengers get on and off, laying a solid foundation for accurately identifying each passenger's ride route; this is an important point of the present disclosure.
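The in-vehicle matching described above can be sketched as follows. This is a minimal illustration only: it assumes face features are fixed-length embedding vectors compared by cosine similarity (the disclosure does not specify the similarity metric), and the function names and the first-threshold value of 0.85 are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_alighting_passenger(alight_feat: np.ndarray, onboard: dict,
                              first_threshold: float = 0.85):
    """Compare an alighting passenger's face feature against every stored
    boarding record; accept the best pair only if its similarity exceeds
    the first threshold, as in step S101."""
    best_id, best_sim = None, -1.0
    for pid, record in onboard.items():
        sim = cosine_similarity(alight_feat, record["feature"])
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim > first_threshold:
        return best_id, best_sim   # matched: boarding stop is onboard[best_id]["stop"]
    return None, best_sim          # unmatched face (e.g. occluded at boarding)
```

A face that was occluded or captured in profile at boarding simply fails the threshold and stays unmatched, which is exactly the shortfall the calibration coefficient α later compensates for.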
Sending step S102: send the bus's stops, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the passengers on the bus to the bus operation server.
In one embodiment, the on-board terminal uploads in real time the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial features of boarding and alighting passengers to the bus operation server. The purpose of the upload is to further determine whether a passenger has transferred, to determine the transfer stop, and to provide a basis for subsequent data calibration. Because the boarding/alighting headcounts are not based on facial features but are simple counts, they are relatively accurate; the facial features, by contrast, are not always reliable because passengers' faces may be occluded while getting on or off, so some faces cannot be matched later, which is why the data needs calibration.
Transfer recognition step S103: determine by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determine that passenger's transfer stop; the transfer stop together with the boarding and alighting stops constitutes all of the stops of the passenger's journey.
In one embodiment, the transfer recognition step S103 is performed by the bus operation server, which is configured to: receive the facial features of passengers boarding and alighting at each stop of each bus and store them in a database; compute the similarity between the facial features of passengers boarding each bus and those of passengers alighting from other buses; and take the facial-feature pair with the highest similarity that also exceeds a second threshold as the matching result, thereby determining the passenger's transfer stop. The second threshold may be equal to the first threshold, and the facial-feature similarity computation in the transfer recognition step S103 is the same as that in step S101. To identify transfer stops accurately, the transfer recognition step S103 restricts matching to cases in which the distance between the boarding stop and the alighting stop is less than a first distance threshold (e.g., 0-1000 m), and matches boarding passengers' facial features only against the facial features of passengers who alighted within a second time threshold (e.g., 15-30 minutes); only then can it be determined with reasonable certainty that the passenger has transferred. This is another important point of the present application.
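The distance and time constraints on transfer candidates can be sketched as below. This is an illustrative filter only, applied before face matching: it assumes stop locations are given as latitude/longitude and uses the haversine formula for distance (the disclosure specifies only a distance threshold, not how it is computed); the record layout and function names are hypothetical.

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def transfer_candidates(boarding, alightings,
                        max_dist_m=1000.0, max_gap=timedelta(minutes=30)):
    """Keep only alighting records (from other buses) whose stop is within
    the first distance threshold of the boarding stop and whose alighting
    time falls within the second time threshold before boarding."""
    out = []
    for rec in alightings:
        gap = boarding["time"] - rec["time"]
        if timedelta(0) <= gap <= max_gap and haversine_m(
            boarding["lat"], boarding["lon"], rec["lat"], rec["lon"]
        ) < max_dist_m:
            out.append(rec)
    return out
```

Only the records that survive this filter would then be passed to the facial-feature similarity comparison against the second threshold.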
Calibration step S104: accumulate the numbers of passengers boarding and alighting at every stop of all buses within the first time threshold to obtain a total count N, and simultaneously accumulate the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P; the facial-feature match-count calibration coefficient is then α = N ÷ (P × 2). As described above, a non-intrusive face-recognition scheme must contend with occluded faces, lowered heads, profile views, and similar situations, so frontal face photos cannot be obtained for everyone, and the number of matched passengers falls below the actual number of boardings and alightings. The headcounts, however, are not based on facial features and are therefore relatively accurate. The calibration coefficient is thus computed from the number of face-recognition matches and the total headcount and used in the subsequent prediction of bus travel data. This is another important point of the present application.
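A minimal numeric sketch of the calibration and prediction steps follows; the function names are illustrative, not from the disclosure. Each matched pair accounts for two counted events (one boarding and one alighting), which is why the formula divides N by P × 2.

```python
def calibration_coefficient(total_count_n: int, matched_pairs_p: int) -> float:
    """alpha = N / (P * 2), per calibration step S104: N is the total
    boarding/alighting headcount, P the number of matched face pairs."""
    if matched_pairs_p <= 0:
        raise ValueError("at least one matched pair is needed to calibrate")
    return total_count_n / (matched_pairs_p * 2)

def predicted_demand(matched_ride_counts: dict, alpha: float) -> dict:
    """Scale per-stop ride counts derived from matched passengers by alpha
    to estimate the true travel demand (prediction step S105)."""
    return {stop: count * alpha for stop, count in matched_ride_counts.items()}
```

For example, if 1000 boarding/alighting events were counted while only 400 pairs were matched, α = 1000 ÷ 800 = 1.25, and a stop whose matched ride count is 80 would be predicted to have a demand of 100.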
Prediction step S105: collect the stop information of all passengers matched within the first time threshold, obtain ride data for all bus stops, and multiply it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data.
The method uses face-recognition technology to capture and match the faces of boarding and alighting passengers, and calibrates the statistics using the face-recognition match pairs, so as to predict more accurately every bus stop along citizens' complete travel routes. Once bus operators understand the patterns of citizens' travel and route demand, they can rationally plan bus routes and departure schedules, increasing bus carrying capacity, shortening citizens' travel times, reducing road congestion, and improving the operating efficiency of urban public transport.
Fig. 2 shows a face-recognition-based bus route prediction system of the present disclosure. The system comprises on-board equipment 201 and a back-end bus operation server 202. The on-board equipment 201 comprises an on-board processing terminal 203, a boarding-door surveillance camera 204, and an alighting-door surveillance camera 205. The on-board processing terminal 203 is connected to the boarding-door camera 204 and the alighting-door camera 205 over Ethernet, and to the bus operation server 202 over a wireless network.
The on-board processing terminal 203 identifies boarding and alighting passengers by face recognition from the acquired video streams, determines the passengers' boarding and alighting stops, and counts the numbers of passengers getting on and off at the corresponding stops. The on-board processing terminal 203 contains an NVIDIA Jetson Nano high-performance embedded computing module, a GPS module, an Ethernet module, and a communication module. The boarding-door and alighting-door cameras transmit their video streams to the on-board processing terminal 203 over Ethernet for real-time video analysis. The Jetson Nano embedded module can perform face capture, facial-feature extraction, and face-recognition comparison on two video streams in real time. The on-board processing terminal 203 includes a 4G or 5G communication module, and may therefore be called an on-board 4G or 5G processing terminal; it may of course also include WIFI, Bluetooth, or other communication modules.
In one embodiment, the on-board processing terminal 203 counts and matches boarding and alighting passengers through the following operations: obtaining the bus's stop via the GPS module; analyzing the boarding-door camera's video stream in real time, capturing faces, extracting the facial features of boarding passengers, and counting the number of boardings at the stop; analyzing the alighting-door camera's video stream in real time, capturing faces, extracting the facial features of alighting passengers, and counting the number of alightings at the stop; and matching the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus. The on-board processing terminal 203 is further configured to upload the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial-feature information of boarding and alighting passengers to the bus operation server 202 over the network.
The boarding-door and alighting-door cameras capture video of passengers getting on and off and send it over Ethernet to the on-board processing terminal 203. The terminal analyzes the two video streams in real time, counts boarding and alighting passengers, captures faces, and extracts the facial features of boarding passengers; it then matches the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus, thereby determining the stops at which passengers get on and off. Stop locations are obtained via the GPS module, though they may of course also be obtained through other positioning modules, such as a BeiDou navigation module.
In one embodiment, the on-board processing terminal 203 identifies the boarding and alighting stops of passengers on the current bus as follows: for the facial features of each alighting passenger, it computes the similarity with the facial features of every boarding passenger, and takes the facial-feature pair with the highest similarity that also exceeds a first threshold as the matching result, thereby determining the boarding and alighting stops of the passengers on the current bus. That is, when a passenger boards, the boarding stop is obtained via the GPS module and stored together with the passenger's facial features; when a passenger alights, their facial features are compared with the stored facial features of all passengers on the bus to find the most similar one. To ensure recognition accuracy, this maximum similarity must also exceed the first threshold (e.g., 85% or 90%) before the person is accepted as the corresponding alighting passenger. The alighting stop is then stored in association with the corresponding boarding stop and facial features, and all three are transmitted to the bus operation server 202. Matching the stops at which passengers get on and off by face recognition lays a solid foundation for accurately identifying each passenger's ride route; this is an important point of the present disclosure.
In one embodiment, the on-board terminal uploads in real time the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial features of boarding and alighting passengers to the bus operation server 202. The purpose of the upload is to further determine whether a passenger has transferred, to determine the transfer stop, and to provide a basis for subsequent data calibration. Because the boarding/alighting headcounts are not based on facial features but are simple counts, they are relatively accurate; the facial features, by contrast, are not always reliable because passengers' faces may be occluded while getting on or off, so some faces cannot be matched later, which is why the data needs calibration.
The bus operation server 202 determines by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determines that passenger's transfer stop; the transfer stop together with the boarding and alighting stops constitutes all of the stops of the passenger's journey.
In one embodiment, transfer recognition is implemented by the bus operation server 202 through the following operations: receiving the facial features of passengers boarding and alighting at each stop of each bus and storing them in a database; computing the similarity between the facial features of passengers boarding each bus and those of passengers alighting from other buses; and taking the facial-feature pair with the highest similarity that also exceeds a second threshold as the matching result, thereby determining the passenger's transfer stop. The second threshold may be equal to the first threshold, and the facial-feature similarity computation in transfer recognition is the same as that used in recognition by the on-board processing terminal 203. To identify transfer stops accurately, transfer recognition restricts matching to cases in which the distance between the boarding stop and the alighting stop is less than a first distance threshold (e.g., 0-1000 m), and matches boarding passengers' facial features only against the facial features of passengers who alighted within a second time threshold (e.g., 15-30 minutes); only then can it be determined with reasonable certainty that the passenger has transferred. This is another important point of the present application.
The bus operation server 202 is configured to accumulate the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously to accumulate the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P; the facial-feature match-count calibration coefficient is then α = N ÷ (P × 2). As described above, a non-intrusive face-recognition scheme must contend with occluded faces, lowered heads, profile views, and similar situations, so frontal face photos cannot be obtained for everyone, and the number of matched passengers falls below the actual number of boardings and alightings. The headcounts, however, are not based on facial features and are therefore relatively accurate. The calibration coefficient is thus computed from the number of face-recognition matches and the total headcount and used in the subsequent prediction of bus travel data. This is another important point of the present application.
The bus operation server 202 is configured to collect the stop information of all passengers matched within the first time threshold, obtain ride data for all bus stops, and multiply it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data.
The system uses face-recognition technology to capture and match the faces of boarding and alighting passengers, and calibrates the statistics using the face-recognition match pairs, so as to predict more accurately every bus stop along citizens' complete travel routes. Once bus operators understand the patterns of citizens' travel and route demand, they can rationally plan bus routes and departure schedules, increasing bus carrying capacity, shortening citizens' travel times, reducing road congestion, and improving the operating efficiency of urban public transport.
For convenience of description, the above apparatus has been described with its functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
From the description of the above embodiments, a person skilled in the art can clearly understand that the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored on a storage medium such as ROM/RAM, a magnetic disk, or an optical disc, including instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the apparatus described in the embodiments, or in certain parts of the embodiments, of the present application.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present disclosure. Although the present disclosure has been described in detail with reference to the above embodiments, a person of ordinary skill in the art should understand that modifications or equivalent substitutions may still be made to the present disclosure; any modification or partial substitution that does not depart from the spirit and scope of the present disclosure shall be covered by the claims of the present disclosure.

Claims (10)

  1. A data-calibration-based bus route prediction method, characterized in that the method comprises:
    a boarding/alighting counting step: obtaining the stop at which a bus halts, counting the numbers of passengers boarding and alighting at that stop, and determining the boarding and alighting stops of the passengers on the bus by facial-feature matching;
    a sending step: sending the bus's stops, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the passengers on the bus to a bus operation server;
    a transfer recognition step: determining by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determining that passenger's transfer stop, the transfer stop together with the boarding and alighting stops constituting all of the stops of the passenger's journey;
    a calibration step: accumulating the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously accumulating the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P, the facial-feature match-count calibration coefficient then being α = N ÷ (P × 2);
    a prediction step: collecting the stop information of all passengers matched within the first time threshold, obtaining ride data for all bus stops, and multiplying it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data.
  2. The method according to claim 1, characterized in that the boarding/alighting counting step is performed by an on-board processing terminal configured to perform the following operations: obtaining the bus's stop via a GPS module; analyzing the boarding-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of boarding passengers, and counting the number of boardings at the stop; analyzing the alighting-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of alighting passengers, and counting the number of alightings at the stop; and matching the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus;
    wherein the on-board processing terminal is further configured to upload the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial-feature information of boarding and alighting passengers to the bus operation server over the network.
  3. The method according to claim 2, characterized in that identifying the boarding and alighting stops of passengers on the current bus comprises: for the facial features of each alighting passenger, computing the similarity with the facial features of every boarding passenger, and taking the facial-feature pair with the highest similarity that also exceeds a first threshold as the matching result, thereby determining the boarding and alighting stops of the passengers on the current bus.
  4. The method according to claim 1, characterized in that the transfer recognition step is performed by the bus operation server, which is configured to perform the following operations: upon receiving the facial features of passengers boarding and alighting at each stop of each bus, storing the facial features in a database; computing the similarity between the facial features of passengers boarding each bus and the facial features of passengers alighting from other buses; and taking the facial-feature pair with the highest similarity that also exceeds a second threshold as the matching result, thereby determining the passenger's transfer stop.
  5. The method according to claim 4, characterized in that, in the transfer recognition step, matching is restricted to cases in which the distance between the boarding stop and the alighting stop is less than a first distance threshold, and the facial features of boarding passengers are matched only against the facial features of passengers who alighted within a second time threshold.
  6. A data-calibration-based bus route prediction system, characterized in that the system comprises on-board equipment and a back-end bus operation server, the on-board equipment comprising an on-board processing terminal, a boarding-door surveillance camera, and an alighting-door surveillance camera, the on-board processing terminal being connected to the boarding-door and alighting-door surveillance cameras and communicating with the bus operation server over a wireless network;
    the on-board processing terminal is configured to: obtain the stop at which the bus halts, count the numbers of passengers boarding and alighting at that stop, determine the boarding and alighting stops of the passengers on the bus by facial-feature matching, and send the bus's stops, the corresponding numbers of boarding and alighting passengers, and the boarding and alighting stops of the passengers on the bus to the bus operation server;
    the bus operation server determines by facial-feature matching whether a passenger boarding one bus is a passenger who alighted from another bus and, if so, determines that passenger's transfer stop, the transfer stop together with the boarding and alighting stops constituting all of the stops of the passenger's journey;
    the bus operation server is configured to accumulate the numbers of passengers boarding and alighting at every stop of all buses within a first time threshold to obtain a total count N, and simultaneously to accumulate the numbers of matched boarding/alighting pairs on each individual bus within the first time threshold to obtain a total number of matches P, the facial-feature match-count calibration coefficient then being α = N ÷ (P × 2);
    the bus operation server is configured to collect the stop information of all passengers matched within the first time threshold, obtain ride data for all bus stops, and multiply it by the match-count calibration coefficient α to obtain the predicted bus travel-demand data.
  7. The system according to claim 6, characterized in that the on-board processing terminal is configured to perform the following operations to determine the passengers' boarding and alighting stops: obtaining the bus's stop via a GPS module; analyzing the boarding-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of boarding passengers, and counting the number of boardings at the stop; analyzing the alighting-door surveillance camera's video stream in real time, capturing faces, extracting the facial features of alighting passengers, and counting the number of alightings at the stop; and matching the facial features of boarding and alighting passengers to identify the boarding and alighting stops of the passengers currently on the bus;
    wherein the on-board processing terminal is further configured to upload the bus's stops, the corresponding numbers of boarding and alighting passengers, the boarding and alighting stops of the passengers on the bus, and the captured facial-feature information of boarding and alighting passengers to the bus operation server over the network.
  8. The system according to claim 7, characterized in that identifying the boarding and alighting stops of passengers on the current bus comprises: for the facial features of each alighting passenger, computing the similarity with the facial features of every boarding passenger, and taking the facial-feature pair with the highest similarity that also exceeds a first threshold as the matching result, thereby determining the boarding and alighting stops of the passengers on the current bus.
  9. The system according to claim 6, characterized in that the bus operation server is configured to perform the following operations to determine the passenger's transfer stop: receiving the facial features of passengers boarding and alighting at each stop of each bus and storing the facial features in a database; computing the similarity between the facial features of passengers boarding each bus and the facial features of passengers alighting from other buses; and taking the facial-feature pair with the highest similarity that also exceeds a second threshold as the matching result, thereby determining the passenger's transfer stop.
  10. The system according to claim 9, characterized in that, when determining the passenger's transfer stop, matching is restricted to cases in which the distance between the boarding stop and the alighting stop is less than a first distance threshold, and the facial features of boarding passengers are matched only against the facial features of passengers who alighted within a second time threshold.
PCT/CN2020/139574 2020-02-11 2020-12-25 一种基于数据校准的公交路线预测方法及系统 WO2021159865A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010086978.X 2020-02-11
CN202010086978.XA CN111310994B (zh) 2020-02-11 2020-02-11 一种基于数据校准的公交路线预测方法及系统

Publications (1)

Publication Number Publication Date
WO2021159865A1 true WO2021159865A1 (zh) 2021-08-19

Family

ID=71148338

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/139574 WO2021159865A1 (zh) 2020-02-11 2020-12-25 一种基于数据校准的公交路线预测方法及系统

Country Status (2)

Country Link
CN (1) CN111310994B (zh)
WO (1) WO2021159865A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781822A (zh) * 2021-09-24 2021-12-10 湖北惠诚共创科技有限公司 一种基于大数据的公交车用调度系统
CN113870574A (zh) * 2021-09-17 2021-12-31 南京熊猫电子股份有限公司 一种自动检测客车违章载客预警的系统和方法
CN113888857A (zh) * 2021-10-11 2022-01-04 南京微道科技有限公司 基于车联网的公共交通管理系统、装置及方法
CN114141044A (zh) * 2021-11-22 2022-03-04 东南大学 考虑乘客选择行为的公交时刻表协调优化方法
CN114724365A (zh) * 2022-03-29 2022-07-08 深圳市综合交通与市政工程设计研究总院有限公司 一种基于定位信息的公交载客量采集系统
CN114912657A (zh) * 2022-04-12 2022-08-16 东南大学 一种基于多种收费票制的公交客流od推导方法
CN114926153A (zh) * 2022-07-20 2022-08-19 浙江大学滨海产业技术研究院 一种智慧养老出行辅助管理方法及系统
CN114925297A (zh) * 2022-06-24 2022-08-19 深圳市规划和自然资源数据管理中心 一种利用多源出行数据的城市公交出行管理系统
CN114973680A (zh) * 2022-07-01 2022-08-30 哈尔滨工业大学 一种基于视频处理的公交客流获取系统及方法
CN114999034A (zh) * 2022-06-01 2022-09-02 中交机电工程局有限公司 基于轨道交通的综合监控管理系统
CN115035725A (zh) * 2022-08-11 2022-09-09 山东恒宇电子有限公司 基于机器视觉的客流统计方法及系统
CN115565274A (zh) * 2022-12-06 2023-01-03 成都智元汇信息技术股份有限公司 一种降低安检比对开销量的检票方法、系统及存储介质
CN115691128A (zh) * 2022-10-27 2023-02-03 大连海事大学 一种基于多源公交数据联合挖掘的公交站点客流推算方法
CN116127210A (zh) * 2023-04-12 2023-05-16 深圳柯赛标识智能科技有限公司 一种信息数据推送系统及方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310994B (zh) * 2020-02-11 2022-08-12 罗普特科技集团股份有限公司 一种基于数据校准的公交路线预测方法及系统
CN112116811B (zh) * 2020-09-23 2021-11-02 佳都科技集团股份有限公司 一种进行乘车路径识别确定的方法及装置
US11995863B2 (en) 2020-09-29 2024-05-28 Boe Technology Group Co., Ltd. Method for counting regional population, computer device and computer readable storage medium
CN113158923B (zh) * 2021-04-27 2022-09-06 华录智达科技股份有限公司 一种基于人脸识别的公交换乘提醒系统
CN115358645B (zh) * 2022-10-21 2023-01-17 安徽中科中涣信息技术有限公司 一种基于公交客流量监控及调度管理终端

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130024114A1 (en) * 2010-03-08 2013-01-24 International Truck Intellectual Property Company, Llc System and method for setting a bus route for transporting passengers
CN103714698A (zh) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 基于距离图像的公交车辆客流量统计系统
CN105913367A (zh) * 2016-04-07 2016-08-31 北京晶众智慧交通科技股份有限公司 基于人脸识别和位置定位的公交客流量检测系统与方法
CN107240289A (zh) * 2017-07-24 2017-10-10 济南博图信息技术有限公司 一种公交车线路优化管理方法及系统
CN111310994A (zh) * 2020-02-11 2020-06-19 罗普特科技集团股份有限公司 一种基于数据校准的公交路线预测方法及系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100185486A1 (en) * 2009-01-21 2010-07-22 Disney Enterprises, Inc. Determining demand associated with origin-destination pairs for bus ridership forecasting
CN105654032A (zh) * 2015-12-15 2016-06-08 重庆凯泽科技有限公司 基于人脸检测的公交车人数统计系统及统计方法
CN108288321A (zh) * 2018-01-24 2018-07-17 哈尔滨工业大学 基于ic卡数据与车辆gps信息的公交站点上下客流量确定方法
CN108389420A (zh) * 2018-03-13 2018-08-10 重庆邮电大学 一种基于历史出行特征的公交乘客下车站点实时识别方法
CN110097249A (zh) * 2019-03-19 2019-08-06 厦门交保通达信息科技有限公司 一种公交客流监测分析系统及其方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130024114A1 (en) * 2010-03-08 2013-01-24 International Truck Intellectual Property Company, Llc System and method for setting a bus route for transporting passengers
CN103714698A (zh) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 基于距离图像的公交车辆客流量统计系统
CN105913367A (zh) * 2016-04-07 2016-08-31 北京晶众智慧交通科技股份有限公司 基于人脸识别和位置定位的公交客流量检测系统与方法
CN107240289A (zh) * 2017-07-24 2017-10-10 济南博图信息技术有限公司 一种公交车线路优化管理方法及系统
CN111310994A (zh) * 2020-02-11 2020-06-19 罗普特科技集团股份有限公司 一种基于数据校准的公交路线预测方法及系统

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870574B (zh) * 2021-09-17 2022-09-02 南京熊猫电子股份有限公司 一种自动检测客车违章载客预警的系统和方法
CN113870574A (zh) * 2021-09-17 2021-12-31 南京熊猫电子股份有限公司 一种自动检测客车违章载客预警的系统和方法
CN113781822A (zh) * 2021-09-24 2021-12-10 湖北惠诚共创科技有限公司 一种基于大数据的公交车用调度系统
CN113888857A (zh) * 2021-10-11 2022-01-04 南京微道科技有限公司 基于车联网的公共交通管理系统、装置及方法
CN114141044B (zh) * 2021-11-22 2023-02-21 东南大学 考虑乘客选择行为的公交时刻表协调优化方法
CN114141044A (zh) * 2021-11-22 2022-03-04 东南大学 考虑乘客选择行为的公交时刻表协调优化方法
CN114724365A (zh) * 2022-03-29 2022-07-08 深圳市综合交通与市政工程设计研究总院有限公司 一种基于定位信息的公交载客量采集系统
CN114912657A (zh) * 2022-04-12 2022-08-16 东南大学 一种基于多种收费票制的公交客流od推导方法
CN114912657B (zh) * 2022-04-12 2024-05-28 东南大学 一种基于多种收费票制的公交客流od推导方法
CN114999034A (zh) * 2022-06-01 2022-09-02 中交机电工程局有限公司 基于轨道交通的综合监控管理系统
CN114999034B (zh) * 2022-06-01 2023-08-29 中交机电工程局有限公司 基于轨道交通的综合监控管理系统
CN114925297A (zh) * 2022-06-24 2022-08-19 深圳市规划和自然资源数据管理中心 一种利用多源出行数据的城市公交出行管理系统
CN114973680A (zh) * 2022-07-01 2022-08-30 哈尔滨工业大学 一种基于视频处理的公交客流获取系统及方法
CN114926153A (zh) * 2022-07-20 2022-08-19 浙江大学滨海产业技术研究院 一种智慧养老出行辅助管理方法及系统
CN114926153B (zh) * 2022-07-20 2022-09-23 浙江大学滨海产业技术研究院 一种智慧养老出行辅助管理方法及系统
CN115035725A (zh) * 2022-08-11 2022-09-09 山东恒宇电子有限公司 基于机器视觉的客流统计方法及系统
CN115691128A (zh) * 2022-10-27 2023-02-03 大连海事大学 一种基于多源公交数据联合挖掘的公交站点客流推算方法
CN115565274B (zh) * 2022-12-06 2023-03-10 成都智元汇信息技术股份有限公司 一种降低安检比对开销量的检票方法、系统及存储介质
CN115565274A (zh) * 2022-12-06 2023-01-03 成都智元汇信息技术股份有限公司 一种降低安检比对开销量的检票方法、系统及存储介质
CN116127210A (zh) * 2023-04-12 2023-05-16 深圳柯赛标识智能科技有限公司 一种信息数据推送系统及方法

Also Published As

Publication number Publication date
CN111310994B (zh) 2022-08-12
CN111310994A (zh) 2020-06-19

Similar Documents

Publication Publication Date Title
WO2021159865A1 (zh) 一种基于数据校准的公交路线预测方法及系统
WO2021159866A1 (zh) 一种基于人脸识别的公交路线预测方法及系统
CN112530166B (zh) 基于信令数据与大数据分析识别公交出行上下车站点的方法与系统
WO2017140175A1 (zh) 基于路径识别系统的收费公路网交通信息采集与诱导系统
CN110493816B (zh) 一种用于轨交地铁车站客流量的实时预测方法
CN102622798B (zh) 一种客流统计分析系统
CN112598182A (zh) 一种轨道交通智能调度方法及系统
CN109544946A (zh) 基于车流量大数据的隧道实时监控管理系统及其实现方法
CN104899947A (zh) 公交客流统计方法
CN108364464A (zh) 一种基于概率模型的公交车辆旅行时间建模方法
CN114693495B (zh) 智慧城市公共交通管理方法、物联网系统、装置及介质
CN112712600A (zh) 基于移动物联网的停车场移动值守系统及方法
CN115423222B (zh) 一种开放自助式城市轨道交通车站进站检查管理方法
CN115410371A (zh) 基于无感支付的城市轨道交通客流数据采集及分析方法
CN112258723A (zh) 综合客运枢纽内预告排队长度的系统和方法
CN111027929A (zh) 地铁票务清分方法及装置
CN110334858A (zh) 一种公交车剩余座位智能预测方法与装置
CN110321982A (zh) 一种轨道交通断面客流量实时计算方法
JP3502156B2 (ja) 交通監視システム
CN112601187B (zh) 基于手机信令的公交常乘客预测方法与系统
CN112528867B (zh) 一种地铁车站突发大客流预警方法及其应用
CN114333120A (zh) 一种公交客流检测方法及系统
CN109410597B (zh) 一种园区出入口车流量检测方法、装置和系统
CN114999034B (zh) 基于轨道交通的综合监控管理系统
CN110046535B (zh) 基于机器学习的智能出行时间预测系统、方法及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918982

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20918982

Country of ref document: EP

Kind code of ref document: A1