CN110046535B - Intelligent travel time prediction system, method and storage medium based on machine learning

Intelligent travel time prediction system, method and storage medium based on machine learning

Info

Publication number
CN110046535B
CN110046535B
Authority
CN
China
Prior art keywords
time
detection module
picture
inbound
outbound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810039996.5A
Other languages
Chinese (zh)
Other versions
CN110046535A (en)
Inventor
付展宏 (Fu Zhanhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nationz Technologies Inc
Original Assignee
Nationz Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nationz Technologies Inc filed Critical Nationz Technologies Inc
Priority to CN201810039996.5A
Publication of CN110046535A
Application granted
Publication of CN110046535B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40 - Business processes related to the transportation industry
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 - Recognition of crowd images, e.g. recognition of crowd congestion
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an intelligent travel time prediction system, method and storage medium based on machine learning. The system comprises a server module, an inbound detection module, a vehicle detection module, an outbound detection module and an integration judgment module, connected through a network. The server module receives the user requirements and sends them to the detection modules; the inbound detection module receives the requirements, analyzes the inbound video with machine learning to obtain the inbound congestion degree and the inbound time, and sends them to the integration judgment module; the vehicle detection module receives the requirements, analyzes the in-vehicle video with machine learning to obtain the congestion degree and the in-vehicle time, and sends them to the integration judgment module; the outbound detection module receives the requirements, analyzes the outbound video with machine learning to obtain the outbound congestion degree and the outbound time, and sends them to the integration judgment module; the integration judgment module integrates the congestion degree, the in-vehicle time, the inbound time and the outbound time to judge the whole-flow time and sends it to the server module.

Description

Intelligent travel time prediction system, method and storage medium based on machine learning
Technical Field
The invention belongs to the field of artificial intelligence, and mainly relates to an intelligent travel time prediction system, an intelligent travel time prediction method and a storage medium based on machine learning.
Background
In daily work and life, public transportation is an indispensable mode of travel. Although travelling by subway largely relieves or avoids the time cost of ground traffic congestion, the congestion simply moves to the station entrance, where queues and even queue-jumping occur. At busy transit hubs a great deal of time is wasted queuing, and even after finally joining the queue and boarding, the subway cars and buses are often packed to bursting. One can of course try to avoid peak hours by experience, but experience always comes with "tuition fees" and is not always reliable.
Artificial intelligence has already reached the video surveillance field: intelligent monitoring has been added to fixed cameras in corridors, building entrances, parks and residential communities, as well as to mobile security robots. The application of artificial intelligence to video surveillance should not stop at security; it should be applied in wider fields such as intelligent transportation.
Existing machine learning techniques already support pedestrian detection, vehicle detection, license plate recognition and object recognition. At present, however, intelligent monitoring is mostly confined to security and area management: surveillance pictures are mainly used to identify and judge the behavior of pedestrians or vehicles in the scene, and in some cases to raise alarms when particular conditions are met.
Most intelligent transportation schemes built on existing machine learning technology regulate passenger flow at public transit stations from the perspective of macro management. From the individual traveller's perspective, a scheme that grasps the crowding of each means of transport in real time and plans a reasonable route accordingly is what people expect when going out.
Disclosure of Invention
The invention aims to obtain, in real time, the flow of people at mass transit stations and the number of people inside subways or other vehicles, calculate the travel time from these data, and thereby recommend the most efficient travel mode at that moment, supporting more efficient work and life.
The invention provides an intelligent travel time prediction system based on machine learning, which comprises a server module, an inbound detection module, a vehicle detection module, an outbound detection module and an integration judgment module, all connected through a network, wherein:
the server module receives the starting point, the end point and the user preference requirements and sends them to the three detection modules;
the inbound detection module is used for receiving the requirements sent by the server module, analyzing inbound videos to obtain inbound congestion degree and inbound time by utilizing a machine learning technology, and sending the inbound congestion degree and the inbound time to the integration judgment module;
the vehicle detection module receives the requirements sent by the server module, analyzes the vehicle video to obtain the congestion degree and the time in the vehicle by utilizing a machine learning technology, and sends the congestion degree and the time in the vehicle to the integration judgment module;
the outbound detection module receives the requirements sent by the server module, analyzes outbound videos to obtain outbound crowding degree and outbound time by using a machine learning technology, and sends the outbound crowding degree and the outbound time to the integration judgment module;
the integration judging module integrates the congestion degree, the time in the vehicle, the inbound time and the outbound time to judge the whole flow time and sends the whole flow time to the server module.
Further, the vehicle detection module comprises a camera and a pedestrian detection module, wherein the camera collects video in the vehicle and sends the video to the pedestrian detection module, and the pedestrian detection module analyzes the video data by utilizing a neural network technology of a machine learning technology to obtain the crowding degree and the time in the vehicle.
Further, the inbound detection module comprises a camera, a pedestrian detection module and a time detection module; the camera acquires an inbound video and sends the inbound video to the pedestrian detection module and the time detection module, and the pedestrian detection module and the time detection module analyze the video by utilizing a neural network technology of a machine learning technology to obtain an inbound congestion degree and an inbound time;
the outbound detection module comprises a camera, a pedestrian detection module and a time detection module; the camera collects outbound videos and sends the videos to the pedestrian detection module and the time detection module, and the pedestrian detection module and the time detection module analyze the videos by utilizing a neural network technology of a machine learning technology to obtain outbound crowding degree and outbound time.
The invention also provides a travel time prediction method by using the intelligent travel time prediction system based on machine learning, which comprises the following steps:
step S1, the server module receives a starting point, an ending point and user preference requirements and sends the starting point, the ending point and the user preference requirements to the three detection modules;
step S2, the inbound detection module receives the requirement sent by the server module, acquires inbound video by using a camera, and analyzes and obtains inbound congestion degree and inbound time by using the pedestrian detection module and the time detection module to send the inbound congestion degree and the inbound time to the integration judgment module;
step S3, the vehicle detection module receives the requirements sent by the server module, acquires various public transportation videos by using a camera, analyzes and obtains time and crowding degree in the vehicle by using the pedestrian detection module, and sends the time and crowding degree to the integration judgment module;
step S4, the outbound detection module receives the requirements sent by the server module, acquires outbound videos by using a camera, analyzes and obtains outbound crowding degree and outbound time by using the pedestrian detection module and the time detection module, and sends the outbound crowding degree and the outbound time to the integration judgment module;
and S5, the integration judging module integrates the congestion degree, the time in the vehicle, the inbound time and the outbound time to judge the whole flow time and sends the whole flow time to the server module.
Further, the pedestrian detection module obtains the crowding degree by counting the number of people detected in the current frame of the video.
Further, in steps S2 and S4, the time detection module records time as follows: for a face target (a detected face) it records the time from entering the picture at the upper edge until leaving at the lower edge; for a non-face target it records the time from entering the picture at the lower edge until leaving at the upper edge.
Further, in step S2, the manner in which the time detection module records the time includes the following steps:
if the times at which several detected objects enter and leave the picture differ only slightly, their average is taken as the stay time;
if the differences are large, the detected objects are divided into a short-time class and a long-time class, and each class is averaged to give a shorter time and a longer time.
Further, in step S5, the integration judging module calculates the whole-flow time by segmented addition: the average time of each flow segment is summed to obtain the whole-flow time; at the same time the whole-flow times and crowding degrees of several candidate vehicles are compared, the optimal travel plan is determined, and each flow time together with the optimal one is sent to the server module.
The present invention also provides a computer-readable storage medium having stored thereon a program executable by a processor, characterized in that the program, when executed by a computer, realizes the following steps:
step S1, the server module receives a starting point, an ending point and user preference requirements and sends the starting point, the ending point and the user preference requirements to the three detection modules;
step S2, the inbound detection module receives the requirement sent by the server module, acquires inbound video by using a camera, and analyzes and obtains inbound congestion degree and inbound time by using the pedestrian detection module and the time detection module to send the inbound congestion degree and the inbound time to the integration judgment module;
step S3, the vehicle detection module receives the requirements sent by the server module, acquires various videos in the vehicle by using a camera, acquires time and crowding degree in the vehicle by using the pedestrian detection module and sends the time and crowding degree to the integration judgment module;
step S4, the outbound detection module receives the requirements sent by the server module, acquires outbound videos by using a camera, analyzes and obtains outbound crowding degree and outbound time by using the pedestrian detection module and the time detection module, and sends the outbound crowding degree and the outbound time to the integration judgment module;
and S5, the integration judging module judges the whole flow time according to the congestion degree, the time in the transportation means, the inbound time and the outbound time, and sends the whole flow time to the server module.
Further, in step S3, the pedestrian detection module detects the congestion degree by using pedestrian detection to count the number of people in the current frame of the video, thereby obtaining the congestion degree;
in steps S2 and S4, the time detection module records time as follows: for a face target it records the time from entering the picture at the upper edge until leaving at the lower edge; for a non-face target it records the time from entering the picture at the lower edge until leaving at the upper edge;
in step S2, the manner in which the time detection module records the time includes the following steps:
if the times at which several detected objects enter and leave the picture differ only slightly, their average is taken as the stay time;
if the differences are large, the detected objects are divided into a short-time class and a long-time class, and each class is averaged to give a shorter time and a longer time.
In step S5, the integration judging module calculates the whole-flow time by segmented addition: the average time of each flow segment is summed to obtain the whole-flow time; at the same time the whole-flow times and crowding degrees of several candidate vehicles are compared, the optimal travel plan is determined, and each flow time together with the optimal one is sent to the server module.
The invention achieves notable benefits:
The scheme supports a reminder function: the user enters the travel starting point and approximate departure time in advance and selects an acceptable fluctuation window (for example, half an hour before and after). The scheme then provides an optimal plan based on real-time data and the history of the same day, so that users unfamiliar with the route obtain an optimal travel plan and experience, while users familiar with the route avoid errors caused by relying on experience alone;
It also fills functional gaps in existing travel apps: maps, smart-bus apps and the like can only display an approximate travel duration, the road congestion degree and real-time bus positions, and are far less detailed and user-friendly than the present scheme.
Drawings
FIG. 1 is a schematic diagram of the system composition of the present invention.
Fig. 2 is a schematic clustering diagram of faces and non-faces in the same picture.
Detailed Description
The following detailed description of specific embodiments of the invention is provided in connection with the accompanying drawings and examples in order to provide a better understanding of the aspects of the invention and advantages thereof. However, the following description of specific embodiments and examples is for illustrative purposes only and is not intended to be limiting of the invention.
The invention provides an intelligent travel time prediction system based on machine learning. As shown in fig. 1, the system comprises a server module 1, an inbound detection module 2, a vehicle detection module 3, an outbound detection module 4 and an integration judgment module 5, all connected through a network, wherein: the server module 1 receives the starting point, the end point and the user preference requirements and sends them to the three detection modules; the inbound detection module 2 receives the requirements sent by the server module 1, uses machine learning to obtain the inbound congestion degree and the inbound time from the inbound video, and sends them to the integration judgment module 5; the vehicle detection module 3 receives the requirements sent by the server module 1, uses machine learning to obtain the congestion degree and the in-vehicle time from the vehicle video, and sends them to the integration judgment module 5; the outbound detection module 4 receives the requirements sent by the server module 1, uses machine learning to obtain the outbound congestion degree and the outbound time from the outbound video, and sends them to the integration judgment module 5; the integration judgment module 5 judges the whole-flow time according to the congestion degree, the in-vehicle time, the inbound time and the outbound time, and sends it to the server module 1.
The vehicle detection module 3 comprises one or more cameras 31 and a pedestrian detection module 32; the cameras 31 collect video inside the vehicle and send it to the pedestrian detection module 32, which analyzes the video using a machine learning neural network technique to obtain the crowding degree and the in-vehicle time.
The inbound detection module 2 comprises one or more cameras 21, a pedestrian detection module 22 and a time detection module 23; the camera 21 collects the inbound video and sends it to the pedestrian detection module 22 and the time detection module 23, which analyze the video using a machine learning neural network technique to obtain the inbound congestion degree and the inbound time.
The outbound detection module 4 comprises one or more cameras 41, a pedestrian detection module 42 and a time detection module 43; the camera 41 collects the outbound video and sends it to the pedestrian detection module 42 and the time detection module 43, which analyze the video using a machine learning neural network technique to obtain the outbound congestion degree and the outbound time.
A travel time prediction method using the above intelligent travel time prediction system based on machine learning comprises the following steps:
step S1, the server module 1 receives the starting point and the end point and the user preference requirement and sends the starting point, the end point and the user preference requirement to the three detection modules;
step S2, the inbound detection module 2 receives the request sent from the server module 1, acquires inbound video data by using the camera 21, and analyzes and obtains the inbound congestion level and inbound time by using the pedestrian detection module 22 and the time detection module 23, and sends the inbound congestion level and inbound time to the integration judgment module 5;
the method for recording time by the time detection module comprises the following steps: if the time difference between the picture input and output of a plurality of detection objects is not large, the average value is calculated as the stay time; if the time difference between the input and output pictures of a plurality of detection objects is larger, dividing the detection objects into two types according to short time and long time, and respectively averaging to obtain shorter time and longer time; for example, in a subway traveling mode, for inbound, the situation of no baggage is possible in a shorter time, security inspection is not needed, and the situation of queuing security inspection is possible in a longer time.
Step S3, the vehicle detection module 3 receives the requirements sent by the server module 1, obtains real-time video data of various public transportation by using the camera 31, analyzes the in-vehicle time and the crowding degree by using the pedestrian detection module 32, and sends them to the integration judgment module 5; the vehicle detection module detects the crowding degree by using the pedestrian detection module 32 to count the number of people in the current frame of the video, thereby obtaining the crowding degree.
step S4, the outbound detection module 4 receives the request sent from the server module 1, obtains outbound video data by using the camera 41, and obtains the degree of outbound congestion and the outbound time by using the pedestrian detection module 42 and the time detection module 43 through analysis, and sends the obtained degree of outbound congestion and the outbound time to the integration judgment module 5;
in step S5, the integration determination module 5 integrates the congestion degree, the in-vehicle time, the inbound time, and the outbound time to determine the entire process time, and sends the entire process time to the server module 1. The method for calculating the time of the whole flow by the integration judging module is a method of adding in sections, average time of each flow section is added to obtain time of the whole flow, time and crowding degree of the whole flow of a plurality of vehicles are compared at the same time, optimal travel time is determined, and each flow time and the optimal flow time are sent to the server module.
In steps S2 and S4, the time detection module records time as follows: for a face target, from entering the picture at its upper edge until leaving at the lower edge; for a non-face target, from entering the picture at the lower edge until leaving at the upper edge. Note that these sampling edges depend on the camera: the directionality of the whole process, from entering the station to boarding, should be kept consistent across cameras, otherwise the data become confused and the flow time statistics are affected.
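The direction rule can be sketched as follows, assuming each tracked target yields a list of (timestamp, vertical position) samples with y = 0 at the top of the picture; the edge margin and the function name are invented for the example:

```python
def dwell_time(track, frame_height, is_face, margin=20):
    """Dwell time of one tracked target from its (timestamp, y_center) samples.

    A face target is timed from entering at the upper edge until leaving at the
    lower edge; a non-face target from the lower edge until the upper edge.
    `margin` (in pixels) defines the edge bands and is an assumption.
    """
    if is_face:
        entries = [t for t, y in track if y < margin]
        exits = [t for t, y in track if y > frame_height - margin]
    else:
        entries = [t for t, y in track if y > frame_height - margin]
        exits = [t for t, y in track if y < margin]
    if not entries or not exits:
        return None  # the target never crossed both edges in this picture
    return max(exits) - min(entries)

# A face target entering at the top at t = 2.0 s and leaving at the bottom at t = 9.5 s.
print(dwell_time([(2.0, 10), (4.0, 200), (7.0, 500), (9.5, 710)],
                 frame_height=720, is_face=True))  # -> 7.5
```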
A computer-readable storage medium stores the above-mentioned program; when executed by a computer, the program implements the following steps:
step S1, the server module 1 receives the starting point and the end point and the user preference requirement and sends the starting point, the end point and the user preference requirement to the three detection modules;
step S2, the inbound detection module 2 receives the request sent from the server module 1, acquires inbound video data by using the camera 21, and analyzes and obtains the inbound congestion level and inbound time by using the pedestrian detection module 22 and the time detection module 23, and sends the inbound congestion level and inbound time to the integration judgment module 5;
the method for recording time by the time detection module comprises the following steps: if the time difference between the picture input and output of a plurality of detection objects is not large, the average value is calculated as the stay time; if the time difference between the input and output pictures of a plurality of detection objects is larger, dividing the detection objects into two types according to short time and long time, and respectively averaging to obtain shorter time and longer time; for example, in a subway traveling mode, for inbound, the situation of no baggage is possible in a shorter time, security inspection is not needed, and the situation of queuing security inspection is possible in a longer time.
Step S3, the vehicle detection module 3 receives the requirements sent by the server module 1, obtains real-time video data of various public transportation by using the camera 31, analyzes the in-vehicle time and the crowding degree by using the pedestrian detection module 32, and sends them to the integration judgment module 5; the vehicle detection module 3 detects the crowding degree by using the pedestrian detection module 32 to count the number of people in the current frame of the video, thereby obtaining the crowding degree.
step S4, the outbound detection module 4 receives the request sent from the server module 1, obtains outbound video data by using the camera 41, and obtains the degree of outbound congestion and the outbound time by using the pedestrian detection module 42 and the time detection module 43 through analysis, and sends the obtained degree of outbound congestion and the outbound time to the integration judgment module 5;
In step S5, the integration judgment module 5 determines the whole-flow time according to the congestion degree, the in-vehicle time, the inbound time and the outbound time, and sends it to the server module 1. The integration judging module calculates the whole-flow time by segmented addition: the average time of each flow segment is summed to obtain the whole-flow time; at the same time the whole-flow times and crowding degrees of several candidate vehicles are compared, the optimal travel plan is determined, and each flow time together with the optimal one is sent to the server module.
The training method of the neural network model, for example a CNN, is as follows. First, a large number of photos are collected, and the persons and faces in them are manually boxed, recording the coordinates, centre point and class (pedestrian or face) of each. Second, the CNN model is designed: the number of convolution layers, the number of fully connected layers, the number of convolution kernels and the loss function of the neural network are chosen. Third, training is carried out, which requires a large amount of computation and is typically done on a high-performance GPU server. After training, the parameters of the whole neural network are obtained.
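A minimal PyTorch-style sketch of this training procedure is given below; it assumes a data loader that yields manually annotated image crops with a class label (pedestrian, face or background) and a bounding box, and the layer sizes, class scheme and loss combination are illustrative assumptions. A production pedestrian or face detector (e.g. SSD or Faster R-CNN) would be considerably more involved.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """A small CNN with convolution layers, fully connected layers and two
    heads: class scores (pedestrian / face / background) and a bounding box."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(                       # convolution layers
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU())
        self.cls_head = nn.Linear(128, num_classes)           # pedestrian / face / background
        self.box_head = nn.Linear(128, 4)                     # centre point and size

    def forward(self, x):
        h = self.fc(self.features(x))
        return self.cls_head(h), self.box_head(h)

def train(model, loader, epochs=10,
          device="cuda" if torch.cuda.is_available() else "cpu"):
    """Training loop: classification loss plus bounding-box regression loss."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    cls_loss, box_loss = nn.CrossEntropyLoss(), nn.SmoothL1Loss()
    for _ in range(epochs):
        for images, labels, boxes in loader:                  # manually annotated crops
            images, labels, boxes = images.to(device), labels.to(device), boxes.to(device)
            logits, pred_boxes = model(images)
            loss = cls_loss(logits, labels) + box_loss(pred_boxes, boxes)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model  # the parameters of the whole neural network after training
```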
The whole working flow is as follows. First, if the video comes from a camera inside a vehicle, the pedestrian detection module, which processes the video data with the neural network model CNN, detects the pedestrians in each video frame and records the total number of people classified as pedestrians in the picture together with their coordinates; this total is the crowding degree.
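A minimal sketch of this counting step; the `detect_pedestrians` callable stands in for the trained CNN detector described above, and its name and return format (a list of boxes, one per person) are assumptions:

```python
import cv2

def crowding_degree(video_source, detect_pedestrians):
    """Read the current frame of the camera stream, detect the pedestrians in
    it and return their count (the crowding degree) and their coordinates."""
    cap = cv2.VideoCapture(video_source)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    boxes = detect_pedestrians(frame)        # list of (x, y, w, h), one box per person
    return {"count": len(boxes), "coordinates": boxes}
```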
At the same time, the pictures from camera one at the station entrance and exit begin to be processed, undergoing both pedestrian detection and face detection.
The pedestrian detection module, which processes the video data with the neural network model CNN, detects the pedestrians in these frames and records the total number of people classified as pedestrians in the picture together with their coordinates, i.e. the crowding degree.
Face detection is carried out at the same time; the time detection module, which likewise processes the video data with the neural network model CNN, selects a target in the camera picture and then tracks it, obtaining the total duration for which the target stays in the picture.
As shown in fig. 2, the picture is divided into four vertical parts from left to right. As before, the total number of targets belonging to the "face" and "non-face" classes within the frame, together with their coordinates in the frame, is recorded. The two totals are compared; using the coordinates of the smaller class, the region in which that class clusters more densely is outlined, targets are selected there for tracking, and the stay time of each target in the picture is then calculated.
The target selection and tracking method is as follows. With the picture divided into the four parts from left to right, the following cases are distinguished:
1. The distribution is dense: the samples of one class form a single cluster mapped onto one (or two) of the four parts; two samples at the edges of that part are randomly selected (one from each of the two parts, if two) as tracking targets, and the total time each stays in the picture is recorded. The other class randomly selects tracking targets at the edges of the remaining three parts (or the remaining two parts), i.e. three tracking targets in three blocks (or two in two blocks), and their durations are recorded.
2. The distribution is looser but still forms two clusters mapped onto one (or two) of the four parts: two edge samples of that part (or one from each of the two parts) are selected as tracking targets and their total time in the picture is recorded; the other class again selects tracking targets at the edges of the remaining three (or two) parts and records their durations.
3. The distribution is sparse: a sample is randomly taken at the edges of two of the parts and tracked, and the other class samples and tracks at the edges of the remaining two parts, each recording a duration.
A tracking algorithm is then chosen, for example a KCF tracker or a Kalman-filter-based tracker, and the coordinates of the sampled targets are used as input for multi-target tracking. The duration for which each target appears in the picture is the output; it is recorded in turn according to whether the target is leaving or entering the picture, following the time-recording rule of step S2, and the next round of target selection and tracking is prepared.
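A rough sketch of this selection-and-tracking step is given below. It splits the picture into four vertical parts, draws sample boxes from the left and right edges of the densest part, and hands each sample to an OpenCV KCF tracker whose lifetime in the picture gives the dwell time. The box format, the sampling details and the helper names are assumptions; the KCF constructor requires opencv-contrib-python and in some builds lives under cv2.legacy.

```python
import cv2

def pick_edge_samples(boxes, frame_width):
    """Assign each (x, y, w, h) box to one of four vertical parts of the picture
    and return one box from the left edge and one from the right edge of the
    part where most boxes of this class fall."""
    part = lambda b: min(3, int((b[0] + b[2] / 2) * 4 / frame_width))
    densest = max(range(4), key=lambda p: sum(1 for b in boxes if part(b) == p))
    in_part = sorted((b for b in boxes if part(b) == densest), key=lambda b: b[0])
    return [in_part[0], in_part[-1]] if in_part else []

def track_durations(cap, samples, fps):
    """Track each sampled box with a KCF tracker and return, for every target,
    the number of seconds it remained visible in the picture."""
    ok, frame = cap.read()
    if not ok:
        return []
    trackers = []
    for box in samples:
        t = cv2.TrackerKCF_create()           # cv2.legacy.TrackerKCF_create in some builds
        t.init(frame, tuple(int(v) for v in box))
        trackers.append({"tracker": t, "frames": 0, "alive": True})
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for tr in trackers:
            if tr["alive"]:
                found, _ = tr["tracker"].update(frame)
                tr["alive"] = found           # tracker lost => target left the picture
                tr["frames"] += 1 if found else 0
    return [tr["frames"] / fps for tr in trackers]
```

A Kalman-filter-based tracker could be substituted for the KCF tracker in the same loop without changing the surrounding logic.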
Note that the sampling edges correspond to the cameras: the directionality of the whole process from entering the station to boarding should be kept consistent, otherwise the data become confused and the flow time statistics are affected. For example, face detection in camera one serves as outbound detection, and its directionality is: targets enter the picture at the upper edge and leave the picture at the lower edge; face detection in the next camera, camera two, serves as inbound detection, and its directionality is likewise: targets enter the picture at the upper edge and leave the picture at the lower edge.
Finally, the integration judging module 5 is used for judging the whole process time according to the congestion degree, the time in the vehicle, the inbound time and the outbound time, and the whole process time is sent to the server module 1.
Examples
The embodiment of the invention can be used together with travel booking websites such as Ctrip and Tongcheng for air tickets, hotels and the like. If the user authorizes it, the system can obtain the user's itinerary, formulate a travel plan according to the user's preferences before the trip, and remind the user to set out. More specifically, suppose the customer books on Ctrip an air ticket from Shenzhen to Beijing departing at ten o'clock tomorrow evening; a travel plan is then made according to the system and method provided here together with the user's preferences. The preferences may be, for example, subway or taxi, accepting departure up to one hour in advance, not accepting crowding, and so on.
The whole flow on the user's mobile terminal is: log in, select the starting point and destination, set preferences (not crowded, least time-consuming, etc.), and set the estimated travel time and the acceptable fluctuation window (within half an hour, etc.). If the estimated travel time is set to the current time, the mobile terminal immediately shows the travel routes, the total duration, the time from the station entrance to the inside of the vehicle (including the displayed time until the next vehicle reaches the starting station) and the current crowding of the subway, bus and so on, and the user selects a travel plan according to preference. If the estimated travel time is still some way off, the terminal starts automatically within the fluctuation window before the expected departure and runs in the background; based on the travel conditions and history at that moment and the user's travel preferences, it gives an optimal plan within the fluctuation window and reminds the user to set out. If not satisfied, the user can check the current travel conditions and choose a preferred plan manually.
The invention is applicable to all travel scenarios in which the starting point or the destination is a public place, for example high-speed rail stations, airports and other public places covered by camera surveillance. The user can learn the crowding of each link (ticket window, baggage check-in, security check, etc.) at the next moment and the time each link currently requires, and can adjust his or her own state accordingly (continue resting or set out immediately).
For example, according to the system and method, a plan is formulated for a user who needs forty minutes from home to the airport and must check in at least one hour before the flight: the system starts running in the background two hours and forty minutes in advance, monitors the state of all travel modes from home to the airport, gives the optimal travel mode and departure time by combining the customer's preferences and the history of this journey, and informs the customer in advance so that he or she can set out.
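The advance-start arithmetic in this example can be sketched as follows (the function name, the extra monitoring margin of one hour and the example flight time are assumptions):

```python
from datetime import datetime, timedelta

def monitoring_start(departure, travel_minutes=40, checkin_minutes=60, margin_minutes=60):
    """When the system should begin monitoring travel options: travel time to
    the airport + check-in lead + an extra monitoring margin before departure."""
    lead = timedelta(minutes=travel_minutes + checkin_minutes + margin_minutes)
    return departure - lead

flight = datetime(2018, 1, 17, 22, 0)    # a 10 p.m. flight from Shenzhen to Beijing
print(monitoring_start(flight))           # 2018-01-17 19:20:00, i.e. 2 h 40 min earlier
```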
A further embodiment may, according to the user preference (not crowded, shortest time, etc.), provide the overall route and duration of travel by public transport, together with the real-time duration required from the station entrance to the inside of the vehicle and the crowding degree. This embodiment can also combine the user's travel plan (for example a ticket booking record obtained from the device) with the user's preferences (not crowded, shortest travel time, etc.) to give the optimal plan automatically, without manual setting, and remind the user to depart.
Finally, it should be noted that: the above examples are only illustrative of the invention and are not intended to be limiting of the embodiments. Other variations or modifications of the various aspects of the invention are within the scope of the invention as described above.

Claims (8)

1. An intelligent travel time prediction system based on machine learning, the system comprising a server module, an inbound detection module, a vehicle detection module, an outbound detection module and an integration judgment module, the modules being connected through a network, wherein:
the server module receives the starting point, the end point and the user preference requirements and sends them to the three detection modules;
the inbound detection module is used for receiving the requirements sent by the server module, analyzing the inbound video by using a machine learning technology to obtain the inbound congestion degree and the inbound time, and sending them to the integration judgment module, wherein the inbound detection module processes the inbound video by using a neural network model CNN to perform pedestrian detection on the inbound video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, this total number being the inbound congestion degree, and the inbound detection module processes the inbound video by using the neural network model CNN to select targets in the inbound video picture and track them, thereby obtaining the total stay time of the targets in the picture, this total stay time being the inbound time;
the vehicle detection module is used for receiving the requirements sent by the server module, analyzing the vehicle video by using a machine learning technology to obtain the congestion degree and the in-vehicle time, and sending them to the integration judgment module, wherein the vehicle detection module processes the vehicle video by using the neural network model CNN to perform pedestrian detection on the vehicle video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, this total number being the congestion degree, and the vehicle detection module processes the vehicle video by using the neural network model CNN to select a target in the vehicle video picture and track it, thereby obtaining the total stay time of the target in the picture, this total stay time being the in-vehicle time;
the outbound detection module is used for receiving the requirements sent by the server module, analyzing the outbound video by using a machine learning technology to obtain the outbound congestion degree and the outbound time, and sending them to the integration judgment module, wherein the outbound detection module processes the outbound video by using the neural network model CNN to perform pedestrian detection on the outbound video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, this total number being the outbound congestion degree, and the outbound detection module processes the outbound video by using the neural network model CNN to select a target in the outbound video picture and track it, thereby obtaining the total stay time of the target in the picture, this total stay time being the outbound time;
the integration judgment module integrates the congestion degree, the in-vehicle time, the inbound time and the outbound time to judge the whole-flow time and sends it to the server module;
the method for calculating the time of the whole flow by the integration judging module is a method of adding in sections, average time of each flow section is added to obtain time of the whole flow, time and crowding degree of the whole flow of a plurality of vehicles are compared at the same time, optimal travel time is determined, and each flow time and the optimal flow time are sent to the server module.
2. The system of claim 1, wherein the vehicle detection module comprises a camera and a pedestrian detection module, the camera collects video in the vehicle and sends the video to the pedestrian detection module, and the pedestrian detection module analyzes the video data to obtain the congestion degree and the time in the vehicle by using a neural network technology of a machine learning technology.
3. The system of claim 1, wherein the inbound detection module comprises a camera, a pedestrian detection module, a time detection module; the camera acquires an inbound video and sends the inbound video to the pedestrian detection module and the time detection module, and the pedestrian detection module and the time detection module analyze the inbound video by utilizing a neural network technology of a machine learning technology to obtain an inbound congestion degree and an inbound time;
the outbound detection module comprises a camera, a pedestrian detection module and a time detection module; the camera collects outbound videos and sends the outbound videos to the pedestrian detection module and the time detection module, and the pedestrian detection module and the time detection module analyze the outbound videos by utilizing a neural network technology of a machine learning technology to obtain outbound crowding degree and outbound time.
4. A method for travel time prediction by an intelligent travel time prediction system based on machine learning, the method comprising the following steps:
step S1, a server module receives a starting point, an ending point and user preference requirements and sends the starting point, the ending point and the user preference requirements to three detection modules;
step S2, an inbound detection module receives the requirements sent by the server module, acquires the inbound video by using a camera, analyzes and obtains the inbound crowding degree and the inbound time by using a pedestrian detection module and a time detection module, and sends them to an integration judgment module, wherein the pedestrian detection module processes the inbound video by using a neural network model CNN to perform pedestrian detection on the inbound video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, which is the inbound crowding degree, and the time detection module processes the inbound video by using the neural network model CNN, selects a target in the inbound video picture, tracks the target, and further obtains the total stay time of the target in the picture, wherein the total stay time is the inbound time;
step S3, a traffic tool detection module receives the requirements sent by the server module, acquires various public transportation videos by using a camera, analyzes and acquires time and congestion degree in a traffic tool by using a pedestrian detection module and a time detection module, and sends the time and congestion degree in the traffic tool to the integration judgment module, wherein the pedestrian detection module processes the public transportation videos by using a neural network model CNN to enable the public transportation videos to carry out pedestrian detection, records the total number of people belonging to pedestrians in a picture and coordinates thereof in the picture as the congestion degree, and the time detection module processes the public transportation videos by using a neural network model CNN, selects a target in the public transportation video picture, tracks the target, and further obtains the total stay time of the target in the picture as the time in the traffic tool;
step S4, an outbound detection module receives the requirements sent by the server module, acquires the outbound video by using a camera, analyzes and obtains the outbound crowding degree and the outbound time by using a pedestrian detection module and a time detection module, and sends them to the integration judgment module, wherein the pedestrian detection module processes the outbound video by using a neural network model CNN to perform pedestrian detection on the outbound video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, which is the outbound crowding degree, and the time detection module processes the outbound video by using the neural network model CNN, selects a target in the outbound video picture, tracks the target, and further obtains the total stay time of the target in the picture, wherein the total stay time is the outbound time;
step S5, the integration judging module integrates the crowding degree, the in-vehicle time, the inbound time and the outbound time to judge the whole flow time, and sends the whole flow time to the server module;
in step S5, the method for calculating the time of the whole flow by the integration judging module is a method of adding segments, average time of each flow segment is added to obtain time of the whole flow, time and congestion of the whole flow of several vehicles are compared at the same time, an optimal travel time is determined, and each flow time and the optimal flow time are sent to the server module.
5. The method according to claim 4, wherein in steps S2 and S4, the time detection module records time as follows: for a face target, the time is recorded from entering the picture at the upper edge until leaving the picture at the lower edge; for a non-face target, the time is recorded from entering the picture at the lower edge until leaving the picture at the upper edge.
6. The method according to claim 4, wherein in step S2, the manner in which the time detection module records time includes the steps of:
if the times at which several detected objects enter and leave the picture differ only slightly, their average is taken as the stay time;
if the differences are large, the detected objects are divided into a short-time class and a long-time class, and each class is averaged to give a shorter time and a longer time.
7. A computer-readable storage medium having stored thereon a program executable by a processor, characterized in that the program, when executed by a computer, realizes the steps of:
step S1, a server module receives a starting point, an ending point and user preference requirements and sends the starting point, the ending point and the user preference requirements to three detection modules;
step S2, an inbound detection module receives the requirements sent by the server module, acquires the inbound video by using a camera, analyzes and obtains the inbound crowding degree and the inbound time by using a pedestrian detection module and a time detection module, and sends them to an integration judgment module, wherein the pedestrian detection module processes the inbound video by using a neural network model CNN to perform pedestrian detection on the inbound video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, which is the inbound crowding degree, and the time detection module processes the inbound video by using the neural network model CNN, selects a target in the inbound video picture, tracks the target, and further obtains the total stay time of the target in the picture, wherein the total stay time is the inbound time;
step S3, a vehicle detection module receives requirements sent by the server module, acquires videos in various vehicles by using a camera, acquires time and crowding degree in the vehicles by using a pedestrian detection module and a time detection module, and sends the time and crowding degree in the vehicles to the integration judgment module, wherein the pedestrian detection module processes the videos in the vehicles by using a neural network model CNN to enable the videos in the vehicles to perform pedestrian detection, records the total number of people belonging to pedestrians in a picture and coordinates thereof in the picture as the crowding degree, and the time detection module processes the videos in the vehicles by using a neural network model CNN, selects a target in the video picture in the vehicles, tracks the target, and further obtains the total stay time of the target in the picture as the time in the vehicles;
step S4, an outbound detection module receives the requirements sent by the server module, acquires the outbound video by using a camera, analyzes and obtains the outbound crowding degree and the outbound time by using a pedestrian detection module and a time detection module, and sends them to the integration judgment module, wherein the pedestrian detection module processes the outbound video by using a neural network model CNN to perform pedestrian detection on the outbound video, recording the total number of people belonging to pedestrians in the picture and their coordinates in the picture, which is the outbound crowding degree, and the time detection module processes the outbound video by using the neural network model CNN, selects a target in the outbound video picture, tracks the target, and further obtains the total stay time of the target in the picture, wherein the total stay time is the outbound time;
s5, the integration judging module judges the whole flow time according to the congestion degree, the time in the transportation means, the inbound time and the outbound time, and sends the whole flow time to the server module;
in step S5, the method for calculating the time of the whole flow by the integration judging module is a method of adding segments, average time of each flow segment is added to obtain time of the whole flow, time and congestion of the whole flow of several vehicles are compared at the same time, an optimal travel time is determined, and each flow time and the optimal flow time are sent to the server module.
8. The storage medium according to claim 7, wherein:
in steps S2 and S4, the time detection module records time as follows: for a face target, the time is recorded from entering the picture at the upper edge until leaving the picture at the lower edge; for a non-face target, the time is recorded from entering the picture at the lower edge until leaving the picture at the upper edge;
in step S2, the manner in which the time detection module records the time includes the following steps:
if the times at which several detected objects enter and leave the picture differ only slightly, their average is taken as the stay time;
if the differences are large, the detected objects are divided into a short-time class and a long-time class, and each class is averaged to give a shorter time and a longer time.
CN201810039996.5A 2018-01-16 2018-01-16 Intelligent travel time prediction system, method and storage medium based on machine learning Active CN110046535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810039996.5A CN110046535B (en) 2018-01-16 2018-01-16 Intelligent travel time prediction system, method and storage medium based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810039996.5A CN110046535B (en) 2018-01-16 2018-01-16 Intelligent travel time prediction system, method and storage medium based on machine learning

Publications (2)

Publication Number Publication Date
CN110046535A CN110046535A (en) 2019-07-23
CN110046535B (en) 2023-06-23

Family

ID=67272989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810039996.5A Active CN110046535B (en) 2018-01-16 2018-01-16 Intelligent travel time prediction system, method and storage medium based on machine learning

Country Status (1)

Country Link
CN (1) CN110046535B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110647929B (en) * 2019-09-19 2021-05-04 北京京东智能城市大数据研究院 Method for predicting travel destination and method for training classifier
CN111062294B (en) * 2019-12-10 2024-03-22 北京文安智能技术股份有限公司 Passenger flow queuing time detection method, device and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4361300A (en) * 1980-10-08 1982-11-30 Westinghouse Electric Corp. Vehicle train routing apparatus and method
EP2568414A1 (en) * 2011-09-09 2013-03-13 Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO Surveillance system and method for detecting behavior of groups of actors
CN104243900A (en) * 2013-06-21 2014-12-24 中兴通讯股份有限公司 Vehicle arrival time prediction system and method
CN104021674B (en) * 2014-06-17 2016-07-06 武汉烽火众智数字技术有限责任公司 A kind of quick and precisely prediction vehicle method by road trip time
CN104463364B (en) * 2014-12-04 2018-03-20 中国科学院深圳先进技术研究院 A kind of Metro Passenger real-time distribution and subway real-time density Forecasting Methodology and system
CN105261206A (en) * 2015-09-29 2016-01-20 无锡高联信息技术有限公司 Common traffic service system and method used for calculating public traffic paths
US9531877B1 (en) * 2015-10-16 2016-12-27 Noble Systems Corporation Pacing outbound communications that generate inbound communications
CN106327632A (en) * 2016-08-17 2017-01-11 成都仁通融合信息技术有限公司 System and method for acquiring real-time road conditions of all stations of subway line
CN106485923B (en) * 2016-12-20 2019-01-18 武汉理工大学 A kind of public transport crowding real-time status acquisition method and device
CN107527501A (en) * 2017-06-05 2017-12-29 交通运输部公路科学研究所 The building method of travel time data and the method for predicting the motorway journeys time between a kind of highway station

Also Published As

Publication number Publication date
CN110046535A (en) 2019-07-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant