WO2020215793A1 - Method and device for predicting and locating urban agglomeration events - Google Patents

Method and device for predicting and locating urban agglomeration events

Info

Publication number
WO2020215793A1
WO2020215793A1 · PCT/CN2019/130287 · CN2019130287W
Authority
WO
WIPO (PCT)
Prior art keywords
urban
layer
neural network
time
event
Prior art date
Application number
PCT/CN2019/130287
Other languages
English (en)
French (fr)
Inventor
石婧文
须成忠
叶可江
王洋
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2020215793A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Definitions

  • the invention relates to the field of traffic monitoring, and in particular to a method and device for predicting and positioning urban agglomeration events.
  • An urban agglomeration event is a phenomenon in which a large number of moving objects (taxis, pedestrians, etc.) converge in a small area within a period of time.
  • Typical urban gathering events include traffic jams and crowd gatherings at concerts.
  • Large-scale urban agglomeration events have an important impact on urban traffic and urban safety. Predicting the occurrence and location of gathering events in advance can help relevant departments plan and adjust police and other resources, ensure the healthy operation of the city, and improve people's life satisfaction.
  • with IoT sensor technology, we can already collect a large amount of mobility data, such as traffic and mobile-phone data.
  • human activities are very complicated. At present, many studies can only detect urban agglomeration events after the fact; analyzing the trends of such events from messy mass data and predicting their likely locations remains a very big challenge.
  • the prior art finds one or two features based on observation, and uses this as a basis for subsequent algorithms to detect urban agglomeration events. This process may cause a large amount of loss of original information.
  • the current technology can only use data from a limited adjacent area when judging candidate positions, so relatively little information is available.
  • it is difficult for the existing technology to predict cluster events, because geographic cluster-event pattern mining is already very complicated, and few systems can perform joint pattern mining in the time dimension.
  • the embodiments of the present invention provide a method and device for predicting and locating urban agglomeration events, so as to at least avoid the existing technical problem that long-term prediction and locating of key areas cannot be performed.
  • a method for predicting and locating urban agglomeration events including:
  • Class activation mapping is performed on the time series and spatial characteristics of the multi-frame pictures to determine the location of the urban agglomeration event.
  • converting the actual trajectory of a city vehicle in a period of time into a multi-frame picture at a preset time interval includes:
  • divide the city evenly into several small squares according to latitude and longitude; count the actual trajectories of the city's vehicles at a preset time interval; at each interval, generate multiple pictures corresponding to the small squares, with each group of generated pictures forming one frame, yielding multiple frames of pictures over the period; each picture represents, for each small square, the number of vehicles turning in from an adjacent direction during the previous time period.
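The grid-counting step above can be sketched as follows. The function name, grid size, and city bounding-box coordinates are illustrative assumptions, and the sketch counts vehicles per square rather than per adjacent turn-in direction (the patent would generate one picture per direction):

```python
import numpy as np

# Hypothetical sketch of the trajectory-to-frames preprocessing: divide the
# city bounding box into grid x grid squares and count vehicle observations
# per square per time interval. Names and coordinates are illustrative.

def trajectories_to_frames(records, lat_range, lon_range, grid=32, n_frames=20):
    """records: iterable of (frame_index, lat, lon) vehicle observations.
    Returns an (n_frames, grid, grid) array of per-square vehicle counts."""
    lat0, lat1 = lat_range
    lon0, lon1 = lon_range
    frames = np.zeros((n_frames, grid, grid), dtype=np.int32)
    for t, lat, lon in records:
        if not (lat0 <= lat < lat1 and lon0 <= lon < lon1):
            continue  # observation falls outside the city bounding box
        row = int((lat - lat0) / (lat1 - lat0) * grid)
        col = int((lon - lon0) / (lon1 - lon0) * grid)
        frames[t, row, col] += 1
    return frames

records = [(0, 22.54, 114.05), (0, 22.54, 114.05), (1, 22.60, 114.10)]
frames = trajectories_to_frames(records, (22.4, 22.8), (113.8, 114.5), grid=8, n_frames=2)
```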
  • the method for predicting and positioning urban agglomeration events also includes:
  • Event simulation is performed on the actual trajectories of urban vehicles over a period of time; the simulated traffic flow is added to the actual traffic flow to form simulated event samples, which, together with the multi-frame pictures, serve as input data for the deep neural network.
  • the deep neural network includes sequentially set:
  • Convolutional layer used to retain local spatial information and extract more comprehensive spatial features
  • Convolutional long short-term memory (ConvLSTM) layer, used for visual time-series prediction on the multi-frame pictures and for generating descriptive text from the image sequences, to extract more comprehensive temporal features
  • Dilated convolutional layer, used to expand the field of view of neurons in the deep neural network without pooling.
  • the convolutional layer computes H_t = f(W * X_t + b), where:
  • H_t represents the output of the convolutional layer
  • X_t represents the input of the convolutional layer
  • * represents the convolution operation
  • W and b are training parameters
  • f is the rectified linear activation function ReLU.
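A minimal numpy sketch of this layer; the kernel values, bias, and single-channel input are illustrative assumptions (and, as in most deep-learning frameworks, the sliding product is a cross-correlation):

```python
import numpy as np

# Sketch of the convolutional layer H_t = f(W * X_t + b) with f = ReLU.
# Shapes and parameter values are illustrative, not from the patent.

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(X, W, b):
    """Valid 2D convolution (cross-correlation) of single-channel X with kernel W."""
    kh, kw = W.shape
    h = X.shape[0] - kh + 1
    w = X.shape[1] - kw + 1
    H = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            H[i, j] = np.sum(X[i:i+kh, j:j+kw] * W) + b
    return relu(H)

X_t = np.arange(16, dtype=float).reshape(4, 4)   # one input frame
W = np.array([[0.0, 1.0], [-1.0, 0.0]])          # training parameter W
H_t = conv2d(X_t, W, b=4.0)                      # layer output H_t
```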
  • the convolutional long short-term memory (ConvLSTM) layer includes:
  • Convolutional neural network layer, used to generate descriptive text from multi-frame image sequences
  • the convolutional neural network layer is combined with the long short-term memory layer to extract temporal features.
  • using the time-series and spatial features of the multi-frame pictures to perform probability prediction includes:
  • the entire event is assumed to occur over n consecutive frames, where n is an integer ≥ 1
  • the probability of the aggregation event occurring at the r-th frame during training is set to r/n, where r is an integer ≤ n.
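The labelling rule above can be written out directly; `frame_probabilities` is a hypothetical helper name:

```python
# Sketch of the label scheme: if an aggregation event unfolds over n
# consecutive frames, the r-th frame is labelled with probability r/n.

def frame_probabilities(n):
    """n: number of consecutive frames the event spans (integer >= 1)."""
    return [r / n for r in range(1, n + 1)]

probs = frame_probabilities(20)  # 0.05, 0.10, ..., 1.0 for a 20-frame event
```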
  • the layer structure of the class activation mapping is formed by the convolutional layer, the average pooling layer, and the fully connected layer in sequence;
  • the class activation mapping uses the binary cross-entropy loss function in the classification judgment, expressed as: loss_1 = -(1/k) Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1-ŷ_i)·log(1-y_i)], where:
  • y_i is the prediction result in the classification judgment
  • ŷ_i is the expected result in the classification judgment
  • i represents the i-th frame of the input sequence
  • k is the total number of frames in the input sequence
  • y_i′ is the prediction result in the probability prediction and ŷ_i′ is the expected result in the probability prediction, with a corresponding loss loss_2;
  • the loss of the entire deep neural network is the sum of the two parts:
  • Loss = loss_1 + loss_2.
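A hedged sketch of the combined loss: loss_1 is the binary cross-entropy named above, while the form of loss_2 is an assumption here (mean squared error is a natural choice for the continuous probability output, but the patent does not name it):

```python
import numpy as np

# Sketch of the two-part loss: binary cross-entropy for the classification
# head (loss1) and, since cross-entropy does not apply to the continuous
# probability output, an ASSUMED mean-squared-error term for the probability
# head (loss2). Variable names mirror the patent's y_i, ŷ_i, y_i', ŷ_i'.

def total_loss(y, y_hat, yp, yp_hat, eps=1e-7):
    """y, y_hat: predicted / expected class labels per frame (k frames).
    yp, yp_hat: predicted / expected event probabilities per frame."""
    y = np.clip(np.asarray(y, dtype=float), eps, 1 - eps)
    y_hat = np.asarray(y_hat, dtype=float)
    loss1 = -np.mean(y_hat * np.log(y) + (1 - y_hat) * np.log(1 - y))
    loss2 = np.mean((np.asarray(yp) - np.asarray(yp_hat)) ** 2)
    return loss1 + loss2

loss = total_loss(y=[0.9, 0.1], y_hat=[1, 0], yp=[0.5, 0.2], yp_hat=[0.5, 0.25])
```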
  • a device for predicting and locating urban agglomeration events including:
  • the picture generating unit is used to convert the actual trajectory of the city vehicle in a period of time into multiple frames of pictures at a preset time interval;
  • the feature extraction unit is used for feature extraction of multi-frame pictures using a deep neural network to extract the time series and spatial features of the multi-frame pictures;
  • the probability prediction unit is used to make probability prediction using the time series and spatial characteristics of the multi-frame pictures, and calculate the probability of urban agglomeration events;
  • the class activation mapping unit is used to perform class activation mapping on the time series and spatial features of the multi-frame pictures, and determine the location of the urban agglomeration event.
  • the device for predicting and positioning urban agglomeration events further includes:
  • the event simulation unit is used to perform event simulation on the actual trajectories of urban vehicles over a period of time; the simulated traffic flow is added to the actual traffic flow to form simulated event samples, which, together with the multi-frame pictures, serve as input data for the deep neural network.
  • the method and device for predicting and locating urban gathering events in the embodiment of the present invention use a deep neural network to extract the time-series and spatial features of multi-frame pictures, then perform probability prediction and class activation mapping on those features to determine the probability and location of urban agglomeration events; they can not only detect agglomeration events, but also predict them in advance and locate the gathering location.
  • the present invention predicts and locates urban agglomeration events in advance based on a deep neural network under complex Internet of Things mobile data, and can perform long-span prediction and positioning of key areas where agglomeration events have occurred or been simulated.
  • Figure 1 is a flow chart of the method for predicting and positioning urban agglomeration events according to the present invention
  • Figure 2 is a specific flow chart of the method for predicting and positioning urban agglomeration events according to the present invention
  • Fig. 3 is a schematic diagram of the method for predicting and positioning urban agglomeration events according to the present invention.
  • FIG. 4 is a connection block diagram of the traffic flow prediction device of the present invention.
  • Fig. 5 is a detailed connection block diagram of the method for predicting and positioning urban agglomeration events of the present invention.
  • the prior art finds one or two features based on observation, and uses this as a basis for subsequent algorithms to detect urban agglomeration events. This process may cause a large amount of loss of original information, and the deep neural network used in the present invention can learn and extract features by itself.
  • the current technology can only use the data information in the adjacent limited area in the judgment of the candidate position, and the deep neural network used in the present invention can exponentially expand the field of view of the neuron, and can use the information of a wider area.
  • Existing technology is difficult to predict aggregation events, because geographic aggregation event pattern mining is already very complicated, and there are few systems that can perform joint time-dimensional pattern mining.
  • the present invention uses the powerful computing power of GPU to analyze the areas prone to aggregation. After collection or simulation, the occurrence and location of aggregation events can be predicted in a large time span.
  • the main problem to be solved by the present invention is to predict and locate urban agglomeration events in advance based on a deep neural network under complex Internet of Things mobile data, and be able to predict and locate key areas where agglomeration events have occurred or simulated for a long time span.
  • the present invention uses a new deep neural network structure to predict and locate urban gathering events in advance, transforming gathering-event prediction into classification and regression problems. Specifically, given a series of observed time steps X_t, the output is a predicted sequence Y_t indicating whether an aggregation event occurs, the probability P_t of the aggregation event occurring immediately, and the location L_t where the aggregation event may occur.
  • the present invention is mainly divided into three parts: feature extraction, probability prediction, and class activation mapping.
  • the relationship between the three is: first, feature extraction is performed on the input data set, and the extracted features are simultaneously input to the probability prediction and class activation mapping parts.
  • a method for predicting and locating urban agglomeration events including:
  • Step S100 Convert the actual trajectory of the city vehicle in a period of time into multiple frames of pictures at preset time intervals;
  • Step S102 Perform feature extraction on the multi-frame pictures using a deep neural network, and extract the time-series and spatial features of the multi-frame pictures;
  • Step S104 Probabilistic prediction is performed by using the time sequence characteristics and spatial characteristics of the multi-frame pictures to calculate the probability of occurrence of urban agglomeration events;
  • Step S106 Perform class activation mapping on the time series feature and spatial feature of the multi-frame pictures, and determine the location where the city cluster event occurs.
  • the method for predicting and locating urban agglomeration events in the embodiment of the present invention uses a deep neural network to extract the time-series and spatial features of multi-frame pictures, and uses those features for probability prediction and class activation mapping to determine the probability and location of urban agglomeration events. It can not only detect agglomeration events, but also predict urban agglomeration events in advance and locate the gathering place, selecting a network structure suited to the actual situation in the course of feature extraction and classification judgment.
  • the invention predicts and locates urban agglomeration events in advance based on a deep neural network under complex Internet of Things mobile data, and can perform long-span prediction and positioning of key areas where agglomeration events have occurred or been simulated.
  • converting the actual trajectory of a city vehicle in a period of time into a multi-frame picture at a preset time interval includes:
  • divide the city evenly into several small squares according to latitude and longitude; count the actual trajectories of the city's vehicles at a preset time interval; at each interval, generate multiple pictures corresponding to the small squares, with each group of generated pictures forming one frame, yielding multiple frames of pictures over the period; each picture represents, for each small square, the number of vehicles turning in from an adjacent direction during the previous time period.
  • the urban agglomeration event prediction and location method based on the deep convolutional long-short-term network mainly includes three parts.
  • the first is feature extraction
  • the second is probability prediction, which predicts the probability of occurrence of agglomeration events
  • the third is class activation mapping. Determine whether there is a gathering event and locate the likely event location.
  • the training source data consists of two parts, one part is real data, and the other part is simulation data.
  • the real data comes from the real-time trajectory of buses and taxis.
  • the main field information includes: license plate, time, GPS longitude, GPS latitude, speed, status, etc.
  • the latitude and longitude of the entire city are evenly divided into a number of small squares, and statistics are computed at every preset small time interval. Multiple pictures are generated each time; each picture records, for each small square, the number of vehicles turning in from an adjacent direction during the previous time period.
  • the method for predicting and locating urban agglomeration events further includes:
  • Step S101 Perform event simulation on the actual trajectories of urban vehicles over a period of time, and add the simulated traffic flow to the actual traffic flow to form simulated event samples, which, together with the multi-frame pictures, serve as input data for the deep neural network.
  • the present invention performs event simulation during preprocessing.
  • on the one hand, the data set needs to be expanded; on the other hand, simulated event samples for key monitoring locations need to be increased.
  • the simulated traffic flow is added to the actual traffic flow to form simulated event samples that also contain the actual events.
  • feature extraction takes 20 frames of processed continuous images as input. This part is composed of four different layers, in order: a convolutional layer, a convolutional LSTM layer, a dropout layer, and a dilated convolution layer.
  • the deep neural network includes:
  • Convolutional layer used to retain local spatial information and extract more comprehensive spatial features
  • Convolutional long short-term memory (ConvLSTM) layer, used for visual time-series prediction on the multi-frame pictures and for generating descriptive text from the image sequences, to extract more comprehensive temporal features
  • the dropout layer is used to avoid over-fitting of the deep neural network; each neuron in the dropout layer is deactivated with a certain probability, and a deactivated neuron's output becomes zero.
  • the dropout layer allows the present invention to detect urban agglomeration events with different characteristics (such as different numbers of participants).
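A minimal sketch of such a dropout layer; the inverted-dropout rescaling by 1/(1-p) is a common convention assumed here, not stated in the patent:

```python
import numpy as np

# Sketch of the dropout layer described above: during training each neuron is
# zeroed with probability p; survivors are rescaled by 1/(1-p) (an ASSUMED
# inverted-dropout convention) so the expected activation is unchanged.

def dropout(x, p, rng):
    mask = rng.random(x.shape) >= p        # keep a neuron with probability 1-p
    return np.where(mask, x / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
out = dropout(np.ones((4, 4)), p=0.5, rng=rng)  # entries are either 0 or 2
```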
  • the dilated convolutional layer is used to expand the field of view of neurons in the deep neural network without pooling.
  • the traditional convolutional neural network first pools the input image to expand the receptive field, but because this reduces the data size, upsampling is needed later to restore it, causing loss of information and a decrease in resolution.
  • the dilated convolutional layer does not use pooling; instead, it skips some points, or adds zero-weight holes to the filter, and achieves exponential receptive-field expansion through a cascade structure while maintaining the size of the data.
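The exponential field-of-view growth can be checked with a short calculation; the kernel size and doubling dilation schedule below are illustrative assumptions:

```python
# Sketch of the receptive-field growth from cascaded dilated convolutions:
# stacking k x k filters with dilation 1, 2, 4, ... enlarges the field of view
# roughly exponentially without any pooling or downsampling.

def receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d   # each layer adds (k-1)*dilation to the span
    return rf

rf = receptive_field()  # four 3x3 layers with dilations 1, 2, 4, 8
```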
  • the present invention introduces the dilated convolutional layer because maintaining a wider range of information and the resolution of the data is of great practical significance for integrating traffic information.
  • the convolutional layer is widely used in image classification.
  • the convolutional layer is mainly considered to maintain the correlation between adjacent squares, which is similar to the image classification problem.
  • the use of convolutional layers can retain local spatial information, and can extract more comprehensive spatial features. Because some interconnected geographic locations have a relatively large span, such as highways, multiple convolutional layers are required for feature extraction.
  • the convolutional layer computes H_t = f(W * X_t + b), where:
  • H_t represents the output of the convolutional layer
  • X_t represents the input of the convolutional layer
  • * represents the convolution operation
  • W and b are training parameters
  • f is the rectified linear activation function ReLU.
  • the convolutional long short-term memory (ConvLSTM) layer includes:
  • Convolutional neural network layer, used to generate descriptive text from multi-frame image sequences
  • the convolutional neural network layer is combined with the long short-term memory layer to extract temporal features.
  • the convolutional long and short-term memory layer architecture consists of two parts.
  • One part uses a convolutional neural network layer (CNN), a type of feedforward neural network that includes convolution calculations and has a deep structure.
  • the long short-term memory network is a special kind of recurrent neural network.
  • a recurrent neural network is a network that can solve time-series prediction problems; its defining feature is the loop structure in the network.
  • a recurrent neural network can be thought of as multiple copies of the same network structure, each layer passing information to the next; the two are combined for feature extraction to support sequence prediction.
  • the convolutional long short-term memory layer was developed for visual time-series prediction problems and for applications that generate text descriptions from image sequences (such as videos), and can extract more comprehensive temporal features. Since urban agglomeration events evolve over time, combining the convolutional neural network layer with the long short-term memory layer captures both image features and temporal patterns well, and can therefore effectively discover agglomeration events in time-series images.
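A simplified single-channel ConvLSTM cell sketch, under assumed shapes and random parameters (real implementations use multi-channel convolutions and learned weights; the gate equations follow the standard ConvLSTM formulation, not code from the patent):

```python
import numpy as np

# Minimal single-channel ConvLSTM cell: the LSTM gates are computed with
# small 'same'-padded convolutions instead of dense matrix products.

def conv_same(X, W):
    """3x3 'same' convolution of a single-channel map X with kernel W."""
    H, Wd = X.shape
    Xp = np.pad(X, 1)                     # zero-pad so output keeps X's shape
    out = np.zeros_like(X)
    for i in range(H):
        for j in range(Wd):
            out[i, j] = np.sum(Xp[i:i+3, j:j+3] * W)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, params):
    """One ConvLSTM time step over frame x with hidden state h, cell state c."""
    Wxi, Whi, Wxf, Whf, Wxo, Who, Wxg, Whg = params
    i = sigmoid(conv_same(x, Wxi) + conv_same(h, Whi))   # input gate
    f = sigmoid(conv_same(x, Wxf) + conv_same(h, Whf))   # forget gate
    o = sigmoid(conv_same(x, Wxo) + conv_same(h, Who))   # output gate
    g = np.tanh(conv_same(x, Wxg) + conv_same(h, Whg))   # candidate state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
params = [rng.normal(scale=0.1, size=(3, 3)) for _ in range(8)]
h = c = np.zeros((5, 5))
for t in range(4):                       # run over a short frame sequence
    x = rng.random((5, 5))
    h, c = convlstm_step(x, h, c, params)
```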
  • using the time-series and spatial features of the multi-frame pictures to perform probability prediction includes:
  • the entire event is assumed to occur over n consecutive frames, where n is an integer ≥ 1
  • the probability of the aggregation event occurring at the r-th frame during training is set to r/n, where r is an integer ≤ n.
  • the probability prediction model can output the probability of an urban agglomeration event occurring immediately.
  • the probability of the event occurring is set to 1.
  • the present invention assumes that the entire event occurs over n consecutive frame images, where n is an integer ≥ 1.
  • the probability of the aggregation event at the r-th frame is set to r/n, where r is an integer ≤ n.
  • for an event spanning 20 frames, the occurrence probabilities of the 1st through 20th frames are set to 1/20, 2/20, ..., 20/20 respectively.
  • zero padding is not performed during convolution. Since the probability here is a continuous value, cross entropy cannot be used, so the extracted features are divided into two parts: prediction probability and classification. The loss function guarantees that the two results are consistent, that is, when the probability prediction is 1, the classification output of the class activation map is 1.
  • the layer structure of the class activation map is formed by the convolutional layer, the average pooling layer, and the fully connected layer in sequence; the class activation map shows which parts of the image contribute to the final result of the convolutional neural network.
  • the class activation mapping corresponds to the location of the event because, during cluster formation, as participants keep heading toward the destination, the traffic flow at the event location changes greatly.
  • Class activation mapping can determine the contribution area of event aggregation, that is, where the aggregation event occurs.
  • Class activation mapping is formed by convolutional layer, average pooling layer, and fully connected layer in sequence.
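A minimal sketch of this localization step, with assumed feature-map and weight shapes: the heat map is the fully connected weights' weighted sum of the final convolutional feature maps, and its peak marks the grid square that contributed most, i.e. the likely event location:

```python
import numpy as np

# Sketch of class activation mapping (CAM): CAM(x, y) = sum_k w_k * f_k(x, y),
# where f_k are the feature maps before global average pooling and w_k are the
# fully connected weights of the 'event' output unit. Values are illustrative.

def class_activation_map(feature_maps, fc_weights):
    """feature_maps: (K, H, W) maps; fc_weights: (K,) weights of the event unit."""
    cam = np.tensordot(fc_weights, feature_maps, axes=1)  # (H, W) heat map
    loc = np.unravel_index(np.argmax(cam), cam.shape)     # predicted location
    return cam, loc

fmaps = np.zeros((2, 4, 4))
fmaps[0, 1, 2] = 3.0      # strong activation in map 0 at square (1, 2)
fmaps[1, 3, 0] = 1.0
cam, loc = class_activation_map(fmaps, np.array([1.0, 0.5]))
```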
  • the class activation mapping uses the binary cross-entropy loss function in the classification discrimination, expressed as: loss_1 = -(1/k) Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1-ŷ_i)·log(1-y_i)], where:
  • y_i is the prediction result in the classification judgment
  • ŷ_i is the expected result in the classification judgment
  • i represents the i-th frame of the input sequence
  • k is the total number of frames in the input sequence
  • y_i′ is the prediction result in the probability prediction and ŷ_i′ is the expected result in the probability prediction, with a corresponding loss loss_2;
  • the loss of the entire deep neural network is the sum of the two parts:
  • Loss = loss_1 + loss_2.
  • an apparatus for predicting and positioning urban agglomeration events including:
  • the picture generating unit 200 is configured to convert the actual trajectory of a city vehicle in a period of time into a multi-frame picture at a preset time interval;
  • the feature extraction unit 202 is configured to perform feature extraction on the multi-frame pictures using a deep neural network, and extract the time series and spatial features of the multi-frame pictures;
  • the probability prediction unit 204 is configured to perform probability prediction using the time sequence feature and spatial feature of the multi-frame pictures, and calculate the probability of occurrence of urban agglomeration events;
  • the class activation mapping unit 206 is configured to perform class activation mapping on the time series and spatial features of the multi-frame pictures, and determine the location where the city cluster event occurs.
  • the device for predicting and locating urban agglomeration events in the embodiment of the present invention uses a deep neural network to extract the time-series and spatial features of multi-frame pictures, and uses those features for probability prediction and class activation mapping to determine the probability and location of urban agglomeration events. It can not only detect agglomeration events, but also predict urban agglomeration events in advance and locate the gathering place, selecting a network structure suited to the actual situation in the course of feature extraction and classification judgment.
  • the invention predicts and locates urban agglomeration events in advance based on a deep neural network under complex Internet of Things mobile data, and can perform long-span prediction and positioning of key areas where agglomeration events have occurred or been simulated.
  • the device for predicting and positioning urban agglomeration events further includes:
  • the event simulation unit 201 is used to perform event simulation on the actual trajectories of urban vehicles over a period of time, and add the simulated traffic flow to the actual traffic flow to form simulated event samples, which, together with the multi-frame pictures, serve as input data for the deep neural network.
  • the urban agglomeration event prediction and positioning device based on deep convolutional long-short-term network mainly includes three parts.
  • the first is feature extraction
  • the second is probabilistic prediction, which predicts the probability of occurrence of agglomeration events
  • the third is class activation mapping. Determine whether there is a gathering event and locate the likely event location.
  • the training source data consists of two parts, one part is real data, and the other part is simulation data.
  • the real data comes from the real-time trajectory of buses and taxis.
  • the main field information includes: license plate, event, GPS longitude, GPS latitude, speed, status, etc.
  • the latitude and longitude of the entire city is evenly divided into a number of small squares, and statistics are calculated at every preset small time interval. Multiple pictures are generated each time, and each picture corresponds to the previous time period of each small grid. The number of vehicles turning in adjacent directions.
  • the present invention performs event simulation during preprocessing.
  • the data set needs to be expanded; on the other hand, in order to increase the simulated event samples of key monitoring locations.
  • the simulated traffic flow is added to the actual traffic flow to become a simulated event sample containing the actual event.
  • feature extraction takes 20-frame processed continuous images as input. This part is composed of four different layers, in order: convolution layer, convolution LSTM layer), dropout layer (dropout layer), and dilated convolution layer (dilated convolution layer).
  • the deep neural network includes:
  • Convolutional layer used to retain local spatial information and extract more comprehensive spatial features
  • Convolutional long and short-term memory layer used to visually predict time series of multi-frame pictures and generate descriptive text from the image sequence of multi-frame pictures to extract more comprehensive time series features
  • the exit layer is used to avoid over-fitting of the deep neural network; each neuron in the exit layer is inactivated with a certain probability. If a neuron is inactivated, its output will become zero.
  • the exit layer is to avoid over-fitting of the deep neural network, so that the present invention can detect urban agglomeration events with different characteristics (such as different numbers of participants).
  • Expanding the convolutional layer is used to expand the field of view of neurons in the deep neural network without pooling. Expanding the convolutional layer can expand the field of view of neurons without pooling.
  • the traditional convolutional neural network will pool the input image first to expand the receptive field, but because the data size is reduced, it is necessary to use upsampling to expand the size later, which will cause loss of information and decrease in resolution.
  • the expanded convolutional layer does not use pooling, but skips some points or adds holes with zero weight in the filter, and then achieves exponential field expansion from the cascade structure, while maintaining the size of the data.
  • the present invention introduces the expanded convolutional layer, because the expanded convolutional layer has very important practical significance for maintaining a wider range of information and data resolution of traffic information integration.
  • the convolutional layer is widely used in image classification.
  • the convolutional layer is mainly considered to maintain the correlation between adjacent squares, which is similar to the image classification problem.
  • the use of convolutional layers can retain local spatial information, and can extract more comprehensive spatial features. Because some interconnected geographic locations have a relatively large span, such as highways, multiple convolutional layers are required for feature extraction.
  • at time t, each convolution in the convolutional layer computes H_t = f(W * X_t + b), where:
  • H_t represents the output of the convolutional layer
  • X_t represents the input of the convolutional layer
  • * represents the convolution operation
  • W and b are trainable parameters
  • f is the ReLU activation function.
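A minimal sketch of one such layer step, H_t = f(W * X_t + b) with f = ReLU, on a 2-D grid (pure Python; the grid values, filter and bias are illustrative assumptions, not the patent's parameters):

```python
# One convolutional layer step H_t = ReLU(W * X_t + b) on a 2-D "valid"
# window. As in most deep-learning frameworks, the operation is really
# cross-correlation (no kernel flip).

def relu(v):
    return v if v > 0 else 0.0

def conv2d_relu(X, W, b):
    """'Valid' 2-D convolution of grid X with filter W, bias b, ReLU."""
    kh, kw = len(W), len(W[0])
    H = []
    for i in range(len(X) - kh + 1):
        row = []
        for j in range(len(X[0]) - kw + 1):
            s = sum(W[a][c] * X[i + a][j + c]
                    for a in range(kh) for c in range(kw))
            row.append(relu(s + b))
        H.append(row)
    return H

# A 3x3 "traffic inflow" patch convolved with a 2x2 filter.
X_t = [[1, 2, 0],
       [0, 1, 3],
       [2, 0, 1]]
W = [[1, 0],
     [0, 1]]  # training parameter (illustrative values)
print(conv2d_relu(X_t, W, b=-1.0))  # [[1.0, 4.0], [0.0, 1.0]]
```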
  • the convolutional long short-term memory layer includes:
  • a convolutional neural network layer, used to generate descriptive text from multi-frame image sequences
  • the convolutional neural network layer is combined with the long short-term memory layer to extract temporal features.
  • the convolutional long short-term memory architecture consists of two parts.
  • the first is a convolutional neural network layer (CNN); a convolutional neural network is a class of deep feedforward neural networks that includes convolution computations.
  • CNN: convolutional neural network layer
  • feedforward neural networks
  • the long short-term memory network is a special kind of recurrent neural network.
  • a recurrent neural network is a network that can predict time-series problems because it contains loop structures; it can be pictured as a stack of layers with the same network structure, each passing information to the next. The two parts are combined for feature extraction to support sequence prediction.
  • the convolutional long short-term memory layer was developed for visual time-series prediction problems and for applications that generate text descriptions from image sequences (such as videos), so it extracts more comprehensive temporal features. Since urban gathering events evolve over time, combining a convolutional neural network layer with a long short-term memory layer extracts image features and temporal patterns well, and can therefore discover gathering events in time-series images.
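The idea that every LSTM gate becomes a convolution of the input frame and the previous hidden map can be sketched as a toy single-channel ConvLSTM cell (a minimal sketch under stated assumptions, not the patent's architecture: the kernel size, random weights and grid shape are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(x, w):
    """3x3 'same' convolution of a 2-D map (zero padding), via numpy."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for a in range(3):
        for c in range(3):
            out += w[a, c] * xp[a:a + x.shape[0], c:c + x.shape[1]]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Toy single-channel ConvLSTM cell: each gate (input i, forget f,
    output o, candidate c) convolves the frame X_t and the previous
    hidden map H_{t-1} instead of multiplying flat vectors."""
    def __init__(self):
        # one 3x3 kernel per (gate, source); illustrative random weights
        self.w = {g: (rng.normal(0, 0.1, (3, 3)), rng.normal(0, 0.1, (3, 3)))
                  for g in "ifoc"}

    def step(self, x, h, c):
        gates = {g: conv2d_same(x, wx) + conv2d_same(h, wh)
                 for g, (wx, wh) in self.w.items()}
        i, f, o = sigmoid(gates["i"]), sigmoid(gates["f"]), sigmoid(gates["o"])
        c_new = f * c + i * np.tanh(gates["c"])
        h_new = o * np.tanh(c_new)
        return h_new, c_new

cell = ConvLSTMCell()
h = c = np.zeros((8, 8))
for x in np.abs(rng.normal(size=(20, 8, 8))):  # 20 frames of "traffic"
    h, c = cell.step(x, h, c)
print(h.shape)  # (8, 8): the state stays a spatial map, not a flat vector
```

This is why the layer can track how gathering patterns evolve across frames while keeping the grid geometry intact.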
  • the probability prediction model can output the probability that an urban gathering event occurs immediately.
  • the probability at the moment the event occurs is set to 1.
  • the present invention assumes that the entire event occurs within n consecutive frames, where n is an integer ≥ 1.
  • during training, the probability assigned to the r-th frame of the gathering event is set to r/n, where r is an integer ≤ n.
  • for example, for an event lasting 20 frames, the occurrence probabilities of frames 1 to 20 are 1/20, 2/20, ..., 20/20.
  • zero padding is not performed during this convolution, to reduce the training parameters of the final fully connected layer. Because the probability here is a continuous value, cross-entropy cannot be used, so the extracted features are split into two branches: probability prediction and classification. The loss function keeps the two results consistent, i.e. when the probability prediction is 1, the classification output of the class activation map is 1.
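The linear ramp of training labels described above can be sketched in a few lines (the function name is an illustrative assumption):

```python
# Frame-probability labels for an event that unfolds over n consecutive
# frames: the r-th frame gets label r/n, so the target ramps linearly
# up to 1 at the moment the gathering actually occurs.

def frame_probability_labels(n):
    """Training targets for the n frames leading up to an event."""
    assert n >= 1
    return [r / n for r in range(1, n + 1)]

labels = frame_probability_labels(20)
print(labels[0], labels[9], labels[-1])  # 0.05 0.5 1.0
```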
  • the layer structure of the class activation map is formed by connecting a convolutional layer, an average pooling layer, and a fully connected layer in sequence; the class activation map shows which part of the image contributes to the final result of the convolutional neural network.
  • the class activation map corresponds to the location of the event: during cluster formation, as the participants keep moving toward the destination, the traffic flow at the event location changes greatly.
  • class activation mapping can determine the region that contributes to the gathering decision, i.e. where the gathering event occurs.
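The conv → global average pooling → fully connected structure admits a compact sketch of how the activation map is read out (values, shapes and names below are illustrative assumptions, not the patent's trained weights):

```python
# Class activation mapping (CAM) sketch: the classification score is an
# FC layer over globally averaged feature maps, and the heat map is the
# same FC-weighted sum applied per grid cell, CAM(x, y) = sum_k w_k * f_k(x, y).

def global_average_pool(fmap):
    return sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))

def cam(feature_maps, fc_weights):
    """FC-weighted sum of the last convolutional feature maps."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(wk * fm[i][j] for wk, fm in zip(fc_weights, feature_maps))
             for j in range(w)] for i in range(h)]

# Two 2x2 feature maps; the second one fires on the bottom-right cell.
fmaps = [[[0.1, 0.1], [0.1, 0.1]],
         [[0.0, 0.0], [0.0, 2.0]]]
weights = [0.5, 1.0]
score = sum(wk * global_average_pool(fm) for wk, fm in zip(weights, fmaps))
heat = cam(fmaps, weights)
print(round(score, 3))  # 0.55
print(heat)  # bottom-right cell dominates, so the event is localized there
```

The cell with the largest heat value is the grid cell (i.e. city region) that drove the "event" decision.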
  • the class activation map uses a binary cross-entropy loss function in classification discrimination, expressed as:
  • loss_1 = -(1/k) * Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1 - ŷ_i)·log(1 - y_i)]
  • y_i is the predicted result in the classification judgment
  • ŷ_i is the expected result in the classification judgment
  • i denotes the i-th frame of the input sequence
  • k is the total number of input sequences
  • probability prediction uses the minimum mean squared error as its loss function: loss_2 = (1/k) * Σ_{i=1}^{k} (y_i' - ŷ_i')², where y_i' is the predicted result and ŷ_i' is the expected result in the probability prediction;
  • the loss of the entire deep neural network is the sum of the two parts:
  • Loss = loss_1 + loss_2.
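The two-part objective can be sketched directly from the formulas above (a hedged sketch: the epsilon guard and the sample values are illustrative additions, not part of the patent's formulas):

```python
import math

def bce(y_pred, y_true):
    """Binary cross-entropy, loss_1, averaged over k frames."""
    k = len(y_pred)
    eps = 1e-12  # numerical guard, not in the patent's formula
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(y_pred, y_true)) / k

def mse(y_pred, y_true):
    """Mean squared error, loss_2."""
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_pred)

# Classification head (CAM branch) and probability head share the loss.
cls_pred, cls_true = [0.9, 0.2], [1.0, 0.0]
prob_pred, prob_true = [0.45, 0.95], [0.5, 1.0]
loss = bce(cls_pred, cls_true) + mse(prob_pred, prob_true)  # Loss = loss_1 + loss_2
print(round(loss, 4))
```

Summing the two terms is what ties the heads together: a probability prediction of 1 is only consistent with a positive classification output.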
  • the method and device for predicting and locating urban gathering events based on a deep convolutional long short-term memory network can predict the occurrence and location of urban gathering events in advance.
  • the invention models the data stream as statistical images of traffic flow, converting it into a data model that a deep neural network can process.
  • the convolutional long short-term memory layer obtains time-series features from the input images, and the dilated convolutional layer effectively enlarges the receptive field of the neurons.
  • the method and device for predicting and locating urban gathering events based on a deep convolutional long short-term memory network can not only detect gathering events, but also predict urban gathering events in advance and locate the gathering place.
  • it selects an ingenious network structure according to the actual situation.
  • the technical scheme of the present invention has been proved feasible through experiments, and its output has high reference value.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units may be a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods in the various embodiments of the present invention.
  • the aforementioned storage media include media that can store program code, such as USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, and optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method and device for predicting and locating urban gathering events, relating to the field of traffic monitoring. The method and device use a deep neural network to extract features from multiple frames of images, obtaining the temporal and spatial features of the frames, and then perform probability prediction and class activation mapping with those features to determine the probability and location of an urban gathering event. They can not only detect gathering events, but also predict urban gathering events in advance and locate the gathering place. In solving feature extraction and classification judgment, an ingenious network structure is selected according to the actual situation. The method performs advance prediction and localization of urban gathering events with deep neural networks on complex Internet-of-Things mobility data, and can predict and locate, over a long time span, key areas where gathering events have occurred or been simulated.

Description

Method and device for predicting and locating urban gathering events

Technical field

The present invention relates to the field of traffic monitoring, and in particular to a method and device for predicting and locating urban gathering events.
Background art

An urban gathering event is the convergence of a large number of moving objects (taxis, pedestrians, etc.) into a small area within a period of time. Typical urban gathering events include traffic congestion and crowd gatherings at concerts. Large-scale urban gathering events have a major impact on urban traffic and urban safety. Predicting the occurrence and location of gathering events in advance can help the relevant departments plan and adjust police forces and other resources, safeguard the healthy operation of the city, and improve people's satisfaction with life. With the rapid development of Internet-of-Things sensor technology, large amounts of mobility data, such as traffic and mobile-phone data, can already be collected. Human activity, however, is very complex: much current research can only detect urban gathering events, and analysing the trend of urban gathering events from massive, noisy data and predicting where they may occur remains a very big challenge.

For the urban gathering event problem, the use of deep learning network systems is still a blank, and modelling and feature extraction on complex massive data are difficult. Most existing techniques are heuristic algorithms: one or two feature quantities are defined to reflect the degree of gathering indirectly; a large amount of data preprocessing is performed up front to obtain the feature quantities; candidate regions where gathering may occur are then screened out, and a heuristic iterative algorithm expands the regions. Patterns mined from historical data can also be combined for detection, forming a feedback system. Most systems are used to detect gathering events that have already occurred; a few systems, based on the observed evolution of gathering events, can accelerate the algorithm or predict over a short time.

The prior art looks for one or two features based on observation and uses them as the basis for subsequent algorithms to detect urban gathering events; this process may cause a large loss of the original information. In addition, current techniques can only use data from a limited neighbouring region when judging candidate locations, so little data information is available. It is also very hard for the prior art to predict gathering events, because mining geographic gathering patterns is already complex, and few systems can additionally mine patterns jointly along the time dimension.
Summary of the invention

Embodiments of the present invention provide a method and device for predicting and locating urban gathering events, so as at least to avoid the technical problem that existing techniques cannot predict and locate key areas over a long time span.

According to an embodiment of the present invention, a method for predicting and locating urban gathering events is provided, comprising:

converting the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals;

extracting features from the multiple frames with a deep neural network, obtaining the temporal and spatial features of the frames;

performing probability prediction with the temporal and spatial features of the frames, computing the probability that an urban gathering event occurs;

performing class activation mapping on the temporal and spatial features of the frames, determining the location where an urban gathering event occurs.

Further, converting the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals comprises:

dividing the city evenly into small grid cells by latitude and longitude; counting the actual trajectories of urban vehicles once per preset time interval, each count generating multiple images corresponding to the grid cells; the images generated each time form one frame, and multiple frames are generated over the period; each image represents, for each cell, the number of vehicles that entered from an adjacent direction during the preceding time interval.
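The grid-counting preprocessing described above can be sketched as follows (a hedged sketch: the bounding box, grid size, time step, record layout `(t, lat, lon)` and the direction labels are all illustrative assumptions, not the patent's parameters):

```python
# Bucket GPS points into a lat/lon grid per time slice and count
# transitions entering each cell from each adjacent side.
from collections import defaultdict

GRID = 4          # 4x4 grid over an illustrative bounding box
LAT0, LAT1 = 22.4, 22.8
LON0, LON1 = 113.8, 114.2

def cell(lat, lon):
    r = min(GRID - 1, int((lat - LAT0) / (LAT1 - LAT0) * GRID))
    c = min(GRID - 1, int((lon - LON0) / (LON1 - LON0) * GRID))
    return r, c

def inflow_frames(tracks, dt=300):
    """tracks: {vehicle_id: [(t_seconds, lat, lon), ...]} ->
    {(time_slice, side): 2-D count grid}; `side` labels the adjacent
    side the vehicle entered from (naming is illustrative)."""
    frames = defaultdict(lambda: [[0] * GRID for _ in range(GRID)])
    for pts in tracks.values():
        for (t0, la0, lo0), (t1, la1, lo1) in zip(pts, pts[1:]):
            (r0, c0), (r1, c1) = cell(la0, lo0), cell(la1, lo1)
            side = {(1, 0): "S", (-1, 0): "N",
                    (0, 1): "W", (0, -1): "E"}.get((r1 - r0, c1 - c0))
            if side:  # count adjacent-cell moves only
                frames[(t1 // dt, side)][r1][c1] += 1
    return dict(frames)

taxi = {"B1234": [(0, 22.45, 113.85), (200, 22.55, 113.85), (400, 22.65, 113.85)]}
f = inflow_frames(taxi)
print(sorted(f))  # [(0, 'S'), (1, 'S')]
```

Each `(time_slice, side)` grid then becomes one channel of one input frame for the network.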
Further, the method for predicting and locating urban gathering events further comprises:

performing event simulation on the actual trajectories of urban vehicles over a period of time, and adding the simulated traffic flow onto the actual traffic flow to form simulated event samples, which together with the multiple frames serve as the input data of the deep neural network.
Further, the deep neural network comprises, arranged in sequence:

a convolutional layer, used to retain local spatial information and extract more comprehensive spatial features;

a convolutional long short-term memory layer, used to perform visual time-series prediction on the frames and to generate descriptive text from the image sequence of the frames, extracting more comprehensive temporal features;

a dropout layer, used to avoid over-fitting of the deep neural network;

a dilated convolutional layer, used to enlarge the receptive field of the neurons in the deep neural network without pooling.

Further, the operation of each convolution in the convolutional layer at time t is expressed as:

H_t = f(W * X_t + b);

where H_t is the output of the convolutional layer, X_t is its input, * denotes the convolution operation, W and b are trainable parameters, and f is the ReLU activation function.

Further, the convolutional long short-term memory layer comprises:

a convolutional neural network layer, used to generate descriptive text from the image sequence of the frames;

a long short-term memory layer, used to perform visual time-series prediction on the frames;

the convolutional neural network layer and the long short-term memory layer being combined to extract temporal features.
Further, performing probability prediction with the temporal and spatial features of the frames comprises:

setting the probability at the moment the gathering event occurs to 1, and assuming that the whole gathering event occurs within n consecutive frames, where n is an integer ≥ 1; during training, the probability assigned to the r-th frame of the gathering event is set to r/n, where r is an integer ≤ n.

Further, the layer structure of the class activation map is formed by connecting a convolutional layer, an average pooling layer and a fully connected layer in sequence; in classification discrimination the class activation map uses a binary cross-entropy loss function, expressed as:

loss_1 = -(1/k) * Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1 - ŷ_i)·log(1 - y_i)]

where y_i is the predicted result in the classification judgment, ŷ_i is the expected result in the classification judgment, i denotes the i-th frame of the input sequence, and k is the total number of input sequences;

probability prediction uses the minimum mean squared error as its loss function:

loss_2 = (1/k) * Σ_{i=1}^{k} (y_i' - ŷ_i')²

where y_i' is the predicted result and ŷ_i' is the expected result in the probability prediction;

the loss of the entire deep neural network is the sum of the two parts:

Loss = loss_1 + loss_2.
According to another embodiment of the present invention, a device for predicting and locating urban gathering events is provided, comprising:

an image generation unit, used to convert the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals;

a feature extraction unit, used to extract features from the frames with a deep neural network, obtaining the temporal and spatial features of the frames;

a probability prediction unit, used to perform probability prediction with the temporal and spatial features of the frames, computing the probability that an urban gathering event occurs;

a class activation mapping unit, used to perform class activation mapping on the temporal and spatial features of the frames, determining the location where an urban gathering event occurs.

Further, the device for predicting and locating urban gathering events further comprises:

an event simulation unit, used to perform event simulation on the actual trajectories of urban vehicles over a period of time, adding the simulated traffic flow onto the actual traffic flow to form simulated event samples, which together with the frames serve as the input data of the deep neural network.

The method and device for predicting and locating urban gathering events in the embodiments of the present invention extract features from multiple frames with a deep neural network, obtain the temporal and spatial features of the frames, and perform probability prediction and class activation mapping with those features to determine the probability and location of an urban gathering event; they can not only detect gathering events, but also predict urban gathering events in advance and locate the gathering place. In solving feature extraction and classification judgment, an ingenious network structure is selected according to the actual situation. The present invention performs advance prediction and localization of urban gathering events with deep neural networks on complex IoT mobility data, and can predict and locate, over a long time span, key areas where gathering events have occurred or been simulated.
Brief description of the drawings

The drawings described here are provided for a further understanding of the present invention and form part of this application; the illustrative embodiments of the invention and their descriptions serve to explain the invention and do not unduly limit it. In the drawings:

Fig. 1 is a flowchart of the method for predicting and locating urban gathering events of the present invention;

Fig. 2 is a detailed flowchart of the method for predicting and locating urban gathering events of the present invention;

Fig. 3 is a schematic diagram of the method for predicting and locating urban gathering events of the present invention;

Fig. 4 is a connection block diagram of the traffic flow prediction device of the present invention;

Fig. 5 is a detailed connection block diagram of the method for predicting and locating urban gathering events of the present invention.
Detailed description of the embodiments

To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention, without creative work, shall fall within the protection scope of the invention.

It should be noted that the terms "first", "second", etc. in the specification, claims and drawings of the present invention are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described. Moreover, the terms "comprise" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product or device.

The prior art looks for one or two features based on observation as the basis for subsequent algorithms to detect urban gathering events, a process that may cause a large loss of the original information, whereas the deep neural network used in the present invention can learn to extract features by itself. In addition, current techniques can only use data from a limited neighbouring region when judging candidate locations, whereas the deep neural network used in the present invention expands the neurons' receptive field exponentially and can exploit information from a much wider region. The prior art can hardly predict gathering events, because mining geographic gathering patterns is already complex and few systems can additionally mine patterns jointly along the time dimension, whereas the present invention, relying on the powerful computing capability of GPUs, can predict the occurrence and location of gathering events over a large time span after collecting or simulating data for gathering-prone areas.

The main problem to be solved by the present invention is the advance prediction and localization of urban gathering events based on deep neural networks under complex IoT mobility data, enabling long-time-span prediction and localization for key areas where gathering events have occurred or been simulated.
Embodiment 1

The present invention uses a new deep neural network structure to predict and locate urban gathering events in advance, turning gathering-event prediction into classification and regression problems. Specifically, given a series of observed time sequences Xt, it outputs a prediction sequence Yt indicating whether a gathering event occurs, the probability Pt that a gathering event occurs immediately, and the possible gathering-event location Lt.

To this end, the invention is divided into three parts: feature extraction, probability prediction, and class activation mapping. Their relationship is as follows: features are first extracted from the input data set, and the extracted features are fed simultaneously into the probability prediction part and the class activation mapping part.

According to an embodiment of the present invention, referring to Fig. 1, a method for predicting and locating urban gathering events is provided, comprising:

Step S100: converting the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals;

Step S102: extracting features from the frames with a deep neural network, obtaining the temporal and spatial features of the frames;

Step S104: performing probability prediction with the temporal and spatial features of the frames, computing the probability that an urban gathering event occurs;

Step S106: performing class activation mapping on the temporal and spatial features of the frames, determining the location where an urban gathering event occurs.

The method for predicting and locating urban gathering events in this embodiment of the present invention extracts features from multiple frames with a deep neural network, obtains the temporal and spatial features of the frames, and performs probability prediction and class activation mapping with those features to determine the probability and location of an urban gathering event; it can not only detect gathering events, but also predict urban gathering events in advance and locate the gathering place. In solving feature extraction and classification judgment, an ingenious network structure is selected according to the actual situation. The present invention performs advance prediction and localization of urban gathering events with deep neural networks on complex IoT mobility data, and can predict and locate, over a long time span, key areas where gathering events have occurred or been simulated.
In a preferred technical solution, converting the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals comprises:

dividing the city evenly into small grid cells by latitude and longitude; counting the actual trajectories of urban vehicles once per preset time interval, each count generating multiple images corresponding to the grid cells; the images generated each time form one frame, and multiple frames are generated over the period; each image represents, for each cell, the number of vehicles that entered from an adjacent direction during the preceding time interval.

Referring to Fig. 3, the method for predicting and locating urban gathering events based on a deep convolutional long short-term memory network comprises three main parts: the first is feature extraction; the second is probability prediction, i.e. predicting the probability that a gathering event occurs; the third is class activation mapping, which judges whether a gathering event occurs and locates the possible event site.

The training source data consist of two parts: real data and simulated data. The real data come from real-time trajectories of buses and taxis, with main fields including licence plate, event, GPS longitude, GPS latitude, speed, status, and so on. During preprocessing, the whole city is divided evenly into grid cells by latitude and longitude, and statistics are taken once per small preset time interval, generating multiple images each time; each image corresponds to the number of vehicles entering each cell from an adjacent direction during the preceding time interval.

In a preferred technical solution, referring to Fig. 2, the method for predicting and locating urban gathering events further comprises:

Step S101: performing event simulation on the actual trajectories of urban vehicles over a period of time, and adding the simulated traffic flow onto the actual traffic flow to form simulated event samples, which together with the frames serve as the input data of the deep neural network.

The present invention also performs event simulation during preprocessing: on the one hand, because few real samples contain events, the data set needs to be expanded; on the other hand, to add simulated event samples for key monitored locations. When simulating an event, the simulated traffic flow is added onto the actual traffic flow to form a simulated event sample containing the real events.
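The event simulation described above, superimposing a synthetic converging flow onto real inflow frames so that rare gathering events are better represented, can be sketched as follows (shapes, the linear ramp and intensities are illustrative assumptions):

```python
# Augmentation sketch: add a simulated gathering on top of real frames.

def add_simulated_event(real_frames, center, n, peak=30):
    """real_frames: list of n 2-D count grids. Superimpose a flow that
    ramps up to `peak` extra vehicles entering the event cell on the
    last frame, mirroring the r/n label ramp used in training."""
    r, c = center
    out = []
    for k, frame in enumerate(real_frames, start=1):
        g = [row[:] for row in frame]        # copy, keep real data intact
        g[r][c] += round(peak * k / n)       # linear ramp toward the event
        out.append(g)
    return out

real = [[[1, 1], [1, 1]] for _ in range(4)]  # 4 frames of background flow
sim = add_simulated_event(real, center=(1, 0), n=4, peak=8)
print([fr[1][0] for fr in sim])  # [3, 5, 7, 9]: ramping inflow at the event cell
print(real[0][1][0])             # 1: original frames unchanged
```

Because the simulated flow is added onto real traffic, the resulting sample keeps realistic background patterns while containing a known event.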
In a preferred technical solution, feature extraction takes 20 consecutive preprocessed images as input. This part consists of four different layers in sequence: a convolution layer, a convolution LSTM layer, a dropout layer, and a dilated convolution layer.

Specifically, the deep neural network comprises, arranged in sequence:

a convolutional layer, used to retain local spatial information and extract more comprehensive spatial features;

a convolutional long short-term memory layer, used to perform visual time-series prediction on the frames and to generate descriptive text from the image sequence of the frames, extracting more comprehensive temporal features;

a dropout layer, used to avoid over-fitting of the deep neural network; each neuron in the dropout layer is deactivated with a certain probability, and a deactivated neuron outputs zero. The dropout layer prevents over-fitting so that the present invention can detect urban gathering events with different characteristics (such as different numbers of participants);

a dilated convolutional layer, used to enlarge the receptive field of the neurons without pooling. A traditional convolutional neural network first pools the input image to enlarge the receptive field, but because pooling reduces the data size, upsampling is needed afterwards to restore the size, which causes information loss and lower resolution. The dilated convolutional layer does not pool; instead it skips points, i.e. inserts zero-weight holes into the filter, so that the cascade structure achieves exponential receptive-field growth while keeping the data size. The present invention introduces the dilated convolutional layer because preserving wide-range information and data resolution is of great practical importance for integrating traffic information.

In a preferred technical solution, convolutional layers are widely used in image classification. The present invention uses a convolutional layer in the first step of feature extraction mainly to keep the correlation between adjacent grid cells; as in image classification, in the urban gathering detection problem vehicles at nearby locations influence one another, so convolutional layers retain local spatial information and extract more comprehensive spatial features. Because some interconnected geographic locations span large distances, such as highways, multiple convolutional layers are needed for feature extraction.

The operation of each convolution in the convolutional layer at time t is expressed as:

H_t = f(W * X_t + b)

where H_t is the output of the convolutional layer, X_t is its input, * denotes the convolution operation, W and b are trainable parameters, and f is the ReLU activation function.

In a preferred technical solution, the convolutional long short-term memory layer comprises:

a convolutional neural network layer, used to generate descriptive text from the image sequence of the frames;

a long short-term memory layer, used to perform visual time-series prediction on the frames;

the convolutional neural network layer and the long short-term memory layer being combined to extract temporal features.

The convolutional long short-term memory architecture contains two parts. The first is a convolutional neural network layer (CNN): a convolutional neural network is a class of deep feedforward neural networks that includes convolution computations, one of the representative algorithms of deep learning. The second is a long short-term memory layer: the LSTM network is a special kind of recurrent neural network, i.e. a network that can predict time-series problems because it contains loop structures; a recurrent neural network can be pictured as a stack of layers with the same network structure, each passing information to the next. The two are combined for feature extraction to support sequence prediction. Convolutional long short-term memory layers were developed for visual time-series prediction problems and for applications that generate text descriptions from image sequences (such as videos), and extract more comprehensive temporal features. Since urban gathering events evolve over time, combining convolutional layers with LSTM layers extracts image features and temporal patterns well, and therefore discovers gathering events well in time-series images.
In a preferred technical solution, performing probability prediction with the temporal and spatial features of the frames comprises:

setting the probability at the moment the gathering event occurs to 1, and assuming that the whole gathering event occurs within n consecutive frames, where n is an integer ≥ 1; during training, the probability assigned to the r-th frame of the gathering event is set to r/n, where r is an integer ≤ n.

Specifically, the probability prediction model can output the probability that an urban gathering event occurs immediately. The probability at the moment the event occurs is set to 1; the present invention assumes that the whole event occurs within n consecutive frames, where n is an integer ≥ 1, and during training the probability assigned to the r-th frame of the gathering event is set to r/n, where r is an integer ≤ n. For example, for an event lasting 20 frames, the occurrence probabilities of frames 1 to 20 are 1/20, 2/20, ..., 20/20.

In addition, to reduce the training parameters of the final fully connected layer of this part, zero padding is not performed during this convolution. Because the probability here is a continuous value, cross-entropy cannot be used, so after feature extraction the network splits into two branches: probability prediction and classification. The loss function keeps the two results consistent, i.e. when the probability prediction is 1, the classification output of the class activation map is 1.
In a preferred technical solution, the layer structure of the class activation map is formed by connecting a convolutional layer, an average pooling layer and a fully connected layer in sequence. Class activation mapping shows which region of the image contributes to the final result of the convolutional neural network; in other words, the deep convolutional neural network pays more attention to the regions closely related to the decision. In the urban gathering event prediction problem, the class activation map corresponds to the location of the event, because during cluster formation, as the participants keep moving toward the destination, the traffic flow at the event location changes greatly. The class activation map can determine the region contributing to the gathering, i.e. the location of the gathering event. It is formed by a convolutional layer, an average pooling layer and a fully connected layer connected in sequence.

In classification discrimination the class activation map uses a binary cross-entropy loss function, expressed as:

loss_1 = -(1/k) * Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1 - ŷ_i)·log(1 - y_i)]

where y_i is the predicted result in the classification judgment, ŷ_i is the expected result in the classification judgment, i denotes the i-th frame of the input sequence, and k is the total number of input sequences;

probability prediction uses the minimum mean squared error as its loss function:

loss_2 = (1/k) * Σ_{i=1}^{k} (y_i' - ŷ_i')²

where y_i' is the predicted result and ŷ_i' is the expected result in the probability prediction;

the loss of the entire deep neural network is the sum of the two parts:

Loss = loss_1 + loss_2.
Embodiment 2

According to another embodiment of the present invention, referring to Fig. 4, a device for predicting and locating urban gathering events is provided, comprising:

an image generation unit 200, used to convert the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals;

a feature extraction unit 202, used to extract features from the frames with a deep neural network, obtaining the temporal and spatial features of the frames;

a probability prediction unit 204, used to perform probability prediction with the temporal and spatial features of the frames, computing the probability that an urban gathering event occurs;

a class activation mapping unit 206, used to perform class activation mapping on the temporal and spatial features of the frames, determining the location where an urban gathering event occurs.

The device for predicting and locating urban gathering events in this embodiment of the present invention extracts features from multiple frames with a deep neural network, obtains the temporal and spatial features of the frames, and performs probability prediction and class activation mapping with those features to determine the probability and location of an urban gathering event; it can not only detect gathering events, but also predict urban gathering events in advance and locate the gathering place. In solving feature extraction and classification judgment, an ingenious network structure is selected according to the actual situation. The present invention performs advance prediction and localization of urban gathering events with deep neural networks on complex IoT mobility data, and can predict and locate, over a long time span, key areas where gathering events have occurred or been simulated.

In a preferred technical solution, referring to Fig. 5, the device for predicting and locating urban gathering events further comprises:

an event simulation unit 201, used to perform event simulation on the actual trajectories of urban vehicles over a period of time, and to add the simulated traffic flow onto the actual traffic flow to form simulated event samples, which together with the frames serve as the input data of the deep neural network.

Referring to Fig. 3, the device for predicting and locating urban gathering events based on a deep convolutional long short-term memory network comprises three main parts: the first is feature extraction; the second is probability prediction, i.e. predicting the probability that a gathering event occurs; the third is class activation mapping, which judges whether a gathering event occurs and locates the possible event site.

The training source data consist of two parts: real data and simulated data. The real data come from real-time trajectories of buses and taxis, with main fields including licence plate, event, GPS longitude, GPS latitude, speed, status, and so on. During preprocessing, the whole city is divided evenly into grid cells by latitude and longitude, and statistics are taken once per small preset time interval, generating multiple images each time; each image corresponds to the number of vehicles entering each cell from an adjacent direction during the preceding time interval.

The present invention also performs event simulation during preprocessing: on the one hand, because few real samples contain events, the data set needs to be expanded; on the other hand, to add simulated event samples for key monitored locations. When simulating an event, the simulated traffic flow is added onto the actual traffic flow to form a simulated event sample containing the real events.
In a preferred technical solution, feature extraction takes 20 consecutive preprocessed images as input. This part consists of four different layers in sequence: a convolution layer, a convolution LSTM layer, a dropout layer, and a dilated convolution layer.

Specifically, the deep neural network comprises, arranged in sequence:

a convolutional layer, used to retain local spatial information and extract more comprehensive spatial features;

a convolutional long short-term memory layer, used to perform visual time-series prediction on the frames and to generate descriptive text from the image sequence of the frames, extracting more comprehensive temporal features;

a dropout layer, used to avoid over-fitting of the deep neural network; each neuron in the dropout layer is deactivated with a certain probability, and a deactivated neuron outputs zero. The dropout layer prevents over-fitting so that the present invention can detect urban gathering events with different characteristics (such as different numbers of participants);

a dilated convolutional layer, used to enlarge the receptive field of the neurons without pooling. A traditional convolutional neural network first pools the input image to enlarge the receptive field, but because pooling reduces the data size, upsampling is needed afterwards to restore the size, which causes information loss and lower resolution. The dilated convolutional layer does not pool; instead it skips points, i.e. inserts zero-weight holes into the filter, so that the cascade structure achieves exponential receptive-field growth while keeping the data size. The present invention introduces the dilated convolutional layer because preserving wide-range information and data resolution is of great practical importance for integrating traffic information.

In a preferred technical solution, convolutional layers are widely used in image classification. The present invention uses a convolutional layer in the first step of feature extraction mainly to keep the correlation between adjacent grid cells; as in image classification, in the urban gathering detection problem vehicles at nearby locations influence one another, so convolutional layers retain local spatial information and extract more comprehensive spatial features. Because some interconnected geographic locations span large distances, such as highways, multiple convolutional layers are needed for feature extraction.

The operation of each convolution in the convolutional layer at time t is expressed as:

H_t = f(W * X_t + b)

where H_t is the output of the convolutional layer, X_t is its input, * denotes the convolution operation, W and b are trainable parameters, and f is the ReLU activation function.

In a preferred technical solution, the convolutional long short-term memory layer comprises:

a convolutional neural network layer, used to generate descriptive text from the image sequence of the frames;

a long short-term memory layer, used to perform visual time-series prediction on the frames;

the convolutional neural network layer and the long short-term memory layer being combined to extract temporal features.

The convolutional long short-term memory architecture contains two parts. The first is a convolutional neural network layer (CNN): a convolutional neural network is a class of deep feedforward neural networks that includes convolution computations, one of the representative algorithms of deep learning. The second is a long short-term memory layer: the LSTM network is a special kind of recurrent neural network, i.e. a network that can predict time-series problems because it contains loop structures; a recurrent neural network can be pictured as a stack of layers with the same network structure, each passing information to the next. The two are combined for feature extraction to support sequence prediction. Convolutional long short-term memory layers were developed for visual time-series prediction problems and for applications that generate text descriptions from image sequences (such as videos), and extract more comprehensive temporal features. Since urban gathering events evolve over time, combining convolutional layers with LSTM layers extracts image features and temporal patterns well, and therefore discovers gathering events well in time-series images.
Specifically, the probability prediction model can output the probability that an urban gathering event occurs immediately. The probability at the moment the event occurs is set to 1; the present invention assumes that the whole event occurs within n consecutive frames, where n is an integer ≥ 1, and during training the probability assigned to the r-th frame of the gathering event is set to r/n, where r is an integer ≤ n. For example, for an event lasting 20 frames, the occurrence probabilities of frames 1 to 20 are 1/20, 2/20, ..., 20/20.

In addition, to reduce the training parameters of the final fully connected layer of this part, zero padding is not performed during this convolution. Because the probability here is a continuous value, cross-entropy cannot be used, so after feature extraction the network splits into two branches: probability prediction and classification. The loss function keeps the two results consistent, i.e. when the probability prediction is 1, the classification output of the class activation map is 1.

In a preferred technical solution, the layer structure of the class activation map is formed by connecting a convolutional layer, an average pooling layer and a fully connected layer in sequence. Class activation mapping shows which region of the image contributes to the final result of the convolutional neural network; in other words, the deep convolutional neural network pays more attention to the regions closely related to the decision. In the urban gathering event prediction problem, the class activation map corresponds to the location of the event, because during cluster formation, as the participants keep moving toward the destination, the traffic flow at the event location changes greatly. The class activation map can determine the region contributing to the gathering, i.e. the location of the gathering event. It is formed by a convolutional layer, an average pooling layer and a fully connected layer connected in sequence.

In classification discrimination the class activation map uses a binary cross-entropy loss function, expressed as:

loss_1 = -(1/k) * Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1 - ŷ_i)·log(1 - y_i)]

where y_i is the predicted result in the classification judgment, ŷ_i is the expected result in the classification judgment, i denotes the i-th frame of the input sequence, and k is the total number of input sequences;

probability prediction uses the minimum mean squared error as its loss function:

loss_2 = (1/k) * Σ_{i=1}^{k} (y_i' - ŷ_i')²

where y_i' is the predicted result and ŷ_i' is the expected result in the probability prediction;

the loss of the entire deep neural network is the sum of the two parts:

Loss = loss_1 + loss_2.
The method and device for predicting and locating urban gathering events based on a deep convolutional long short-term memory network can predict the occurrence and location of urban gathering events in advance thanks to two aspects: feature extraction obtains more comprehensive temporal and spatial information, and class activation mapping can determine the event gathering region.

The present invention models the data stream as statistical traffic-flow images, converting it into a data model that a deep neural network can process. During feature extraction, the convolutional long short-term memory layer obtains temporal features from the input images, and the dilated convolutional layer effectively enlarges the neurons' receptive field.

Compared with the prior art, the method and device based on a deep convolutional long short-term memory network can not only detect gathering events, but also predict urban gathering events in advance and locate the gathering place; in solving feature extraction and classification judgment, an ingenious network structure is selected according to the actual situation. The technical solution of the present invention has been proved feasible by experiment, and its output has high reference value.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.

In the above embodiments of the present invention, each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed technical content can be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of units may be a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.

Units described as separate components may or may not be physically separate; components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. On this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage media include media that can store program code, such as USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, and optical disks.

The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the invention, and these improvements and refinements shall also be regarded as falling within the protection scope of the invention.

Claims (10)

  1. A method for predicting and locating urban gathering events, characterized by comprising:
    converting the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals;
    extracting features from the multiple frames with a deep neural network, obtaining the temporal and spatial features of the frames;
    performing probability prediction with the temporal and spatial features of the frames, computing the probability that an urban gathering event occurs;
    performing class activation mapping on the temporal and spatial features of the frames, determining the location where an urban gathering event occurs.
  2. The method for predicting and locating urban gathering events according to claim 1, characterized in that converting the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals comprises:
    dividing the city evenly into small grid cells by latitude and longitude; counting the actual trajectories of urban vehicles once per preset time interval, each count generating multiple images corresponding to the grid cells; the images generated each time forming one frame, and multiple frames being generated over the period; each image representing, for each cell, the number of vehicles that entered from an adjacent direction during the preceding time interval.
  3. The method for predicting and locating urban gathering events according to claim 1, characterized in that the method further comprises:
    performing event simulation on the actual trajectories of urban vehicles over a period of time, and adding the simulated traffic flow onto the actual traffic flow to form simulated event samples, which together with the multiple frames serve as the input data of the deep neural network.
  4. The method for predicting and locating urban gathering events according to claim 1, characterized in that the deep neural network comprises, arranged in sequence:
    a convolutional layer, used to retain local spatial information and extract more comprehensive spatial features;
    a convolutional long short-term memory layer, used to perform visual time-series prediction on the frames and to generate descriptive text from the image sequence of the frames, extracting more comprehensive temporal features;
    a dropout layer, used to avoid over-fitting of the deep neural network;
    a dilated convolutional layer, used to enlarge the receptive field of the neurons in the deep neural network without pooling.
  5. The method for predicting and locating urban gathering events according to claim 4, characterized in that the operation of each convolution in the convolutional layer at time t is expressed as:
    H_t = f(W * X_t + b)
    where H_t is the output of the convolutional layer, X_t is its input, * denotes the convolution operation, W and b are trainable parameters, and f is the ReLU activation function.
  6. The method for predicting and locating urban gathering events according to claim 4, characterized in that the convolutional long short-term memory layer comprises:
    a convolutional neural network layer, used to generate descriptive text from the image sequence of the frames;
    a long short-term memory layer, used to perform visual time-series prediction on the frames;
    the convolutional neural network layer and the long short-term memory layer being combined to extract temporal features.
  7. The method for predicting and locating urban gathering events according to claim 1, characterized in that performing probability prediction with the temporal and spatial features of the frames comprises:
    setting the probability at the moment the gathering event occurs to 1, and assuming that the whole gathering event occurs within n consecutive frames, where n is an integer ≥ 1; during training, the probability assigned to the r-th frame of the gathering event is set to r/n, where r is an integer ≤ n.
  8. The method for predicting and locating urban gathering events according to claim 1, characterized in that the layer structure of the class activation map is formed by connecting a convolutional layer, an average pooling layer and a fully connected layer in sequence; in classification discrimination the class activation map uses a binary cross-entropy loss function, expressed as:
    loss_1 = -(1/k) * Σ_{i=1}^{k} [ŷ_i·log(y_i) + (1 - ŷ_i)·log(1 - y_i)]
    where y_i is the predicted result in the classification judgment, ŷ_i is the expected result in the classification judgment, i denotes the i-th frame of the input sequence, and k is the total number of input sequences;
    probability prediction uses the minimum mean squared error as its loss function:
    loss_2 = (1/k) * Σ_{i=1}^{k} (y_i' - ŷ_i')²
    where y_i' is the predicted result and ŷ_i' is the expected result in the probability prediction;
    the loss of the entire deep neural network is the sum of the two parts:
    Loss = loss_1 + loss_2.
  9. A device for predicting and locating urban gathering events, characterized by comprising:
    an image generation unit, used to convert the actual trajectories of urban vehicles over a period of time into multiple frames of images at preset time intervals;
    a feature extraction unit, used to extract features from the frames with a deep neural network, obtaining the temporal and spatial features of the frames;
    a probability prediction unit, used to perform probability prediction with the temporal and spatial features of the frames, computing the probability that an urban gathering event occurs;
    a class activation mapping unit, used to perform class activation mapping on the temporal and spatial features of the frames, determining the location where an urban gathering event occurs.
  10. The device for predicting and locating urban gathering events according to claim 9, characterized in that the device further comprises:
    an event simulation unit, used to perform event simulation on the actual trajectories of urban vehicles over a period of time, adding the simulated traffic flow onto the actual traffic flow to form simulated event samples, which together with the frames serve as the input data of the deep neural network.
PCT/CN2019/130287 2019-04-23 2019-12-31 一种城市聚集事件预测与定位方法及装置 WO2020215793A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910326894.6 2019-04-23
CN201910326894.6A CN110147904B (zh) 2019-04-23 2019-04-23 一种城市聚集事件预测与定位方法及装置

Publications (1)

Publication Number Publication Date
WO2020215793A1 true WO2020215793A1 (zh) 2020-10-29

Family

ID=67593803

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130287 WO2020215793A1 (zh) 2019-04-23 2019-12-31 一种城市聚集事件预测与定位方法及装置

Country Status (2)

Country Link
CN (1) CN110147904B (zh)
WO (1) WO2020215793A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330743A (zh) * 2020-11-06 2021-02-05 安徽清新互联信息科技有限公司 一种基于深度学习的高空抛物检测方法
CN112598165A (zh) * 2020-12-11 2021-04-02 湖南大学 基于私家车数据的城市功能区转移流量预测方法及装置
CN112783941A (zh) * 2021-01-07 2021-05-11 合肥工业大学 一种大规模群体聚集事件的实时检测方法
CN112990543A (zh) * 2021-02-05 2021-06-18 厦门市美亚柏科信息股份有限公司 一种基于人员活动轨迹预测人员聚集风险的方法
CN113626536A (zh) * 2021-07-02 2021-11-09 武汉大学 一种基于深度学习的新闻地理编码方法
CN117649028A (zh) * 2024-01-26 2024-03-05 南京航空航天大学 基于城市功能区域匹配的跨城市人群流量趋势预测方法

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147904B (zh) * 2019-04-23 2021-06-18 深圳先进技术研究院 一种城市聚集事件预测与定位方法及装置
CN110929915A (zh) * 2019-10-14 2020-03-27 武汉烽火众智数字技术有限责任公司 警情发生区域的智能预警模型建立方法、装置及存储介质
CN110889635B (zh) * 2019-11-29 2022-10-04 北京金和网络股份有限公司 一种用于对食品安全事件的处理进行应急演练的方法
CN111539864B (zh) * 2020-03-31 2023-07-11 中国刑事警察学院 一种基于lbs大数据的踩踏事件的情报分析方法和装置
CN111488815B (zh) * 2020-04-07 2023-05-09 中山大学 基于图卷积网络和长短时记忆网络的事件预测方法
CN111597461B (zh) * 2020-05-08 2023-11-17 北京百度网讯科技有限公司 一种目标对象聚集预测方法、装置以及电子设备
CN111540209B (zh) * 2020-05-12 2021-07-27 青岛海信网络科技股份有限公司 一种车辆聚集监测方法及计算机可读存储介质
CN112288026B (zh) * 2020-11-04 2022-09-20 南京理工大学 一种基于类激活图的红外弱小目标检测方法
CN112862177B (zh) * 2021-02-02 2024-01-19 湖南大学 一种基于深度神经网络的城市区域聚集度预测方法、设备及介质
CN112967252B (zh) * 2021-03-05 2021-10-22 哈尔滨市科佳通用机电股份有限公司 一种轨道车辆机感吊架组装螺栓丢失检测方法
CN114913475B (zh) * 2022-04-16 2023-05-02 北京网汇智城科技有限公司 基于gis和机器视觉的城市网格化管理方法及系统
CN117272082B (zh) * 2023-11-17 2024-03-19 深圳市联特微电脑信息技术开发有限公司 基于工业物联网的数据处理方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106205126A (zh) * 2016-08-12 2016-12-07 北京航空航天大学 基于卷积神经网络的大规模交通网络拥堵预测方法及装置
CN107103758A (zh) * 2017-06-08 2017-08-29 厦门大学 一种基于深度学习的城市区域交通流量预测方法
CN107529651A (zh) * 2017-08-18 2018-01-02 北京航空航天大学 一种基于深度学习的城市交通客流预测方法和设备
CN107967532A (zh) * 2017-10-30 2018-04-27 厦门大学 融合区域活力的城市交通流量预测方法
WO2018112496A1 (en) * 2016-12-20 2018-06-28 Canon Kabushiki Kaisha Tree structured crf with unary potential function using action unit features of other segments as context feature
CN110147904A (zh) * 2019-04-23 2019-08-20 深圳先进技术研究院 一种城市聚集事件预测与定位方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068171B2 (en) * 2015-11-12 2018-09-04 Conduent Business Services, Llc Multi-layer fusion in a convolutional neural network for image classification
CN108594321A (zh) * 2018-05-02 2018-09-28 深圳市唯特视科技有限公司 一种基于数据增强的弱监督目标定位方法
CN109034460A (zh) * 2018-07-03 2018-12-18 深圳市和讯华谷信息技术有限公司 用于景区客流拥堵程度的预测方法、装置和系统
CN109299401B (zh) * 2018-07-12 2022-02-08 中国海洋大学 基于混合深度学习模型LSTM-ResNet的城域时空流预测方法
CN108831153A (zh) * 2018-08-09 2018-11-16 深圳先进技术研究院 一种利用时空分布特性的交通流预测方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106205126A (zh) * 2016-08-12 2016-12-07 北京航空航天大学 基于卷积神经网络的大规模交通网络拥堵预测方法及装置
WO2018112496A1 (en) * 2016-12-20 2018-06-28 Canon Kabushiki Kaisha Tree structured crf with unary potential function using action unit features of other segments as context feature
CN107103758A (zh) * 2017-06-08 2017-08-29 厦门大学 一种基于深度学习的城市区域交通流量预测方法
CN107529651A (zh) * 2017-08-18 2018-01-02 北京航空航天大学 一种基于深度学习的城市交通客流预测方法和设备
CN107967532A (zh) * 2017-10-30 2018-04-27 厦门大学 融合区域活力的城市交通流量预测方法
CN110147904A (zh) * 2019-04-23 2019-08-20 深圳先进技术研究院 一种城市聚集事件预测与定位方法及装置

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330743A (zh) * 2020-11-06 2021-02-05 安徽清新互联信息科技有限公司 一种基于深度学习的高空抛物检测方法
CN112330743B (zh) * 2020-11-06 2023-03-10 安徽清新互联信息科技有限公司 一种基于深度学习的高空抛物检测方法
CN112598165A (zh) * 2020-12-11 2021-04-02 湖南大学 基于私家车数据的城市功能区转移流量预测方法及装置
CN112598165B (zh) * 2020-12-11 2023-09-26 湖南大学 基于私家车数据的城市功能区转移流量预测方法及装置
CN112783941A (zh) * 2021-01-07 2021-05-11 合肥工业大学 一种大规模群体聚集事件的实时检测方法
CN112990543A (zh) * 2021-02-05 2021-06-18 厦门市美亚柏科信息股份有限公司 一种基于人员活动轨迹预测人员聚集风险的方法
CN113626536A (zh) * 2021-07-02 2021-11-09 武汉大学 一种基于深度学习的新闻地理编码方法
CN113626536B (zh) * 2021-07-02 2023-08-15 武汉大学 一种基于深度学习的新闻地理编码方法
CN117649028A (zh) * 2024-01-26 2024-03-05 南京航空航天大学 基于城市功能区域匹配的跨城市人群流量趋势预测方法
CN117649028B (zh) * 2024-01-26 2024-04-02 南京航空航天大学 基于城市功能区域匹配的跨城市人群流量趋势预测方法

Also Published As

Publication number Publication date
CN110147904A (zh) 2019-08-20
CN110147904B (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
WO2020215793A1 (zh) 一种城市聚集事件预测与定位方法及装置
CN111612206B (zh) 一种基于时空图卷积神经网络的街区人流预测方法及系统
US11270579B2 (en) Transportation network speed foreeasting method using deep capsule networks with nested LSTM models
Kumar et al. A New Vehicle Tracking System with R-CNN and Random Forest Classifier for Disaster Management Platform to Improve Performance
Zhang et al. Graph deep learning model for network-based predictive hotspot mapping of sparse spatio-temporal events
AU2018101946A4 (en) Geographical multivariate flow data spatio-temporal autocorrelation analysis method based on cellular automaton
Liang et al. Joint demand prediction for multimodal systems: A multi-task multi-relational spatiotemporal graph neural network approach
Yu et al. Crime forecasting using spatio-temporal pattern with ensemble learning
EP2590151A1 (en) A framework for the systematic study of vehicular mobility and the analysis of city dynamics using public web cameras
WO2023123625A1 (zh) 一种城市疫情时空预测方法、系统、终端以及存储介质
WO2023123624A1 (zh) 城市流感发病趋势预测方法、系统、终端以及存储介质
WO2022110611A1 (zh) 一种面向平面交叉口的行人过街行为预测方法
CN113011322B (zh) 监控视频特定异常行为的检测模型训练方法及检测方法
Roland et al. Modeling and predicting vehicle accident occurrence in Chattanooga, Tennessee
Liu et al. Spatial-temporal conv-sequence learning with accident encoding for traffic flow prediction
Madhavi et al. Traffic Congestion Detection from Surveillance Videos using Deep Learning
CN112597964A (zh) 分层多尺度人群计数的方法
CN113095246A (zh) 一种基于迁移学习和场景感知的跨域自适应人数统计方法
Asgary et al. Modeling the risk of structural fire incidents using a self-organizing map
Rahman et al. A deep learning approach for network-wide dynamic traffic prediction during hurricane evacuation
Xu et al. Urban short-term traffic speed prediction with complicated information fusion on accidents
CN104778355B (zh) 基于广域分布交通系统的异常轨迹检测方法
Yijing et al. Intelligent algorithms for incident detection and management in smart transportation systems
PM et al. Fuzzy hypergraph modeling, analysis and prediction of crimes
Zhang et al. A spatiotemporal graph wavelet neural network for traffic flow prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925574

Country of ref document: EP

Kind code of ref document: A1