WO2020253039A1 - Road section feature model training method, apparatus, computer device and storage medium - Google Patents

Road section feature model training method, apparatus, computer device and storage medium

Info

Publication number
WO2020253039A1
WO2020253039A1 (Application PCT/CN2019/117262)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
data
monitoring data
target
road section
Prior art date
Application number
PCT/CN2019/117262
Other languages
English (en)
French (fr)
Inventor
林岳鹏
Original Assignee
平安国际智慧城市科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安国际智慧城市科技股份有限公司 filed Critical 平安国际智慧城市科技股份有限公司
Publication of WO2020253039A1 publication Critical patent/WO2020253039A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a method, device, computer equipment and storage medium for training road section feature models.
  • The problem of road section feature recognition is highly nonlinear, the available data are usually large and complex, and neural networks are well suited to identifying complex nonlinear systems, so using neural networks for road section feature recognition has great advantages.
  • The inventor realized that data describing road section features generally take the form of a graph data structure, which is non-Euclidean data.
  • A traditional neural network model can only process grid data and cannot process non-Euclidean data, which limits what model training can handle and affects the accuracy of road section feature recognition.
  • the embodiments of the application provide a method, device, computer equipment, and storage medium for training a road section feature model to solve the problem that the traditional neural network model cannot process the graph data structure and affects the accuracy of road section feature recognition.
  • a method for training road section feature models including:
  • acquiring first monitoring data and second monitoring data on both sides of the target road section, where both the first monitoring data and the second monitoring data include license plate information;
  • calculating, according to the first monitoring data and the second monitoring data, the transit time of the vehicle within a preset time period, and determining both the transit time and the license plate information as vehicle transit data;
  • preprocessing the vehicle transit data to obtain a graph data structure;
  • processing the graph data structure with a pre-trained graph convolutional neural network model to obtain training samples;
  • training a long short-term memory (LSTM) neural network with the training samples to obtain the target road section feature model.
  • a road section feature model training device including:
  • An obtaining module which obtains first monitoring data and second monitoring data on both sides of the target road section, wherein both the first monitoring data and the second monitoring data include license plate information;
  • a calculation module based on the first monitoring data and the second monitoring data, calculates the transit time of the vehicle within a preset time period, and determines both the transit time and the license plate information as vehicle transit data;
  • a preprocessing module to preprocess the vehicle traffic data to obtain a graph data structure
  • a processing module using a pre-trained graph convolutional neural network model to process the graph data structure to obtain training samples
  • a training module which trains a long short-term memory (LSTM) neural network with the training samples to obtain the target road section feature model.
  • a computer device including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the steps of the road section feature model training method when executing the computer-readable instructions.
  • a non-volatile computer-readable storage medium storing computer-readable instructions, where the computer-readable instructions, when executed by a processor, implement the steps of the road section feature model training method.
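  • To make the relationship between these claimed steps concrete, the following is a minimal, hypothetical orchestration sketch in Python; the helper callables and their names are assumptions used only for illustration and are not identifiers from the application.

```python
from typing import Callable, Any
import pandas as pd

def train_road_section_model(
    first_monitoring: pd.DataFrame,      # entry-side records (plate, timestamp)
    second_monitoring: pd.DataFrame,     # exit-side records (plate, timestamp)
    compute_transit_data: Callable[[pd.DataFrame, pd.DataFrame], pd.DataFrame],  # S2
    build_graph: Callable[[pd.DataFrame], Any],                                  # S3
    gcn_transform: Callable[[Any], Any],                                         # S4: pre-trained GCN
    train_lstm: Callable[[Any], Any],                                            # S5
) -> Any:
    """Hypothetical end-to-end wiring of steps S2-S5 described in the claims."""
    transit_data = compute_transit_data(first_monitoring, second_monitoring)
    graph = build_graph(transit_data)
    samples = gcn_transform(graph)
    return train_lstm(samples)
```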
  • FIG. 1 is a flowchart of a method for training a section feature model provided by an embodiment of the present application
  • FIG. 2 is a flowchart of step S1 in the method for training a road section feature model provided by an embodiment of the present application;
  • FIG. 3 is a flowchart of step S11 in the method for training a road section feature model provided by an embodiment of the present application;
  • FIG. 4 is a flowchart of step S2 in the method for training a road section feature model provided by an embodiment of the present application;
  • FIG. 5 is a flowchart of step S3 in the method for training a section feature model provided by an embodiment of the present application
  • FIG. 6 is a flowchart of step S32 in the method for training a section feature model provided by an embodiment of the present application
  • FIG. 7 is a flowchart of step S5 in the method for training a road segment feature model provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a road section feature model training device provided by an embodiment of the present application.
  • Fig. 9 is a basic structural block diagram of a computer device provided by an embodiment of the present application.
  • the road section feature model training method provided in this application is applied to the server, and the server can be implemented by an independent server or a server cluster composed of multiple servers.
  • a method for training a road segment feature model is provided, which includes the following steps:
  • the two sides of the target road segment refer to the entry side of the vehicle entering the target road segment and the exit side of the vehicle leaving the target road segment.
  • Monitoring data refers to the data monitored by the vehicle in the target road section, for example, the time when the vehicle enters the target road section, the time when the vehicle leaves the target road section, and the license plate information corresponding to the vehicle.
  • the first monitoring data refers to the data monitored when the vehicle enters the target road section
  • the second monitoring data refers to the data monitored when the vehicle leaves the target road section.
  • the first monitoring data corresponding to the entry side of the target road section and the second monitoring data corresponding to the exit side of the target road section are acquired from a preset database.
  • the first monitoring data includes the time when the vehicle enters the target road section and the license plate information corresponding to the vehicle
  • the second monitoring data includes the time when the vehicle leaves the target road section and the license plate information corresponding to the vehicle.
  • the preset database refers to a database dedicated to storing the first monitoring data and the second monitoring data.
  • S2 According to the first monitoring data and the second monitoring data, calculate the travel time of the vehicle within the preset time period, and determine both the travel time and the license plate information as the vehicle travel data.
  • Specifically, the first monitoring data and the second monitoring data within the preset time period are obtained from step S1, records with the same license plate information in the first monitoring data and the second monitoring data are selected, and the time at which the vehicle enters the target road section in the first monitoring data is subtracted from the time at which the vehicle leaves the target road section in the second monitoring data, yielding the travel time of the vehicle within the preset time period; both the travel time and the license plate information are determined as the vehicle travel data.
  • the preset time period may specifically be 8 to 9 in the morning, or 1 to 2 in the afternoon, and the specific value range is set according to the actual needs of the user, and there is no limitation here.
  • preprocessing refers to converting vehicle traffic data into graph data, and the graph data is the graph data structure.
  • By importing the vehicle traffic data obtained in step S2 into a preset processing library for preprocessing, the preprocessed graph data structure is obtained.
  • the preset processing library refers to a database specially used for preprocessing vehicle traffic data.
  • the pre-trained graph convolutional neural network model refers to a model specifically used to process graph data structures into training samples.
  • The graph data structure obtained in step S3 is imported into the pre-trained graph convolutional neural network model and trained with the following formula to obtain the training samples:
  • Zt = y(η, W) * xt
  • where Zt is the training sample, y(η, W) represents the graph convolution kernel, * represents the graph convolution operation, and xt represents the graph data structure.
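  • The application only specifies the abstract kernel form Zt = y(η, W) * xt. As one illustration of what a graph convolution step can look like, the sketch below uses the common first-order normalized-adjacency propagation; this particular kernel choice is an assumption, not taken from the application.

```python
import numpy as np

def gcn_layer(adj: np.ndarray, x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One first-order graph convolution: Z = D^-1/2 (A + I) D^-1/2 X W."""
    a_hat = adj + np.eye(adj.shape[0])                      # adjacency with self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # degree normalization
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ weight

# Tiny example: 3 road-graph nodes, 2 input features, 4 output features.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = np.random.rand(3, 2)
w = np.random.rand(2, 4)
z = gcn_layer(adj, x, w)   # training-sample features, shape (3, 4)
```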
  • S5: Use the training samples to train the long short-term memory (LSTM) neural network to obtain the target road section feature model.
  • the Long Short-Term Memory (LSTM) model is a time recursive neural network model, which is used to train data with time series characteristics.
  • the recognition model corresponding to the data can be obtained.
  • the data with temporal characteristics are training samples extracted based on the graph convolutional neural network model, and the model obtained through training sample training is the target road segment feature model.
  • the long and short-term memory neural network model includes an input layer, an output layer and at least one hidden layer.
  • The weights of each layer in the LSTM neural network model refer to the weights of the connections between the layers of the neural network model; the weights determine the information finally output by each layer and give the network a memory function over time.
  • Specifically, by training the LSTM neural network model with the training samples, the weights of each layer in the model can be effectively updated. Since the training samples are training data corresponding to the road section features, the obtained target driving model can identify the traffic situation corresponding to the road section used for training. Moreover, because the LSTM neural network model recognizes training samples with temporal characteristics, the recognition result of the target driving model is more accurate.
  • In this embodiment, the travel time of the vehicle within the preset time period is calculated from the acquired first and second monitoring data on both sides of the target road section, the travel time and the corresponding license plate information are taken as the vehicle travel data and preprocessed to obtain a graph data structure, the graph data structure is imported into the pre-trained graph convolutional neural network model to obtain training samples, and finally the training samples are used to train the LSTM neural network to obtain the target road section feature model. This enables the processing of graph data structures, expands the range of data that model training can handle, and effectively updates the weights of each layer in the LSTM neural network model, so that the target road section feature model obtained by training on the training samples recognizes road section features more accurately.
  • step S1 acquiring the first monitoring data and the second monitoring data on both sides of the target road section includes the following steps:
  • S11 Acquire the bayonet position information of the first vehicle and the bayonet position information of the second vehicle on both sides of the target road section.
  • the vehicle bayonet (checkpoint) position information refers to the information about the entry and exit bayonets used to detect vehicles entering and leaving the target road section
  • the first vehicle bayonet position information is the bayonet position information of the vehicle entering the target road section
  • the second vehicle bayonet position information is the bayonet position information of the vehicle leaving the target road section.
  • the first vehicle bayonet location information and the second vehicle bayonet location information on both sides of the target road segment are acquired by preset map information.
  • the preset map information is specifically used to store the vehicle bayonet location information corresponding to the target road section.
  • S12 Query the first monitoring data and the second monitoring data corresponding to the location information of the first vehicle bayonet and the location information of the second vehicle bayonet respectively from the preset database.
  • The preset database pre-stores the first vehicle bayonet location information, the first monitoring data corresponding to the first vehicle bayonet location information, the second vehicle bayonet location information, and the second monitoring data corresponding to the second vehicle bayonet location information.
  • When the first vehicle bayonet location information is found in the preset database, the first monitoring data corresponding to that first vehicle bayonet location information is acquired; similarly, when the second vehicle bayonet location information is found in the preset database, the second monitoring data corresponding to that second vehicle bayonet location information is acquired.
  • In this embodiment, by acquiring the first monitoring data and the second monitoring data corresponding to the first vehicle bayonet location information and the second vehicle bayonet location information respectively, the data on the target road section can be accurately extracted, ensuring the accuracy of subsequent model training.
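  • As a rough illustration of this lookup, the snippet below filters a table standing in for the preset database by bayonet (checkpoint) identifier; the column names and DataFrame layout are assumptions for illustration only.

```python
import pandas as pd

def query_monitoring_data(database: pd.DataFrame, bayonet_id: str) -> pd.DataFrame:
    """Return the monitoring records captured at one vehicle bayonet (checkpoint)."""
    return database.loc[database["bayonet_id"] == bayonet_id, ["plate", "timestamp"]]

records = pd.DataFrame({
    "bayonet_id": ["C1", "C1", "C2"],
    "plate": ["888", "886", "888"],
    "timestamp": pd.to_datetime(["2019-06-21 08:00", "2019-06-21 08:30", "2019-06-21 09:00"]),
})
first_monitoring = query_monitoring_data(records, "C1")    # entry-side data
second_monitoring = query_monitoring_data(records, "C2")   # exit-side data
```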
  • step S11 acquiring the position information of the bayonet of the first vehicle and the position information of the bayonet of the second vehicle includes the following steps:
  • S111 Obtain the vehicle bayonet location information of the target road segment from the vehicle bayonet library, where the vehicle bayonet library stores different road segments and vehicle bayonet location information in advance.
  • the vehicle bayonet library pre-stores different road sections and vehicle bayonet position information corresponding to the road sections. By querying the target road section from the vehicle bayonet library, the bayonet position information corresponding to the target road section is obtained.
  • For example, the vehicle bayonet library contains road segment A and road segment B; the vehicle bayonet position information corresponding to road segment A is A1, A2, A3 and A4, and the vehicle bayonet position information corresponding to road segment B is B1, B2, B3 and B4.
  • If the target road segment is road segment A, the vehicle bayonet position information obtained from the vehicle bayonet library is A1, A2, A3 and A4.
  • S112 According to preset conditions, filter out the first vehicle bayonet location information and the second vehicle bayonet location information from the vehicle bayonet location information.
  • the first vehicle bayonet location information and the second vehicle bayonet location information are filtered from the vehicle bayonet location information according to preset conditions.
  • the first vehicle bayonet position information refers to the vehicle bayonet position information specifically used to detect the vehicle entering the target road section
  • the second vehicle bayonet location information refers to the vehicle bayonet position information specifically used to detect the vehicle leaving the target road section.
  • the preset condition refers to selecting a certain direction in the target road section according to the actual needs of the user, for example, the direction from east to north in the target road section.
  • For example, the vehicle bayonet positions in the east-to-north direction of target road section C are C1 and C2. If the preset condition is the east-to-north direction of target road section C, C1 is used as the first vehicle bayonet location information and C2 as the second vehicle bayonet location information; if the preset condition is the north-to-east direction of the target road, C1 is used as the second vehicle bayonet location information and C2 as the first vehicle bayonet location information.
  • In this embodiment, by determining the first vehicle bayonet location information and the second vehicle bayonet location information from the vehicle bayonet location information according to the preset condition, the bayonet location information corresponding to the target road section can be determined, which makes it convenient to use the bayonet location information to obtain the corresponding data and ensures the accuracy of subsequent training.
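  • A small sketch of how the preset condition could pick the entry and exit bayonets, following the road section C example above; the dictionary layout is an assumption.

```python
def select_bayonets(direction_map: dict, preset_direction: str) -> tuple:
    """Return (first, second) bayonet IDs for the requested travel direction (step S112)."""
    return direction_map[preset_direction]

# Road section C: C1/C2 swap roles depending on the chosen direction.
road_c = {"east_to_north": ("C1", "C2"), "north_to_east": ("C2", "C1")}
first_bayonet, second_bayonet = select_bayonets(road_c, "east_to_north")  # ("C1", "C2")
```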
  • In step S2, the transit time of the vehicle within the preset time period is calculated based on the first monitoring data and the second monitoring data, and determining both the transit time and the license plate information as the vehicle transit data includes the following steps:
  • S21: Match the license plate information in the first monitoring data within the preset time period against the license plate information in the second monitoring data. If the same license plate information is matched, the first monitoring data corresponding to that license plate information is determined as the target first monitoring data and the second monitoring data is determined as the target second monitoring data, where both the target first monitoring data and the target second monitoring data include a monitoring time.
  • Both the first monitoring data and the second monitoring data contain license plate information. If the license plate information in the first monitoring data and the second monitoring data is the same, the first monitoring data and the second monitoring data were monitored for the same vehicle on the target road section.
  • Specifically, the first monitoring data and the second monitoring data acquired within the preset time period are selected, and the license plate information in the first monitoring data is matched against the license plate information in the second monitoring data.
  • When the same license plate information is matched, it indicates that the corresponding vehicle has passed the target road section between the first vehicle bayonet location and the second vehicle bayonet location; the first monitoring data corresponding to that license plate information is determined as the target first monitoring data, and the second monitoring data as the target second monitoring data.
  • The monitoring time included in the target first monitoring data is the time at which the vehicle in the first monitoring data enters the target road section, and the monitoring time included in the target second monitoring data is the time at which the vehicle in the second monitoring data leaves the target road section.
  • For example, there is a target road section L between the first vehicle bayonet location and the second vehicle bayonet location, and the preset time period is from 8 a.m. to 9 a.m.
  • In the first monitoring data, vehicle D1 enters the target road section at 8 a.m. with license plate 888, and vehicle D2 enters at 8:30 a.m. with license plate 886.
  • In the second monitoring data, vehicle F1 leaves the target road section at 9 a.m. with license plate 888, and vehicle F2 leaves at 9:30 a.m. with license plate 886.
  • Since the preset time period is 8 a.m. to 9 a.m., the license plates 888 and 886 in the first monitoring data are matched against the license plate 888 in the second monitoring data; plate 888 in the first monitoring data matches plate 888 in the second monitoring data, indicating that D1 and F1 are the same vehicle and that this vehicle has passed the target road section between the first and second vehicle bayonet locations. The corresponding first monitoring data is therefore determined as the target first monitoring data, and the second monitoring data as the target second monitoring data.
  • S22 Use the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to perform the difference calculation to obtain the transit time of the vehicle, and determine both the transit time and the license plate information as the vehicle transit data.
  • Since both the target first monitoring data and the target second monitoring data include a monitoring time, the monitoring time in the target first monitoring data is the time at which the vehicle enters the target road section, and the monitoring time in the target second monitoring data is the time at which the vehicle leaves the target road section.
  • The monitoring time of the target first monitoring data is subtracted from the monitoring time of the target second monitoring data; the resulting difference is the transit time taken by the corresponding vehicle to pass through the target road section within the preset time period, and both the transit time and the license plate information are determined as the vehicle transit data.
  • For example, the preset time period is 8 a.m. to 10 a.m., the target road section is 123, the monitoring time of the target first monitoring data of vehicle Q is 8 a.m., and the monitoring time of the target second monitoring data is 9 a.m.
  • Subtracting the 8 a.m. monitoring time of the target first monitoring data from the 9 a.m. monitoring time of the target second monitoring data gives a difference of 1 hour, which means that between 8 a.m. and 10 a.m. vehicle Q took 1 hour to pass through target road section 123.
  • In this embodiment, the target first monitoring data and the target second monitoring data are obtained by matching license plate information, and a difference calculation on their monitoring times yields the transit time of the vehicle; the transit time and the corresponding license plate information are determined as the vehicle transit data. This enables intelligent calculation on the data, extracts valid data, and improves the accuracy of subsequent model training.
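  • A minimal pandas sketch of steps S21-S22, assuming the monitoring data carries "plate" and "timestamp" columns (these names are illustrative assumptions); it reproduces the D1/F1 example above.

```python
import pandas as pd

def compute_transit_data(first: pd.DataFrame, second: pd.DataFrame,
                         start: pd.Timestamp, end: pd.Timestamp) -> pd.DataFrame:
    """Match plates seen on both sides within [start, end] and compute the transit time."""
    f = first[first["timestamp"].between(start, end)]
    s = second[second["timestamp"].between(start, end)]
    matched = f.merge(s, on="plate", suffixes=("_in", "_out"))   # target first/second data
    matched["transit_time"] = matched["timestamp_out"] - matched["timestamp_in"]
    return matched[["plate", "transit_time"]]

first = pd.DataFrame({"plate": ["888", "886"],
                      "timestamp": pd.to_datetime(["2019-06-21 08:00", "2019-06-21 08:30"])})
second = pd.DataFrame({"plate": ["888", "886"],
                       "timestamp": pd.to_datetime(["2019-06-21 09:00", "2019-06-21 09:30"])})
passes = compute_transit_data(first, second,
                              pd.Timestamp("2019-06-21 08:00"), pd.Timestamp("2019-06-21 09:00"))
# -> one row: plate 888 with a transit time of 1 hour
```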
  • step S3 preprocessing the vehicle traffic data to obtain the graph data structure includes the following steps:
  • S31 Extract data with a travel time within a preset range from the vehicle traffic data, and determine the extracted data as target data.
  • the preset range is mainly used to filter the transit time in the vehicle transit data.
  • the specific range can be 1 to 2 hours, or it can be set according to the actual needs of the user.
  • the travel time in the vehicle travel data is compared with the preset range, and if the travel time is within the preset range, the vehicle travel data including the travel time is determined as the target data.
  • Determining the target data helps remove extreme records, so that errors in the training results caused by extreme data can be avoided in the subsequent training process.
  • For example, the preset range is 1 to 2 hours, and there are five vehicle traffic records X1, X2, X3, X4 and X5 whose travel times are 0.8 hours, 1 hour, 1.5 hours, 1.8 hours and 2.5 hours respectively. Comparing each travel time with the preset range shows that the travel times of X2, X3 and X4 fall within the preset range, so X2, X3 and X4 are determined as the target data.
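  • A short sketch of step S31 on the X1-X5 example; the column names are again illustrative assumptions, with "transit_time" standing for the travel time of each record.

```python
import pandas as pd

def select_target_data(passes: pd.DataFrame,
                       lower: pd.Timedelta, upper: pd.Timedelta) -> pd.DataFrame:
    """Keep only records whose travel time falls inside the preset range."""
    return passes[passes["transit_time"].between(lower, upper)]

passes = pd.DataFrame({"record": ["X1", "X2", "X3", "X4", "X5"],
                       "transit_time": pd.to_timedelta([0.8, 1.0, 1.5, 1.8, 2.5], unit="h")})
target = select_target_data(passes, pd.Timedelta(hours=1), pd.Timedelta(hours=2))
# -> keeps X2, X3 and X4, discarding the extreme 0.8 h and 2.5 h records
```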
  • the target data is imported into a preset processing tool for conversion processing, and the converted graph data structure is obtained.
  • the preset processing tool refers to a tool specially used to process data into a graph data structure, for example, a networkx tool can be used for processing.
  • In this embodiment, the target data is determined according to the preset range of travel time and converted to obtain the graph data structure, so that valid data is turned into the data structure processed in subsequent training, further ensuring the validity of the data and the accuracy of subsequent model training.
  • In step S32, performing the graph data structure conversion on the target data to obtain the graph data structure includes the following steps:
  • S321: Create an empty undirected graph using networkx, specifically through the nx.Graph() method.
  • networkx is a software package written in a computer-readable-instruction design language that makes it convenient for users to create, manipulate and study complex networks. With networkx, networks can be stored in standardized and non-standardized data formats, various random and classic networks can be generated, network structures can be analyzed, network models can be built, new network algorithms can be designed, and networks can be drawn.
  • S322 Use the target data as the input data of the undirected graph, and process the input data into a graph data structure by drawing a network graph.
  • the method of drawing a network diagram refers to a method specifically used to convert input data into a diagram data structure.
  • the method of drawing a network diagram may specifically be the nx.draw() method in networkx.
  • the target data is imported as input data into the undirected graph obtained in step S321, and the graph data structure conversion process is performed using nx.draw() in networkx to obtain the processed graph data structure.
  • In this embodiment, by first creating an undirected graph, using the target data as its input data, and processing the input data into a graph data structure through the network-drawing method, the graph data structure conversion of the target data is achieved, which provides accurate training data for the subsequent use of the pre-trained graph convolutional neural network model and further improves the accuracy of subsequent model training.
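  • A minimal networkx sketch of steps S321-S322. The application does not spell out the node and edge semantics, so the construction below, where bayonets become nodes and each vehicle pass becomes an edge attributed with its travel time, is only one plausible reading; note that nx.draw() renders the graph, while the graph data structure itself is the Graph object.

```python
import networkx as nx

G = nx.Graph()                              # S321: empty undirected graph via nx.Graph()

# Hypothetical target data: (entry bayonet, exit bayonet, travel time in hours).
target_data = [("C1", "C2", 1.0), ("C1", "C2", 1.5), ("C3", "C4", 1.8)]
for entry, exit_, hours in target_data:
    G.add_edge(entry, exit_, travel_time=hours)   # a repeated pair overwrites the attribute

# nx.draw(G, with_labels=True)              # optional visualization (requires matplotlib)
```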
  • In step S5, the training samples are used to train the long short-term memory (LSTM) neural network, and obtaining the target road section feature model includes the following steps:
  • the long- and short-term memory neural network model is initialized.
  • the long- and short-term memory neural network is a network connected in time, and its basic unit is called a neuron.
  • the long and short-term memory neural network model includes an input layer, an output layer, and at least one hidden layer.
  • the hidden layers include input gates, forget gates, output gates, neuron states, and neuron outputs.
  • Each layer of the LSTM neural network model can include multiple neurons.
  • the forget gate determines the information to be discarded in the neuron state.
  • the input gate determines the information to be added in the neuron.
  • the output gate determines the information to be output in the neuron.
  • the state of the neuron determines the information discarded, added, and output by each gate, which is specifically expressed as the weight of the connection with each gate.
  • the neuron output determines the connection weight with the next layer.
  • initializing the long and short-term memory neural network model is to set the weights of the connections between the layers of the long- and short-term memory neural network model and the input gate, forget gate, output gate, neuron state and neuron output in the hidden layer.
  • the initial weight can be set to 1.
  • S52 Input training samples into the long- and short-term memory neural network model, and calculate the output value of each layer of the long- and short-term memory neural network model.
  • The training samples, obtained at unit time intervals within a preset time period, are input into the LSTM neural network model, and the output values of each layer are calculated in turn, including the outputs of the training samples at the input gate, the forget gate, the output gate, the neuron state and the neuron output.
  • a neuron includes three activation functions f (sigmoid), g (tanh) and h (softmax).
  • the activation function can convert the weight result into the classification result, and its function is to add some non-linear factors to the neural network, so that the neural network can better solve more complex problems.
  • The data received and processed by a neuron include the input training sample x and the state data s. In the notation below, the input of a neuron is denoted a and its output b; the subscripts ι, φ and ω denote the input gate, the forget gate and the output gate respectively; the subscript c denotes the neuron and t the time step. The weights connecting the neuron to the input gate, forget gate and output gate are written w_cι, w_cφ and w_cω, and S_c denotes the neuron state. I is the number of neurons in the input layer, H the number of neurons in the hidden layer, and C the number of cell-state neurons, with C = H.
  • The input gate receives the current sample X_t, the previous output b_h^{t-1} and the previous neuron state S_c^{t-1}. Using the weights w_iι (input to input gate), w_hι (previous output to input gate) and w_cι (neuron to input gate), its input is a_ι^t = Σ_{i=1..I} w_iι x_i^t + Σ_{h=1..H} w_hι b_h^{t-1} + Σ_{c=1..C} w_cι S_c^{t-1}, and applying the activation function f gives b_ι^t = f(a_ι^t), a scalar in the 0-1 interval that controls, from a joint judgment of the current and past states, the proportion of current information the neuron accepts.
  • The forget gate receives X_t, b_h^{t-1} and S_c^{t-1}. Using the weights w_iφ, w_hφ and w_cφ, its input is a_φ^t = Σ_{i=1..I} w_iφ x_i^t + Σ_{h=1..H} w_hφ b_h^{t-1} + Σ_{c=1..C} w_cφ S_c^{t-1}, and b_φ^t = f(a_φ^t) is a 0-1 scalar that controls, from a joint judgment of the current and past states, the proportion of past information the neuron retains.
  • The neuron receives X_t, b_h^{t-1} and S_c^{t-1}, together with the weight w_ic connecting the neuron to the input training sample, the weight w_hc connecting the neuron to the previous output, and the scalars output by the input and forget gates; its input is a_c^t = Σ_{i=1..I} w_ic x_i^t + Σ_{h=1..H} w_hc b_h^{t-1}, and the neuron state at the current time is S_c^t = b_φ^t S_c^{t-1} + b_ι^t g(a_c^t).
  • The output gate receives X_t, b_h^{t-1} and the current state data S_c^t. Using the weights w_iω, w_hω and w_cω, its input is a_ω^t = Σ_{i=1..I} w_iω x_i^t + Σ_{h=1..H} w_hω b_h^{t-1} + Σ_{c=1..C} w_cω S_c^t, and b_ω^t = f(a_ω^t) is a 0-1 scalar.
  • The neuron output is computed from the scalar produced by the output gate; specifically, b_c^t = b_ω^t · h(S_c^t).
  • By carrying out the above calculations for the training samples across the layers, the output values of each layer of the LSTM neural network model are obtained.
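  • The forward pass described above corresponds to a peephole LSTM cell. The NumPy sketch below is a hedged restatement of those gate equations, not code from the application: the weight-dictionary layout is an assumption, and tanh is used for the neuron's output activation h even though the text names h as softmax.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, b_prev, s_prev, W):
    """One peephole-LSTM step: x is X_t, b_prev is b^{t-1}, s_prev is S^{t-1}.

    W maps illustrative keys to weight arrays, e.g. W["W_xi"] has shape
    (n_cells, n_inputs) and the peephole vector W["w_ci"] shape (n_cells,)."""
    i = sigmoid(W["W_xi"] @ x + W["W_hi"] @ b_prev + W["w_ci"] * s_prev)  # input gate b_ι
    f = sigmoid(W["W_xf"] @ x + W["W_hf"] @ b_prev + W["w_cf"] * s_prev)  # forget gate b_φ
    s = f * s_prev + i * np.tanh(W["W_xc"] @ x + W["W_hc"] @ b_prev)      # neuron state S^t
    o = sigmoid(W["W_xo"] @ x + W["W_ho"] @ b_prev + W["w_co"] * s)       # output gate b_ω
    b = o * np.tanh(s)            # neuron output b_c^t (h taken as tanh in this sketch)
    return b, s
```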
  • S53 Perform error back-propagation update on each layer of the long and short-term memory neural network model according to the output value, and obtain the updated weight value of each layer.
  • the error back propagation update is performed on each layer of the long and short-term memory neural network model according to the output value of each layer of the long- and short-term memory neural network model.
  • Specifically, the error terms of each layer are first computed from the expressions for the error terms. Here ε and δ both denote error terms; in particular, ε_c^t = ∂L/∂b_c^t denotes the error term back-propagated through the neuron output and ε_s^t = ∂L/∂S_c^t the error term back-propagated through the neuron state; both are error terms, but with different meanings.
  • As before, the input of a neuron is denoted a and its output b; the subscripts ι, φ and ω denote the input gate, forget gate and output gate; the subscript c denotes the neuron and t the time step; the weights connecting the neuron to the input gate, forget gate and output gate are written w_cι, w_cφ and w_cω; S_c denotes the neuron state; the activation function of the control gates is f (sigmoid), while g (tanh) and h (softmax) denote the input and output activation functions of the neuron; K is the number of neurons in the output layer, H the number of neurons in the hidden layer, and C the number of cell-state neurons, with C = H.
  • The error term back-propagated through the input gate is δ_ι^t = f'(a_ι^t) Σ_{c=1..C} g(a_c^t) ε_s^t, and the error term back-propagated through the forget gate is δ_φ^t = f'(a_φ^t) Σ_{c=1..C} S_c^{t-1} ε_s^t.
  • The error term back-propagated through the neuron state is ε_s^t = b_ω^t h'(S_c^t) ε_c^t + b_φ^{t+1} ε_s^{t+1} + w_cι δ_ι^{t+1} + w_cφ δ_φ^{t+1} + w_cω δ_ω^t, where the error term of the neuron itself is δ_c^t = b_ι^t g'(a_c^t) ε_s^t.
  • The error term back-propagated through the output gate is δ_ω^t = f'(a_ω^t) Σ_{c=1..C} h(S_c^t) ε_c^t, and the error term back-propagated through the neuron output is ε_c^t = Σ_{k=1..K} w_ck δ_k^t + Σ_{h=1..H} w_ch δ_h^{t+1}.
  • From the error terms of each layer, the weight gradients are computed to update the weights of each layer. The weight update takes the form ∂L/∂w_ij = Σ_{t=1..T} δ_j^t B_i^t, where T denotes time, W denotes a weight (connection weights such as w_cι, w_cφ and w_cω), B denotes the output value feeding that weight (outputs such as b_h^{t-1}, or the neuron state S_c^{t-1} at the previous time), and δ denotes the corresponding error term (such as δ_ι^t). The parameters of this expression must correspond to one another: when the specific weight being updated is w_cι, the output B is the corresponding S_c^{t-1} and the error term δ is the corresponding δ_ι^t. The parameter values required by the weight-update expression are obtained from the expressions in step S53 and step S54, and evaluating the weight-update expression yields the updated weights of each layer.
  • S54: Obtain the target road section feature model based on the updated weights of each layer. Specifically, the obtained updated weights of each layer are applied to the LSTM neural network model to obtain the target model.
  • The output layer of the target model finally outputs a probability value indicating how close the input information is to the target model after being processed by it, that is, how probable it is that the information belongs to the target model; this can be widely applied to road section feature recognition to accurately identify the traffic conditions of the road section.
  • In this embodiment, the LSTM neural network model is initialized, the training samples are input into the model and the output values of each layer are calculated, the output values are used to perform an error back-propagation update on each layer to obtain the updated weights, and finally the updated weights are used to obtain the target road section feature model. Training the LSTM neural network model with the training samples effectively updates the weights of each layer, so that the road section feature model obtained through training recognizes road section features more accurately.
  • a road section feature model training device is provided, and the road section feature model training device corresponds to the road section feature model training method in the foregoing embodiment one-to-one.
  • the road section feature model training device includes a first acquisition module 80, a calculation module 81, a preprocessing module 82, a first processing module 83, and a training module 84.
  • the detailed description of each functional module is as follows:
  • the first acquisition module 80 is configured to acquire the first monitoring data and the second monitoring data on both sides of the target road section, where both the first monitoring data and the second monitoring data include license plate information;
  • the calculation module 81 is configured to calculate the transit time of the vehicle within a preset time period according to the first monitoring data and the second monitoring data, and determine both the transit time and the license plate information as the vehicle transit data;
  • the preprocessing module 82 is used to preprocess the vehicle traffic data to obtain the graph data structure
  • the first processing module 83 is configured to process the graph data structure by using the pre-trained graph convolutional neural network model to obtain training samples;
  • the training module 84 is used to train the long short-term memory (LSTM) neural network with the training samples to obtain the target road section feature model.
  • the first obtaining module 80 includes:
  • the second acquisition sub-module is used to acquire the bayonet position information of the first vehicle and the bayonet position information of the second vehicle on both sides of the target road section;
  • the query submodule is used to query the first monitoring data and the second monitoring data corresponding to the location information of the first vehicle bayonet and the location information of the second vehicle bayonet respectively from the preset database.
  • the second acquisition submodule includes:
  • the third acquiring unit is used to acquire the vehicle bayonet location information of the target road section from the vehicle bayonet library, where the vehicle bayonet library pre-stores different road segments and vehicle bayonet location information;
  • the screening unit is used to filter out the first vehicle bayonet location information and the second vehicle bayonet location information from the vehicle bayonet location information according to preset conditions.
  • calculation module 81 includes:
  • the matching sub-module is used to match the first monitoring data in the preset time period with the license plate information in the second monitoring data. If the same license plate information is matched, the first monitoring data corresponding to the same license plate information is determined Is the target first monitoring data, and the second monitoring data is determined as the target second monitoring data, where both the target first monitoring data and the target second monitoring data include the monitoring time;
  • the operation sub-module is used to calculate the difference between the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determine both the transit time and the license plate information as the vehicle transit data.
  • the preprocessing module 82 includes:
  • the extraction sub-module is used to extract data with a travel time within a preset range from the vehicle traffic data, and determine the extracted data as target data;
  • the conversion sub-module is used to perform graph data structure conversion processing on the target data to obtain the graph data structure.
  • the conversion sub-module includes:
  • Creation unit used to create an empty undirected graph using networkx
  • the second processing unit is used to treat the target data as the input data of the undirected graph, and process the input data into a graph data structure by drawing a network graph.
  • the training module 84 includes:
  • the initialization sub-module is used to initialize the long and short-term memory neural network model
  • the output value calculation sub-module is used to input training samples in the long and short-term memory neural network model, and calculate the output value of each layer of the long- and short-term memory neural network model;
  • the update sub-module is used to perform error back propagation update on each layer of the long and short-term memory neural network model according to the output value, and obtain the updated weight of each layer;
  • the fourth acquisition sub-module is used to acquire the feature model of the target road section based on the updated weights of each layer.
  • FIG. 9 is a block diagram of the basic structure of the computer device 90 in an embodiment of the application.
  • the computer device 90 includes a memory 91, a processor 92, and a network interface 93 that are communicatively connected to each other through a system bus. It should be pointed out that FIG. 9 only shows a computer device 90 with components 91-93, but it should be understood that it is not required to implement all the shown components, and more or fewer components may be implemented instead. Among them, those skilled in the art can understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
  • Its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, and the like.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • The memory 91 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, and the like.
  • the memory 91 may be an internal storage unit of the computer device 90, such as a hard disk or memory of the computer device 90.
  • the memory 91 may also be an external storage device of the computer device 90, such as a plug-in hard disk equipped on the computer device 90, a smart memory card (Smart Media Card, SMC), and a secure digital (Secure Digital, SD) card, Flash Card, etc.
  • the memory 91 may also include both the internal storage unit of the computer device 90 and its external storage device.
  • the memory 91 is generally used to store an operating system and various application software installed in the computer device 90, such as computer-readable instructions of the road section feature model training method.
  • the memory 91 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 92 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments.
  • the processor 92 is generally used to control the overall operation of the computer device 90.
  • the processor 92 is configured to run computer-readable instructions or processed data stored in the memory 91, for example, run the computer-readable instructions of the road section feature model training method.
  • the network interface 93 may include a wireless network interface or a wired network interface, and the network interface 93 is generally used to establish a communication connection between the computer device 90 and other electronic devices.
  • This application also provides another implementation, namely a non-volatile computer-readable storage medium storing computer-readable instructions that can be executed by at least one processor, so that the at least one processor executes the steps of any one of the above-mentioned road section feature model training methods.
  • The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; they can, of course, also be implemented by hardware, but in many cases the former is the better implementation.
  • The technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions that cause a computer device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

This application relates to the field of artificial intelligence and provides a road section feature model training method and apparatus, a computer device (90) and a storage medium. The road section feature model training method includes: acquiring first monitoring data and second monitoring data on both sides of a target road section, where both the first monitoring data and the second monitoring data include license plate information (S1); calculating, according to the first monitoring data and the second monitoring data, the transit time of vehicles within a preset time period, and determining both the transit time and the license plate information as vehicle transit data (S2); preprocessing the vehicle transit data to obtain a graph data structure (S3); processing the graph data structure with a pre-trained graph convolutional neural network model to obtain training samples (S4); and training a long short-term memory (LSTM) neural network with the training samples to obtain a target road section feature model (S5). The method enables processing of graph data structures and improves the recognition accuracy of the target road section feature model.

Description

Road section feature model training method, apparatus, computer device and storage medium
This application is based on, and claims priority to, Chinese invention patent application No. 201910540699.3, filed on June 21, 2019 and entitled "Road section feature model training method, apparatus, computer device and storage medium".
Technical Field
This application relates to the field of artificial intelligence, and in particular to a road section feature model training method and apparatus, a computer device and a storage medium.
Background
In recent years, the application of neural networks to road section feature recognition has been studied in depth. The road section feature recognition problem is highly nonlinear and the available data are usually large and complex, while neural networks are good at identifying complex nonlinear systems, so using neural networks for road section feature recognition offers great advantages. The inventor realized that data describing road section features generally take the form of a graph data structure, which is non-Euclidean data; when such data are processed with a neural network model, a traditional neural network model can only handle grid data and cannot handle non-Euclidean data, which limits what model training can process and affects the accuracy of road section feature recognition.
Summary
The embodiments of this application provide a road section feature model training method and apparatus, a computer device and a storage medium, to solve the problem that a traditional neural network model cannot process a graph data structure, which affects the accuracy of road section feature recognition.
A road section feature model training method includes:
acquiring first monitoring data and second monitoring data on both sides of a target road section, where both the first monitoring data and the second monitoring data include license plate information;
calculating, according to the first monitoring data and the second monitoring data, the transit time of a vehicle within a preset time period, and determining both the transit time and the license plate information as vehicle transit data;
preprocessing the vehicle transit data to obtain a graph data structure;
processing the graph data structure with a pre-trained graph convolutional neural network model to obtain training samples; and
training a long short-term memory (LSTM) neural network with the training samples to obtain a target road section feature model.
A road section feature model training apparatus includes:
an acquisition module configured to acquire first monitoring data and second monitoring data on both sides of a target road section, where both the first monitoring data and the second monitoring data include license plate information;
a calculation module configured to calculate, according to the first monitoring data and the second monitoring data, the transit time of a vehicle within a preset time period, and determine both the transit time and the license plate information as vehicle transit data;
a preprocessing module configured to preprocess the vehicle transit data to obtain a graph data structure;
a processing module configured to process the graph data structure with a pre-trained graph convolutional neural network model to obtain training samples; and
a training module configured to train an LSTM neural network with the training samples to obtain a target road section feature model.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the steps of the road section feature model training method when executing the computer-readable instructions.
A non-volatile computer-readable storage medium stores computer-readable instructions, and the computer-readable instructions, when executed by a processor, implement the steps of the road section feature model training method.
The details of one or more embodiments of this application are set forth in the accompanying drawings and the description below; other features and advantages of this application will become apparent from the specification, the drawings and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of the road section feature model training method provided by an embodiment of this application;
FIG. 2 is a flowchart of step S1 of the road section feature model training method provided by an embodiment of this application;
FIG. 3 is a flowchart of step S11 of the road section feature model training method provided by an embodiment of this application;
FIG. 4 is a flowchart of step S2 of the road section feature model training method provided by an embodiment of this application;
FIG. 5 is a flowchart of step S3 of the road section feature model training method provided by an embodiment of this application;
FIG. 6 is a flowchart of step S32 of the road section feature model training method provided by an embodiment of this application;
FIG. 7 is a flowchart of step S5 of the road section feature model training method provided by an embodiment of this application;
FIG. 8 is a schematic diagram of the road section feature model training apparatus provided by an embodiment of this application;
FIG. 9 is a block diagram of the basic structure of the computer device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments of this application. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The road section feature model training method provided in this application is applied to a server, and the server may be implemented as an independent server or as a server cluster composed of multiple servers. In one embodiment, as shown in FIG. 1, a road section feature model training method is provided and includes the following steps:
S1: Acquire first monitoring data and second monitoring data on both sides of the target road section, where both the first monitoring data and the second monitoring data include license plate information.
In the embodiments of this application, the two sides of the target road section refer to the entry side where vehicles enter the target road section and the exit side where vehicles leave it. Monitoring data refers to the data monitored for vehicles on the target road section, for example the time at which a vehicle enters the target road section, the time at which it leaves, and the license plate information of the vehicle. The first monitoring data is the data monitored when a vehicle enters the target road section, and the second monitoring data is the data monitored when a vehicle leaves it.
Specifically, the first monitoring data corresponding to the entry side of the target road section and the second monitoring data corresponding to its exit side are acquired from a preset database. The first monitoring data includes the time at which the vehicle enters the target road section and the license plate information of the vehicle, and the second monitoring data includes the time at which the vehicle leaves the target road section and the license plate information of the vehicle.
The preset database is a database dedicated to storing the first monitoring data and the second monitoring data.
S2: Calculate, according to the first monitoring data and the second monitoring data, the transit time of the vehicle within a preset time period, and determine both the transit time and the license plate information as vehicle transit data.
Specifically, the first monitoring data and the second monitoring data within the preset time period are obtained from step S1, records with the same license plate information in the first and second monitoring data are selected, and the time at which the vehicle enters the target road section in the first monitoring data is subtracted from the time at which it leaves the target road section in the second monitoring data, yielding the transit time of the vehicle within the preset time period; both the transit time and the license plate information are determined as the vehicle transit data.
The preset time period may specifically be 8 to 9 in the morning or 1 to 2 in the afternoon; its specific range is set according to the actual needs of the user and is not limited here.
S3: Preprocess the vehicle transit data to obtain a graph data structure.
In the embodiments of this application, preprocessing refers to converting the vehicle transit data into graph data, and the graph data is the graph data structure. The vehicle transit data obtained in step S2 is imported into a preset processing library for preprocessing, yielding the preprocessed graph data structure.
The preset processing library is a library dedicated to preprocessing the vehicle transit data.
S4: Process the graph data structure with the pre-trained graph convolutional neural network model to obtain training samples.
In the embodiments of this application, the pre-trained graph convolutional neural network model is a model dedicated to processing graph data structures into training samples. The graph data structure obtained in step S3 is imported into the pre-trained graph convolutional neural network model and trained with the following formula to obtain the training samples:
Zt = y(η, W) * xt
where Zt is the training sample, y(η, W) represents the graph convolution kernel, * represents the graph convolution operation, and xt represents the graph data structure.
S5: Train the long short-term memory neural network with the training samples to obtain the target road section feature model.
In the embodiments of this application, the long short-term memory (LSTM) model is a time-recurrent neural network model used to train data with temporal characteristics; training such data in the LSTM model yields a recognition model corresponding to the data.
It should be noted that the data with temporal characteristics are the training samples extracted with the graph convolutional neural network model, and the model obtained by training on these samples is the target road section feature model. The LSTM neural network model includes an input layer, an output layer and at least one hidden layer; the weights of each layer in the LSTM neural network model are the weights of the connections between the layers, which determine the information finally output by each layer and give the network a memory function over time.
Specifically, training the LSTM neural network model with the training samples effectively updates the weights of each layer. Since the training samples are training data corresponding to the road section features, the obtained target driving model can identify the traffic situation corresponding to the road section used for training; moreover, because the LSTM neural network model recognizes training samples with temporal characteristics, its recognition result is more accurate.
In this embodiment, the transit time of vehicles within the preset time period is calculated from the acquired first and second monitoring data on both sides of the target road section; the transit time and the corresponding license plate information are taken as the vehicle transit data and preprocessed into a graph data structure; the graph data structure is imported into the pre-trained graph convolutional neural network model to obtain training samples; and finally the training samples are used to train the LSTM neural network to obtain the target road section feature model. This enables the processing of graph data structures, expands the range of data that model training can handle, and effectively updates the weights of each layer of the LSTM neural network model, so that the target road section feature model obtained by training recognizes road section features more accurately.
In an embodiment, as shown in FIG. 2, step S1 of acquiring the first monitoring data and the second monitoring data on both sides of the target road section includes the following steps:
S11: Acquire first vehicle bayonet (checkpoint) position information and second vehicle bayonet position information on both sides of the target road section.
In the embodiments of this application, the vehicle bayonet position information is the information about the entry and exit checkpoints used to detect vehicles entering and leaving the target road section; the first vehicle bayonet position information is the position of the bayonet where vehicles enter the target road section, and the second vehicle bayonet position information is the position of the bayonet where vehicles leave it.
Specifically, the first vehicle bayonet position information and the second vehicle bayonet position information on both sides of the target road section are acquired from preset map information, where the preset map information is dedicated to storing the vehicle bayonet position information corresponding to the target road section.
S12: Query, from a preset database, the first monitoring data and the second monitoring data corresponding to the first vehicle bayonet position information and the second vehicle bayonet position information respectively.
Specifically, the preset database pre-stores the first vehicle bayonet position information, the first monitoring data corresponding to it, the second vehicle bayonet position information, and the second monitoring data corresponding to it.
When the first vehicle bayonet position information is found in the preset database, the first monitoring data corresponding to it is acquired; similarly, when the second vehicle bayonet position information is found in the preset database, the second monitoring data corresponding to it is acquired.
In this embodiment, by acquiring the first monitoring data and the second monitoring data corresponding respectively to the first and second vehicle bayonet position information, the data on the target road section can be accurately extracted, ensuring the accuracy of subsequent model training.
In an embodiment, as shown in FIG. 3, step S11 of acquiring the first vehicle bayonet position information and the second vehicle bayonet position information includes the following steps:
S111: Acquire, from a vehicle bayonet library, the vehicle bayonet position information present on the target road section, where the vehicle bayonet library pre-stores different road sections and vehicle bayonet position information.
In the embodiments of this application, the vehicle bayonet library pre-stores different road sections and the vehicle bayonet position information corresponding to them. The target road section is looked up in the vehicle bayonet library, and the bayonet position information corresponding to it is obtained.
For example, the vehicle bayonet library contains road section A and road section B; the vehicle bayonet position information corresponding to road section A is A1, A2, A3 and A4, and that corresponding to road section B is B1, B2, B3 and B4. If the target road section is road section A, the vehicle bayonet position information obtained from the vehicle bayonet library is A1, A2, A3 and A4.
S112: Filter the first vehicle bayonet position information and the second vehicle bayonet position information out of the vehicle bayonet position information according to a preset condition.
Specifically, from the vehicle bayonet position information obtained in step S111, the first vehicle bayonet position information and the second vehicle bayonet position information are selected according to the preset condition. The first vehicle bayonet position information is the bayonet position used to detect vehicles entering the target road section, and the second vehicle bayonet position information is the bayonet position used to detect vehicles leaving it.
The preset condition refers to selecting a particular direction of the target road section according to the actual needs of the user, for example the east-to-north direction of the target road section.
For example, the vehicle bayonet positions in the east-to-north direction of target road section C are C1 and C2. If the preset condition is the east-to-north direction of target road section C, C1 is taken as the first vehicle bayonet position information and C2 as the second; if the preset condition is the north-to-east direction of the target road, C1 is taken as the second vehicle bayonet position information and C2 as the first.
In this embodiment, determining the first and second vehicle bayonet position information from the vehicle bayonet position information according to the preset condition identifies the bayonet positions corresponding to the target road section, which makes it convenient to use the bayonet position information to obtain the corresponding data and ensures the accuracy of subsequent training.
In an embodiment, as shown in FIG. 4, step S2 of calculating, according to the first monitoring data and the second monitoring data, the transit time of the vehicle within the preset time period and determining both the transit time and the license plate information as the vehicle transit data includes the following steps:
S21: Match the license plate information in the first monitoring data within the preset time period against that in the second monitoring data; if the same license plate information is matched, determine the first monitoring data corresponding to it as target first monitoring data and the second monitoring data as target second monitoring data, where both the target first monitoring data and the target second monitoring data include a monitoring time.
In the embodiments of this application, both the first monitoring data and the second monitoring data contain license plate information; if the license plate information in the first and second monitoring data is the same, the first and second monitoring data were monitored for the same vehicle on the target road section.
Specifically, the first and second monitoring data acquired within the preset time period are selected, and the license plate information in the first monitoring data is matched against that in the second monitoring data. When the same license plate information is matched, it indicates that the corresponding vehicle has passed the target road section between the first vehicle bayonet position and the second vehicle bayonet position; the first monitoring data corresponding to that license plate information is determined as the target first monitoring data and the second monitoring data as the target second monitoring data.
The monitoring time included in the target first monitoring data is the time at which the vehicle in the first monitoring data enters the target road section, and the monitoring time included in the target second monitoring data is the time at which the vehicle in the second monitoring data leaves it.
It should be noted that when no matching license plate information is found, the vehicle corresponding to that license plate is currently still within the target road section, and the first and second monitoring data are not labeled.
For example, there is a target road section L between the first and second vehicle bayonet positions, and the preset time period is 8 a.m. to 9 a.m. In the first monitoring data, vehicle D1 enters the target road section at 8 a.m. with license plate 888, and vehicle D2 enters at 8:30 a.m. with license plate 886. In the second monitoring data, vehicle F1 leaves the target road section at 9 a.m. with license plate 888, and vehicle F2 leaves at 9:30 a.m. with license plate 886. Since the preset time period is 8 a.m. to 9 a.m., the license plates 888 and 886 in the first monitoring data are matched against the license plate 888 in the second monitoring data; plate 888 in the first monitoring data matches plate 888 in the second, indicating that D1 and F1 are the same vehicle and that it has passed the target road section between the first and second vehicle bayonet positions, so the first monitoring data is determined as the target first monitoring data and the second monitoring data as the target second monitoring data.
S22: Perform a difference calculation on the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determine both the transit time and the license plate information as the vehicle transit data.
Specifically, for the target first and second monitoring data obtained in step S21, since both include a monitoring time, the monitoring time of the target first monitoring data being the time at which the vehicle enters the target road section and that of the target second monitoring data being the time at which it leaves, the monitoring time of the target first monitoring data is subtracted from that of the target second monitoring data; the difference is the transit time taken by the corresponding vehicle to pass through the target road section within the preset time period, and both the transit time and the license plate information are determined as the vehicle transit data.
For example, the preset time period is 8 a.m. to 10 a.m., the target road section is 123, the monitoring time of the target first monitoring data of vehicle Q is 8 a.m., and that of the target second monitoring data is 9 a.m.; subtracting 8 a.m. from 9 a.m. gives a difference of 1 hour, meaning that between 8 a.m. and 10 a.m. vehicle Q took 1 hour to pass through target road section 123.
In this embodiment, the target first and second monitoring data are obtained by matching license plate information, a difference calculation on them yields the transit time of the vehicle, and the transit time together with the corresponding license plate information is determined as the vehicle transit data; this enables intelligent calculation on the data, extracts valid data, and improves the accuracy of subsequent model training.
In an embodiment, as shown in FIG. 5, step S3 of preprocessing the vehicle transit data to obtain the graph data structure includes the following steps:
S31: Extract, from the vehicle transit data, the records whose transit time is within a preset range, and determine the extracted records as target data.
In the embodiments of this application, the preset range is mainly used to filter the transit times in the vehicle transit data; it may specifically be 1 to 2 hours, or it may be set according to the actual needs of the user and is not limited here.
Specifically, each transit time in the vehicle transit data is compared with the preset range, and if the transit time is within the preset range, the vehicle transit record containing it is determined as target data. Determining the target data helps remove extreme records, so that errors in the training results caused by extreme data can be avoided in the subsequent training process.
For example, the preset range is 1 to 2 hours and there are five vehicle transit records X1, X2, X3, X4 and X5 whose transit times are 0.8 hours, 1 hour, 1.5 hours, 1.8 hours and 2.5 hours respectively; comparing each transit time with the preset range shows that the transit times of X2, X3 and X4 fall within the preset range, so X2, X3 and X4 are determined as the target data.
S32: Perform graph data structure conversion on the target data to obtain the graph data structure.
Specifically, the target data is imported into a preset processing tool for conversion, yielding the converted graph data structure.
The preset processing tool is a tool dedicated to processing data into a graph data structure; for example, the networkx tool can be used.
In this embodiment, the target data is determined according to the preset transit-time range and converted to obtain the graph data structure, so that valid data is turned into the data structure processed in subsequent training, further ensuring the validity of the data and the accuracy of subsequent model training.
In an embodiment, as shown in FIG. 6, step S32 of performing graph data structure conversion on the target data to obtain the graph data structure includes the following steps:
S321: Create an empty undirected graph using networkx.
In the embodiments of this application, networkx is a software package written in a computer-readable-instruction design language that makes it convenient for users to create, manipulate and study complex networks. With networkx, networks can be stored in standardized and non-standardized data formats, various random and classic networks can be generated, network structures can be analyzed, network models can be built, new network algorithms can be designed, and networks can be drawn.
Specifically, an empty undirected graph is created with the nx.Graph() method of networkx.
S322: Use the target data as the input data of the undirected graph, and process the input data into a graph data structure with a network-drawing method.
In the embodiments of this application, the network-drawing method is a method dedicated to converting input data into a graph data structure; it may specifically be the nx.draw() method of networkx.
Specifically, the target data is imported as input data into the undirected graph obtained in step S321, and graph data structure conversion is performed with nx.draw() of networkx, yielding the processed graph data structure.
In this embodiment, by first creating an undirected graph, using the target data as its input, and processing the input data into a graph data structure with the network-drawing method, the graph data structure conversion of the target data is achieved, which provides accurate training data for the subsequent use of the pre-trained graph convolutional neural network model and further improves the accuracy of subsequent model training.
In an embodiment, as shown in FIG. 7, step S5 of training the long short-term memory neural network with the training samples to obtain the target road section feature model includes the following steps:
S51: Initialize the long short-term memory neural network model.
In the embodiments of this application, the long short-term memory neural network model is initialized. The LSTM neural network is a network whose units are connected in time, and its basic unit is called a neuron. The LSTM neural network model includes an input layer, an output layer and at least one hidden layer; the hidden layer includes an input gate, a forget gate, an output gate, a neuron state and a neuron output, and each layer of the model can include multiple neurons. The forget gate determines the information to be discarded from the neuron state, the input gate determines the information to be added to the neuron, and the output gate determines the information to be output by the neuron. The neuron state determines the information discarded, added and output by each gate, expressed concretely as the weights of its connections to the gates, and the neuron output determines the connection weights to the next layer.
It can be understood that initializing the LSTM neural network model means setting the weights of the connections between the layers of the model and the initial weights between the input gate, forget gate, output gate, neuron state and neuron output in the hidden layer; in this embodiment the initial weights may be set to 1.
S52: Input the training samples into the long short-term memory neural network model and calculate the output values of each layer of the model.
In the embodiments of this application, training samples obtained at unit time intervals within a preset time period are input into the LSTM neural network model, and the output values of each layer are calculated, including the outputs of the training samples at the input gate, forget gate, output gate, neuron state and neuron output. A neuron involves three activation functions, f (sigmoid), g (tanh) and h (softmax). An activation function converts a weighted result into a classification result; its role is to add nonlinear factors to the neural network so that the network can better solve more complex problems.
The data received and processed by a neuron include the input training sample x and the state data s. In the notation below, the input of a neuron is denoted a and its output b; the subscripts ι, φ and ω denote the input gate, the forget gate and the output gate respectively; the subscript c denotes the neuron and t the time step; the weights connecting the neuron to the input gate, forget gate and output gate are written w_cι, w_cφ and w_cω; S_c denotes the neuron state; I is the number of neurons in the input layer, H the number of neurons in the hidden layer, and C the number of cell-state neurons, with C = H taken here.
The input gate receives the sample X_t at the current time, the output value b_h^{t-1} at the previous time and the neuron state S_c^{t-1} at the previous time. Using the weight w_iι connecting the input training sample to the input gate, the weight w_hι connecting the previous output to the input gate, and the weight w_cι connecting the neuron to the input gate, the input of the input gate is
a_ι^t = Σ_{i=1..I} w_iι x_i^t + Σ_{h=1..H} w_hι b_h^{t-1} + Σ_{c=1..C} w_cι S_c^{t-1},
and applying the activation function f gives b_ι^t = f(a_ι^t), a scalar in the 0-1 interval that controls, from a joint judgment of the current and past states, the proportion of current information the neuron accepts.
The forget gate receives X_t, b_h^{t-1} and S_c^{t-1}. Using the weight w_iφ connecting the input training sample to the forget gate, the weight w_hφ connecting the previous output to the forget gate, and the weight w_cφ connecting the neuron to the forget gate, its input is
a_φ^t = Σ_{i=1..I} w_iφ x_i^t + Σ_{h=1..H} w_hφ b_h^{t-1} + Σ_{c=1..C} w_cφ S_c^{t-1},
and b_φ^t = f(a_φ^t) is a 0-1 scalar that controls, from a joint judgment of the current and past states, the proportion of past information the neuron retains.
The neuron receives X_t, b_h^{t-1} and S_c^{t-1}, together with the weight w_ic connecting the neuron to the input training sample, the weight w_hc connecting the neuron to the previous output, and the scalars output by the input gate and forget gate; its input is
a_c^t = Σ_{i=1..I} w_ic x_i^t + Σ_{h=1..H} w_hc b_h^{t-1},
and the neuron state at the current time is S_c^t = b_φ^t S_c^{t-1} + b_ι^t g(a_c^t).
The output gate receives X_t, b_h^{t-1} and the state data S_c^t at the current time. Using the weight w_iω connecting the input training sample to the output gate, the weight w_hω connecting the previous output to the output gate, and the weight w_cω connecting the neuron to the output gate, its input is
a_ω^t = Σ_{i=1..I} w_iω x_i^t + Σ_{h=1..H} w_hω b_h^{t-1} + Σ_{c=1..C} w_cω S_c^t,
and b_ω^t = f(a_ω^t) is a 0-1 scalar.
The neuron output is computed from the scalar output by the output gate; specifically, b_c^t = b_ω^t · h(S_c^t).
The output values of each layer of the long short-term memory neural network model are obtained from the above calculations of the training samples across the layers.
S53:根据输出值对长短时记忆神经网络模型各层进行误差反传更新,获取更新后的各层的权值。
在本申请实施例中,根据获取长短时记忆神经网络模型各层的输出值对长短时记忆神经网络模型各层进行误差反传更新。
具体地,首先根据误差项的表达式
ε_c^t = ∂L/∂b_c^t,  ε_s^t = ∂L/∂s_c^t
可求出各层的误差项。其中,ε和δ均表示误差项,特别地,ε_c^t表示神经元输出反传的误差项,ε_s^t表示神经元状态反传的误差项,两者均表示误差项,但具体含义不同。在以下表达式中,神经元的输入用a表示,输出用b表示;下标ι、φ和ω分别表示输入门、遗忘门和输出门;下标c表示神经元,t代表时刻。神经元跟输入门、遗忘门和输出门连接的权值分别记做w_{cι}、w_{cφ}和w_{cω}。s_c表示神经元状态,控制门的激活函数用f(sigmoid)表示,g(tanh)和h(softmax)分别表示神经元的输入激活函数和输出激活函数。K是输出层神经元的个数,H是隐层神经元的个数,C是神经元状态的神经元个数,这里取C=H。则输入门反传的误差项为
δ_ι^t = f'(a_ι^t) Σ_{c=1}^{C} g(a_c^t) ε_s^t
遗忘门反传的误差项为
δ_φ^t = f'(a_φ^t) Σ_{c=1}^{C} s_c^{t-1} ε_s^t
神经元状态反传的误差项为
ε_s^t = b_ω^t h'(s_c^t) ε_c^t + b_φ^{t+1} ε_s^{t+1} + w_{cι} δ_ι^{t+1} + w_{cφ} δ_φ^{t+1} + w_{cω} δ_ω^t
其中,神经元输入反传的误差项为
δ_c^t = b_ι^t g'(a_c^t) ε_s^t
输出门反传的误差项为
δ_ω^t = f'(a_ω^t) Σ_{c=1}^{C} h(s_c^t) ε_c^t
神经元输出反传的误差项为
ε_c^t = Σ_{k=1}^{K} w_{ck} δ_k^t + Σ_{h=1}^{H} w_{ch} δ_h^{t+1}
根据获得的各层误差项,再进行权值梯度的计算即可更新各层的权值,其中,权值梯度的表达式为
∂L/∂W = Σ_{t=1}^{T} δ^t · B^{t-1}
式中T表示时刻(求和遍历预设时间段内的各时刻),W表示权值,如w_{cι}、w_{cφ}和w_{cω}等连接权值;B表示与该权值相对应的输出值,如s_c^{t-1}、b_h^{t-1}等输出;δ表示误差项,如δ_ι^t、δ_φ^t等误差项。s_c^{t-1}为上一时刻神经元的状态数据,b_h^{t-1}为上一时刻的输出值。上述表达式各参数需相对应,如更新的具体权值为w_{cι}时,则输出B为相对应的s_c^{t-1},误差项δ为相对应的δ_ι^t。根据步骤S52和步骤S53的表达式可获得该权值梯度表达式所需的参数值,将各权值沿其梯度的负方向调整,即可获取更新后各层的权值。
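作为示意,下面给出一段与上述权值梯度表达式对应的Python代码草图。其中学习率eta为本示例引入的假设参数;deltas与B取自步骤S52、S53的计算结果,此处数值为随意构造:

```python
# 一个最小示意:按上述权值梯度表达式累积梯度并沿负梯度方向更新权值
# (学习率eta为本示例引入的假设参数)
def update_weight(w, deltas, B, eta=0.01):
    """w:待更新的连接权值,如w_{cι};deltas:各时刻误差项δ^t;B:与该权值相对应的输出值B^{t-1}"""
    grad = sum(d * b for d, b in zip(deltas, B))  # ∂L/∂W = Σ_{t=1}^{T} δ^t · B^{t-1}
    return w - eta * grad                         # 沿梯度负方向调整权值

w_c_iota = 1.0                 # 初始权值(本实施例中设为1)
deltas = [0.20, -0.10, 0.05]   # 各时刻的δ_ι^t
B = [0.00, 0.30, 0.60]         # 各时刻对应的s_c^{t-1}
w_c_iota = update_weight(w_c_iota, deltas, B)
print(w_c_iota)
```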
S54:基于更新后的各层的权值,获取目标路段特征模型。
具体地,将获取的更新后的各层的权值应用到长短时记忆神经网络模型中,即可获取目标路段特征模型。目标路段特征模型的输出层最终会输出一概率值,该概率值表示输入信息经过该模型处理后与相应路段特征的贴近程度,即输入信息属于该路段特征的概率有多大,可广泛应用于路段特征识别,以达到准确识别路段交通情况的效果。
本实施例中,通过对长短时记忆神经网络模型进行初始化,将训练样本输入到长短时记忆神经网络模型中并计算各层的输出值,利用输出值对长短时记忆神经网络模型各层进行误差反传更新,得到各层的权值,最后利用各层的权值获取目标路段特征模型,从而实现对目标路段特征模型的获取,采用训练样本对长短时记忆神经网络模型进行训练,能够有效更新长短时记忆神经网络模型中各层的权值,使得通过训练样本训练得到的路段特征模型识别效果更精准。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
在一实施例中,提供一种路段特征模型训练装置,该路段特征模型训练装置与上述实施例中路段特征模型训练方法一一对应。如图8所示,该路段特征模型训练装置包括第一获取模块80、计算模块81、预处理模块82、第一处理模块83和训练模块84。各功能模块详细说明如下:
第一获取模块80,用于获取目标路段两侧的第一监测数据和第二监测数据,其中,第一监测数据和第二监测数据都包括车牌信息;
计算模块81,用于根据第一监测数据和第二监测数据,计算预设时间段内车辆的通行时间,并将通行时间和车牌信息都确定为车辆通行数据;
预处理模块82,用于对车辆通行数据进行预处理,得到图数据结构;
第一处理模块83,用于利用预先训练好的图卷积神经网络模型对图数据结构进行处理,得到训练样本;
训练模块84,用于采用训练样本对长短时神经网络进行训练,得到目标路段特征模型。
进一步地,第一获取模块80包括:
第二获取子模块,用于获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息;
查询子模块,用于从预设数据库中查询第一车辆卡口位置信息和第二车辆卡口位置信息分别对应的第一监测数据和第二监测数据。
进一步地,第二获取子模块包括:
第三获取单元,用于从车辆卡口库中获取目标路段存在的车辆卡口位置信息,其中,车辆卡口库预先存储了不同的路段和车辆卡口位置信息;
筛选单元,用于根据预设条件,从车辆卡口位置信息中筛选出第一车辆卡口位置信息和第二车辆卡口位置信息。
进一步地,计算模块81包括:
匹配子模块,用于将预设时间段内的第一监测数据与第二监测数据中的车牌信息进行匹配,若匹配到相同的车牌信息,则将相同的车牌信息对应的第一监测数据确定为目标第一监测数据,将第二监测数据确定为目标第二监测数据,其中,目标第一监测数据和目标第二监测数据都包括监测时间;
运算子模块,用于利用目标第一监测数据的监测时间和目标第二监测数据的监测时间进行求差运算,得到车辆的通行时间,并将通行时间和车牌信息都确定为车辆通行数据。
进一步地,预处理模块82包括:
提取子模块,用于从车辆通行数据中提取通行时间在预设范围内的数据,将提取到的数据确定为目标数据;
转换子模块,用于对目标数据进行图数据结构转换处理,得到图数据结构。
进一步地,转换子模块包括:
创建单元,用于使用networkx建立空的无向图;
第二处理单元,用于将目标数据作为无向图的输入数据,并通过绘制网络图方法将输入数据处理成图数据结构。
进一步地,训练模块84包括:
初始化子模块,用于初始化长短时记忆神经网络模型;
输出值计算子模块,用于在长短时记忆神经网络模型中输入训练样本,计算长短时记忆神经网络模型各层的输出值;
更新子模块,用于根据输出值对长短时记忆神经网络模型各层进行误差反传更新,获取更新后的各层的权值;
第四获取子模块,用于基于更新后的各层的权值,获取目标路段特征模型。
本申请的一些实施例公开了计算机设备。具体请参阅图9,为本申请的一实施例中计算机设备90基本结构框图。
如图9中所示意的,所述计算机设备90包括通过系统总线相互通信连接的存储器91、处理器92、网络接口93。需要指出的是,图9中仅示出了具有组件91-93的计算机设备90,但是应理解的是,并不要求实施所有示出的组件,可以替代地实施更多或者更少的组件。其中,本技术领域技术人员可以理解,这里的计算机设备是一种能够按照事先设定或存储的指令,自动进行数值计算和/或信息处理的设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程门阵列(Field-Programmable Gate Array,FPGA)、数字信号处理器(Digital Signal Processor,DSP)、嵌入式设备等。
所述计算机设备可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述计算机设备可以与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互。
所述存储器91至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,所述存储器91可以是所述计算机设备90的内部存储单元,例如该计算机设备90的硬盘或内存。在另一些实施例中,所述存储器91也可以是所述计算机设备90的外部存储设备,例如该计算机设备90上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。当然,所述存储器91还可以既包括所述计算机设备90的内部存储单元也包括其外部存储设备。本实施例中,所述存储器91通常用于存储安装于所述计算机设备90的操作系统和各类应用软件,例如所述路段特征模型训练方法的计算机可读指令等。此外,所述存储器91还可以用于暂时地存储已经输出或者将要输出的各类数据。
所述处理器92在一些实施例中可以是中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器92通常用于控制所述计算机设备90的总体操作。本实施例中,所述处理器92用于运行所述存储器91中存储的计算机可读指令或者处理数据,例如运行所述路段特征模型训练方法的计算机可读指令。
所述网络接口93可包括无线网络接口或有线网络接口,该网络接口93通常用于在所述计算机设备90与其他电子设备之间建立通信连接。
本申请还提供了另一种实施方式,即提供一种非易失性的计算机可读存储介质,所述非易失性的计算机可读存储介质存储有计算机可读指令,所述计算机可读指令可被至少一个处理器执行,以使所述至少一个处理器执行上述任意一种路段特征模型训练方法的步骤。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台计算机设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
最后应说明的是,显然以上所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例,附图中给出了本申请的较佳实施例,但并不限制本申请的专利范围。本申请可以以许多不同的形式来实现,相反地,提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。尽管参照前述实施例对本申请进行了详细的说明,对于本领域的技术人员而言,其依然可以对前述各具体实施方式所记载的技术方案进行修改,或者对其中部分技术特征进行等效替换。凡是利用本申请说明书及附图内容所做的等效结构,直接或间接运用在其他相关的技术领域,均同理在本申请专利保护范围之内。

Claims (20)

  1. 一种路段特征模型训练方法,其特征在于,所述路段特征模型训练方法包括:
    获取目标路段两侧的第一监测数据和第二监测数据,其中,所述第一监测数据和所述第二监测数据都包括车牌信息;
    根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据;
    对所述车辆通行数据进行预处理,得到图数据结构;
    利用预先训练好的图卷积神经网络模型对所述图数据结构进行处理,得到训练样本;
    采用所述训练样本对长短时神经网络进行训练,得到目标路段特征模型。
  2. 如权利要求1所述的路段特征模型训练方法,其特征在于,所述获取目标路段两侧的第一监测数据和第二监测数据的步骤包括:
    获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息;
    从预设数据库中查询所述第一车辆卡口位置信息和所述第二车辆卡口位置信息分别对应的第一监测数据和第二监测数据。
  3. 如权利要求2所述的路段特征模型训练方法,其特征在于,所述获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息的步骤包括:
    从车辆卡口库中获取目标路段存在的车辆卡口位置信息,其中,所述车辆卡口库预先存储了不同的路段和所述车辆卡口位置信息;
    根据预设条件,从所述车辆卡口位置信息中筛选出所述第一车辆卡口位置信息和所述第二车辆卡口位置信息。
  4. 如权利要求1所述的路段特征模型训练方法,其特征在于,所述根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据的步骤包括:
    将预设时间段内的所述第一监测数据与所述第二监测数据中的车牌信息进行匹配,若匹配到相同的车牌信息,则将相同的车牌信息对应的第一监测数据确定为目标第一监测数据,将第二监测数据确定为目标第二监测数据,其中,所述目标第一监测数据和目标第二监测数据都包括监测时间;
    利用所述目标第一监测数据的监测时间和所述目标第二监测数据的监测时间进行求差运算,得到车辆的所述通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据。
  5. 如权利要求1所述的路段特征模型训练方法,其特征在于,所述对所述车辆通行数据进行预处理,得到图数据结构的步骤包括:
    从所述车辆通行数据中提取所述通行时间在预设范围内的数据,将提取到的数据确定为目标数据;
    对所述目标数据进行图数据结构转换处理,得到所述图数据结构。
  6. 如权利要求5所述的路段特征模型训练方法,其特征在于,所述对所述目标数据进行图数据结构转换处理,得到所述图数据结构的步骤包括:
    使用networkx建立空的无向图;
    将所述目标数据作为所述无向图的输入数据,并通过绘制网络图方法将所述输入数据处理成所述图数据结构。
  7. 如权利要求1所述的路段特征模型训练方法,其特征在于,所述采用所述训练样本对长短时神经网络进行训练,得到目标路段特征模型的步骤包括:
    初始化长短时记忆神经网络模型;
    在所述长短时记忆神经网络模型中输入所述训练样本,计算所述长短时记忆神经网络模型各层的输出值;
    根据所述输出值对所述长短时记忆神经网络模型各层进行误差反传更新,获取更新后的所述各层的权值;
    基于更新后的所述各层的权值,获取目标路段特征模型。
  8. 一种路段特征模型训练装置,其特征在于,所述路段特征模型训练装置包括:
    第一获取模块,获取目标路段两侧的第一监测数据和第二监测数据,其中,所述第一监测数据和所述第二监测数据都包括车牌信息;
    计算模块,根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据;
    预处理模块,对所述车辆通行数据进行预处理,得到图数据结构;
    第一处理模块,利用预先训练好的图卷积神经网络模型对所述图数据结构进行处理,得到训练样本;
    训练模块,采用所述训练样本对长短时神经网络进行训练,得到目标路段特征模型。
  9. 如权利要求8所述的路段特征模型训练装置,其特征在于,所述第一获取模块包括:
    第二获取子模块,用于获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息;
    查询子模块,用于从预设数据库中查询所述第一车辆卡口位置信息和所述第二车辆卡口位置信息分别对应的第一监测数据和第二监测数据。
  10. 如权利要求9所述的路段特征模型训练装置,其特征在于,所述第二获取子模块包括:
    第三获取单元,用于从车辆卡口库中获取目标路段存在的车辆卡口位置信息,其中,所述车辆卡口库预先存储了不同的路段和所述车辆卡口位置信息;
    筛选单元,用于根据预设条件,从所述车辆卡口位置信息中筛选出所述第一车辆卡口位置信息和所述第二车辆卡口位置信息。
  11. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取目标路段两侧的第一监测数据和第二监测数据,其中,所述第一监测数据和所述第二监测数据都包括车牌信息;
    根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据;
    对所述车辆通行数据进行预处理,得到图数据结构;
    利用预先训练好的图卷积神经网络模型对所述图数据结构进行处理,得到训练样本;
    采用所述训练样本对长短时神经网络进行训练,得到目标路段特征模型。
  12. 如权利要求11所述的计算机设备,其特征在于,所述获取目标路段两侧的第一监测数据和第二监测数据的步骤包括:
    获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息;
    从预设数据库中查询所述第一车辆卡口位置信息和所述第二车辆卡口位置信息分别对应的第一监测数据和第二监测数据。
  13. 如权利要求12所述的计算机设备,其特征在于,所述获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息的步骤包括:
    从车辆卡口库中获取目标路段存在的车辆卡口位置信息,其中,所述车辆卡口库预先存储了不同的路段和所述车辆卡口位置信息;
    根据预设条件,从所述车辆卡口位置信息中筛选出所述第一车辆卡口位置信息和所述第二车辆卡口位置信息。
  14. 如权利要求11所述的计算机设备,其特征在于,所述根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据的步骤包括:
    将预设时间段内的所述第一监测数据与所述第二监测数据中的车牌信息进行匹配,若匹配到相同的车牌信息,则将相同的车牌信息对应的第一监测数据确定为目标第一监测数据,将第二监测数据确定为目标第二监测数据,其中,所述目标第一监测数据和目标第二监测数据都包括监测时间;
    利用所述目标第一监测数据的监测时间和所述目标第二监测数据的监测时间进行求差运算,得到车辆的所述通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据。
  15. 如权利要求11所述的计算机设备,其特征在于,所述对所述车辆通行数据进行预处理,得到图数据结构的步骤包括:
    从所述车辆通行数据中提取所述通行时间在预设范围内的数据,将提取到的数据确定为目标数据;
    对所述目标数据进行图数据结构转换处理,得到所述图数据结构。
  16. 一种非易失性的计算机可读存储介质,所述非易失性的计算机可读存储介质存储有计算机可读指令,其特征在于,所述计算机可读指令被一种处理器执行时使得所述一种处理器执行如下步骤:
    获取目标路段两侧的第一监测数据和第二监测数据,其中,所述第一监测数据和所述第二监测数据都包括车牌信息;
    根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据;
    对所述车辆通行数据进行预处理,得到图数据结构;
    利用预先训练好的图卷积神经网络模型对所述图数据结构进行处理,得到训练样本;
    采用所述训练样本对长短时神经网络进行训练,得到目标路段特征模型。
  17. 如权利要求16所述的非易失性的计算机可读存储介质,其特征在于,所述获取目标路段两侧的第一监测数据和第二监测数据的步骤包括:
    获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息;
    从预设数据库中查询所述第一车辆卡口位置信息和所述第二车辆卡口位置信息分别对应的第一监测数据和第二监测数据。
  18. 如权利要求17所述的非易失性的计算机可读存储介质,其特征在于,所述获取目标路段两侧的第一车辆卡口位置信息和第二车辆卡口位置信息的步骤包括:
    从车辆卡口库中获取目标路段存在的车辆卡口位置信息,其中,所述车辆卡口库预先存储了不同的路段和所述车辆卡口位置信息;
    根据预设条件,从所述车辆卡口位置信息中筛选出所述第一车辆卡口位置信息和所述第二车辆卡口位置信息。
  19. 如权利要求16所述的非易失性的计算机可读存储介质,其特征在于,所述根据所述第一监测数据和所述第二监测数据,计算预设时间段内车辆的通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据的步骤包括:
    将预设时间段内的所述第一监测数据与所述第二监测数据中的车牌信息进行匹配,若匹配到相同的车牌信息,则将相同的车牌信息对应的第一监测数据确定为目标第一监测数据,将第二监测数据确定为目标第二监测数据,其中,所述目标第一监测数据和目标第二监测数据都包括监测时间;
    利用所述目标第一监测数据的监测时间和所述目标第二监测数据的监测时间进行求差运算,得到车辆的所述通行时间,并将所述通行时间和所述车牌信息都确定为车辆通行数据。
  20. 如权利要求16所述的非易失性的计算机可读存储介质,其特征在于,所述对所述车辆通行数据进行预处理,得到图数据结构的步骤包括:
    从所述车辆通行数据中提取所述通行时间在预设范围内的数据,将提取到的数据确定为目标数据;
    对所述目标数据进行图数据结构转换处理,得到所述图数据结构。
PCT/CN2019/117262 2019-06-21 2019-11-11 路段特征模型训练方法、装置、计算机设备及存储介质 WO2020253039A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910540699.3A CN110459051B (zh) 2019-06-21 2019-06-21 路段特征模型训练方法、装置、终端设备及存储介质
CN201910540699.3 2019-06-21

Publications (1)

Publication Number Publication Date
WO2020253039A1 true WO2020253039A1 (zh) 2020-12-24



Also Published As

Publication number Publication date
CN110459051A (zh) 2019-11-15
CN110459051B (zh) 2020-09-04

