WO2020253039A1 - Road section feature model training method, apparatus, computer device and storage medium - Google Patents
Road section feature model training method, apparatus, computer device and storage medium
- Publication number
- WO2020253039A1 (PCT/CN2019/117262)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- data
- monitoring data
- target
- road section
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
Definitions
- This application relates to the field of artificial intelligence, and in particular to a method, device, computer equipment and storage medium for training road section feature models.
- the problem of road section feature recognition is highly nonlinear, the available data is usually large and complex, and neural networks are well suited to recognizing complex nonlinear systems. Therefore, using neural networks to handle road section feature recognition problems offers great advantages.
- the inventor realizes that data describing road section characteristics generally take the form of a graph data structure, and this type of graph data structure belongs to non-Euclidean data.
- traditional neural network models can only process grid data and cannot process non-Euclidean data, which limits the scope of model training and affects the accuracy of road section feature recognition.
- the embodiments of the application provide a method, device, computer equipment, and storage medium for training a road section feature model to solve the problem that the traditional neural network model cannot process the graph data structure and affects the accuracy of road section feature recognition.
- a method for training road section feature models, including:
- acquiring first monitoring data and second monitoring data on both sides of the target road section, where the first monitoring data and the second monitoring data both include license plate information;
- according to the first monitoring data and the second monitoring data, calculating the transit time of the vehicle within a preset time period, and determining both the transit time and the license plate information as vehicle transit data;
- preprocessing the vehicle transit data to obtain a graph data structure;
- processing the graph data structure with a pre-trained graph convolutional neural network model to obtain training samples;
- training a long short-term memory (LSTM) neural network with the training samples to obtain the target road section feature model.
- a road section feature model training device including:
- An obtaining module which obtains first monitoring data and second monitoring data on both sides of the target road section, wherein both the first monitoring data and the second monitoring data include license plate information;
- a calculation module based on the first monitoring data and the second monitoring data, calculates the transit time of the vehicle within a preset time period, and determines both the transit time and the license plate information as vehicle transit data;
- a preprocessing module to preprocess the vehicle traffic data to obtain a graph data structure
- a processing module using a pre-trained graph convolutional neural network model to process the graph data structure to obtain training samples
- a training module, which trains the long short-term memory neural network with the training samples to obtain the target road section feature model.
- a computer device including a memory, a processor, and computer-readable instructions stored in the memory and capable of running on the processor, where the processor implements the steps of the road section feature model training method when executing the computer-readable instructions.
- a non-volatile computer-readable storage medium storing computer-readable instructions, where the steps of the road section feature model training method are implemented when the computer-readable instructions are executed by a processor.
- FIG. 1 is a flowchart of a method for training a section feature model provided by an embodiment of the present application
- FIG. 2 is a flowchart of step S1 in the method for training a road section feature model provided by an embodiment of the present application
- FIG. 3 is a flowchart of step S11 in the method for training a road section feature model provided by an embodiment of the present application
- FIG. 4 is a flowchart of step S2 in the method for training a road section feature model provided by an embodiment of the present application
- FIG. 5 is a flowchart of step S3 in the method for training a section feature model provided by an embodiment of the present application
- FIG. 6 is a flowchart of step S32 in the method for training a section feature model provided by an embodiment of the present application
- FIG. 7 is a flowchart of step S5 in the method for training a road segment feature model provided by an embodiment of the present application.
- FIG. 8 is a schematic diagram of a road section feature model training device provided by an embodiment of the present application.
- Fig. 9 is a basic structural block diagram of a computer device provided by an embodiment of the present application.
- the road section feature model training method provided in this application is applied to the server, and the server can be implemented by an independent server or a server cluster composed of multiple servers.
- a method for training a road segment feature model is provided, which includes the following steps:
- the two sides of the target road segment refer to the entry side of the vehicle entering the target road segment and the exit side of the vehicle leaving the target road segment.
- Monitoring data refers to the data monitored by the vehicle in the target road section, for example, the time when the vehicle enters the target road section, the time when the vehicle leaves the target road section, and the license plate information corresponding to the vehicle.
- the first monitoring data refers to the data monitored when the vehicle enters the target road section
- the second monitoring data refers to the data monitored when the vehicle leaves the target road section.
- the first monitoring data corresponding to the entry side of the target road section and the second monitoring data corresponding to the exit side of the target road section are acquired from a preset database.
- the first monitoring data includes the time when the vehicle enters the target road section and the license plate information corresponding to the vehicle
- the second monitoring data includes the time when the vehicle leaves the target road section and the license plate information corresponding to the vehicle.
- the preset database refers to a database dedicated to storing the first monitoring data and the second monitoring data.
- S2 According to the first monitoring data and the second monitoring data, calculate the travel time of the vehicle within the preset time period, and determine both the travel time and the license plate information as the vehicle travel data.
- specifically, the first monitoring data and the second monitoring data within the preset time period are obtained from step S1, the records with the same license plate information in the first monitoring data and the second monitoring data are selected, and the time when the vehicle enters the target road section in the first monitoring data is subtracted from the time when the vehicle leaves the target road section in the second monitoring data to obtain the transit time of the vehicle within the preset time period; both the transit time and the license plate information are determined as the vehicle transit data.
- the preset time period may specifically be 8:00 to 9:00 in the morning or 1:00 to 2:00 in the afternoon; the specific range is set according to the actual needs of the user and is not limited here.
- preprocessing refers to converting vehicle traffic data into graph data, and the graph data is the graph data structure.
- the preprocessed graph data structure is obtained.
- the preset processing library refers to a database specially used for preprocessing vehicle traffic data.
- the pre-trained graph convolutional neural network model refers to a model specifically used to process graph data structures into training samples.
- the graph data structure obtained in step S3 is imported into the pre-trained graph convolutional neural network model and trained with the following formula to obtain the training samples: $Z_t = y(\theta, W) * x_t$, where $Z_t$ is the training sample, $y(\theta, W)$ represents the graph convolution kernel, $*$ represents the graph convolution operation, and $x_t$ represents the graph data structure.
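- As a minimal illustrative sketch of step S4, the graph convolution kernel can be realized with the common normalized-adjacency propagation rule; the patent does not fix a particular form of y(θ, W), so the propagation rule, matrix shapes and ReLU activation below are assumptions rather than the claimed implementation.

```python
import numpy as np

def graph_convolution(adjacency, features, weights):
    """One graph-convolution step: Z = ReLU(A_hat @ X @ W).

    adjacency: (N, N) adjacency matrix of the road-section graph
    features:  (N, F) node feature matrix built from the vehicle transit data
    weights:   (F, F_out) trainable kernel parameters (the W in y(theta, W))
    """
    # Add self-loops and symmetrically normalize the adjacency matrix.
    a_tilde = adjacency + np.eye(adjacency.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
    a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt

    # Propagate node features along the edges and apply a ReLU nonlinearity;
    # the result Z_t is what step S5 uses as a training sample.
    return np.maximum(a_hat @ features @ weights, 0.0)
```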
- S5 Use the training samples to train the long short-term memory neural network to obtain the target road section feature model.
- the Long Short-Term Memory (LSTM) model is a recurrent neural network model that is used to train on data with time-series characteristics.
- by training on such data, the recognition model corresponding to the data can be obtained.
- the data with temporal characteristics are training samples extracted based on the graph convolutional neural network model, and the model obtained through training sample training is the target road segment feature model.
- the long short-term memory neural network model includes an input layer, an output layer and at least one hidden layer.
- the weights of each layer in the long short-term memory neural network model refer to the weights of the connections between layers in the neural network model; the weights determine the information finally output by each layer and give the network its memory of temporal order.
- by training, the weights of each layer in the long short-term memory neural network model can be effectively updated. Since the training samples are training data corresponding to road section features, the obtained target road section feature model can identify the traffic conditions corresponding to the trained road section. Moreover, by recognizing training samples with temporal characteristics, the long short-term memory neural network model makes the recognition results more accurate.
- in this embodiment, the transit time of the vehicle within the preset time period is calculated based on the acquired first and second monitoring data on both sides of the target road section, and the transit time and the corresponding license plate information of the vehicle are used as the vehicle transit data; the vehicle transit data are preprocessed into a graph data structure, the graph data structure is processed with the pre-trained graph convolutional neural network model to obtain training samples, and the training samples are used to train the long short-term memory neural network to obtain the target road section feature model.
- processing the graph data structure expands the range of data that model training can handle, and the weights of each layer in the long short-term memory neural network model can be effectively updated, making the recognition effect of the target road section feature model obtained through training more accurate.
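- A hedged end-to-end sketch of how steps S1 to S5 might be chained is given below; the callables passed in are hypothetical placeholders for the operations described above, not interfaces defined by this application.

```python
def train_road_section_feature_model(fetch_data, compute_transit, build_graph,
                                     graph_convolve, train_lstm,
                                     road_section, time_window):
    """Chain steps S1-S5; every callable is supplied by the caller."""
    first_data, second_data = fetch_data(road_section, time_window)  # S1
    transit_records = compute_transit(first_data, second_data)       # S2
    graph = build_graph(transit_records)                             # S3
    samples = graph_convolve(graph)                                  # S4
    return train_lstm(samples)                                       # S5
```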
- step S1 acquiring the first monitoring data and the second monitoring data on both sides of the target road section includes the following steps:
- S11 Acquire the first vehicle checkpoint location information and the second vehicle checkpoint location information on both sides of the target road section.
- the vehicle checkpoint location information refers to the information of the entry and exit checkpoints specifically used to detect vehicles entering and leaving the target road section
- the first vehicle checkpoint location information is the location information of the checkpoint at which vehicles enter the target road section
- the second vehicle checkpoint location information is the location information of the checkpoint at which vehicles leave the target road section.
- the first vehicle checkpoint location information and the second vehicle checkpoint location information on both sides of the target road section are acquired from preset map information.
- the preset map information is specifically used to store the vehicle checkpoint location information corresponding to the target road section.
- S12 Query, from the preset database, the first monitoring data and the second monitoring data respectively corresponding to the first vehicle checkpoint location information and the second vehicle checkpoint location information.
- the preset database pre-stores the first vehicle checkpoint location information, the first monitoring data corresponding to the first vehicle checkpoint location information, the second vehicle checkpoint location information, and the second monitoring data corresponding to the second vehicle checkpoint location information.
- when the first vehicle checkpoint location information is queried from the preset database, the first monitoring data corresponding to the first vehicle checkpoint location information is acquired; similarly, when the second vehicle checkpoint location information is queried from the preset database, the second monitoring data corresponding to the second vehicle checkpoint location information is acquired.
- by querying the monitoring data corresponding to the checkpoint location information, the data on the target road section can be accurately extracted, ensuring the accuracy of subsequent model training.
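- A minimal sketch of the query in step S12, assuming the preset database rows can be represented as dictionaries with 'checkpoint', 'plate' and 'timestamp' keys; the real database schema is not specified in this application.

```python
def query_monitoring_data(records, checkpoint_id):
    """Return the monitoring records captured at one vehicle checkpoint."""
    return [r for r in records if r["checkpoint"] == checkpoint_id]

# first_data  = query_monitoring_data(all_records, entry_checkpoint_id)
# second_data = query_monitoring_data(all_records, exit_checkpoint_id)
```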
- step S11 acquiring the first vehicle checkpoint location information and the second vehicle checkpoint location information includes the following steps:
- S111 Obtain the vehicle checkpoint location information of the target road section from the vehicle checkpoint library, where the vehicle checkpoint library stores different road sections and vehicle checkpoint location information in advance.
- the vehicle checkpoint library pre-stores different road sections and the vehicle checkpoint location information corresponding to each road section. By querying the target road section in the vehicle checkpoint library, the checkpoint location information corresponding to the target road section is obtained.
- for example, the vehicle checkpoint location information corresponding to road section A is A1, A2, A3, and A4
- the vehicle checkpoint location information corresponding to road section B is B1, B2, B3, and B4.
- if the target road section is A, the vehicle checkpoint location information obtained from the vehicle checkpoint library is A1, A2, A3, and A4.
- S112 According to preset conditions, filter out the first vehicle checkpoint location information and the second vehicle checkpoint location information from the vehicle checkpoint location information.
- the first vehicle checkpoint location information and the second vehicle checkpoint location information are filtered out of the vehicle checkpoint location information according to the preset conditions.
- the first vehicle checkpoint location information refers to the vehicle checkpoint location information specifically used to detect vehicles entering the target road section
- the second vehicle checkpoint location information refers to the vehicle checkpoint location information specifically used to detect vehicles leaving the target road section.
- the preset condition refers to a travel direction on the target road section selected according to the actual needs of the user, for example, the direction from east to north on the target road section.
- for example, the vehicle checkpoint location information in the direction from east to north on target road section C is C1 and C2. If the preset condition is the direction from east to north on target road section C, C1 is used as the first vehicle checkpoint location information and C2 as the second vehicle checkpoint location information. If the preset condition is the direction from north to east on the target road section, C1 is used as the second vehicle checkpoint location information and C2 as the first vehicle checkpoint location information.
- by filtering the checkpoint location information according to the preset conditions, the checkpoint location information corresponding to the target road section can be determined, which makes it convenient for the user to obtain the corresponding data from the checkpoint location information and ensures the accuracy of subsequent training.
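- The preset-condition filtering of step S112 can be sketched as a simple lookup, assuming the checkpoint pairs per travel direction are stored in a mapping like the hypothetical one below; the actual representation is not given in this application.

```python
def select_checkpoints(section_checkpoints, direction):
    """Return the (entry, exit) checkpoint pair for one travel direction."""
    return section_checkpoints[direction]

# Example mirroring road section C above.
pairs = {"east_to_north": ("C1", "C2"), "north_to_east": ("C2", "C1")}
entry_checkpoint, exit_checkpoint = select_checkpoints(pairs, "east_to_north")
# entry_checkpoint == "C1", exit_checkpoint == "C2"
```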
- step S2 calculating the transit time of the vehicle within the preset time period based on the first monitoring data and the second monitoring data, and determining both the transit time and the license plate information as the vehicle transit data includes the following steps:
- S21 Match the license plate information in the first monitoring data and the second monitoring data within the preset time period. If the same license plate information is matched, the first monitoring data corresponding to the same license plate information is determined as the target first monitoring data, and the second monitoring data is determined as the target second monitoring data, where both the target first monitoring data and the target second monitoring data include the monitoring time.
- both the first monitoring data and the second monitoring data contain license plate information. If the license plate information in the first monitoring data and the second monitoring data is the same, it means that the first monitoring data and the second monitoring data are data monitored for the same vehicle on the target road section.
- the first monitoring data and the second monitoring data acquired within the preset time period are selected, and the license plate information in the first monitoring data is matched with the license plate information in the second monitoring data.
- if the same license plate information is matched, the corresponding first monitoring data and second monitoring data are determined as the target first monitoring data and the target second monitoring data respectively.
- the monitoring time included in the target first monitoring data is the time when the vehicle in the first monitoring data enters the target road section
- the monitoring time included in the target second monitoring data is the time when the vehicle in the second monitoring data leaves the target road section.
- the preset time period is from 8 am to 9 am.
- in the first monitoring data, the time when the D1 vehicle enters the target road section is 8 a.m. and the corresponding license plate information is 888; the time when the D2 vehicle enters the target road section is 8:30 a.m. and the corresponding license plate information is 886.
- in the second monitoring data, the time when the F1 vehicle leaves the target road section is 9 a.m. and the corresponding license plate information is 888; the time when the F2 vehicle leaves the target road section is 9:30 a.m. and the corresponding license plate information is 886.
- the license plate information 888 and 886 in the first monitoring data is matched with the license plate information in the second monitoring data. The license plate information 888 in the first monitoring data matches the license plate information 888 in the second monitoring data, indicating that the D1 vehicle and the F1 vehicle are the same vehicle and that the vehicle passed through the target road section between the first vehicle checkpoint location and the second vehicle checkpoint location.
- the first monitoring data is determined as the target first monitoring data
- the second monitoring data is determined as the target second monitoring data.
- S22 Perform a difference operation using the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determine both the transit time and the license plate information as the vehicle transit data.
- the monitoring time included in the target first monitoring data is the time when the vehicle in the first monitoring data enters the target road section.
- the monitoring time included in the target second monitoring data is the time when the vehicle in the second monitoring data leaves the target road section.
- the monitoring time of the target first monitoring data is subtracted from the monitoring time of the target second monitoring data; the resulting difference is the transit time for the vehicle corresponding to these monitoring times to pass through the target road section within the preset time period, and both the transit time and the license plate information are determined as vehicle transit data.
- the preset time period is 8 am to 10 am
- the target road section is 123
- the monitoring time of the target first monitoring data of vehicle Q is 8 am
- the monitoring time of the target second monitoring data is 9 am.
- subtracting the monitoring time of the target first monitoring data, 8 a.m., from the monitoring time of the target second monitoring data, 9 a.m., gives a difference of 1 hour, which means that the transit time of vehicle Q through target road section 123 within the preset time period is 1 hour.
- the target first monitoring data and the target second monitoring data are obtained by matching the license plate information, and a difference operation is performed to obtain the corresponding transit time of the vehicle; both the transit time of the vehicle and the corresponding license plate information are determined as vehicle transit data, so that the data are calculated intelligently, effective data are extracted, and the accuracy of subsequent model training is improved.
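- A minimal sketch of steps S21 and S22, assuming each monitoring record is a dictionary with 'plate' and 'time' keys; the record layout is an assumption, not part of this application.

```python
from datetime import datetime

def compute_transit_times(first_data, second_data):
    """Match entry and exit records by license plate and subtract the times."""
    exit_times = {r["plate"]: r["time"] for r in second_data}
    transit = []
    for record in first_data:
        plate = record["plate"]
        if plate in exit_times:  # same plate seen on both sides of the road section
            transit.append({"plate": plate,
                            "transit_time": exit_times[plate] - record["time"]})
    return transit

# Vehicle with plate 888 enters at 8 a.m. and leaves at 9 a.m.: transit time 1 hour.
records = compute_transit_times(
    [{"plate": "888", "time": datetime(2019, 6, 21, 8, 0)}],
    [{"plate": "888", "time": datetime(2019, 6, 21, 9, 0)}],
)
```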
- step S3 preprocessing the vehicle traffic data to obtain the graph data structure includes the following steps:
- S31 Extract data with a travel time within a preset range from the vehicle traffic data, and determine the extracted data as target data.
- the preset range is mainly used to filter the transit time in the vehicle transit data.
- the specific range can be 1 to 2 hours, or it can be set according to the actual needs of the user.
- the travel time in the vehicle travel data is compared with the preset range, and if the travel time is within the preset range, the vehicle travel data including the travel time is determined as the target data.
- determining the target data helps the user delete extreme data, so that errors in the training results caused by extreme data can be avoided in the subsequent training process.
- for example, the preset range is 1 to 2 hours, and there are 5 vehicle transit data records, namely X1, X2, X3, X4, and X5, whose transit times are 0.8 hours, 1 hour, 1.5 hours, 1.8 hours, and 2.5 hours, respectively. Comparing these transit times with the preset range shows that the transit times contained in X2, X3, and X4 fall within the preset range, so the vehicle transit data X2, X3, and X4 are determined as the target data.
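- The extraction of step S31 can be sketched as a simple range filter over the transit records produced above; the 1-2 hour bounds mirror the example and are user-configurable, not fixed values.

```python
from datetime import timedelta

def filter_by_range(transit_records, low=timedelta(hours=1), high=timedelta(hours=2)):
    """Keep only the records whose transit time lies within the preset range."""
    return [r for r in transit_records if low <= r["transit_time"] <= high]
```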
- the target data is imported into a preset processing tool for conversion processing, and the converted graph data structure is obtained.
- the preset processing tool refers to a tool specially used to process data into a graph data structure, for example, a networkx tool can be used for processing.
- the target data is determined according to the preset range of transit times, and the target data is converted to obtain the graph data structure; in this way, valid data can be converted into the data structure used for subsequent training, further ensuring the validity of the data and the accuracy of subsequent model training.
- step S32 performing the graph data structure conversion processing on the target data to obtain the graph data structure includes the following steps:
- S321 Use networkx to create an empty undirected graph.
- networkx is a software package, written in a programming language of computer-readable instructions, that makes it convenient for users to create, manipulate, and study complex networks. Using networkx, networks can be stored in standard and non-standard data formats, a variety of random and classic networks can be generated, network structure can be analyzed, network models can be built, new network algorithms can be designed, and networks can be drawn.
- S322 Use the target data as the input data of the undirected graph, and process the input data into a graph data structure by drawing a network graph.
- the network graph drawing method refers to a method specifically used to convert input data into a graph data structure.
- the method of drawing a network diagram may specifically be the nx.draw() method in networkx.
- the target data is imported as input data into the undirected graph obtained in step S321, and the graph data structure conversion processing is performed using nx.draw() in networkx to obtain the processed graph data structure.
- in this way, the graph data structure conversion processing of the target data can be realized, providing accurate training data for the subsequent use of the pre-trained graph convolutional neural network model and further improving the accuracy of subsequent model training.
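- A minimal sketch of steps S321 and S322 using networkx; how nodes and edges are derived from the target data is not fully specified in this application, so the plate-keyed nodes and consecutive-record edges below are illustrative assumptions.

```python
import networkx as nx

def build_graph(target_data):
    """Build an undirected networkx graph from the filtered vehicle transit data."""
    graph = nx.Graph()  # empty undirected graph, as in step S321
    for record in target_data:
        # One node per vehicle record, keyed by its license plate (an assumption).
        graph.add_node(record["plate"],
                       transit_time=record["transit_time"].total_seconds())
    plates = [r["plate"] for r in target_data]
    # Link consecutive records so transit times can propagate through the graph.
    graph.add_edges_from(zip(plates, plates[1:]))
    return graph

# nx.draw(graph) only renders the graph; the adjacency matrix fed to the graph
# convolution can be obtained with nx.to_numpy_array(graph).
```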
- step S5 using the training samples to train the long short-term memory neural network to obtain the target road section feature model includes the following steps:
- S51 Initialize the long short-term memory neural network model.
- the long short-term memory neural network is a network connected in time, and its basic unit is called a neuron.
- the long short-term memory neural network model includes an input layer, an output layer, and at least one hidden layer.
- the hidden layer includes input gates, forget gates, output gates, neuron states, and neuron outputs.
- each layer of the long short-term memory neural network model can include multiple neurons.
- the forget gate determines the information to be discarded in the neuron state.
- the input gate determines the information to be added in the neuron.
- the output gate determines the information to be output in the neuron.
- the state of the neuron determines the information discarded, added, and output by each gate, which is specifically expressed as the weight of the connection with each gate.
- the neuron output determines the connection weight with the next layer.
- initializing the long short-term memory neural network model means setting the weights of the connections between the layers of the long short-term memory neural network model and between the input gates, forget gates, output gates, neuron states, and neuron outputs in the hidden layer.
- for example, the initial weights can all be set to 1.
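- A small sketch of the initialization in step S51, using the stacked-gate weight layout common in LSTM implementations; the layout and shapes are assumptions, and the all-ones initial value follows the example above (practical implementations would normally use small random values).

```python
import numpy as np

def init_lstm(input_size, hidden_size, init_value=1.0):
    """Initialize the connection weights of a single LSTM layer."""
    return {
        # Stacked weights for the input, forget, cell and output gates.
        "W_x": np.full((4 * hidden_size, input_size), init_value),   # from sample x_t
        "W_h": np.full((4 * hidden_size, hidden_size), init_value),  # from output b_{t-1}
        "bias": np.zeros(4 * hidden_size),
    }
```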
- S52 Input the training samples into the long short-term memory neural network model, and calculate the output values of each layer of the long short-term memory neural network model.
- the training samples obtained within the preset time period at unit time intervals are input into the long short-term memory neural network model, and the output values of each layer are calculated, including the outputs of the training samples at the input gate, forget gate, output gate, neuron state, and neuron output.
- a neuron includes three activation functions f (sigmoid), g (tanh) and h (softmax).
- the activation function can convert the weight result into the classification result, and its function is to add some non-linear factors to the neural network, so that the neural network can better solve more complex problems.
- the data received and processed by a neuron includes: input training sample: x, state data: s.
- the parameters mentioned below also include: the input of the neuron is represented by a, and the output is represented by b.
- the subscripts ι, φ and ω represent the input gate, forget gate and output gate respectively.
- the subscript c represents neuron and t represents time.
- the weights connecting the neuron to the input gate, forget gate and output gate are denoted $w_{c\iota}$, $w_{c\phi}$ and $w_{c\omega}$ respectively.
- S c represents the neuron state.
- I represents the number of neurons in the input layer
- H is the number of neurons in the hidden layer
- the input gate receives the sample $x^t$ at the current moment, the output value $b_h^{t-1}$ at the previous moment, and the neuron state $s_c^{t-1}$ at the previous moment. Using the weight $w_{i\iota}$ connecting the input training sample to the input gate, the weight $w_{h\iota}$ connecting the previous output value to the input gate, and the weight $w_{c\iota}$ connecting the neuron to the input gate, the input of the input gate is calculated as $a_\iota^t=\sum_{i=1}^{I} w_{i\iota} x_i^t+\sum_{h=1}^{H} w_{h\iota} b_h^{t-1}+\sum_{c=1}^{C} w_{c\iota} s_c^{t-1}$; applying the activation function f gives $b_\iota^t=f(a_\iota^t)$, a scalar in the interval 0-1 that controls the proportion of current information received by the neuron based on a comprehensive judgment of the current state and the past state.
- the forget gate receives the sample $x^t$ at the current moment, the output value $b_h^{t-1}$ at the previous moment, and the state $s_c^{t-1}$ at the previous moment. Using the weight $w_{i\phi}$ connecting the input training sample to the forget gate, the weight $w_{h\phi}$ connecting the previous output value to the forget gate, and the weight $w_{c\phi}$ connecting the neuron to the forget gate, the input of the forget gate is calculated as $a_\phi^t=\sum_{i=1}^{I} w_{i\phi} x_i^t+\sum_{h=1}^{H} w_{h\phi} b_h^{t-1}+\sum_{c=1}^{C} w_{c\phi} s_c^{t-1}$; applying the activation function f gives $b_\phi^t=f(a_\phi^t)$, a scalar in the interval 0-1 that controls the proportion of past information received by the neuron based on a comprehensive judgment of the current state and the past state.
- the neuron receives the sample $x^t$ at the current moment and the output value $b_h^{t-1}$ at the previous moment. Using the weight $w_{ic}$ connecting the input training sample to the neuron and the weight $w_{hc}$ connecting the previous output value to the neuron, the neuron input is $a_c^t=\sum_{i=1}^{I} w_{ic} x_i^t+\sum_{h=1}^{H} w_{hc} b_h^{t-1}$, and the neuron state is updated as $s_c^t=b_\phi^t s_c^{t-1}+b_\iota^t g(a_c^t)$.
- the output gate receives the sample $x^t$ at the current moment, the output value $b_h^{t-1}$ at the previous moment, and the current state $s_c^t$; its input is $a_\omega^t=\sum_{i=1}^{I} w_{i\omega} x_i^t+\sum_{h=1}^{H} w_{h\omega} b_h^{t-1}+\sum_{c=1}^{C} w_{c\omega} s_c^{t}$, and applying the activation function f gives $b_\omega^t=f(a_\omega^t)$.
- the neuron output is calculated from the scalar output of the output gate; specifically, the output of the neuron is $b_c^t=b_\omega^t\, h(s_c^t)$.
- the output value of each layer of the long short-term memory neural network model can be obtained by performing the above calculations on the training samples through the layers.
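- A minimal numpy sketch of one forward step of step S52; it uses the standard LSTM cell without the peephole connections from the neuron state to the gates, so it is a simplification of the formulation above rather than an exact reproduction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(params, x_t, b_prev, s_prev):
    """One forward step of an LSTM cell using the weights from init_lstm above."""
    h = b_prev.shape[0]
    z = params["W_x"] @ x_t + params["W_h"] @ b_prev + params["bias"]
    i = sigmoid(z[0:h])          # input gate: proportion of new information admitted
    f = sigmoid(z[h:2 * h])      # forget gate: proportion of the old state retained
    g = np.tanh(z[2 * h:3 * h])  # candidate neuron state
    o = sigmoid(z[3 * h:4 * h])  # output gate: proportion of the state exposed
    s_t = f * s_prev + i * g     # new neuron state s_c^t
    b_t = o * np.tanh(s_t)       # neuron output b_c^t
    return b_t, s_t
```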
- S53 Perform an error back-propagation update on each layer of the long short-term memory neural network model according to the output values, and obtain the updated weights of each layer.
- the error back-propagation update is performed on each layer of the long short-term memory neural network model according to the output value of each layer.
- the error term of each layer can be calculated as follows.
- δ and ε both represent error terms; in particular, $\epsilon_c^t$ denotes the error term back-propagated from the neuron output and $\epsilon_s^t$ the error term back-propagated from the neuron state. Both are error terms, but their specific meanings differ.
- the input of the neuron is represented by a
- the output is represented by b.
- the subscripts ι, φ and ω represent the input gate, forget gate and output gate respectively.
- the subscript c represents neuron and t represents time.
- the weights connecting the neuron to the input gate, forget gate and output gate are denoted $w_{c\iota}$, $w_{c\phi}$ and $w_{c\omega}$.
- S c represents the neuron state
- the activation function of the control gate is represented by f (sigmoid), and g (tanh) and h (softmax) represent the input activation function and output activation function of the neuron, respectively.
- K is the number of neurons in the output layer
- H is the number of neurons in the hidden layer
- the error term of the input gate is $\delta_\iota^t=f'(a_\iota^t)\sum_{c=1}^{C} g(a_c^t)\,\epsilon_s^t$, the error term of the forget gate is $\delta_\phi^t=f'(a_\phi^t)\sum_{c=1}^{C} s_c^{t-1}\,\epsilon_s^t$, and the error term back-propagated through the neuron state is $\epsilon_s^t=b_\omega^t h'(s_c^t)\,\epsilon_c^t+b_\phi^{t+1}\epsilon_s^{t+1}+w_{c\iota}\delta_\iota^{t+1}+w_{c\phi}\delta_\phi^{t+1}+w_{c\omega}\delta_\omega^{t}$.
- the error term back-propagated through the output gate is $\delta_\omega^t=f'(a_\omega^t)\sum_{c=1}^{C} h(s_c^t)\,\epsilon_c^t$, and the error term back-propagated from the neuron output is $\epsilon_c^t=\sum_{k=1}^{K} w_{ck}\,\delta_k^t+\sum_{h=1}^{H} w_{ch}\,\delta_h^{t+1}$.
- the weight value of each layer can be updated by calculating the weight gradient, where the weight update expression is $W \leftarrow W - \eta\,\Delta W$ with $\Delta W = \sum_{t=1}^{T} \delta^t\, B^{t-1}$,
- where T represents time,
- W represents a weight, such as the connection weights $w_{c\iota}$, $w_{c\phi}$ and $w_{c\omega}$,
- B represents the corresponding output value, such as $b_h^{t-1}$ or $s_c^{t-1}$, where $s_c^{t-1}$ is the state data of the neuron at the previous moment and $b_h^{t-1}$ is the output value at the previous moment,
- and δ represents the error term, such as $\delta_\iota^t$ or $\delta_\phi^t$.
- each parameter of the above expression must correspond: if the specific weight being updated is $w_{c\iota}$, then the output B corresponds to $s_c^{t-1}$ and the error term δ corresponds to $\delta_\iota^t$. The parameter values required by the weight update expression can be obtained from the error-term expressions in step S53; the updated weight value of each layer is then obtained by evaluating the weight update expression.
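- The weight update at the end of step S53 can be sketched as a plain gradient step; the gradients are assumed to have been accumulated from the error terms δ over time, and the learning rate is illustrative.

```python
def sgd_update(weights, gradients, learning_rate=0.01):
    """Apply W <- W - eta * dE/dW to every weight matrix after backpropagation."""
    return {name: w - learning_rate * gradients[name] for name, w in weights.items()}
```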
- S54 The obtained updated weights of each layer are applied to the long short-term memory neural network model to obtain the target road section feature model.
- the output layer of the target model finally outputs a probability value, which indicates how close the input information is to the target after being processed by the target model, that is, how likely the input belongs to the target category; this can be widely used in road section feature recognition to accurately identify the traffic conditions of the road section.
- in this embodiment, the training samples are used to train the long short-term memory neural network model, which effectively updates the weights of each layer in the long short-term memory neural network model and makes the recognition effect of the road section feature model obtained through training more accurate.
- a road section feature model training device is provided, and the road section feature model training device corresponds to the road section feature model training method in the foregoing embodiment one-to-one.
- the road section feature model training device includes a first acquisition module 80, a calculation module 81, a preprocessing module 82, a first processing module 83, and a training module 84.
- the detailed description of each functional module is as follows:
- the first acquisition module 80 is configured to acquire the first monitoring data and the second monitoring data on both sides of the target road section, where both the first monitoring data and the second monitoring data include license plate information;
- the calculation module 81 is configured to calculate the transit time of the vehicle within a preset time period according to the first monitoring data and the second monitoring data, and determine both the transit time and the license plate information as the vehicle transit data;
- the preprocessing module 82 is used to preprocess the vehicle traffic data to obtain the graph data structure
- the first processing module 83 is configured to process the graph data structure by using the pre-trained graph convolutional neural network model to obtain training samples;
- the training module 84 is used to train the long short-term memory neural network with the training samples to obtain the target road section feature model.
- the first obtaining module 80 includes:
- the second acquisition sub-module is used to acquire the first vehicle checkpoint location information and the second vehicle checkpoint location information on both sides of the target road section;
- the query sub-module is used to query, from the preset database, the first monitoring data and the second monitoring data respectively corresponding to the first vehicle checkpoint location information and the second vehicle checkpoint location information.
- the second acquisition sub-module includes:
- the third acquisition unit is used to acquire the vehicle checkpoint location information of the target road section from the vehicle checkpoint library, where the vehicle checkpoint library pre-stores different road sections and vehicle checkpoint location information;
- the screening unit is used to filter out the first vehicle checkpoint location information and the second vehicle checkpoint location information from the vehicle checkpoint location information according to preset conditions.
- calculation module 81 includes:
- the matching sub-module is used to match the first monitoring data in the preset time period with the license plate information in the second monitoring data. If the same license plate information is matched, the first monitoring data corresponding to the same license plate information is determined Is the target first monitoring data, and the second monitoring data is determined as the target second monitoring data, where both the target first monitoring data and the target second monitoring data include the monitoring time;
- the operation sub-module is used to calculate the difference between the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determine both the transit time and the license plate information as the vehicle transit data.
- the preprocessing module 82 includes:
- the extraction sub-module is used to extract data with a travel time within a preset range from the vehicle traffic data, and determine the extracted data as target data;
- the conversion sub-module is used to perform graph data structure conversion processing on the target data to obtain the graph data structure.
- the conversion sub-module includes:
- Creation unit used to create an empty undirected graph using networkx
- the second processing unit is used to treat the target data as the input data of the undirected graph, and process the input data into a graph data structure by drawing a network graph.
- the training module 84 includes:
- the initialization sub-module is used to initialize the long short-term memory neural network model;
- the output value calculation sub-module is used to input the training samples into the long short-term memory neural network model and calculate the output values of each layer of the long short-term memory neural network model;
- the update sub-module is used to perform error back-propagation updates on each layer of the long short-term memory neural network model according to the output values, and obtain the updated weights of each layer;
- the fourth acquisition sub-module is used to acquire the feature model of the target road section based on the updated weights of each layer.
- FIG. 9 is a block diagram of the basic structure of the computer device 90 in an embodiment of the application.
- the computer device 90 includes a memory 91, a processor 92, and a network interface 93 that are communicatively connected to each other through a system bus. It should be pointed out that FIG. 9 only shows a computer device 90 with components 91-93, but it should be understood that it is not required to implement all the shown components, and more or fewer components may be implemented instead. Among them, those skilled in the art can understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
- Its hardware includes but is not limited to microprocessors, Application Specific Integrated Circuits (ASIC), Field-Programmable Gate Arrays (FPGA), Digital Signal Processors (DSP), embedded devices, etc.
- the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
- the memory 91 includes at least one type of readable storage medium, the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static memory Random access memory (SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disks, optical disks, etc.
- the memory 91 may be an internal storage unit of the computer device 90, such as a hard disk or memory of the computer device 90.
- the memory 91 may also be an external storage device of the computer device 90, such as a plug-in hard disk equipped on the computer device 90, a smart memory card (Smart Media Card, SMC), and a secure digital (Secure Digital, SD) card, Flash Card, etc.
- the memory 91 may also include both the internal storage unit of the computer device 90 and its external storage device.
- the memory 91 is generally used to store an operating system and various application software installed in the computer device 90, such as computer-readable instructions of the road section feature model training method.
- the memory 91 can also be used to temporarily store various types of data that have been output or will be output.
- the processor 92 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments.
- the processor 92 is generally used to control the overall operation of the computer device 90.
- the processor 92 is configured to run computer-readable instructions or processed data stored in the memory 91, for example, run the computer-readable instructions of the road section feature model training method.
- the network interface 93 may include a wireless network interface or a wired network interface, and the network interface 93 is generally used to establish a communication connection between the computer device 90 and other electronic devices.
- This application also provides another implementation manner, that is, a non-volatile computer-readable storage medium storing computer-readable instructions, and the computer-readable instructions
- may be executed by at least one processor, so that the at least one processor executes the steps of any one of the above-mentioned road section feature model training methods.
- the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
- the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a computer device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the various embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Claims (20)
- A road section feature model training method, characterized in that the road section feature model training method comprises: acquiring first monitoring data and second monitoring data on both sides of a target road section, wherein the first monitoring data and the second monitoring data both include license plate information; according to the first monitoring data and the second monitoring data, calculating the transit time of a vehicle within a preset time period, and determining both the transit time and the license plate information as vehicle transit data; preprocessing the vehicle transit data to obtain a graph data structure; processing the graph data structure by using a pre-trained graph convolutional neural network model to obtain training samples; and training a long short-term memory neural network with the training samples to obtain a target road section feature model.
- The road section feature model training method according to claim 1, characterized in that the step of acquiring the first monitoring data and the second monitoring data on both sides of the target road section comprises: acquiring first vehicle checkpoint location information and second vehicle checkpoint location information on both sides of the target road section; and querying, from a preset database, the first monitoring data and the second monitoring data respectively corresponding to the first vehicle checkpoint location information and the second vehicle checkpoint location information.
- The road section feature model training method according to claim 2, characterized in that the step of acquiring the first vehicle checkpoint location information and the second vehicle checkpoint location information on both sides of the target road section comprises: acquiring, from a vehicle checkpoint library, the vehicle checkpoint location information existing on the target road section, wherein the vehicle checkpoint library pre-stores different road sections and the vehicle checkpoint location information; and filtering out the first vehicle checkpoint location information and the second vehicle checkpoint location information from the vehicle checkpoint location information according to a preset condition.
- The road section feature model training method according to claim 1, characterized in that the step of calculating the transit time of the vehicle within the preset time period according to the first monitoring data and the second monitoring data, and determining both the transit time and the license plate information as the vehicle transit data comprises: matching the license plate information in the first monitoring data and the second monitoring data within the preset time period, and if the same license plate information is matched, determining the first monitoring data corresponding to the same license plate information as target first monitoring data and determining the second monitoring data as target second monitoring data, wherein the target first monitoring data and the target second monitoring data both include a monitoring time; and performing a difference operation using the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determining both the transit time and the license plate information as the vehicle transit data.
- The road section feature model training method according to claim 1, characterized in that the step of preprocessing the vehicle transit data to obtain the graph data structure comprises: extracting, from the vehicle transit data, data whose transit time is within a preset range, and determining the extracted data as target data; and performing graph data structure conversion processing on the target data to obtain the graph data structure.
- The road section feature model training method according to claim 5, characterized in that the step of performing graph data structure conversion processing on the target data to obtain the graph data structure comprises: creating an empty undirected graph using networkx; and taking the target data as input data of the undirected graph, and processing the input data into the graph data structure by a network graph drawing method.
- The road section feature model training method according to claim 1, characterized in that the step of training the long short-term memory neural network with the training samples to obtain the target road section feature model comprises: initializing a long short-term memory neural network model; inputting the training samples into the long short-term memory neural network model, and calculating output values of each layer of the long short-term memory neural network model; performing error back-propagation updates on each layer of the long short-term memory neural network model according to the output values, and obtaining updated weights of each layer; and obtaining the target road section feature model based on the updated weights of each layer.
- A road section feature model training device, characterized in that the road section feature model training device comprises: a first acquisition module, which acquires first monitoring data and second monitoring data on both sides of a target road section, wherein the first monitoring data and the second monitoring data both include license plate information; a calculation module, which calculates the transit time of a vehicle within a preset time period according to the first monitoring data and the second monitoring data, and determines both the transit time and the license plate information as vehicle transit data; a preprocessing module, which preprocesses the vehicle transit data to obtain a graph data structure; a first processing module, which processes the graph data structure by using a pre-trained graph convolutional neural network model to obtain training samples; and a training module, which trains a long short-term memory neural network with the training samples to obtain a target road section feature model.
- The road section feature model training device according to claim 8, characterized in that the first acquisition module comprises: a second acquisition sub-module configured to acquire first vehicle checkpoint location information and second vehicle checkpoint location information on both sides of the target road section; and a query sub-module configured to query, from a preset database, the first monitoring data and the second monitoring data respectively corresponding to the first vehicle checkpoint location information and the second vehicle checkpoint location information.
- The road section feature model training device according to claim 9, characterized in that the second acquisition sub-module comprises: a third acquisition unit configured to acquire, from a vehicle checkpoint library, the vehicle checkpoint location information existing on the target road section, wherein the vehicle checkpoint library pre-stores different road sections and the vehicle checkpoint location information; and a screening unit configured to filter out the first vehicle checkpoint location information and the second vehicle checkpoint location information from the vehicle checkpoint location information according to a preset condition.
- A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer-readable instructions, implements the following steps: acquiring first monitoring data and second monitoring data on both sides of a target road section, wherein the first monitoring data and the second monitoring data both include license plate information; according to the first monitoring data and the second monitoring data, calculating the transit time of a vehicle within a preset time period, and determining both the transit time and the license plate information as vehicle transit data; preprocessing the vehicle transit data to obtain a graph data structure; processing the graph data structure by using a pre-trained graph convolutional neural network model to obtain training samples; and training a long short-term memory neural network with the training samples to obtain a target road section feature model.
- The computer device according to claim 11, characterized in that the step of acquiring the first monitoring data and the second monitoring data on both sides of the target road section comprises: acquiring first vehicle checkpoint location information and second vehicle checkpoint location information on both sides of the target road section; and querying, from a preset database, the first monitoring data and the second monitoring data respectively corresponding to the first vehicle checkpoint location information and the second vehicle checkpoint location information.
- The computer device according to claim 12, characterized in that the step of acquiring the first vehicle checkpoint location information and the second vehicle checkpoint location information on both sides of the target road section comprises: acquiring, from a vehicle checkpoint library, the vehicle checkpoint location information existing on the target road section, wherein the vehicle checkpoint library pre-stores different road sections and the vehicle checkpoint location information; and filtering out the first vehicle checkpoint location information and the second vehicle checkpoint location information from the vehicle checkpoint location information according to a preset condition.
- The computer device according to claim 11, characterized in that the step of calculating the transit time of the vehicle within the preset time period according to the first monitoring data and the second monitoring data, and determining both the transit time and the license plate information as the vehicle transit data comprises: matching the license plate information in the first monitoring data and the second monitoring data within the preset time period, and if the same license plate information is matched, determining the first monitoring data corresponding to the same license plate information as target first monitoring data and determining the second monitoring data as target second monitoring data, wherein the target first monitoring data and the target second monitoring data both include a monitoring time; and performing a difference operation using the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determining both the transit time and the license plate information as the vehicle transit data.
- The computer device according to claim 11, characterized in that the step of preprocessing the vehicle transit data to obtain the graph data structure comprises: extracting, from the vehicle transit data, data whose transit time is within a preset range, and determining the extracted data as target data; and performing graph data structure conversion processing on the target data to obtain the graph data structure.
- A non-volatile computer-readable storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by a processor, cause the processor to execute the following steps: acquiring first monitoring data and second monitoring data on both sides of a target road section, wherein the first monitoring data and the second monitoring data both include license plate information; according to the first monitoring data and the second monitoring data, calculating the transit time of a vehicle within a preset time period, and determining both the transit time and the license plate information as vehicle transit data; preprocessing the vehicle transit data to obtain a graph data structure; processing the graph data structure by using a pre-trained graph convolutional neural network model to obtain training samples; and training a long short-term memory neural network with the training samples to obtain a target road section feature model.
- The non-volatile computer-readable storage medium according to claim 16, characterized in that the step of acquiring the first monitoring data and the second monitoring data on both sides of the target road section comprises: acquiring first vehicle checkpoint location information and second vehicle checkpoint location information on both sides of the target road section; and querying, from a preset database, the first monitoring data and the second monitoring data respectively corresponding to the first vehicle checkpoint location information and the second vehicle checkpoint location information.
- The non-volatile computer-readable storage medium according to claim 17, characterized in that the step of acquiring the first vehicle checkpoint location information and the second vehicle checkpoint location information on both sides of the target road section comprises: acquiring, from a vehicle checkpoint library, the vehicle checkpoint location information existing on the target road section, wherein the vehicle checkpoint library pre-stores different road sections and the vehicle checkpoint location information; and filtering out the first vehicle checkpoint location information and the second vehicle checkpoint location information from the vehicle checkpoint location information according to a preset condition.
- The non-volatile computer-readable storage medium according to claim 16, characterized in that the step of calculating the transit time of the vehicle within the preset time period according to the first monitoring data and the second monitoring data, and determining both the transit time and the license plate information as the vehicle transit data comprises: matching the license plate information in the first monitoring data and the second monitoring data within the preset time period, and if the same license plate information is matched, determining the first monitoring data corresponding to the same license plate information as target first monitoring data and determining the second monitoring data as target second monitoring data, wherein the target first monitoring data and the target second monitoring data both include a monitoring time; and performing a difference operation using the monitoring time of the target first monitoring data and the monitoring time of the target second monitoring data to obtain the transit time of the vehicle, and determining both the transit time and the license plate information as the vehicle transit data.
- The non-volatile computer-readable storage medium according to claim 16, characterized in that the step of preprocessing the vehicle transit data to obtain the graph data structure comprises: extracting, from the vehicle transit data, data whose transit time is within a preset range, and determining the extracted data as target data; and performing graph data structure conversion processing on the target data to obtain the graph data structure.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910540699.3A CN110459051B (zh) | 2019-06-21 | 2019-06-21 | 路段特征模型训练方法、装置、终端设备及存储介质 |
CN201910540699.3 | 2019-06-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253039A1 true WO2020253039A1 (zh) | 2020-12-24 |
Family
ID=68480688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/117262 WO2020253039A1 (zh) | 2019-06-21 | 2019-11-11 | 路段特征模型训练方法、装置、计算机设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110459051B (zh) |
WO (1) | WO2020253039A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112954650B (zh) * | 2021-03-31 | 2022-11-22 | 东风汽车集团股份有限公司 | 基于隧道的网络切换方法、装置、可移动载体及存储介质 |
CN114550453B (zh) * | 2022-02-23 | 2023-09-26 | 阿里巴巴(中国)有限公司 | 模型训练方法、确定方法、电子设备及计算机存储介质 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105046956B (zh) * | 2015-06-24 | 2017-04-26 | 银江股份有限公司 | 一种基于转向系数的交通流模拟及预测方法 |
WO2018187632A1 (en) * | 2017-04-05 | 2018-10-11 | Carnegie Mellon University | Deep learning methods for estimating density and/or flow of objects, and related methods and software |
CN107704918B (zh) * | 2017-09-19 | 2019-07-12 | 平安科技(深圳)有限公司 | 驾驶模型训练方法、驾驶人识别方法、装置、设备及介质 |
CN109711591B (zh) * | 2017-10-25 | 2022-02-01 | 腾讯科技(深圳)有限公司 | 一种路段速度预测方法、装置、服务器及存储介质 |
CN108664687A (zh) * | 2018-03-22 | 2018-10-16 | 浙江工业大学 | 一种基于深度学习的工控系统时空数据预测方法 |
CN109740785A (zh) * | 2018-10-22 | 2019-05-10 | 北京师范大学 | 基于图卷积神经网络的节点状态预测的方法 |
CN109636049B (zh) * | 2018-12-19 | 2021-10-29 | 浙江工业大学 | 一种结合道路网络拓扑结构与语义关联的拥堵指数预测方法 |
CN109544932B (zh) * | 2018-12-19 | 2021-03-19 | 东南大学 | 一种基于出租车gps数据与卡口数据融合的城市路网流量估计方法 |
CN109816976A (zh) * | 2019-01-21 | 2019-05-28 | 平安科技(深圳)有限公司 | 一种交通管理方法及系统 |
CN109754605B (zh) * | 2019-02-27 | 2021-12-07 | 中南大学 | 一种基于注意力时态图卷积网络的交通预测方法 |
CN109887282B (zh) * | 2019-03-05 | 2022-01-21 | 中南大学 | 一种基于层级时序图卷积网络的路网交通流预测方法 |
CN109872535B (zh) * | 2019-03-27 | 2020-09-18 | 深圳市中电数通智慧安全科技股份有限公司 | 一种智慧交通通行预测方法、装置及服务器 |
- 2019-06-21: CN application CN201910540699.3A filed; granted as CN110459051B (active)
- 2019-11-11: PCT application PCT/CN2019/117262 filed (WO2020253039A1)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103077610A (zh) * | 2012-12-31 | 2013-05-01 | 清华大学 | 一种路段旅行时间估计的方法和系统 |
CN104715604A (zh) * | 2014-01-13 | 2015-06-17 | 杭州海康威视数字技术股份有限公司 | 获取实时路况信息的方法及其系统 |
CN104537836A (zh) * | 2014-12-30 | 2015-04-22 | 北京通博科技有限公司 | 路段行驶时间分布预测方法 |
WO2016156236A1 (en) * | 2015-03-31 | 2016-10-06 | Sony Corporation | Method and electronic device |
CN107697070A (zh) * | 2017-09-05 | 2018-02-16 | 百度在线网络技术(北京)有限公司 | 驾驶行为预测方法和装置、无人车 |
CN108053653A (zh) * | 2018-01-11 | 2018-05-18 | 广东蔚海数问大数据科技有限公司 | 基于lstm的车辆行为预测方法和装置 |
CN108898831A (zh) * | 2018-06-25 | 2018-11-27 | 广州市市政工程设计研究总院有限公司 | 基于道路高清卡口数据的路段状况评估方法及系统 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112836626A (zh) * | 2021-01-29 | 2021-05-25 | 北京百度网讯科技有限公司 | 事故确定方法及装置、模型训练方法及装置、电子设备 |
CN112836626B (zh) * | 2021-01-29 | 2023-10-27 | 北京百度网讯科技有限公司 | 事故确定方法及装置、模型训练方法及装置、电子设备 |
CN113257002A (zh) * | 2021-05-11 | 2021-08-13 | 青岛海信网络科技股份有限公司 | 一种高峰开始时间预测方法、装置、设备及介质 |
CN114945154A (zh) * | 2022-05-31 | 2022-08-26 | 中国移动通信集团江苏有限公司 | 车辆位置预测方法、装置、电子设备和计算机程序产品 |
CN115050120A (zh) * | 2022-06-13 | 2022-09-13 | 中国电信股份有限公司 | 交通关卡安检方法、装置、系统及设备 |
CN115601744A (zh) * | 2022-12-14 | 2023-01-13 | 松立控股集团股份有限公司(Cn) | 一种车身与车牌颜色相近的车牌检测方法 |
CN115691143A (zh) * | 2022-12-30 | 2023-02-03 | 北京码牛科技股份有限公司 | 交通卡口设备采集数据的时间的动态纠正方法和系统 |
CN117634167A (zh) * | 2023-11-17 | 2024-03-01 | 深圳市特区铁工建设集团有限公司 | 一种桥梁监测和预警方法、装置、终端及存储介质 |
CN117634167B (zh) * | 2023-11-17 | 2024-08-13 | 深圳市特区铁工建设集团有限公司 | 一种桥梁监测和预警方法、装置、终端及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN110459051A (zh) | 2019-11-15 |
CN110459051B (zh) | 2020-09-04 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19933462; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19933462; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/03/2022)
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19933462; Country of ref document: EP; Kind code of ref document: A1