CN113642708A - Training method, recognition method and device for vehicle environment grade recognition model - Google Patents

Training method, recognition method and device for vehicle environment grade recognition model

Info

Publication number
CN113642708A
CN113642708A (application number CN202110932521.0A)
Authority
CN
China
Prior art keywords
training
vehicle environment
model
environment
vehicle
Prior art date
Legal status
Pending
Application number
CN202110932521.0A
Other languages
Chinese (zh)
Inventor
胡大林
胡艳玲
唐珊珊
谭哲
薛晓卿
Current Assignee
Beijing Saimu Technology Co ltd
Original Assignee
Beijing Saimu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Saimu Technology Co ltd filed Critical Beijing Saimu Technology Co ltd
Priority to CN202110932521.0A priority Critical patent/CN113642708A/en
Publication of CN113642708A publication Critical patent/CN113642708A/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods

Abstract

The application provides a training method, an identification method and a device for a vehicle environment grade identification model. A deep neural network is constructed according to preset network hyper-parameters; a cross entropy loss function is regularized based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment grade identification model; the deep neural network is trained using data in the training sample set, and, each time the number of trainings reaches a preset number of trainings per round, a model loss value of the trained deep neural network is determined using the test sample set and the model loss function; when the loss change rate of the deep neural network is smaller than a preset change threshold, the vehicle environment grade recognition model is obtained. In this way, training an over-fitted vehicle environment grade recognition model can be avoided, and the environment grade of the environment in which the target vehicle is currently located can be accurately judged by the vehicle environment grade recognition model.

Description

Training method, recognition method and device for vehicle environment grade recognition model
Technical Field
The application relates to the technical field of vehicle control, in particular to a training method, a recognition method and a device for a vehicle environment grade recognition model.
Background
With the continuous development of vehicle control technology, automatic driving systems are increasingly introduced into vehicle control. An automatic driving system is an integrated system that combines a plurality of technologies and mainly depends on environmental information acquisition, perception technology, image recognition technology and the like. Even when these technologies perform their intended functions, their situational awareness can in some cases directly affect the safety of the driving process.
Whether the road environment can be accurately perceived is extremely important in the automatic driving field. Under extreme environmental conditions, or when the perception system is occluded, an automatic driving vehicle generally cannot accurately perceive its surroundings; for example, on a road section not directly illuminated by street lamps, or when sunlight interferes with the camera device, a traffic accident may occur because the environment cannot be accurately perceived. Therefore, in order to avoid accidents, the functional safety standard for automobiles defines six levels of driving environment for automatic driving vehicles, ranging from frequently occurring driving scenes to driving scenes that are unlikely to occur. If the environment level of the vehicle's current environment can be accurately determined during driving, the automatic driving system can set a driving strategy matched with that environment level, thereby achieving the purpose of driving assistance. How to accurately determine the environment level of the environment in which a vehicle is located has therefore become an urgent problem to be solved.
Disclosure of Invention
In view of the above, an object of the present application is to provide a training method, an identification method and a device for a vehicle environment level identification model, which can avoid training an over-fitted vehicle environment level identification model, so that the environment level of the environment in which a target vehicle is currently located can be accurately judged by the vehicle environment level identification model, thereby achieving the purpose of assisting an automatic driving system.
The embodiment of the application provides a training method of a vehicle environment grade recognition model, which comprises the following steps:
acquiring a training sample set and a test sample set for training a vehicle environment grade recognition model;
according to preset network hyper-parameters, constructing and obtaining an untrained deep neural network;
regularizing the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model;
training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and determining a model loss value of the deep neural network after the training by using the test sample set and the model loss function when the training times reach a preset single-round training time;
and determining that the deep neural network training is finished when the loss change rate of the deep neural network is determined to be smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training, so as to obtain the vehicle environment grade recognition model.
Further, the regularizing the cross entropy loss function based on the first regularization coefficient and the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model, including:
on the basis of the cross entropy loss function, adding the first regularization coefficient to perform primary constraint on each weight coefficient in the vehicle environment level identification model through the first regularization coefficient to obtain a once-constrained cross entropy loss function;
and on the basis of the once constrained cross entropy loss function, adding the second regularization coefficient to perform secondary constraint on each weight coefficient in the vehicle environment level identification model through the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model.
Further, the first regularization coefficient and the second regularization coefficient are adjusted by:
and if the model loss value of the vehicle environment level recognition model is in an increasing state after each round of training, adjusting the first regularization coefficient and/or the second regularization coefficient according to a preset coefficient adjustment step length within a preset coefficient adjustment range.
Further, after obtaining the vehicle environment level identification model, the training method further includes:
obtaining a verification sample set used for verifying the vehicle environment level identification model;
determining a verification loss value of the vehicle environment level identification model by using the verification sample set and the model loss function;
determining a loss difference between the validation loss value and a model loss value of the deep neural network after a last round of training;
and if the loss difference is smaller than a preset difference threshold value, determining that the vehicle environment grade identification model can be used for predicting the environment grade of the vehicle environment.
Further, the network hyper-parameter includes the number of network training layers, the number of nodes in each network training layer, and the initial value of each weight coefficient.
The embodiment of the application also provides a vehicle environment grade identification method, which comprises the following steps:
acquiring current environment data of a vehicle to be identified;
extracting a plurality of environmental influence factors included in the current environmental data from the current environmental data;
based on the plurality of environmental influence factors and an assignment mapping relation which maps the plurality of environmental influence factors into parameter values, constructing an environmental feature vector for representing the current environmental data;
and inputting the environment characteristic vector into the vehicle environment grade recognition model obtained by the training method of the vehicle environment grade recognition model to obtain an environment grade recognition result of the current environment data.
The embodiment of the present application further provides a training device for a vehicle environment class recognition model, where the training device includes:
the system comprises a sample set acquisition module, a test module and a data processing module, wherein the sample set acquisition module is used for acquiring a training sample set and a test sample set for training a vehicle environment grade recognition model;
the network construction module is used for constructing and obtaining an untrained deep neural network according to preset network hyper-parameters;
the function construction module is used for carrying out regularization processing on the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model;
the network training module is used for training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and when the training times reach a preset single-round training time, the model loss value of the deep neural network after the training is determined by using the test sample set and the model loss function;
and the model determining module is used for determining that the deep neural network is trained to be finished to obtain the vehicle environment grade recognition model when determining that the loss change rate of the deep neural network is smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training.
Further, when the function building module is configured to perform regularization processing on the cross entropy loss function based on the first regularization coefficient and the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model, the function building module is configured to:
on the basis of the cross entropy loss function, adding the first regularization coefficient to perform primary constraint on each weight coefficient in the vehicle environment level identification model through the first regularization coefficient to obtain a once-constrained cross entropy loss function;
and on the basis of the once constrained cross entropy loss function, adding the second regularization coefficient to perform secondary constraint on each weight coefficient in the vehicle environment level identification model through the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model.
Further, the function building module adjusts the first regularization coefficient and the second regularization coefficient by:
and if the model loss value of the vehicle environment level recognition model is in an increasing state after each round of training, adjusting the first regularization coefficient and/or the second regularization coefficient according to a preset coefficient adjustment step length within a preset coefficient adjustment range.
Further, the training apparatus further comprises a model verification module, and the model verification module is configured to:
obtaining a verification sample set used for verifying the vehicle environment level identification model;
determining a verification loss value of the vehicle environment level identification model by using the verification sample set and the model loss function;
determining a loss difference between the validation loss value and a model loss value of the deep neural network after a last round of training;
and if the loss difference is smaller than a preset difference threshold value, determining that the vehicle environment grade identification model can be used for predicting the environment grade of the vehicle environment.
Further, the network hyper-parameter includes the number of network training layers, the number of nodes in each network training layer, and the initial value of each weight coefficient.
The embodiment of the present application further provides an identification apparatus for a vehicle environmental class, the identification apparatus includes:
the data acquisition module is used for acquiring the current environment data of the vehicle to be identified;
the factor determining module is used for extracting a plurality of environment influence factors included in the current environment data from the current environment data;
the vector construction module is used for constructing an environment feature vector for representing the current environment data based on the plurality of environment influence factors and an assignment mapping relation for mapping the plurality of environment influence factors into parameter values;
and the result determining module is used for inputting the environment characteristic vector into the vehicle environment grade recognition model obtained by the training method of the vehicle environment grade recognition model to obtain the environment grade recognition result of the current environment data.
An embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the training method of the vehicle environment level identification model as described above and/or the steps of the vehicle environment level identification as described above.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the training method for a vehicle environment level identification model as described above, and/or the steps of the vehicle environment level identification as described above.
The training method for the vehicle environment level recognition model, provided by the embodiment of the application, comprises the steps of obtaining a training sample set and a test sample set for training the vehicle environment level recognition model; according to preset network hyper-parameters, constructing and obtaining an untrained deep neural network; regularizing the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model; training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and determining a model loss value of the deep neural network after the training by using the test sample set and the model loss function when the training times reach a preset single-round training time; and determining that the deep neural network training is finished when the loss change rate of the deep neural network is determined to be smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training, so as to obtain the vehicle environment grade recognition model. Therefore, the condition that the overfitting vehicle environment grade recognition model is obtained through training can be avoided, and then the environment grade of the current environment of the target vehicle can be accurately judged through the vehicle environment grade recognition model, so that the aim of assisting the automatic driving system is fulfilled.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a flowchart of a training method for a vehicle environment class recognition model according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for identifying a vehicle environment level according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a training apparatus for a vehicle environment class recognition model according to an embodiment of the present disclosure;
fig. 4 is a second schematic structural diagram of a training apparatus for a vehicle environment class recognition model according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus for identifying a vehicle environmental grade according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present application falls within the protection scope of the present application.
Research shows that whether the road environment can be accurately perceived is particularly important in the automatic driving field. Under extreme environmental conditions, or when the perception system is occluded, an automatic driving vehicle generally cannot accurately perceive its surroundings; for example, on a road section not directly illuminated by street lamps, or when sunlight interferes with the camera device, a traffic accident may occur because the environment cannot be accurately perceived. Therefore, in order to avoid accidents, the functional safety standard for automobiles defines six levels of driving environment for automatic driving vehicles, ranging from frequently occurring driving scenes to driving scenes that are unlikely to occur. If the environment level of the vehicle's current environment can be accurately determined during driving, the automatic driving system can set a driving strategy matched with that environment level, thereby achieving the purpose of driving assistance. How to accurately determine the environment level of the environment in which a vehicle is located has therefore become an urgent problem to be solved.
Based on this, the embodiment of the application provides a training method for a vehicle environment level identification model, which can prevent the vehicle environment level identification model used for determining the environment level of the current environment where a vehicle is located and obtained through training from being over-fitted, and is beneficial to improving the identification accuracy of the vehicle environment level identification model.
Referring to fig. 1, fig. 1 is a flowchart illustrating a training method for a vehicle environment level recognition model according to an embodiment of the present disclosure. As shown in fig. 1, a training method for a vehicle environment level recognition model provided in an embodiment of the present application includes:
s101, a training sample set and a testing sample set used for training the vehicle environment level recognition model are obtained.
S102, according to preset network hyper-parameters, constructing and obtaining an untrained deep neural network.
S103, regularizing the cross entropy loss function based on the first regularization coefficient and the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model.
And S104, training the deep neural network by using the training environment vector representing the vehicle environment of the training sample in the training sample set and the environment grade label of each training sample vehicle environment, and determining the model loss value of the deep neural network after the training by using the test sample set and the model loss function when the training times reach the preset single-round training times.
S105, when the loss change rate of the deep neural network is determined to be smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training, determining that the deep neural network is trained completely, and obtaining the vehicle environment grade recognition model.
The six environment levels are explicitly specified in the ISO 21448 standard in the automatic driving field. Level one: an environment that is unlikely to occur over the life of each vehicle; level two: an environment that rarely occurs randomly over the life of each vehicle; level three: an environment that occurs randomly less than once over the life of each vehicle; level four: an environment that is expected to occur one or more times during a portion of the vehicle's life; level five: an environment that is expected to occur one or more times during the life of each vehicle; level six: an environment that is expected to occur one or more times during each drive.
Here, the training sample set includes a training environment vector characterizing each training sample vehicle environment and an environment level label of each training sample vehicle environment; the training environment vector is obtained by mapping each environmental influence factor included in the training sample vehicle environment into a parameter value form and then splicing the parameter value of each environmental influence factor, namely, each element in the training environment vector represents one environmental influence factor included in the training sample vehicle environment.
Specifically, the environmental influence factors include at least one or more of geographic factors, environmental factors, road factors and driving situations. Here, the geographic factors may be: at least one of city or mountain area, sea or land, mountain land or plain; the environmental factors may be: at least one of travel time (day, night, whole day, time period, and the like), travel weather (rainfall, snowfall, visibility, wind direction, probability of rainfall, probability of snowfall, and the like), and illumination (light intensity, light range, and the like); the road factors may be: at least one of road type (freeway, ordinary road, dirt road, dedicated lane for autonomous vehicles, etc.), road parameters (grade, inclination, curvature of curves), road friction coefficient, speed limit, and the like; the driving situations (actions of the host vehicle or driver, and of other vehicles) may be: at least one of lane change, speed change, overtaking, traffic conditions, pedestrian behavior, and the like.
Here, when the environmental influence factors included in the training sample vehicle environment are mapped to parameter values, the mapping may follow a preset assignment mapping rule. For example, when the environmental influence factor is 30 mm of rainfall, the factor may be mapped directly to the parameter value 30; alternatively, according to a preset assignment mapping rule, "30 mm of rainfall" may be mapped to a parameter value as a whole: assuming the category "rainfall" is represented by the preset value 6, combining it with the 30 mm value maps "30 mm of rainfall" as a whole to 630.
It should be noted that the environmental impact factor may be mapped to the parameter value in any manner, which is not limited herein and may be determined according to the actual situation.
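As a non-limiting illustration of the assignment mapping described above, the following Python sketch maps each environmental influence factor to a parameter value and splices the values into an environment vector; the factor names and category codes (for example, the value 6 for the rainfall category) are assumptions used only for illustration and are not fixed by the application.

```python
def map_factor_to_value(factor_name, raw_value):
    # Hypothetical rule: prefix a category code to the numeric reading, so that
    # 30 mm of rainfall (assumed category code 6) maps to 630, as in the example above.
    category_codes = {"rainfall_mm": 6, "light_intensity": 3, "road_grade": 4}  # assumed codes
    code = category_codes.get(factor_name)
    if code is None:
        return float(raw_value)              # fall back to the raw reading, e.g. 30 mm -> 30.0
    return float(f"{code}{int(raw_value)}")  # e.g. code 6 and 30 mm -> 630.0


def build_environment_vector(factors):
    # Splice the mapped parameter values into one vector; each element represents
    # one environmental influence factor of the sample vehicle environment.
    return [map_factor_to_value(name, value) for name, value in sorted(factors.items())]


# Example: {"light_intensity": 200, "rainfall_mm": 30} -> [3200.0, 630.0]
vec = build_environment_vector({"rainfall_mm": 30.0, "light_intensity": 200.0})
```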
Correspondingly, a test environment vector for representing the environment of each test sample vehicle and an environment grade label of the environment of each test sample vehicle are included in the test sample set; the test environment vector is obtained by mapping each environmental influence factor included in the test sample vehicle environment into a parameter value form and then splicing the parameter values of each environmental influence factor included in the test sample vehicle environment, that is, each element in the test environment vector represents one environmental influence factor included in the test sample vehicle environment.
In step S101, a training sample set for training the vehicle environment level recognition model and a test sample set for testing the deep neural network during training are obtained; the test sample set is used to test the deep neural network after each single round of training, so that the training process can be ended in time and an over-fitting phenomenon of the trained vehicle environment level recognition model can be avoided.
Here, if the number of network training layers in the deep neural network is too large or the number of nodes is too large, the deep neural network is very deep, which may cause slow backward propagation of the gradient during training and, in turn, gradient explosion or gradient vanishing; if the number of network training layers is too small or the number of nodes is too small, the final training, testing and verification precision may be insufficient. Therefore, in order to construct a deep neural network more suitable for training the vehicle environment level recognition model, in step S102 the network hyper-parameters required for constructing the deep neural network are determined, before training, according to the software facilities of the automatic driving system to which the vehicle environment level recognition model is applied, and an untrained deep neural network is constructed based on the determined network hyper-parameters. Here, the network hyper-parameters include the number of network training layers, the number of nodes in each network training layer, and the initial values of the weight coefficients.
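A minimal sketch of step S102 is given below; the application does not name a deep learning framework, so the use of PyTorch, the layer sizes and the weight initialization scheme shown here are assumptions for illustration only.

```python
import torch.nn as nn


def build_network(layer_sizes, init_std=0.01):
    # layer_sizes, e.g. [n_factors, 64, 64, 6]: the number of network training layers
    # and the number of nodes per layer are the preset network hyper-parameters;
    # init_std controls the assumed initial values of the weight coefficients.
    layers = []
    for i in range(len(layer_sizes) - 1):
        linear = nn.Linear(layer_sizes[i], layer_sizes[i + 1])
        nn.init.normal_(linear.weight, mean=0.0, std=init_std)  # initial weight coefficients
        nn.init.zeros_(linear.bias)
        layers.append(linear)
        if i < len(layer_sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)


# 12 assumed input factors, two hidden layers, 6 outputs for the six environment levels.
net = build_network([12, 64, 64, 6])
```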
Here, when verifying whether the deep neural network has completed training, the model loss function is needed; in order to further avoid over-fitting of the trained vehicle environment level identification model, in step S103 the cross entropy loss function is regularized by the first regularization coefficient and the second regularization coefficient, so as to obtain the model loss function for measuring the vehicle environment level identification model.
In one embodiment, step S103 includes: on the basis of the cross entropy loss function, adding the first regularization coefficient to perform primary constraint on each weight coefficient in the vehicle environment level identification model through the first regularization coefficient to obtain a once-constrained cross entropy loss function; and on the basis of the once constrained cross entropy loss function, adding the second regularization coefficient to perform secondary constraint on each weight coefficient in the vehicle environment level identification model through the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model.
In the step, in the regularization processing process, firstly, a first regularization coefficient is added on the basis of an original cross loss function, so that when a trained vehicle environment level identification model is verified, each weight coefficient in the trained vehicle environment level identification model can be subjected to primary constraint through the first regularization coefficient, and a once-constrained cross entropy loss function is obtained; specifically, the 1 norm of each weight coefficient is constrained by a first regularization coefficient, and the constraint formula is as follows:
λ₁‖w‖₁

where λ₁ represents the first regularization coefficient, w represents the weight vector consisting of the weight coefficients of the vehicle environment level identification model, and ‖w‖₁ represents the 1-norm of the vector w.
Further, on the basis of the cross entropy loss function after the first constraint, a second regularization coefficient is added, so that when the vehicle environment level identification model obtained through training is verified, each weight coefficient in the vehicle environment level identification model obtained through training can be subjected to secondary constraint through the second regularization coefficient, and a model loss function for measuring the vehicle environment level identification model is obtained; specifically, the 2 norm of each weight coefficient is constrained by the second regularization coefficient, and the constraint formula is as follows:
λ₂‖w‖₂

where λ₂ represents the second regularization coefficient, w represents the weight vector consisting of the weight coefficients of the vehicle environment level identification model, and ‖w‖₂ represents the 2-norm of the vector w.
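Putting the two constraints together, a minimal sketch of the model loss function is shown below: the cross entropy loss constrained once by the 1-norm term λ₁‖w‖₁ and again by the 2-norm term λ₂‖w‖₂. It reuses the PyTorch network sketched above; the framework choice and the default coefficient values are assumptions.

```python
import torch
import torch.nn.functional as F


def model_loss(net, logits, labels, lambda1=0.05, lambda2=0.15):
    ce = F.cross_entropy(logits, labels)  # base cross entropy loss
    # Weight vector w made up of the weight coefficients of the model (biases omitted here).
    w = torch.cat([p.reshape(-1) for p in net.parameters() if p.dim() > 1])
    l1 = lambda1 * torch.norm(w, p=1)     # first constraint: lambda1 * ||w||_1
    l2 = lambda2 * torch.norm(w, p=2)     # second constraint: lambda2 * ||w||_2
    return ce + l1 + l2
```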
If, during the training of the vehicle environment level identification model, measurement by the model loss function indicates that the model never achieves the expected training effect, for example the vehicle environment level identification model can never be fitted, the first regularization coefficient and/or the second regularization coefficient are appropriately adjusted according to the actual situation.
In one embodiment, the first regularization coefficient and the second regularization coefficient are adjusted by: and if the model loss value of the vehicle environment level recognition model is in an increasing state after each round of training, adjusting the first regularization coefficient and/or the second regularization coefficient according to a preset coefficient adjustment step length within a preset coefficient adjustment range.
In this step, if, after several rounds of training, the model loss value calculated for the vehicle environment level recognition model through the model loss function after each round is in an increasing state, this indicates that the current model loss function cannot measure the vehicle environment level recognition model (because, during training, as long as the training sample data are correct and the model has not yet reached the fitted state, the loss value calculated through the loss function inevitably shows a descending trend); at this time, the model loss function can be adjusted by adjusting the first regularization coefficient and/or the second regularization coefficient, specifically, by adjusting the first regularization coefficient and/or the second regularization coefficient according to a preset coefficient adjustment step within a preset coefficient adjustment range.
As an example, suppose the preset coefficient adjustment range is [0, 0.6] and the preset coefficient adjustment step is 0.05, so that the first regularization coefficient and/or the second regularization coefficient are gradually increased or decreased by the step 0.05 within the adjustment range. For example, if the initial value of the first regularization coefficient is 0.05 and the initial value of the second regularization coefficient is 0.15, the first regularization coefficient may be increased by the preset step 0.05 during adjustment, giving an adjusted first regularization coefficient of 0.1; and/or the second regularization coefficient may be increased, giving an adjusted second regularization coefficient of 0.2.
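The adjustment rule from the example above can be sketched as follows; the window used to decide that the loss is "in an increasing state" and the variable names are illustrative assumptions.

```python
def loss_is_increasing(round_losses, window=3):
    # "In an increasing state": the last few per-round model loss values keep rising.
    recent = round_losses[-window:]
    return len(recent) == window and all(a < b for a, b in zip(recent, recent[1:]))


def adjust_coefficient(coeff, step=0.05, low=0.0, high=0.6, increase=True):
    # Step the coefficient by the preset step while staying inside the preset range.
    new_coeff = coeff + step if increase else coeff - step
    return min(max(new_coeff, low), high)


lambda1, lambda2 = 0.05, 0.15
if loss_is_increasing([0.9, 1.0, 1.1]):
    lambda1 = adjust_coefficient(lambda1)  # 0.05 -> 0.1, as in the example above
    lambda2 = adjust_coefficient(lambda2)  # 0.15 -> 0.2
```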
In step S104, the deep neural network is trained using the training environment vector characterizing each training sample vehicle environment in the training sample set and the environment level label of each training sample vehicle environment. In order to avoid over-fitting of the trained vehicle environment level recognition model caused by excessive training, when the number of trainings reaches the preset number of trainings per round (for example, 5 trainings are preset as one round, so that once the constructed deep neural network has been trained 5 times using the training environment vectors of 5 training sample vehicle environments and the corresponding environment level labels, the number of trainings is determined to have reached the preset number per round), the model loss value of the deep neural network after this round of training is determined using the test sample set and the model loss function.
In the training process, the training effect of the deep neural network is monitored through the test sample set, and the training of the deep neural network can be finished at a proper time by adopting an early termination principle so as to avoid the condition of model overfitting.
In step S105, after the deep neural network is trained for multiple rounds, it is determined whether a loss change rate of the deep neural network is smaller than a preset change threshold according to a model loss value of the deep neural network after each round of training, and if so, it is determined that the deep neural network is trained completely, and a vehicle environment level recognition model is obtained.
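Steps S104 and S105 can be sketched together as the round-based training loop below, which reuses model_loss from the sketch above; the optimizer, learning rate, change threshold and maximum number of rounds are assumptions rather than values fixed by the application.

```python
import itertools
import torch


def train_until_converged(net, train_batches, test_inputs, test_labels,
                          steps_per_round=5, change_threshold=1e-3, max_rounds=200):
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01)  # assumed optimizer and learning rate
    batch_iter = itertools.cycle(train_batches)             # (environment vector, level label) batches
    round_losses = []
    for _ in range(max_rounds):
        for _ in range(steps_per_round):                    # one round = preset number of trainings
            x, y = next(batch_iter)
            optimizer.zero_grad()
            loss = model_loss(net, net(x), y)
            loss.backward()
            optimizer.step()
        with torch.no_grad():                                # per-round loss on the test sample set
            test_loss = model_loss(net, net(test_inputs), test_labels).item()
        round_losses.append(test_loss)
        if len(round_losses) >= 2:
            rate = abs(round_losses[-1] - round_losses[-2]) / max(round_losses[-2], 1e-12)
            if rate < change_threshold:                      # early termination: training is complete
                break
    return net, round_losses
```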
In one embodiment, after obtaining the vehicle environment level recognition model, the training method further comprises: obtaining a verification sample set used for verifying the vehicle environment level identification model; determining a verification loss value of the vehicle environment level identification model by using the verification sample set and the model loss function; determining a loss difference between the validation loss value and a model loss value of the deep neural network after a last round of training; and if the loss difference is smaller than a preset difference threshold value, determining that the vehicle environment grade identification model can be used for predicting the environment grade of the vehicle environment.
In the step, after the vehicle environment grade identification model is obtained through training, in order to further verify the accuracy of the vehicle environment grade identification model, the vehicle environment grade identification model obtained through training is verified through a verification sample set; here, the verification sample set includes a verification environment vector characterizing each verification sample vehicle environment and an environment level tag of each verification sample vehicle environment; the verification environment vector is obtained by mapping each environmental influence factor included in the verification sample vehicle environment into a parameter value form and then splicing the parameter value of each environmental influence factor, namely, each element in the verification environment vector represents one environmental influence factor included in the verification sample vehicle environment.
A verification loss value of the vehicle environment grade identification model is determined using the verification sample set and the model loss function; if the loss difference between the verification loss value and the model loss value of the deep neural network after the last round of training is smaller than the preset difference threshold, this indicates that the trained vehicle environment grade recognition model is suitable for the training sample set, the test sample set and the verification sample set at the same time, and at this moment it can be determined that the trained vehicle environment grade recognition model can be used to predict the environment grade of the vehicle environment.
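A short sketch of this verification step is given below, again reusing model_loss from the sketch above; the difference threshold is an assumed value.

```python
import torch


def passes_verification(net, val_inputs, val_labels, last_round_loss, diff_threshold=0.05):
    # Compare the verification-set loss with the model loss from the last training round;
    # a small gap suggests the model fits all three sample sets rather than only the training data.
    with torch.no_grad():
        val_loss = model_loss(net, net(val_inputs), val_labels).item()
    return abs(val_loss - last_round_loss) < diff_threshold
```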
The training method for the vehicle environment level recognition model, provided by the embodiment of the application, comprises the steps of obtaining a training sample set and a test sample set for training the vehicle environment level recognition model; according to preset network hyper-parameters, constructing and obtaining an untrained deep neural network; regularizing the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model; training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and determining a model loss value of the deep neural network after the training by using the test sample set and the model loss function when the training times reach a preset single-round training time; and determining that the deep neural network training is finished when the loss change rate of the deep neural network is determined to be smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training, so as to obtain the vehicle environment grade recognition model. Therefore, the condition that the overfitting vehicle environment grade recognition model is obtained through training can be avoided, and then the environment grade of the current environment of the target vehicle can be accurately judged through the vehicle environment grade recognition model, so that the aim of assisting the automatic driving system is fulfilled.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for identifying a vehicle environment level according to an embodiment of the present disclosure. As shown in fig. 2, a method for identifying a vehicle environment level provided by an embodiment of the present application includes:
s201, obtaining current environment data of the vehicle to be identified.
S202, extracting a plurality of environmental influence factors included in the current environment data from the current environment data.
S203, based on the plurality of environmental influence factors and the assignment mapping relationship for mapping the plurality of environmental influence factors into parameter values, constructing an environmental feature vector for representing the current environmental data.
And S204, inputting the environment characteristic vector into the vehicle environment grade recognition model obtained by the training method of the vehicle environment grade recognition model provided by the embodiment of the application, and obtaining the environment grade recognition result of the current environment data.
In step S201, during the driving of the vehicle to be identified, current environment data of the environment in which the vehicle to be identified is currently located are obtained in real time; the current environment data include: lighting conditions, road surface conditions, the current time period, geographic location, and the like.
In step S202, a plurality of environmental influence factors that influence the automatic driving system and exist in the current environment are extracted from the current environment data; wherein the environmental influence factors comprise at least one or more of geographic factors, environmental factors, road factors and driving situations.
Here, the geographic factors may be: at least one of city or mountain area, sea or land, mountain land or plain; the environmental factors may be: at least one of travel time (day, night, whole day, time period, and the like), travel weather (rainfall, snowfall, visibility, wind direction, probability of rainfall, probability of snowfall, and the like), and illumination (light intensity, light range, and the like); the road factors may be: at least one of road type (freeway, ordinary road, dirt road, dedicated lane for autonomous vehicles, etc.), road parameters (grade, inclination, curvature of curves), road friction coefficient, speed limit, and the like; the driving situations (actions of the host vehicle or driver, and of other vehicles) may be: at least one of lane change, speed change, overtaking, traffic conditions, pedestrian behavior, and the like.
In step S203, according to a preset assignment mapping relationship that maps each environmental influence factor to a parameter value, each environmental influence factor is mapped to a parameter value form, and the parameter values of each environmental influence factor are spliced to obtain an environmental feature vector for representing current environmental data.
Here, each element in the constructed environment feature vector characterizes one of the environment influencing factors included in the current environment data.
For example, when the environmental influence factor included in the current environment data is 30 mm of rainfall, the factor may be mapped directly to the parameter value 30; alternatively, according to a preset assignment mapping rule, "30 mm of rainfall" may be mapped to a parameter value as a whole: assuming the category "rainfall" is represented by the preset value 6, combining it with the 30 mm value maps "30 mm of rainfall" as a whole to 630.
It should be noted that the environmental impact factor may be mapped to the parameter value in any manner, which is not limited herein and may be determined according to the actual situation.
In step S204, the obtained environment feature vector representing the current environment data is input into the vehicle environment level identification model obtained by the above training method for the vehicle environment level identification model, so as to obtain an environment level identification result of the current environment data, and the environment level of the current environment can be determined according to the environment level identification result.
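The recognition flow of steps S201 to S204 can be sketched as follows, reusing build_environment_vector and the trained network from the sketches above; it assumes the number of extracted factors matches the input size the network was built with, and that the six output indices correspond to environment levels 1 to 6.

```python
import torch


def recognize_environment_level(net, current_environment_data):
    # S202-S203: extract the factors and build the environment feature vector.
    vec = build_environment_vector(current_environment_data)
    x = torch.tensor([vec], dtype=torch.float32)
    # S204: feed the vector into the vehicle environment level identification model.
    with torch.no_grad():
        logits = net(x)
    return int(torch.argmax(logits, dim=1).item()) + 1  # map output index 0..5 to level 1..6
```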
The method for identifying the vehicle environment grade, provided by the embodiment of the application, comprises the steps of obtaining current environment data of a vehicle to be identified; extracting a plurality of environmental influence factors included in the current environmental data from the current environmental data; based on the plurality of environmental influence factors and an assignment mapping relation which maps the plurality of environmental influence factors into parameter values, constructing an environmental feature vector for representing the current environmental data; and inputting the environment characteristic vector into the vehicle environment grade recognition model obtained by the training method of the vehicle environment grade recognition model provided by the embodiment of the application to obtain the environment grade recognition result of the current environment data. Therefore, the environment level of the current environment where the vehicle to be recognized is located can be accurately determined, so that the automatic driving system adjusts the driving strategy according to the environment level, and the purpose of assisting the automatic driving system is achieved.
Referring to fig. 3 and 4, fig. 3 is a first schematic structural diagram of a training device for a vehicle environment level recognition model according to an embodiment of the present disclosure, and fig. 4 is a second schematic structural diagram of the training device for the vehicle environment level recognition model according to the embodiment of the present disclosure. As shown in fig. 3, the training apparatus 300 includes:
a sample set obtaining module 310, configured to obtain a training sample set and a testing sample set for training a vehicle environment level identification model;
the network construction module 320 is used for constructing and obtaining an untrained deep neural network according to preset network hyper-parameters;
the function building module 330 is configured to perform regularization processing on the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model;
the network training module 340 is configured to train the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment level label representing the vehicle environment of each training sample, and when the training frequency reaches a preset single-round training frequency, determine a model loss value of the deep neural network after the training by using the test sample set and the model loss function;
and the model determining module 350 is configured to determine that the deep neural network training is completed to obtain the vehicle environment level identification model when it is determined that the loss change rate of the deep neural network is smaller than a preset change threshold based on the model loss value of the deep neural network after each round of training.
Further, as shown in fig. 4, the training apparatus 300 further includes a model verification module 360, where the model verification module 360 is configured to:
obtaining a verification sample set used for verifying the vehicle environment level identification model;
determining a verification loss value of the vehicle environment level identification model by using the verification sample set and the model loss function;
determining a loss difference between the validation loss value and a model loss value of the deep neural network after a last round of training;
and if the loss difference is smaller than a preset difference threshold value, determining that the vehicle environment grade identification model can be used for predicting the environment grade of the vehicle environment.
Further, when the function building module 330 is configured to perform regularization processing on the cross entropy loss function based on the first regularization coefficient and the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model, the function building module 330 is configured to:
on the basis of the cross entropy loss function, adding the first regularization coefficient to perform primary constraint on each weight coefficient in the vehicle environment level identification model through the first regularization coefficient to obtain a once-constrained cross entropy loss function;
and on the basis of the once constrained cross entropy loss function, adding the second regularization coefficient to perform secondary constraint on each weight coefficient in the vehicle environment level identification model through the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model.
Further, the function building module 330 adjusts the first regularization coefficient and the second regularization coefficient by:
and if the model loss value of the vehicle environment level recognition model is in an increasing state after each round of training, adjusting the first regularization coefficient and/or the second regularization coefficient according to a preset coefficient adjustment step length within a preset coefficient adjustment range.
Further, the network hyper-parameter includes the number of network training layers, the number of nodes in each network training layer, and the initial value of each weight coefficient.
The training device for the vehicle environment level recognition model, provided by the embodiment of the application, is used for obtaining a training sample set and a test sample set for training the vehicle environment level recognition model; according to preset network hyper-parameters, constructing and obtaining an untrained deep neural network; regularizing the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model; training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and determining a model loss value of the deep neural network after the training by using the test sample set and the model loss function when the training times reach a preset single-round training time; and determining that the deep neural network training is finished when the loss change rate of the deep neural network is determined to be smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training, so as to obtain the vehicle environment grade recognition model. Therefore, the condition that the overfitting vehicle environment grade recognition model is obtained through training can be avoided, and then the environment grade of the current environment of the target vehicle can be accurately judged through the vehicle environment grade recognition model, so that the aim of assisting the automatic driving system is fulfilled.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a device for identifying a vehicle environment level according to an embodiment of the present disclosure. As shown in fig. 5, the recognition apparatus 500 includes:
a data obtaining module 510, configured to obtain current environment data of a vehicle to be identified;
a factor determining module 520, configured to extract a plurality of environmental impact factors included in the current environmental data from the current environmental data;
a vector construction module 530, configured to construct an environmental feature vector for characterizing the current environmental data based on the plurality of environmental impact factors and an assignment mapping relationship that maps the plurality of environmental impact factors into parameter values;
and the result determining module 540 is configured to input the environment feature vector into the vehicle environment level identification model obtained by the above training method of the vehicle environment level identification model, so as to obtain an environment level identification result of the current environment data.
The identification device for the vehicle environment grade, provided by the embodiment of the application, acquires the current environment data of the vehicle to be identified; extracting a plurality of environmental influence factors included in the current environmental data from the current environmental data; based on the plurality of environmental influence factors and an assignment mapping relation which maps the plurality of environmental influence factors into parameter values, constructing an environmental feature vector for representing the current environmental data; and inputting the environment characteristic vector into the vehicle environment grade recognition model obtained by the training method of the vehicle environment grade recognition model provided by the embodiment of the application to obtain the environment grade recognition result of the current environment data. Therefore, the environment level of the current environment where the vehicle to be recognized is located can be accurately determined, so that the automatic driving system adjusts the driving strategy according to the environment level, and the purpose of assisting the automatic driving system is achieved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a processor 610, a memory 620, and a bus 630.
The memory 620 stores machine-readable instructions executable by the processor 610, when the electronic device 600 runs, the processor 610 communicates with the memory 620 through the bus 630, and when the machine-readable instructions are executed by the processor 610, the steps of the training method for the vehicle environment level identification model in the embodiment of the method shown in fig. 1 and/or the steps of the identification method for the vehicle environment level in the embodiment of the method shown in fig. 2 may be executed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the training method for a vehicle environment class identification model in the embodiment of the method shown in fig. 1 and/or the steps of the identification method for a vehicle environment class in the embodiment of the method shown in fig. 2 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one logical division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application and shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A training method for a vehicle environment grade recognition model is characterized by comprising the following steps:
acquiring a training sample set and a test sample set for training a vehicle environment grade recognition model;
according to preset network hyper-parameters, constructing and obtaining an untrained deep neural network;
regularizing the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model;
training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and determining a model loss value of the deep neural network after the training by using the test sample set and the model loss function when the training times reach a preset single-round training time;
and determining that the deep neural network training is finished when the loss change rate of the deep neural network is determined to be smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training, so as to obtain the vehicle environment grade recognition model.
2. The training method according to claim 1, wherein the regularizing the cross entropy loss function based on the first regularization coefficient and the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level recognition model comprises:
on the basis of the cross entropy loss function, adding the first regularization coefficient to perform primary constraint on each weight coefficient in the vehicle environment level identification model through the first regularization coefficient to obtain a once-constrained cross entropy loss function;
and on the basis of the once constrained cross entropy loss function, adding the second regularization coefficient to perform secondary constraint on each weight coefficient in the vehicle environment level identification model through the second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model.
3. The training method according to claim 2, wherein the first regularization coefficient and the second regularization coefficient are adjusted by:
and if the model loss value of the vehicle environment level recognition model is in an increasing state after each round of training, adjusting the first regularization coefficient and/or the second regularization coefficient according to a preset coefficient adjustment step length within a preset coefficient adjustment range.
4. The training method according to claim 1, wherein after obtaining the vehicle environment class recognition model, the training method further comprises:
obtaining a verification sample set used for verifying the vehicle environment level identification model;
determining a verification loss value of the vehicle environment level identification model by using the verification sample set and the model loss function;
determining a loss difference between the validation loss value and a model loss value of the deep neural network after a last round of training;
and if the loss difference is smaller than a preset difference threshold value, determining that the vehicle environment grade identification model can be used for predicting the environment grade of the vehicle environment.
5. The training method according to claim 1, wherein the network hyper-parameters comprise a number of network training layers, a number of nodes in each network training layer, and an initial value of each weight coefficient.
6. A method for identifying a vehicle environmental level, the method comprising:
acquiring current environment data of a vehicle to be identified;
extracting a plurality of environmental influence factors included in the current environmental data from the current environmental data;
based on the plurality of environmental influence factors and an assignment mapping relation which maps the plurality of environmental influence factors into parameter values, constructing an environmental feature vector for representing the current environmental data;
inputting the environment feature vector into a vehicle environment grade recognition model obtained by the training method of the vehicle environment grade recognition model according to any one of claims 1 to 5, and obtaining an environment grade recognition result of the current environment data.
7. A training apparatus for a vehicle environment class recognition model, the training apparatus comprising:
the system comprises a sample set acquisition module, a test module and a data processing module, wherein the sample set acquisition module is used for acquiring a training sample set and a test sample set for training a vehicle environment grade recognition model;
the network construction module is used for constructing and obtaining an untrained deep neural network according to preset network hyper-parameters;
the function construction module is used for carrying out regularization processing on the cross entropy loss function based on a first regularization coefficient and a second regularization coefficient to obtain a model loss function for measuring the vehicle environment level identification model;
the network training module is used for training the deep neural network by using a training environment vector representing the vehicle environment of each training sample in the training sample set and an environment grade label representing the vehicle environment of each training sample, and when the training times reach a preset single-round training time, the model loss value of the deep neural network after the training is determined by using the test sample set and the model loss function;
and the model determining module is used for determining that the deep neural network is trained to be finished to obtain the vehicle environment grade recognition model when determining that the loss change rate of the deep neural network is smaller than a preset change threshold value based on the model loss value of the deep neural network after each round of training.
8. An apparatus for recognizing an environmental grade of a vehicle, said apparatus comprising:
the data acquisition module is used for acquiring the current environment data of the vehicle to be identified;
the factor determining module is used for extracting a plurality of environment influence factors included in the current environment data from the current environment data;
the vector construction module is used for constructing an environment feature vector for representing the current environment data based on the plurality of environment influence factors and an assignment mapping relation for mapping the plurality of environment influence factors into parameter values;
a result determining module, configured to input the environment feature vector into a vehicle environment level identification model obtained by the training method of the vehicle environment level identification model according to any one of claims 1 to 5, so as to obtain an environment level identification result of the current environment data.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when an electronic device is operated, the machine-readable instructions being executable by the processor to perform the steps of the training method for a vehicle environment class identification model according to any one of claims 1 to 5 and/or the steps of the identification method for a vehicle environment class according to claim 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the training method for a vehicle environment class identification model according to any one of claims 1 to 5 and/or the steps of the identification method for a vehicle environment class according to claim 6.
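To make the adjustment condition of claim 3 and the verification condition of claim 4 easier to follow, the following short Python sketch gives one possible, non-limiting reading of them; the adjustment step, the adjustment range and the difference threshold are illustrative assumptions, not values taken from the present application.

# Illustrative only: the step, range and threshold values below are assumptions.
def adjust_regularization(lambda1, lambda2, round_losses,
                          step=1e-4, low=0.0, high=1e-2):
    # If the model loss after each round of training is in an increasing state,
    # adjust both coefficients by the preset step within the preset range (cf. claim 3).
    increasing = all(a < b for a, b in zip(round_losses, round_losses[1:]))
    if increasing:
        lambda1 = min(high, max(low, lambda1 + step))
        lambda2 = min(high, max(low, lambda2 + step))
    return lambda1, lambda2

def model_is_usable(validation_loss, last_round_loss, difference_threshold=0.05):
    # The model is accepted for predicting the environment level of the vehicle
    # environment when the verification loss stays close to the model loss after
    # the last round of training (cf. claim 4).
    return abs(validation_loss - last_round_loss) < difference_threshold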
CN202110932521.0A 2021-08-13 2021-08-13 Training method, recognition method and device for vehicle environment grade recognition model Pending CN113642708A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110932521.0A CN113642708A (en) 2021-08-13 2021-08-13 Training method, recognition method and device for vehicle environment grade recognition model

Publications (1)

Publication Number Publication Date
CN113642708A true CN113642708A (en) 2021-11-12

Family

ID=78421625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110932521.0A Pending CN113642708A (en) 2021-08-13 2021-08-13 Training method, recognition method and device for vehicle environment grade recognition model

Country Status (1)

Country Link
CN (1) CN113642708A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805259A (en) * 2018-05-23 2018-11-13 北京达佳互联信息技术有限公司 neural network model training method, device, storage medium and terminal device
US20200380392A1 (en) * 2019-05-29 2020-12-03 Hitachi, Ltd. Data analysis apparatus, data analysis method, and data analysis program
CN112180912A (en) * 2019-07-01 2021-01-05 百度(美国)有限责任公司 Hierarchical path decision system for planning a path for an autonomous vehicle
CN112650210A (en) * 2019-09-25 2021-04-13 本田技研工业株式会社 Vehicle control device, vehicle control method, and storage medium
CN111461238A (en) * 2020-04-03 2020-07-28 讯飞智元信息科技有限公司 Model training method, character recognition method, device, equipment and storage medium
CN112836584A (en) * 2021-01-05 2021-05-25 西安理工大学 Traffic image safety belt classification method based on deep learning
CN113129338A (en) * 2021-04-21 2021-07-16 平安国际智慧城市科技股份有限公司 Image processing method, device, equipment and medium based on multi-target tracking algorithm

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155495A (en) * 2022-02-10 2022-03-08 西南交通大学 Safety monitoring method, device, equipment and medium for vehicle operation in sea-crossing bridge
CN114155495B (en) * 2022-02-10 2022-05-06 西南交通大学 Safety monitoring method, device, equipment and medium for vehicle operation in sea-crossing bridge

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination