CN110705101A - Network training method, vehicle driving method and related product


Info

Publication number
CN110705101A
Authority
CN
China
Prior art keywords
real
simulation
vehicle
data set
scene
Prior art date
Legal status
Pending
Application number
CN201910944404.9A
Other languages
Chinese (zh)
Inventor
倪枫 (Ni Feng)
赵扬波 (Zhao Yangbo)
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201910944404.9A
Publication of CN110705101A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The embodiments of this application disclose a network training method, a vehicle driving method, and related products. The network training method comprises: controlling a simulated vehicle to drive in a simulated scene and obtaining a driving data set collected by the simulated vehicle during driving, wherein the simulated vehicle is a simulation model of a real vehicle and the simulated scene is a simulation model of a real scene; modifying the data in the driving data set to obtain a training data set; and training a control network on the training data set, wherein the control network is used to control the real vehicle in the real scene. The method helps improve the recognition accuracy of the control network.

Description

Network training method, vehicle driving method and related product
Technical Field
The application relates to the technical field of computer application, in particular to a network training method, a vehicle driving method and a related product.
Background
At present, an autonomous vehicle collects images of the road ahead through an image sensor, recognizes the collected images with a trained deep learning model, and controls the vehicle according to the recognition result, thereby realizing automatic driving. However, collecting driving data from unmanned vehicles in a real environment takes a great deal of time and effort. A common solution is to build a virtual driving environment resembling the actual one, collect large amounts of driving data in the virtual environment, and use those data to train the deep learning model.
However, a virtual driving environment cannot fully reproduce the actual driving environment, so the quality and richness of the driving data collected in it differ greatly from real data. A deep learning model trained on such data cannot accurately judge driving scenes, which compromises driving safety.
Disclosure of Invention
The embodiments of this application provide a network training method, a vehicle driving method, and related products. The driving data collected in a simulated scene are modified so that they better match the data a vehicle produces during real driving; as a result, the trained control network achieves higher recognition accuracy and driving safety is improved.
In a first aspect, an embodiment of the present application provides a network training method, including:
controlling a simulation vehicle to run in a simulation scene to obtain a running data set collected by the simulation vehicle in the running process, wherein the simulation vehicle is a simulation model of a real vehicle, and the simulation scene is a simulation model of a real scene;
modifying the data in the driving data set to obtain a training data set;
training a control network according to the training data set, wherein the control network is used for controlling the real vehicle to run in the real scene.
In some possible embodiments, before controlling the simulated vehicle to travel in the simulated scene, the method further comprises:
comparing a simulation image with a real image to obtain a first comparison result, wherein the real image is obtained by shooting through a real image sensor of the real vehicle in the real scene, the simulation image is obtained by shooting through a simulation image sensor of the simulation vehicle in the simulation scene, and the corresponding shooting parameters of the simulation image sensor and the real image sensor are the same;
and adjusting a first simulation parameter of the simulated image sensor according to the first comparison result.
In some possible embodiments, the position and angle at which the simulated image sensor captures the simulated image in the simulated scene correspond to the position and angle at which the real image sensor captures the real image in the real scene; the first simulation parameter includes one or more of an imaging perspective, a resolution, a distortion coefficient, and the position of the simulated image sensor relative to the simulated vehicle.
In some possible embodiments, before controlling the simulated vehicle to travel in the simulated scene, the method further comprises:
comparing simulation data with real data to obtain a second comparison result, wherein the real data is obtained by driving the real vehicle in the real scene according to a preset driving condition, and the simulation data is obtained by controlling the simulation vehicle to drive in the simulation scene according to the preset driving condition;
and adjusting a second simulation parameter of the simulated vehicle according to the second comparison result.
In some possible embodiments, the second simulation parameter includes one or more of a turning radius, a speed, and a collision volume of the simulated vehicle.
In some possible embodiments, the controlling the simulated vehicle to travel in the simulated scene to obtain the travel data set collected by the simulated vehicle during the travel includes:
controlling the simulated vehicle to drive under N preset driving conditions in the simulated scene, and collecting, through a simulated image sensor of the simulated vehicle, N types of driving data corresponding to the N preset driving conditions during driving;
and composing the N types of driving data into the driving data set.
In some possible embodiments, the modifying the data in the driving data set to obtain a training data set includes:
acquiring the real running condition of the real vehicle in the real scene;
modifying the data in the driving data set according to the real driving condition to obtain a target driving data set;
and obtaining a training data set according to the target driving data set.
In some possible embodiments, modifying the data in the driving data set according to the real driving condition to obtain a target driving data set comprises at least one of the following manners:
acquiring a bias corresponding to the real driving condition, and modifying the data in the driving data set according to the bias to obtain a target driving data set;
acquiring image noise corresponding to the real driving condition, and modifying data in the driving data set according to the image noise to obtain a target driving data set;
and acquiring a sampling frequency corresponding to the real driving condition, and resampling the data in the driving data set according to the sampling frequency to obtain a target driving data set.
In some possible embodiments, the obtaining a training data set from the target driving data set includes:
determining the distribution proportion of each type of driving data in the target driving data set according to the training purpose corresponding to the control network;
and selecting the training data set from the target driving data set according to the distribution proportion.
In a second aspect, an embodiment of the present application provides a vehicle driving method, including:
acquiring a target image of a real vehicle acquired in a real scene;
and controlling the real vehicle to drive in the real scene according to the result of processing the target image with the control network trained by the method of the first aspect.
In a third aspect, an embodiment of the present application provides a network training apparatus, including:
the control unit is used for controlling a simulation vehicle to run in a simulation scene to obtain a running data set acquired by the simulation vehicle in the running process, wherein the simulation vehicle is a simulation model of a real vehicle, and the simulation scene is a simulation model of the real scene;
the adjusting unit is used for modifying the data in the driving data set to obtain a training data set;
and the training unit is used for training a control network according to the training data set, wherein the control network is used for controlling the real vehicle to run in the real scene.
In some possible embodiments, the adjusting unit is further configured to, before the control unit controls the simulated vehicle to travel in the simulated scene,
comparing a simulation image with a real image to obtain a first comparison result, wherein the real image is obtained by shooting through a real image sensor of the real vehicle in the real scene, the simulation image is obtained by shooting through a simulation image sensor of the simulation vehicle in the simulation scene, and the corresponding shooting parameters of the simulation image sensor and the real image sensor are the same;
and adjusting a first simulation parameter of the simulated image sensor according to the first comparison result.
In some possible embodiments, the position and angle at which the simulated image sensor captures the simulated image in the simulated scene correspond to the position and angle at which the real image sensor captures the real image in the real scene; the first simulation parameter includes one or more of an imaging perspective, a resolution, a distortion coefficient, and the position of the simulated image sensor relative to the simulated vehicle.
In some possible embodiments, the adjusting unit is further configured to, before the control unit controls the simulated vehicle to travel in the simulated scene,
comparing simulation data with real data to obtain a second comparison result, wherein the real data is obtained by driving the real vehicle in the real scene according to a preset driving condition, and the simulation data is obtained by controlling the simulation vehicle to drive in the simulation scene according to the preset driving condition;
and adjusting a second simulation parameter of the simulated vehicle according to the second comparison result.
In some possible embodiments, the second simulation parameter includes one or more of a turning radius, a speed, and a collision volume of the simulated vehicle.
In some possible embodiments, in controlling a simulated vehicle to travel in a simulation scene, and obtaining a travel data set collected by the simulated vehicle during travel, the control unit is specifically configured to:
controlling the simulated vehicle to drive under N preset driving conditions in the simulated scene, and collecting, through a simulated image sensor of the simulated vehicle, N types of driving data corresponding to the N preset driving conditions during driving;
and composing the N types of driving data into the driving data set.
In some possible embodiments, in terms of modifying the data in the driving data set to obtain the training data set, the adjusting unit is specifically configured to:
acquiring the real running condition of the real vehicle in the real scene;
modifying the data in the driving data set according to the real driving condition to obtain a target driving data set;
and obtaining a training data set according to the target driving data set.
In some possible embodiments, in terms of modifying the data in the driving data set according to the real driving condition to obtain the target driving data set, the adjusting unit is specifically configured to perform at least one of the following manners:
acquiring a bias corresponding to the real driving condition, and modifying the data in the driving data set according to the bias to obtain a target driving data set;
acquiring image noise corresponding to the real driving condition, and modifying data in the driving data set according to the image noise to obtain a target driving data set;
and acquiring a sampling frequency corresponding to the real driving condition, and resampling the data in the driving data set according to the sampling frequency to obtain a target driving data set.
In some possible embodiments, in obtaining the training data set from the target driving data set, the adjusting unit is specifically configured to:
determining the distribution proportion of each type of driving data in the target driving data set according to the training purpose corresponding to the control network;
and selecting the training data set from the target driving data set according to the distribution proportion.
In a fourth aspect, an embodiment of the present application provides a vehicle running device including:
the acquiring unit is used for acquiring a target image acquired by a real vehicle in a real scene;
a control unit, configured to control the real vehicle to drive in the real scene according to the result of processing the target image with the control network trained by the method of the first aspect.
In a fifth aspect, embodiments of the present application provide an electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of the method according to the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, where the computer program causes a computer to execute the method according to the first aspect or the second aspect.
In a seventh aspect, the present application provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, wherein the computer program is operable to cause a computer to perform the method according to the first or second aspect.
The embodiment of the application has the following beneficial effects:
It can be seen that in the embodiments of this application, the simulated vehicle is controlled to drive in the simulated scene to obtain a driving data set, and that data set is then modified so that it better matches the vehicle's real driving data. Training the control network on the modified driving data set therefore improves the network's recognition accuracy, and using the network to control a real vehicle improves driving safety.
Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of this application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a network training method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a network training apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a vehicle running device according to an embodiment of the present application;
fig. 4 is a block diagram illustrating functional units of a network training apparatus according to an embodiment of the present disclosure;
fig. 5 is a block diagram showing functional units of a vehicle running device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of this application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a network training method provided in the present application, which includes, but is not limited to, the following steps:
101: and controlling the simulation vehicle to run in the simulation scene to obtain a running data set collected by the simulation vehicle in the running process.
The simulation vehicle is a simulation model of a real vehicle, and the simulation scene is a simulation model of a real scene.
The real vehicle is a vehicle that runs in the real world, and may be a vehicle that supports automatic driving, such as a car, a truck, a motorcycle, a bus, a lawn mower, an amusement car, a playground vehicle, an electric car, a golf cart, a train, a trolley, or the like, or may be a manually driven vehicle, which is not limited in this application. The simulation vehicle is a simulation model obtained by simulating a real vehicle.
The real scene is a scene that the real vehicle travels in the real world, for example, the elements in the scene may include lanes, traffic lights, traffic signs, houses, various types of real vehicles, pedestrians, and the like, which is not limited in this application. The simulation scene is a simulation model obtained by simulating elements in a real scene.
It is understood that before the simulated vehicle is controlled to drive in the simulated scene, the simulated scene is created by inputting the map element types of the real scene into a simulator, where the map element types include lanes, signal lights, traffic signs, lane lines, obstacles, and the like; the simulated vehicle is created by inputting the vehicle parameters of the real vehicle into the simulator, including vehicle size, turning radius, collision volume, hardware configuration (e.g., number and type of cameras), and the like. The simulator may be, for example, the Unity3D engine; the specific creation process is prior art and is not described here.
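As an illustration, the parameter hand-off described above might be sketched as follows. This is a minimal sketch in Python, not the actual Unity3D workflow; all class and function names (`VehicleParams`, `SceneConfig`, `build_simulation`) and the numeric values are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical parameter records; a real simulator (e.g. the Unity3D engine
# named in the text) exposes its own configuration API instead.
@dataclass
class VehicleParams:
    size_m: tuple              # (length, width, height) in metres
    turning_radius_m: float
    collision_volume_m3: float
    camera_count: int

@dataclass
class SceneConfig:
    # map element types of the real scene
    map_elements: list = field(default_factory=list)

def build_simulation(scene: SceneConfig, vehicle: VehicleParams) -> dict:
    """Assemble a simulation description from real-world measurements."""
    return {"scene": list(scene.map_elements), "vehicle": vehicle}

scene = SceneConfig(["lane", "signal_light", "traffic_sign", "lane_line", "obstacle"])
car = VehicleParams(size_m=(4.5, 1.8, 1.5), turning_radius_m=5.3,
                    collision_volume_m3=12.1, camera_count=4)
sim = build_simulation(scene, car)
```

The point of the sketch is only that both the scene and the vehicle are built from real-world measurements, which is what makes the later calibration steps meaningful.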
102: and modifying the data in the driving data set to obtain a training data set.
Specifically, the data in the driving data set are resampled, offset by a bias, or augmented with image noise to obtain a target driving data set, and the training data set is then selected from the target driving data set.
103: and training a control network according to the training data set.
The control network is used for controlling the real vehicle to run in the real scene.
Specifically, the trained control network may be used in the automatic driving system of a real vehicle: the system controls the real image sensor to capture a road image, inputs the image to the control network to obtain a recognition result, determines a driving decision from that result, and controls the real vehicle to drive according to the decision.
For example, if a curve is recognized in the road image, the corresponding driving decision limits the driving speed to the range of 20 km/h to 40 km/h; the current speed of the real vehicle is obtained, and if it lies outside this range, the speed is adjusted into it.
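The speed-adjustment rule in this example amounts to a simple clamp. The function name and default limits below are illustrative, not part of the patent:

```python
def adjust_speed_for_curve(current_kmh: float,
                           low_kmh: float = 20.0,
                           high_kmh: float = 40.0) -> float:
    """Clamp the current speed into the permitted range for curve driving."""
    return min(max(current_kmh, low_kmh), high_kmh)
```

A vehicle entering the curve at 60 km/h would be slowed to 40 km/h; one crawling at 10 km/h would be brought up to 20 km/h; a speed already in range is left unchanged.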
It can be seen that in the embodiments of this application, a simulated scene and a simulated vehicle are created and the simulated vehicle is controlled to drive in the simulated scene, so that abundant driving data can be collected. The driving data are then modified so that they better match the driving data of a vehicle in the real scene, and the control network is trained on the modified data. The trained control network can therefore accurately recognize driving behaviors in the real scene, improving driving safety.
In some possible embodiments, before controlling the simulated vehicle to travel in the simulated scene, the method further comprises:
comparing a simulation image with a real image to obtain a first comparison result, wherein the real image is obtained by shooting through a real image sensor of the real vehicle in the real scene, the simulation image is obtained by shooting through a simulation image sensor of the simulation vehicle in the simulation scene, and the corresponding shooting parameters of the simulation image sensor and the real image sensor are the same; and adjusting a first simulation parameter of the simulated image sensor according to the first comparison result.
The first simulation parameter includes one or more of an imaging perspective (field of view), a resolution, a distortion coefficient, and the position of the simulated image sensor relative to the simulated vehicle. The imaging perspective describes the area the simulated image sensor can capture: the larger the perspective, the larger the captured area. The resolution is the number of pixels per unit area, i.e. the pixel density; in general, the higher the resolution, the clearer the image. The distortion coefficient consists of a radial distortion term and a tangential distortion term and measures the degree to which lens distortion warps the image.
The position and angle at which the simulated image sensor captures the simulated image in the simulated scene correspond to the position and angle at which the real image sensor captures the real image in the real scene. That is, if the real image sensor of the real vehicle captures the real image at position A in the real scene, the simulated image must be captured by the simulated image sensor at position A' in the simulated scene, where A' is the simulated position corresponding to A, and the shooting angles of the two sensors must be kept consistent.
In addition, when the simulated image is captured, other shooting parameters must also be consistent with the real scene, beyond the shooting position and angle described above. For example, the ambient brightness in the simulated scene should match the ambient brightness in the real scene; that is, the simulated image sensor shoots in the simulated scene in the same manner as the real image sensor shoots in the real scene.
For example, a black-and-white checkerboard is printed in the real scene, placed d cm in front of the image sensor of the real vehicle, and photographed to obtain a real image. A simulated checkerboard of the same specification is created in the virtual environment, placed d cm in front of the simulated image sensor of the simulated vehicle, and photographed to obtain a simulated image. The two images are then compared: the imaging range of the simulated image sensor is adjusted according to the number of squares visible in each image; its resolution is adjusted according to the number of pixels in each image; and its distortion coefficient is adjusted by deriving the real distortion coefficient from the real image, deriving the simulated distortion coefficient from the simulated image, and comparing the two.
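A minimal sketch of how the comparison might be turned into parameter corrections, assuming square counts stand in for the imaging range, pixel counts for the resolution, and per-coefficient differences for the distortion model; the function name and all numbers are hypothetical:

```python
def sensor_adjustments(real_squares: int, sim_squares: int,
                       real_pixels: int, sim_pixels: int,
                       real_dist: list, sim_dist: list) -> dict:
    """Derive corrections for the simulated image sensor from one
    checkerboard image pair: square counts stand in for the imaging
    range, pixel counts for the resolution, and per-coefficient
    differences for the distortion model."""
    return {
        "fov_scale": real_squares / sim_squares,
        "resolution_scale": real_pixels / sim_pixels,
        "distortion_delta": [r - s for r, s in zip(real_dist, sim_dist)],
    }

# Illustrative numbers only.
adj = sensor_adjustments(real_squares=48, sim_squares=40,
                         real_pixels=1920 * 1080, sim_pixels=1280 * 720,
                         real_dist=[0.12, -0.03], sim_dist=[0.10, -0.01])
```

In practice the distortion coefficients would come from a standard camera-calibration routine rather than being read off directly as assumed here.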
In this embodiment, before the driving data are collected, the simulation parameters of the simulated image sensor are adjusted (calibrated) so that it behaves more like the real image sensor. This ensures the collected driving data better match the vehicle's real driving data and improves the quality and authenticity of the collected data.
In some possible embodiments, before controlling the simulated vehicle to travel in the simulated scene, the method further comprises: comparing simulation data with real data to obtain a second comparison result, wherein the real data are obtained by driving the real vehicle in the real scene according to a preset driving condition, and the simulation data are obtained by controlling the simulation vehicle to drive in the simulation scene according to the preset driving condition; and adjusting a second simulation parameter of the simulated vehicle according to the second comparison result.
The second simulation parameter includes one or more of the turning radius, speed, and collision volume of the simulated vehicle. The turning radius is the distance from the steering center to the ground-contact point of the outer front steering wheel while the simulated vehicle is turning; the speed is the driving speed during travel (for example, the maximum driving speed); and the collision volume measures the degree of deformation when the vehicle collides with an obstacle while driving.
Specifically, after the simulated scene and the simulated vehicle are created, real data of the real vehicle driving under a preset driving condition are obtained, where the real data include one or more of the turning radius, collision volume, and speed. The simulated vehicle is then controlled to drive in the simulated scene under the same preset condition to obtain simulated data from the driving process, the real data and the simulated data are compared to obtain a second comparison result (the data difference), and the second simulation parameter of the simulated vehicle is adjusted according to that difference.
The process of adjusting the second simulation parameter of the simulated vehicle is specifically described below by taking the turning radius as an example.
The real vehicle is controlled to turn at a first speed in the real environment, and its turning radius is recorded; the simulated vehicle is controlled to turn at the same first speed in the simulation environment, and its turning radius is recorded. The two radii are compared to obtain their difference, and the turning radius of the simulated vehicle is adjusted according to that difference so that it more closely matches the real vehicle.
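The turning-radius adjustment can be sketched as an iterative correction toward the measured value. The step size, tolerance, and function name are assumptions, not part of the patent:

```python
def calibrate_turning_radius(real_radius_m: float, sim_radius_m: float,
                             step_m: float = 0.5, tol_m: float = 0.05) -> float:
    """Nudge the simulated turning radius toward the measured real one,
    by at most step_m per iteration, until the difference is within tol_m."""
    while abs(real_radius_m - sim_radius_m) > tol_m:
        delta = real_radius_m - sim_radius_m
        # clamp the correction so each adjustment stays bounded
        sim_radius_m += max(-step_m, min(step_m, delta))
    return sim_radius_m
```

The same pattern would apply to the other second simulation parameters (speed, collision volume): measure, compare, and correct until the simulated value is close enough to the real one.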
In this example, the simulation parameters of the simulated vehicle are adjusted (calibrated) so that the simulated vehicle is closer to the real vehicle, thereby ensuring that the acquired driving data better conforms to the data during real driving, and improving the quality and authenticity of the acquired driving data.
In some possible embodiments, controlling the simulated vehicle to travel in the simulation scene, and obtaining the travel data set collected by the simulated vehicle during the travel may be:
controlling the simulation vehicle to run according to N preset running conditions in the simulation scene, and acquiring N running data corresponding to the N preset running conditions in the running process through a simulation image sensor of the simulation vehicle; and composing the N types of driving data into the driving data set.
For example, the N preset driving conditions may include: straight line driving (without signal lamps), lane departure correction, signal lamp driving, curve driving, obstacle avoidance driving, and the like.
It should be noted that the simulated vehicle is driven repeatedly, a different number of times for different preset driving conditions, so that the amount of driving data collected differs across conditions. In general, the harder a driving situation is to capture in a real environment, the more repeated runs it is given. For example, the number of repeated runs for straight driving (no signal lights), lane departure correction, passing signal lights, curve driving, and obstacle avoidance may increase in that order.
In addition, the N kinds of driving data corresponding to the N kinds of preset driving conditions may be continuous or discontinuous, which is not limited in this application.
Wherein the driving data set includes: the driving speed, acceleration and deceleration, and turning radius of the simulated vehicle, and the simulated images captured by the simulated image sensor during driving.
In this example, driving data is collected under each preset driving condition in the simulation environment, which yields driving data that is difficult to collect in a real environment. This enriches the types of driving data and the diversity of the training data set, so that the trained control network can accurately recognize each driving scene a real vehicle encounters, improving driving safety.
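The condition-weighted collection described above can be sketched as follows. The condition names and repetition counts are illustrative assumptions; harder-to-collect conditions are simply run more often, as the text indicates.

```python
# Hypothetical repetition counts per preset driving condition, increasing
# for conditions whose data is hard to obtain in the real world.
REPETITIONS = {
    "straight_no_signal": 10,
    "lane_departure_correction": 20,
    "signal_light": 40,
    "curve": 80,
    "obstacle_avoidance": 160,
}

def collect_driving_dataset(drive_once):
    """Run each preset condition its configured number of times and pool
    the resulting records into a single driving data set."""
    dataset = []
    for condition, runs in REPETITIONS.items():
        for run_idx in range(runs):
            dataset.append(drive_once(condition, run_idx))
    return dataset

# A stand-in for one simulated run that returns a driving-data record.
def fake_run(condition, run_idx):
    return {"condition": condition, "run": run_idx}

dataset = collect_driving_dataset(fake_run)
```

In a real pipeline `drive_once` would drive the simulated vehicle and return the sensor readings for that run; the stand-in here only illustrates the pooling logic.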
In some possible embodiments, modifying the data in the driving data set to obtain the training data set may be implemented as follows:
acquiring the real driving condition of the real vehicle in the real scene; modifying the data in the driving data set according to the real driving condition to obtain a target driving data set, so that the target driving data set better matches the real vehicle's driving data; and obtaining the training data set from the target driving data set.
In this example, an adjustment operation corresponding to the real driving condition is obtained, and the driving data is modified using that adjustment operation to obtain the target driving data set, where the adjustment operation includes at least one of the following:
acquiring a bias corresponding to the real driving condition, and modifying the data in the driving data set according to the bias to obtain a target driving data set;
acquiring image noise corresponding to the real driving condition, and modifying data in the driving data set according to the image noise to obtain a target driving data set;
and acquiring a sampling frequency corresponding to the real driving condition, and resampling the data in the driving data set according to the sampling frequency to obtain a target driving data set.
The following describes modifying the driving data set, taking resampling, bias, and image noise as examples; the adjustment operation is not limited in this application.
When a real vehicle drives, the real images it captures inevitably carry image noise, for reasons such as natural environmental factors and shooting errors of the image sensor, whereas the image sensor in the simulation scene is not affected by these factors. Therefore, when the real vehicle drives in the real scene and captures real images, it is determined whether the real images contain image noise (random noise or another type of noise); if so, the image noise corresponding to the real images is obtained and automatically added to the simulated images collected in the simulation scene, so that the simulated images are more consistent with the real images.
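A minimal sketch of this noise-addition step, assuming a zero-mean Gaussian noise model; the noise model, the `sigma` value, and the plain-list image representation are illustrative assumptions rather than the patent's specified method.

```python
import random

def add_image_noise(image, sigma=5.0, seed=None):
    """Return a copy of `image` (a 2-D list of 0-255 intensities) with
    zero-mean Gaussian noise added and values clamped to the valid range,
    mimicking the noise observed in real captured images."""
    rng = random.Random(seed)  # seeded for reproducible noise
    noisy = []
    for row in image:
        noisy.append([min(255, max(0, round(p + rng.gauss(0.0, sigma))))
                      for p in row])
    return noisy

sim_image = [[100, 120], [140, 160]]
noisy_image = add_image_noise(sim_image, sigma=5.0, seed=42)
```

In practice `sigma` (or a richer noise model) would be estimated from the real images, then applied to every simulated image in the driving data set.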
In addition, when the real vehicle drives, its limited data-collection capability inevitably introduces errors between the collected driving data and the vehicle's true driving data, while data collection in the simulation scene is far more accurate. Therefore, if the driving data collected in the real scene is determined to contain an error, a bias can be derived from that error and automatically added to the driving data collected in the simulation environment, so that the simulated driving data is more consistent with the real driving process.
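The bias step can be sketched as adding a fixed offset to one field of each simulated driving-data record; the record layout, the field name `speed`, and the example offset are illustrative assumptions.

```python
def apply_bias(records, field, bias):
    """Add `bias` to `field` of every driving-data record, leaving the
    input records untouched."""
    return [{**r, field: r[field] + bias} for r in records]

sim_records = [{"speed": 10.0}, {"speed": 12.0}]
# Suppose real-world collection was found to under-read speed by 0.3 m/s;
# the same offset is applied to the simulated data so both match:
biased = apply_bias(sim_records, "speed", -0.3)
```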
As an example of resampling: if the real vehicle fails to recognize a curve in time in the real scene and therefore cannot turn in time, a sampling frequency can be determined from how often this happens. The data related to curve driving in the driving data set is then resampled at that frequency to obtain a second driving data set, and the second driving data set together with the original driving data set forms the target driving data set, thereby increasing the curve-related driving data and hence the curve-related training data.
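Under assumed record fields, the resampling step can be sketched as duplicating curve-related records by an integer factor derived from the sampling frequency; the field names and factor are illustrative.

```python
def resample_curve_data(records, factor):
    """Repeat every curve-driving record `factor` times; other records
    pass through once. The extra copies play the role of the 'second
    driving data set' merged into the target driving data set."""
    resampled = []
    for r in records:
        copies = factor if r["condition"] == "curve" else 1
        resampled.extend(dict(r) for _ in range(copies))
    return resampled

driving_data = [{"condition": "curve", "speed": 8.0},
                {"condition": "straight", "speed": 15.0}]
target_data = resample_curve_data(driving_data, factor=3)
```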
Further, after the target driving data set is obtained, the distribution proportion of each type of driving data in the target driving data set can be determined according to the training purpose of the control network, and the training data set is then selected from the target driving data set according to that distribution proportion.
Specifically, the training purpose of the control network is obtained; training purposes include path planning, driving decision, environment perception, execution control, and the like, which is not limited in this application. The distribution proportion of each type of driving data in the target driving data set is determined according to a mapping between training purposes and sample distribution proportions; then the total number of required training samples is obtained, and the training data set is selected from the target driving data set according to the total number and the distribution proportion.
For example, when the training purpose is lane recognition, mainly for curve recognition, the sample distribution proportion determined for that purpose may be a 3:1 ratio of curve-driving samples to straight-driving samples. If 2000 training samples are required in total, 1500 curve-driving samples and 500 straight-driving samples are selected from the target driving data set, and these two parts together form the training data set.
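The ratio-to-counts computation in this example can be sketched as follows; the remainder-handling rule (giving any rounding leftover to the largest class) is an assumption added for completeness.

```python
def counts_from_ratio(ratio, total):
    """Split `total` samples according to `ratio`, a dict of
    class -> weight. Any rounding remainder goes to the largest class."""
    weight_sum = sum(ratio.values())
    counts = {c: (w * total) // weight_sum for c, w in ratio.items()}
    remainder = total - sum(counts.values())
    largest = max(ratio, key=ratio.get)
    counts[largest] += remainder
    return counts

# Lane-recognition training aimed at curves, as in the example above:
counts = counts_from_ratio({"curve": 3, "straight": 1}, total=2000)
```

With a 3:1 ratio and 2000 total samples this reproduces the 1500/500 split in the example.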
In some possible embodiments, the present application further provides a vehicle driving method, including but not limited to the following steps:
acquiring a target image collected by a real vehicle in a real scene; and controlling the real vehicle to run in the real scene through a processing result of the target image by a target control network.
The target control network is obtained by training a network with the training data set described above; the training itself is a standard neural-network training process and is not detailed here.
Specifically, a real image sensor of the real vehicle captures a target image during driving; the target control network then processes the target image to obtain a processing result, and the vehicle is controlled to run in the real scene according to a mapping between processing results and driving decisions.
It can be seen that, in this embodiment, the target control network is used to recognize the target image. Because the control network is trained on the modified driving data, which better matches the driving data of a real vehicle, the target control network can recognize the target image accurately, make accurate driving decisions, and thereby improve driving safety.
Fig. 2 is a schematic structural diagram of a network training apparatus 200 according to an embodiment of the present disclosure. As shown in Fig. 2, the network training apparatus 200 includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are distinct from application programs, are stored in the memory, and are configured to be executed by the processor, the programs including instructions for performing the following steps:
controlling a simulation vehicle to run in a simulation scene to obtain a running data set collected by the simulation vehicle in the running process, wherein the simulation vehicle is a simulation model of a real vehicle, and the simulation scene is a simulation model of a real scene;
modifying the data in the driving data set to obtain a training data set;
training a control network according to the training data set, wherein the control network is used for controlling the real vehicle to run in the real scene.
In some possible embodiments, before the simulated vehicle is controlled to travel in the simulation scene, the program further includes instructions for performing the following steps:
comparing a simulation image with a real image to obtain a first comparison result, wherein the real image is obtained by shooting through a real image sensor of the real vehicle in the real scene, the simulation image is obtained by shooting through a simulation image sensor of the simulation vehicle in the simulation scene, and the corresponding shooting parameters of the simulation image sensor and the real image sensor are the same;
and adjusting a first simulation parameter of the simulated image sensor according to the first comparison result.
In some possible embodiments, the position and angle at which the simulated image sensor captures the simulated image in the simulated scene correspond to the position and angle at which the real image sensor captures the real image in the real scene; the first simulation parameter includes one or more of an imaging perspective, a resolution, a distortion coefficient, a relative position of the simulated image sensor to the simulated vehicle.
In some possible embodiments, before the simulated vehicle is controlled to travel in the simulation scene, the program further includes instructions for performing the following steps:
comparing simulation data with real data to obtain a second comparison result, wherein the real data is obtained by driving the real vehicle in the real scene according to a preset driving condition, and the simulation data is obtained by controlling the simulation vehicle to drive in the simulation scene according to the preset driving condition;
and adjusting a second simulation parameter of the simulated vehicle according to the second comparison result.
In some possible embodiments, the second simulation parameter includes one or more of a turning radius, a speed, and a collision volume of the simulated vehicle.
In some possible embodiments, in terms of controlling the simulated vehicle to travel in the simulation scene and obtaining the travel data set collected during the travel of the simulated vehicle, the program is specifically configured to execute the following steps:
controlling the simulation vehicle to run according to N preset running conditions in the simulation scene, and acquiring N running data corresponding to the N preset running conditions in the running process through a simulation image sensor of the simulation vehicle;
and composing the N types of driving data into the driving data set.
In some possible embodiments, in terms of modifying the data in the driving data set to obtain a training data set, the program is specifically configured to execute the following steps:
acquiring the real running condition of the real vehicle in the real scene;
modifying the data in the driving data set according to the real driving condition to obtain a target driving data set;
and obtaining a training data set according to the target driving data set.
In some possible embodiments, in terms of modifying the data in the driving data set according to the real driving condition to obtain a target driving data set, the program includes instructions for performing at least one of the following steps:
acquiring a bias corresponding to the real driving condition, and modifying the data in the driving data set according to the bias to obtain a target driving data set;
acquiring image noise corresponding to the real driving condition, and modifying data in the driving data set according to the image noise to obtain a target driving data set;
and acquiring a sampling frequency corresponding to the real driving condition, and resampling the data in the driving data set according to the sampling frequency to obtain a target driving data set.
In some possible embodiments, in terms of obtaining the training data set from the target driving data set, the program is specifically configured to execute the following steps:
determining the distribution proportion of each type of driving data in the target driving data set according to the training purpose corresponding to the control network;
and selecting the training data set from the target driving data set according to the distribution proportion.
Fig. 3 is a schematic structural diagram of a vehicle driving device 300 according to an embodiment of the present application. As shown in Fig. 3, the vehicle driving device 300 includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are distinct from application programs, are stored in the memory, and are configured to be executed by the processor, the programs including instructions for performing the following steps:
acquiring a target image of a real vehicle acquired in a real scene;
and controlling the real vehicle to run in the real scene according to a processing result of the target image by the control network obtained through training by the network training apparatus 200.
Referring to fig. 4, fig. 4 shows a block diagram of a possible functional-unit composition of a network training apparatus 400 according to the above embodiments. The network training apparatus 400 includes a control unit 410, an adjusting unit 420, and a training unit 430, wherein:
the control unit 410 is configured to control a simulated vehicle to run in a simulated scene, so as to obtain a running data set acquired by the simulated vehicle in a running process, where the simulated vehicle is a simulation model of a real vehicle, and the simulated scene is a simulation model of a real scene;
an adjusting unit 420, configured to modify data in the driving data set to obtain a training data set;
a training unit 430, configured to train a control network according to the training data set, where the control network is configured to control the real vehicle to travel in the real scene.
In some possible embodiments, the adjusting unit 420 is further configured to, before the control unit 410 controls the simulated vehicle to travel in the simulation scenario,
comparing a simulation image with a real image to obtain a first comparison result, wherein the real image is obtained by shooting through a real image sensor of the real vehicle in the real scene, the simulation image is obtained by shooting through a simulation image sensor of the simulation vehicle in the simulation scene, and the corresponding shooting parameters of the simulation image sensor and the real image sensor are the same;
and adjusting a first simulation parameter of the simulated image sensor according to the first comparison result.
In some possible embodiments, the position and angle at which the simulated image sensor captures the simulated image in the simulated scene correspond to the position and angle at which the real image sensor captures the real image in the real scene; the first simulation parameter includes one or more of an imaging perspective, a resolution, a distortion coefficient, a relative position of the simulated image sensor to the simulated vehicle.
In some possible embodiments, the adjusting unit 420 is further configured to, before the control unit 410 controls the simulated vehicle to travel in the simulation scenario,
comparing simulation data with real data to obtain a second comparison result, wherein the real data is obtained by driving the real vehicle in the real scene according to a preset driving condition, and the simulation data is obtained by controlling the simulation vehicle to drive in the simulation scene according to the preset driving condition;
and adjusting a second simulation parameter of the simulated vehicle according to the second comparison result.
In some possible embodiments, the second simulation parameter includes one or more of a turning radius, a speed, and a collision volume of the simulated vehicle.
In some possible embodiments, in controlling the simulated vehicle to travel in the simulation scenario, and obtaining the travel data set collected during the travel of the simulated vehicle, the control unit 410 is specifically configured to:
controlling the simulation vehicle to run according to N preset running conditions in the simulation scene, and acquiring N running data corresponding to the N preset running conditions in the running process through a simulation image sensor of the simulation vehicle;
and composing the N types of driving data into the driving data set.
In some possible embodiments, in terms of modifying the data in the driving data set to obtain the training data set, the adjusting unit 420 is specifically configured to:
acquiring the real running condition of the real vehicle in the real scene;
modifying the data in the driving data set according to the real driving condition to obtain a target driving data set;
and obtaining a training data set according to the target driving data set.
In some possible embodiments, in terms of modifying the data in the driving data set according to the real driving condition to obtain the target driving data set, the adjusting unit 420 is specifically configured to perform at least one of the following manners:
acquiring a bias corresponding to the real driving condition, and modifying the data in the driving data set according to the bias to obtain a target driving data set;
acquiring image noise corresponding to the real driving condition, and modifying data in the driving data set according to the image noise to obtain a target driving data set;
and acquiring a sampling frequency corresponding to the real driving condition, and resampling the data in the driving data set according to the sampling frequency to obtain a target driving data set.
In some possible embodiments, in obtaining the training data set according to the target driving data set, the adjusting unit 420 is specifically configured to:
determining the distribution proportion of each type of driving data in the target driving data set according to the training purpose corresponding to the control network;
and selecting the training data set from the target driving data set according to the distribution proportion.
Referring to fig. 5, fig. 5 shows a block diagram of a possible functional-unit composition of a vehicle driving device 500 according to the above embodiments. The vehicle driving device 500 includes an obtaining unit 510 and a control unit 520, wherein:
an obtaining unit 510, configured to obtain a target image acquired by a real vehicle in a real scene;
a control unit 520, configured to control the real vehicle to run in the real scene according to a processing result of the target image by the control network obtained through training by the network training apparatus 400.
Embodiments of the present application also provide a computer storage medium, which stores a computer program, where the computer program is executed by a processor to implement part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method of network training, comprising:
controlling a simulation vehicle to run in a simulation scene to obtain a running data set collected by the simulation vehicle in the running process, wherein the simulation vehicle is a simulation model of a real vehicle, and the simulation scene is a simulation model of a real scene;
modifying the data in the driving data set to obtain a training data set;
training a control network according to the training data set, wherein the control network is used for controlling the real vehicle to run in the real scene.
2. The method of claim 1, wherein prior to controlling the simulated vehicle to travel in the simulated scene, the method further comprises:
comparing a simulation image with a real image to obtain a first comparison result, wherein the real image is obtained by shooting through a real image sensor of the real vehicle in the real scene, the simulation image is obtained by shooting through a simulation image sensor of the simulation vehicle in the simulation scene, and the corresponding shooting parameters of the simulation image sensor and the real image sensor are the same;
and adjusting a first simulation parameter of the simulated image sensor according to the first comparison result.
3. The method according to claim 1 or 2, wherein the controlling the simulated vehicle to run in the simulated scene to obtain the running data set collected by the simulated vehicle during running comprises:
controlling the simulation vehicle to run according to N preset running conditions in the simulation scene, and acquiring N running data corresponding to the N preset running conditions in the running process through a simulation image sensor of the simulation vehicle;
and composing the N types of driving data into the driving data set.
4. The method according to any one of claims 1-3, wherein said modifying data in said driving dataset to obtain a training dataset comprises:
acquiring the real running condition of the real vehicle in the real scene;
modifying the data in the driving data set according to the real driving condition to obtain a target driving data set;
and obtaining a training data set according to the target driving data set.
5. The method of claim 4, wherein the deriving a training data set from the target driving data set comprises:
determining the distribution proportion of each type of driving data in the target driving data set according to the training purpose corresponding to the control network;
and selecting the training data set from the target driving data set according to the distribution proportion.
6. A vehicle running method, characterized by comprising:
acquiring a target image of a real vehicle acquired in a real scene;
and controlling the real vehicle to run in the real scene through a processing result of the target image by a control network trained by the method of any one of claims 1 to 5.
7. A network training apparatus, comprising:
the control unit is used for controlling a simulation vehicle to run in a simulation scene to obtain a running data set acquired by the simulation vehicle in the running process, wherein the simulation vehicle is a simulation model of a real vehicle, and the simulation scene is a simulation model of the real scene;
the adjusting unit is used for modifying the data in the driving data set to obtain a training data set;
and the training unit is used for training a control network according to the training data set, wherein the control network is used for controlling the real vehicle to run in the real scene.
8. A vehicle running device characterized by comprising:
the acquiring unit is used for acquiring a target image acquired by a real vehicle in a real scene;
a control unit, configured to control the real vehicle to run in the real scene through a processing result of the target image by a control network trained by the method of any one of claims 1 to 5.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any one of claims 1-5 or claim 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method according to any one of claims 1-5 or claim 6.
CN201910944404.9A 2019-09-30 2019-09-30 Network training method, vehicle driving method and related product Pending CN110705101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910944404.9A CN110705101A (en) 2019-09-30 2019-09-30 Network training method, vehicle driving method and related product


Publications (1)

Publication Number Publication Date
CN110705101A true CN110705101A (en) 2020-01-17

Family

ID=69197739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910944404.9A Pending CN110705101A (en) 2019-09-30 2019-09-30 Network training method, vehicle driving method and related product

Country Status (1)

Country Link
CN (1) CN110705101A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382870A (en) * 2020-03-06 2020-07-07 商汤集团有限公司 Method and device for training neural network
CN112256589A (en) * 2020-11-11 2021-01-22 腾讯科技(深圳)有限公司 Simulation model training method and point cloud data generation method and device
CN112836395A (en) * 2021-03-10 2021-05-25 北京车和家信息技术有限公司 Vehicle driving data simulation method and device, electronic equipment and storage medium
CN116225024A (en) * 2023-04-11 2023-06-06 酷黑科技(北京)有限公司 Data processing method and device and automatic driving rack

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544351A (en) * 2013-10-25 2014-01-29 北京世纪高通科技有限公司 Method and device for adjusting parameters of simulation model
CN106503393A (en) * 2016-11-15 2017-03-15 浙江大学 A kind of method for realizing that using emulation generation sample unmanned vehicle is independently advanced
CN109747659A (en) * 2018-11-26 2019-05-14 北京汽车集团有限公司 The control method and device of vehicle drive
CN109886198A (en) * 2019-02-21 2019-06-14 百度在线网络技术(北京)有限公司 A kind of information processing method, device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIN Dandan et al., "Research on Infrared Scene Generation Technology and Its Applications," Infrared Technology (《红外技术》) *
WANG Shaodan et al., "Research on a GPU-Based Real-Time Simulation Method for Frequency-Domain Physical Effects of Infrared Sensors," Chinese Journal of Stereology and Image Analysis (《中国体视学与图像分析》) *


Similar Documents

Publication Publication Date Title
CN110705101A (en) Network training method, vehicle driving method and related product
CN108508881B (en) Automatic driving control strategy adjusting method, device, equipment and storage medium
CN110188482B (en) Test scene creating method and device based on intelligent driving
US11694388B2 (en) System and method for generating large simulation data sets for testing an autonomous driver
CN111291676A (en) Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN111376895B (en) Surround-view parking sensing method and device, automatic parking system and vehicle
CN111401133A (en) Target data augmentation method, device, electronic device and readable storage medium
CN108345840A (en) Detecting vehicles in low-light conditions
CN113256739B (en) Self-calibration method and device for vehicle-mounted BSD camera and storage medium
CN111325136B (en) Method and device for labeling object in intelligent vehicle and unmanned vehicle
CN111797526A (en) Simulation test scene construction method and device
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
CN111243125B (en) Vehicle check-in method, device, equipment and medium
CN107944425A (en) The recognition methods of road sign and device
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN110667474A (en) General obstacle detection method and device and automatic driving system
CN113989772A (en) Traffic light detection method and device, vehicle and readable storage medium
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN110727269B (en) Vehicle control method and related product
CN116776288A (en) Optimization method and device of intelligent driving perception model and storage medium
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN114926724A (en) Data processing method, device, equipment and storage medium
CN110942010B (en) Detection method and device for determining road surface fluctuation state based on vehicle-mounted camera
CN112966622A (en) Parking lot semantic map improving method, device, equipment and medium
CN109101908B (en) Method and device for detecting region of interest in driving process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-01-17