CN105892471B - Automatic driving method and apparatus - Google Patents

Automatic driving method and apparatus

Info

Publication number
CN105892471B
CN105892471B (application number CN201610515191.4A; publication CN105892471A)
Authority
CN
China
Prior art keywords
information
driver
field
training
running environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610515191.4A
Other languages
Chinese (zh)
Other versions
CN105892471A (en)
Inventor
李晓飞
张德兆
王肖
霍舒豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd filed Critical Beijing Idriverplus Technologies Co Ltd
Priority to CN201610515191.4A priority Critical patent/CN105892471B/en
Publication of CN105892471A publication Critical patent/CN105892471A/en
Application granted granted Critical
Publication of CN105892471B publication Critical patent/CN105892471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic driving method and apparatus, relating to the field of intelligent vehicle control technology. Based on field theory, the invention establishes a risk model of the vehicle's driving environment, so that the driving environment risk field comprehensively reflects the driving environment of the vehicle and facilitates automatic driving under different road environments. A vehicle automatic driving model is trained on the driving environment risk field together with driver operations, so that the model learns the experience of skilled human drivers and realizes human-like automatic driving.

Description

Automatic driving method and apparatus
Technical field
The present invention relates to the field of intelligent vehicle control technology, and more particularly to an automatic driving method and apparatus.
Background technique
In recent years, driver assistance systems such as adaptive cruise control and lane keeping systems have developed rapidly and have improved road traffic safety.
Automatic driving technology is currently being studied. One approach realizes automatic driving based on distributed processing: a large system is divided into multiple subsystems, each with clear semantic information. For example, an automated driving system may be divided into subsystems such as vision-based lane detection, radar-based vehicle identification, pedestrian detection, and vehicle control. Each subsystem is responsible for one or more assigned tasks. For example, the subsystems responsible for environment perception each output specific perception information, such as lane lines, vehicles, and pedestrians. The vehicle control subsystem then makes motion decisions based on this perception information and outputs vehicle control commands.
However, this specific perception information is usually preset, such as the lane-line offset and angle, or the distance and speed of a vehicle, and cannot comprehensively reflect the driving environment of the vehicle. In a complex road environment, for example, uncommon obstacles may appear, which can cause the vehicle control subsystem to fail. In addition, the traditional distributed processing approach fixes the perception and driving behavior of the automated driving system, so the system cannot learn the experience of skilled human drivers well and cannot achieve human-like automatic driving.
Summary of the invention
One of the technical problems solved by the invention is how to comprehensively reflect the driving environment of the vehicle and realize human-like automatic driving.
To achieve the above object, the present invention provides an automatic driving method, comprising: establishing a vehicle automatic driving database from collected environment perception information and driver operation information, and dividing the vehicle automatic driving database into a training set and a test set; establishing a training driving environment risk field from the environment perception information in the training set, and training a deep learning model on the training driving environment risk field and the driver operation information in the training set; establishing a test driving environment risk field from the environment perception information in the test set, inputting the test driving environment risk field into the deep learning model, outputting predicted vehicle control variables, and testing the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
The driving environment risk field used for training or testing is established as follows: the risk field is built from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; wherein,
for the training driving environment risk field, the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the training set, and the behavior field information formed by the driver is determined from the driver operation information in the training set;
for the test driving environment risk field, the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the test set, and the behavior field information formed by the driver is determined from the driver operation information in the test set.
In one embodiment, the potential energy field information formed by a stationary object is determined from the attributes of the stationary object and the road conditions; the kinetic energy field information formed by a moving object is determined from the attributes, motion state, and road conditions of the moving object.
Where the environment perception information is collected by multiple sensors, the method further includes:
converting the coordinate systems of the multiple sensors to form a unified coordinate system;
and/or
associating the same target observed by different sensors using the Mahalanobis distance of the target, and weighting and averaging, by occurrence probability, the observations of the same target from different sensors as the occurrence probability of that target.
In one embodiment, training the deep learning model on the training driving environment risk field and the driver operation information in the training set includes: inputting the training driving environment risk field and the driver operation information in the training set into the deep learning model, and outputting predicted vehicle control variables and loss information of the driver operation information; and correcting the parameters of the vehicle control variables in the deep learning model according to the loss information of the driver operation information.
To achieve the above object, the present invention also provides an automatic driving apparatus, comprising: a sample forming module, configured to establish a vehicle automatic driving database from collected environment perception information and driver operation information, and to divide the vehicle automatic driving database into a training set and a test set; a model training module, configured to establish a training driving environment risk field from the environment perception information in the training set, and to train a deep learning model on the training driving environment risk field and the driver operation information in the training set; and a model testing module, configured to establish a test driving environment risk field from the environment perception information in the test set, to input the test driving environment risk field into the deep learning model and output predicted vehicle control variables, and to test the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
The model training module includes a first risk field establishing unit, configured to establish the training driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; wherein the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the training set, and the behavior field information formed by the driver is determined from the driver operation information in the training set.
The model testing module includes a second risk field establishing unit, configured to establish the test driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; wherein the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the test set, and the behavior field information formed by the driver is determined from the driver operation information in the test set.
The potential energy field information formed by a stationary object is determined from the attributes of the stationary object and the road conditions; the kinetic energy field information formed by a moving object is determined from the attributes, motion state, and road conditions of the moving object.
Where the environment perception information is collected by multiple sensors, the sample forming module includes a data processing unit and a sample forming unit.
The data processing unit is configured to:
convert the coordinate systems of the multiple sensors to form a unified coordinate system;
and/or
associate the same target observed by different sensors using the Mahalanobis distance of the target, and weight and average, by occurrence probability, the observations of the same target from different sensors as the occurrence probability of that target.
The sample forming unit is configured to establish the vehicle automatic driving database from the collected environment perception information and driver operation information, and to divide the vehicle automatic driving database into the training set and the test set.
The model training module includes a model training unit, configured to input the training driving environment risk field and the driver operation information in the training set into the deep learning model, to output predicted vehicle control variables and loss information of the driver operation information, and to correct the parameters of the vehicle control variables in the deep learning model according to the loss information of the driver operation information.
The present invention establishes a risk model of the vehicle's driving environment based on field theory, so that the driving environment risk field comprehensively reflects the driving environment of the vehicle and facilitates automatic driving under different road environments. The vehicle automatic driving model is trained on the driving environment risk field and driver operations, so it learns the experience of skilled human drivers and realizes human-like automatic driving.
Detailed description of the invention
Fig. 1 is a flow diagram of one embodiment of the automatic driving method of the present invention.
Fig. 2 is a flow chart of one embodiment of training the deep learning model according to the present invention.
Fig. 3 is a schematic diagram of the driving environment risk field under a typical road environment according to the present invention.
Fig. 4 is a flow chart of one embodiment of testing the deep learning model according to the present invention.
Fig. 5 is a structural schematic diagram of one embodiment of the automatic driving apparatus of the present invention.
Fig. 6 is a structural schematic diagram of another embodiment of the automatic driving apparatus of the present invention.
Specific embodiment
The invention proposes an automatic driving method that uses collected environment perception information to establish a driving environment risk field and trains a deep learning model on the risk field and driver operations, thereby realizing automatic driving of the vehicle and reducing the training difficulty of the vehicle automatic driving model (hereinafter, "the model").
Fig. 1 is a flow diagram of one embodiment of the automatic driving method of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: establish a vehicle automatic driving database from the collected environment perception information and driver operation information, and divide the database into a training set and a test set to form samples. The training set is used to train the model and is used in the model training stage; the test set is used to verify the usability of the model and is used in the model testing stage.
The environment perception information is environmental data collected by at least one sensor, for example images collected by an on-board camera, point cloud information from a lidar, and target information from a millimeter-wave radar, but is not limited to these examples.
The driver operation information includes information such as the vehicle steering angle and the vehicle acceleration/deceleration. To obtain rich and diverse driving data, driving data from different drivers can be selected. For example, with a database sampling frequency of 10 Hz, two hours of driving data per driver (72,000 frames in total) are selected as the training set, and half an hour of driving data per driver (18,000 frames in total) as the test set. By learning the driving behavior of different drivers, human-like automatic driving of the vehicle can be achieved.
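As a concrete illustration of the data volumes described above, the following Python sketch splits one driver's chronologically ordered 10 Hz log into the two-hour training portion and the half-hour test portion. The function name and the per-frame record layout are assumptions made for illustration only.

```python
import numpy as np

SAMPLE_RATE_HZ = 10  # database sampling frequency from the description

def split_driver_log(frames, train_hours=2.0, test_hours=0.5):
    """Split one driver's chronologically ordered frames into train/test.

    `frames` is assumed to be a sequence of per-frame records
    (risk-field image, steering angle, acceleration), 10 per second.
    """
    n_train = int(train_hours * 3600 * SAMPLE_RATE_HZ)   # 72,000 frames
    n_test = int(test_hours * 3600 * SAMPLE_RATE_HZ)      # 18,000 frames
    return frames[:n_train], frames[n_train:n_train + n_test]
```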
Step S104: establish the training driving environment risk field from the environment perception information in the training set, and train the deep learning model on the training risk field and the driver operation information in the training set.
In one embodiment, training the deep learning model includes: inputting the training driving environment risk field and the driver operation information in the training set into the deep learning model; outputting predicted vehicle control variables, such as the vehicle steering angle and acceleration/deceleration; determining the loss information of the driver operation information from the predicted vehicle control variables and the desired vehicle control variables (the desired vehicle control variables are determined from the driver operation information); and correcting the parameters of the vehicle control variables in the deep learning model according to this loss information. After a certain number of iterations, a deep learning model meeting the requirements is obtained. The deep learning model can be, for example, a deep convolutional neural network.
The driving environment risk field describes the driving environment comprehensively, which facilitates automatic driving under different road environments.
Step S106: establish the test driving environment risk field from the environment perception information in the test set, input the test risk field into the deep learning model, output the predicted vehicle control variables, and test the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
One exemplary test method is as follows: if the gap between the predicted vehicle control variables and the driver operation information in the test set is smaller than a preset value, i.e. the two are sufficiently consistent, the deep learning model is judged usable. The vehicle control variables include, for example, the vehicle steering angle and the vehicle acceleration/deceleration.
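The threshold comparison described above could, for instance, be scripted as in the sketch below; the per-channel threshold values and the function name are illustrative assumptions, not values given by the invention.

```python
import numpy as np

def model_is_usable(pred_controls, driver_controls, thresholds=(2.0, 0.5)):
    """Judge the model usable if the mean absolute gap between predicted
    and recorded controls stays below per-channel preset values.

    pred_controls, driver_controls: arrays of shape (N, 2) holding
    [steering_angle_deg, acceleration_mps2] for each test frame.
    thresholds: preset gap values per channel (illustrative numbers).
    """
    gap = np.abs(np.asarray(pred_controls) - np.asarray(driver_controls)).mean(axis=0)
    return bool(np.all(gap < np.asarray(thresholds)))
```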
However, those skilled in the art will understand that the above test method is not the only one. For example, the predicted vehicle control variables can be used to control the vehicle, and it can be observed whether the vehicle travels normally; if it does, the deep learning model is judged usable.
In the above embodiment, the present invention establishes a risk model of the vehicle's driving environment based on field theory, so that the driving environment risk field comprehensively reflects the driving environment of the vehicle and facilitates automatic driving under different road environments. The vehicle automatic driving model is trained on the driving environment risk field and driver operations, so it learns the experience of skilled human drivers and realizes human-like automatic driving. In addition, compared with training the vehicle automatic driving model directly on raw environment perception information, this reduces the training difficulty of the model.
The present invention also provides a method for training the deep learning model. Referring to the flow chart of Fig. 2, for the data in the training set the training process is as follows:
Step S202: recognize the environment perception information collected by at least one sensor in the training set, identifying driving-environment information such as stationary objects, moving objects, and the road.
The recognition process is described below taking a camera, a lidar, and a millimeter-wave radar as examples.
From the images collected by the camera, targets such as lane lines and vehicles are recognized. Lane lines can be recognized with image processing methods: adaptive threshold segmentation, extraction of lane-marking feature points, feature point clustering and fitting, and lane-line matching and tracking together achieve accurate recognition and stable tracking of lane lines. Vehicle targets in the image can be recognized with machine learning methods: a vehicle detection model is trained with HOG (Histogram of Oriented Gradients) features and an AdaBoost cascade classifier, and the model is then used for accurate detection of vehicle targets. Those skilled in the art will understand that targets such as pedestrians, cyclists, the road, and road signs can be recognized by reference to the lane-line and vehicle-target methods above, which will not be repeated here.
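For illustration, the sketch below trains a simple vehicle/non-vehicle classifier on HOG features with AdaBoost using scikit-image and scikit-learn. It is a single-stage stand-in for the cascade classifier mentioned above, and the HOG parameters and estimator count are assumptions, not values taken from the invention.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

def hog_features(patches):
    """Compute HOG descriptors for fixed-size grayscale image patches."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_vehicle_detector(pos_patches, neg_patches):
    """Train a boosted classifier on vehicle / non-vehicle patches."""
    X = np.vstack([hog_features(pos_patches), hog_features(neg_patches)])
    y = np.hstack([np.ones(len(pos_patches)), np.zeros(len(neg_patches))])
    clf = AdaBoostClassifier(n_estimators=200)
    return clf.fit(X, y)

# Detection would then slide a window over the camera image and score each
# window with clf.predict(hog_features([window])).
```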
In addition, the lidar obtains point cloud information (i.e. spatial coordinate information) of obstacles and the road surface, and the millimeter-wave radar obtains information such as the position and velocity of obstacles (e.g. vehicle and fence targets).
Step S204: optionally, where multiple sensors collect the environment perception information, coordinate conversion and/or data fusion can also be performed.
Coordinate conversion means converting the coordinate systems of the multiple sensors into a unified coordinate system, which makes the subsequent data fusion easier. One coordinate conversion method is, for example, to convert the image coordinate system into camera coordinates, and then to transform the camera coordinates and the coordinate systems of the other sensors into a unified vehicle coordinate system (e.g. a coordinate system fixed to the ego vehicle with its origin at the vehicle's center of mass), thereby realizing coordinate conversion of the perception information of the different sensors.
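A minimal sketch of such a conversion into the unified vehicle coordinate system is shown below; the extrinsic rotation and translation values are placeholders that would in practice come from sensor calibration.

```python
import numpy as np

def to_vehicle_frame(points_sensor, R, t):
    """Transform Nx3 points from a sensor frame into the unified vehicle
    frame (origin at the vehicle's center of mass).

    R (3x3) and t (3,) are the sensor's extrinsic rotation and translation
    obtained from calibration; the values below are placeholders.
    """
    points_sensor = np.asarray(points_sensor)
    return points_sensor @ np.asarray(R).T + np.asarray(t)

# Example: a lidar mounted 1.2 m forward of and 1.5 m above the center of
# mass, with its axes aligned to the vehicle axes (identity rotation).
R_lidar = np.eye(3)
t_lidar = np.array([1.2, 0.0, 1.5])
```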
Because the perception information of different sensors has different properties (for example, millimeter-wave radar has low lateral resolution and vision sensors have poor range accuracy), the present invention associates the same target observed by different sensors using the Mahalanobis distance of the target. Further, to fuse the observations of the different sensors, the observations of the same target are weighted and averaged by occurrence probability, and the result is taken as the occurrence probability of that target, thereby realizing multi-sensor information fusion and an effective estimate of the observed true state. For example, the joint probabilistic data association (JPDA) method can be used to weight and average, by occurrence probability, the same target observed by different sensors.
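The following simplified sketch illustrates Mahalanobis-distance association and probability-weighted fusion. It uses nearest-neighbour gating rather than the full JPDA joint-probability computation, and the gate value is an assumption made only for illustration.

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between two target state vectors."""
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def associate(tracks, detections, cov, gate=3.0):
    """Associate each detection with the nearest track inside the gate."""
    pairs = []
    for j, det in enumerate(detections):
        dists = [mahalanobis(trk, det, cov) for trk in tracks]
        i = int(np.argmin(dists))
        if dists[i] < gate:
            pairs.append((i, j))
    return pairs

def fuse(observations, probs):
    """Weighted average of one target's multi-sensor observations,
    weighted by their association/occurrence probabilities."""
    w = np.asarray(probs) / np.sum(probs)
    return np.average(np.asarray(observations), axis=0, weights=w)
```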
Through the above coordinate conversion or data fusion, the road environment information can be recognized more accurately.
Step S206: establish the training driving environment risk field from the environment perception information in the training set.
The present invention provides a risk field establishment method that can comprehensively reflect the degree of risk of the vehicle's driving environment. That is, the driving environment risk field is established from the potential energy field information formed by stationary objects (such as stopped vehicles), the kinetic energy field information formed by moving objects (such as moving vehicles and pedestrians), and the behavior field information formed by the driver, expressed by the formula:
Es=Er+Ev+Ed (1)
where Es denotes the driving environment risk field, Er denotes the potential energy field information formed by stationary objects, Ev denotes the kinetic energy field information formed by moving objects, and Ed denotes the behavior field information formed by the driver.
For the training driving environment risk field, the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the training set, and the behavior field information formed by the driver is determined from the driver operation information in the training set. Specifically, the potential energy field characterizes the physical field by which stationary objects on the road influence traffic safety; the magnitude and direction of its field strength are mainly determined by the attributes of the stationary object and the road conditions. The kinetic energy field characterizes the physical field by which moving objects on the road influence traffic safety; the magnitude and direction of its field strength are mainly determined by the attributes, motion state, and road conditions of the moving object. The behavior field characterizes the physical field by which the driver's behavior pattern influences traffic safety; the magnitude of its field strength is mainly determined by the driver's behavioral characteristics. Under the same conditions, an aggressive driver usually creates a higher driving risk than a conservative driver, so the behavior field strength is larger; a driver with poor driving skill usually has a larger behavior field strength than a skilled driver.
Fig. 3 shows a schematic diagram of the driving environment risk field under a typical road environment. To facilitate the training of the deep learning model, the risk field can be discretized and projected onto a two-dimensional image, where the abscissa of the risk field image denotes the lateral direction of the vehicle, the ordinate denotes the longitudinal direction of the vehicle, and the pixel value denotes the degree of risk (which can be quantized to 0-255, for example). In this embodiment, a range of 20 m to the left and right of the vehicle, 100 m ahead, and 50 m behind is considered, and each pixel represents a length of 0.5 m, so the generated risk field grayscale image has a size of 300x80.
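One possible way to rasterize such a risk field onto the 300x80 grid of this embodiment is sketched below. The Gaussian shape used for each object's contribution is an illustrative assumption, since the invention does not prescribe a specific field-strength formula; only the grid extents, resolution, and 0-255 quantization follow the description above.

```python
import numpy as np

# Grid from the embodiment: 20 m left/right, 100 m ahead, 50 m behind,
# 0.5 m per pixel -> 300 rows (longitudinal) x 80 columns (lateral).
RES = 0.5
X_MIN, X_MAX = -50.0, 100.0   # longitudinal, metres (behind .. ahead)
Y_MIN, Y_MAX = -20.0, 20.0    # lateral, metres

def empty_risk_field():
    rows = int((X_MAX - X_MIN) / RES)   # 300
    cols = int((Y_MAX - Y_MIN) / RES)   # 80
    return np.zeros((rows, cols), dtype=np.float32)

def add_field_term(field, x, y, strength, sigma=2.0):
    """Add one object's contribution (e.g. a stationary object's Er term)
    as a Gaussian bump centred on the object; the same helper can be
    reused for the kinetic-energy (Ev) and behavior (Ed) terms with
    different strengths."""
    xs = X_MIN + RES * (np.arange(field.shape[0]) + 0.5)
    ys = Y_MIN + RES * (np.arange(field.shape[1]) + 0.5)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    field += strength * np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))
    return field

def to_grayscale(field):
    """Quantize the summed field Es = Er + Ev + Ed to a 0-255 image."""
    f = np.clip(field / (field.max() + 1e-6), 0.0, 1.0)
    return (255 * f).astype(np.uint8)
```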
Step S208: input the training driving environment risk field and the driver operation information in the training set (i.e. the supervision information) into the deep learning model, and output the predicted vehicle control variables.
The driver operation information includes information such as the vehicle steering angle and the vehicle acceleration/deceleration. To obtain rich and diverse driving data, driving data from different drivers can be selected.
The deep learning model can be, for example, a deep convolutional neural network comprising five convolutional layers and two fully connected layers, whose last layer outputs the two-dimensional vehicle control quantities.
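One plausible PyTorch realization of a five-convolutional-layer, two-fully-connected-layer network over the 1x300x80 risk field image is sketched below. The channel widths, kernel sizes, and pooling are assumptions; the invention only fixes the layer counts and the two-dimensional control output.

```python
import torch
import torch.nn as nn

class RiskFieldDrivingNet(nn.Module):
    """Five convolutional layers + two fully connected layers; the last
    layer regresses the two control quantities (steering angle, accel)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d((4, 2))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 2, 100), nn.ReLU(),
            nn.Linear(100, 2),          # [steering angle, accel/decel]
        )

    def forward(self, x):               # x: (N, 1, 300, 80) risk-field images
        return self.head(self.pool(self.features(x)))
```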
Step S210: determine the loss information of the driver operation information from the predicted vehicle control variables and the desired vehicle control variables (determined from the driver operation information), for example with an L2 loss function, and correct the parameters of the vehicle control variables in the deep learning model according to the loss information of the driver operation information.
After a certain number of iterations (e.g. 100,000), a deep learning model meeting the requirements is obtained, which completes the training of the deep learning model.
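A corresponding training loop with an L2 (mean squared error) loss might look as follows; the optimizer choice and learning rate are assumptions not specified by the invention.

```python
import torch
import torch.nn as nn

def train(model, loader, iterations=100_000, lr=1e-4, device="cpu"):
    """L2-loss regression of the model's control output against the
    recorded driver operations (the supervision information)."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                       # L2 loss
    step = 0
    while step < iterations:
        for risk_img, driver_ops in loader:      # driver_ops: (N, 2) targets
            pred = model(risk_img.to(device))    # predicted control variables
            loss = loss_fn(pred, driver_ops.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            if step >= iterations:
                break
    return model
```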
In the above embodiment, the risk assessment model of the vehicle's driving environment is established based on field theory, and the inputs of multiple sensors are fused to build a comprehensive description of the driving environment, which facilitates automatic driving under different road environments. By combining the vehicle's driving environment with the corresponding driver operation outputs and learning the vehicle's automatic driving model with deep learning methods, automatic driving of the vehicle can be achieved. By learning the driving behavior of different drivers, human-like automatic driving can be achieved.
The present invention also provides a method for testing the deep learning model. Referring to the flow chart of Fig. 4, for the data in the test set the test process is as follows:
Step S402: recognize the environment perception information collected by at least one sensor in the test set, identifying driving-environment information such as stationary objects, moving objects, and the road.
The recognition method for the environment perception information in the test set can refer to that for the training set (see step S202) and is not repeated here.
Step S404: optionally, where multiple sensors collect the environment perception information, coordinate conversion and/or data fusion can also be performed.
The coordinate conversion and/or data fusion of the environment perception information in the test set can refer to those for the training set (see step S204) and are not repeated here.
Step S406: establish the test driving environment risk field from the environment perception information in the test set.
The establishment of the test driving environment risk field can refer to that of the training driving environment risk field (see step S206) and is not repeated here.
Step S408: input the test driving environment risk field into the trained deep learning model, i.e. process the input risk field with the trained deep learning model, and output the predicted vehicle control variables.
For example, the generated 300x80 risk field grayscale image is input into the trained deep convolutional neural network, which obtains the two-dimensional vehicle control quantities, such as the vehicle steering angle and acceleration/deceleration, by regression.
Step S410: test the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
One exemplary test method is as follows: if the gap between the predicted vehicle control variables and the driver operation information in the test set is smaller than a preset value, i.e. the two are sufficiently consistent, the deep learning model is judged usable. The vehicle control variables include, for example, the vehicle steering angle and the vehicle acceleration/deceleration.
If the deep learning model is usable, effective control of the vehicle can be realized with PID (proportional-integral-derivative) control according to the vehicle control quantities output by the deep learning model (such as the vehicle steering angle and acceleration/deceleration).
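A textbook PID controller of the kind referred to above is sketched here; the gains shown in the usage comment are illustrative only and are not given by the invention.

```python
class PID:
    """Proportional-integral-derivative controller used to track a control
    quantity (e.g. the speed implied by the predicted accel/decel)."""
    def __init__(self, kp, ki, kd, dt=0.1):      # dt = 1 / 10 Hz sampling
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Example: track a target speed derived from the model's predicted
# acceleration; the gains below are illustrative placeholders.
# speed_pid = PID(kp=0.8, ki=0.1, kd=0.05)
```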
The present invention also provides an automatic driving apparatus. Referring to Fig. 5, the apparatus includes:
a sample forming module 502, configured to establish a vehicle automatic driving database from the collected environment perception information and driver operation information, and to divide the vehicle automatic driving database into a training set and a test set;
a model training module 504, configured to establish a training driving environment risk field from the environment perception information in the training set, and to train the deep learning model on the training risk field and the driver operation information in the training set;
a model testing module 506, configured to establish a test driving environment risk field from the environment perception information in the test set, to input the test risk field into the deep learning model and output predicted vehicle control variables, and to test the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
Referring to Fig. 6, where the environment perception information is collected by multiple sensors, the sample forming module 502 includes a data processing unit 5022 and a sample forming unit 5024.
The data processing unit 5022 is configured to convert the coordinate systems of the multiple sensors to form a unified coordinate system, and/or to associate the same target observed by different sensors using the Mahalanobis distance of the target and to weight and average, by occurrence probability, the observations of the same target from different sensors as the occurrence probability of that target.
The sample forming unit 5024 is configured to establish the vehicle automatic driving database from the collected environment perception information and driver operation information, and to divide the vehicle automatic driving database into the training set and the test set.
The model training module 504 includes a first risk field establishing unit 5042, configured to establish the training driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; the potential energy field information and the kinetic energy field information are determined from the environment perception information in the training set, and the behavior field information is determined from the driver operation information in the training set.
The model training module 504 also includes a model training unit 5044, configured to input the training driving environment risk field and the driver operation information in the training set into the deep learning model, to output predicted vehicle control variables and loss information of the driver operation information, and to correct the parameters of the vehicle control variables in the deep learning model according to the loss information of the driver operation information.
The model testing module 506 includes a second risk field establishing unit 5062, configured to establish the test driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; the potential energy field information and the kinetic energy field information are determined from the environment perception information in the test set, and the behavior field information is determined from the driver operation information in the test set.
The potential energy field information formed by a stationary object is determined from the attributes of the stationary object and the road conditions; the kinetic energy field information formed by a moving object is determined from the attributes, motion state, and road conditions of the moving object.
The model testing module 506 also includes a model testing unit 5064, configured to input the test driving environment risk field into the deep learning model, to output the predicted vehicle control variables, and to test the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
The present invention establishes a risk model of the vehicle's driving environment based on field theory, so that the driving environment risk field comprehensively reflects the driving environment of the vehicle and facilitates automatic driving under different road environments. The vehicle automatic driving model is trained on the driving environment risk field and driver operations, so it learns the experience of skilled human drivers and realizes human-like automatic driving. In addition, compared with training the vehicle automatic driving model directly on raw environment perception information, this reduces the training difficulty of the model.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and are not limiting. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An automatic driving method, characterized by comprising:
establishing a vehicle automatic driving database from collected environment perception information and driver operation information, and dividing the vehicle automatic driving database into a training set and a test set;
establishing a training driving environment risk field from the environment perception information in the training set, and training a deep learning model on the training driving environment risk field and the driver operation information in the training set;
establishing a test driving environment risk field from the environment perception information in the test set, inputting the test driving environment risk field into the deep learning model, outputting predicted vehicle control variables, and testing the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
2. The method of claim 1, wherein the driving environment risk field used for training or testing is established by the following method:
establishing the driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver;
wherein,
for the training driving environment risk field, the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the training set, and the behavior field information formed by the driver is determined from the driver operation information in the training set;
for the test driving environment risk field, the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the test set, and the behavior field information formed by the driver is determined from the driver operation information in the test set.
3. The method of claim 2, wherein:
the potential energy field information formed by a stationary object is determined from the attributes of the stationary object and the road conditions;
the kinetic energy field information formed by a moving object is determined from the attributes, motion state, and road conditions of the moving object.
4. The method of claim 1, wherein, where the environment perception information is collected by multiple sensors, the method further comprises:
converting the coordinate systems of the multiple sensors to form a unified coordinate system;
or
associating the same target observed by different sensors using the Mahalanobis distance of the target, and weighting and averaging, by occurrence probability, the observations of the same target from different sensors as the occurrence probability of that target.
5. The method of claim 1, wherein training the deep learning model on the training driving environment risk field and the driver operation information in the training set comprises:
inputting the training driving environment risk field and the driver operation information in the training set into the deep learning model, and outputting predicted vehicle control variables and loss information of the driver operation information;
correcting the parameters of the vehicle control variables in the deep learning model according to the loss information of the driver operation information.
6. An automatic driving apparatus, characterized by comprising:
a sample forming module, configured to establish a vehicle automatic driving database from collected environment perception information and driver operation information, and to divide the vehicle automatic driving database into a training set and a test set;
a model training module, configured to establish a training driving environment risk field from the environment perception information in the training set, and to train a deep learning model on the training driving environment risk field and the driver operation information in the training set;
a model testing module, configured to establish a test driving environment risk field from the environment perception information in the test set, to input the test driving environment risk field into the deep learning model and output predicted vehicle control variables, and to test the deep learning model by comparing the predicted vehicle control variables with the driver operation information in the test set.
7. The apparatus of claim 6, wherein:
the model training module comprises a first risk field establishing unit, configured to establish the training driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; wherein the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the training set, and the behavior field information formed by the driver is determined from the driver operation information in the training set;
the model testing module comprises a second risk field establishing unit, configured to establish the test driving environment risk field from the potential energy field information formed by stationary objects, the kinetic energy field information formed by moving objects, and the behavior field information formed by the driver; wherein the potential energy field information formed by stationary objects and the kinetic energy field information formed by moving objects are determined from the environment perception information in the test set, and the behavior field information formed by the driver is determined from the driver operation information in the test set.
8. The apparatus of claim 7, wherein:
the potential energy field information formed by a stationary object is determined from the attributes of the stationary object and the road conditions;
the kinetic energy field information formed by a moving object is determined from the attributes, motion state, and road conditions of the moving object.
9. The apparatus of claim 6, wherein, where the environment perception information is collected by multiple sensors, the sample forming module comprises a data processing unit and a sample forming unit;
the data processing unit is configured to:
convert the coordinate systems of the multiple sensors to form a unified coordinate system;
or
associate the same target observed by different sensors using the Mahalanobis distance of the target, and weight and average, by occurrence probability, the observations of the same target from different sensors as the occurrence probability of that target;
the sample forming unit is configured to establish the vehicle automatic driving database from the collected environment perception information and driver operation information, and to divide the vehicle automatic driving database into the training set and the test set.
10. The apparatus of claim 6, wherein:
the model training module comprises a model training unit, configured to input the training driving environment risk field and the driver operation information in the training set into the deep learning model, to output predicted vehicle control variables and loss information of the driver operation information, and to correct the parameters of the vehicle control variables in the deep learning model according to the loss information of the driver operation information.
CN201610515191.4A 2016-07-01 2016-07-01 Automatic driving method and apparatus Active CN105892471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610515191.4A CN105892471B (en) 2016-07-01 2016-07-01 Automatic driving method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610515191.4A CN105892471B (en) 2016-07-01 2016-07-01 Automatic driving method and apparatus

Publications (2)

Publication Number Publication Date
CN105892471A CN105892471A (en) 2016-08-24
CN105892471B true CN105892471B (en) 2019-01-29

Family

ID=56718584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610515191.4A Active CN105892471B (en) 2016-07-01 2016-07-01 Automatic driving method and apparatus

Country Status (1)

Country Link
CN (1) CN105892471B (en)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108773373B (en) * 2016-09-14 2020-04-24 北京百度网讯科技有限公司 Method and device for operating an autonomous vehicle
CN106340205A (en) * 2016-09-30 2017-01-18 广东中星微电子有限公司 Traffic monitoring method and traffic monitoring apparatus
CN106394559A (en) * 2016-11-17 2017-02-15 吉林大学 Multi-target driving behavior evaluation analytical method based on environmental perception information
CN106556518B (en) * 2016-11-25 2020-03-31 特路(北京)科技有限公司 Method and test field for testing ability of automatic driving vehicle to pass through visual interference area
CN108205922A (en) * 2016-12-19 2018-06-26 乐视汽车(北京)有限公司 A kind of automatic Pilot decision-making technique and system
WO2018123019A1 (en) * 2016-12-28 2018-07-05 本田技研工業株式会社 Vehicle control system, vehicle control method, and vehicle control program
CN106844949B (en) * 2017-01-18 2020-01-10 清华大学 Training method of bidirectional LSTM model for realizing energy-saving control of locomotive
US20180217603A1 (en) * 2017-01-31 2018-08-02 GM Global Technology Operations LLC Efficient situational awareness from perception streams in autonomous driving systems
US10752239B2 (en) * 2017-02-22 2020-08-25 International Business Machines Corporation Training a self-driving vehicle
KR102406507B1 (en) * 2017-03-27 2022-06-10 현대자동차주식회사 Apparatus for controlling autonomous vehicle based on deep learning, system having the same and method thereof
US10705525B2 (en) * 2017-04-07 2020-07-07 Nvidia Corporation Performing autonomous path navigation using deep neural networks
CN115855022A (en) 2017-04-07 2023-03-28 辉达公司 Performing autonomous path navigation using deep neural networks
CN107150691B (en) * 2017-04-21 2022-03-25 百度在线网络技术(北京)有限公司 Stunt performance method, device and equipment for unmanned vehicle and storage medium
DE112017007596T5 (en) * 2017-06-02 2020-02-20 Honda Motor Co., Ltd. Strategy generator and vehicle
CN110692094B (en) * 2017-06-02 2022-02-01 本田技研工业株式会社 Vehicle control apparatus and method for control of autonomous vehicle
WO2018220851A1 (en) 2017-06-02 2018-12-06 本田技研工業株式会社 Vehicle control device and method for controlling autonomous driving vehicle
CN106990714A (en) * 2017-06-05 2017-07-28 李德毅 Adaptive Control Method and device based on deep learning
CN107918392B (en) * 2017-06-26 2021-10-22 深圳瑞尔图像技术有限公司 Method for personalized driving of automatic driving vehicle and obtaining driving license
CN108803623B (en) * 2017-10-22 2021-12-21 深圳瑞尔图像技术有限公司 Method for personalized driving of automatic driving vehicle and system for legalization of driving
CN110809542B (en) * 2017-06-30 2021-05-11 华为技术有限公司 Vehicle control method, device and equipment
KR102342143B1 (en) * 2017-08-08 2021-12-23 주식회사 만도모빌리티솔루션즈 Deep learning based self-driving car, deep learning based self-driving control device, and deep learning based self-driving control method
CN107745711B (en) * 2017-09-05 2021-01-05 百度在线网络技术(北京)有限公司 Method and device for determining route in automatic driving mode
CN107491073B (en) * 2017-09-05 2021-04-02 百度在线网络技术(北京)有限公司 Data training method and device for unmanned vehicle
CN107564363B (en) * 2017-09-05 2019-11-05 百度在线网络技术(北京)有限公司 A kind of method and apparatus for driving mode switching
CN107783943A (en) * 2017-09-05 2018-03-09 百度在线网络技术(北京)有限公司 A kind of appraisal procedure and device of the longitudinally controlled model of end-to-end automated driving system
CN109670597A (en) * 2017-09-20 2019-04-23 顾泽苍 A kind of more purpose control methods of the machine learning of automatic Pilot
CN109543497A (en) * 2017-09-20 2019-03-29 顾泽苍 A kind of construction method of more purposes control machine learning model suitable for automatic Pilot
US10860034B1 (en) 2017-09-27 2020-12-08 Apple Inc. Barrier detection
JP6889274B2 (en) * 2017-10-17 2021-06-18 本田技研工業株式会社 Driving model generation system, vehicle in driving model generation system, processing method and program
CN107845159B (en) * 2017-10-30 2021-05-28 青岛慧拓智能机器有限公司 Operation monitoring system of automatic driving vehicle evaluation system
CN107826105B (en) * 2017-10-31 2019-07-02 清华大学 Translucent automatic Pilot artificial intelligence system and vehicle
US10591914B2 (en) * 2017-11-08 2020-03-17 GM Global Technology Operations LLC Systems and methods for autonomous vehicle behavior control
CN109835410B (en) * 2017-11-28 2022-02-01 湖南中车时代电动汽车股份有限公司 Method for extracting experience data of vehicle running and related device
JP6917878B2 (en) * 2017-12-18 2021-08-11 日立Astemo株式会社 Mobile behavior prediction device
CN108829083A (en) * 2018-06-04 2018-11-16 北京智行者科技有限公司 Control unit for vehicle
US11120688B2 (en) * 2018-06-29 2021-09-14 Nissan North America, Inc. Orientation-adjust actions for autonomous vehicle operational management
CN108983787B (en) * 2018-08-09 2021-09-10 北京智行者科技有限公司 Road driving method
EP3864568A4 (en) * 2018-10-11 2022-05-18 Bayerische Motoren Werke Aktiengesellschaft Snapshot image to train an event detector
US11260872B2 (en) * 2018-10-12 2022-03-01 Honda Motor Co., Ltd. System and method for utilizing a temporal recurrent network for online action detection
CN111409648B (en) * 2019-01-08 2021-08-20 上海汽车集团股份有限公司 Driving behavior analysis method and device
CN109801534A (en) * 2019-02-19 2019-05-24 上海思致汽车工程技术有限公司 Driving behavior hardware-in-the-loop test system based on automatic Pilot simulator
CN109895777A (en) * 2019-03-11 2019-06-18 汉腾汽车有限公司 A kind of shared autonomous driving vehicle system
CN110007675B (en) * 2019-04-12 2021-01-15 北京航空航天大学 Vehicle automatic driving decision-making system based on driving situation map and training set preparation method based on unmanned aerial vehicle
US20200379471A1 (en) * 2019-06-03 2020-12-03 Byton North America Corporation Traffic blocking detection
CN110602393B (en) * 2019-09-04 2020-06-05 南京博润智能科技有限公司 Video anti-shake method based on image content understanding
CN110703732B (en) * 2019-10-21 2021-04-13 北京百度网讯科技有限公司 Correlation detection method, device, equipment and computer readable storage medium
CN112698578B (en) * 2019-10-22 2023-11-14 北京车和家信息技术有限公司 Training method of automatic driving model and related equipment
CN110968839A (en) * 2019-12-05 2020-04-07 深圳鼎然信息科技有限公司 Driving risk assessment method, device, equipment and storage medium
CN111204336B (en) * 2020-01-10 2021-04-30 清华大学 Vehicle driving risk assessment method and device
CN111653125B (en) * 2020-05-28 2021-09-28 长安大学 Method for determining pedestrian mode of zebra crossing of unmanned automobile
CN111717221B (en) * 2020-05-29 2022-11-11 重庆大学 Automatic driving takeover risk assessment and man-machine friendly early warning method and early warning system
CN111984018A (en) * 2020-09-25 2020-11-24 斑马网络技术有限公司 Automatic driving method and device
CN112232254B (en) * 2020-10-26 2021-04-30 清华大学 Pedestrian risk assessment method considering pedestrian acceleration rate
WO2022141912A1 (en) * 2021-01-01 2022-07-07 杜豫川 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
CN112896185A (en) * 2021-01-25 2021-06-04 北京理工大学 Intelligent driving behavior decision planning method and system for vehicle-road cooperation
CN115605777A (en) * 2021-03-01 2023-01-13 杜豫川(Cn) Dynamic target point cloud rapid identification and point cloud segmentation method based on road side sensing unit
CN113548047B (en) * 2021-06-08 2022-11-11 重庆大学 Personalized lane keeping auxiliary method and device based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202005001254U1 (en) * 2005-01-26 2006-06-08 Conrad, Michael Miniature radio-controlled vehicle has a control system with a memory to permit automatic control of the vehicle so that it follows one of a number of courses stored in the memory
CN102030007A (en) * 2010-11-26 2011-04-27 清华大学 Method for acquiring overall dynamics controlled quantity of independently driven-independent steering vehicle
CN102171084A (en) * 2008-09-30 2011-08-31 日产自动车株式会社 System provided with an assistance-controller for assisting an operator of the system, control-operation assisting device, control-operation assisting method, driving-operation assisting device, and driving-operation assisting method
CN105303197A (en) * 2015-11-11 2016-02-03 江苏省邮电规划设计院有限责任公司 Vehicle following safety automatic assessment method based on machine learning
EP2993544A1 (en) * 2013-05-01 2016-03-09 Murata Machinery, Ltd. Autonomous moving body

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100076599A1 (en) * 2008-09-20 2010-03-25 Steven Jacobs Manually driven determination of a region of interest (roi) or a path of interest (poi) for a robotic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202005001254U1 (en) * 2005-01-26 2006-06-08 Conrad, Michael Miniature radio-controlled vehicle has a control system with a memory to permit automatic control of the vehicle so that it follows one of a number of courses stored in the memory
CN102171084A (en) * 2008-09-30 2011-08-31 日产自动车株式会社 System provided with an assistance-controller for assisting an operator of the system, control-operation assisting device, control-operation assisting method, driving-operation assisting device, and driving-operation assisting method
CN102030007A (en) * 2010-11-26 2011-04-27 清华大学 Method for acquiring overall dynamics controlled quantity of independently driven-independent steering vehicle
EP2993544A1 (en) * 2013-05-01 2016-03-09 Murata Machinery, Ltd. Autonomous moving body
CN105303197A (en) * 2015-11-11 2016-02-03 江苏省邮电规划设计院有限责任公司 Vehicle following safety automatic assessment method based on machine learning

Also Published As

Publication number Publication date
CN105892471A (en) 2016-08-24

Similar Documents

Publication Publication Date Title
CN105892471B (en) Automatic driving method and apparatus
CN107972662B (en) Vehicle forward collision early warning method based on deep learning
CN109597087B (en) Point cloud data-based 3D target detection method
CN107609522B (en) Information fusion vehicle detection system based on laser radar and machine vision
CN106096525B (en) A kind of compound lane recognition system and method
CN114384920B (en) Dynamic obstacle avoidance method based on real-time construction of local grid map
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN107499262A (en) ACC/AEB systems and vehicle based on machine learning
CN109919074B (en) Vehicle sensing method and device based on visual sensing technology
CN108230254A (en) A kind of full lane line automatic testing method of the high-speed transit of adaptive scene switching
CN110531376A (en) Detection of obstacles and tracking for harbour automatic driving vehicle
CN111292366B (en) Visual driving ranging algorithm based on deep learning and edge calculation
CN107031661A (en) A kind of lane change method for early warning and system based on blind area camera input
CN107985189A (en) Towards driver's lane change Deep Early Warning method under scorch environment
CN110674674A (en) Rotary target detection method based on YOLO V3
Zhang et al. A framework for turning behavior classification at intersections using 3D LIDAR
CN112810619A (en) Radar-based method for identifying front target vehicle of assistant driving system
CN110515041A (en) A kind of measuring vehicle distance control method and system based on Kalman filter technology
CN111016901A (en) Intelligent driving decision method and system based on deep learning
Zhang et al. Vehicle detection method for intelligent vehicle at night time based on video and laser information
Ren et al. Applying deep learning to autonomous vehicles: A survey
CN113313182B (en) Target identification method and terminal based on radar and video fusion
CN110472508A (en) Lane line distance measuring method based on deep learning and binocular vision
Liu et al. Research on security of key algorithms in intelligent driving system
Ochman Hybrid approach to road detection in front of the vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: B4-006, Maker Plaza, No. 338 Huilongguan East Street, Changping District, Beijing 102208

Applicant after: Beijing Idriverplus Technology Co.,Ltd.

Address before: Room 511, Block A, Xinyuan Science Park, No. 97 Changping Road, Shahe Town, Changping District, Beijing 102206

Applicant before: Beijing Idriverplus Technology Co.,Ltd.

COR Change of bibliographic data
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: Building C-3, Northern Territory, Zhongguancun Dongsheng Science Park, 66 Xixiaokou Road, Haidian District, Beijing, 100176

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 102208

Patentee before: Beijing Idriverplus Technology Co.,Ltd.

CP03 Change of name, title or address