A method for reconstructing road traffic scenes and driver driving behavior based on
multi-sensor data
Technical field
The invention belongs to the fields of intelligent transportation and image recognition, and in particular relates to a method
for reconstructing road traffic scenes and driver driving behavior based on multi-sensor data.
Background art
For an automobile to achieve truly driverless operation, it must be able to perceive and identify the objects around it and to
know its own precise location. These two capabilities are the core of driverless technology. The sequence of driving actions a
driver takes is determined by the traffic environment at that moment. Whether for research on autonomous driving algorithms or
for driver assistance, it is essential that the vehicle continuously senses its surroundings while driving, collects data,
performs identification, detection, and tracking of static and dynamic objects, and carries out systematic computation and
analysis together with navigation map data.
Mainstream driverless R&D at this stage has uniformly chosen lidar as the perception device. The advantage of lidar
is its wide detection range and high detection accuracy. Its shortcomings, however, are equally obvious: performance degrades
sharply in extreme weather such as rain, snow, and fog, the volume of data collected is enormous, and the hardware is quite
expensive. Millimeter-wave radar, an indispensable core sensor type for ADAS, is a relatively mature technology, but its
shortcomings are also plain: its detection range is directly constrained by its frequency band, it cannot perceive pedestrians,
and it cannot accurately model all surrounding obstacles.
Vision is the most important means by which humans perceive the world. Biological studies show that about 75% of the external
information humans acquire comes through the visual system, and in a driving environment this proportion can reach 90%. If the
capabilities of the human visual system could be applied to the field of autonomous driving, the accuracy of autonomous driving
would undoubtedly improve substantially.
In view of this, the method proposed by the present invention combines monocular vision and radar data to perceive the
scene outside the vehicle accurately. Target detection realized with convolutional neural networks on monocular vision is highly
extensible: by adjusting the recall rate, more detection results can be obtained, avoiding the problem that radar fails to
recognize some obstacles, and by continually training new models the method can adapt to different traffic scenes and steadily
expand the set of recognizable obstacles. The precise measurements from the millimeter-wave radar can in turn be used to
reversely calibrate the camera parameters used in monocular vision, improving the accuracy of range and range-rate computation
while also assisting the target detection of monocular vision. The precision of existing vehicle-trajectory reconstruction
algorithms depends on high-quality data. To address this, the present invention fuses two-dimensional and three-dimensional
reconstruction algorithms and builds a Kalman filter from the monocular vision data and the GPS data to denoise the data and
correct the reconstruction result, yielding higher trajectory-reconstruction precision.
On the basis of the data pairs obtained by the reconstruction, deep-learning techniques for autonomous driving decision and
control systems oriented to complex environments can be studied, so that autonomous driving systems gradually form good driving
habits through data-driven self-learning. The results can also help improve drivers' driving habits and serve as an important
reference for manufacturers formulating development strategies and for relevant authorities formulating and implementing policy.
Summary of the invention
The main problem solved by the present invention is as follows: various sensing devices mounted on a vehicle collect and store
diverse driving-scene data while the vehicle drives on real roads; big-data techniques such as statistical analysis and data
mining are then applied for in-depth analysis from multiple angles, including the data dimension, vehicle dimension, time
dimension, and region dimension, to compare users' driving behavior and discover its commonalities and individual
characteristics. Image-scene understanding based on deep learning, with the data further annotated by radar data, reconstructs
the traffic scene outside the vehicle and ultimately forms <traffic scene, driving behavior> data pairs. This overcomes the
defects of existing reconstruction techniques, namely a single data source, heavy reliance on expensive equipment to obtain
high-quality data, and low reconstruction precision, and provides a method for reconstructing road traffic scenes and driver
driving behavior based on multi-sensor data.
The technical solution of the present invention: a method for reconstructing road traffic scenes and driver driving behavior
based on multi-sensor data, which achieves accurate reconstruction of the road traffic scene and the driver's driving behavior
through the fusion of multi-sensor data and multiple reconstruction algorithms. It specifically includes: mutual correction of
multi-source data to achieve precise reconstruction of the vehicle driving trajectory; precise measurement of obstacle speed and
distance based on monocular vision and millimeter-wave radar; and multi-source data fusion to generate <road traffic scene,
driving behavior> data pairs.
It is implemented as follows:
(1) Taking gyroscope, accelerometer, and GPS data as the basis and monocular camera data as an auxiliary source, combine
two-dimensional and three-dimensional reconstruction algorithms and perform mutual correction between the GPS data and the
gyroscope/accelerometer data, so as to achieve precise reconstruction of the vehicle driving trajectory and obtain the
vehicle's trajectory;
(2) On the basis of CAN-bus data and OBD data, reconstruct the driver's concrete operations under various scenes, including
light operation, steering-wheel control, and throttle and brake control;
(3) Using the monocular camera and the radar together with convolutional neural networks, complete target detection of the
traffic participants outside the vehicle and obtain the types of the obstacles around the vehicle;
(4) Obtain a rough measurement of obstacle distance from the monocular camera data using geometric relations; obtain the
accurate distance between an obstacle and the ego vehicle from the millimeter-wave radar data, and on the basis of that distance
reversely calibrate the camera parameters; at the same time the camera pitch angle can also be calibrated from the parallel
relationship of the lane lines; the two calibration results verify each other, yielding an accurate camera pitch angle that is
used for measuring the distances of the other obstacles, finally obtaining the accurate distances between the surrounding
obstacles and the ego vehicle;
(5) Measure the speed of obstacles in the same lane ahead of the vehicle based on the monocular camera data, and rely mainly on
the millimeter-wave radar to measure the speed of traffic participants outside that lane, obtaining the relative velocity
between each surrounding obstacle and the ego vehicle;
(6) Fuse the reconstruction results: the vehicle driving trajectory obtained in step (1), the driver's concrete operations
obtained in step (2), the obstacle types obtained in step (3), the obstacle-to-ego distances obtained in step (4), and the
obstacle-to-ego relative velocities obtained in step (5), generating <traffic scene, driving behavior> data pairs. The traffic
scene refers to: the types of the various traffic participants around the vehicle, including pedestrians, vehicles, cyclists,
and traffic signs; and the states of the traffic participants, including the distances and relative velocities of moving
objects, the topology of the road, and traffic signs. The driving behavior refers to the driver's concrete operations in the
vehicle, including light operation, steering-wheel control, throttle and brake control, and the vehicle driving trajectory.
Step (1) is implemented as follows:
(11) On the three-dimensional scale, resolve the attitude relation of the vehicle coordinate system relative to the reference
coordinate system using the quaternion method, and combine the quaternion method with a Kalman filtering algorithm to improve
the attitude-resolution accuracy and real-time performance of the strapdown inertial navigation system (SINS);
(12) Build a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the reconstruction
result, obtaining a more accurate vehicle driving trajectory.
Step (2) is implemented as follows:
Various driving information is available from the CAN bus and the OBD interface, including speed, fuel consumption, steering
wheel, turn signals, throttle, and brake pedal. These data are uploaded to a server through a terminal, and big-data techniques
such as statistical analysis and data mining are used to reconstruct how the driver controls the vehicle through operations on
the steering wheel, brake pedal, gas pedal, turn signals, and other lights. Each concrete operation is labeled with the
corresponding time so that it can later be fused with the other data.
Step (3) is implemented as follows:
(31) Using a Faster R-CNN convolutional neural network, the four steps of target detection, namely candidate-region generation,
feature extraction, classification, and bounding-box regression, are unified within a single deep network framework, realizing
target detection of the traffic participants in the field of view; each detection is marked with a corresponding rectangular box
together with a confidence score;
(32) The millimeter-wave radar identifies the obstacles around the vehicle and records the corresponding time.
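The earlier claim that more detections can be obtained "by adjusting the recall rate" corresponds in practice to lowering the confidence threshold applied to the detector's scored boxes. A schematic sketch of that trade-off; the boxes and scores are made-up values, not real detector output:

```python
# Each detection from the network is a rectangle plus a confidence score.
# Lowering the threshold keeps more boxes (higher recall), helping recover
# obstacles the radar misses; raising it trades recall for precision.
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def filter_detections(dets: List[Tuple[Box, float]], thresh: float):
    """Keep detections whose confidence score is at least `thresh`."""
    return [(box, score) for box, score in dets if score >= thresh]

dets = [((120, 80, 40, 90), 0.95),    # pedestrian, high confidence
        ((300, 60, 180, 120), 0.88),  # vehicle
        ((20, 100, 30, 70), 0.42)]    # partially occluded obstacle

assert len(filter_detections(dets, 0.8)) == 2   # strict threshold
assert len(filter_detections(dets, 0.4)) == 3   # relaxed threshold: higher recall
```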
Step (4) is implemented as follows:
(41) According to the camera projection model, derive geometrically the relation between the road-surface coordinate system and
the image coordinate system. The road plane captured by the camera is a trapezoidal area corresponding to a rectangular region
in the image-plane coordinate system, and the points of the road-surface coordinate system and the image-plane coordinate system
correspond one to one;
(42) Obtain the image-plane coordinates of the bottom edge of the obstacle's rectangular box and of the midpoint of the bottom
edge of the image plane, and derive their road-surface coordinates through the geometric relation;
(43) Apply the two-point distance formula to the two road-surface coordinates obtained in step (42) to obtain the distance
between the two points;
(44) Use the millimeter-wave radar to measure the distance of a certain obstacle precisely, and by reversely solving the
geometric derivation, recalibrate the camera parameters; here the camera parameter refers to the camera pitch angle, so an
accurate camera pitch angle is obtained;
(45) At the same time, obtain the lane lines in the image plane with a machine-vision algorithm, determine the straight lines on
the road plane corresponding to the lane lines in the image by the two-points-determine-a-line method, and solve for the camera
pitch angle from the parallelism of the two lines;
(46) The camera parameters obtained in steps (44) and (45) verify each other; the accurate camera pitch angle is solved in real
time and used in the measurement of the other obstacle distances to obtain the accurate distances between the obstacles and the
ego vehicle.
Step (5) is implemented as follows:
(51) Using the method of step (4), obtain the two distances S1 and S2 between a vehicle ahead and the ego vehicle in the Nth
frame and the (N+K)th frame;
(52) From the ego vehicle's speed and the time difference T between the Nth frame and the (N+K)th frame, calculate the distance
S3 traveled by the ego vehicle;
(53) Calculate the distance traveled by the vehicle ahead: S = S2 + S3 - S1;
(54) Calculate its speed V = S/T. For traffic participants not in the lane ahead of the vehicle, speed measurement relies on the
millimeter-wave radar. The relative velocities between the surrounding obstacles and the ego vehicle are finally obtained.
Step (6) is implemented as follows:
(61) Label the obtained vehicle driving trajectory, the driver's concrete operations, the types of the surrounding obstacles,
the obstacle-to-ego distances, and the obstacle-to-ego relative velocities with their respective times;
(62) Join the above reconstruction results using the time as the unique primary key, fusing data with the same timestamp to form
<traffic scene, driving behavior> data pairs.
The advantages of the present invention over the prior art are as follows:
The traffic-scene reconstruction method based on multi-sensor data proposed by the present invention ultimately forms <traffic
scene, driving behavior> data pairs with high reconstruction precision.
In vehicle-trajectory reconstruction, the fusion of two-dimensional and three-dimensional reconstruction algorithms is realized,
and a Kalman filter built from the monocular vision data and the GPS data denoises the data and corrects the reconstruction
result, so the precision of the trajectory reconstruction is higher. In target detection and in measuring obstacle distance and
speed, monocular vision and millimeter-wave radar are used together. On the one hand, target detection realized with
convolutional neural networks on monocular vision is highly extensible: by adjusting the recall rate, more detection results can
be obtained, avoiding the problem that radar fails to recognize some obstacles, and by continually training new models the
method adapts to different traffic scenes and steadily expands the set of recognizable obstacles. On the other hand, the precise
measurements of the millimeter-wave radar enable reverse calibration of the camera parameters in monocular vision, improving the
accuracy of range and range-rate computation.
Meanwhile it can be studied on the basis of the data pair that reduction obtains and realize the automatic Pilot decision control towards complex environment
System depth learning art processed is practised so that automated driving system be made to gradually form good driving by the self-study of data-driven
It is used, it generates the dynamic response for meeting Chinese transportation feature and driving habit and ability of making decisions on one's own, promotes the safety of automatic Pilot
Property and validity;Driver driving habit can be helped improve;It can also be used as research and development manufacturer's work out development strategy and relevant department
The important reference foundation of policy making and implementation.
Brief description of the drawings
Fig. 1 is a schematic diagram of the method of the present invention;
Fig. 2 is the trajectory-reconstruction flow chart of the present invention;
Fig. 3 shows the camera projection model of the present invention, with the projection relation in the upper figure and the
projection plane in the lower figure.
Specific embodiment
As shown in Fig. 1, the sequence of operations a driver performs while driving depends on the surrounding traffic conditions at
that moment, such as the motion states of the main surrounding traffic participants, traffic signs, traffic lights, weather, and
road conditions. The function of the present invention essentially consists in: based on multi-sensor data from the CAN bus,
OBD, gyroscope, accelerometer, GPS/BD, millimeter-wave radar, and monocular camera, reconstructing the road traffic scene
outside the vehicle and the driver's driving behavior, and forming <traffic scene, driving behavior> data pairs. The traffic
scene mainly includes: the types of the various traffic participants around the vehicle, such as pedestrians, vehicles,
cyclists, and traffic signs; and the states of the traffic participants, such as the distances and relative velocities of moving
objects, the topology of the road, and traffic signs. The driver's driving behavior mainly includes: the driver's concrete
operations in the vehicle, such as light operation, steering-wheel control, throttle and brake control, and the vehicle driving
trajectory.
1. Vehicle driving trajectory reconstruction
(1) On the three-dimensional scale, resolve the attitude relation of the vehicle coordinate system relative to the reference
coordinate system. The methods currently used in SINS to describe the attitude of the carrier's moving coordinate system
relative to the reference coordinate system mainly include the Euler-angle method, the direction-cosine method, the
trigonometric-function method, the Rodrigues-parameter method, the quaternion method, and the equivalent rotation-vector method.
The present invention focuses on the quaternion method, combining it with a Kalman filtering algorithm to improve the
attitude-resolution accuracy and real-time performance of SINS;
(2) Build a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the reconstruction
result, obtaining a more accurate vehicle driving trajectory.
2. Analyzing driver driving behavior based on CAN-bus and OBD data
Various driving information is available from the CAN bus and the OBD interface, such as speed, fuel consumption, steering
wheel, turn signals, throttle, and brake pedal. These data can be uploaded to a server through a terminal, where big-data
techniques such as statistical analysis and data mining are used to reconstruct how the driver controls the vehicle through
operations on the steering wheel, brake pedal, gas pedal, turn signals, and other lights. Each concrete operation is labeled
with the corresponding time so that it can later be fused with the other data.
3. Realizing precise distance measurement based on monocular machine vision and millimeter-wave radar
(1) Using a Faster R-CNN convolutional neural network, the four steps of target detection (candidate-region generation, feature
extraction, classification, and bounding-box regression) are unified within a single deep network framework, realizing target
detection of the traffic participants in the field of view, each marked with a corresponding rectangular box;
(2) According to the camera projection model, derive geometrically the relation between the road-surface coordinate system and
the image coordinate system, as shown in Fig. 3. In the upper figure of Fig. 3, plane ABU represents the road plane, and ABCD is
the trapezoidal region of the road plane captured by the camera. Point O is the center of the camera lens, OG is the camera
optical axis, G is the intersection of the optical axis with the road plane, and I is the vertical projection of point O onto
the road plane. In the road-surface coordinate system, G is defined as the coordinate origin and the vehicle's forward direction
as the Y axis. The points corresponding to G, A, B, C, D in the image plane are shown in the lower figure of Fig. 3, where a, b,
c, d are the four corners of the image rectangle and H and W are the height and width of the image plane. The midpoint g of the
image rectangle is defined as the origin of the image coordinate system, with the y axis representing the vehicle's forward
direction;
(3) Obtain the image-plane coordinates of the bottom edge of the obstacle's rectangular box and of the midpoint of the bottom
edge of the image plane, and derive their road-surface coordinates through the geometric relation;
(4) Apply the two-point distance formula to the two road-surface coordinates obtained in (3) to obtain the distance between the
two points;
(5) Use the millimeter-wave radar to measure the distance of a certain obstacle precisely, and by reversely solving the
geometric derivation, recalibrate the camera parameters; here the camera parameter refers to the camera pitch angle, so an
accurate camera pitch angle is obtained;
(6) At the same time, since lane markings in a real road environment are parallel lines, a machine-vision algorithm is first
used to obtain the lane lines in the image plane; the two-points-determine-a-line method then determines the straight lines on
the road plane corresponding to the lane lines in the image, and the camera pitch angle is solved from the parallelism of the
two lines, instead of always using the pitch angle set at initialization;
(7) The camera parameters obtained in steps (5) and (6) verify each other; the accurate camera pitch angle is solved in real
time and used in the measurement of the other obstacle distances to obtain the accurate distances between the obstacles and the
ego vehicle.
4. Realizing precise speed measurement based on monocular machine vision and millimeter-wave radar
An image sequence of the moving object ahead is captured and analyzed with image-processing and visual-measurement techniques to
obtain the real-time displacement of the object ahead between two frames, from which the object's real-time speed is calculated.
(1) Using the distance-measurement method of step 3, obtain the two distances S1 and S2 between a vehicle ahead and the ego
vehicle in the Nth frame and the (N+K)th frame;
(2) From the ego vehicle's speed and the time difference T between the Nth frame and the (N+K)th frame, calculate the distance
S3 traveled by the ego vehicle;
(3) Calculate the distance traveled by the vehicle ahead: S = S2 + S3 - S1;
(4) Calculate its speed V = S/T.
(5) Use the millimeter-wave radar to obtain accurate velocity values for correcting the time and distance deviations in the
monocular-vision speed-measurement algorithm, while also measuring the speeds of traffic participants not in the lane ahead of
the vehicle, finally obtaining the relative velocities between the surrounding obstacles and the ego vehicle.
5. Data fusion, generating <traffic scene, driving behavior> data pairs
(1) Label the obtained vehicle driving trajectory, the driver's concrete operations, the types of the surrounding obstacles, the
obstacle-to-ego distances, and the obstacle-to-ego relative velocities with their respective times;
(2) Join the above reconstruction results using the time as the unique primary key, fusing data with the same timestamp to form
<traffic scene, driving behavior> data pairs.
The above embodiments are provided only for the purpose of describing the present invention and are not intended to limit its
scope. The scope of the invention is defined by the following claims. Various equivalent replacements and modifications made
without departing from the spirit and principles of the present invention shall all fall within the scope of the invention.