CN107235044B - A multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior - Google Patents

A multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior

Info

Publication number
CN107235044B
CN107235044B CN201710401034.5A
Authority
CN
China
Prior art keywords
vehicle
data
obstacle
traffic
driving behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710401034.5A
Other languages
Chinese (zh)
Other versions
CN107235044A (en)
Inventor
黄坚
金玉辉
郭袭
金天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuangketianxia (Beijing) Technology Development Co.,Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201710401034.5A priority Critical patent/CN107235044B/en
Publication of CN107235044A publication Critical patent/CN107235044A/en
Application granted granted Critical
Publication of CN107235044B publication Critical patent/CN107235044B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W40/09 Driving style or behaviour
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2554/00 Input parameters relating to objects
    • B60W2555/00 Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W2555/60 Traffic rules, e.g. speed limits or right of way
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of traffic signs
    • G06V20/584 Recognition of vehicle lights or traffic lights
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention provides a multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior. By fusing multiple sensor data streams and multiple reconstruction algorithms, it achieves accurate reconstruction of the road traffic scene and of the driver's driving behavior. Specifically, it includes: mutual correction of multi-source data to accurately reconstruct the vehicle's driving trajectory; accurate measurement of obstacle speed and distance based on monocular vision and millimeter-wave radar; and multi-source data fusion to generate <road traffic scene, driving behavior> data pairs. The invention overcomes the defects of existing reconstruction techniques, namely a single data source, heavy reliance on expensive equipment for high-quality data, and low reconstruction accuracy.

Description

A multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior
Technical field
The invention belongs to the fields of intelligent transportation and image recognition, and in particular relates to a multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior.
Background technique
For an automobile to be truly driverless, it must be able to perceive and identify the objects around it and to know its own precise location; these two capabilities are the core of autonomous-driving technology. The sequence of actions a driver takes follows from the traffic environment at that moment. Whether for research on autonomous-driving algorithms or for driver assistance, it is essential that the vehicle continuously senses its surroundings while driving, collects data, performs recognition, detection, and tracking of static and dynamic objects, and combines this with navigation map data for systematic computation and analysis.
In today's mainstream autonomous-driving research and development, lidar is the sensing device of choice. Its advantages are a wide detection range and high detection accuracy, but its drawbacks are equally obvious: it performs poorly in extreme weather such as rain, snow, and fog, produces enormous volumes of data, and is very expensive. Millimeter-wave radar, an indispensable core sensor type for ADAS, is a relatively mature technology; its shortcomings, however, are also plain: its detection range is directly limited by its frequency band, it cannot perceive pedestrians, and it cannot accurately model all surrounding obstacles.
Vision is the most important means by which humans perceive the world: biological studies show that humans obtain 75% of external information through the visual system, and in a driving environment this proportion rises to as much as 90%. If the human visual paradigm could be applied to autonomous driving, its accuracy would undoubtedly improve substantially.
In view of this, the method proposed by the present invention combines monocular vision and radar data to perceive the scene outside the vehicle accurately. Monocular-vision object detection based on convolutional neural networks is highly extensible: adjusting the recall rate yields more detections, avoiding the radar's failure to recognize some obstacles, and continually training new models adapts the detector to different traffic scenes and steadily enlarges the set of recognizable obstacles. The precise measurements of the millimeter-wave radar, in turn, enable a reverse correction of the camera parameters used in monocular vision, improving the accuracy of the ranging and speed-measurement algorithms while also assisting monocular object detection. Existing vehicle-trajectory reconstruction algorithms depend on high-quality data for their accuracy; to address this, the present invention fuses two-dimensional and three-dimensional reconstruction algorithms and builds a Kalman filter from monocular-vision and GPS data to denoise the data and correct the reconstruction result, yielding higher-precision trajectories.
On the basis of the reconstructed data pairs, deep-learning research on autonomous-driving decision and control systems for complex environments becomes possible, so that autonomous-driving systems can gradually form good driving habits through data-driven self-learning. The data can also help improve drivers' habits and serve as an important reference for manufacturers' development strategies and for the formulation and implementation of government policy.
Summary of the invention
The main problem solved by the present invention is: using various sensing devices mounted on a vehicle, collect and store diverse driving-scene data while the vehicle travels on real roads; apply big-data techniques such as statistical analysis and data mining to analyze and compare user driving behavior in depth across multiple dimensions (data, vehicle, time, and region) and discover its common and individual traits. Using deep-learning-based image scene understanding, with radar data providing further annotation, the traffic scene outside the vehicle is reconstructed, ultimately forming <traffic scene, driving behavior> data pairs. This overcomes the defects of existing reconstruction techniques (a single data source, heavy reliance on expensive equipment for high-quality data, and low reconstruction accuracy) and provides a multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior.
The technical solution of the present invention is a multi-sensor-data-based method for reconstructing road traffic scenes and driver driving behavior, which fuses multiple sensor data streams and multiple reconstruction algorithms to reconstruct the road traffic scene and the driver's driving behavior accurately. It specifically includes: mutual correction of multi-source data for accurate reconstruction of the vehicle's driving trajectory; accurate measurement of obstacle speed and distance based on monocular vision and millimeter-wave radar; and multi-source data fusion to generate <road traffic scene, driving behavior> data pairs.
The method is implemented as follows:
(1) Taking gyroscope, accelerometer, and GPS data as the primary sources and monocular camera data as an auxiliary source, combine two-dimensional and three-dimensional reconstruction algorithms and mutually correct the GPS data against the gyroscope and accelerometer data, achieving accurate reconstruction of the vehicle's driving trajectory and obtaining the vehicle's trajectory;
(2) On the basis of CAN-bus and OBD data, reconstruct the driver's concrete operations in each scene, including light signaling, steering-wheel control, and throttle and brake control;
(3) Using the monocular camera and the radar together with convolutional neural networks, detect the traffic participants outside the vehicle and obtain the types of the obstacles around it;
(4) Obtain a coarse obstacle distance from the monocular camera data by geometric relations, and the accurate distance between an obstacle and the ego vehicle from the millimeter-wave radar data; on the basis of that distance, calibrate the camera parameters in reverse, while the camera pitch angle can also be calibrated from the parallelism of the lane lines. The two calibration results verify each other and yield an accurate camera pitch angle, which is then used to measure the distances of the remaining obstacles, finally giving the accurate distance between each surrounding obstacle and the ego vehicle;
(5) Measure the speed of obstacles in the lane directly ahead of the vehicle from the monocular camera data, and the speed of traffic participants outside that lane mainly with the millimeter-wave radar, obtaining the relative velocity between each surrounding obstacle and the ego vehicle;
(6) Fuse the reconstruction results: the vehicle trajectory from step (1), the driver's concrete operations from step (2), the obstacle types from step (3), the obstacle-to-ego distances from step (4), and the obstacle-to-ego relative velocities from step (5), generating <traffic scene, driving behavior> data pairs. Here the traffic scene refers to: the types of the various traffic participants around the vehicle, including pedestrians, vehicles, cyclists, and traffic signs; and the states of the traffic participants, including the distances and relative velocities of moving objects, the topology of the road, and the traffic signs. The driving behavior refers to the driver's concrete in-vehicle operations, including light signaling, steering-wheel control, and throttle and brake control, together with the vehicle's driving trajectory.
Step (1) is implemented as follows:
(11) In three dimensions, resolve the attitude of the vehicle coordinate system relative to the reference coordinate system using the quaternion method, and combine the quaternion method with a Kalman filtering algorithm to improve the accuracy and real-time performance of the SINS attitude solution;
(12) Build a Kalman filter from the monocular-vision data and the GPS data to denoise the data and correct the reconstruction result, obtaining a more accurate vehicle trajectory.
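The Kalman-filter correction of steps (11)-(12) can be illustrated with a deliberately simplified sketch: the patent's filter fuses monocular-vision and GPS data over a multi-dimensional state, while the toy version below only smooths a noisy 1-D position track under a constant-position model. All numeric values are assumed for illustration.

```python
def kalman_smooth(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter: x_k = x_{k-1} + w (process), z_k = x_k + v (measurement)."""
    x, p = measurements[0], 1.0          # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                           # predict: covariance grows by process noise q
        k = p / (p + r)                  # Kalman gain against measurement noise r
        x += k * (z - x)                 # update the state toward the measurement
        p *= 1.0 - k                     # shrink the posterior covariance
        out.append(x)
    return out

noisy = [0.0, 1.2, 0.8, 1.1, 0.9, 1.05]  # hypothetical jittery GPS positions (m)
smooth = kalman_smooth(noisy)
```

With a process noise q small relative to the measurement noise r, the filter trusts its running estimate more than any single fix, so the smoothed track jumps less than the raw one; the actual trajectory reconstruction would run a multi-state filter over position and velocity.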
Step (2) is implemented as follows:
From the CAN bus and the OBD interface, various driving signals are available, including speed, fuel consumption, steering wheel, turn signals, throttle, and brake pedal. These data are uploaded to a server by an on-board terminal, and big-data techniques such as statistical analysis and data mining are used to reconstruct how the driver controls the vehicle through operations on the steering wheel, brake pedal, accelerator pedal, turn signals, and other lights. Each concrete operation is labeled with its timestamp for convenient later fusion with the other data.
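A minimal sketch of the timestamp-labeling idea described above; the signal names and values are illustrative, not from the patent. Each record carries its time, and grouping by time prepares the later time-keyed fusion.

```python
# Hypothetical CAN/OBD records: each operation carries its timestamp, which
# later serves as the join key with the other reconstructed streams.
can_log = [
    {"t": 12.0, "signal": "steering_wheel_deg", "value": -5.0},
    {"t": 12.0, "signal": "throttle_pct", "value": 18.0},
    {"t": 12.5, "signal": "brake_pedal_pct", "value": 0.0},
]

def label_by_time(log):
    """Group each record's signal/value pairs under its timestamp."""
    grouped = {}
    for rec in log:
        grouped.setdefault(rec["t"], {})[rec["signal"]] = rec["value"]
    return grouped

ops = label_by_time(can_log)
```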
Step (3) is implemented as follows:
(31) A Faster R-CNN convolutional neural network unifies the four stages of object detection (candidate-region generation, feature extraction, classification, and bounding-box regression) within a single deep network, detects the traffic participants in the field of view, marks each with a rectangle, and outputs a confidence score;
(32) The millimeter-wave radar identifies the obstacles around the vehicle and records the corresponding time.
Step (4) is implemented as follows:
(41) From the camera projection model, derive the geometric relationship between the road-surface coordinate system and the image coordinate system. The road plane imaged by the camera is a trapezoidal region corresponding to a planar region in the image coordinate system, and points in the road-surface coordinate system correspond one-to-one with points in the image coordinate system;
(42) Compute the image-plane coordinates of the bottom edge of an obstacle's rectangle and of the midpoint of the image plane's bottom edge, and convert both to road-surface coordinates through the geometric relationship;
(43) Apply the two-point distance formula to the two road-surface coordinates obtained in step (42) to get the distance between the two points;
(44) Measure the distance to one obstacle precisely with the millimeter-wave radar, invert the geometric derivation, and recalibrate the camera parameter, namely the camera pitch angle, obtaining an accurate pitch value;
(45) At the same time, extract the lane lines in the image plane with a machine-vision algorithm, determine the lines on the road plane corresponding to the lane lines by the two-points-determine-a-line method, and solve for the camera pitch angle from the parallelism of the two lines;
(46) Mutually verify the camera parameters obtained in steps (44) and (45), solving in real time for an accurate camera pitch angle, which is used in the distance measurements of the remaining obstacles to obtain the accurate distance between each obstacle and the ego vehicle.
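The geometry of steps (41)-(44) can be sketched with a flat-road pinhole model: a pixel row maps to a longitudinal ground distance through the camera height, focal length, and pitch angle, and a single radar-measured distance inverts the same model to recalibrate the pitch. All numeric parameters below are assumed, not taken from the patent.

```python
import math

def ground_distance(y_px, cy, f_px, cam_h, pitch):
    """Longitudinal distance to the ground point imaged at pixel row y_px
    (flat road, camera height cam_h in meters, focal length f_px in pixels)."""
    ray = math.atan((y_px - cy) / f_px)   # ray angle below the optical axis
    return cam_h / math.tan(pitch + ray)

def pitch_from_radar(d_radar, y_px, cy, f_px, cam_h):
    """Invert the same model: recover the camera pitch from one radar distance."""
    return math.atan(cam_h / d_radar) - math.atan((y_px - cy) / f_px)

# Assumed setup: camera 1.2 m high, 700 px focal length, principal row 240,
# pitched 2 degrees down; an obstacle's box bottom sits at image row 300.
d = ground_distance(300.0, 240.0, 700.0, 1.2, math.radians(2.0))
pitch = pitch_from_radar(d, 300.0, 240.0, 700.0, 1.2)
```

Because `pitch_from_radar` is the exact algebraic inverse of `ground_distance`, feeding the computed distance back recovers the assumed pitch, which is the mutual-verification idea of step (46) in miniature.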
Step (5) is implemented as follows:
(51) Using the method of step (4), obtain the two distances S1 and S2 between the ego vehicle and a front vehicle in frame N and frame N+K;
(52) From the ego vehicle's speed and the time difference T between frame N and frame N+K, compute the distance S3 traveled by the ego vehicle;
(53) Compute the distance traveled by the front vehicle, S = S2 + S3 - S1;
(54) Compute the front vehicle's speed, V = S / T. For traffic participants not in the lane ahead, speed measurement relies on the millimeter-wave radar, finally yielding the relative velocity between each surrounding obstacle and the ego vehicle.
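Steps (51)-(54) reduce to simple arithmetic; the sketch below uses assumed numbers (ego vehicle at 20 m/s, gap to the front vehicle growing from 10 m to 12 m over 0.5 s).

```python
def lead_vehicle_speed(s1, s2, ego_speed, t):
    """Front-vehicle speed from two monocular distances and the ego motion."""
    s3 = ego_speed * t          # distance the ego vehicle travels between frames
    s = s2 + s3 - s1            # distance the front vehicle travels (S = S2 + S3 - S1)
    return s / t                # V = S / T

v = lead_vehicle_speed(10.0, 12.0, 20.0, 0.5)   # → 24.0 m/s
```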
Step (6) is implemented as follows:
(61) Label the reconstructed vehicle trajectory, the driver's concrete operations, the types of the surrounding obstacles, the obstacle-to-ego distances, and the obstacle-to-ego relative velocities each with timestamps;
(62) Join the above reconstruction results using the timestamp as the unique primary key, fusing data with identical timestamps together to form <traffic scene, driving behavior> data pairs.
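The time-keyed join of step (62) can be sketched as follows; the streams, timestamps, and field names are illustrative only.

```python
# Hypothetical per-timestamp streams produced by the earlier reconstruction
# steps; only instants present in both streams yield a complete pair.
scene = {12.0: {"obstacle": "pedestrian", "dist_m": 8.5, "rel_v_mps": -1.2}}
behavior = {12.0: {"brake_pct": 40.0, "steering_deg": 0.0},
            12.5: {"brake_pct": 10.0, "steering_deg": 0.0}}

# Join on the timestamp as the unique primary key to form
# <traffic scene, driving behavior> pairs.
pairs = [(t, scene[t], behavior[t]) for t in sorted(scene) if t in behavior]
```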
Compared with the prior art, the advantages of the present invention are:
The multi-sensor-data-based traffic-scene reconstruction method proposed here ultimately forms <traffic scene, driving behavior> data pairs with high reconstruction accuracy.
For trajectory reconstruction, two-dimensional and three-dimensional reconstruction algorithms are fused, and a Kalman filter built from monocular-vision and GPS data denoises the data and corrects the reconstruction result, so the reconstructed trajectory is more precise. For object detection and for measuring obstacle distance and speed, monocular vision and millimeter-wave radar are used together. On the one hand, CNN-based monocular detection is highly extensible: adjusting the recall rate yields more detections, avoiding the radar's failure to recognize some obstacles, and continually training new models adapts the detector to different traffic scenes and enlarges the set of recognizable obstacles. On the other hand, the radar's precise measurements enable a reverse correction of the camera parameters in monocular vision, improving the accuracy of the ranging and speed-measurement algorithms.
Meanwhile, on the basis of the reconstructed data pairs, deep-learning research on autonomous-driving decision and control for complex environments becomes possible, so that autonomous-driving systems gradually form good driving habits through data-driven self-learning and acquire dynamic-response and autonomous decision capabilities suited to Chinese traffic characteristics and driving habits, improving the safety and effectiveness of autonomous driving. The data can also help improve drivers' habits and serve as an important reference for manufacturers' development strategies and for the formulation and implementation of government policy.
Detailed description of the invention
Fig. 1 is a schematic diagram of the method of the present invention;
Fig. 2 is the trajectory-reconstruction flowchart of the present invention;
Fig. 3 shows the camera projection model of the present invention, where the upper part shows the projection relationship and the lower part the projection plane.
Specific embodiment
As shown in Fig. 1, the sequence of operations a driver makes while driving follows from the surrounding traffic conditions at that moment, such as the motion states of the main surrounding traffic participants, traffic signs, traffic lights, weather, and road conditions. The function of the present invention essentially consists in: based on multi-sensor data from the CAN bus, OBD, gyroscope, accelerometer, GPS/BD, millimeter-wave radar, and monocular camera, reconstructing the road traffic scene outside the vehicle and the driver's driving behavior, forming <traffic scene, driving behavior> data pairs. The traffic scene mainly includes the types of the various traffic participants around the vehicle, such as pedestrians, vehicles, cyclists, and traffic signs, and the states of the traffic participants, such as the distances and relative velocities of moving objects, the topology of the road, and the traffic signs. The driver's driving behavior mainly includes the driver's concrete in-vehicle operations, such as light signaling, steering-wheel control, and throttle and brake control, together with the vehicle's driving trajectory.
1. Vehicle-trajectory reconstruction
(1) In three dimensions, resolve the attitude of the vehicle coordinate system relative to the reference coordinate system. The methods currently used in SINS to describe the attitude of a moving carrier relative to a reference frame include the Euler-angle method, the direction-cosine method, the trigonometric method, the Rodrigues-parameter method, the quaternion method, and the equivalent-rotation-vector method. The focus here is on the quaternion method, combined with a Kalman filtering algorithm to improve the accuracy and real-time performance of the SINS attitude solution;
(2) A Kalman filter built from the monocular-vision data and the GPS data denoises the data and corrects the reconstruction result, yielding a more accurate vehicle trajectory.
2. Analyzing driver driving behavior from CAN-bus and OBD data
From the CAN bus and the OBD interface, various driving signals are available, such as speed, fuel consumption, steering wheel, turn signals, throttle, and brake pedal. These data can be uploaded to a server by an on-board terminal, and big-data techniques such as statistical analysis and data mining reconstruct how the driver controls the vehicle through operations on the steering wheel, brake pedal, accelerator pedal, turn signals, and other lights. Each concrete operation is labeled with its timestamp for convenient later fusion with the other data.
3. Accurate distance measurement based on monocular machine vision and millimeter-wave radar
(1) A Faster R-CNN convolutional neural network unifies the four stages of object detection (candidate-region generation, feature extraction, classification, and bounding-box regression) within a single deep network and detects the traffic participants in the field of view, marking each with a rectangle;
(2) From the camera projection model, the geometric relationship between the road-surface coordinate system and the image coordinate system is derived, as shown in Fig. 3. In the upper part of Fig. 3, the plane ABU represents the road plane and ABCD is the trapezoidal region of the road plane captured by the camera; O is the center of the camera lens, OG the camera's optical axis, G the intersection of the optical axis with the road plane, and I the vertical projection of point O onto the road plane. In the road-surface coordinate system, G is defined as the origin and the vehicle's forward direction defines the Y axis. The points of GABCD map to the image plane as shown in the lower part of Fig. 3, where abcd are the four corners of the image rectangle and H and W are the height and width of the image plane. The midpoint g of the image rectangle is defined as the origin of the image coordinate system, with the y axis representing the vehicle's forward direction;
(3) Compute the image-plane coordinates of the bottom edge of an obstacle's rectangle and of the midpoint of the image plane's bottom edge, and convert both to road-surface coordinates through the geometric relationship;
(4) Apply the two-point distance formula to the two road-surface coordinates obtained in (3) to get the distance between the two points;
(5) Measure the distance to one obstacle precisely with the millimeter-wave radar, invert the geometric derivation, and recalibrate the camera parameter, namely the camera pitch angle, obtaining an accurate pitch value;
(6) At the same time, since lane markings on a real road are parallel lines, the lane lines in the image plane can first be extracted with a machine-vision algorithm; the two-points-determine-a-line method then determines the lines on the road plane corresponding to the lane lines, and the camera pitch angle is solved from the parallelism of the two lines instead of always using the pitch angle set at initialization;
(7) Mutually verify the camera parameters obtained in steps (5) and (6), solving in real time for an accurate camera pitch angle, which is used in the distance measurements of the remaining obstacles to obtain the accurate distance between each obstacle and the ego vehicle.
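The lane-line calibration of step (6) can be sketched via the vanishing point: two road-parallel lane lines intersect in the image at the horizon row, and the offset of that row from the principal row gives the pitch. The line parameters below are constructed from an assumed 2-degree pitch, so the recovered angle should match it.

```python
import math

def vanishing_row(l1, l2):
    """Row of the intersection of two image lines given as (slope, intercept)."""
    (m1, b1), (m2, b2) = l1, l2
    x = (b2 - b1) / (m1 - m2)
    return m1 * x + b1

def pitch_from_lanes(y_v, cy, f_px):
    """Parallel road lines meet at the horizon; its row fixes the camera pitch."""
    return math.atan((cy - y_v) / f_px)

# Assumed camera: 700 px focal length, principal row 240, true pitch 2 degrees.
f_px, cy = 700.0, 240.0
y_h = cy - f_px * math.tan(math.radians(2.0))   # horizon row implied by the pitch
left_lane = (0.5, y_h - 0.5 * 320.0)            # two image lines through (320, y_h)
right_lane = (-0.5, y_h + 0.5 * 320.0)
pitch = pitch_from_lanes(vanishing_row(left_lane, right_lane), cy, f_px)
```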
4. Accurate speed measurement based on monocular machine vision and millimeter-wave radar
An image sequence of the object moving ahead is captured and analyzed with image-processing and visual-measurement techniques to compute the real-time displacement of the object between two frames, from which its real-time speed is obtained.
(1) With the ranging method of section 3, obtain the two distances S1 and S2 between the ego vehicle and a front vehicle in frame N and frame N+K;
(2) From the ego vehicle's speed and the time difference T between frame N and frame N+K, compute the distance S3 traveled by the ego vehicle;
(3) Compute the distance traveled by the front vehicle, S = S2 + S3 - S1;
(4) Compute the front vehicle's speed, V = S / T;
(5) The millimeter-wave radar supplies accurate velocity values, which correct the time and distance deviations of the monocular speed-measurement algorithm and also measure the speeds of traffic participants not in the lane ahead, finally yielding the relative velocity between each surrounding obstacle and the ego vehicle.
5. Data fusion, generating <traffic scene, driving behavior> data pairs
(1) Label the reconstructed vehicle trajectory, the driver's concrete operations, the types of the surrounding obstacles, the obstacle-to-ego distances, and the obstacle-to-ego relative velocities each with timestamps;
(2) Join the above reconstruction results using the timestamp as the unique primary key, fusing data with identical timestamps together to form <traffic scene, driving behavior> data pairs.
The above embodiments are provided only to describe the purpose of the present invention and are not intended to limit its scope, which is defined by the following claims. All equivalent replacements and modifications made without departing from the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (7)

1. A method for restoring a road traffic scene and driver driving behavior based on multi-sensor data, characterized by comprising the following steps:
(1) taking gyroscope, accelerometer and GPS data as the primary sources and monocular camera data as an auxiliary source, combining two-dimensional and three-dimensional restoration algorithms, and mutually correcting the GPS data against the gyroscope and accelerometer data, so as to restore the vehicle trajectory precisely and obtain the driving trajectory of the vehicle;
(2) on the basis of CAN bus data and OBD data, restoring the driver's specific operations under various scenes, including signal-light use, steering-wheel control, and throttle and brake control;
(3) using the monocular camera and the millimeter-wave radar together with a convolutional neural network, detecting the traffic participants outside the vehicle and obtaining the types of the obstacles around the vehicle;
(4) obtaining a coarse measurement of obstacle distance from the monocular camera data by geometric relations, and the accurate distance between an obstacle and the ego vehicle from the millimeter-wave radar data; on the basis of this distance, calibrating the monocular camera parameters in reverse, while also calibrating the monocular camera pitch angle from the parallel relationship of the lane lines; the two calibration results verify each other and yield an accurate monocular camera pitch angle, which is used for measuring the distances of other obstacles, finally obtaining the accurate distances between the obstacles around the vehicle and the ego vehicle;
(5) measuring the speed of obstacles in the unidirectional lane ahead of the vehicle on the basis of the monocular camera data, and measuring the speed of traffic participants outside the unidirectional lane ahead on the basis of the millimeter-wave radar, obtaining the relative speeds between the obstacles around the vehicle and the ego vehicle;
(6) fusing the vehicle trajectory obtained in step (1), the driver's specific operations obtained in step (2), the obstacle types obtained in step (3), the obstacle-to-ego-vehicle distances obtained in step (4), and the relative speeds obtained in step (5) to generate <traffic scene, driving behavior> data pairs; the traffic scene refers to the types of the traffic participants around the vehicle, including pedestrians, vehicles, cyclists and traffic signs, and the states of the traffic participants, including the distances and relative speeds of moving objects, the topology of the road, and the traffic signs; the driving behavior refers to the driver's specific operations inside the vehicle, including signal-light use, steering-wheel control, throttle and brake control, and the vehicle trajectory.
2. The method for restoring a road traffic scene and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (1) is specifically implemented as follows:
(11) in three dimensions, resolving the position relation of the vehicle coordinate system relative to the reference coordinate system using the quaternion method, and combining the quaternion method with a Kalman filtering algorithm to improve the accuracy and real-time performance of the SINS attitude solution;
(12) constructing a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the restoration result, obtaining a more accurate vehicle driving trajectory.
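The quaternion attitude update in step (11) can be sketched minimally as below, assuming body-frame gyroscope rates and a small integration step; the Kalman-filter correction that the claim also requires is omitted, and all names are illustrative:

```python
import math

def quat_integrate(q, gyro, dt):
    """One attitude-update step: integrate body rates into a unit quaternion.

    q    : current attitude quaternion (w, x, y, z)
    gyro : angular rates about the body axes (rad/s)
    dt   : integration step (s)
    """
    w, x, y, z = q
    gx, gy, gz = gyro
    # quaternion derivative: q_dot = 0.5 * q (x) (0, gx, gy, gz)
    dw = 0.5 * (-x * gx - y * gy - z * gz)
    dx = 0.5 * ( w * gx + y * gz - z * gy)
    dy = 0.5 * ( w * gy - x * gz + z * gx)
    dz = 0.5 * ( w * gz + x * gy - y * gx)
    w, x, y, z = w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt
    n = math.sqrt(w * w + x * x + y * y + z * z)   # renormalize to unit length
    return (w / n, x / n, y / n, z / n)

# Yawing at 0.1 rad/s for one 10 ms step, starting from the identity attitude:
q = quat_integrate((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.1), 0.01)
```

A production SINS would use a higher-order integrator and fuse accelerometer and GPS observations through the Kalman filter to bound drift, as the claim describes.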
3. The method for restoring a road traffic scene and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (2) is specifically implemented as follows:
various driving information is obtained from the CAN bus and the OBD interface, including speed, fuel consumption, steering wheel, turn signals, throttle and brake pedal; these data are uploaded to a server by a terminal, and statistical analysis and data-mining techniques for big data are applied to restore how the driver controls the vehicle through operations on the steering wheel, brake pedal, gas pedal, turn signals and other lights; each specific operation is labeled with its corresponding time, so that it can conveniently be fused with the other data later.
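The time-labeling of concrete operations can be sketched as follows; the signal names and the change-detection rule are illustrative assumptions, since the patent does not fix a CAN/OBD record format:

```python
def label_operations(samples):
    """Turn raw time-stamped CAN/OBD samples into labeled operation events.

    samples: list of (timestamp, signal, value) tuples, e.g. as read from
    an OBD terminal. An event is emitted whenever a signal changes value,
    so every concrete operation carries its corresponding time, ready to
    be fused with the other restoration results.
    """
    last = {}
    events = []
    for t, signal, value in samples:
        if last.get(signal) != value:
            events.append({"time": t, "signal": signal, "value": value})
            last[signal] = value
    return events

events = label_operations([
    (0.0, "brake_pedal", 0), (0.5, "brake_pedal", 1),
    (0.5, "turn_signal", "left"), (1.0, "brake_pedal", 1),
])
# three events: the unchanged brake sample at t = 1.0 is dropped
```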
4. The method for restoring a road traffic scene and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (3) is specifically implemented as follows:
(31) using the Faster R-CNN convolutional neural network, the four steps of target detection, namely candidate-region generation, feature extraction, classification and bounding-box regression, are unified within a single deep network framework to detect the traffic participants in the field of view, mark them with corresponding rectangular boxes, and obtain confidence scores;
(32) the millimeter-wave radar identifies the obstacles around the vehicle and records the corresponding times.
5. The method for restoring a road traffic scene and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (4) is specifically implemented as follows:
(41) according to the monocular camera projection model, derive the relationship between the road-surface coordinate system and the image coordinate system by geometry; the road plane captured by the monocular camera is a trapezoidal region corresponding to a planar region in the plane coordinate system, and points in the road-surface coordinate system correspond one-to-one with points in the plane coordinate system;
(42) compute the image-plane coordinates of the bottom edge of the obstacle's rectangular box and of the midpoint of the image's bottom edge, respectively, and convert them to road-surface plane coordinates by the geometric relationship;
(43) apply the two-point distance formula to the two road-surface coordinates obtained in step (42) to obtain the distance between the two points;
(44) use the millimeter-wave radar to measure the distance of a certain obstacle precisely, solve the geometric relations in reverse, and recalibrate the monocular camera parameter, where the monocular camera parameter refers to the monocular camera pitch angle, thereby obtaining an accurate monocular camera pitch angle;
(45) at the same time, obtain the traffic lane lines in the image plane using a machine-vision algorithm, use the principle that two points determine a straight line to determine the lines on the road plane corresponding to the lane lines in the image, and solve for the monocular camera pitch angle from the parallelism of the two lines;
(46) the monocular camera parameters obtained in steps (44) and (45) verify each other; solving in real time yields an accurate monocular camera pitch angle, which is used in the distance measurement of other obstacles to obtain the accurate distances between the obstacles and the ego vehicle.
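The image-to-road geometry underlying steps (41) through (44) can be sketched for the longitudinal direction only, assuming a flat road and a pinhole camera with known height and pitch (the intrinsics and this single-axis model are illustrative simplifications of the full trapezoid mapping):

```python
import math

def image_row_to_ground_distance(v, f, cy, h, pitch):
    """Longitudinal ground distance of an image row on a flat road.

    v     : pixel row of the obstacle box's bottom edge (assumed to touch the road)
    f     : focal length in pixels
    cy    : principal-point row
    h     : camera height above the road (m)
    pitch : camera pitch angle below the horizon (rad)
    """
    # angle of the viewing ray below the horizontal, for this image row
    angle = pitch + math.atan((v - cy) / f)
    if angle <= 0:
        raise ValueError("row is at or above the horizon; no road intersection")
    return h / math.tan(angle)

# Camera mounted 1.5 m up, pitched 2 degrees down, f = 1000 px, cy = 360:
d = image_row_to_ground_distance(500.0, 1000.0, 360.0, 1.5, math.radians(2.0))
```

Inverting this relation for an obstacle whose true distance is known from the radar is exactly the reverse solve of step (44): with v, f, cy, h and d fixed, the pitch angle is the only unknown. Rows nearer the bottom of the image map to shorter distances, as expected.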
6. The method for restoring a road traffic scene and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (5) is specifically implemented as follows:
(51) using the distance-measurement method of step (4), obtain the distances S1 and S2 between the vehicle ahead and the ego vehicle in the Nth frame and the (N+K)th frame, respectively;
(52) from the ego-vehicle speed and the time difference T between the Nth frame and the (N+K)th frame, calculate the distance S3 traveled by the ego vehicle;
(53) calculate the distance traveled by the vehicle ahead: S = S2 + S3 - S1;
(54) calculate the speed of the vehicle ahead: V = S / T; for traffic participants that are not in the lane ahead of the vehicle, the speed measurement relies on the millimeter-wave radar, finally obtaining the relative speeds between the obstacles around the vehicle and the ego vehicle.
7. The method for restoring a road traffic scene and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (6) is specifically implemented as follows:
(61) time-stamp, respectively, the obtained vehicle trajectory, the driver's specific operations, the types of the surrounding obstacles, the distances between the obstacles and the ego vehicle, and the relative speeds of the obstacles with respect to the ego vehicle;
(62) join the above restoration results using the timestamp as the unique primary key; data with identical timestamps are fused together, forming <traffic scene, driving behavior> data pairs.
CN201710401034.5A 2017-05-31 2017-05-31 A method for restoring a road traffic scene and driver driving behavior based on multi-sensor data Active CN107235044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710401034.5A CN107235044B (en) 2017-05-31 2017-05-31 A method for restoring a road traffic scene and driver driving behavior based on multi-sensor data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710401034.5A CN107235044B (en) 2017-05-31 2017-05-31 A method for restoring a road traffic scene and driver driving behavior based on multi-sensor data

Publications (2)

Publication Number Publication Date
CN107235044A CN107235044A (en) 2017-10-10
CN107235044B true CN107235044B (en) 2019-05-28

Family

ID=59984711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710401034.5A Active CN107235044B (en) A method for restoring a road traffic scene and driver driving behavior based on multi-sensor data

Country Status (1)

Country Link
CN (1) CN107235044B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960183A (en) * 2018-07-19 2018-12-07 北京航空航天大学 A curve target identification system and method based on multi-sensor fusion

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107499262A (en) * 2017-10-17 2017-12-22 芜湖伯特利汽车安全系统股份有限公司 ACC/AEB systems and vehicle based on machine learning
KR102334158B1 (en) * 2017-10-30 2021-12-02 현대모비스 주식회사 Autonomous emergency braking apparatus and control method thereof
CN108196535B (en) * 2017-12-12 2021-09-07 清华大学苏州汽车研究院(吴江) Automatic driving system based on reinforcement learning and multi-sensor fusion
CN108227707B (en) * 2017-12-25 2021-11-26 清华大学苏州汽车研究院(吴江) Automatic driving method based on laser radar and end-to-end deep learning method
CN109974687A (en) * 2017-12-28 2019-07-05 周秦娜 Indoor multi-sensor co-localization method, apparatus and system based on a depth camera
CN108596081B (en) * 2018-04-23 2021-04-20 吉林大学 Vehicle and pedestrian detection method based on integration of radar and camera
CN109061706A (en) * 2018-07-17 2018-12-21 江苏新通达电子科技股份有限公司 A vehicle driving behavior analysis method based on T-Box and real-time road map data
CN109002800A (en) * 2018-07-20 2018-12-14 苏州索亚机器人技术有限公司 Real-time target identification mechanism and recognition method based on multi-sensor fusion
CN108983219B (en) * 2018-08-17 2020-04-07 北京航空航天大学 Fusion method and system for image information and radar information of traffic scene
CN109263649B (en) * 2018-08-21 2021-09-17 北京汽车股份有限公司 Vehicle, object recognition method and object recognition system thereof in automatic driving mode
CN110901638B (en) * 2018-08-28 2021-05-28 大陆泰密克汽车系统(上海)有限公司 Driving assistance method and system
CN111267863B (en) * 2018-12-04 2021-03-19 广州汽车集团股份有限公司 Driver driving type identification method and device, storage medium and terminal equipment
CN109606374B (en) * 2018-12-28 2020-07-10 智博汽车科技(上海)有限公司 Vehicle, method and device for verifying fuel consumption data of electronic horizon
US11126179B2 (en) * 2019-02-21 2021-09-21 Zoox, Inc. Motion prediction based on appearance
CN110068818A (en) * 2019-05-05 2019-07-30 中国汽车工程研究院股份有限公司 Working method for vehicle and pedestrian detection at traffic intersections by radar and image-capture devices
CN110309741B (en) * 2019-06-19 2022-03-08 百度在线网络技术(北京)有限公司 Obstacle detection method and device
JP7088137B2 (en) * 2019-07-26 2022-06-21 トヨタ自動車株式会社 Traffic light information management system
CN110497914B (en) * 2019-08-26 2020-10-30 格物汽车科技(苏州)有限公司 Method, apparatus and storage medium for developing a model of driver behavior for autonomous driving
CN110751836A (en) * 2019-09-26 2020-02-04 武汉光庭信息技术股份有限公司 Vehicle driving early warning method and system
CN110853393B (en) * 2019-11-26 2020-12-11 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
JP7412254B2 (en) * 2020-04-02 2024-01-12 三菱電機株式会社 Object recognition device and object recognition method
CN111564051B (en) * 2020-04-28 2021-07-20 安徽江淮汽车集团股份有限公司 Safe driving control method, device and equipment for automatic driving automobile and storage medium
CN111681422A (en) * 2020-06-16 2020-09-18 衢州量智科技有限公司 Management method and system for tunnel road
CN111879314B (en) * 2020-08-10 2022-08-02 中国铁建重工集团股份有限公司 Multi-sensor fusion roadway driving equipment real-time positioning system and method
CN113379945A (en) * 2021-07-26 2021-09-10 陕西天行健车联网信息技术有限公司 Vehicle driving behavior analysis device, method and system
CN113947893A (en) * 2021-09-03 2022-01-18 网络通信与安全紫金山实验室 Method and system for restoring driving scene of automatic driving vehicle
CN113642548B (en) * 2021-10-18 2022-03-25 氢山科技有限公司 Abnormal driving behavior detection device and device for hydrogen energy transport vehicle and computer equipment
CN116204791B (en) * 2023-04-25 2023-08-11 山东港口渤海湾港集团有限公司 Construction and management method and system for vehicle behavior prediction scene data set

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010014090A (en) * 2008-07-07 2010-01-21 Toyota Motor Corp Control device for vehicle
CN101908272A (en) * 2010-07-20 2010-12-08 南京理工大学 Traffic safety sensing network based on mobile information
KR20170028631A (en) * 2015-09-04 2017-03-14 (주) 이즈테크놀로지 Method and Apparatus for Detecting Carelessness of Driver Using Restoration of Front Face Image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101526386B1 (en) * 2013-07-10 2015-06-08 현대자동차 주식회사 Apparatus and method of processing road data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010014090A (en) * 2008-07-07 2010-01-21 Toyota Motor Corp Control device for vehicle
CN101908272A (en) * 2010-07-20 2010-12-08 南京理工大学 Traffic safety sensing network based on mobile information
KR20170028631A (en) * 2015-09-04 2017-03-14 (주) 이즈테크놀로지 Method and Apparatus for Detecting Carelessness of Driver Using Restoration of Front Face Image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Analysis of drivers' route choice behavior based on latent variables; Zhang Wenna, Li Jun; Science Technology and Engineering; 2017-02-17; Vol. 16, No. 34; 280-284

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960183A (en) * 2018-07-19 2018-12-07 北京航空航天大学 A curve target identification system and method based on multi-sensor fusion
CN108960183B (en) * 2018-07-19 2020-06-02 北京航空航天大学 Curve target identification system and method based on multi-sensor fusion

Also Published As

Publication number Publication date
CN107235044A (en) 2017-10-10

Similar Documents

Publication Publication Date Title
CN107235044B (en) A method for restoring a road traffic scene and driver driving behavior based on multi-sensor data
US11676296B2 (en) Augmenting reality using semantic segmentation
US11067693B2 (en) System and method for calibrating a LIDAR and a camera together using semantic segmentation
CN108196535B (en) Automatic driving system based on reinforcement learning and multi-sensor fusion
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
Cai et al. Probabilistic end-to-end vehicle navigation in complex dynamic environments with multimodal sensor fusion
US8791996B2 (en) Image processing system and position measurement system
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
EP2372308B1 (en) Image processing system and vehicle control system
CN102208036B (en) Vehicle position detection system
KR102091580B1 (en) Method for collecting road signs information using MMS
EP2372607A2 (en) Scene matching reference data generation system and position measurement system
CN110531376A (en) Detection of obstacles and tracking for harbour automatic driving vehicle
Levinson Automatic laser calibration, mapping, and localization for autonomous vehicles
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN112861748B (en) Traffic light detection system and method in automatic driving
CN113544467A (en) Aligning road information for navigation
CN111649740B (en) Method and system for high-precision positioning of vehicle based on IMU
Amaradi et al. Lane following and obstacle detection techniques in autonomous driving vehicles
CN109583312A (en) Lane detection method, apparatus, equipment and storage medium
CN106446785A (en) Passable road detection method based on binocular vision
Masi et al. Augmented perception with cooperative roadside vision systems for autonomous driving in complex scenarios
CN114509065B (en) Map construction method, system, vehicle terminal, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210824

Address after: 100191 045, 2f, commercial garage, No. 17, Zhichun Road, Haidian District, Beijing

Patentee after: Chuangketianxia (Beijing) Technology Development Co.,Ltd.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20171010

Assignee: Beijing Zhimou Technology Development Co.,Ltd.

Assignor: Chuangketianxia (Beijing) Technology Development Co.,Ltd.

Contract record no.: X2023990000843

Denomination of invention: A Method for Restoring Road Traffic Scenarios and Driver Driving Behavior Based on Multi-sensor Data

Granted publication date: 20190528

License type: Exclusive License

Record date: 20231008