CN107235044A - A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data - Google Patents

A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data

Info

Publication number
CN107235044A
CN107235044A CN201710401034.5A
Authority
CN
China
Prior art keywords
data
vehicle
obstacle
car
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710401034.5A
Other languages
Chinese (zh)
Other versions
CN107235044B (en)
Inventor
黄坚
金玉辉
郭袭
金天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuangketianxia (Beijing) Technology Development Co.,Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201710401034.5A priority Critical patent/CN107235044B/en
Publication of CN107235044A publication Critical patent/CN107235044A/en
Application granted granted Critical
Publication of CN107235044B publication Critical patent/CN107235044B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09Driving style or behaviour
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2555/00Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W2555/60Traffic rules, e.g. speed limits or right of way

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method for restoring road traffic scenes and driver driving behavior based on multi-sensor data. Through the fusion of multiple sensor data sources and multiple restoration algorithms, it achieves precise restoration of the road traffic scene and the driver's driving behavior, specifically including: mutual correction of multi-source data to precisely restore the vehicle's driving trajectory; accurate measurement of obstacle speed and distance based on monocular vision and millimeter-wave radar; and fusion of multi-source data to generate <road traffic scene, driving behavior> data pairs. The invention overcomes the defects of existing restoration techniques, which rely on a single data source, depend on expensive devices to obtain high-quality data, and have relatively low restoration precision.

Description

A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data
Technical field
The invention belongs to the fields of intelligent transportation and image recognition, and more particularly relates to a method for restoring road traffic scenes and driver driving behavior based on multi-sensor data.
Background technology
For an automobile to achieve truly driverless operation, it must be able to perceive and recognize the objects around it and know its own precise location. These two aspects are the core of driverless technology. The driver's sequence of driving actions is made according to the traffic environment at that moment. Whether for an automatic-driving algorithm or for assisting the driver, it is highly important to sense the surrounding environment during vehicle travel, continuously collect data, identify, detect and track static and dynamic objects, and combine navigation map data to perform system-level computation and analysis.
At present, mainstream driverless research and development all select lidar as the perception device. The advantages of lidar are its wide detection range and high detection accuracy. Its shortcomings, however, are also obvious: it performs poorly in extreme weather such as rain, snow and fog, the volume of data it collects is enormous, and it is quite expensive. As an indispensable core sensor for ADAS, millimeter-wave radar technology is relatively mature. But the shortcomings of millimeter-wave radar are equally apparent: its detection range is directly restricted by frequency-band loss, it cannot perceive pedestrians, and it cannot accurately model all surrounding obstacles.
Vision is the most important means by which humans perceive the world. Biological studies show that 75% of the external information humans acquire comes through the visual system, and in a driving environment this proportion is as high as 90%. If principles of the human visual system can be applied to the field of automatic driving, the accuracy of automatic driving will undoubtedly be substantially improved.
In view of this, the method proposed by the present invention combines monocular vision and radar data to achieve accurate perception of the scene outside the vehicle. Target detection realized with convolutional neural networks on monocular vision is highly scalable: by adjusting the recall rate it can obtain more object detection results and thus avoid the problem that radar cannot identify some obstacles, and by continually training new models it can adapt to different traffic scenes and continuously expand the set of recognizable obstacles. With the precise data from millimeter-wave radar, the camera parameters used in monocular vision can be reversely corrected, improving the accuracy of the ranging and velocity-measurement algorithms while also assisting monocular-vision target detection. The precision of existing vehicle-trajectory restoration algorithms depends on high-quality data; to address this, the present invention fuses two-dimensional and three-dimensional restoration algorithms and builds a Kalman filter from the monocular vision data and the GPS data, which denoises the data and corrects the restoration result, so the precision of the trajectory restoration is higher.
On the basis of the data pairs obtained by restoration, deep-learning techniques for automatic-driving decision and control systems oriented to complex environments can be studied, so that the automatic-driving system gradually forms good driving habits through data-driven self-learning. The data can also help improve drivers' driving habits, and can serve as an important reference for manufacturers when formulating development strategies and for relevant authorities when making and implementing policies.
Content of the invention
The problem solved by the present invention is: through various sensing devices mounted on the vehicle, collect and store the various driving scene data generated while the vehicle travels on real roads; use big-data techniques such as statistical analysis and data mining to analyze and compare user driving behavior in depth from multiple angles such as the data dimension, vehicle dimension, time dimension and region dimension, and discover the commonalities and individual characteristics of driving behavior; use deep-learning-based image scene understanding, further annotate the data with radar data, restore the traffic scene outside the vehicle, and finally form <traffic scene, driving behavior> data pairs. This overcomes the defects of existing restoration techniques, which have a single data source, rely on expensive devices to obtain high-quality data, and have relatively low restoration precision, and provides a method for restoring road traffic scenes and driver driving behavior based on multi-sensor data.
The technical solution of the present invention: a method for restoring road traffic scenes and driver driving behavior based on multi-sensor data, which achieves precise restoration of the road traffic scene and the driver's driving behavior through the fusion of multiple sensor data sources and multiple restoration algorithms, specifically including: mutual correction of multi-source data to precisely restore the vehicle's driving trajectory; accurate measurement of obstacle speed and distance based on monocular vision and millimeter-wave radar; and fusion of multi-source data to generate <road traffic scene, driving behavior> data pairs.
It is implemented as follows:
(1) Based on gyroscope, accelerometer and GPS data, with monocular camera data as an auxiliary source, combine two-dimensional and three-dimensional restoration algorithms and perform mutual correction between the GPS data and the gyroscope/accelerometer data, so as to precisely restore the vehicle's driving trajectory and obtain the vehicle's travel path;
(2) On the basis of CAN data and OBD data, restore the driver's concrete operations under various scenes, including light signaling, steering-wheel control, and throttle and brake control;
(3) Using the monocular camera and radar together with convolutional neural networks, complete target detection of the traffic participants outside the vehicle and obtain the types of obstacles around the vehicle;
(4) From the monocular camera data, use geometric relations to obtain a coarse measurement of obstacle distance; from the millimeter-wave radar data, obtain the accurate distance between an obstacle and the ego vehicle, and on that basis reversely calibrate the camera parameters from the distance; at the same time, the camera pitch angle can also be calibrated from the parallel relation of the lane lines. The two calibration results are mutually verified to obtain an accurate camera pitch angle, which is then used for measuring the distances of other obstacles, finally yielding the accurate distances between surrounding obstacles and the ego vehicle;
(5) Measure the speed of obstacles in the same-direction lane ahead of the vehicle based on the monocular camera data, and measure the speed of traffic participants outside that lane mainly with the millimeter-wave radar, obtaining the relative velocities between surrounding obstacles and the ego vehicle;
(6) Fuse the multiple restoration results, namely the vehicle driving trajectory obtained in step (1), the driver's concrete operations obtained in step (2), the obstacle types obtained in step (3), the obstacle-to-ego-vehicle distances obtained in step (4) and the obstacle-to-ego-vehicle relative velocities obtained in step (5), to generate <traffic scene, driving behavior> data pairs. The traffic scene refers to: the types of the various traffic participants around the vehicle, including pedestrians, vehicles, cyclists and traffic signs; and the states of the traffic participants, including the distances and relative velocities of moving objects, the topology of the road, and traffic signs. The driving behavior refers to the driver's concrete in-vehicle operations, including light signaling, steering-wheel control, throttle and brake control, and the vehicle's driving trajectory.
Step (1) is implemented as follows:
(11) In three dimensions, resolve the attitude relation of the vehicle coordinate system relative to the reference coordinate system using the quaternion method, and combine the quaternion method with a Kalman filter algorithm to improve the accuracy and real-time performance of the SINS attitude solution;
(12) Build a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the restoration result, obtaining a more accurate vehicle driving trajectory.
Step (2) is implemented as follows:
Based on the various driving information obtainable from the CAN and OBD interfaces, including speed, fuel consumption, steering wheel, turn signals, throttle and brake pedal, these data are uploaded to the server by the terminal. Using big-data techniques such as statistical analysis and data mining, restore how the driver controls the vehicle's travel through operations on the steering wheel, brake pedal, accelerator pedal, turn signals and other lights, and tag each concrete operation with its corresponding time to facilitate subsequent fusion with the other data.
Step (3) is implemented as follows:
(31) Using the Faster R-CNN convolutional neural network, unify the four steps of target detection, namely candidate-region generation, feature extraction, classification and bounding-box regression, within a single deep network framework; detect the traffic participants in the field of view, mark them with corresponding rectangular boxes, and obtain confidence scores;
(32) Identify the obstacles around the vehicle with the millimeter-wave radar and record the corresponding time.
Step (4) is implemented as follows:
(41) According to the camera projection model, derive geometrically the relation between the road-surface coordinate system and the image coordinate system. The road plane captured by the camera is a trapezoidal region corresponding to a planar region in the image coordinate system, and the points of the road-surface coordinate system and the image plane coordinate system are in one-to-one correspondence;
(42) Obtain the image-plane coordinates of the bottom edge of the obstacle's rectangular box and of the midpoint of the image-plane bottom edge, respectively, and derive their road-surface plane coordinates through the geometric relation;
(43) Apply the two-point distance formula to the two road-surface coordinates obtained in step (42) to obtain the distance between the two points;
(44) Measure the distance of a particular obstacle accurately with the millimeter-wave radar, solve the geometric relation in reverse, and re-calibrate the camera parameter, where the camera parameter refers to the camera pitch angle, thus obtaining an accurate camera pitch angle;
(45) At the same time, obtain the lane lines in the image plane with a machine-vision algorithm; using the principle that two points determine a straight line, determine the straight lines on the road plane corresponding to the lane lines in the image, and solve for the camera pitch angle from the condition that the two straight lines are parallel;
(46) Mutually verify the camera parameters obtained in steps (44) and (45) and solve in real time for an accurate camera pitch angle, which is used in measuring the distances of other obstacles to obtain the accurate distances between obstacles and the ego vehicle.
Step (5) is implemented as follows:
(51) Using the method of step (4), obtain the two distances S1 and S2 between the vehicle ahead and the ego vehicle at the Nth frame image and at the (N+K)th frame image, respectively;
(52) From the ego vehicle's speed and the time difference T between the Nth frame image and the (N+K)th frame image, compute the distance S3 traveled by the ego vehicle;
(53) Compute the distance traveled by the vehicle ahead: S = S2 + S3 - S1;
(54) Compute the vehicle speed V = S / T; for traffic participants not in the lane ahead of the vehicle, speed measurement relies on the millimeter-wave radar, finally giving the relative velocities between surrounding obstacles and the ego vehicle.
Step (6) is implemented as follows:
(61) Tag with time stamps, respectively, the obtained vehicle driving trajectory, the driver's concrete operations, the types of surrounding obstacles, the obstacle-to-ego-vehicle distances, and the obstacle-to-ego-vehicle relative velocities;
(62) Join the above restoration results using time as the unique primary key, fusing data with identical times together to form <traffic scene, driving behavior> data pairs.
The advantages of the present invention over the prior art are:
The traffic-scene restoration method based on multi-sensor data proposed by the present invention can ultimately form <traffic scene, driving behavior> data pairs, and its restoration precision is higher.
In vehicle trajectory restoration, the method fuses two-dimensional and three-dimensional restoration algorithms, and at the same time builds a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the restoration result, so the precision of the trajectory restoration is higher. In target detection and the measurement of obstacle distance and speed, both monocular vision and millimeter-wave radar are used. On the one hand, target detection realized with convolutional neural networks on monocular vision is highly scalable: it can obtain more detection results by adjusting the recall rate, avoiding the problem that radar cannot identify some obstacles, and it can adapt to different traffic scenes and continuously expand the set of recognizable obstacles by continually training new models. On the other hand, with the precise data from the millimeter-wave radar, the camera parameters in monocular vision can be reversely corrected, improving the accuracy of the ranging and velocity-measurement algorithms.
Meanwhile, on the basis of the data pairs obtained by restoration, deep-learning techniques for automatic-driving decision and control systems oriented to complex environments can be studied, so that the automatic-driving system gradually forms good driving habits through data-driven self-learning and develops dynamic-response and autonomous decision-making abilities suited to Chinese traffic characteristics and driving habits, improving the safety and effectiveness of automatic driving. The data can also help improve drivers' driving habits, and can serve as an important reference for manufacturers when formulating development strategies and for relevant authorities when making and implementing policies.
Brief description of the drawings
Fig. 1 is a schematic diagram of the method of the invention;
Fig. 2 is the trajectory restoration flow chart of the present invention;
Fig. 3 shows the camera projection model of the present invention, where the upper figure shows the projection relation and the lower figure shows the projection plane.
Embodiment
As shown in Fig. 1, during driving the driver's sequence of operations is made according to the surrounding traffic conditions at that moment, such as the motion states of the main surrounding traffic participants, traffic signs, traffic lights, weather conditions and road conditions. The function of the present invention mainly lies in: based on multi-sensor data from the CAN bus, OBD, gyroscope, accelerometer, GPS/BD, millimeter-wave radar and monocular camera, restore the road traffic scene outside the vehicle and the driver's driving behavior, forming <traffic scene, driving behavior> data pairs. The traffic scene mainly includes: the types of the various traffic participants around the vehicle, such as pedestrians, vehicles, cyclists and traffic signs; and the states of the traffic participants, such as the distances and relative velocities of moving objects, the topology of the road, and traffic signs. The driver's driving behavior mainly includes: the driver's concrete in-vehicle operations, such as light signaling, steering-wheel control, throttle and brake control, and the vehicle's driving trajectory.
1. Vehicle driving trajectory restoration
(1) In three dimensions, resolve the attitude relation of the vehicle coordinate system relative to the reference coordinate system. At present, the methods for describing the attitude relation of the carrier's body coordinate system relative to the reference coordinate system in a SINS mainly include the Euler-angle method, the direction-cosine method, the trigonometric-function method, the Rodrigues-parameter method, the quaternion method and the equivalent-rotation-vector method. The present invention focuses on the quaternion method and combines it with a Kalman filter algorithm to improve the accuracy and real-time performance of the SINS attitude solution;
(2) Build a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the restoration result, obtaining a more accurate vehicle driving trajectory.
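A minimal sketch of this fusion step (a constant-velocity Kalman filter over the state [x, y, vx, vy] with position-only measurements) is given below; the noise covariances and the synthetic GPS/vision fixes are illustrative assumptions rather than values from the patent, and the real inputs would be position fixes already projected into a local planar frame.

```python
import numpy as np

def make_cv_filter(dt):
    """Constant-velocity model matrices for the state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # position-only measurement
    Q = np.eye(4) * 0.05                         # process noise (assumed)
    return F, H, Q

def kf_step(x, P, z, R, F, H, Q):
    """One predict/update cycle for a position fix z with covariance R."""
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # correct with the innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Synthetic inputs standing in for GPS and vision-derived position fixes.
rng = np.random.default_rng(0)
truth = np.cumsum(np.tile([1.0, 0.5], (50, 1)), axis=0)
gps_xy = truth + rng.normal(scale=1.5, size=truth.shape)     # noisier GPS fixes
vision_xy = truth + rng.normal(scale=0.5, size=truth.shape)  # finer vision fixes

F, H, Q = make_cv_filter(dt=0.1)
x, P = np.zeros(4), np.eye(4)
R_gps, R_vis = np.eye(2) * 2.0, np.eye(2) * 0.25
track = []
for z_gps, z_vis in zip(gps_xy, vision_xy):
    x, P = kf_step(x, P, z_gps, R_gps, F, H, Q)  # GPS correction
    x, P = kf_step(x, P, z_vis, R_vis, F, H, Q)  # vision correction
    track.append(x[:2].copy())                   # denoised trajectory point
```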
2. Analysis of driver driving behavior based on CAN and OBD data
Based on the various driving information obtainable from the CAN and OBD interfaces, such as speed, fuel consumption, steering wheel, turn signals, throttle and brake pedal, these data are uploaded to the server by the terminal. Using big-data techniques such as statistical analysis and data mining, restore how the driver controls the vehicle's travel through operations on the steering wheel, brake pedal, accelerator pedal, turn signals and other lights. Each concrete operation is tagged with its corresponding time to facilitate subsequent fusion with the other data.
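A minimal sketch of the time-tagging described above is given below; it assumes each CAN/OBD sample is delivered as a dictionary of decoded signal values, and the field names (speed, steering_angle, throttle, brake, turn_signal) are illustrative placeholders rather than actual bus signal identifiers.

```python
import time

def tag_operations(samples):
    """Attach a timestamp to each decoded CAN/OBD sample so the driver's
    operations can later be joined with the other restoration results on time."""
    tagged = []
    for s in samples:
        tagged.append({
            "t": s.get("t", time.time()),       # keep the bus timestamp if present
            "speed": s.get("speed"),
            "steering_angle": s.get("steering_angle"),
            "throttle": s.get("throttle"),
            "brake": s.get("brake"),
            "turn_signal": s.get("turn_signal"),
        })
    return tagged
```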
3. Accurate distance measurement based on monocular machine vision and millimeter-wave radar
(1) Using the Faster R-CNN convolutional neural network, unify the four steps of target detection (candidate-region generation, feature extraction, classification and bounding-box regression) within a single deep network framework, detect the traffic participants in the field of view, and mark them with corresponding rectangular boxes (a detection sketch is given after step (7));
(2) According to the camera projection model, derive geometrically the relation between the road-surface coordinate system and the image coordinate system, as shown in Fig. 3. In the upper part of Fig. 3, plane ABU represents the road plane, ABCD is the trapezoidal region of the road plane captured by the camera, point O is the center of the camera lens, OG is the camera optical axis, point G is the intersection of the camera optical axis with the road plane, and point I is the vertical projection of point O onto the road plane. In the road-surface coordinate system, point G is defined as the origin and the vehicle's forward direction is defined as the Y axis. The points G, A, B, C, D correspond to points in the image plane as shown in the lower part of Fig. 3, where a, b, c, d are the four corners of the image-plane rectangle, and H and W are the height and width of the image plane, respectively. The midpoint g of the image rectangle is defined as the origin of the image coordinate system, and the y axis represents the vehicle's forward direction;
(3) Obtain the image-plane coordinates of the bottom edge of the obstacle's rectangular box and of the midpoint of the image-plane bottom edge, respectively, and derive their road-surface plane coordinates through the geometric relation;
(4) Apply the two-point distance formula to the two road-surface coordinates obtained in (3) to obtain the distance between the two points;
(5) Measure the distance of a particular obstacle accurately with the millimeter-wave radar, solve the geometric relation in reverse, and re-calibrate the camera parameter, where the camera parameter refers to the camera pitch angle, thus obtaining an accurate camera pitch angle;
(6) At the same time, because lane markings in a real road environment are parallel lines, the lane lines can first be obtained in the image plane with a machine-vision algorithm; using the principle that two points determine a straight line, determine the straight lines on the road plane corresponding to the lane lines in the image, and solve for the camera pitch angle from the condition that the two straight lines are parallel, instead of always using the camera pitch angle set at initial calibration;
(7) Mutually verify the camera parameters obtained in steps (5) and (6) and solve in real time for an accurate camera pitch angle, which is used in measuring the distances of other obstacles to obtain the accurate distances between obstacles and the ego vehicle (a ranging sketch under a simplified flat-road model is also given below).
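A minimal sketch of the detection step in item (1) above; it uses the pretrained Faster R-CNN shipped with torchvision (a recent torchvision is assumed) in place of a model trained on the patent authors' own traffic data, and the score threshold is an illustrative value.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Candidate-region generation, feature extraction, classification and
# bounding-box regression are all unified inside this one network.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_traffic_participants(image, score_thresh=0.6):
    """Return rectangular boxes, class labels and confidence scores for one frame."""
    with torch.no_grad():
        out = model([to_tensor(image)])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```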
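A minimal sketch of the ranging and reverse-calibration idea in items (3) to (7) above, written with the common flat-road pinhole model (camera height, pitch angle, vertical focal length, principal-point row) rather than the patent's trapezoid coordinate construction; all numeric parameters below are illustrative assumptions.

```python
import math

def ground_distance(v_img, pitch, cam_height, fy, cy):
    """Longitudinal distance to a road-plane point imaged at pixel row v_img,
    for a camera mounted at height cam_height with pitch angle pitch (rad)."""
    ray_angle = math.atan((v_img - cy) / fy)          # ray angle below the optical axis
    return cam_height / math.tan(pitch + ray_angle)

def pitch_from_radar(v_img, radar_dist, cam_height, fy, cy):
    """Reverse solution of the same model: recover the camera pitch angle from one
    obstacle whose distance is known accurately from the millimeter-wave radar."""
    ray_angle = math.atan((v_img - cy) / fy)
    return math.atan(cam_height / radar_dist) - ray_angle

# Calibrate the pitch on one radar-ranged target, then range another obstacle
# from the bottom edge of its detection box (all numbers illustrative).
pitch = pitch_from_radar(v_img=420.0, radar_dist=25.3, cam_height=1.3, fy=1000.0, cy=360.0)
d_other = ground_distance(v_img=480.0, pitch=pitch, cam_height=1.3, fy=1000.0, cy=360.0)
```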
4. Accurate speed measurement based on monocular machine vision and millimeter-wave radar
By capturing an image sequence of the moving objects ahead and then analyzing the sequence with image-processing and visual-measurement techniques, the real-time displacement of the object ahead between two frames is measured, from which the object's real-time speed is computed.
(1) Using the ranging method of Section 3, obtain the two distances S1 and S2 between the vehicle ahead and the ego vehicle at the Nth frame image and at the (N+K)th frame image, respectively;
(2) From the ego vehicle's speed and the time difference T between the Nth frame image and the (N+K)th frame image, compute the distance S3 traveled by the ego vehicle;
(3) Compute the distance traveled by the vehicle ahead: S = S2 + S3 - S1;
(4) Compute the speed of the vehicle ahead: V = S / T.
(5) Obtain accurate speed values with the millimeter-wave radar to correct the time and distance deviations of the monocular-vision speed-measurement algorithm, and at the same time measure the speed of traffic participants that are not in the lane ahead of the vehicle, finally giving the relative velocities between surrounding obstacles and the ego vehicle.
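A minimal sketch of the computation in items (1) to (4) above; it assumes the ego-vehicle speed is constant over the K-frame interval and that the camera frame rate is known.

```python
def lead_vehicle_speed(s1, s2, ego_speed, frame_gap, fps):
    """Speed of the vehicle ahead from two monocular range measurements.

    s1, s2    : distances to the lead vehicle at frame N and frame N+K (m)
    ego_speed : ego-vehicle speed over the interval (m/s)
    frame_gap : K, number of frames between the two measurements
    fps       : camera frame rate (frames per second)
    """
    t = frame_gap / fps        # time difference T between the two frames
    s3 = ego_speed * t         # distance S3 traveled by the ego vehicle
    s = s2 + s3 - s1           # distance S traveled by the vehicle ahead
    return s / t               # V = S / T

# The relative velocity with respect to the ego vehicle would then be
# lead_vehicle_speed(...) - ego_speed.
```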
5. Data fusion and generation of <traffic scene, driving behavior> data pairs
(1) Tag with time stamps, respectively, the obtained vehicle driving trajectory, the driver's concrete operations, the types of surrounding obstacles, the obstacle-to-ego-vehicle distances, and the obstacle-to-ego-vehicle relative velocities;
(2) Join the above restoration results using time as the unique primary key, fusing data with identical times together to form <traffic scene, driving behavior> data pairs.
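A minimal sketch of the time-keyed join described above; it assumes each restoration result is a list of time-stamped dictionaries, and the 0.1 s bucket size and the field names are illustrative assumptions.

```python
from collections import defaultdict

def fuse_by_time(*streams):
    """Join restoration results on a shared time key to form
    <traffic scene, driving behavior> data pairs."""
    merged = defaultdict(dict)
    for stream in streams:                       # each stream: list of dicts with a "t" key
        for record in stream:
            key = round(record["t"], 1)          # 0.1 s buckets (assumed resolution)
            merged[key].update({k: v for k, v in record.items() if k != "t"})
    pairs = []
    for t in sorted(merged):
        r = merged[t]
        scene = {k: r.get(k) for k in ("obstacle_types", "distances", "relative_speeds")}
        behavior = {k: r.get(k) for k in ("steering_angle", "throttle", "brake",
                                          "turn_signal", "trajectory_point")}
        pairs.append({"t": t, "scene": scene, "behavior": behavior})
    return pairs
```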
The above embodiments are provided only for the purpose of describing the present invention and are not intended to limit its scope. The scope of the invention is defined by the following claims. All equivalent substitutions and modifications made without departing from the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (7)

1. A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data, characterized by comprising the following steps:
(1) Based on gyroscope, accelerometer and GPS data, with monocular camera data as an auxiliary source, combining two-dimensional and three-dimensional restoration algorithms and performing mutual correction between the GPS data and the gyroscope/accelerometer data, so as to precisely restore the vehicle's driving trajectory and obtain the vehicle's travel path;
(2) On the basis of CAN data and OBD data, restoring the driver's concrete operations under various scenes, including light signaling, steering-wheel control, and throttle and brake control;
(3) Using the monocular camera and radar together with convolutional neural networks, completing target detection of the traffic participants outside the vehicle and obtaining the types of obstacles around the vehicle;
(4) From the monocular camera data, using geometric relations to obtain a coarse measurement of obstacle distance; from the millimeter-wave radar data, obtaining the accurate distance between an obstacle and the ego vehicle, and on that basis reversely calibrating the camera parameters from the distance; at the same time, the camera pitch angle can also be calibrated from the parallel relation of the lane lines; the two calibration results are mutually verified to obtain an accurate camera pitch angle, which is used for measuring the distances of other obstacles, finally yielding the accurate distances between surrounding obstacles and the ego vehicle;
(5) Measuring the speed of obstacles in the same-direction lane ahead of the vehicle based on the monocular camera data, and measuring the speed of traffic participants outside that lane mainly with the millimeter-wave radar, obtaining the relative velocities between surrounding obstacles and the ego vehicle;
(6) Fusing the multiple restoration results, namely the vehicle driving trajectory obtained in step (1), the driver's concrete operations obtained in step (2), the obstacle types obtained in step (3), the obstacle-to-ego-vehicle distances obtained in step (4) and the obstacle-to-ego-vehicle relative velocities obtained in step (5), to generate <traffic scene, driving behavior> data pairs, wherein the traffic scene refers to: the types of the various traffic participants around the vehicle, including pedestrians, vehicles, cyclists and traffic signs; and the states of the traffic participants, including the distances and relative velocities of moving objects, the topology of the road, and traffic signs; and the driving behavior refers to the driver's concrete in-vehicle operations, including light signaling, steering-wheel control, throttle and brake control, and the vehicle's driving trajectory.
2. The method for restoring road traffic scenes and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (1) is implemented as follows:
(11) In three dimensions, resolving the attitude relation of the vehicle coordinate system relative to the reference coordinate system using the quaternion method, and combining the quaternion method with a Kalman filter algorithm to improve the accuracy and real-time performance of the SINS attitude solution;
(12) Building a Kalman filter from the monocular vision data and the GPS data to denoise the data and correct the restoration result, obtaining a more accurate vehicle driving trajectory.
3. The method for restoring road traffic scenes and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (2) is implemented as follows:
based on the various driving information obtainable from the CAN and OBD interfaces, including speed, fuel consumption, steering wheel, turn signals, throttle and brake pedal, these data are uploaded to the server by the terminal; using big-data techniques such as statistical analysis and data mining, how the driver controls the vehicle's travel through operations on the steering wheel, brake pedal, accelerator pedal, turn signals and other lights is restored, and each concrete operation is tagged with its corresponding time to facilitate subsequent fusion with the other data.
4. The method for restoring road traffic scenes and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (3) is implemented as follows:
(31) Using the Faster R-CNN convolutional neural network, unifying the four steps of target detection, namely candidate-region generation, feature extraction, classification and bounding-box regression, within a single deep network framework; detecting the traffic participants in the field of view, marking them with corresponding rectangular boxes, and obtaining confidence scores;
(32) Identifying the obstacles around the vehicle with the millimeter-wave radar and recording the corresponding time.
5. The method for restoring road traffic scenes and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (4) is implemented as follows:
(41) According to the camera projection model, deriving geometrically the relation between the road-surface coordinate system and the image coordinate system, wherein the road plane captured by the camera is a trapezoidal region corresponding to a planar region in the image coordinate system, and the points of the road-surface coordinate system and the image plane coordinate system are in one-to-one correspondence;
(42) Obtaining the image-plane coordinates of the bottom edge of the obstacle's rectangular box and of the midpoint of the image-plane bottom edge, respectively, and deriving their road-surface plane coordinates through the geometric relation;
(43) Applying the two-point distance formula to the two road-surface coordinates obtained in step (42) to obtain the distance between the two points;
(44) Measuring the distance of a particular obstacle accurately with the millimeter-wave radar, solving the geometric relation in reverse, and re-calibrating the camera parameter, where the camera parameter refers to the camera pitch angle, thus obtaining an accurate camera pitch angle;
(45) At the same time, obtaining the lane lines in the image plane with a machine-vision algorithm; using the principle that two points determine a straight line, determining the straight lines on the road plane corresponding to the lane lines in the image, and solving for the camera pitch angle from the condition that the two straight lines are parallel;
(46) Mutually verifying the camera parameters obtained in steps (44) and (45) and solving in real time for an accurate camera pitch angle, which is used in measuring the distances of other obstacles to obtain the accurate distances between obstacles and the ego vehicle.
6. The method for restoring road traffic scenes and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (5) is implemented as follows:
(51) Using the method of step (4) of claim 1, obtaining the two distances S1 and S2 between the vehicle ahead and the ego vehicle at the Nth frame image and at the (N+K)th frame image, respectively;
(52) From the ego vehicle's speed and the time difference T between the Nth frame image and the (N+K)th frame image, computing the distance S3 traveled by the ego vehicle;
(53) Computing the distance traveled by the vehicle ahead: S = S2 + S3 - S1;
(54) Computing the vehicle speed V = S / T; for traffic participants not in the lane ahead of the vehicle, speed measurement relies on the millimeter-wave radar, finally giving the relative velocities between surrounding obstacles and the ego vehicle.
7. The method for restoring road traffic scenes and driver driving behavior based on multi-sensor data according to claim 1, characterized in that step (6) is implemented as follows:
(61) Tagging with time stamps, respectively, the obtained vehicle driving trajectory, the driver's concrete operations, the types of surrounding obstacles, the obstacle-to-ego-vehicle distances, and the obstacle-to-ego-vehicle relative velocities;
(62) Joining the above restoration results using time as the unique primary key, fusing data with identical times together to form <traffic scene, driving behavior> data pairs.
CN201710401034.5A 2017-05-31 2017-05-31 A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data Active CN107235044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710401034.5A CN107235044B (en) 2017-05-31 2017-05-31 A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710401034.5A CN107235044B (en) 2017-05-31 2017-05-31 A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data

Publications (2)

Publication Number Publication Date
CN107235044A true CN107235044A (en) 2017-10-10
CN107235044B CN107235044B (en) 2019-05-28

Family

ID=59984711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710401034.5A Active CN107235044B (en) A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data

Country Status (1)

Country Link
CN (1) CN107235044B (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107499262A (en) * 2017-10-17 2017-12-22 芜湖伯特利汽车安全系统股份有限公司 ACC/AEB systems and vehicle based on machine learning
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN108227707A (en) * 2017-12-25 2018-06-29 清华大学苏州汽车研究院(吴江) Automatic Pilot method based on laser radar and end-to-end deep learning method
CN108596081A (en) * 2018-04-23 2018-09-28 吉林大学 A kind of traffic detection method merged based on radar and video camera
CN108960183A (en) * 2018-07-19 2018-12-07 北京航空航天大学 A kind of bend target identification system and method based on Multi-sensor Fusion
CN108983219A (en) * 2018-08-17 2018-12-11 北京航空航天大学 A kind of image information of traffic scene and the fusion method and system of radar information
CN109002800A (en) * 2018-07-20 2018-12-14 苏州索亚机器人技术有限公司 The real-time identification mechanism of objective and recognition methods based on Multi-sensor Fusion
CN109061706A (en) * 2018-07-17 2018-12-21 江苏新通达电子科技股份有限公司 A method of the vehicle drive behavioural analysis based on T-Box and real-time road map datum
CN109263649A (en) * 2018-08-21 2019-01-25 北京汽车股份有限公司 Object identification method and object identification system under vehicle and its automatic driving mode
CN109606374A (en) * 2018-12-28 2019-04-12 北汽福田汽车股份有限公司 The method and apparatus of vehicle, fuel consumption data for verifying electronic horizon
CN109720312A (en) * 2017-10-30 2019-05-07 现代摩比斯株式会社 Autonomous emergency braking apparatus and its control method
CN109974687A (en) * 2017-12-28 2019-07-05 周秦娜 Co-located method, apparatus and system in a kind of multisensor room based on depth camera
CN110068818A (en) * 2019-05-05 2019-07-30 中国汽车工程研究院股份有限公司 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device
CN110309741A (en) * 2019-06-19 2019-10-08 百度在线网络技术(北京)有限公司 Obstacle detection method and device
CN110751836A (en) * 2019-09-26 2020-02-04 武汉光庭信息技术股份有限公司 Vehicle driving early warning method and system
CN110853393A (en) * 2019-11-26 2020-02-28 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN110901638A (en) * 2018-08-28 2020-03-24 大陆泰密克汽车系统(上海)有限公司 Driving assistance method and system
CN111267863A (en) * 2018-12-04 2020-06-12 广州汽车集团股份有限公司 Driver driving type identification method and device, storage medium and terminal equipment
CN111564051A (en) * 2020-04-28 2020-08-21 安徽江淮汽车集团股份有限公司 Safe driving control method, device and equipment for automatic driving automobile and storage medium
CN111681422A (en) * 2020-06-16 2020-09-18 衢州量智科技有限公司 Management method and system for tunnel road
CN111879314A (en) * 2020-08-10 2020-11-03 中国铁建重工集团股份有限公司 Multi-sensor fusion roadway driving equipment real-time positioning system and method
CN112298196A (en) * 2019-07-26 2021-02-02 丰田自动车株式会社 Annunciator information management system
WO2021036083A1 (en) * 2019-08-26 2021-03-04 格物汽车科技(苏州)有限公司 Driver behavior model development method and device for automatic driving, and storage medium
CN113379945A (en) * 2021-07-26 2021-09-10 陕西天行健车联网信息技术有限公司 Vehicle driving behavior analysis device, method and system
CN113494938A (en) * 2020-04-02 2021-10-12 三菱电机株式会社 Object recognition device and object recognition method
CN113642548A (en) * 2021-10-18 2021-11-12 氢山科技有限公司 Abnormal driving behavior detection device and device for hydrogen energy transport vehicle and computer equipment
CN113947893A (en) * 2021-09-03 2022-01-18 网络通信与安全紫金山实验室 Method and system for restoring driving scene of automatic driving vehicle
EP3927588A4 (en) * 2019-02-21 2022-11-09 Zoox, Inc. Motion prediction based on appearance
CN116204791A (en) * 2023-04-25 2023-06-02 山东港口渤海湾港集团有限公司 Construction and management method and system for vehicle behavior prediction scene data set

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010014090A (en) * 2008-07-07 2010-01-21 Toyota Motor Corp Control device for vehicle
CN101908272A (en) * 2010-07-20 2010-12-08 南京理工大学 Traffic safety sensing network based on mobile information
US20150019088A1 (en) * 2013-07-10 2015-01-15 Kia Motors Corporation Apparatus and method of processing road data
KR20170028631A (en) * 2015-09-04 2017-03-14 (주) 이즈테크놀로지 Method and Apparatus for Detecting Carelessness of Driver Using Restoration of Front Face Image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010014090A (en) * 2008-07-07 2010-01-21 Toyota Motor Corp Control device for vehicle
CN101908272A (en) * 2010-07-20 2010-12-08 南京理工大学 Traffic safety sensing network based on mobile information
US20150019088A1 (en) * 2013-07-10 2015-01-15 Kia Motors Corporation Apparatus and method of processing road data
KR20170028631A (en) * 2015-09-04 2017-03-14 (주) 이즈테크놀로지 Method and Apparatus for Detecting Carelessness of Driver Using Restoration of Front Face Image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张文娜, 李军: "Analysis of driver route choice behavior based on latent variables", Science Technology and Engineering *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107499262A (en) * 2017-10-17 2017-12-22 芜湖伯特利汽车安全系统股份有限公司 ACC/AEB systems and vehicle based on machine learning
CN109720312B (en) * 2017-10-30 2021-08-10 现代摩比斯株式会社 Autonomous emergency brake device and control method thereof
CN109720312A (en) * 2017-10-30 2019-05-07 现代摩比斯株式会社 Autonomous emergency braking apparatus and its control method
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN108227707A (en) * 2017-12-25 2018-06-29 清华大学苏州汽车研究院(吴江) Automatic Pilot method based on laser radar and end-to-end deep learning method
CN108227707B (en) * 2017-12-25 2021-11-26 清华大学苏州汽车研究院(吴江) Automatic driving method based on laser radar and end-to-end deep learning method
CN109974687A (en) * 2017-12-28 2019-07-05 周秦娜 Co-located method, apparatus and system in a kind of multisensor room based on depth camera
CN108596081A (en) * 2018-04-23 2018-09-28 吉林大学 A kind of traffic detection method merged based on radar and video camera
CN108596081B (en) * 2018-04-23 2021-04-20 吉林大学 Vehicle and pedestrian detection method based on integration of radar and camera
CN109061706A (en) * 2018-07-17 2018-12-21 江苏新通达电子科技股份有限公司 A method of the vehicle drive behavioural analysis based on T-Box and real-time road map datum
CN108960183A (en) * 2018-07-19 2018-12-07 北京航空航天大学 A kind of bend target identification system and method based on Multi-sensor Fusion
CN108960183B (en) * 2018-07-19 2020-06-02 北京航空航天大学 Curve target identification system and method based on multi-sensor fusion
CN109002800A (en) * 2018-07-20 2018-12-14 苏州索亚机器人技术有限公司 The real-time identification mechanism of objective and recognition methods based on Multi-sensor Fusion
CN108983219A (en) * 2018-08-17 2018-12-11 北京航空航天大学 A kind of image information of traffic scene and the fusion method and system of radar information
CN109263649A (en) * 2018-08-21 2019-01-25 北京汽车股份有限公司 Object identification method and object identification system under vehicle and its automatic driving mode
CN110901638B (en) * 2018-08-28 2021-05-28 大陆泰密克汽车系统(上海)有限公司 Driving assistance method and system
CN110901638A (en) * 2018-08-28 2020-03-24 大陆泰密克汽车系统(上海)有限公司 Driving assistance method and system
CN111267863A (en) * 2018-12-04 2020-06-12 广州汽车集团股份有限公司 Driver driving type identification method and device, storage medium and terminal equipment
CN111267863B (en) * 2018-12-04 2021-03-19 广州汽车集团股份有限公司 Driver driving type identification method and device, storage medium and terminal equipment
CN109606374A (en) * 2018-12-28 2019-04-12 北汽福田汽车股份有限公司 The method and apparatus of vehicle, fuel consumption data for verifying electronic horizon
CN109606374B (en) * 2018-12-28 2020-07-10 智博汽车科技(上海)有限公司 Vehicle, method and device for verifying fuel consumption data of electronic horizon
EP3927588A4 (en) * 2019-02-21 2022-11-09 Zoox, Inc. Motion prediction based on appearance
CN110068818A (en) * 2019-05-05 2019-07-30 中国汽车工程研究院股份有限公司 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device
CN110309741A (en) * 2019-06-19 2019-10-08 百度在线网络技术(北京)有限公司 Obstacle detection method and device
CN112298196A (en) * 2019-07-26 2021-02-02 丰田自动车株式会社 Annunciator information management system
CN112298196B (en) * 2019-07-26 2024-05-28 丰田自动车株式会社 Annunciator information management system
WO2021036083A1 (en) * 2019-08-26 2021-03-04 格物汽车科技(苏州)有限公司 Driver behavior model development method and device for automatic driving, and storage medium
CN110751836A (en) * 2019-09-26 2020-02-04 武汉光庭信息技术股份有限公司 Vehicle driving early warning method and system
CN110853393B (en) * 2019-11-26 2020-12-11 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN110853393A (en) * 2019-11-26 2020-02-28 清华大学 Intelligent network vehicle test field data acquisition and fusion method and system
CN113494938A (en) * 2020-04-02 2021-10-12 三菱电机株式会社 Object recognition device and object recognition method
CN113494938B (en) * 2020-04-02 2024-05-17 三菱电机株式会社 Object recognition device and object recognition method
CN111564051B (en) * 2020-04-28 2021-07-20 安徽江淮汽车集团股份有限公司 Safe driving control method, device and equipment for automatic driving automobile and storage medium
CN111564051A (en) * 2020-04-28 2020-08-21 安徽江淮汽车集团股份有限公司 Safe driving control method, device and equipment for automatic driving automobile and storage medium
CN111681422A (en) * 2020-06-16 2020-09-18 衢州量智科技有限公司 Management method and system for tunnel road
CN111879314A (en) * 2020-08-10 2020-11-03 中国铁建重工集团股份有限公司 Multi-sensor fusion roadway driving equipment real-time positioning system and method
CN111879314B (en) * 2020-08-10 2022-08-02 中国铁建重工集团股份有限公司 Multi-sensor fusion roadway driving equipment real-time positioning system and method
CN113379945A (en) * 2021-07-26 2021-09-10 陕西天行健车联网信息技术有限公司 Vehicle driving behavior analysis device, method and system
CN113947893A (en) * 2021-09-03 2022-01-18 网络通信与安全紫金山实验室 Method and system for restoring driving scene of automatic driving vehicle
CN113642548A (en) * 2021-10-18 2021-11-12 氢山科技有限公司 Abnormal driving behavior detection device and device for hydrogen energy transport vehicle and computer equipment
CN113642548B (en) * 2021-10-18 2022-03-25 氢山科技有限公司 Abnormal driving behavior detection device and device for hydrogen energy transport vehicle and computer equipment
CN116204791A (en) * 2023-04-25 2023-06-02 山东港口渤海湾港集团有限公司 Construction and management method and system for vehicle behavior prediction scene data set
CN116204791B (en) * 2023-04-25 2023-08-11 山东港口渤海湾港集团有限公司 Construction and management method and system for vehicle behavior prediction scene data set

Also Published As

Publication number Publication date
CN107235044B (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN107235044B (en) A method for restoring road traffic scenes and driver driving behavior based on multi-sensor data
US8791996B2 (en) Image processing system and position measurement system
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
EP2372308B1 (en) Image processing system and vehicle control system
CN105260699B (en) A kind of processing method and processing device of lane line data
KR102595897B1 (en) Method and apparatus of determining road line
JP5468108B2 (en) Method and system for detecting road terrain for a driver assistance system
US8428362B2 (en) Scene matching reference data generation system and position measurement system
US8369577B2 (en) Vehicle position recognition system
US8452103B2 (en) Scene matching reference data generation system and position measurement system
CN102208036B (en) Vehicle position detection system
CN110531376A (en) Detection of obstacles and tracking for harbour automatic driving vehicle
KR102091580B1 (en) Method for collecting road signs information using MMS
US20110242319A1 (en) Image processing system and position measurement system
CN109583415A (en) A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
CN112861748B (en) Traffic light detection system and method in automatic driving
CN106446785A (en) Passable road detection method based on binocular vision
CN109583312A (en) Lane detection method, apparatus, equipment and storage medium
CN114808649B (en) Highway scribing method based on vision system control
WO2022230739A1 (en) Object tracking device
US20230314162A1 (en) Map generation apparatus
US20230314163A1 (en) Map generation apparatus
US20220307861A1 (en) Map generation apparatus
CN116802581A (en) Automatic driving perception system testing method, system and storage medium based on aerial survey data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210824

Address after: 100191 045, 2f, commercial garage, No. 17, Zhichun Road, Haidian District, Beijing

Patentee after: Chuangketianxia (Beijing) Technology Development Co.,Ltd.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University

TR01 Transfer of patent right
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20171010

Assignee: Beijing Zhimou Technology Development Co.,Ltd.

Assignor: Chuangketianxia (Beijing) Technology Development Co.,Ltd.

Contract record no.: X2023990000843

Denomination of invention: A Method for Restoring Road Traffic Scenarios and Driver Driving Behavior Based on Multi-sensor Data

Granted publication date: 20190528

License type: Exclusive License

Record date: 20231008

EE01 Entry into force of recordation of patent licensing contract