CN110264586A - Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system - Google Patents

Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system

Info

Publication number
CN110264586A
CN110264586A (application CN201910454082.XA)
Authority
CN
China
Prior art keywords
data
output
vehicle
driving
semantics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910454082.XA
Other languages
Chinese (zh)
Inventor
缪其恒
金智
郑皓洲
吴建丰
王江明
许炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Leapmotor Technology Co Ltd
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Zero Run Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zero Run Technology Co Ltd filed Critical Zhejiang Zero Run Technology Co Ltd
Priority to CN201910454082.XA
Publication of CN110264586A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/005 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/008 Registering or indicating the working of vehicles communicating information to a remotely located station
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G07C 5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

Abstract

The present invention relates to a driving-path data acquisition, analysis and uploading method for an L3-level automated driving system, comprising the following steps: vehicle-end driving data acquisition, including the acquisition and synchronization of driving data and the encoding and caching of driving data; online data analysis of the collected vehicle-end driving data, including definition of the automated driving system's intermediate-result output interfaces, target-matching consistency detection, positioning-landmark semantic output, extreme vehicle-operation detection, and human-machine decision consistency detection; data communication, in which the vehicle-end driving data are prepared for upload; and server-side reception and storage of the vehicle-end driving data. The invention meets the development and validation needs of the perception, positioning and planning/decision modules of an L3-level automated driving system; the detection of extreme autonomous-driving scenes greatly reduces the bandwidth occupied by data recording and uploading; and the corresponding algorithm modules can be run online, maximizing front-end data mining and validation and greatly reducing the human-resource demand for post-hoc data screening.

Description

Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system
Technical field
The present invention relates to automatic vehicle control systems, and in particular to a driving-path data acquisition, analysis and uploading method for an L3-level automated driving system.
Background art
Intelligentization is one of the important trends in today's Chinese automobile industry, and intelligent-driving technologies and systems are expected to develop rapidly worldwide between 2020 and 2030. Automated driving systems are divided by degree of automation, from low to high, into six levels, L0 to L5. An L3-level automated driving system is defined as one that is allowed to replace the driver and drive the vehicle autonomously within a defined driving scenario, for example to relieve the driving burden in a highway scenario. At present, L1- and L2-level advanced driver-assistance systems have reached production in some vehicle models, whereas L3-level automated driving systems are still at the prototype stage and require a large amount of testing and validation work.
Compared with L1- and L2-level driver-assistance systems, the application scenarios of an L3-level automated driving system are more complex, and the amount of driving data required for development and validation is larger. Machine-learning methods are used more widely in systems of level L3 and above, so the demand for valid data from the corresponding driving scenarios also multiplies. Likewise, the data-transmission and computation loads of an L3-level automated driving system are several times those of L1- and L2-level driver-assistance systems. Extracting the valid data required by an L3-level automated driving system from driving-route scenarios therefore has great practical value for the industrialization of such systems. Uploading all run-time scenario data of an L3 automated driving system in real time is infeasible even in the 5G era: it requires enormous transmission bandwidth as well as a large amount of subsequent manpower to classify and verify the recorded driving scenarios.
Most existing vehicle data-recording systems are based on CAN-bus data recording. Such systems usually store data locally in the vehicle, and both the data bandwidth and the storage space required are very limited, so their records are of little reference value for an automated driving system. Aftermarket driving-data recorders (for fleet and commercial vehicles) can record multi-channel video streams (inside and outside the vehicle) together with some CAN-bus vehicle data (speed, etc.) and GPS information. However, such devices have no, or very limited, front-end computing power, and their recordings can only be used for part of the vision-system development and test work after extensive post-processing (an offline development mode).
The data-storage and transmission modes of existing vehicle driving-record systems have the following disadvantages: (i) they cannot completely record the development and test data required by automated driving systems of level L3 and above; (ii) they record a large amount of redundant data that is of little help to system development and validation, consuming unnecessary transmission and storage resources; (iii) a large amount of manpower is needed to screen the extracted valid data offline, and problems can only be verified offline; (iv) aftermarket devices differ from the actual on-board computing platform in operating characteristics and capability, so algorithm modules cannot be iterated and tested online.
Summary of the invention
To solve the above technical problems, the present invention provides a driving-path data acquisition, analysis and uploading method for an L3-level automated driving system that achieves the following aims: (i) meeting the development and validation needs of the perception, positioning and planning/decision modules of an L3-level automated driving system; (ii) detecting extreme autonomous-driving scenes, which greatly reduces the bandwidth occupied by data recording and uploading; (iii) running the corresponding algorithm modules online, maximizing front-end data mining and validation and greatly reducing the human-resource demand for post-hoc data screening.
The above technical problems are mainly solved by the following technical solution: the driving-path data acquisition, analysis and uploading method for an L3-level automated driving system of the present invention comprises the following steps:
(1) acquiring vehicle-end driving data;
(2) performing online data analysis on the collected vehicle-end driving data;
(3) data communication: preparing the vehicle-end driving data for upload;
(4) receiving and storing the vehicle-end driving data at the server side.
The invention is based on on-board WIFI or 4G network communication and edge-intelligent computing, and can satisfy the data-recording requirements of the development and validation of L3-level autonomous-driving functions. A vehicle equipped with an L3-level automated driving system carries cameras (vision), millimeter-wave radars, an integrated navigation device and an on-board data-processing terminal; the data-processing terminal consists of three parts: a data acquisition module, an intelligent analysis module and a communication module. The data that the invention can automatically collect and upload include: positioning-landmark data, accident-scene data, extreme vehicle-operation scene data, scene data with abnormal perception matching, scene data in which human and machine responses mismatch, and custom-requested data. The invention meets the development and validation needs of the perception, positioning and planning/decision modules of an L3-level automated driving system; the detection of extreme autonomous-driving scenes greatly reduces the bandwidth occupied by data recording and uploading; and the corresponding algorithm modules can be run online, maximizing front-end data mining and validation and greatly reducing the human-resource demand for post-hoc data screening.
Preferably, step (1) comprises the following sub-steps:
(11) Data acquisition and synchronization: software synchronization is used; the GPS clock or the on-board terminal system clock is acquired to synchronize the individual driving-data channels, and a driving-data structure is constructed that contains the time, image data, raw radar data, integrated navigation data and vehicle dynamics parameters.
(12) Data encoding and caching: the image data are encoded with H264 or H265, and the other data are encoded as virtual CAN messages; a data register is set up to cache the driving-data structures.
Preferably, step (2) comprises the following sub-steps: (21) definition of the automated driving system's intermediate-result output interfaces; (22) target-matching consistency detection; (23) positioning-landmark semantic output; (24) extreme vehicle-operation detection; (25) human-machine decision consistency detection.
Preferably, the specific method of the target-matching consistency detection of step (22) is:
According to the outputs of the millimeter-wave radar and the visual perception system, driving-data segments whose temporal matching distance exceeds a preset threshold dmin are extracted. The matching distance d is computed from the radar output target coordinates and the on-board-camera output target coordinates accumulated over the target life period n; if d > dmin, the driving data of the corresponding segment are uploaded.
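The matching-distance formula itself appears only as a figure in the original publication. A minimal sketch of one plausible reading, assuming d is the mean Euclidean distance between the radar and camera target coordinates (top-view vehicle frame) over the n frames of the target life period, is:

```python
import math

def matching_distance(radar_xy, camera_xy):
    """Mean Euclidean distance between radar and camera target positions over
    the n frames of the target's life period (one (x, y) pair per frame, in the
    top-view vehicle frame). The averaging is an assumption: the patent defines
    d only via a figure."""
    n = len(radar_xy)
    assert n == len(camera_xy) and n > 0
    return sum(math.hypot(xr - xc, yr - yc)
               for (xr, yr), (xc, yc) in zip(radar_xy, camera_xy)) / n

def segment_needs_upload(radar_xy, camera_xy, d_min):
    """Flag the corresponding driving-data segment for upload when d > dmin."""
    return matching_distance(radar_xy, camera_xy) > d_min
```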
Preferably, the specific method of the extreme vehicle-operation detection of step (24) is:
The yaw rate, longitudinal acceleration, longitudinal deceleration and lateral acceleration output by the inertial navigation system are encoded as time series, and the time-series data are classified into extreme vehicle operations using either a numerical-analysis method or a machine-learning method; extreme vehicle operations are divided into hard acceleration, hard braking and sharp turning.
The numerical-analysis method is: if the measured longitudinal acceleration exceeds a set threshold A1min for more than N consecutive samples, a hard-acceleration operation is confirmed; if the measured longitudinal deceleration is below a set threshold A2min for more than N consecutive samples, a hard-braking operation is confirmed; if the measured lateral acceleration and yaw rate exceed set thresholds AYmin and Tmin, respectively, for more than N consecutive samples, a sharp-turn operation is confirmed.
The machine-learning method is: a support vector machine or a long short-term memory network is trained offline on time-series samples of extreme vehicle-operation driving data; the trained model is deployed on the on-board analysis terminal, takes the time-series-encoded inertial navigation data as input, and outputs event signals for hard acceleration, hard braking or sharp turning.
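A minimal sketch of the numerical-analysis branch, assuming each channel is a sampled time series and N consecutive samples beyond the threshold confirm an event (the embodiment's default is N = 3); treating the lateral-acceleration and yaw-rate conditions independently rather than strictly simultaneously is a simplification:

```python
def exceeds_consecutively(samples, threshold, n_required, below=False):
    """True if the signal stays past the threshold for more than n_required
    consecutive samples ('below=True' checks 'less than' instead)."""
    run = 0
    for s in samples:
        hit = (s < threshold) if below else (s > threshold)
        run = run + 1 if hit else 0
        if run > n_required:
            return True
    return False

def classify_extreme_operations(accel, decel, lat_accel, yaw_rate,
                                a1_min, a2_min, ay_min, t_min, n=3):
    """Return the extreme-operation labels found in one time-series window."""
    events = []
    if exceeds_consecutively(accel, a1_min, n):
        events.append("hard_acceleration")
    if exceeds_consecutively(decel, a2_min, n, below=True):
        events.append("hard_braking")
    if exceeds_consecutively(lat_accel, ay_min, n) and exceeds_consecutively(yaw_rate, t_min, n):
        events.append("sharp_turn")
    return events
```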
Preferably, the specific method of the human-machine decision consistency detection of step (25) is:
According to the planning-layer output and the vehicle-pose output, and with a preset preview distance, driving-data segments in which the deviation between the actual trajectory and the planned trajectory exceeds a preset threshold Dmin are extracted. The normalized trajectory deviation D is computed in the vehicle coordinate system from the actual trajectory points [Xi, Yi] and the planned trajectory points [xi, yi], where m is the number of trajectory points; if D > Dmin, the driving data of the corresponding segment are uploaded.
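The deviation formula is likewise given only as a figure in the original publication. A minimal sketch, assuming D is the mean point-wise Euclidean distance between the actual and planned trajectory points in the vehicle frame:

```python
import math

def normalized_trajectory_deviation(actual_xy, planned_xy):
    """Mean point-wise distance between the actual trajectory points [Xi, Yi]
    and the planned trajectory points [xi, yi] in the vehicle frame; dividing
    by the number of points m is an assumed normalization."""
    m = len(actual_xy)
    assert m == len(planned_xy) and m > 0
    return sum(math.hypot(X - x, Y - y)
               for (X, Y), (x, y) in zip(actual_xy, planned_xy)) / m

# Upload the segment when the driver's path departs too far from the plan:
#     if normalized_trajectory_deviation(actual, planned) > D_min: upload(segment)
```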
Preferably, the specific method of the definition of the automated driving system's intermediate-result output interfaces of step (21) is: the interfaces comprise the perception-target output, the positioning-landmark semantic output, the vehicle-pose output and the planning/decision output;
Perception-target output: comprises the vision-system target output, the millimeter-wave-radar target output and the millimeter-wave-radar raw target point-cloud output;
Positioning-landmark semantic output: the positioning-landmark semantics are output as binary map layers, comprising the drivable-area semantic output, the lane-boundary semantic output and the indicative road-sign semantic output;
Vehicle-pose output: comprises vehicle position, speed, heading angle and the 6-axis inertial-sensor output;
Planning/decision output: given as a preview-point trajectory sampled at a fixed longitudinal spacing.
Preferably, the specific method of the positioning-landmark semantic output of step (23) is:
The positioning-landmark semantic output is constructed from the semantic output of the perception module and the vehicle-pose estimate of the positioning module; either a key-frame semantic output method or a compressed full-semantic output method is used.
Key-frame semantic output method: according to the integrated mileage estimate of the positioning module, one frame of key semantics is extracted every 50 meters and a key-frame semantic positioning register is built, which stores the key frames together with the vehicle latitude/longitude at the corresponding moments; every 20 frames are packed and compressed once, and an upload-request signal is issued.
Compressed full-semantic output method: the semantics include lane-level semantics and guidance-instruction-level semantics; the lane count, lane width, boundary types, current lane and offset from the lane center are extracted by post-processing the vision system's lane and drivable-area semantic output layers; road-surface markings and spatial signs are extracted by post-processing the indicative road-sign semantic output layer; every 1 km of travel, these data are packed and compressed together with the vehicle latitude/longitude at the corresponding moments, and an upload-request signal is issued.
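A minimal sketch of the key-frame branch, assuming the register simply accumulates (semantic layer, latitude, longitude) triples every 50 m of integrated mileage and hands a compressed packet to the uploader every 20 frames; pickle and zlib stand in for whatever serialization and codec the terminal actually uses:

```python
import pickle
import zlib

class KeyframeSemanticRegister:
    """Collect one semantic key frame per 50 m of integrated mileage and emit a
    compressed packet (with an upload request) every 20 frames."""

    def __init__(self, frame_spacing_m=50.0, frames_per_packet=20):
        self.frame_spacing_m = frame_spacing_m
        self.frames_per_packet = frames_per_packet
        self._last_mileage = None
        self._frames = []

    def update(self, mileage_m, semantic_layer, lat, lon):
        """Feed the current mileage, binary semantic layer and vehicle lat/lon;
        returns a compressed packet when one is ready, else None."""
        if self._last_mileage is None or mileage_m - self._last_mileage >= self.frame_spacing_m:
            self._last_mileage = mileage_m
            self._frames.append((semantic_layer, lat, lon))
        if len(self._frames) >= self.frames_per_packet:
            packet = zlib.compress(pickle.dumps(self._frames))
            self._frames = []
            return packet  # caller issues the upload-request signal
        return None
```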
Preferably, the specific method of the data communication of step (3) is: according to the online data analysis results of step (2), the driving-data queue acquired in step (1) is compressed, and the compressed file is named according to a predefined rule; the compressed data are then transparently transmitted to the server over the 4G network or a wireless network using the TCP or UDP protocol.
The specific method of step (4), receiving and storing the vehicle-end driving data at the server side, is: at the server side, the driving data transmitted by the on-board terminal are received via the TCP or UDP protocol and stored with the data-acquisition date as the sub-directory for the associated driving data.
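A minimal sketch of this upload path, assuming LZ4 compression (named in the embodiment; the lz4 Python package is used here), plain TCP, and an illustrative file-naming rule built from a vehicle ID, trigger type and timestamp (the patent's predefined naming rule is not specified):

```python
import pathlib
import socket
import time

import lz4.frame  # pip install lz4


def upload_segment(payload: bytes, vehicle_id: str, trigger: str,
                   host: str, port: int) -> None:
    """Vehicle side: compress a driving-data queue with LZ4 and push it over TCP.
    The simple name/length header framing is illustrative."""
    name = f"{vehicle_id}_{trigger}_{time.strftime('%Y%m%d%H%M%S')}.lz4"
    blob = lz4.frame.compress(payload)
    with socket.create_connection((host, port)) as sock:
        sock.sendall(f"{name}\n{len(blob)}\n".encode() + blob)


def store_segment(name: str, blob: bytes, root: str = "/data/driving") -> pathlib.Path:
    """Server side: decompress and store under a per-acquisition-date sub-directory."""
    date_dir = pathlib.Path(root) / name.rsplit("_", 1)[-1][:8]  # YYYYMMDD
    date_dir.mkdir(parents=True, exist_ok=True)
    path = date_dir / name
    path.write_bytes(lz4.frame.decompress(blob))
    return path
```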
The beneficial effects of the present invention are: based on on-board WIFI or 4G network communication and edge-intelligent computing, the invention filters out the data streams in most driving scenarios that contribute little to system optimization and upgrading, and can collect the compressed landmark data needed for autonomous-driving positioning, accident-scene data, scene data under specified extreme vehicle operations, scene data in which the radar and camera detection results match abnormally, and scene data in which human and machine responses differ significantly. The invention achieves the following effects: (i) it meets the development and validation needs of the perception, positioning and planning/decision modules of an L3-level automated driving system; (ii) automatic extreme-scene detection greatly reduces the bandwidth occupied by data recording and uploading; (iii) the corresponding algorithm modules can be run online, maximizing front-end data mining and validation and greatly reducing the human-resource demand for post-hoc data screening.
Description of the drawings
Fig. 1 is a top-view structural diagram of a vehicle according to the present invention.
Fig. 2 is an algorithm flow chart of the present invention.
Specific embodiment
The technical solution of the present invention is further described below through embodiments and with reference to the accompanying drawings.
Embodiment: the driving-path data acquisition, analysis and uploading method for an L3-level automated driving system of this embodiment is based on on-board WIFI or 4G network communication and edge-intelligent computing, and satisfies the data-recording requirements of L3-level autonomous-driving function development and validation. As shown in Fig. 1, a vehicle equipped with an L3-level automated driving system carries cameras (vision), millimeter-wave radars, an integrated navigation device and an on-board data-processing terminal. The data-processing terminal consists of three parts: a data acquisition module, an intelligent analysis module and a communication module. The data acquisition module mainly comprises the various data interfaces, an acquisition chip (microcontroller) and a video-encoding module, and is responsible for acquiring, synchronizing and encoding the individual driving-data channels. The positioning module mainly comprises GPS and inertial-navigation modules and is responsible for positioning-landmark extraction and for associating event records with the map. The intelligent analysis module mainly comprises the L3-level automated-driving processing terminal and the on-board intelligent analysis terminal (integrated multi-core ARM plus a neural-network acceleration unit), and is responsible for processing the scene data stream in real time and selecting, according to preset rules, the scene data stream to be recorded. The communication module comprises 4G and WIFI modules and is mainly responsible for compressing and uploading the encoded driving data.
The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system, as shown in Fig. 2, comprises the following steps:
(1) Vehicle-end driving data acquisition:
The vehicle-end driving data consist of scene data (raw radar and vision-system data), position and attitude information (integrated-navigation-device data) and vehicle dynamics data (speed, throttle, braking and steering inputs). The acquisition of the vehicle-end driving data comprises the following sub-steps:
(11) Data acquisition and synchronization: software synchronization is used; the GPS clock or the on-board terminal system clock is acquired to synchronize the individual driving-data channels, and a driving-data structure is constructed that contains the time, image data, raw radar data, integrated navigation data and vehicle dynamics parameters. The driving-data structure contains the following attributes:
Time: the time point corresponding to the driving data;
Image data: 8 channels of raw image input (Img1-Img8);
Raw radar data: the object list output by the radar system (default maximum of 32 objects: Obj1-Obj32);
Integrated navigation data: including latitude/longitude and the vehicle's 6-degree-of-freedom motion information (3-axis acceleration and 3-axis angular velocity);
Vehicle dynamics parameters: including speed, steering-wheel angle, throttle stroke and braking input, etc.;
(12) Data encoding and caching: the image data are encoded with H264 or H265; the other data are encoded as virtual CAN messages (64-bit); a data register of configurable length is set up (default 300 entries, i.e. 12 seconds of 25 fps data or 10 seconds of 30 fps data) to cache the driving-data structures.
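A minimal sketch of the driving-data structure and register described above, using plain Python containers (a dataclass per synchronized frame and a fixed-length deque as the register); the field names are illustrative, not the patent's actual definitions:

```python
from collections import deque
from dataclasses import dataclass
from typing import List

@dataclass
class DrivingFrame:
    """One synchronized sample of all driving-data channels."""
    timestamp: float           # GPS clock or on-board terminal system clock
    images: List[bytes]        # 8 raw camera inputs (Img1..Img8), H264/H265-encoded downstream
    radar_objects: List[dict]  # up to 32 radar objects (Obj1..Obj32)
    navigation: dict           # lat/lon + 6-DOF motion (3-axis accel, 3-axis angular rate)
    dynamics: dict             # speed, steering-wheel angle, throttle, braking

class DrivingDataRegister:
    """Fixed-length cache of driving frames (default 300, i.e. 12 s at 25 fps or 10 s at 30 fps)."""

    def __init__(self, length: int = 300):
        self._buffer = deque(maxlen=length)

    def push(self, frame: DrivingFrame) -> None:
        self._buffer.append(frame)

    def snapshot(self) -> List[DrivingFrame]:
        """Return the cached queue, e.g. when an upload trigger fires."""
        return list(self._buffer)
```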
(2) Online data analysis of the collected vehicle-end driving data:
The perception, fusion, positioning and planning/decision algorithm modules running on the L3-level automated-driving processing terminal output their intermediate results at each level; after post-processing by the on-board intelligent analysis terminal, the user-defined upload types are determined and data-upload instructions are sent to the communication module. The details comprise the following sub-steps:
(21) Definition of the automated driving system's intermediate-result output interfaces. The specific method is: the interfaces comprise the perception-target output, the positioning-landmark semantic output, the vehicle-pose output and the planning/decision output;
Perception-target output: comprises the vision-system target output (front-view, blind-zone and rear-view cameras; default upper limit of 16 targets per camera; attributes include target category, longitudinal distance, lateral distance and relative velocity, etc.), the millimeter-wave-radar target output (front and rear 77 GHz radars and blind-zone 24 GHz radars; default upper limit of 16 targets per radar; attributes include radial distance, angle and relative velocity, etc.) and the millimeter-wave-radar raw target point-cloud output (default of 150 point targets per radar; attributes include reflectivity, radial distance, angle and relative velocity, etc.);
Positioning-landmark semantic output: the positioning-landmark semantics are output as binary map layers, comprising the drivable-area semantic output, the lane-boundary semantic output and the indicative road-sign semantic output;
Vehicle-pose output: including vehicle position (latitude/longitude, or the planar position in a user-initialized world coordinate system), speed, heading angle and the 6-axis inertial-sensor output (longitudinal, lateral and vertical acceleration, and yaw, pitch and roll angular rates);
Planning/decision output: given as a preview-point trajectory with a fixed longitudinal spacing (default 1 meter) and a default of 10 preview points;
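A minimal sketch of these intermediate-result interfaces as plain data types; the field names and units are illustrative, not the patent's on-board interface definition:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PerceptionTarget:       # vision: <=16 targets per camera; radar: <=16 per radar
    category: str             # vision targets only
    longitudinal_m: float
    lateral_m: float
    rel_velocity_mps: float

@dataclass
class RadarPointCloud:        # <=150 point targets per radar
    points: List[Tuple[float, float, float, float]]  # (reflectivity, range, angle, rel. velocity)

@dataclass
class VehiclePose:
    lat: float
    lon: float
    speed_mps: float
    heading_rad: float
    imu: Tuple[float, float, float, float, float, float]  # 3-axis accel + yaw/pitch/roll rates

@dataclass
class PlanningOutput:
    preview_points: List[Tuple[float, float]]  # default 10 points at 1 m longitudinal spacing
```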
(22) Target-matching consistency detection. The specific method is:
According to the outputs of the millimeter-wave radar and the visual perception system, driving-data segments whose temporal matching distance exceeds the preset threshold dmin are extracted; the matching distance d is computed, as described above, from the radar output target coordinates and the on-board-camera output target coordinates over the target life period n; if d > dmin, the driving data of the corresponding segment are uploaded;
(23) Positioning-landmark semantic output. The specific method is:
The positioning-landmark semantic output is constructed from the semantic output of the perception module and the vehicle-pose estimate of the positioning module; either the key-frame semantic output method or the compressed full-semantic output method is used.
Key-frame semantic output method: according to the integrated mileage estimate of the positioning module, one frame of key semantics (i.e. the post-processed binary semantic output layer of the vision system) is extracted every 50 meters and a key-frame semantic positioning register is built, which stores the key frames and the vehicle latitude/longitude at the corresponding moments; every 20 frames (i.e. every 1 km of travel) are packed and compressed once, and an upload-request signal is issued.
Compressed full-semantic output method: the semantics include lane-level semantics and guidance-instruction-level semantics; the lane count, lane width, boundary types, current lane and offset from the lane center are extracted by post-processing the vision system's lane and drivable-area semantic output layers; road-surface markings and spatial signs are extracted by post-processing the indicative road-sign semantic output layer; every 1 km of travel, these data are packed and compressed together with the vehicle latitude/longitude at the corresponding moments, and an upload-request signal is issued;
(24) Extreme vehicle-operation detection. The specific method is:
The yaw rate, longitudinal acceleration, longitudinal deceleration and lateral acceleration output by the inertial navigation system are encoded as time series, and the time-series data are classified into extreme vehicle operations using either a numerical-analysis method or a machine-learning method (SVM or LSTM); extreme vehicle operations are divided into hard acceleration, hard braking and sharp turning.
Numerical-analysis method: if the measured longitudinal acceleration exceeds the set threshold A1min for more than N consecutive samples (N defaults to 3), a hard-acceleration operation is confirmed; if the measured longitudinal deceleration is below the set threshold A2min for more than N consecutive samples (N defaults to 3), a hard-braking operation is confirmed; if the measured lateral acceleration and yaw rate exceed the set thresholds AYmin and Tmin, respectively, for more than N consecutive samples (N defaults to 3), a sharp-turn operation is confirmed.
Machine-learning method: a support vector machine (SVM) or a long short-term memory network (LSTM) is trained offline on time-series samples of extreme vehicle-operation driving data; the trained models are deployed on the on-board analysis terminal (binary classification of hard acceleration/braking and binary classification of sharp turns), take the time-series-encoded inertial navigation data as input, and output event signals for hard acceleration, hard braking or sharp turning;
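A minimal sketch of the machine-learning branch, assuming scikit-learn's SVC, fixed-length inertial windows flattened into feature vectors, and binary event labels; the patent does not specify the time-series encoding, so the feature construction here is purely illustrative:

```python
import joblib
import numpy as np
from sklearn.svm import SVC  # pip install scikit-learn

WINDOW = 50  # samples per window; each sample = [yaw_rate, long_accel, long_decel, lat_accel]

def encode_window(imu_window: np.ndarray) -> np.ndarray:
    """Flatten a (WINDOW, 4) inertial window into one feature vector."""
    return imu_window.reshape(-1)

def train_offline(windows, labels, path="extreme_op_svm.joblib"):
    """Offline training on labelled extreme-operation driving samples."""
    X = np.stack([encode_window(w) for w in windows])
    model = SVC(kernel="rbf")
    model.fit(X, np.asarray(labels))  # e.g. 1 = extreme event, 0 = normal driving
    joblib.dump(model, path)
    return model

def detect_online(model, imu_window: np.ndarray) -> bool:
    """On-board inference: True when the window is classified as an extreme operation."""
    return bool(model.predict(encode_window(imu_window)[None, :])[0])
```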
(25) Human-machine decision consistency detection. The specific method is:
According to the planning-layer output and the vehicle-pose output, and with the preset preview distance, driving-data segments in which the deviation between the actual trajectory and the planned trajectory exceeds the preset threshold Dmin are extracted; the normalized trajectory deviation D is computed in the vehicle coordinate system from the actual trajectory points [Xi, Yi] and the planned trajectory points [xi, yi], where m is the number of trajectory points (default 10); if D > Dmin, the driving data of the corresponding segment are uploaded;
(26) Custom data request: according to request-signal input from the human-machine interaction port, a data-upload instruction is sent by a preset rule, i.e. the driving data currently held in the register of step (12) are uploaded at the moment of the request;
(3) Data communication: the vehicle-end driving data are prepared for upload. The specific method is:
According to the online data analysis results of step (2), the driving-data queue in the register of step (12) is compressed (LZ4 compression may be used), and the compressed file is named according to a predefined rule; the on-board data-processing terminal is set up as the server, and the compressed data are transparently transmitted over the 4G network or a wireless network using the TCP or UDP protocol.
(4) The server receives and stores the vehicle-end driving data. The specific method is:
At the storage-server (cloud) side, a client is established and, via the TCP or UDP protocol, receives the driving data transmitted by the on-board data-processing terminal; the associated driving data are stored under an NTFS file system with the data-acquisition date as the sub-directory.
Based on on-board WIFI or 4G network communication and edge-intelligent computing, the invention can automatically collect and upload the following driving data in road-driving scenarios with a human driver:
1. Positioning-landmark data: optionally, structured semantics (including road-surface markings, spatial signs, etc.) are extracted for the positioning landmarks of the application scenarios defined for the L3 system functions (including parking scenarios, highway scenarios, etc.), compressed according to a predefined data structure, and uploaded to the server (cloud).
2. Accident-scene data: optionally, the vehicle collision state is identified from the collision-sensor signals, and the corresponding scene data are saved according to a predefined accident-recording rule.
3. Extreme vehicle-operation scene data: optionally, extreme vehicle-dynamics states such as sharp turns and hard acceleration/braking are identified from the measurements of a 3-axis or 6-axis inertial navigation unit (gyroscope); the corresponding scene data are recorded and uploaded according to a predefined event-correlation rule.
4. Abnormal perception-matching scene data: optionally, according to the scene-perception matching results of the millimeter-wave radar and the vision system and a preset target-matching tolerance (distance in the top-view vehicle coordinate system or target overlap in the image coordinate system), the corresponding scene data are recorded and uploaded.
5. Human-machine response-mismatch scene data: optionally, a "virtual driver", i.e. the local path-planning algorithm module, is run on the on-board computing platform and matched against the real vehicle motion state (i.e. the real driver's vehicle operation); driving-scene data in which the human-machine trajectory similarity does not meet the requirement are recorded according to a preset rule.
6. Custom-requested data: optionally, a driver/tester input interface is provided on the interaction terminal, and the current driving-scene data can be recorded at the touch of a key according to a predetermined mode (defaulting to a complete recording of a preset-duration segment of driving data).
Based on on-board WIFI or 4G network communication and edge-intelligent computing, the invention filters out the data streams in most driving scenarios that contribute little to system optimization and upgrading, and can collect the compressed landmark data needed for autonomous-driving positioning, accident-scene data, scene data under specified extreme vehicle operations, scene data in which the radar and camera detection results match abnormally, and scene data in which human and machine responses differ significantly. The invention achieves the following effects: (i) it meets the development and validation needs of the perception, positioning and planning/decision modules of an L3-level automated driving system; (ii) automatic extreme-scene detection greatly reduces the bandwidth occupied by data recording and uploading; (iii) the corresponding algorithm modules can be run online, maximizing front-end data mining and validation and greatly reducing the human-resource demand for post-hoc data screening.

Claims (9)

1. A driving-path data acquisition, analysis and uploading method for an L3-level automated driving system, characterized by comprising the following steps:
(1) acquiring vehicle-end driving data;
(2) performing online data analysis on the collected vehicle-end driving data;
(3) data communication: preparing the vehicle-end driving data for upload;
(4) receiving and storing the vehicle-end driving data at the server side.
2. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 1, characterized in that step (1) comprises the following sub-steps:
(11) Data acquisition and synchronization: software synchronization is used; the GPS clock or the on-board terminal system clock is acquired to synchronize the individual driving-data channels, and a driving-data structure is constructed that contains the time, image data, raw radar data, integrated navigation data and vehicle dynamics parameters;
(12) Data encoding and caching: the image data are encoded with H264 or H265, and the other data are encoded as virtual CAN messages; a data register is set up to cache the driving-data structures.
3. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 1, characterized in that step (2) comprises the following sub-steps: (21) definition of the automated driving system's intermediate-result output interfaces; (22) target-matching consistency detection; (23) positioning-landmark semantic output; (24) extreme vehicle-operation detection; (25) human-machine decision consistency detection.
4. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 3, characterized in that the specific method of the target-matching consistency detection of step (22) is:
according to the outputs of the millimeter-wave radar and the visual perception system, driving-data segments whose temporal matching distance exceeds a preset threshold dmin are extracted; the matching distance d is computed from the radar output target coordinates and the on-board-camera output target coordinates over the target life period n; if d > dmin, the driving data of the corresponding segment are uploaded.
5. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 3, characterized in that the specific method of the extreme vehicle-operation detection of step (24) is:
the yaw rate, longitudinal acceleration, longitudinal deceleration and lateral acceleration output by the inertial navigation system are encoded as time series, and the time-series data are classified into extreme vehicle operations using either a numerical-analysis method or a machine-learning method; extreme vehicle operations are divided into hard acceleration, hard braking and sharp turning;
the numerical-analysis method is: if the measured longitudinal acceleration exceeds a set threshold A1min for more than N consecutive samples, a hard-acceleration operation is confirmed; if the measured longitudinal deceleration is below a set threshold A2min for more than N consecutive samples, a hard-braking operation is confirmed; if the measured lateral acceleration and yaw rate exceed set thresholds AYmin and Tmin, respectively, for more than N consecutive samples, a sharp-turn operation is confirmed;
the machine-learning method is: a support vector machine or a long short-term memory network is trained offline on time-series samples of extreme vehicle-operation driving data; the trained model is deployed on the on-board analysis terminal, takes the time-series-encoded inertial navigation data as input, and outputs event signals for hard acceleration, hard braking or sharp turning.
6. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 3, characterized in that the specific method of the human-machine decision consistency detection of step (25) is:
according to the planning-layer output and the vehicle-pose output, and with a preset preview distance, driving-data segments in which the deviation between the actual trajectory and the planned trajectory exceeds a preset threshold Dmin are extracted; the normalized trajectory deviation D is computed in the vehicle coordinate system from the actual trajectory points [Xi, Yi] and the planned trajectory points [xi, yi], where m is the number of trajectory points; if D > Dmin, the driving data of the corresponding segment are uploaded.
7. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 3, 4, 5 or 6, characterized in that the specific method of the definition of the automated driving system's intermediate-result output interfaces of step (21) is: the interfaces comprise the perception-target output, the positioning-landmark semantic output, the vehicle-pose output and the planning/decision output;
perception-target output: comprises the vision-system target output, the millimeter-wave-radar target output and the millimeter-wave-radar raw target point-cloud output;
positioning-landmark semantic output: the positioning-landmark semantics are output as binary map layers, comprising the drivable-area semantic output, the lane-boundary semantic output and the indicative road-sign semantic output;
vehicle-pose output: comprises vehicle position, speed, heading angle and the 6-axis inertial-sensor output;
planning/decision output: given as a preview-point trajectory sampled at a fixed longitudinal spacing.
8. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 3, 4, 5 or 6, characterized in that the specific method of the positioning-landmark semantic output of step (23) is:
the positioning-landmark semantic output is constructed from the semantic output of the perception module and the vehicle-pose estimate of the positioning module; either a key-frame semantic output method or a compressed full-semantic output method is used;
key-frame semantic output method: according to the integrated mileage estimate of the positioning module, one frame of key semantics is extracted every 50 meters and a key-frame semantic positioning register is built, which stores the key frames and the vehicle latitude/longitude at the corresponding moments; every 20 frames are packed and compressed once, and an upload-request signal is issued;
compressed full-semantic output method: the semantics include lane-level semantics and guidance-instruction-level semantics; the lane count, lane width, boundary types, current lane and offset from the lane center are extracted by post-processing the vision system's lane and drivable-area semantic output layers; road-surface markings and spatial signs are extracted by post-processing the indicative road-sign semantic output layer; every 1 km of travel, these data are packed and compressed together with the vehicle latitude/longitude at the corresponding moments, and an upload-request signal is issued.
9. The driving-path data acquisition, analysis and uploading method for an L3-level automated driving system according to claim 1, 2 or 3, characterized in that:
the specific method of the data communication of step (3) is: according to the online data analysis results of step (2), the driving-data queue obtained in step (1) is compressed, and the compressed file is named according to a predefined rule; the compressed data are transparently transmitted to the server over the 4G network or a wireless network using the TCP or UDP protocol;
the specific method of step (4), receiving and storing the vehicle-end driving data at the server side, is: at the server side, the driving data transmitted by the on-board terminal are received via the TCP or UDP protocol and stored with the data-acquisition date as the sub-directory for the associated driving data.
CN201910454082.XA 2019-05-28 2019-05-28 Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system Pending CN110264586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910454082.XA CN110264586A (en) 2019-05-28 2019-05-28 Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910454082.XA CN110264586A (en) 2019-05-28 2019-05-28 Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system

Publications (1)

Publication Number Publication Date
CN110264586A true CN110264586A (en) 2019-09-20

Family

ID=67915760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910454082.XA Pending CN110264586A (en) Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system

Country Status (1)

Country Link
CN (1) CN110264586A (en)


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463244A (en) * 2014-12-04 2015-03-25 上海交通大学 Aberrant driving behavior monitoring and recognizing method and system based on smart mobile terminal
CN105270411A (en) * 2015-08-25 2016-01-27 南京联创科技集团股份有限公司 Analysis method and device of driving behavior
CN105717939A (en) * 2016-01-20 2016-06-29 李万鸿 Informatization and networking implementation method of road pavement supporting automobile unmanned automatic driving
CN106114515A (en) * 2016-06-29 2016-11-16 北京奇虎科技有限公司 Car steering behavior based reminding method and system
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
US20170371036A1 (en) * 2015-02-06 2017-12-28 Delphi Technologies, Inc. Autonomous vehicle with unobtrusive sensors
CN107564280A (en) * 2017-08-22 2018-01-09 王浩宇 Driving behavior data acquisition and analysis system and method based on environment sensing
CN107610464A (en) * 2017-08-11 2018-01-19 河海大学 A kind of trajectory predictions method based on Gaussian Mixture time series models
CN107784587A (en) * 2016-08-25 2018-03-09 大连楼兰科技股份有限公司 A kind of driving behavior evaluation system
CN107895501A (en) * 2017-09-29 2018-04-10 大圣科技股份有限公司 Unmanned car steering decision-making technique based on the training of magnanimity driving video data
CN108646748A (en) * 2018-06-05 2018-10-12 北京联合大学 A kind of place unmanned vehicle trace tracking method and system
CN108860165A (en) * 2018-05-11 2018-11-23 深圳市图灵奇点智能科技有限公司 Vehicle assistant drive method and system
CN109117718A (en) * 2018-07-02 2019-01-01 东南大学 A kind of semantic map structuring of three-dimensional towards road scene and storage method
CN109459750A (en) * 2018-10-19 2019-03-12 吉林大学 A kind of more wireless vehicle trackings in front that millimetre-wave radar is merged with deep learning vision
CN109471096A (en) * 2018-10-31 2019-03-15 奇瑞汽车股份有限公司 Multi-Sensor Target matching process, device and automobile
CN109634282A (en) * 2018-12-25 2019-04-16 奇瑞汽车股份有限公司 Automatic driving vehicle, method and apparatus


Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110785718B (en) * 2019-09-29 2021-11-02 驭势科技(北京)有限公司 Vehicle-mounted automatic driving test system and test method
CN110785718A (en) * 2019-09-29 2020-02-11 驭势科技(北京)有限公司 Vehicle-mounted automatic driving test system and test method
WO2021056556A1 (en) * 2019-09-29 2021-04-01 驭势科技(北京)有限公司 Vehicle-mounted autonomous driving test system and test method
CN110852192A (en) * 2019-10-23 2020-02-28 上海能塔智能科技有限公司 Method and device for determining noise data, storage medium, terminal and vehicle
CN110852192B (en) * 2019-10-23 2023-03-17 上海能塔智能科技有限公司 Method and device for determining noise data, storage medium, terminal and vehicle
CN110703289A (en) * 2019-10-29 2020-01-17 杭州鸿泉物联网技术股份有限公司 Track data reporting method and moving track restoring method
CN110703289B (en) * 2019-10-29 2021-07-06 杭州鸿泉物联网技术股份有限公司 Track data reporting method and moving track restoring method
CN111710158A (en) * 2020-05-28 2020-09-25 深圳市元征科技股份有限公司 Vehicle data processing method and related equipment
CN111619482A (en) * 2020-06-08 2020-09-04 武汉光庭信息技术股份有限公司 Vehicle driving data acquisition and processing system and method
CN111845728A (en) * 2020-06-22 2020-10-30 福瑞泰克智能系统有限公司 Driving assistance data acquisition method and system
CN111845728B (en) * 2020-06-22 2021-09-21 福瑞泰克智能系统有限公司 Driving assistance data acquisition method and system
CN112346969B (en) * 2020-10-28 2023-02-28 武汉极目智能技术有限公司 AEB development verification system and method based on data acquisition platform
CN112346969A (en) * 2020-10-28 2021-02-09 武汉极目智能技术有限公司 AEB development verification system and method based on data acquisition platform
CN112286925A (en) * 2020-12-09 2021-01-29 新石器慧义知行智驰(北京)科技有限公司 Method for cleaning data collected by unmanned vehicle
CN113479218A (en) * 2021-08-09 2021-10-08 哈尔滨工业大学 Roadbed automatic driving auxiliary detection system and control method thereof
CN113743356A (en) * 2021-09-15 2021-12-03 东软睿驰汽车技术(沈阳)有限公司 Data acquisition method and device and electronic equipment
CN113903102A (en) * 2021-10-29 2022-01-07 广汽埃安新能源汽车有限公司 Adjustment information acquisition method, adjustment device, electronic device, and medium
CN113903102B (en) * 2021-10-29 2023-11-17 广汽埃安新能源汽车有限公司 Adjustment information acquisition method, adjustment device, electronic equipment and medium
CN114047003A (en) * 2021-12-22 2022-02-15 吉林大学 Man-vehicle difference data triggering recording control method based on dynamic time warping algorithm
CN114354220A (en) * 2022-01-07 2022-04-15 苏州挚途科技有限公司 Driving data processing method and device and electronic equipment
CN114608592A (en) * 2022-02-10 2022-06-10 上海追势科技有限公司 Crowdsourcing method, system, equipment and storage medium for map
CN115203216B (en) * 2022-05-23 2023-02-07 中国测绘科学研究院 Geographic information data classification grading and protecting method and system for automatic driving map online updating scene
CN115203216A (en) * 2022-05-23 2022-10-18 中国测绘科学研究院 Geographic information data classification grading and protecting method and system for automatic driving map online updating scene
CN115225422A (en) * 2022-06-30 2022-10-21 际络科技(上海)有限公司 Vehicle CAN bus data acquisition method and device
CN115225422B (en) * 2022-06-30 2023-10-03 际络科技(上海)有限公司 Vehicle CAN bus data acquisition method and device
CN116485626A (en) * 2023-04-10 2023-07-25 北京辉羲智能科技有限公司 Automatic driving SoC chip for sensor data dump
CN116485626B (en) * 2023-04-10 2024-03-12 北京辉羲智能科技有限公司 Automatic driving SoC chip for sensor data dump
CN116238545A (en) * 2023-05-12 2023-06-09 禾多科技(北京)有限公司 Automatic driving track deviation detection method and detection system
CN116238545B (en) * 2023-05-12 2023-10-27 禾多科技(北京)有限公司 Automatic driving track deviation detection method and detection system
CN116664964A (en) * 2023-07-31 2023-08-29 福思(杭州)智能科技有限公司 Data screening method, device, vehicle-mounted equipment and storage medium
CN116664964B (en) * 2023-07-31 2023-10-20 福思(杭州)智能科技有限公司 Data screening method, device, vehicle-mounted equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110264586A (en) Driving-path data acquisition, analysis and uploading method for an L3-level automated driving system
EP3104284B1 (en) Automatic labeling and learning of driver yield intention
JP6599986B2 (en) Hyperclass expansion and regularization deep learning for fine-grained image classification
CN103359123B (en) A kind of intelligent vehicle speed Control management system and implementation method
CN107272683A (en) Parallel intelligent vehicle control based on ACP methods
CN112639793A (en) Test method and device for automatically driving vehicle
CN107886750B (en) Unmanned automobile control method and system based on beyond-visual-range cooperative cognition
WO2016077026A1 (en) Near-online multi-target tracking with aggregated local flow descriptor (alfd)
CN112906126B (en) Vehicle hardware in-loop simulation training system and method based on deep reinforcement learning
CN112543877B (en) Positioning method and positioning device
US20220198107A1 (en) Simulations for evaluating driving behaviors of autonomous vehicles
CN113330497A (en) Automatic driving method and device based on intelligent traffic system and intelligent traffic system
WO2020123105A1 (en) Detecting spurious objects for autonomous vehicles
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
CN114813157A (en) Test scene construction method and device
CN113359724A (en) Vehicle intelligent driving system and method based on unmanned aerial vehicle and storage medium
CN116135654A (en) Vehicle running speed generation method and related equipment
CN113741384B (en) Method and device for detecting automatic driving system
CN115205311A (en) Image processing method, image processing apparatus, vehicle, medium, and chip
CN115257785A (en) Automatic driving data set manufacturing method and system
CN111077893B (en) Navigation method based on multiple vanishing points, electronic equipment and storage medium
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN110446106B (en) Method for identifying front camera file, electronic equipment and storage medium
Sun A method for judging abnormal driving behaviors in diversion areas based on group intelligence perception of internet of vehicles.
CN117612127B (en) Scene generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Zero run Technology Co.,Ltd.

Address before: 310051 1st and 6th floors, no.451 Internet of things street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG LEAPMOTOR TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190920