CN117763342A - Automatic driving data reinjection method and system based on environment awareness - Google Patents
- Publication number
- CN117763342A (application number CN202311440458.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- reinjection
- scene
- module
- structured
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Traffic Control Systems (AREA)
Abstract
The embodiment of the invention discloses an automatic driving data reinjection method and system based on environment perception, belonging to the technical field of intelligent driving. The method comprises the following steps: acquiring collected data, converting the collected data into an OpenX format, and deserializing and storing them; acquiring positioning data and time data, and fusing the collected data, the positioning data, and the time data to obtain structured data; performing true-value labeling on the structured data, transferring the structured data to a data reinjection end, and obtaining synchronized data through true-value matching; building a functional scene and a test scene, extracting the intelligent driving function categories and analysis cases of the scenes, and building an environment perception model; and inputting the synchronized data into the environment perception model to obtain reinjection data, serializing the reinjection data, and injecting them into the domain controller. The method restores and analyzes real natural driving scenes and enables rapid iterative updating of the domain-control algorithm.
Description
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to an automatic driving data reinjection method and system based on environment awareness.
Background
With the continuous development of automatic driving technology, vehicle environment perception is increasingly important; a data reinjection system is used to restore and analyze real natural driving scenes.
In an automatic driving remote take-over scenario, communication delays cause the remote operator's perception of the vehicle environment to lag, and effective hazard prediction and system intervention are lacking; in addition, weather and other real-environment interference can corrupt the data acquired by the sensors, lowering the calculation accuracy of the system.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention provides an automatic driving data reinjection method and system based on environment perception. The aim of the invention is achieved through the following technical scheme:
s1: acquiring acquisition data, converting the acquisition data into an OpenX format to obtain measurement data, deserializing the measurement data and storing the measurement data;
s2: acquiring positioning data and time data, and fusing the acquired data, the positioning data and the time data to obtain structured data;
s3: carrying out data annotation on the structured data to obtain true value data, transferring the true value data to a data reinjection end, and carrying out true value matching on the data reinjection end to obtain synchronous data;
s4: setting up a functional scene according to the real data, extracting key parameters of the functional scene, presetting a distribution range of the key parameters, setting up a real vehicle working condition simulation test scene according to the distribution range, extracting intelligent driving function types and scene analysis cases of the test scene to obtain scene parameter vectors, and setting up an environment perception model according to the scene parameter vectors;
s5: and inputting the synchronous data into the environment perception model to obtain reinjection data, and carrying out string adding and injection on the reinjection data into a domain controller.
Specifically, the step S2 specifically includes the following steps:
s201: preprocessing the measurement data, and obtaining observation vectors and state vectors of a plurality of processes according to the positioning information and the time information;
s202: presetting an initial state vector, estimating the current-time state from it to obtain a prior estimate of the state vector, and computing the covariance matrix of the prior estimate by introducing process noise, the calculation formulas being:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_k,\qquad P_k^- = AP_{k-1}A^T + Q,$$

where $\hat{x}_k^-$ is the prior estimate at time k, A is the state transition matrix, B is the control matrix, u is the control variable, k is the time index, $\hat{x}_{k-1}$ is the posterior estimate at time k−1, P is the estimate covariance matrix, and Q is the process-noise covariance matrix;
s203: according to the observation vector at the current time, computing the Kalman gain and correcting the prior estimate of the prediction stage to obtain the posterior estimate, updating the covariance matrix at the same time, and fusing the state vector with the observation vector to obtain structured data, the calculation formulas being:

$$K_k = P_k^-H^T\left(HP_k^-H^T + R\right)^{-1},\qquad \hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right),\qquad P_k = (I - K_kH)P_k^-,$$

where K is the Kalman gain, H is the observation matrix, R is the observation-noise covariance matrix, z is the observation vector, k is the time index, $\hat{x}_k$ is the posterior estimate of the state vector at time k, and P is the covariance matrix.
Specifically, the step S3 specifically includes:
s301: presetting a labeling training data set and an initial labeling model, and carrying out score normalization processing on all the initial labeling models to obtain labeling categories and score output, wherein a calculation formula is as follows:
c k =argmax(b k ),
wherein b is score output, k is category count, M is the total number of initial labeling models, i is model count, p is probability vector, d is structured data, s is initial labeling model, c is labeling category, argmax is maximized parametrization function;
s302: judging whether the score output is larger than a preset threshold; if so, computing the trusted-labeling accuracy, otherwise computing the untrusted-labeling accuracy; updating the threshold according to the trusted-labeling and untrusted-labeling accuracies, and establishing a classifier model for iterative training, the calculation formula being:

$$T(n+1) = \mathrm{update}(a_n, b_n, T(n)),$$

where T is the threshold function, n is the iteration step, update is the threshold update function, a_n is the trusted-labeling accuracy, and b_n is the untrusted-labeling accuracy;
s303: dividing the training data set into a test set and a verification set, inputting them into the classifier model to obtain probability scores, dividing the score range into intervals, and computing the frequency with which the probability scores of the test set and the verification set fall into each interval to obtain the stability threshold, the calculation formula being:

$$V = \frac{1}{N}\sum_{i=1}^{N}\left|f_i^{\mathrm{test}} - f_i^{\mathrm{val}}\right|,$$

where V is the stability threshold, N is the total amount of data in the test and verification sets, i is the sample index, $f_i^{\mathrm{test}}$ is the score frequency of test-set samples falling into the score interval, and $f_i^{\mathrm{val}}$ is the score frequency of verification-set samples falling into the score interval;
s304: inputting the structured data into the classifier model with iteration completed to obtain true value data, sending data which is larger than the stability threshold value in the true value data to a data reinjection end, and performing time alignment on the true value data and the acquired data at the data reinjection end to obtain synchronous data.
Specifically, the serialization ("string adding") comprises converting the reinjection data into 27-bit byte-stream data, applying timing control to the 27-bit byte stream, converting it into a pair of differential signals through DC-balanced encoding, and transmitting the differential pair to the data reinjection end.
An automatic driving data reinjection system based on environment perception comprises a data acquisition processing module, a data fusion module, a truth value processing module, a scene generalization module and a data reinjection module;
the data acquisition processing module is used for acquiring acquisition data, converting the acquisition data into an OpenX format to obtain measurement data, deserializing the measurement data, storing the deserialized measurement data and sending the deserialized measurement data to the data fusion module;
the data fusion module is used for acquiring positioning data and time data, receiving measurement data transmitted by the data acquisition processing module, fusing the measurement data, the positioning data and the time data to obtain structured data, and transmitting the structured data to the truth processing module;
the truth processing module is used for receiving the structured data sent by the data fusion module, carrying out data annotation on the structured data to obtain truth data, transferring the truth data to a data reinjection end, carrying out truth matching on the data reinjection end to obtain synchronous data, and transmitting the synchronous data to the data reinjection module;
the scene generalization module is used for extracting true value data of the data reinjection end, constructing a functional scene according to the real data, extracting functional scene parameters, presetting a distribution range of the functional scene parameters, constructing a real vehicle condition simulation test scene according to the distribution range, extracting intelligent driving function types and scene analysis cases of the test scene to obtain scene parameter vectors, constructing an environment perception model according to the scene parameter vectors, calculating synchronous data sent by the data reinjection module through the environment perception model to obtain reinjection data, and returning the reinjection data to the data reinjection module;
the data reinjection module is used for receiving the synchronous data of the truth processing module, sending the synchronous data to the scene generalization module environment perception model to calculate to obtain reinjection data, receiving the reinjection data and carrying out string injection into the domain controller.
The beneficial effects of the invention are as follows:
(1) The collected data are converted into a standard-format representation and imported into simulation software to build a real-vehicle working-condition simulation test scene, and the multi-modal data are fused. This preserves the authenticity of the data while strengthening the correlations among them, expands the dimensions of the detected target's motion state, and improves the environment perception capability of the system.
(2) Injecting the collected raw data synchronously into the domain controller in their original data format realizes real-vehicle data reinjection and rapid iterative updating of the domain-control algorithm, ensuring the safety and accuracy of the system and improving the intelligence of automatic driving.
Drawings
The present invention is further described below with reference to the accompanying drawings for the convenience of understanding by those skilled in the art.
Fig. 1 is a schematic flow chart of an automatic driving data reinjection method based on environment awareness.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended aim, the specific implementation, structure, characteristics, and effects of the invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Referring to fig. 1, an automatic driving data reinjection method based on environment awareness specifically includes the following steps:
s1: acquiring acquisition data, converting the acquisition data into an OpenX format to obtain measurement data, deserializing the measurement data and storing the measurement data;
s2: acquiring positioning data and time data, and fusing the acquired data, the positioning data and the time data to obtain structured data;
s3: carrying out data annotation on the structured data to obtain true value data, transferring the true value data to a data reinjection end, and carrying out true value matching on the data reinjection end to obtain synchronous data;
s4: setting up a functional scene according to the real data, extracting key parameters of the functional scene, presetting a distribution range of the key parameters, setting up a real vehicle working condition simulation test scene according to the distribution range, extracting intelligent driving function types and scene analysis cases of the test scene to obtain scene parameter vectors, and setting up an environment perception model according to the scene parameter vectors;
s5: and inputting the synchronous data into the environment perception model to obtain reinjection data, and carrying out string adding and injection on the reinjection data into a domain controller.
Specifically, the step S2 specifically includes the following steps:
s201: preprocessing the measurement data, and obtaining observation vectors and state vectors of a plurality of processes according to the positioning information and the time information;
s202: presetting an initial state vector, estimating the current-time state from it to obtain a prior estimate of the state vector, and computing the covariance matrix of the prior estimate by introducing process noise, the calculation formulas being:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_k,\qquad P_k^- = AP_{k-1}A^T + Q,$$

where $\hat{x}_k^-$ is the prior estimate at time k, A is the state transition matrix, B is the control matrix, u is the control variable, k is the time index, $\hat{x}_{k-1}$ is the posterior estimate at time k−1, P is the estimate covariance matrix, and Q is the process-noise covariance matrix;
s203: according to the observation vector at the current time, computing the Kalman gain and correcting the prior estimate of the prediction stage to obtain the posterior estimate, updating the covariance matrix at the same time, and fusing the state vector with the observation vector to obtain structured data, the calculation formulas being:

$$K_k = P_k^-H^T\left(HP_k^-H^T + R\right)^{-1},\qquad \hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right),\qquad P_k = (I - K_kH)P_k^-,$$

where K is the Kalman gain, H is the observation matrix, R is the observation-noise covariance matrix, z is the observation vector, k is the time index, $\hat{x}_k$ is the posterior estimate of the state vector at time k, and P is the covariance matrix.
In this embodiment, the motion at the observed times is assumed to be uniform and the observation interval is 1 s, and the algorithm is implemented in Python: get_state_measurement_matrix(self, A, B, C) obtains the state transition matrix, the excitation (control) matrix, and the observation matrix; get_cov_matrix(self, R, Q=None) obtains the covariances of the excitation vector and the measurement-error vector; and the Kalman gain is computed as self.K = self.P @ self.C.T @ inv(self.C @ self.P @ self.C.T + self.R).
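The embodiment's Kalman fusion can be sketched in Python as follows. The method names (get_state_measurement_matrix, get_cov_matrix) follow the fragments quoted above, but the class structure, the step() wrapper, and all default values are reconstructions for illustration, not the patent's actual code:

```python
import numpy as np

class KalmanFusion:
    """Sketch of the s202/s203 fusion steps; names follow the embodiment's
    fragments, bodies and defaults are reconstructed assumptions."""

    def get_state_measurement_matrix(self, A, B, C):
        # A: state transition, B: control (excitation), C: observation matrix
        self.A, self.B, self.C = A, B, C

    def get_cov_matrix(self, R, Q=None):
        # R: measurement-noise covariance, Q: process-noise covariance
        self.R = R
        self.Q = Q if Q is not None else np.eye(self.A.shape[0]) * 1e-4

    def init_state(self, x0, P0):
        self.x, self.P = x0, P0

    def step(self, z, u=None):
        # s202: prior estimate and its covariance
        u = np.zeros(self.B.shape[1]) if u is None else u
        x_prior = self.A @ self.x + self.B @ u
        P_prior = self.A @ self.P @ self.A.T + self.Q
        # s203: Kalman gain, posterior correction, covariance update
        S = self.C @ P_prior @ self.C.T + self.R
        self.K = P_prior @ self.C.T @ np.linalg.inv(S)
        self.x = x_prior + self.K @ (z - self.C @ x_prior)
        self.P = (np.eye(len(self.x)) - self.K @ self.C) @ P_prior
        return self.x
```

For the embodiment's uniform-motion setting with a 1 s observation interval, A = [[1, 1], [0, 1]] acts on a (position, velocity) state and C = [[1, 0]] observes position only.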
Specifically, the step S3 specifically includes:
s301: presetting a labeling training data set and an initial labeling model, and carrying out score normalization processing on all the initial labeling models to obtain labeling categories and score output, wherein a calculation formula is as follows:
c k =argmax(b k ),
wherein b is score output, k is category count, M is the total number of initial labeling models, i is model count, p is probability vector, d is structured data, s is initial labeling model, c is labeling category, argmax is maximized parametrization function;
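The score-normalization step of s301 can be sketched as follows. Treating each initial labeling model's output as a class-probability vector and averaging over the M models is a reconstruction of the missing formula image, not a confirmed reading of the patent:

```python
import numpy as np

def ensemble_label(prob_vectors):
    """s301 sketch: average the M initial labeling models' probability
    vectors p_i(d) into a normalized score b, then pick the label
    c = argmax_k(b_k). The averaging form is an assumption."""
    P = np.asarray(prob_vectors)   # shape (M, num_classes)
    b = P.mean(axis=0)             # b_k = (1/M) * sum_i p_{i,k}
    c = int(np.argmax(b))          # c = argmax_k(b_k)
    return b, c
```

Because each p_i is a probability vector, the averaged score b also sums to one, so a single threshold can be applied uniformly across categories in s302.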
s302: judging whether the score output is larger than a preset threshold; if so, computing the trusted-labeling accuracy, otherwise computing the untrusted-labeling accuracy; updating the threshold according to the trusted-labeling and untrusted-labeling accuracies, and establishing a classifier model for iterative training, the calculation formula being:

$$T(n+1) = \mathrm{update}(a_n, b_n, T(n)),$$

where T is the threshold function, n is the iteration step, update is the threshold update function, a_n is the trusted-labeling accuracy, and b_n is the untrusted-labeling accuracy;
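The update() rule of s302 is left unspecified by the patent. One plausible concrete form is sketched below; the proportional rule and the learning rate lr are purely illustrative assumptions:

```python
def update_threshold(T, a_n, b_n, lr=0.05):
    """s302 sketch of one possible update() rule (an assumption):
    raise T when trusted labels err (1 - a_n), lower T when untrusted
    labels prove accurate (b_n); clamp the result to [0, 1]."""
    T_new = T + lr * ((1.0 - a_n) - b_n)
    return min(1.0, max(0.0, T_new))
```

Intuitively, a high untrusted-labeling accuracy means the threshold rejects labels that were in fact reliable, so it should drop; a low trusted-labeling accuracy means the threshold admits unreliable labels, so it should rise.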
s303: dividing the training data set into a test set and a verification set, inputting them into the classifier model to obtain probability scores, dividing the score range into intervals, and computing the frequency with which the probability scores of the test set and the verification set fall into each interval to obtain the stability threshold, the calculation formula being:

$$V = \frac{1}{N}\sum_{i=1}^{N}\left|f_i^{\mathrm{test}} - f_i^{\mathrm{val}}\right|,$$

where V is the stability threshold, N is the total amount of data in the test and verification sets, i is the sample index, $f_i^{\mathrm{test}}$ is the score frequency of test-set samples falling into the score interval, and $f_i^{\mathrm{val}}$ is the score frequency of verification-set samples falling into the score interval;
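The stability-threshold computation of s303 can be sketched as follows. Binning the scores into ten intervals and taking V as the mean absolute per-interval frequency difference between the two sets is a reconstruction of the missing formula, under stated assumptions:

```python
import numpy as np

def stability_threshold(test_scores, val_scores, bins=10):
    """s303 sketch: bin probability scores into intervals and compare
    the per-interval frequencies of the test and verification sets.
    The mean-absolute-difference form of V is an assumption."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    f_test, _ = np.histogram(test_scores, bins=edges)
    f_val, _ = np.histogram(val_scores, bins=edges)
    f_test = f_test / max(len(test_scores), 1)   # normalize to frequencies
    f_val = f_val / max(len(val_scores), 1)
    return float(np.abs(f_test - f_val).mean())
```

Identical score distributions give V = 0, while disjoint distributions drive V upward, which matches its use in s304 as a cut-off for which true-value data are stable enough to send to the reinjection end.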
s304: inputting the structured data into the classifier model with iteration completed to obtain true value data, sending data which is larger than the stability threshold value in the true value data to a data reinjection end, and performing time alignment on the true value data and the acquired data at the data reinjection end to obtain synchronous data.
In this embodiment, the true-value data annotate the structured data: the structured data are traversed at each moment, and targets far from the ego vehicle are filtered out. The annotation specifically comprises computing, for each labeled target, the 3D size, the ground-center position of the 3D box, the 2D detection box, the target orientation angle, the target observation angle, the truncation degree, and the occlusion degree. The true-value data are input into a data labeling model, a label is added to each piece of real data, and the model obtains a predicted value for each input through forward propagation. According to the total loss between the predicted and real values over all samples, the gradient-descent direction and magnitude are taken as training parameters, and the parameters are optimized over the training data set to obtain the optimal data labeling model; inputting true-value data then predicts the corresponding annotation.
Specifically, the serialization ("string adding") comprises converting the reinjection data into 27-bit byte-stream data, applying timing control to the 27-bit byte stream, converting it into a pair of differential signals through DC-balanced encoding, and transmitting the differential pair to the data reinjection end.
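The patent names DC-balanced encoding before differential transmission but does not specify the line code. The sketch below shows one illustrative running-disparity scheme over 27-bit words; the inversion flag in bit 27 is an assumption, not the patent's actual encoding:

```python
def dc_balance(words27):
    """Illustrative DC-balancing sketch (an assumption): keep a running
    disparity of ones vs. zeros and invert any 27-bit word that would
    push it further from zero, flagging inversions in bit 27."""
    mask = (1 << 27) - 1
    out, disparity = [], 0
    for w in words27:
        ones = bin(w & mask).count("1")
        delta = ones - (27 - ones)        # this word's own disparity
        if disparity * delta > 0:         # would worsen the imbalance
            w = (~w) & mask               # invert the 27-bit payload
            w |= 1 << 27                  # set the inversion flag
            delta = -delta
        disparity += delta
        out.append(w)
    return out
```

Bounding the running disparity keeps the long-term average of the differential pair near zero volts, which is the purpose of DC balance on AC-coupled serial links.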
An automatic driving data reinjection system based on environment perception comprises a data acquisition processing module, a data fusion module, a truth value processing module, a scene generalization module and a data reinjection module;
the data acquisition processing module is used for acquiring acquisition data, converting the acquisition data into an OpenX format to obtain measurement data, deserializing the measurement data, storing the deserialized measurement data and sending the deserialized measurement data to the data fusion module;
the data fusion module is used for acquiring positioning data and time data, receiving measurement data transmitted by the data acquisition processing module, fusing the measurement data, the positioning data and the time data to obtain structured data, and transmitting the structured data to the truth processing module;
the truth processing module is used for receiving the structured data sent by the data fusion module, carrying out data annotation on the structured data to obtain truth data, transferring the truth data to a data reinjection end, carrying out truth matching on the data reinjection end to obtain synchronous data, and transmitting the synchronous data to the data reinjection module;
the scene generalization module is used for extracting true value data of the data reinjection end, constructing a functional scene according to the real data, extracting functional scene parameters, presetting a distribution range of the functional scene parameters, constructing a real vehicle condition simulation test scene according to the distribution range, extracting intelligent driving function types and scene analysis cases of the test scene to obtain scene parameter vectors, constructing an environment perception model according to the scene parameter vectors, calculating synchronous data sent by the data reinjection module through the environment perception model to obtain reinjection data, and returning the reinjection data to the data reinjection module;
the data reinjection module is used for receiving the synchronous data of the truth processing module, sending the synchronous data to the scene generalization module environment perception model to calculate to obtain reinjection data, receiving the reinjection data and carrying out string injection into the domain controller.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The present invention is not limited to the above embodiments, but is capable of modification and variation in detail, and other modifications and variations can be made by those skilled in the art without departing from the scope of the present invention.
Claims (8)
1. An automatic driving data reinjection method based on environment perception, characterized by comprising the following steps:
s1: acquiring acquisition data, converting the acquisition data into an OpenX format to obtain measurement data, deserializing the measurement data and storing the measurement data;
s2: positioning data and time data are obtained, and the measurement data, the positioning data and the time data are fused to obtain structured data;
s3: carrying out data annotation on the structured data to obtain true value data, transferring the true value data to a data reinjection end, and carrying out true value matching on the data reinjection end to obtain synchronous data;
s4: setting up a functional scene according to the real data, extracting the functional scene parameters, presetting a distribution range of the functional scene parameters, setting up a real vehicle working condition simulation test scene according to the distribution range, extracting intelligent driving function types and scene analysis cases of the test scene to obtain scene parameter vectors, and setting up an environment perception model according to the scene parameter vectors;
s5: and inputting the synchronous data into the environment perception model to obtain reinjection data, and carrying out string adding and injection on the reinjection data into a domain controller.
2. The method of claim 1, wherein the acquisition data comprises video data, bus data, laser point cloud data, millimeter wave radar data.
3. The method according to claim 1, wherein the de-serialization specifically comprises: dividing a deserializing process into a data conversion period and a control conversion period, converting the measurement data into 18-bit parallel output data in the data conversion period, converting the measurement data into 9-bit parallel control data in the control conversion period, splicing the 18-bit parallel output data and the 9-bit parallel control data into 27-bit parallel data, and storing the 27-bit parallel data.
4. The method according to claim 1, wherein said step S2 comprises the steps of:
s201: preprocessing the measurement data, and obtaining observation vectors and state vectors of a plurality of processes according to the positioning data and the time data;
s202: presetting an initial state vector, estimating the current-time state from it to obtain a prior estimate of the state vector, and computing the covariance matrix of the prior estimate by introducing process noise, the calculation formulas being:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_k,\qquad P_k^- = AP_{k-1}A^T + Q,$$

where $\hat{x}_k^-$ is the prior estimate at time k, A is the state transition matrix, B is the control matrix, u is the control variable, k is the time index, $\hat{x}_{k-1}$ is the posterior estimate at time k−1, P is the estimate covariance matrix, and Q is the process-noise covariance matrix;
s203: according to the observation vector at the current time, computing the Kalman gain and correcting the prior estimate of the prediction stage to obtain the posterior estimate, updating the covariance matrix at the same time, and fusing the state vector with the observation vector to obtain structured data, the calculation formulas being:

$$K_k = P_k^-H^T\left(HP_k^-H^T + R\right)^{-1},\qquad \hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right),\qquad P_k = (I - K_kH)P_k^-,$$

where K is the Kalman gain, H is the observation matrix, R is the observation-noise covariance matrix, z is the observation vector, k is the time index, $\hat{x}_k$ is the posterior estimate of the state vector at time k, and P is the covariance matrix.
5. The method according to claim 1, wherein the step S3 specifically includes:
s301: presetting a labeling training data set and an initial labeling model, and carrying out score normalization processing on all the initial labeling models to obtain labeling categories and score output, wherein a calculation formula is as follows:
c k =argmax(b k ),
wherein b is score output, k is category count, M is the total number of initial labeling models, i is model count, p is probability vector, d is structured data, s is initial labeling model, c is labeling category, argmax is maximized parametrization function;
S302: judging whether the score output is greater than a preset threshold; if so, counting the sample toward the trusted-labeling accuracy, otherwise toward the untrusted-labeling accuracy; updating the threshold according to the two accuracies and building a classifier model for iterative training, wherein the calculation formula is:
T(n+1) = update(a_n, b_n, T(n)),
wherein T is the threshold function, n is the iteration step, update is the threshold update function, a_n is the trusted-labeling accuracy, and b_n is the untrusted-labeling accuracy;
S303: dividing the training data set into a test set and a verification set, inputting both into the classifier model to obtain probability scores, partitioning the scores into score intervals, and computing the frequency with which the test-set and verification-set scores fall into each interval to obtain a stability threshold, wherein the calculation formula is:
V = (1/N)·Σ_{i=1..N} |f_i^test − f_i^val|,
wherein V is the stability threshold, N is the total amount of data in the test and verification sets, i is the per-sample index, f_i^test is the frequency with which test-set samples fall into the score interval, and f_i^val is the frequency with which verification-set samples fall into the score interval;
S304: inputting the structured data into the fully iterated classifier model to obtain truth data, sending the truth data whose score exceeds the stability threshold to the data reinjection end, and time-aligning the truth data with the acquired data at the data reinjection end to obtain synchronized data.
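Steps S301 and S303 above can be sketched as follows. The ensemble-averaging form of the score output and the mean-absolute-difference form of the stability threshold are reconstructions from the variable definitions in the claim, not the patent's exact formulas, and the model callables are hypothetical:

```python
import numpy as np

def ensemble_label(models, d):
    """S301 sketch: average the probability vectors of M initial labeling
    models (callables returning normalized scores) and take the argmax."""
    p = np.stack([s(d) for s in models])   # (M, n_classes) probability vectors
    b = p.mean(axis=0)                     # normalized score output
    c = int(np.argmax(b))                  # labeling category
    return c, b

def stability_threshold(f_test, f_val):
    """S303 sketch (assumed form): mean absolute difference between the
    per-interval score frequencies of the test and verification sets."""
    f_test, f_val = np.asarray(f_test), np.asarray(f_val)
    return float(np.abs(f_test - f_val).mean())
```

For two toy models voting [0.2, 0.8] and [0.6, 0.4], the averaged score output is [0.4, 0.6] and category 1 is selected; identical test/verification frequency histograms yield a stability threshold of 0.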
6. The method according to claim 1, wherein the functional scene parameters specifically include the ego-vehicle speed, the ego-vehicle acceleration, the front-vehicle speed and acceleration, and the distance between the ego vehicle and the front vehicle; the intelligent driving function categories specifically include the ego vehicle driving in the current lane, the front vehicle accelerating ahead of the ego vehicle, and the ego vehicle following the front vehicle.
7. The method of claim 1, wherein the serial injection specifically includes converting the reinjection data into 27-bit byte-stream data, applying timing control to the byte-stream data, converting the byte-stream data into a pair of differential signals through DC-balanced encoding, and transmitting the differential signals to the data reinjection end.
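The framing side of this serialization can be illustrated with a minimal sketch. The frame layout (timestamp plus length-prefixed payload) is a hypothetical stand-in, and the disparity function is only a toy DC-balance metric, not a real line code such as 8b/10b or the 27-bit scheme the claim describes:

```python
import struct

def to_byte_stream(frames):
    """Serialize (timestamp, payload) reinjection frames into one byte stream.
    Hypothetical frame layout: 8-byte big-endian double + 4-byte length + payload."""
    out = bytearray()
    for ts, payload in frames:
        out += struct.pack(">dI", ts, len(payload))
        out += payload
    return bytes(out)

def running_disparity(stream):
    """Toy DC-balance metric: (#one-bits - #zero-bits) over the stream.
    A DC-balanced encoder keeps this value bounded near zero."""
    ones = sum(bin(b).count("1") for b in stream)
    return 2 * ones - 8 * len(stream)
```

A balanced byte such as 0x0F has zero disparity, while 0xFF contributes +8; a real DC-balanced encoder would remap or invert symbols to keep the running total bounded.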
8. An automatic driving data reinjection system based on environment awareness, operated by the method according to any one of claims 1 to 6, the system comprising a data acquisition processing module, a data fusion module, a truth processing module, a scene generalization module and a data reinjection module;
the data acquisition processing module is used for acquiring acquisition data, converting the acquisition data into an OpenX format to obtain measurement data, deserializing the measurement data, storing the deserialized measurement data and sending the deserialized measurement data to the data fusion module;
the data fusion module is used for acquiring positioning data and time data, receiving measurement data transmitted by the data acquisition processing module, fusing the measurement data, the positioning data and the time data to obtain structured data, and transmitting the structured data to the truth processing module;
the truth processing module is used for receiving the structured data sent by the data fusion module, carrying out data annotation on the structured data to obtain truth data, transferring the truth data to a data reinjection end, carrying out truth matching on the data reinjection end to obtain synchronous data, and transmitting the synchronous data to the data reinjection module;
the scene generalization module is used for extracting the truth data from the data reinjection end, constructing functional scenes according to the truth data, extracting the functional scene parameters, presetting a distribution range of the functional scene parameters, constructing real-vehicle simulation test scenes according to the distribution range, extracting the intelligent driving function categories and scene analysis cases of the test scenes to obtain scene parameter vectors, constructing an environment perception model from the scene parameter vectors, computing the synchronized data sent by the data reinjection module through the environment perception model to obtain reinjection data, and returning the reinjection data to the data reinjection module;
the data reinjection module is used for receiving the synchronized data from the truth processing module, sending the synchronized data to the environment perception model of the scene generalization module for calculation to obtain the reinjection data, receiving the reinjection data, and serially injecting it into the domain controller.
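The dataflow through the five modules of claim 8 can be sketched end to end. All function and field names here are illustrative stand-ins for the modules' interfaces, with annotation and perception supplied as callables:

```python
def reinjection_pipeline(raw_frames, positioning, annotate, perceive):
    """Hypothetical sketch of the module chain in claim 8.
    raw_frames:  {timestamp: frame}  acquired data
    positioning: {timestamp: pose}   localization data
    annotate:    truth-processing labeler, perceive: environment-perception model."""
    # Data acquisition processing: pair each raw frame with its timestamp
    # (stand-in for the OpenX conversion and deserialization).
    measurement = [{"frame": f, "t": t} for t, f in raw_frames.items()]
    # Data fusion: attach positioning by timestamp -> structured data.
    structured = [dict(m, pos=positioning[m["t"]]) for m in measurement]
    # Truth processing: annotate, then time-align to obtain synchronized data.
    truth = [dict(s, label=annotate(s)) for s in structured]
    synchronized = sorted(truth, key=lambda s: s["t"])
    # Scene generalization: run the perception model -> reinjection data,
    # which the data reinjection module would serially inject downstream.
    return [perceive(s) for s in synchronized]
```

Feeding two out-of-order frames through the chain yields time-aligned, annotated reinjection records, mirroring the truth-matching step that produces synchronized data before perception-model replay.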
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311440458.4A CN117763342A (en) | 2023-11-01 | 2023-11-01 | Automatic driving data reinjection method and system based on environment awareness |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117763342A true CN117763342A (en) | 2024-03-26 |
Family
ID=90311232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311440458.4A Pending CN117763342A (en) | 2023-11-01 | 2023-11-01 | Automatic driving data reinjection method and system based on environment awareness |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117763342A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200406834A1 (en) * | 2019-06-14 | 2020-12-31 | Locomation, Inc. | Mirror pod environmental sensor arrangement for autonomous vehicle |
CN113868873A (en) * | 2021-09-30 | 2021-12-31 | 重庆长安汽车股份有限公司 | Automatic driving simulation scene expansion method and system based on data reinjection |
CN114741787A (en) * | 2022-04-06 | 2022-07-12 | 重庆交通大学 | Scene data acquisition and automatic labeling method and system for high-level automatic driving simulation test and storage medium |
CN115268964A (en) * | 2022-07-08 | 2022-11-01 | 重庆长安汽车股份有限公司 | Data reinjection method and system, electronic device and readable storage medium |
CN115830562A (en) * | 2022-12-12 | 2023-03-21 | 昆易电子科技(上海)有限公司 | Method for determining lane information, computer device, and medium |
CN116105712A (en) * | 2022-12-12 | 2023-05-12 | 昆易电子科技(上海)有限公司 | Road map generation method, reinjection method, computer device and medium |
CN116484971A (en) * | 2023-04-19 | 2023-07-25 | 重庆长安汽车股份有限公司 | Automatic driving perception self-learning method and device for vehicle and electronic equipment |
WO2023193196A1 (en) * | 2022-04-07 | 2023-10-12 | 中国科学院深圳先进技术研究院 | Autonomous driving test case generation method and apparatus, and electronic device and storage medium |
Non-Patent Citations (1)
Title |
---|
GUO Jinghua; LI Keqiang; WANG Jin; CHEN Tao; LI Wenchang; WANG Ban: "Research on Prediction of the Stochastic Motion State of the Preceding Vehicle Based on Clustering Analysis of Hazardous Scenarios" (基于危险场景聚类分析的前车随机运动状态预测研究), Automotive Engineering (汽车工程), no. 07, 25 July 2020 (2020-07-25) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921200B (en) | Method, apparatus, device and medium for classifying driving scene data | |
US11054518B2 (en) | Method and apparatus for determining obstacle speed | |
JP6771059B2 (en) | Methods, devices, equipment and computer readable storage media for reconstructing 3D scenes | |
US20210302585A1 (en) | Smart navigation method and system based on topological map | |
CN113361710B (en) | Student model training method, picture processing device and electronic equipment | |
CN115540896A (en) | Path planning method, path planning device, electronic equipment and computer readable medium | |
EP4155679A2 (en) | Positioning method and apparatus based on lane line and feature point | |
CN115540894B (en) | Vehicle trajectory planning method and device, electronic equipment and computer readable medium | |
WO2023155580A1 (en) | Object recognition method and apparatus | |
CN115880536A (en) | Data processing method, training method, target object detection method and device | |
CN115616937A (en) | Automatic driving simulation test method, device, equipment and computer readable medium | |
CN115326099A (en) | Local path planning method and device, electronic equipment and computer readable medium | |
CN114612616A (en) | Mapping method and device, electronic equipment and storage medium | |
CN113392793A (en) | Method, device, equipment, storage medium and unmanned vehicle for identifying lane line | |
CN112578781A (en) | Data processing method, device, chip system and medium | |
CN115641359A (en) | Method, apparatus, electronic device, and medium for determining motion trajectory of object | |
CN115016435A (en) | Automatic driving vehicle test method, device, system, equipment and medium | |
CN112509321A (en) | Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium | |
CN114724116B (en) | Vehicle traffic information generation method, device, equipment and computer readable medium | |
CN117763342A (en) | Automatic driving data reinjection method and system based on environment awareness | |
CN115937449A (en) | High-precision map generation method and device, electronic equipment and storage medium | |
CN114119973A (en) | Spatial distance prediction method and system based on image semantic segmentation network | |
CN113887391A (en) | Method and device for recognizing road sign and automatic driving vehicle | |
CN112561956A (en) | Video target tracking method and device, electronic equipment and storage medium | |
CN117191068B (en) | Model training method and device, and track prediction method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||