CN115422417A - Data processing method, device and storage medium - Google Patents


Info

Publication number
CN115422417A
Authority
CN
China
Prior art keywords
data
time
target
vehicle
dimension
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211015100.2A
Other languages
Chinese (zh)
Inventor
顾佳杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd filed Critical Alibaba Cloud Computing Ltd
Priority to CN202211015100.2A
Publication of CN115422417A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/904 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/907 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/909 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Remote Sensing (AREA)
  • Navigation (AREA)

Abstract

Embodiments of the present application provide a data processing method, a data processing device, and a storage medium. In these embodiments, the raw driving environment data and the raw vehicle driving route of an autonomous vehicle are aligned in time and space; a map of the region that the spatio-temporally aligned driving route passes through is obtained from that route; and the driving environment at each corresponding moment is then rendered on the region map from the spatio-temporally aligned driving environment data. This visualizes the region map together with the vehicle's driving environment at the corresponding moment, providing a visual reference from which the user can intuitively perceive the quality of the multi-dimensional data collected by the autonomous vehicle.

Description

Data processing method, device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method, device, and storage medium.
Background
Autonomous driving has become a clear trend in the automotive industry, and research on perception, localization, planning and decision making, and control for autonomous driving is being carried out actively. To guarantee the safety of an autonomous vehicle, it must be tested and verified over millions of kilometers, so autonomous driving depends ever more heavily on simulation platforms.
In general, the system of an autonomous vehicle can be divided into three modules: a perception module, analogous to human eyes, which collects the state of the surrounding environment in real time through sensors; a planning and decision module, analogous to the human brain, which plans a driving path and converts the planned path into executable instructions such as throttle, braking, and steering; and a control module, analogous to human hands and feet, which executes the throttle, braking, steering, and other operations of the vehicle.
In the prior art, the modules can be simulated, trained, and verified on the simulation platform using data collected in the real world. The quality of that data has a crucial influence on the accuracy of the resulting modules. It is therefore very important to enable the user to perceive the quality of the data used to train the simulation platform.
Disclosure of Invention
Aspects of the present application provide a data processing method, device, and storage medium that visualize the real-world data acquired by an autonomous vehicle, so that a user can intuitively perceive the quality of that data.
An embodiment of the present application provides a data processing method, including:
acquiring raw multi-dimensional data collected by an autonomous vehicle while driving, the raw multi-dimensional data including at least raw driving environment data and a raw vehicle driving route;
performing spatio-temporal alignment on the raw multi-dimensional data to obtain spatio-temporally aligned target multi-dimensional data, including at least spatio-temporally aligned driving environment data and a spatio-temporally aligned vehicle driving route;
obtaining, according to the spatio-temporally aligned vehicle driving route, a map of the region that the route passes through;
rendering, on the region map, the driving environment at each corresponding moment according to the spatio-temporally aligned driving environment data at that moment, where a corresponding moment is a moment given by a timestamp of the spatio-temporally aligned vehicle driving route from which the region map was determined.
An embodiment of the present application further provides a data processing system, including a data storage node, a data processing node, and a rendering engine;
the data storage node is configured to store map data and the raw multi-dimensional data collected by an autonomous vehicle, the raw multi-dimensional data including at least raw driving environment data and a raw vehicle driving route;
the data processing node is configured to perform spatio-temporal alignment on the raw multi-dimensional data to obtain spatio-temporally aligned target multi-dimensional data, including at least spatio-temporally aligned driving environment data and a spatio-temporally aligned vehicle driving route, and to acquire from the data storage node, according to the spatio-temporally aligned vehicle driving route, a map of the region that the route passes through;
the rendering engine is configured to render, on the region map, the driving environment at each corresponding moment according to the spatio-temporally aligned driving environment data at that moment, where a corresponding moment is a moment given by a timestamp of the spatio-temporally aligned vehicle driving route from which the region map was determined.
An embodiment of the present application further provides a computing device, including: a memory, a processor, and a display component; wherein the memory is used for storing a computer program;
the processor, coupled to the memory and the display component, is configured to execute the computer program to perform the steps of the data processing method described above.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the data processing method.
In the embodiments of the present application, the raw driving environment data and the raw vehicle driving route of the autonomous vehicle are aligned in time and space; a map of the region that the spatio-temporally aligned driving route passes through is obtained from that route; and the driving environment at each corresponding moment can then be rendered on the region map according to the spatio-temporally aligned driving environment data. This visualizes the region map together with the vehicle's driving environment at the corresponding moment, providing a visual reference from which the user can intuitively perceive the quality of the multi-dimensional data collected by the autonomous vehicle.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a block diagram of a data processing system according to an embodiment of the present application;
FIG. 2 is a diagram of an operating architecture of a data processing system according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a data time-alignment method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiments of the present application, so that a user can perceive the quality of the data used to train the simulation platform, the raw driving environment data and the raw vehicle driving route of an autonomous vehicle can be aligned in time and space; a map of the region that the spatio-temporally aligned driving route passes through is obtained from that route; and the driving environment at each corresponding moment can then be rendered on the region map according to the spatio-temporally aligned driving environment data. This visualizes the region map together with the vehicle's driving environment at the corresponding moment, providing a visual reference from which the user can intuitively perceive the quality of the multi-dimensional data collected by the autonomous vehicle.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
FIG. 1 is a schematic structural diagram of a data processing system according to an embodiment of the present application. As shown in FIG. 1, the data processing system mainly includes a data storage node 10, a data processing node 20, and a rendering engine 30.
In the embodiments of the present application, the data storage node 10 is a device for data storage and may store data in the form of one or more databases, such as an object storage service (OSS) database, an index database, or a relational database, where "a plurality" means two or more. In this embodiment, the data storage node 10 mainly stores the data collected while the autonomous vehicle is driving. It should be noted that the automation level of the autonomous vehicle may be any of levels L1 to L5.
The autonomous vehicle is equipped with a plurality of sensors, including but not limited to an image collector, radar, a positioning device, an inertial measurement unit (IMU), an odometer, and vehicle state sensors. The image collector may be a vision device such as a monocular camera, a binocular camera, or a depth camera, and the radar includes at least one of lidar, millimeter-wave radar, and microwave radar. The image collector and the radar may be mounted on the autonomous vehicle.
While driving, the autonomous vehicle can collect data of a plurality of dimensions, multi-dimensional data for short, through its onboard sensors. As shown in FIG. 2, the multi-dimensional data include, but are not limited to, the vehicle driving route, vehicle state data, driving environment data, and the like.
The vehicle driving route can be represented as a time series of positioning points recorded while the vehicle drives. The vehicle state data reflect the driving state of the vehicle and include at least one of speed data, throttle data, brake data, steering-wheel data, attitude data, inertial measurement unit (IMU) data, vehicle driving mode, and the like, where the IMU data include the attitude angles (or angular velocities), acceleration, and so on of the autonomous vehicle. The driving environment data are environment data collected while the vehicle drives, including but not limited to at least one of image data of the environment collected by an image collector on the autonomous vehicle and point cloud data of the environment collected by a radar on the autonomous vehicle. The image data may be single-frame images or continuous video frames.
The data collected by the autonomous vehicle are defined as raw data, and accordingly the multi-dimensional data may be called raw multi-dimensional data, which may include one or more of raw driving environment data, raw vehicle state data, and a raw vehicle driving route. As shown in FIG. 2, the autonomous vehicle may provide the collected raw multi-dimensional data to the data processing system, which can be deployed in the cloud so that the data can be inspected in the cloud in autonomous driving scenarios.
The data processing node 20 is a computing device with computing and communication capabilities. It may be a single server device, a cloud server array, or a virtual machine (VM) running in a cloud server array; it may also be another computing device with the corresponding service capability, for example a terminal device such as a computer running a service program.
The data storage node 10 and the data processing node 20 may be connected wirelessly or by wire, for example over a public or private network. Alternatively, they may be connected through a mobile network, whose format may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and the like.
In an embodiment of the present application, an autonomous vehicle is communicatively coupled to a data processing system via a network. For the network connection, reference may be made to the network connection between the data storage node 10 and the data processing node 20. The autonomous vehicle may transmit the collected raw multidimensional data to a data processing system via a network, and the raw multidimensional data may be stored by the data storage node 10.
The data storage node 10 may store and manage the raw multi-dimensional data in the form of a database. Accordingly, as shown in FIG. 2, the data processing node 20 may normalize and index the raw multi-dimensional data. The data processing node 20 may also label the raw multi-dimensional data to mark its data dimensions; the data tags of the raw multi-dimensional data include, but are not limited to, a data set identification, a timestamp, a vehicle identification, a data dimension, and the like.
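By way of illustration only, such a tagged record could look like the following minimal Python sketch; the field names are assumptions made here for readability and are not prescribed by the application.

```python
# A minimal sketch of a tagged raw-data record as described above.
# All field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RawRecord:
    dataset_id: str   # data set identification
    timestamp: float  # acquisition time, in seconds
    vehicle_id: str   # which autonomous vehicle collected it
    dimension: str    # e.g. "point_cloud", "image", "vehicle_state", "route"
    payload: bytes    # the raw sensor data itself

record = RawRecord("ds-001", 1660000000.0, "av-42", "point_cloud", b"...")
```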
In this embodiment, as shown in FIG. 2, to improve data security the data processing node 20 may also generate mirror data of the raw multi-dimensional data, and the data storage node 10 may store this mirror data, providing a redundant backup that helps improve the security of the raw multi-dimensional data.
In practical applications, to ensure the safety of the autonomous vehicle, the planning and decision module and the control module of the autonomous vehicle must be simulated on a simulation platform to obtain models that can plan, decide, and control accurately for the vehicle. During simulation training, the simulation models can be trained with the real-world data acquired by autonomous vehicles, so the quality and accuracy of the data used for simulation have a crucial influence on the efficiency and accuracy of the training. If the user can perceive the quality of the training data in advance, data whose quality does not meet the standard can be found in time and removed from the training data of the simulation modules, which undoubtedly improves the efficiency and accuracy of subsequent simulation model training.
Based on the above analysis, and so that the user can visually perceive the quality of the data used to train the simulation modules, the embodiments of the present application provide a scheme for displaying the data visually. Specifically, before visually presenting the multi-dimensional data, the data processing node 20 may obtain from the data storage node 10 the raw multi-dimensional data collected by the autonomous vehicle while driving, which include at least raw driving environment data and a raw vehicle driving route. In some embodiments, the raw multi-dimensional data may further include raw vehicle state data.
The embodiments of the present application do not limit the specific way in which the data processing node 20 acquires the raw multi-dimensional data from the data storage node 10. In some embodiments, the user may choose the target dimensions to visualize. For example, a human-computer interaction interface containing selection items for the dimensions to display may be provided. The displayable dimensions include the data dimensions collected by the autonomous vehicle, including but not limited to some or all of the radar point cloud, images, videos, vehicle state, vehicle driving route, and so on. The user selects the target dimensions, and the information of those dimensions is then displayed visually.
Accordingly, the data processing node 20 may obtain the target dimensions that the user selected for visual display through the human-computer interaction interface, and may retrieve the raw data of those dimensions from the data storage node 10. The target dimension may be any dimension the user chooses; the embodiments of the present application describe the visualization method using target dimensions that include the driving environment dimension, the vehicle state dimension, and the vehicle driving route dimension as an example.
Of course, besides selecting the target dimensions to visualize, the user may also set query conditions. In some embodiments, the user may provide the query conditions through the human-computer interaction interface, either by selecting them or by entering them, according to the query requirements. The query conditions include, but are not limited to, one or more of a time condition, a vehicle identification condition, a data dimension condition, a spatial condition, and the like.
Accordingly, the data processing node 20 may obtain the query conditions provided by the user through the human-computer interaction interface; generate, from the query conditions, a query expression in a data format supported by the query engine; parse the query expression to obtain a query statement adapted to the query engine; and then use the query statement to query the data storage node 10 for the target dimension data satisfying the query conditions.
With this query approach the user is not required to enter complex query statements: the user only enters query conditions, and the data processing node 20 converts them into query statements adapted to the query engine. This reduces the user's learning cost and improves the flexibility and generality of the data processing system.
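A minimal sketch of this conversion, assuming a SQL-like query engine and an illustrative raw_data table (both assumptions made here, not the application's actual engine or schema):

```python
# A minimal sketch: turn user-supplied query conditions into a query
# statement for a SQL-like engine. No escaping is done; this is a
# sketch, not production code.
def build_query(conditions):
    clauses = []
    if "start" in conditions and "end" in conditions:  # time condition
        clauses.append(f"ts BETWEEN {conditions['start']} AND {conditions['end']}")
    if "vehicle_id" in conditions:                     # vehicle identification condition
        clauses.append(f"vehicle_id = '{conditions['vehicle_id']}'")
    if "dimension" in conditions:                      # data dimension condition
        clauses.append(f"dimension = '{conditions['dimension']}'")
    where = " AND ".join(clauses) or "1=1"
    return f"SELECT * FROM raw_data WHERE {where} ORDER BY ts"

print(build_query({"start": 0, "end": 100, "dimension": "point_cloud"}))
```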
The query provided by the user may target any dimension. Accordingly, the query conditions include, but are not limited to, one or more of a time condition, a vehicle identification condition, a sensor dimension condition, a spatial condition, and the like.
After acquiring the target dimension data satisfying the query conditions, the data processing node 20 may classify the data by vehicle to determine the target dimension data of each autonomous vehicle, the target dimension being a data dimension that satisfies the query conditions. For the target dimension data of each autonomous vehicle, the rendering engine 30 may render a corresponding data billboard. In the embodiments of the present application, a data billboard is a form that displays data visually.
The embodiments of the present application do not limit the format of the data billboard. Optionally, it may include at least one of a picture billboard, a text billboard, a digital billboard, a progress-bar billboard, a graphical billboard, a list billboard, and so on, where a graphical billboard includes at least one of a scatter chart, a curve chart, a line chart, a histogram, a pie chart, a bullet chart, an area chart, a waterfall chart, and the like, but is not limited thereto.
The embodiments of the present application describe the visualization method using target dimension data that include the driving environment dimension, the vehicle state dimension, and the vehicle driving route dimension as an example.
The various sensors mounted on the autonomous vehicle each have their own coordinate system; that is, the data collected by each sensor are expressed in that sensor's coordinates, so the data lie in different spatial domains. In addition, because different sensors have different sampling frequencies, even after hardware synchronization the data they acquire are not synchronized on their timestamps.
The raw multi-dimensional data are therefore aligned in time and space to obtain spatio-temporally aligned target multi-dimensional data, which include at least the spatio-temporally aligned driving environment data, vehicle state data, vehicle driving route, and the like.
Specifically, the data processing node 20 may time-align the raw multi-dimensional data according to their timestamps. Optionally, for any first dimension data and second dimension data among the raw multi-dimensional data, a first target timestamp for the first dimension data and a second target timestamp for the second dimension data may be determined from the timestamps of the two dimensions: the first dimension data have no sample at the first target timestamp while the second dimension data do, and the second dimension data have no sample at the second target timestamp while the first dimension data do. For example, if the first dimension data are point cloud data and the second dimension data are image data, and there is no point cloud sample at some timestamp T but there is an image sample at T, then T is a target timestamp for the point cloud data, i.e., a first target timestamp for the first dimension data.
Further, the data processing node 20 may take, from the first dimension data, first dimension target sample data whose sampling time is adjacent to the first target timestamp, and interpolate the first dimension data at the first target timestamp from that sample data. The first dimension target sample data adjacent to the first target timestamp may be the sample immediately before the first target timestamp, the sample immediately after it, or the two samples on either side of it.
For example, as shown in FIG. 3, assume the first dimension data are A1-A6 and the second dimension data are B1-B4. From the time-series distribution in FIG. 3: the first dimension data have no samples at timestamps T3 and T7 while the second dimension data have samples B2 and B4 there, so the first target timestamps for the first dimension data are T3 and T7; and the second dimension data have no samples at timestamps T2, T4, T6, and T8 while the first dimension data have samples A2, A3, A5, and A6 there, so the second target timestamps for the second dimension data are T2, T4, T6, and T8. For the first dimension data, the adjacent target sample data for the target timestamp T3 may be sample A2, or sample A3, or samples A2 and A3.
When the first dimension target sample data is the sample immediately before the first target timestamp, that sample may be interpolated at the first target timestamp so that the first and second dimension data are aligned there. For example, for the target timestamp T3 of the first dimension data, sample A2 may be interpolated at T3 to align the two dimensions at T3.
When the first dimension target sample data is the sample immediately after the first target timestamp, that sample may be interpolated at the first target timestamp in the same way. In FIG. 3, sample A3 may be interpolated at target timestamp T3 to align the first and second dimension data at T3.
When the first dimension target sample data are the two samples on either side of the first target timestamp, their mean may be computed and interpolated at the first target timestamp. In FIG. 3, the mean of samples A2 and A3 is interpolated at target timestamp T3 to align the first and second dimension data at T3. These interpolation methods are merely exemplary and not limiting.
Similarly, the data processing node 20 may take, from the second dimension data, second dimension target sample data whose sampling time is adjacent to the second target timestamp, and interpolate the second dimension data at the second target timestamp accordingly, thereby time-aligning the first and second dimension data. The description of the second dimension target sample data and of the interpolation at the second target timestamp parallels the interpolation of the first dimension data described above and is not repeated here.
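This alignment step can be sketched as follows, assuming each dimension is a mapping from timestamp to sample and that samples support arithmetic (needed only for the mean variant); this is an illustrative sketch, not the application's implementation.

```python
# A minimal time-alignment sketch: interpolate `first` at every
# timestamp where only `second` has a sample, using the previous
# sample, the next sample, or the mean of the two neighbours.
def time_align(first, second, mode="mean"):
    aligned = dict(first)
    missing = sorted(set(second) - set(first))  # the "target timestamps"
    ts = sorted(first)
    for t in missing:
        before = max((u for u in ts if u < t), default=None)
        after = min((u for u in ts if u > t), default=None)
        if mode == "before" and before is not None:
            aligned[t] = first[before]
        elif mode == "after" and after is not None:
            aligned[t] = first[after]
        elif before is not None and after is not None:  # mean of neighbours
            aligned[t] = (first[before] + first[after]) / 2
    return aligned

# FIG. 3 style example: A has no sample at t=3; interpolate from its neighbours.
A = {1: 10.0, 2: 20.0, 4: 40.0}  # first dimension
B = {1: 1.0, 3: 3.0}             # second dimension
print(time_align(A, B))          # adds t=3 with value (20 + 40) / 2 = 30.0
```

The symmetric call time_align(B, A) fills the second dimension's target timestamps in the same way.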
In some embodiments, besides time-aligning the multi-dimensional data by interpolating the raw data as above, the time-aligned multi-dimensional data may additionally be interpolated according to one shared target sampling period that is smaller than the sampling period of the time-aligned data, so that the multi-dimensional data are aligned at a finer time granularity. The embodiments of the present application do not limit the specific value of the target sampling period; optionally, it may be on the order of milliseconds, nanoseconds, or an even finer granularity.
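Under the same illustrative assumptions, resampling onto one shared, finer grid can be sketched with linear interpolation:

```python
# A minimal resampling sketch: after time alignment, interpolate a
# dimension onto a uniform grid with the shared target sampling period.
import numpy as np

def resample(timestamps, values, period):
    grid = np.arange(timestamps[0], timestamps[-1] + period / 2, period)
    return grid, np.interp(grid, timestamps, values)

# 100 ms samples resampled onto a 50 ms target period
grid, vals = resample([0.0, 0.1, 0.2], [0.0, 1.0, 4.0], period=0.05)
print(list(zip(grid.round(2), vals)))
```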
Besides time-aligning the multi-dimensional data, the data processing node 20 also needs to align them spatially, that is, convert them into the same spatial domain. Accordingly, the data processing node 20 may convert the raw multi-dimensional data into the same coordinate system according to the mapping relationships between the coordinate systems in which they lie. Once the multi-dimensional data are aligned in both time and space, the spatio-temporally aligned target multi-dimensional data are obtained.
The researchers of this application found that the headers of the data collected by some sensors contain the sensors' coordinate information. Based on this, the data processing node 20 may obtain, from the raw multi-dimensional data themselves, the coordinate system data corresponding to each dimension, i.e., the information about the coordinate system in which that dimension's raw data lie. The mapping relationships between the coordinate systems of the raw multi-dimensional data can then be determined from this coordinate system data, and the raw multi-dimensional data converted into the same coordinate system accordingly, achieving spatial alignment.
In some embodiments of the present application, the raw multi-dimensional data may be converted into the coordinate system of the vehicle attitude data, i.e., a coordinate system whose origin is the vehicle's center point. The raw multi-dimensional data may of course also be converted into the world coordinate system to achieve spatial alignment.
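Spatial alignment of one sensor's points into the vehicle coordinate system can be sketched as a rigid transform; the extrinsics R and t below are illustrative values, which in practice would come from the coordinate information in the data headers.

```python
# A minimal spatial-alignment sketch: map points from a sensor
# coordinate system into the vehicle coordinate system (origin at the
# vehicle's center point) with a rotation R and translation t.
import numpy as np

def to_vehicle_frame(points, R, t):
    """points: (N, 3) array in sensor coordinates; R: 3x3 rotation;
    t: 3-vector position of the sensor in the vehicle frame."""
    return points @ R.T + t

yaw = np.pi / 2  # illustrative: sensor rotated 90 degrees about z
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([1.5, 0.0, 1.2])  # illustrative: 1.5 m ahead, 1.2 m up
print(to_vehicle_frame(np.array([[1.0, 0.0, 0.0]]), R, t))
```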
The data processing node 20 may store the spatio-temporally aligned target multi-dimensional data in the data storage node 10 for subsequent simulation training on the simulation platform. Before that training, the embodiments of the present application display the spatio-temporally aligned target multi-dimensional data visually, so that the user can intuitively perceive its quality.
For the embodiments that include spatio-temporally aligned driving environment data, vehicle state data, and a vehicle driving route, a map of the region that the spatio-temporally aligned driving route passes through can be obtained from that route so that the user can intuitively perceive the quality of the target multi-dimensional data. The map data may be stored in the data storage node 10 and may be basic map data or point-of-interest (POI) map data.
In this embodiment, the positions of the anchor points in the spatio-temporally aligned vehicle driving route may be used to query the data storage node 10 for a region map containing those positions.
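One straightforward way to form such a query, sketched here as an assumption rather than the application's stated method, is to take the bounding box of the route's positioning points plus a margin and hand it to the map lookup.

```python
# A minimal sketch of deriving the region-map query window from the
# aligned route. fetch_map_tiles (mentioned only in this comment) is a
# hypothetical stand-in for the data storage node's map lookup.
def route_bbox(route_points, margin=0.001):
    """route_points: [(lon, lat), ...] of the spatio-temporally aligned route."""
    lons = [p[0] for p in route_points]
    lats = [p[1] for p in route_points]
    return (min(lons) - margin, min(lats) - margin,
            max(lons) + margin, max(lats) + margin)

route = [(120.021, 30.280), (120.025, 30.284), (120.031, 30.287)]
print(route_bbox(route))  # the query window for the region map
```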
Further, the data processing node 20 may provide the region map data and the spatio-temporally aligned target multi-dimensional data to the rendering engine 30. Accordingly, as shown in FIG. 1, the rendering engine 30 may render the region map on the front-end interface, and render, on the region map, the driving environment at each corresponding moment according to the spatio-temporally aligned driving environment data at that moment. In this embodiment, a corresponding moment is a moment given by a timestamp of the spatio-temporally aligned vehicle driving route from which the region map was determined. In some embodiments, the raw multi-dimensional data of the autonomous vehicle further include vehicle state data, described in the embodiments above and not repeated here; the spatio-temporally aligned target multi-dimensional data then include spatio-temporally aligned vehicle state data. In this case the rendering engine 30 may also render, on the region map, a vehicle icon reflecting the vehicle state and vehicle position at the corresponding moment, according to the spatio-temporally aligned driving route and vehicle state data at that moment.
In some embodiments, to preserve the time order of the visual display and prevent the data for a later moment from reaching the rendering engine 30 before the data for an earlier moment, the data processing node 20 may arrange the spatio-temporally aligned target multi-dimensional data in time order to obtain a time sequence of the target multi-dimensional data. When visualizing the data, the rendering engine 30 may then render the target multi-dimensional data in this time order.
Further, to increase download speed, the data processing node 20 may use multiple threads to download the time sequence to the rendering engine 30 in chronological order. The rendering engine 30 may render the target multi-dimensional data contained in the time sequence in time order, as described in the embodiments above.
To further improve rendering efficiency, a buffer queue may also be configured to cache the target multi-dimensional data downloaded to the rendering engine 30, reducing repeated downloads and the number of input/output (IO) operations.
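The ordered multi-threaded download and buffer queue can be sketched as follows; fetch_frame and render_frame are hypothetical stand-ins for the per-frame download and rendering steps.

```python
# A minimal sketch of ordered multi-threaded prefetching with a bounded
# buffer queue between download and rendering.
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def prefetch_and_render(timestamps, fetch_frame, render_frame,
                        workers=4, depth=16):
    buf = queue.Queue(maxsize=depth)  # bounded cache between download and render

    def producer():
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # Submit in chronological order; iterating the futures in that
            # same order re-serializes the results, so frames enter the
            # queue in time order even if downloads finish out of order.
            for fut in [pool.submit(fetch_frame, t) for t in timestamps]:
                buf.put(fut.result())
        buf.put(None)  # sentinel: the time sequence is finished

    threading.Thread(target=producer, daemon=True).start()
    while (frame := buf.get()) is not None:
        render_frame(frame)

prefetch_and_render([0.05 * i for i in range(5)],
                    lambda t: {"t": t},
                    lambda frame: print(frame["t"]))
```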
In some embodiments, because the sampling period of the spatio-temporally aligned target multi-dimensional data is small and the time granularity fine, displaying the data frame by frame would render slowly. To improve visual rendering speed, frames may be extracted from the spatio-temporally aligned target multi-dimensional data according to a set frame-extraction period before rendering, yielding the frame-extracted target multi-dimensional data. The frame-extraction period is less than or equal to the user's persistence-of-vision time and greater than the sampling period of the spatio-temporally aligned data. For example, the sampling period of the aligned target multi-dimensional data may be on the order of nanoseconds, while the frame-extraction period may be on the order of milliseconds, such as 50 milliseconds.
The frame-extracted target multi-dimensional data can then be displayed visually. Specifically, the data processing node 20 may render, on the region map, vehicle icons reflecting the vehicle state and vehicle position according to the frame-extracted, spatio-temporally aligned vehicle driving route and vehicle state data at each corresponding moment, and render the driving environment at each corresponding moment according to the frame-extracted, spatio-temporally aligned driving environment data at that moment.
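A minimal frame-extraction sketch, assuming frames are (timestamp, payload) pairs sorted by time and using the 50 ms period mentioned above:

```python
# A minimal frame-extraction (decimation) sketch: keep one frame per
# extraction period and drop the rest.
def decimate(frames, period=0.05):
    kept, next_t = [], None
    for t, payload in frames:
        if next_t is None or t >= next_t:
            kept.append((t, payload))
            next_t = t + period
    return kept

# e.g. 1 kHz input reduced to 20 frames per second
frames = [(i / 1000.0, {"i": i}) for i in range(1000)]
print(len(decimate(frames)))  # -> 20
```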
In the above embodiments, the raw driving environment data and the raw driving route of the autonomous vehicle are aligned in time and space; a map of the region the aligned route passes through is obtained from the route; and the driving environment at each corresponding moment is rendered on the region map from the aligned driving environment data. Visualizing the region map and the driving environment at the corresponding moment together gives the user a visual reference for perceiving the quality of the multi-dimensional data collected by the autonomous vehicle. For example, the user can check whether the vehicle's driving environment at a given moment matches the region map and thereby judge the quality of the driving environment data at that moment: if the region map shows a viaduct while the driving environment at the same moment shows a tunnel, the driving route and driving environment data at that moment are inconsistent, i.e., their quality is poor.
In some embodiments, the driving environment data of the autonomous vehicle include driving environment point cloud data and driving environment image data. Accordingly, the rendering engine 30 may render both the point cloud driving environment and the driving environment image at the corresponding moment on the region map, giving the user a reference for intuitively perceiving the quality of the driving environment data. The user can judge whether the two visualized environments reflect the same scene: if they do, the point cloud data and image data at that moment are of qualified quality; if they differ, they are not. For example, if the point cloud data show an obstacle X in the driving environment but the image data at the same moment show no obstacle X at the same position, the two reflect different environments, and the quality of the point cloud data and image data at that moment is poor.
Rendering a vehicle icon reflecting the vehicle state and position on the region map visualizes the vehicle state, and rendering the driving environment at the corresponding moment visualizes the driving environment. The user can then judge whether the vehicle state at a given moment matches the driving environment and thereby intuitively perceive the quality of the multi-dimensional data at that moment: if state and environment match, the data quality at that moment is qualified; if not, it is unqualified. For example, if the driving environment shows a straight road but the vehicle state data show the vehicle steering, the vehicle state data do not match the driving environment, and the vehicle state data and driving environment data at that moment are of poor quality.
For target multi-dimensional data whose quality the user has confirmed as qualified through the human-computer interaction interface, the data processing node 20 may, in response to the quality-qualification confirmation operation, train the simulation modules of the autonomous driving simulation platform with the spatio-temporally aligned target multi-dimensional data; the trained simulation modules are used to control the driving of the autonomous vehicle.
Alternatively, for target multi-dimensional data verified as qualified through the visual display, the data processing node 20 may store the data in the data storage node 10 so that the simulation platform can later train simulation models with it. Specifically, as shown in FIG. 2, the simulation modules of the autonomous driving simulation platform may be trained with the verified target multi-dimensional data. Because the quality of the source data directly affects the accuracy of the trained modules, training with multi-dimensional data that is both spatio-temporally aligned and quality-verified gives higher-quality source data and helps improve the accuracy of the trained simulation modules.
Furthermore, the trained simulation modules can be used for driving control of the autonomous vehicle, such as driving route planning and vehicle state control.
In addition to the data processing system of the foregoing embodiments, an embodiment of the present application also provides a data processing method, described by way of example below.
FIG. 4 is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in FIG. 4, the data processing method mainly includes the following steps:
401. Acquire raw multi-dimensional data collected by an autonomous vehicle while driving; the raw multi-dimensional data include at least raw driving environment data and a raw vehicle driving route.
402. Perform spatio-temporal alignment on the raw multi-dimensional data to obtain spatio-temporally aligned target multi-dimensional data; the target multi-dimensional data include at least spatio-temporally aligned driving environment data and a spatio-temporally aligned vehicle driving route.
403. Obtain, according to the spatio-temporally aligned vehicle driving route, a map of the region that the route passes through.
404. Render, on the region map, the driving environment at each corresponding moment according to the spatio-temporally aligned driving environment data at that moment, where a corresponding moment is a moment given by a timestamp of the spatio-temporally aligned vehicle driving route from which the region map was determined.
In the embodiments of the present application, a scheme for displaying data visually is provided so that the user can intuitively perceive the quality of the data used to train the simulation modules. Specifically, in step 401, the raw multi-dimensional data collected by the autonomous vehicle while driving may be retrieved from a data storage node; the raw multi-dimensional data include at least raw driving environment data and a raw vehicle driving route.
The embodiments of the present application do not limit the specific way the raw multi-dimensional data are acquired from the data storage node. In some embodiments, the user may choose the target dimensions to visualize. For example, a human-computer interaction interface containing selection items for the dimensions to display may be provided. The displayable dimensions include the data dimensions collected by the autonomous vehicle, including but not limited to some or all of the radar point cloud, images, videos, vehicle state, vehicle driving route, and so on. The user selects the target dimensions, and the information of those dimensions is then displayed visually.
Accordingly, the target dimensions that the user selected for visual display through the human-computer interaction interface may be obtained, and the raw data of those dimensions retrieved from the data storage node. The target dimension may be any dimension the user chooses; the embodiments of the present application describe the visualization method using target dimensions that include the driving environment dimension, the vehicle state dimension, and the vehicle driving route dimension as an example.
Of course, besides selecting the target dimensions to visualize, the user may also set query conditions. In some embodiments, the user may provide the query conditions through the human-computer interaction interface, either by selecting them or by entering them, according to the query requirements. The query conditions include, but are not limited to, one or more of a time condition, a vehicle identification condition, a data dimension condition, a spatial condition, and the like.
Accordingly, the query conditions provided by the user through the human-computer interaction interface may be obtained; a query expression in a data format supported by the query engine generated from them; the query expression parsed to obtain a query statement adapted to the query engine; and the data storage node then queried with the query statement for the target dimension data satisfying the query conditions.
With this query approach the user is not required to enter complex query statements: the user only enters query conditions, which are converted into query statements adapted to the query engine. This reduces the user's learning cost and improves the flexibility and generality of the data processing system.
The query provided by the user may target any dimension. Accordingly, the query conditions include, but are not limited to, one or more of a time condition, a vehicle identification condition, a sensor dimension condition, a spatial condition, and the like.
Optionally, after the target dimension data satisfying the query conditions are acquired, they may be classified by vehicle to determine the target dimension data of each autonomous vehicle, the target dimension being a data dimension that satisfies the query conditions. For the target dimension data of each autonomous vehicle, a corresponding data billboard may be rendered.
The embodiments of the present application describe the visualization method using target dimension data that include the driving environment dimension, the vehicle state dimension, and the vehicle driving route dimension as an example.
The various sensors mounted on the autonomous vehicle each have their own coordinate system; that is, the data collected by each sensor are expressed in that sensor's coordinates, so the data lie in different spatial domains. In addition, because different sensors have different sampling frequencies, even after hardware synchronization the data they acquire are not synchronized on their timestamps.
Based on this, in step 402 the raw multi-dimensional data can be aligned in time and space to obtain spatio-temporally aligned target multi-dimensional data, which include at least the spatio-temporally aligned driving environment data, the spatio-temporally aligned vehicle driving route, and the like.
Specifically, the raw multi-dimensional data may be time-aligned according to their timestamps. Optionally, for any first dimension data and second dimension data among the raw multi-dimensional data, a first target timestamp for the first dimension data and a second target timestamp for the second dimension data may be determined from the timestamps of the two dimensions: the first dimension data have no sample at the first target timestamp while the second dimension data do, and the second dimension data have no sample at the second target timestamp while the first dimension data do.
Further, first dimension target sample data whose sampling time is adjacent to the first target timestamp may be taken from the first dimension data, and the first dimension data interpolated at the first target timestamp from that sample data. The first dimension target sample data adjacent to the first target timestamp may be the sample immediately before the first target timestamp, the sample immediately after it, or the two samples on either side of it.
When the first dimension target sample data is the sample immediately before the first target timestamp, that sample may be interpolated at the first target timestamp so that the first and second dimension data are aligned there.
When the first dimension target sample data is the sample immediately after the first target timestamp, that sample may be interpolated at the first target timestamp in the same way.
When the first dimension target sample data are the two samples on either side of the first target timestamp, their mean may be computed and interpolated at the first target timestamp so that the first and second dimension data are aligned there.
Similarly, second dimension target sample data whose sampling time is adjacent to the second target timestamp may be taken from the second dimension data, and the second dimension data interpolated at the second target timestamp accordingly, thereby time-aligning the first and second dimension data. The description parallels the interpolation of the first dimension data above and is not repeated here.
In some embodiments, beyond interpolating the original multidimensional data for time alignment as described above, the time-aligned multidimensional data may further be interpolated according to a common target sampling period that is smaller than the sampling period of the time-aligned data. In this way, the multidimensional data can be aligned at a finer time granularity. The embodiments of the present application do not limit the specific value of the target sampling period; optionally, it may be on the order of milliseconds, nanoseconds, or even finer.
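The finer-grained resampling can be sketched in the same spirit, here with linear interpolation onto a uniform grid; the 1 ms period and the helper name `resample_uniform` are illustrative assumptions rather than values fixed by the disclosure.

```python
import numpy as np

def resample_uniform(samples, period=0.001):
    """Linearly interpolate (timestamp, value) pairs onto a uniform grid
    whose step `period` is smaller than the original sampling period."""
    ts = np.array([t for t, _ in samples], dtype=float)
    vs = np.array([v for _, v in samples], dtype=float)
    grid = np.arange(ts[0], ts[-1] + period / 2, period)
    return list(zip(grid.tolist(), np.interp(grid, ts, vs).tolist()))

# Usage: 10 Hz samples resampled onto a 1 ms grid.
fine = resample_uniform([(0.0, 1.0), (0.1, 2.0), (0.2, 1.5)])
```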
Besides time alignment, the multidimensional data also needs to be spatially aligned, that is, converted into the same spatial domain. Accordingly, the original multidimensional data can be converted into the same coordinate system according to the mapping relationships between its coordinate systems, achieving spatial alignment. Once the multidimensional data is aligned in both time and space, the spatio-temporally aligned target multidimensional data is obtained.
The researchers of the present application have found that the header of the data collected by some sensors contains the sensor's coordinate information. On this basis, the coordinate system data corresponding to each dimension of the original multidimensional data can be obtained from the data itself; the coordinate system data of a dimension refers to the information describing the coordinate system in which that dimension's raw data resides. The mapping relationships between the coordinate systems of the original multidimensional data can then be determined from this coordinate system data, and the original multidimensional data can be converted into the same coordinate system accordingly, achieving spatial alignment.
In some embodiments of the present application, the original multidimensional data may be converted into the coordinate system of the vehicle pose data, that is, the coordinate system whose origin is the vehicle center point. Of course, the original multidimensional data may also be converted into a world coordinate system to achieve spatial alignment, as sketched below.
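A minimal sketch of this spatial-alignment step follows, assuming the mapping between a sensor's coordinate system and the vehicle's is expressed as a 4x4 homogeneous transform recovered from the sensor data header; the matrix layout and the mounting offsets in the usage example are assumptions for illustration only.

```python
import numpy as np

def to_vehicle_frame(points_xyz, sensor_to_vehicle):
    """Map (N, 3) sensor-frame points into the vehicle coordinate system
    (origin at the vehicle center point) via a 4x4 homogeneous transform."""
    n = points_xyz.shape[0]
    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])   # (N, 4)
    return (homogeneous @ sensor_to_vehicle.T)[:, :3]        # back to (N, 3)

# Hypothetical mounting: a lidar 0.8 m ahead of and 1.5 m above the
# vehicle center point.
T = np.eye(4)
T[:3, 3] = [0.8, 0.0, 1.5]
cloud_vehicle = to_vehicle_frame(np.random.rand(100, 3), T)
```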
The spatio-temporally aligned target multidimensional data can be stored in the data storage nodes so that it can later be used for simulation training on a simulation platform. Before such training, in the embodiments of the present application the aligned target multidimensional data can be visually displayed, so that the user can intuitively perceive its quality.
For the above embodiments involving spatio-temporally aligned driving environment data, vehicle state data, and a vehicle driving route, in step 403 the area map through which the aligned vehicle driving route passes is obtained from that route, enabling the user to intuitively perceive the quality of the target multidimensional data. The map data can be stored in the data storage nodes and, in this embodiment, may be basic map data or POI map data.
In this embodiment, the positions of the positioning points in the spatio-temporally aligned vehicle driving route can be used to query the data storage nodes for an area map containing those positions.
Further, in step 404, the driving environment at the corresponding time may be rendered on the area map according to the spatio-temporally aligned driving environment data at that time. In this embodiment, the corresponding time refers to the time of the timestamps of the spatio-temporally aligned vehicle driving route used to determine the area map.
In some embodiments, the raw multidimensional data of the autonomous vehicle may further include vehicle state data; for its description, reference may be made to the relevant contents of the above embodiments, which are not repeated here. Accordingly, the spatio-temporally aligned target multidimensional data may include spatio-temporally aligned vehicle state data. In this embodiment, a vehicle icon reflecting the vehicle state and the vehicle position at the corresponding time may be rendered on the area map according to the aligned vehicle driving route and the aligned vehicle state data at that time.
In some embodiments, to preserve the time order of the visual display and prevent data for a later time from reaching the rendering engine before data for an earlier time, the spatio-temporally aligned target multidimensional data can be arranged in chronological order to obtain a time sequence of the target multidimensional data. During visualization, the target multidimensional data can then be rendered in the order given by this time sequence.
Further, to increase rendering speed, multiple threads may be started to download the time sequence to the rendering engine in chronological order; the rendering engine then renders the target multidimensional data contained in the sequence in time order. For a specific implementation of this rendering, reference may be made to the relevant contents of the foregoing embodiments, which are not repeated here.
To further improve rendering efficiency, a cache queue can also be provided for caching the target multidimensional data already downloaded to the rendering engine, which reduces repeated downloads and hence the number of IO operations. A sketch of this download scheme follows.
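The sketch below illustrates, under stated assumptions, how multi-threaded downloading can coexist with strictly time-ordered delivery to the rendering engine; an LRU cache stands in for the cache queue, and `download_frame` is a placeholder for the actual fetch from the data storage node.

```python
import queue
import threading
from functools import lru_cache

@lru_cache(maxsize=256)                    # cache queue: skip repeat downloads
def download_frame(frame_id):
    return {"frame": frame_id}             # placeholder fetch from storage

def download_in_order(frame_ids, render_queue, n_threads=4):
    """Download with several threads, then hand frames to the rendering
    engine strictly in chronological order."""
    results, lock = {}, threading.Lock()

    def worker(ids):
        for fid in ids:
            frame = download_frame(fid)
            with lock:
                results[fid] = frame

    threads = [threading.Thread(target=worker, args=(frame_ids[i::n_threads],))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for fid in frame_ids:                  # release strictly in time order
        render_queue.put(results[fid])

render_queue = queue.Queue()
download_in_order(list(range(10)), render_queue)
```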
In some embodiments, because the sampling period of the spatio-temporally aligned target multidimensional data is small and its time granularity correspondingly fine, displaying it frame by frame would render slowly. To speed up visual rendering, frames can be extracted from the aligned target multidimensional data according to a set frame-extraction period before rendering, yielding the frame-extracted target multidimensional data. The frame-extraction period is less than or equal to the user's persistence-of-vision time and greater than the sampling period of the aligned data. For example, the sampling period of the time-aligned target multidimensional data may be on the order of nanoseconds, while the frame-extraction period may be on the order of milliseconds, such as 50 milliseconds.
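A hedged sketch of this frame-extraction step: keep roughly one frame per extraction period from the finely sampled, time-ordered sequence. The 50 ms default mirrors the example above; the function name and timestamp units (seconds) are illustrative.

```python
def extract_frames(frames, period=0.050):
    """Keep about one frame per `period` seconds from a time-ordered
    list of (timestamp, frame) pairs."""
    kept, next_due = [], None
    for ts, frame in frames:
        if next_due is None or ts >= next_due:
            kept.append((ts, frame))
            next_due = ts + period
    return kept

# Usage: 1 kHz data reduced to ~20 frames per second.
frames = [(i / 1000.0, {"i": i}) for i in range(1000)]
print(len(extract_frames(frames)))         # -> 20
```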
Furthermore, the frame-extracted target multidimensional data can be visually displayed. Specifically, vehicle icons reflecting the vehicle state and vehicle position can be rendered on the area map according to the frame-extracted, spatio-temporally aligned vehicle driving route and vehicle state data at the corresponding time, and the driving environment at the corresponding time can be rendered on the area map according to the frame-extracted, spatio-temporally aligned driving environment data.
In the above embodiments, the original driving environment data and the original vehicle driving route of the autonomous vehicle can be spatio-temporally aligned; the area map through which the aligned vehicle driving route passes can be obtained from that route; and the driving environment at the corresponding time can then be rendered on the area map according to the aligned driving environment data. This visualizes the area map together with the vehicle's driving environment at the corresponding time, providing a visual reference from which the user can intuitively perceive the quality of the data collected by the autonomous vehicle. For example, the user may check whether the driving environment at the corresponding time matches the area map, and thereby judge the quality of the driving environment data at that time. If the area map shows a viaduct but the driving environment at the same time shows a tunnel, it can be determined that the vehicle driving route and the driving environment data at that time are inconsistent, that is, of poor quality.
The driving environment data of the autonomous vehicle may include driving environment point cloud data and driving environment image data. Accordingly, both the point cloud driving environment and the driving environment image at the corresponding time can be rendered on the area map, visualizing the two and providing a reference for the user to intuitively judge the quality of the driving environment data. The user can check whether the driving environments reflected by the two kinds of data are the same: if they are, the point cloud data and the image data at that time can be considered qualified; if not, unqualified. For example, if the point cloud data reflects that an obstacle X exists in the driving environment but the image data at the same time shows no obstacle X at the same position, the two reflect different driving environments, and the quality of the point cloud data and the image data at that time is poor.
Rendering vehicle icons that reflect the vehicle state and vehicle position on the area map visualizes the vehicle state, and rendering the driving environment at the corresponding time visualizes the vehicle's driving environment. From these two visualizations the user can judge whether the vehicle state at the corresponding time matches the driving environment, and thereby intuitively perceive the quality of the multidimensional data at that time: if the vehicle state matches the driving environment, the multidimensional data can be considered qualified; if not, unqualified. For example, if the driving environment shows a straight road but the vehicle state data reflects that the vehicle is steering, the vehicle state data does not match the driving environment, and the vehicle state data and the driving environment data at that time are of poor quality.
For target multidimensional data whose quality the user judges to be qualified, the user can confirm the qualification through the human-computer interaction interface. Accordingly, in response to a quality-qualification confirmation operation for the spatio-temporally aligned target multidimensional data, the simulation module in the autonomous driving simulation platform can be trained with that data, and the trained simulation module can be used to control the driving of the autonomous vehicle.
Alternatively, the target multidimensional data whose quality the user has verified through the visual display can be stored in the data storage nodes so that the simulation platform can later use it for simulation model training; specifically, the simulation module of the autonomous driving simulation platform can be trained with the verified data. Because the quality of the training source data directly affects the accuracy of the trained simulation module, training on spatio-temporally aligned, quality-verified multidimensional data raises the quality of that source data and thus helps improve the accuracy of the trained simulation module.
Furthermore, the trained simulation module can be used for driving control of the autonomous vehicle, such as driving route planning and vehicle state control.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of steps 401 and 402 may be device a; for another example, the execution subject of step 401 may be device a, and the execution subject of step 402 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations occurring in a specific order are included, but it should be clearly understood that these operations may be executed out of the order occurring herein or in parallel, and the sequence numbers of the operations, such as 401, 402, etc., are used merely to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to execute the steps of the data processing method.
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present application. As shown in fig. 5, the computing device includes: a memory 50a, a processor 50b, and a display component 50c; the memory 50a is used for storing computer programs.
The processor 50b is coupled to the memory 50a and the display component 50c and executes the computer program to: acquire the original multidimensional data collected by the autonomous vehicle during driving, the original multidimensional data including at least original driving environment data and an original vehicle driving route; spatio-temporally align the original multidimensional data to obtain spatio-temporally aligned target multidimensional data, including at least spatio-temporally aligned driving environment data and a spatio-temporally aligned vehicle driving route; obtain the area map through which the aligned vehicle driving route passes; and render, through the display component 50c, the driving environment at the corresponding time on the area map according to the aligned driving environment data at that time, where the corresponding time is the time of the timestamps of the aligned vehicle driving route used to determine the area map.
In some embodiments, the raw multidimensional data further includes raw vehicle state data, and the target multidimensional data further includes spatio-temporally aligned vehicle state data. The processor 50b is then further configured to render, on the area map, a vehicle icon reflecting the vehicle state and vehicle position at the corresponding time according to the spatio-temporally aligned vehicle driving route and vehicle state data at that time.
Optionally, the processor 50b is further configured to: in response to a quality-qualification confirmation operation for the spatio-temporally aligned target multidimensional data, train a simulation module in the autonomous driving simulation platform with that data; and control the driving of the autonomous vehicle using the trained simulation module.
Optionally, when spatio-temporally aligning the original multidimensional data, the processor 50b is specifically configured to: time-align the original multidimensional data according to its timestamps; and convert the original multidimensional data into the same coordinate system according to the mapping relationships between its coordinate systems to obtain the target multidimensional data.
Further, when time-aligning the original multidimensional data according to its timestamps, the processor 50b is specifically configured to: for any first-dimension data and second-dimension data in the original multidimensional data, determine a first target timestamp corresponding to the first-dimension data and a second target timestamp corresponding to the second-dimension data according to their timestamps, where the first-dimension data has no data at the first target timestamp while the second-dimension data does, and the second-dimension data has no data at the second target timestamp while the first-dimension data does; acquire from the first-dimension data the first-dimension target sampling data whose sampling time is adjacent to the first target timestamp, and from the second-dimension data the second-dimension target sampling data whose sampling time is adjacent to the second target timestamp; and interpolate the first-dimension data at the first target timestamp according to the first-dimension target sampling data, and the second-dimension data at the second target timestamp according to the second-dimension target sampling data, so as to time-align the first-dimension data and the second-dimension data.
Optionally, when time-aligning the original multidimensional data according to its timestamps, the processor 50b is further configured to: interpolate the time-aligned multidimensional data according to a common target sampling period that is smaller than the sampling period of the time-aligned multidimensional data.
Optionally, the processor 50b is further configured to: before converting the original multidimensional data into the same coordinate system according to the mapping relationships between its coordinate systems, acquire from the original multidimensional data the coordinate system data corresponding to each dimension, and determine the mapping relationships between the coordinate systems of the original multidimensional data from that coordinate system data.
In some embodiments, the processor 50b is further configured to perform frame extraction on the spatio-temporally aligned target multidimensional data according to a set frame-extraction period to obtain frame-extracted target multidimensional data, the frame-extraction period being less than or equal to the user's persistence-of-vision time. Accordingly, when rendering a vehicle icon reflecting the vehicle state and vehicle position on the area map, the processor 50b is specifically configured to render the icon according to the frame-extracted, spatio-temporally aligned vehicle driving route and vehicle state data at the corresponding time.
Accordingly, when rendering the driving environment at the corresponding time on the area map, the processor 50b is specifically configured to render it according to the frame-extracted, spatio-temporally aligned driving environment data at that time.
Optionally, the processor 50b is further configured to: acquire a target dimension selected by the user through the human-computer interaction interface for visual display; acquire the spatio-temporally aligned data of the target dimension; and display, on the area map, a data dashboard corresponding to that data at the corresponding time.
Optionally, the processor 50b is further configured to: arrange the spatio-temporally aligned target multidimensional data in chronological order to obtain a time sequence of the target multidimensional data; and start multiple threads to download the time sequence to the rendering engine in chronological order, so that the rendering engine can render the target multidimensional data in time order.
Optionally, the processor 50b is further configured to: acquire a query condition provided by the user through the human-computer interaction interface; generate, from the query condition, a query expression in a data format supported by the query engine; parse the query expression to obtain a query statement matched to the query engine; and use the query statement to query a database storing the autonomous vehicle's data, obtaining target dimension data that satisfies the query condition. A sketch of this pipeline follows.
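As a rough illustration of this query pipeline, the sketch below turns a hypothetical condition from the human-computer interaction interface into an SQL-style statement; the condition's shape, the table and column names, and the SQL dialect are all assumptions, since the disclosure does not fix the query engine's data format.

```python
# Hypothetical condition provided through the human-computer interface.
condition = {
    "vehicle_id": "AV-042",
    "dimension": "vehicle_state",
    "time_range": ("2022-08-23 08:00:00", "2022-08-23 09:00:00"),
}

def to_query_statement(cond):
    """Turn the UI query condition into an engine-executable statement."""
    start, end = cond["time_range"]
    return (
        f"SELECT * FROM {cond['dimension']} "
        f"WHERE vehicle_id = '{cond['vehicle_id']}' "
        f"AND ts BETWEEN '{start}' AND '{end}' ORDER BY ts"
    )

print(to_query_statement(condition))
```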
Optionally, the processor 50b is further configured to: spatio-temporally align the target dimension data; classify the aligned target dimension data by vehicle dimension to determine the target dimension data of each autonomous vehicle; and display, through the display component 50c, the data dashboard corresponding to the target dimension data at the corresponding time.
In some optional implementations, as shown in fig. 5, the computing device may further include optional components such as a communication component 50d, a power component 50e, and an audio component 50f. Only some components are shown schematically in fig. 5; this does not mean that the computing device must include all of the components shown there, nor that it can include only those components.
The computing device provided by this embodiment can spatio-temporally align the original driving environment data and the original vehicle driving route of the autonomous vehicle, obtain the area map through which the aligned vehicle driving route passes, and render the driving environment at the corresponding time on the area map according to the aligned driving environment data. This visualizes the area map and the vehicle's driving environment at the corresponding time, providing a visual reference from which the user can intuitively perceive the quality of the data collected by the autonomous vehicle.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the device on which it is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile and non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device capable of executing the above method logic. Optionally, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Microcontroller Unit (MCU); it may also be a programmable device such as a Field-Programmable Gate Array (FPGA), a Programmable Array Logic (PAL) device, a Generic Array Logic (GAL) device, or a Complex Programmable Logic Device (CPLD); or an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), and so on, but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between its host device and other devices. The host device can access a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In the embodiment of the present application, the display assembly may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display assembly includes a touch panel, the display assembly may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, the power supply component is configured to provide power to the various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when its host device is in an operational mode such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals; for devices with voice interaction functionality, for instance, voice interaction with the user may be enabled through the audio component.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The storage medium of the computer is a readable storage medium, which may also be referred to as a readable medium. Readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement the information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A data processing method, comprising:
acquiring original multi-dimensional data acquired by an automatic driving vehicle in the driving process; the raw multi-dimensional data includes at least: original driving environment data and an original vehicle driving route;
performing space-time alignment on the original multi-dimensional data to obtain space-time aligned target multi-dimensional data; the target multi-dimensional data includes at least: the time-space aligned driving environment data and the time-space aligned vehicle driving route;
obtaining a regional map through which the time-space aligned vehicle driving route passes, according to the time-space aligned vehicle driving route;
rendering the driving environment at the corresponding moment on the regional map according to the time-space aligned driving environment data at the corresponding moment; wherein the corresponding moment is the moment corresponding to the timestamps of the time-space aligned vehicle driving route used to determine the regional map.
2. The method of claim 1, wherein the raw multi-dimensional data further comprises: raw vehicle state data; the target multi-dimensional data further comprises: vehicle state data aligned in time and space; the method further comprising:
and rendering a vehicle icon reflecting the vehicle state and the vehicle position at the corresponding moment on the area map according to the space-time aligned vehicle driving route at the corresponding moment and the space-time aligned vehicle state data at the corresponding moment.
3. The method of claim 1, further comprising:
responding to quality qualification confirmation operation aiming at the time-space aligned target multi-dimensional data, and training a simulation module in an automatic driving simulation platform by utilizing the time-space aligned target multi-dimensional data;
and carrying out running control on the automatic driving vehicle by utilizing the trained simulation module.
4. The method of claim 1, wherein the spatio-temporally aligning the original multidimensional data to obtain spatio-temporally aligned target multidimensional data comprises:
performing time alignment on the original multi-dimensional data according to the time stamps of the original multi-dimensional data;
and converting the original multi-dimensional data into the same coordinate system according to the mapping relation between the coordinate systems of the original multi-dimensional data to obtain the target multi-dimensional data.
5. The method of claim 4, wherein time-aligning the original multi-dimensional data according to time-stamps of the original multi-dimensional data comprises:
aiming at any first dimension data and any second dimension data in the original multi-dimension data, determining a first target time stamp corresponding to the first dimension data and a second target time stamp corresponding to the second dimension data according to the time stamp of the first dimension data and the time stamp of the second dimension data; wherein the first dimension data has no corresponding data at the first target timestamp, and the second dimension data has data at the first target timestamp; the second dimension data has no corresponding data at the second target timestamp, and the first dimension data has data at the second target timestamp;
acquiring first dimension target sampling data with sampling time adjacent to the first target timestamp from the first dimension data;
acquiring second-dimension target sampling data with sampling time adjacent to the second target timestamp from the second-dimension data;
interpolating the first dimension data at the first target timestamp according to the first dimension target sample data, and interpolating the second dimension data at the second target timestamp according to the second dimension target sample data, so as to temporally align the first dimension data and the second dimension data.
6. The method of claim 5, wherein the time-aligning the original multi-dimensional data according to time stamps of the original multi-dimensional data further comprises:
and interpolating the time-aligned multi-dimensional data according to the same target sampling period, wherein the target sampling period is smaller than the sampling period of the time-aligned multi-dimensional data.
7. The method of claim 4, wherein before converting the original multi-dimensional data into the same coordinate system according to the mapping relationship between the coordinate systems of the original multi-dimensional data, the method further comprises:
acquiring coordinate system data respectively corresponding to the original multi-dimensional data from the original multi-dimensional data;
and determining the mapping relation between the coordinate systems of the original multi-dimensional data according to the coordinate system data respectively corresponding to the original multi-dimensional data.
8. The method of claim 2, further comprising:
performing frame extraction on the spatio-temporally aligned target multi-dimensional data according to a set frame extraction period to obtain the framed target multi-dimensional data; the frame extraction period is less than or equal to the visual persistence time of the user;
the rendering of the vehicle icon reflecting the vehicle state and the vehicle position at the corresponding time on the area map according to the space-time aligned vehicle driving route at the corresponding time and the space-time aligned vehicle state data at the corresponding time includes:
rendering a vehicle icon reflecting the vehicle state and the vehicle position on the area map according to the time-space aligned vehicle driving route after the frame extraction at the corresponding moment and the time-space aligned vehicle state data after the frame extraction at the corresponding moment;
the rendering of the driving environment of the corresponding time on the area map according to the time-space aligned driving environment data of the corresponding time comprises:
and rendering the driving environment at the corresponding moment on the regional map according to the driving environment data which is aligned in space and time after the frame extraction at the corresponding moment.
9. The method of claim 1, further comprising:
acquiring a target dimension which is selected by a user based on a human-computer interaction interface and is used for visual display;
acquiring time-space aligned data of the target dimension;
and displaying, on the regional map, a data dashboard corresponding to the time-space aligned data of the target dimension at the corresponding moment.
10. The method of claim 1, further comprising:
arranging the time-space aligned target multi-dimensional data according to a time sequence to obtain a time sequence of the target multi-dimensional data;
starting a plurality of threads to download the time sequence to a rendering engine according to the time sequence, so that the rendering engine can render the target multi-dimensional data according to the time sequence.
11. The method of claim 1, further comprising:
acquiring a query condition provided by a user based on a human-computer interaction interface;
generating a query expression of a data format supported by a query engine according to the query condition;
carrying out syntax analysis on the query expression to obtain a query statement matched with a query engine;
and querying in a database storing data of the autonomous vehicle by using the query statement to obtain target dimension data meeting the query condition.
12. The method of claim 11, further comprising:
performing space-time alignment on the target dimensional data to obtain space-time aligned target dimensional data;
classifying the time-space aligned target dimension data according to vehicle dimensions to determine the target dimension data of each automatic driving vehicle;
and displaying the data dashboard corresponding to the target dimension data at the corresponding moment.
13. A computing device, comprising: a memory, a processor, and a display component; wherein the memory is used for storing a computer program;
the processor is coupled to the memory and the display component for executing the computer program for performing the steps of the method of any one of claims 1-12.
14. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-12.
CN202211015100.2A 2022-08-23 2022-08-23 Data processing method, device and storage medium Pending CN115422417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211015100.2A CN115422417A (en) 2022-08-23 2022-08-23 Data processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN115422417A (en) 2022-12-02

Family

ID=84199249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211015100.2A Pending CN115422417A (en) 2022-08-23 2022-08-23 Data processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115422417A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117171701A (en) * 2023-08-14 2023-12-05 陕西天行健车联网信息技术有限公司 Vehicle running data processing method, device, equipment and medium
CN117171701B (en) * 2023-08-14 2024-05-14 陕西天行健车联网信息技术有限公司 Vehicle running data processing method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination