CN117152700A - Data processing method, device, vehicle and storage medium - Google Patents

Data processing method, device, vehicle and storage medium

Info

Publication number
CN117152700A
Authority
CN
China
Prior art keywords
map
coordinate system
road elements
vehicle
perceived
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311063092.3A
Other languages
Chinese (zh)
Inventor
徐佳飞
郭嘉斌
张超
李志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202311063092.3A
Publication of CN117152700A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3833 - Creation or updating of map data characterised by the source of data
    • G01C21/3841 - Data obtained from two or more sources, e.g. probe vehicles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/16 - Image acquisition using multiple overlapping images; Image stitching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The application provides a data processing method, an apparatus, a vehicle and a storage medium. The method includes: acquiring a plurality of perceived road elements at a target moment and a plurality of perceived road elements at historical moments before the target moment; splicing the perceived road elements according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the historical moments, both expressed in the first vehicle body coordinate system of the target moment, to obtain a local reconstruction map; and matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in a first local high-precision map under the first vehicle body coordinate system to determine a result of identifying the map data of the first local high-precision map and the local reconstruction map. Because the local reconstruction map is constructed by splicing road elements perceived at different moments and is then matched with the first local high-precision map, the map data identification result is determined automatically, and the data processing efficiency is improved.

Description

Data processing method, device, vehicle and storage medium
Technical Field
The present application relates to the field of automatic driving technology, and in particular, to a data processing method, apparatus, vehicle, and storage medium.
Background
High-precision positioning based on a high-precision map and perception results can achieve centimeter-level positioning accuracy for a vehicle, and is one of the Navigation on Autopilot (NOA) navigation-assisted driving schemes currently favored by major autonomous driving companies. However, because a high-precision map has limited timeliness, its data may become wrong in practical applications when, for example, the road structure changes. Meanwhile, in various complex scenes the perception results themselves contain errors, and both kinds of error can ultimately lead to abnormal positioning.
In the related art, erroneous data is usually screened out manually in automatic driving test scenes. However, the amount of test data generated in such scenes is huge and manual screening is inefficient, so improving the processing efficiency is a technical problem to be solved.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the related art to some extent.
To this end, the present application proposes a data processing method, apparatus, vehicle, and storage medium to achieve an improvement in data processing efficiency by automatic recognition.
In one aspect, an embodiment of the present application provides a data processing method, including:
Acquiring a plurality of perceived road elements at a target time and a plurality of perceived road elements at a history time before the target time;
splicing the perceived road elements according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the history moment in the first vehicle body coordinate system at the target moment to obtain a local reconstruction map; and acquiring a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment;
and matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and determining a result of identifying the map data of the first local high-precision map and the local reconstruction map.
Another embodiment of the present application provides a data processing apparatus, including:
a first acquisition module configured to acquire a plurality of perceived road elements at a target moment and a plurality of perceived road elements at a history moment before the target moment;
a reconstruction module configured to splice the perceived road elements according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the history moment in the first vehicle body coordinate system at the target moment, to obtain a local reconstruction map;
a second acquisition module configured to acquire a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment;
and a first determining module configured to match the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and to determine a result of identifying the map data of the first local high-precision map and the local reconstruction map.
Another embodiment of the application provides a vehicle comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the previous aspect when executing the program.
Another aspect of the application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in the previous aspect.
Another aspect of the application provides a computer program product having a computer program stored thereon, which when executed by a processor implements a method according to the previous aspect.
With the data processing method, apparatus, vehicle and storage medium provided by the application, a plurality of perceived road elements at the target moment and a plurality of perceived road elements at historical moments before the target moment are acquired; the perceived road elements are spliced according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the historical moments in the first vehicle body coordinate system to obtain a local reconstruction map; a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment is acquired; and the perceived road elements in the local reconstruction map are matched with the perceived road elements in the first local high-precision map to determine the map data identification result of the first local high-precision map and the local reconstruction map. Because the local reconstruction map is constructed by splicing road elements perceived at different moments and is then matched with the first local high-precision map, the map data identification result is determined automatically, and the data processing efficiency is improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating another data processing method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another data processing method according to an embodiment of the present application;
Fig. 4 is a schematic flow chart of a data processing method in a scenario provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
Data processing methods, apparatuses, vehicles, and storage media of embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a data processing method according to an embodiment of the present application.
The data processing method according to the embodiment of the present application is executed by a data processing apparatus, which may be provided in a vehicle; this is not limited in this embodiment.
As shown in fig. 1, the method may include the steps of:
step 101, obtaining a plurality of perceived road elements at a target time and a plurality of perceived road elements at a history time before the target time.
In the embodiment of the application, a large amount of test data can be generated in an automatic driving test scene. The test data include positioning data recorded at each moment and automatic perception data; the automatic perception data are the perceived road elements obtained by perception based on image data, point cloud data and the like collected by sensors, together with the positions and attribute information of those perceived road elements. Perceived road elements include lane lines, ground marks, lamp posts, guideboards and the like; the position of a perceived road element is a set of perceived positions of a plurality of points, and the attribute information includes type, color, size, shape, texture and other information.
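The following is a minimal sketch of one way the perceived road elements described above might be represented in code; the class and field names are illustrative assumptions and do not come from the patent.
```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PerceivedRoadElement:
    element_id: str                      # e.g. "lane_line_1", "guideboard_3" (illustrative identifiers)
    element_type: str                    # lane line, ground mark, lamp post, guideboard, ...
    points: List[Tuple[float, float]]    # perceived positions of the element's points
    attributes: dict = field(default_factory=dict)  # color, size, shape, texture, ...

@dataclass
class PerceptionFrame:
    """One frame of automatic perception data: the elements perceived at a single moment."""
    timestamp: float
    elements: List[PerceivedRoadElement]
```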
The target time is any time in the test scene. The history time is a time before the target time; there may be one history time or a plurality of history times, and the target time and the history times may be consecutive times or times separated by a set interval.
It is to be appreciated that the plurality of perceived road elements at the target time and the plurality of perceived road elements at the history time before the target time may include the same perceived road elements or different perceived road elements.
As an example, suppose three perceived road elements at the target time are identified as A1, B1 and C1, and there is one history time, namely the time immediately before the target time, with two corresponding perceived road elements identified as A2 and D1. If A1 and A2 are merely the identifiers of the same road element A observed at the two different times, then the same perceived road element has been perceived at both the target time and the history time.
Step 102, splicing the perceived road elements according to the first positions of the perceived road elements at the target time and the second positions of the perceived road elements at the history time in the first vehicle body coordinate system at the target time to obtain a local reconstruction map.
The vehicle body coordinate system is a coordinate system established with the center of the vehicle's rear wheels as the origin. Because the vehicle is at a different position at each moment, each moment has its own vehicle body coordinate system. For ease of distinction, the body coordinate system at the target moment is referred to as the first body coordinate system and the body coordinate system at a history moment is referred to as the second body coordinate system. Therefore, the second positions of the plurality of perceived road elements at the history moments under the first vehicle body coordinate system are obtained by converting their third positions, which are under the second vehicle body coordinate system of each history moment, into the first vehicle body coordinate system of the target moment; converting all positions into the same coordinate system facilitates the subsequent splicing of the perceived road elements.
In the embodiment of the application, the plurality of perceived road elements at the target moment and the plurality of perceived road elements at the history moments are matched based on their positions in the first vehicle body coordinate system, and the same perceived road elements are identified among them. Each such element is spliced according to its first position and its second positions; in the same way, every perceived road element that appears at several moments can be spliced based on the first and second positions, so that each complete perceived road element is obtained and the elements are completed through splicing. For example, if the perceived road element is lane line 1, the first position of lane line 1 obtained at the target moment and the second positions of lane line 1 obtained at the history moments are spliced in the first vehicle body coordinate system, yielding a longer and more complete lane line 1 that covers the road surface from the history moments up to the target moment. A local reconstruction map under the first vehicle body coordinate system is then generated from the spliced perceived road elements; this local reconstruction map contains, for example, the position information of lane line 1 from the history moments up to the target moment.
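The following is a minimal sketch of this splicing step, under the assumption that observations of the same road element at different moments share a stable identifier and that all element points have already been converted into the first (target-moment) vehicle body coordinate system; the function and field names are illustrative only.
```python
from collections import defaultdict

def stitch_elements(frames_in_target_frame):
    """frames_in_target_frame: PerceptionFrame objects whose element points are already
    expressed in the target-moment body coordinate system."""
    merged_points = defaultdict(list)
    merged_attrs = {}
    for frame in frames_in_target_frame:
        for elem in frame.elements:
            merged_points[elem.element_id].extend(elem.points)   # splice partial observations
            merged_attrs[elem.element_id] = elem.attributes
    # the local reconstruction map: each element now spans the history moments up to the target moment
    return {eid: {"points": pts, "attributes": merged_attrs[eid]}
            for eid, pts in merged_points.items()}
```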
Step 103, obtaining a first local high-precision map under a first vehicle body coordinate system corresponding to the target moment.
In the embodiment of the application, the high-precision map is used for navigation and positioning, and is a map in the station-center (east-north-up, ENU) coordinate system. To reduce the amount of computation, a local high-precision map, referred to as the second local high-precision map, is determined from the global vector high-precision map in the station-center coordinate system according to the vehicle position obtained by positioning at the target moment; the distance between each position in the second local high-precision map and the positioned vehicle position is smaller than a set distance threshold, so that the map data near the vehicle position are obtained. In addition, because the second local high-precision map and the local reconstruction map are not in the same coordinate system, the mapping relation between the station-center coordinate system and the first vehicle body coordinate system is obtained, and the second local high-precision map is mapped from the station-center coordinate system to the first vehicle body coordinate system according to that mapping relation to obtain the first local high-precision map. The two maps are thereby converted into the same coordinate system, which facilitates the subsequent data matching.
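A minimal sketch of mapping the second local high-precision map from the station-center (ENU) coordinate system into the first vehicle body coordinate system is shown below, assuming the vehicle pose in ENU at the target moment is known as a 2D position plus heading; the 2D simplification and all names are assumptions for illustration, not the patent's exact formulation.
```python
import numpy as np

def enu_to_body(map_points_enu, vehicle_xy_enu, vehicle_yaw_enu):
    """map_points_enu: (N, 2) array of map element points in ENU.
    Returns the same points expressed in the vehicle body frame at the target moment."""
    c, s = np.cos(vehicle_yaw_enu), np.sin(vehicle_yaw_enu)
    rot_enu_from_body = np.array([[c, -s], [s, c]])
    # invert the body-to-ENU pose: p_body = R^T (p_enu - t)
    return (np.asarray(map_points_enu) - np.asarray(vehicle_xy_enu)) @ rot_enu_from_body
```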
Step 104, matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and determining a result of identifying the map data of the first local high-precision map and the local reconstruction map.
In the embodiment of the application, the plurality of perceived road elements in the local reconstruction map and the plurality of perceived road elements in the first local high-precision map are matched according to their positions and attribute information. If the perceived road elements in the local reconstruction map all match those in the first local high-precision map, it is determined that neither the first local high-precision map nor the local reconstruction map has map data errors; since the local reconstruction map has no map data errors, the perception data have no errors either. If at least one of the perceived road elements in the local reconstruction map does not match the first local high-precision map, it indicates that the first local high-precision map has map data errors and/or the local reconstruction map has map data errors; that is, the high-precision map may be wrong, for example because the road structure has changed but the map has not been updated in time, and/or the generated local reconstruction map may be wrong because of errors in the sensor-based perception process.
In the data processing method of the embodiment of the application, a plurality of perceived road elements at the target moment and a plurality of perceived road elements at the historical moments before the target moment are acquired; the perceived road elements are spliced according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the historical moments under the first vehicle body coordinate system of the target moment to obtain a local reconstruction map; a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment is acquired; and the perceived road elements in the local reconstruction map are matched with the perceived road elements in the first local high-precision map to determine the result of the map data identification of the first local high-precision map and the local reconstruction map. Because the local reconstruction map is constructed by splicing road elements perceived at different moments and is then matched with the first local high-precision map, the map data identification result is determined automatically, and the data processing efficiency is improved.
Based on the foregoing embodiments, another data processing method is provided in the embodiments of the present application. Fig. 2 is a schematic flow chart of this data processing method; as shown in fig. 2, the method includes the following steps:
step 201, obtaining a plurality of perceived road elements at a target time and a plurality of perceived road elements at a history time before the target time.
The principle of step 201 is the same as that of the previous embodiment, and will not be repeated here.
As an implementation manner, the plurality of perceived road elements at the target time and the plurality of perceived road elements at the history time before the target time may be smoothed and then filtered with a voxel filtering algorithm to remove interference data, so that the processed perceived road elements are closer to the actual results. In the subsequent steps, the plurality of perceived road elements at the target time and at the history time therefore refer to data that have undergone element smoothing and voxel filtering.
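A minimal sketch of such a voxel filter applied to the points of a perceived road element is given below, keeping one averaged point per voxel to suppress noisy, redundant observations; the voxel size and the 2D treatment are illustrative assumptions.
```python
import numpy as np

def voxel_filter(points, voxel_size=0.2):
    """points: (N, 2) array of element points; returns one centroid per occupied voxel."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel_size).astype(int)
    buckets = {}
    for key, p in zip(map(tuple, keys), pts):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in buckets.values()])
```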
Step 202, obtaining position information of a vehicle at a target moment in a first vehicle body coordinate system and position information of a vehicle at each history moment in a second vehicle body coordinate system at each history moment.
In one implementation manner of the embodiment of the present application, the position information of the vehicle at the target moment in the first vehicle body coordinate system and the position information of the vehicle at each history moment in the corresponding second vehicle body coordinate system may be obtained from a set storage unit of the vehicle. The position information includes the position and pose of the vehicle, where the pose includes the heading, pitch angle, roll angle and the like of the vehicle. Optionally, the position information is the relative position information of the vehicle at each moment with respect to the vehicle's starting position.
In another implementation manner of the embodiment of the application, the position information is obtained by recursion based on the measured values of a wheel speed meter and an inertial measurement unit (IMU) mounted in the vehicle. For the target moment, the IMU and wheel speed meter measurements collected at the target moment are obtained, and the position information of the vehicle under the first vehicle body coordinate system is obtained by recursion from those measurements.
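A minimal sketch of one recursion (dead-reckoning) step from the wheel speed and the IMU yaw rate is shown below as a 2D simplification; the integration scheme and signal names are assumptions and are not the patent's exact formulation.
```python
import numpy as np

def propagate_pose(x, y, yaw, wheel_speed, yaw_rate, dt):
    """One recursion step: returns the vehicle pose after dt seconds."""
    yaw_new = yaw + yaw_rate * dt
    yaw_mid = yaw + 0.5 * yaw_rate * dt            # midpoint heading for better accuracy
    x_new = x + wheel_speed * dt * np.cos(yaw_mid)
    y_new = y + wheel_speed * dt * np.sin(yaw_mid)
    return x_new, y_new, yaw_new
```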
Step 203, determining, for each history moment, offset information of the vehicle at the target moment relative to the history moment according to a difference between the position information of the vehicle in the first vehicle body coordinate system and the position information of the vehicle in the second vehicle body coordinate system at the history moment.
Wherein the offset information includes a translational offset value and a rotational offset value, the offset information indicating a mapping relationship between the second body coordinate system and the first body coordinate system.
In one implementation manner of the embodiment of the application, the difference between the position information of the vehicle under the first vehicle body coordinate system and the position information of the vehicle under the second vehicle body coordinate system at the history moment is calculated to obtain the offset information of the vehicle at the target moment relative to that history moment.
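A minimal sketch of computing this offset information as a relative rigid-body transform is shown below, assuming both vehicle poses are expressed in a common odometry frame as (x, y, yaw); the 2D simplification and the function names are illustrative assumptions.
```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Homogeneous 2D pose matrix of a body frame expressed in the odometry frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def relative_offset(pose_target, pose_hist):
    """Returns the transform (rotation plus translation) that maps points from the
    historical body frame into the target-moment body frame."""
    T_target = pose_to_matrix(*pose_target)
    T_hist = pose_to_matrix(*pose_hist)
    return np.linalg.inv(T_target) @ T_hist
```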
Step 204, obtaining a third position of each perceived road element at the historical moment under the second body coordinate system at the historical moment.
Step 205, correcting the third position of each perceived road element according to the offset information to obtain the second position of each perceived road element at the historical moment under the first vehicle body coordinate system.
In the embodiment of the application, the offset information is multiplied with the third positions of the perceived road elements at a history moment, so that the third positions under the second vehicle body coordinate system are mapped into the first vehicle body coordinate system of the target moment, yielding the second positions of those perceived road elements under the first vehicle body coordinate system; the second positions of the perceived road elements at the other history moments under the first vehicle body coordinate system are determined in the same way. By correcting the element positions with the offset information, the perceived road elements at every history moment are uniformly mapped into the first vehicle body coordinate system of the target moment, which makes it convenient to build the local reconstruction map in a unified coordinate system.
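The correction itself can be sketched as follows, using the offset transform from the sketch above to map element points from a historical body frame into the target-moment body frame; again a 2D illustrative simplification with assumed names.
```python
import numpy as np

def map_points_to_target_frame(points_hist, T_hist_to_target):
    """points_hist: (N, 2) third positions in the historical body frame.
    Returns the (N, 2) second positions in the target-moment body frame."""
    pts = np.asarray(points_hist, dtype=float)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 3)
    return (homogeneous @ T_hist_to_target.T)[:, :2]
```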
Step 206, splicing the perceived road elements according to the first positions of the perceived road elements at the target time under the first vehicle body coordinate system at the target time and the second positions of the perceived road elements at the history time under the first vehicle body coordinate system to obtain the local reconstruction map.
As an implementation manner, before the local reconstruction map is built from the spliced perceived road elements, the spliced perceived road elements may be smoothed and filtered with a voxel filtering algorithm to remove interference data, so that the processed spliced perceived road elements are closer to the actual results and the accuracy of the reconstructed local map is improved.
Step 207, obtaining a first local high-precision map in a first vehicle body coordinate system corresponding to the target moment.
Step 208, matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and determining a result of identifying the map data of the first local high-precision map and the local reconstruction map.
The explanation of the foregoing embodiments is also applicable to the steps 206-208, and will not be repeated here.
According to the data processing method, the offset of the vehicle's position at the target moment relative to its position at each history moment is determined from the IMU and wheel speed meter measurements of the vehicle obtained at each moment, and the third positions of the plurality of perceived road elements at each history moment under the second vehicle body coordinate system are corrected with this offset to obtain the second positions under the first vehicle body coordinate system. The positions of the road elements perceived at the history moments are thus mapped into the first vehicle body coordinate system of the current moment, which facilitates splicing the perceived road elements and generating the local reconstruction map, and improves the accuracy of building the local reconstruction map.
Based on the foregoing embodiments, another data processing method is provided in the embodiments of the present application. Fig. 3 is a schematic flow chart of this data processing method; as shown in fig. 3, the method includes the following steps:
step 301, obtaining a plurality of perceived road elements at a target time and a plurality of perceived road elements at a history time before the target time.
Step 302, splicing the perceived road elements according to the first positions of the perceived road elements at the target time under the first vehicle body coordinate system at the target time and the second positions of the perceived road elements at the history time under the first vehicle body coordinate system to obtain the local reconstruction map.
Step 303, obtaining a first local high-precision map in a first vehicle body coordinate system corresponding to the target moment.
The principles of steps 301 to 303 may be the same as those of the previous embodiments, and are not repeated here.
Step 304, a plurality of set road parameters are obtained.
In the embodiment of the application, the set road parameters include parameters related to lane lines, parameters related to ground marks, parameters related to lamp posts and the like, and these parameters at least cover distance, color, type and so on. The lane line parameters include the distance between lane lines, the lane line color, the lane line type, the lane curvature, the lane width and the like. The ground mark parameters include the distance between ground marks, the color of the ground marks, their size, their shape, the characters they contain and the like. The lamp post parameters include the distance between lamp posts, the height of the lamp posts, their type, their shape and the like.
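One way such set road parameters and their matching thresholds might be configured is sketched below; the parameter names and threshold values are purely illustrative assumptions.
```python
# Each entry names a set road parameter and the threshold its matching value is compared against.
SET_ROAD_PARAMETERS = {
    "lane_line_spacing": {"threshold": 0.8},   # distance between lane lines
    "lane_line_color":   {"threshold": 0.9},
    "lane_line_type":    {"threshold": 0.9},
    "lane_curvature":    {"threshold": 0.7},
    "lane_width":        {"threshold": 0.8},
    "ground_mark_shape": {"threshold": 0.7},
    "lamp_post_spacing": {"threshold": 0.7},
}
```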
Step 305, matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and determining the matching value of each road parameter.
In the embodiment of the application, the plurality of perceived road elements in the local reconstruction map and the plurality of perceived road elements in the first local high-precision map are matched with a binary value method based on the position, color, direction, shape, size and type features of the perceived road elements, so that the matching value of each road parameter between the local reconstruction map and the first local high-precision map can be determined. Taking the most common lane lines as an example, the matching value of the distance between lane lines, the matching value of the lane line color, the matching value of the lane line type, the matching value of the lane curvature and the matching value of the lane width can be determined.
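A minimal sketch of computing such matching values for a pair of corresponding lane lines is shown below; it assumes the element dictionaries produced by the stitching sketch above, scores attribute agreement in a binary way, and turns the point-to-point distance into a score in [0, 1]. The scoring formulas are illustrative assumptions about the binary value method, not the patent's exact definition.
```python
import numpy as np

def match_values_for_lane_line(recon_elem, hd_elem, max_offset=0.5):
    recon_pts = np.asarray(recon_elem["points"], dtype=float)
    hd_pts = np.asarray(hd_elem["points"], dtype=float)
    # position: mean nearest-point distance between the two lane lines, turned into a score
    dists = [np.min(np.linalg.norm(hd_pts - p, axis=1)) for p in recon_pts]
    position_score = max(0.0, 1.0 - float(np.mean(dists)) / max_offset)
    same_color = recon_elem["attributes"].get("color") == hd_elem["attributes"].get("color")
    same_type = recon_elem["attributes"].get("type") == hd_elem["attributes"].get("type")
    return {
        "lane_line_spacing": position_score,
        "lane_line_color": 1.0 if same_color else 0.0,   # binary attribute match
        "lane_line_type": 1.0 if same_type else 0.0,
    }
```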
Step 306, determining a result of the map data identification of the first local high-precision map and the local reconstruction map according to the matching values of the plurality of road parameters.
In one scenario of the embodiment of the application, the matching value of each road parameter is compared with the corresponding setting condition, and in response to the matching values of all road parameters meeting their corresponding setting conditions, it is determined that neither the first local high-precision map nor the local reconstruction map has map data errors.
In another scenario of the embodiment of the present application, the matching values of the road parameters are compared with the corresponding setting conditions, and in response to the matching value of at least one road parameter not meeting its corresponding setting condition, it is determined that the first local high-precision map has map data errors and/or the local reconstruction map has map data errors, and the time of the map data error is determined to be the target moment. By outputting the error time, the erroneous time point can be located, which makes it convenient to find and correct the error.
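A minimal sketch of turning the per-parameter matching values into the identification result is given below: if every matching value satisfies its setting condition no map data error is reported, otherwise an error is reported together with the target moment. The threshold structure follows the illustrative configuration sketched earlier; all names are assumptions.
```python
def identify_map_data(match_values, set_road_parameters, target_time):
    """match_values: {parameter name: matching value}; set_road_parameters: {name: {"threshold": t}}."""
    failed = [name for name, value in match_values.items()
              if value <= set_road_parameters[name]["threshold"]]
    if not failed:
        return {"map_data_error": False}
    # error in the first local high-precision map and/or the local reconstruction map
    return {"map_data_error": True, "error_time": target_time, "failed_parameters": failed}
```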
As an example, take the perceived road element to be a lane line and the road parameters to be lane line parameters. Fig. 4 is a schematic flow chart of a data processing method in this scenario provided by an embodiment of the present application; the process of creating the local reconstruction map from the perceived road elements and of matching it with the determined first local high-precision map follows the same principle as above and is not repeated here.
As shown in fig. 4, in one scenario, the matching value of the road parameter is a distance matching value between lane lines, and the distance matching value between lane lines includes a distance matching value between non-edge lane lines and a distance matching value between edge lane lines.
The distance matching value between non-edge lane lines reflects the distance between ordinary (non-edge) lane lines: the larger this matching value is, the smaller the distance between the non-edge lane lines. If the distance matching value between the non-edge lane lines is larger than the distance matching threshold, that is, the distance between the non-edge lane lines is smaller than the distance threshold, it is determined that neither the first local high-precision map nor the local reconstruction map has map data errors. If the distance matching value of the non-edge lane lines is smaller than or equal to the matching threshold, that is, the distance between the non-edge lane lines is greater than or equal to the distance threshold, it indicates that the first local high-precision map has map data errors and/or the local reconstruction map has map data errors; the map data errors of the local reconstruction map may be errors introduced in the positioning process or errors caused by wrong perception results. Automatically identifying the erroneous data in this way improves the processing efficiency.
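The lane-line distance check just described can be sketched as follows: the distance between corresponding lane lines is converted into a matching value that shrinks as the distance grows and is then compared against the distance matching threshold. The conversion formula is an illustrative assumption.
```python
import numpy as np

def lane_distance_match(recon_line_pts, hd_line_pts, distance_threshold=0.5):
    recon = np.asarray(recon_line_pts, dtype=float)
    hd = np.asarray(hd_line_pts, dtype=float)
    mean_dist = float(np.mean([np.min(np.linalg.norm(hd - p, axis=1)) for p in recon]))
    match_value = 1.0 / (1.0 + mean_dist)                 # larger value means smaller distance
    match_threshold = 1.0 / (1.0 + distance_threshold)
    no_map_data_error = match_value > match_threshold     # equivalent to mean_dist < distance_threshold
    return match_value, no_map_data_error
```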
The method for determining the distance matching value based on the edge lane line is the same as the method for determining the distance matching value based on the non-edge lane line, and will not be described here again.
In the second scenario, the matching value of the road parameter is the matching value of the lane curvature: the larger this matching value is, the smaller the difference between the lane curvature in the first local high-precision map and the lane curvature in the local reconstruction map. The matching value of the lane curvature is therefore compared with a set curvature matching threshold; if it is larger than the threshold, the curvature deviation is small and it is determined that neither the first local high-precision map nor the local reconstruction map has map data errors. Otherwise, it is determined that the first local high-precision map has map data errors and/or the local reconstruction map has map data errors.
In a third scenario, the matching value of the road parameter is the matching value of the lane width: the larger this matching value is, the smaller the difference between the lane width in the first local high-precision map and the lane width in the local reconstruction map. The matching value of the lane width is compared with a set width matching threshold; if it is larger than the threshold, the lane width deviation is small and it is determined that neither the first local high-precision map nor the local reconstruction map has map data errors. Otherwise, it is determined that the first local high-precision map has map data errors and/or the local reconstruction map has map data errors.
In a fourth scenario, the matching value of the road parameter is the matching value of the lane line color: the larger this matching value is, the smaller the difference between the lane line color in the first local high-precision map and the lane line color in the local reconstruction map. The matching value of the lane line color is compared with a set color matching threshold; if it is larger than the threshold, it is determined that neither the first local high-precision map nor the local reconstruction map has map data errors. Otherwise, there is a map data error.
The principle is the same for the other road parameters, which are not listed one by one in this embodiment.
It should be noted that the local reconstruction map may be used to correct errors in the first local high-precision map. Optionally, when it is determined that the local reconstruction map and/or the first local high-precision map has an error, the time of the map data error may be output, and the erroneous scene can be located manually according to that time so as to identify whether the error lies in the local reconstruction map or in the first local high-precision map. When the local reconstruction map has no error, the first local high-precision map can be corrected according to the data of the local reconstruction map, which improves the efficiency of correcting the first local high-precision map.
In the data processing method provided by the embodiment of the application, whether the first local high-precision map and the local reconstruction map have map data errors is determined through evaluation in multiple dimensions, which improves the efficiency of processing the test data, the accuracy of identifying the map data, and in turn the accuracy of positioning in an automatic driving scene.
In order to achieve the above embodiments, the embodiments of the present application further provide a data processing apparatus.
Fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
As shown in fig. 5, the apparatus may include:
the first obtaining module 51 is configured to obtain a plurality of perceived road elements at a target time and a plurality of perceived road elements at a history time before the target time.
The reconstruction module 52 is configured to splice the perceived road elements according to the first positions of the plurality of perceived road elements at the target time under the first vehicle body coordinate system of the target time and the second positions of the plurality of perceived road elements at the history time under the first vehicle body coordinate system, so as to obtain a local reconstruction map.
The second obtaining module 53 is configured to obtain a first local high-precision map under the first vehicle body coordinate system corresponding to the target time.
The first determining module 54 is configured to match the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and to determine a result of identifying the map data of the first local high-precision map and the local reconstruction map.
Further, in an implementation manner of the embodiment of the present application, the first determining module 54 is further configured to perform:
acquiring a plurality of set road parameters;
matching a plurality of perceived road elements in the local reconstruction map with a plurality of perceived road elements in the first local high-precision map, and determining matching values of various road parameters;
and determining a result of map data identification of the first local high-precision map and the local reconstruction map according to the matching values of the plurality of road parameters.
In one implementation of the embodiment of the present application, the first determining module 54 is further configured to perform:
matching the matching value of each road parameter with the corresponding setting condition;
and responding to the matching values of the road parameters to meet corresponding setting conditions, and determining that the first local high-precision map and the local reconstruction map have no map data errors.
In one implementation of the embodiment of the present application, the first determining module 54 is further configured to perform:
and determining that the map data error exists in the first local high-precision map and/or the map data error exists in the local reconstruction map in response to the matching value of at least one road parameter does not meet the corresponding setting condition, and determining the time of map data error as the target moment.
In one implementation of the embodiment of the present application, there are a plurality of history moments, and the reconstruction module 52 is further configured to perform:
acquiring the position information of the vehicle at the target moment under the first vehicle body coordinate system and the position information of the vehicle at each history moment under the second vehicle body coordinate system;
determining, for each historical moment, offset information of the vehicle at the target moment relative to the historical moment according to a difference between position information of the vehicle in the first vehicle body coordinate system and position information of the vehicle in a second vehicle body coordinate system of the historical moment; wherein the offset information indicates a mapping relationship between the second body coordinate system and the first body coordinate system;
acquiring a third position of each perceived road element at the historical moment under a second vehicle body coordinate system at the historical moment;
And correcting the third position of each perceived road element according to the offset information to obtain the second position of each perceived road element at the historical moment under the first vehicle body coordinate system.
In one implementation of the embodiment of the present application, the apparatus further includes a second determining module;
The second determining module is configured to obtain the inertial measurement unit (IMU) and wheel speed meter measured values of the vehicle collected at the target moment, and to determine the position information of the vehicle at the target moment under the first vehicle body coordinate system according to the IMU and wheel speed meter measured values collected at the target moment.
In one implementation manner of the embodiment of the present application, the apparatus further includes a third determining module;
The third determining module is configured to obtain the positioning position obtained by positioning the vehicle at the target moment and the mapping relation between the station-center coordinate system and the first vehicle body coordinate system; to determine a second local high-precision map from the high-precision map under the station-center coordinate system according to the positioning position; and, according to the mapping relation, to map the second local high-precision map from the station-center coordinate system to the first vehicle body coordinate system to obtain the first local high-precision map under the first vehicle body coordinate system.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and will not be repeated here.
With the data processing apparatus of the embodiment of the application, a plurality of perceived road elements at the target moment and a plurality of perceived road elements at the historical moments before the target moment are acquired; the perceived road elements are spliced according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the historical moments under the first vehicle body coordinate system to obtain a local reconstruction map; a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment is acquired; and the perceived road elements in the local reconstruction map are matched with the perceived road elements in the first local high-precision map to determine the result of the map data identification of the first local high-precision map and the local reconstruction map. Because the local reconstruction map is constructed by splicing road elements perceived at different moments and is then matched with the first local high-precision map, the map data identification result is determined automatically, and the data processing efficiency is improved.
In order to implement the above embodiments, the present application also proposes a vehicle comprising a memory, a processor and a computer program stored on the memory and executable on the processor, said processor implementing the method according to the above method embodiments when executing said program.
In order to implement the above-described embodiments, the present application also proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements a method as described in the foregoing method embodiments.
In order to achieve the above-described embodiments, the present application also proposes a computer program product having a computer program stored thereon, which, when being executed by a processor, implements a method as described in the method embodiments described above.
Fig. 6 is a schematic structural diagram of a vehicle according to an embodiment of the present application. For example, vehicle 600 may be a hybrid vehicle, but may also be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 600 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 6, a vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. The vehicle 600 may alternatively include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between the subsystems and between the components of the vehicle 600 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 610 may include a communication system, an entertainment system, a navigation system, and the like.
The perception system 620 may include several sensors for sensing information of the environment surrounding the vehicle 600. For example, the perception system 620 may include a global positioning system (which may be a GPS system, a Beidou system, or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter wave radar, an ultrasonic radar, and a camera device.
Decision control system 630 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 640 may include components that provide powered movement of the vehicle 600. In one embodiment, the drive system 640 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one processor 651 and memory 652, the processor 651 may execute instructions 653 stored in the memory 652.
The processor 651 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SOC), an application specific integrated circuit (ASIC), or a combination thereof.
The memory 652 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In addition to instructions 653, memory 652 may store data such as road maps, route information, vehicle location, direction, speed, and the like. The data stored by memory 652 may be used by computing platform 650.
In the disclosed embodiment, the processor 651 may execute instructions 653 to perform all or part of the steps of the method embodiments described above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. As with the other embodiments, if implemented in hardware, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or as software functional modules. If implemented as software functional modules and sold or used as stand-alone products, the integrated modules may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that changes, modifications, substitutions, and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (10)

1. A method of data processing, comprising:
acquiring a plurality of perceived road elements at a target moment and a plurality of perceived road elements at a historical moment before the target moment;
splicing the perceived road elements according to first positions of the perceived road elements at the target moment and second positions of the perceived road elements at the historical moment, both in a first vehicle body coordinate system of the target moment, to obtain a local reconstruction map;
acquiring a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment;
and matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and determining a result of map data identification for the first local high-precision map and the local reconstruction map.
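As an illustration only (not part of the claims), the pipeline of claim 1 could be sketched roughly as follows in Python; the road-element representation, the nearest-point distance measure, and the 0.5 m threshold are assumptions introduced here for clarity, not details taken from the application.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class RoadElement:
    kind: str            # e.g. "lane_line", "crosswalk" (hypothetical labels)
    points: np.ndarray   # (N, 2) polyline in the first (target-time) body frame

def splice(target_elems: List[RoadElement],
           history_elems_in_target_frame: List[RoadElement]) -> List[RoadElement]:
    # Stitch target-time elements with historical elements that have already
    # been re-projected into the first vehicle body coordinate system.
    return list(target_elems) + list(history_elems_in_target_frame)

def mean_point_distance(a: RoadElement, b: RoadElement) -> float:
    # Average, over a's points, of the distance to the nearest point of b.
    d = np.linalg.norm(a.points[:, None, :] - b.points[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def identify_map_data(reconstruction: List[RoadElement],
                      local_hd_map: List[RoadElement],
                      max_dist: float = 0.5) -> bool:
    # Match each reconstructed element against same-kind HD-map elements and
    # report True when every element has a sufficiently close counterpart.
    for elem in reconstruction:
        candidates = [m for m in local_hd_map if m.kind == elem.kind]
        if not candidates:
            return False
        if min(mean_point_distance(elem, m) for m in candidates) > max_dist:
            return False
    return True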
2. The method of claim 1, wherein the matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map and determining the result of map data identification for the first local high-precision map and the local reconstruction map comprises:
acquiring a plurality of set road parameters;
matching the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map, and determining a matching value for each of the road parameters;
and determining the result of map data identification for the first local high-precision map and the local reconstruction map according to the matching values of the plurality of road parameters.
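One possible reading of the matching values in claim 2, sketched under assumptions made here: the set road parameters are taken to be a lane-count parameter and a lateral-offset parameter, and the lane lines are assumed to be resampled to the same number of points.

import numpy as np

def matching_values(recon_lines, hd_lines):
    # recon_lines / hd_lines: lists of (N, 2) arrays (lane lines resampled to
    # the same length), in the first vehicle body coordinate system.
    values = {}
    # Set road parameter 1: lane-count agreement (1.0 means identical counts).
    n_r, n_h = len(recon_lines), len(hd_lines)
    values["lane_count"] = min(n_r, n_h) / max(n_r, n_h) if max(n_r, n_h) else 1.0
    # Set road parameter 2: mean point-wise offset between paired lane lines, in metres.
    offsets = [float(np.mean(np.linalg.norm(r - h, axis=1)))
               for r, h in zip(recon_lines, hd_lines)]
    values["lateral_offset"] = float(np.mean(offsets)) if offsets else 0.0
    return values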
3. The method of claim 2, wherein the determining the result of map data identification for the first local high-precision map and the local reconstruction map according to the matching values of the plurality of road parameters comprises:
comparing the matching value of each road parameter with its corresponding set condition;
and in response to the matching values of the road parameters all meeting their corresponding set conditions, determining that neither the first local high-precision map nor the local reconstruction map has a map data error.
4. The method of claim 3, wherein the method further comprises:
and in response to the matching value of at least one road parameter failing to meet its corresponding set condition, determining that a map data error exists in the first local high-precision map and/or in the local reconstruction map, and determining the time of the map data error to be the target moment.
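Claims 3 and 4 could then be pictured, purely as a sketch, as a per-parameter threshold test; the set conditions below are assumed values, not taken from the application.

def identify_errors(values, target_moment):
    # Each matching value is tested against its own set condition.
    conditions = {
        "lane_count":     lambda v: v >= 0.99,   # counts must agree (assumed)
        "lateral_offset": lambda v: v <= 0.30,   # metres (assumed)
    }
    failed = [p for p, ok in conditions.items() if p in values and not ok(values[p])]
    if not failed:
        # All set conditions met: no map data error in either map.
        return {"map_data_error": False}
    # At least one condition failed: flag an error and record when it occurred.
    return {"map_data_error": True,
            "failed_parameters": failed,
            "error_time": target_moment}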
5. The method of claim 1, wherein there are a plurality of historical moments, and before the splicing the perceived road elements according to the first positions of the perceived road elements at the target moment and the second positions of the perceived road elements at the historical moments, both in the first vehicle body coordinate system of the target moment, to obtain the local reconstruction map, the method further comprises:
acquiring position information of the vehicle at the target moment under the first vehicle body coordinate system and position information of the vehicle at each historical moment under a second vehicle body coordinate system of that historical moment;
determining, for each historical moment, offset information of the vehicle at the target moment relative to that historical moment according to a difference between the position information of the vehicle in the first vehicle body coordinate system and the position information of the vehicle in the second vehicle body coordinate system of that historical moment;
acquiring a third position of each perceived road element at that historical moment under the second vehicle body coordinate system of that historical moment;
and correcting the third position of each perceived road element according to the offset information to obtain the second position of each perceived road element at that historical moment under the first vehicle body coordinate system.
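The correction in claim 5 can be pictured as a planar rigid-body transform; the (dx, dy, dyaw) parameterisation of the offset information is an assumption made here for illustration.

import numpy as np

def correct_to_target_frame(points_hist, dx, dy, dyaw):
    # points_hist: (N, 2) third positions of a perceived road element in the
    # second (historical) vehicle body coordinate system.
    # (dx, dy, dyaw): pose of the target-time body frame expressed in the
    # historical body frame, i.e. the offset of the vehicle at the target
    # moment relative to the historical moment.
    c, s = np.cos(dyaw), np.sin(dyaw)
    R = np.array([[c, -s], [s, c]])
    # p_target = R^T (p_hist - t); with row vectors this is (p - t) @ R.
    return (points_hist - np.array([dx, dy])) @ R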
6. The method of claim 5, wherein the acquiring the position information of the vehicle at the target moment under the first vehicle body coordinate system comprises:
acquiring measured values of an inertial measurement unit (IMU) and a wheel speed meter of the vehicle, the measured values being acquired at the target moment;
and determining the position information of the vehicle at the target moment under the first vehicle body coordinate system according to the measured values of the IMU and the wheel speed meter acquired at the target moment.
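For claim 6, a very simplified planar dead-reckoning sketch from the IMU yaw rate and the wheel-speed-meter speed; the fixed sampling step and the integration scheme are assumptions made here, not details from the application.

import numpy as np

def dead_reckon(yaw_rates, wheel_speeds, dt=0.01):
    # yaw_rates [rad/s] from the IMU and wheel_speeds [m/s] from the wheel
    # speed meter, sampled at a fixed step dt (assumed here to be 10 ms).
    x = y = yaw = 0.0
    for w, v in zip(yaw_rates, wheel_speeds):
        yaw += w * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
    return x, y, yaw   # position and heading relative to the starting frame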
7. The method of claim 1, wherein before the acquiring the first local high-precision map under the first vehicle body coordinate system corresponding to the target moment, the method further comprises:
acquiring a positioning position obtained by positioning the vehicle at the target moment and a mapping relation between a station-center coordinate system and the first vehicle body coordinate system;
determining a second local high-precision map from a high-precision map under the station-center coordinate system according to the positioning position;
and mapping, according to the mapping relation, the second local high-precision map under the station-center coordinate system to the first vehicle body coordinate system to obtain the first local high-precision map under the first vehicle body coordinate system.
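Claim 7 amounts to a crop around the positioning result followed by a change of coordinate system; the 100 m crop radius and the (R, t) form of the mapping relation are illustrative assumptions.

import numpy as np

def crop_and_transform(hd_points, fix_xy, R_station_to_body, t_station_to_body,
                       radius=100.0):
    # hd_points: (N, 2) road-element points of the high-precision map in the
    # station-center coordinate system; fix_xy: positioning result in the same
    # system; R, t: mapping relation from the station-center system to the
    # first vehicle body coordinate system.
    keep = np.linalg.norm(hd_points - np.asarray(fix_xy), axis=1) <= radius
    second_local_map = hd_points[keep]           # second local high-precision map
    # Map the cropped points into the first vehicle body coordinate system.
    return second_local_map @ R_station_to_body.T + t_station_to_body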
8. A data processing apparatus, comprising:
a first acquisition module configured to acquire a plurality of perceived road elements at a target moment and a plurality of perceived road elements at a historical moment before the target moment;
a reconstruction module configured to splice the perceived road elements according to first positions of the perceived road elements at the target moment and second positions of the perceived road elements at the historical moment, both in a first vehicle body coordinate system of the target moment, to obtain a local reconstruction map;
a second acquisition module configured to acquire a first local high-precision map under the first vehicle body coordinate system corresponding to the target moment;
and a first determining module configured to match the plurality of perceived road elements in the local reconstruction map with the plurality of perceived road elements in the first local high-precision map and to determine a result of map data identification for the first local high-precision map and the local reconstruction map.
9. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
perform the steps of the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the steps of the method of any one of claims 1-7.
CN202311063092.3A 2023-08-22 2023-08-22 Data processing method, device, vehicle and storage medium Pending CN117152700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311063092.3A CN117152700A (en) 2023-08-22 2023-08-22 Data processing method, device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311063092.3A CN117152700A (en) 2023-08-22 2023-08-22 Data processing method, device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN117152700A true CN117152700A (en) 2023-12-01

Family

ID=88883519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311063092.3A Pending CN117152700A (en) 2023-08-22 2023-08-22 Data processing method, device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN117152700A (en)

Similar Documents

Publication Publication Date Title
CN111442776B (en) Method and equipment for sequential ground scene image projection synthesis and complex scene reconstruction
US10460603B2 (en) Method for providing obstacle maps for vehicles
CN114394088B (en) Parking tracking track generation method and device, electronic equipment and storage medium
CN110579754A (en) Method for determining external parameters of a lidar and other sensors of a vehicle
CN114943952A (en) Method, system, device and medium for obstacle fusion under multi-camera overlapped view field
CN111060126B (en) Positioning method and device and vehicle
CN116626670B (en) Automatic driving model generation method and device, vehicle and storage medium
CN115546313A (en) Vehicle-mounted camera self-calibration method and device, electronic equipment and storage medium
CN116380088B (en) Vehicle positioning method and device, vehicle and storage medium
CN117152700A (en) Data processing method, device, vehicle and storage medium
CN113155143A (en) Method, device and vehicle for evaluating a map for automatic driving
CN115973164A (en) Vehicle navigation auxiliary driving method, medium and device
JP6941701B2 (en) Autonomous driving control methods, devices, vehicles, storage media and electronic devices
CN112180348B (en) Attitude calibration method and device for vehicle-mounted multi-line laser radar
CN116819964B (en) Model optimization method, model optimization device, electronic device, vehicle and medium
CN116363631B (en) Three-dimensional target detection method and device and vehicle
CN116863429B (en) Training method of detection model, and determination method and device of exercisable area
CN117128976B (en) Method and device for acquiring road center line, vehicle and storage medium
CN116659529B (en) Data detection method, device, vehicle and storage medium
CN116499488B (en) Target fusion method, device, vehicle and storage medium
CN116968726B (en) Memory parking method and device, vehicle and computer readable storage medium
CN116678423B (en) Multisource fusion positioning method, multisource fusion positioning device and vehicle
CN118097599A (en) Method and system for detecting course angle of vehicle, storage medium and electronic equipment
CN115731123A (en) Automatic extraction method and device for high-precision map lane line
CN116340192A (en) Vehicle perception version effect evaluation method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination