CN115147483A - High-precision map data detection method and equipment, road side unit and edge computing platform - Google Patents


Info

Publication number
CN115147483A
Authority
CN
China
Prior art keywords
target
edge
determining
pose information
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210777196.XA
Other languages
Chinese (zh)
Inventor
Gao Wei (高巍)
Ding Wendong (丁文东)
Wan Guowei (万国伟)
Peng Liang (彭亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210777196.XA
Publication of CN115147483A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Abstract

The disclosure provides a map data detection method, relating to the field of artificial intelligence and, in particular, to the technical fields of automatic driving, assisted driving, and high-precision maps. The specific implementation scheme is as follows: determining a processing graph related to target frames, where the processing graph includes first nodes corresponding to the target frames and a second node corresponding to a preset point, a first node and the second node are connected via a first edge, and two first nodes are connected via a second edge; determining at least one target processing subgraph from the processing graph, where the target processing subgraph includes at least one first edge; fusing global pose information corresponding to a first edge in the target processing subgraph with target relative pose information corresponding to a second edge to obtain target pose information; and determining, according to the target pose information, a detection result indicating whether the global pose information is biased. The present disclosure also provides a map data detection apparatus, an electronic device, and a storage medium.

Description

High-precision map data detection method and equipment, road side unit and edge computing platform
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and more particularly to the field of automatic driving, assisted driving, high-precision maps, and the like. More specifically, the present disclosure provides a map data detection method, apparatus, electronic device, storage medium, computer program product, road side unit, and edge computing platform.
Background
A high-precision map, also called a high-definition map, may be a map used by an autonomous vehicle. A high-precision map contains accurate vehicle position information and rich road element data; it can help a vehicle anticipate complex road information such as gradient, curvature, and heading, and thus better avoid potential risks. With the development of artificial intelligence and high-precision map technologies, the application scenarios of automatic driving and assisted driving are increasing. In the automatic driving mode or the assisted driving mode, the position of the vehicle can be determined using the high-precision map so as to control the vehicle's travel.
Disclosure of Invention
The disclosure provides a map data detection method, a map data detection device, an electronic device, a storage medium, a computer program product, a road side unit and an edge computing platform.
According to an aspect of the present disclosure, there is provided a map data detection method, the method including: determining a processing graph related to a target frame, wherein the processing graph comprises first nodes corresponding to the target frame and second nodes corresponding to preset points, the first nodes and the second nodes are connected through first edges, the two first nodes are connected through second edges, the first edges are used for representing global pose information of the first nodes, the second edges are used for representing target relative pose information between the two first nodes, and the target frame is related to point cloud data of a target area; determining at least one target processing subgraph from the processing graph, wherein the target processing subgraph comprises at least one first edge; fusing global pose information corresponding to a first edge in the target processing subgraph with target relative pose information corresponding to a second edge to obtain target pose information; and determining a detection result according to the target pose information, wherein the detection result is used for indicating whether the global pose information has deviation or not.
According to another aspect of the present disclosure, there is provided a map data detecting apparatus including: the first determining module is used for determining a processing graph related to the target frame, wherein the processing graph comprises first nodes corresponding to the target frame and second nodes corresponding to preset points, the first nodes and the second nodes are connected through first edges, the two first nodes are connected through second edges, the first edges are used for representing the global pose information of the first nodes, the second edges are used for representing the target relative pose information between the two first nodes, and the target frame is related to point cloud data of a target area; a second determining module, configured to determine at least one target processing subgraph from the processing graph, where the target processing subgraph includes at least one first edge; the fusion module is used for fusing the global pose information corresponding to the first edge in the target processing subgraph with the target relative pose information corresponding to the second edge to obtain target pose information; and a third determining module, configured to determine a detection result according to the target pose information, where the detection result is used to indicate whether there is a deviation in the global pose information.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method provided in accordance with the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method provided according to the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements a method provided according to the present disclosure.
According to another aspect of the present disclosure, there is provided a roadside unit including the electronic device provided by the present disclosure.
According to another aspect of the present disclosure, an edge computing platform is provided, comprising a plurality of edge computing units, the edge computing units comprising the electronic device provided by the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an exemplary system architecture to which a map data detection method and apparatus may be applied, according to one embodiment of the present disclosure;
FIG. 2 is a flow diagram of a map data detection method according to one embodiment of the present disclosure;
FIG. 3A is an exemplary diagram of a processing graph according to one embodiment of the present disclosure;
FIG. 3B is an exemplary diagram of a target processing subgraph according to one embodiment of the present disclosure;
FIG. 3C is an exemplary diagram of another target processing subgraph according to one embodiment of the present disclosure;
fig. 4 is a block diagram of a map data detection apparatus according to one embodiment of the present disclosure; and
fig. 5 is a block diagram of an electronic device to which a map data detection method may be applied according to one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In order to generate a high-precision map, the data acquisition device may be used to perform multiple data acquisitions on the target area to obtain point cloud data related to the target area. For example, the data acquisition device may be deployed on an autonomous vehicle or a non-autonomous vehicle. The vehicle can run in the target area to acquire point cloud data.
The data acquisition device may include a plurality of sensors, for example a laser radar (LiDAR) and a camera. Data of the target area can be acquired by the lidar and the camera, preprocessed, and then sent to the server. For example, the lidar emits a laser scanning beam; when the beam encounters an object, it is reflected back and received by the lidar, completing one emission-reception cycle. In this way, a large amount of point cloud data can be acquired continuously. In practice, the data acquisition device may preprocess the collected mass of point cloud data, for example by screening or filtering it, to obtain preprocessed point cloud data.
Because the volume of point cloud data is large, processing is performed frame by frame. Two or more frames of point cloud data may cover the same area, and such frames can be stitched together through point cloud stitching. The stitching also yields the lidar pose corresponding to each frame of target point cloud data. Each frame's lidar pose takes the lidar center as the coordinate origin; once the lidar pose is known, each laser point in the scanned point cloud can be transformed from lidar coordinates into coordinates in the global coordinate system.
The initial value of the lidar pose may be obtained from an Inertial Measurement Unit (IMU) and a Global Navigation Satellite System (GNSS) device; for example, the trajectory solved jointly by the IMU and GNSS is used as the initial pose. A pose may include a position, corresponding to three-dimensional coordinates (x, y, z), and an attitude given by rotation angles about the three coordinate axes: heading (yaw), pitch, and roll.
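To make this representation concrete, the sketch below (Python with NumPy) packs such a pose into a 4x4 homogeneous transformation matrix of the kind used throughout the embodiments. The yaw-pitch-roll composition order is an illustrative assumption; the disclosure does not fix a rotation convention.

```python
import numpy as np

def pose_to_matrix(x, y, z, yaw, pitch, roll):
    """Build a 4x4 homogeneous transform from a position (x, y, z) and
    heading (yaw) / pitch / roll angles in radians. The rotation order
    Rz(yaw) @ Ry(pitch) @ Rx(roll) is an illustrative assumption."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # attitude
    M[:3, 3] = [x, y, z]       # position
    return M
```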
The displacement component of the global pose in this initial value may be biased. For example, if part of the trajectory exhibits elevation drift, the displacement component will deviate. Such a deviation can cause map layering or map misalignment, and even safety hazards.
However, this deviation is difficult to detect, especially when the trajectory is locally smooth and the relative poses are accurate. In that case, detection may fall back on manual inspection, which incurs a high labor cost.
Fig. 1 is a schematic diagram of an exemplary system architecture to which a map data detection method and apparatus may be applied, according to one embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include sensors 101, 102, 103, a network 120, a server 130, and a Road Side Unit (RSU) 140. Network 120 is used to provide a medium for communication links between sensors 101, 102, 103 and server 130. Network 120 may include various connection types, such as wired and/or wireless communication links, and so forth.
The sensors 101, 102, 103 may interact with the server 130 over the network 120 to receive or send messages, etc.
The sensors 101, 102, 103 may be functional elements integrated on the vehicle 110, such as infrared sensors, ultrasonic sensors, millimeter wave radar, information acquisition devices, and the like. The sensors 101, 102, 103 may be used to collect status data of sensing objects (e.g., pedestrians, vehicles, obstacles, etc.) around the vehicle 110 as well as surrounding road data.
The vehicle 110 may communicate with the roadside unit 140, receiving information from it or transmitting information to it.
The roadside unit 140 may be disposed on a signal light, for example, so as to adjust the duration or frequency of the signal light.
The server 130 may be disposed at a remote end capable of establishing communication with the vehicle-mounted terminal, and may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server.
The server 130 may be a server that provides various services. For example, a map application, a data processing application, and the like may be installed on the server 130. Taking the server 130 running the data processing application as an example: it receives, via the network 120, the obstacle state data and the map data transmitted by the sensors 101, 102, 103. One or more of the obstacle state data and the map data may serve as the data to be processed, which is then processed to obtain target data.
It should be noted that the map data detection method provided by the embodiments of the present disclosure may generally be executed by the server 130. Accordingly, the map data detection apparatus provided by the embodiments of the present disclosure may be disposed in the server 130. However, the method is not limited thereto: it may also be performed by the sensor 101, 102, or 103, in which case the map data detection apparatus may be disposed in the sensor 101, 102, or 103.
It is understood that the number of sensors, networks, and servers in fig. 1 is merely illustrative. There may be any number of sensors, networks, and servers, as desired for implementation.
It should be noted that the sequence numbers of the respective operations in the following methods are merely used as representations of the operations for description, and should not be construed as representing the execution order of the respective operations. The method need not be performed in the exact order shown, unless explicitly stated.
Fig. 2 is a flowchart of a map data detection method according to one embodiment of the present disclosure.
As shown in fig. 2, the method 200 may include operations S210 to S240.
In operation S210, a process map related to the target frame is determined.
In embodiments of the present disclosure, the target frame is correlated with point cloud data of the target area.
For example, the data acquisition device described above may be used to acquire data of the target area, obtaining multi-frame point cloud data. A plurality of target frames are then screened from the multi-frame point cloud data according to preset screening conditions.
In the embodiment of the present disclosure, the processing graph includes first nodes corresponding to the target frames and a second node corresponding to a preset point. A first node and the second node are connected via a first edge. Two first nodes are connected via a second edge. A first edge may characterize the global pose information of a first node. A second edge may characterize the target relative pose information between two first nodes.
For example, the preset point may be a coordinate zero of a global coordinate system.
For example, the processing graph may be a directed graph or an undirected graph.
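As a concrete illustration of this structure, the sketch below builds a small processing graph with the networkx library; the node labels, edge attributes, and placeholder matrices are illustrative assumptions, not part of the disclosure.

```python
import numpy as np
import networkx as nx

# Placeholder 4x4 pose matrices; in practice these come from the
# acquisition trajectory and the point cloud stitching step.
M_ga, M_gb, M_ab = np.eye(4), np.eye(4), np.eye(4)

graph = nx.DiGraph()
# First nodes correspond to target frames; the second node 'G'
# corresponds to the preset point (the global coordinate origin).
graph.add_nodes_from(['a', 'b'], kind='first')
graph.add_node('G', kind='second')
# A first edge connects the preset point to a target frame and carries
# the global pose information of that first node.
graph.add_edge('G', 'a', kind='first', M=M_ga)
graph.add_edge('G', 'b', kind='first', M=M_gb)
# A second edge connects two first nodes and carries the target
# relative pose information between them.
graph.add_edge('a', 'b', kind='second', M=M_ab)
```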
In operation S220, at least one target processing sub-graph is determined from the processing graph.
For example, the target processing subgraph includes at least one first edge.
For another example, the target processing subgraph may include a second edge and a first edge.
For example, the target processing subgraph may be a closed-loop graph surrounded by a first node, a second node, a first edge, and a second edge.
In operation S230, the global pose information corresponding to the first edge in the target processing sub-graph is fused with the target relative pose information corresponding to the second edge to obtain target pose information.
For example, the global pose information may correspond to a transformation matrix, and the target relative pose information may also correspond to a transformation matrix. Various matrix operations, for example matrix multiplication, may be performed on these transformation matrices to obtain an operation result, thereby fusing the global pose information and the target relative pose information.
In operation S240, a detection result is determined according to the target pose information.
For example, the detection result is used to indicate whether there is a deviation in the global pose information.
For example, the target pose information may correspond to a transformation matrix, namely the result of the operations described above.
For another example, the detection result may be determined according to the difference between the target pose information and preset pose information. If the difference exceeds a preset difference value, the detection result may indicate that the global pose information is biased.
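A minimal sketch of this check, assuming the preset pose information is the identity matrix (as in the embodiment below) and using the standard translation-norm and rotation-angle-from-trace residuals; the concrete difference measure is an assumption:

```python
import numpy as np

def is_biased(M_target, trans_tol, rot_tol_rad):
    """Flag a deviation when either the translation residual or the
    rotation residual of the fused target pose matrix exceeds its
    tolerance (the preset difference values)."""
    t = np.linalg.norm(M_target[:3, 3])            # translation residual
    R = M_target[:3, :3]
    cos_a = (np.trace(R) - 1.0) / 2.0              # rotation residual
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return t > trans_tol or angle > rot_tol_rad
```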
According to the embodiments of the present disclosure, the relative poses are used to detect whether the global pose is biased. This makes it possible to determine a global pose deviation accurately even when the ground truth is unknown, reducing the cost of deviation detection.
In some embodiments, the preset pose information may correspond to an identity matrix.
In some embodiments, the pose information corresponds to a transformation matrix.
For example, the relative pose information between two frames of point cloud data corresponds to a transformation matrix. From the transformation matrix, a relative rotation angle value and a relative translation value between two frames of point cloud data may be determined.
In some embodiments, the plurality of target frames come from at least two point cloud data sets, each resulting from one data acquisition pass over the target area.
For example, the data acquisition device may acquire the target area multiple times, resulting in multiple point cloud data sets.
For another example, during each acquisition pass, the data acquisition device may acquire one frame of data every 0.1 second, resulting in a plurality of initial frames. Key frames are determined from the initial frames according to a first preset screening condition, which may include, for example: the relative rotation angle value is greater than a preset relative rotation angle value, or the relative translation value is greater than a first preset relative translation value. For example, the first preset relative translation value may be 10 meters and the preset relative rotation angle value may be 10 degrees.
In one example, the 1st initial frame of the plurality of initial frames may be determined to be the 1st key frame. From the initial relative pose information between the 1st key frame and the other initial frames, a plurality of relative translation values can be obtained, and an initial frame whose relative translation value with respect to the 1st key frame is 11 meters is taken as the 2nd key frame.
As another example, multiple acquisition passes yield multiple key frames. According to a second preset screening condition, a plurality of target frames can be determined from key frames belonging to different acquisition passes. The second preset screening condition may include, for example: the relative translation value is less than or equal to a second preset relative translation value, e.g., 30 meters.
In one example, taking 4 acquisition passes, each pass may yield N key frames, where N is an integer greater than or equal to 1. One key frame is randomly chosen as target frame a from the N key frames of the 1st pass. After target frame a is determined, a plurality of relative translation values can be obtained from the initial relative pose information between target frame a and the N key frames of the 2nd pass. On this basis, the key frame of the 2nd pass whose relative translation value with respect to target frame a is 30 meters is determined as target frame b. Next, a target frame c may be determined from the N key frames of the 3rd pass, and a target frame d from the N key frames of the 4th pass.
In another example, again with 4 acquisition passes each yielding N key frames, one key frame is randomly chosen as target frame a from the 1st pass, and target frame b is determined from the 2nd pass as above. The difference from the preceding example is that, after target frame b is determined, target frame c may also be determined from the N key frames of the 1st pass.
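A sketch of the first screening condition is given below, assuming each frame's pose is available as a 4x4 matrix (as in the earlier sketches); function and parameter names are illustrative. The second condition, pairing key frames across acquisition passes by a relative translation threshold, would compare poses from different runs in the same way.

```python
import numpy as np

def rel_motion(M_i, M_j):
    """Relative translation (meters) and rotation (degrees) between poses."""
    M = np.linalg.inv(M_i) @ M_j
    t = np.linalg.norm(M[:3, 3])
    cos_a = np.clip((np.trace(M[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t, np.degrees(np.arccos(cos_a))

def select_keyframes(poses, min_trans=10.0, min_rot=10.0):
    """First screening condition: a frame becomes a key frame once its
    relative translation exceeds 10 m or its relative rotation exceeds
    10 degrees with respect to the previous key frame."""
    keep = [0]   # the 1st initial frame is the 1st key frame
    for i in range(1, len(poses)):
        t, ang = rel_motion(poses[keep[-1]], poses[i])
        if t > min_trans or ang > min_rot:
            keep.append(i)
    return keep
```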
According to the embodiments of the present disclosure, the target frames come from at least two point cloud data sets, so a global pose deviation can be detected accurately.
In some embodiments, a key frame is fused with an initial frame adjacent to the key frame to obtain a target frame.
For example, taking 4 acquisition passes, each pass may yield N key frames. A key frame a' is randomly determined from the N key frames of the 1st pass. The key frame a' and the 3 initial frames adjacent to it are fused to obtain the target frame a.
According to the embodiments of the present disclosure, fusing key frames with initial frames enriches the sparse key frames, so the information carried by each target frame is richer and the detection is more accurate.
In some embodiments, determining the processing graph includes: performing point cloud stitching on the plurality of target frames according to their initial relative pose information to obtain the target relative pose information among the target frames; and determining the processing graph according to the target relative pose information among the target frames.
For example, initial relative pose information for a target frame may be determined during acquisition of the target frame.
For example, the stitching may be performed based on the initial relative pose information. Alternatively, the elevation-related components of the initial pose information may be set to zero before stitching.
It is to be understood that some embodiments of determining a processing graph are described above in detail, and that the following description is provided in connection with determining at least one target processing subgraph from the processing graph.
In some embodiments, determining at least one target processing subgraph from the processing graph comprises: performing closed-loop detection on the processing graph to obtain at least one initial processing subgraph; and determining the initial processing subgraph including the first edge as a target processing subgraph. The following will be described in detail with reference to fig. 3A to 3C.
Fig. 3A is an exemplary schematic diagram of a processing graph according to one embodiment of the present disclosure.
As shown in fig. 3A, the processing graph 300 includes a first node 310, a first node 320, a first node 330, a first node 340, and a second node 350. The first node 310 corresponds to target frame a, the first node 320 to target frame b, the first node 330 to target frame c, the first node 340 to target frame d, and the second node 350 to the preset point G.
The first node 310 is connected to the second node 350 via a first edge 351. The first node 320 is connected to the second node 350 via a first edge 352. The first node 330 is connected to the second node 350 via a first edge 353. The first node 340 is connected to the second node 350 via a first edge 354.
The first node 310 is connected to the first node 320 via a second edge 312. The first node 310 and the first node 330 are connected by a second edge 313. The first node 310 is connected to the first node 340 via the second edge 314. The first node 320 is connected to the first node 330 via a second edge 323. The first node 320 is connected to the first node 340 via the second edge 324. The first node 330 is connected to the first node 340 via a second edge 334.
As shown in fig. 3A, the processing graph 300 is a directed graph. The data acquisition device acquires point cloud data along a preset direction, from which the direction of each edge in the processing graph 300 can be determined.
FIG. 3B is an exemplary diagram of a target processing subgraph according to one embodiment of the present disclosure. Fig. 3C is an exemplary diagram of another target processing subgraph according to one embodiment of the present disclosure.
In the embodiment of the present disclosure, a plurality of initial processing subgraphs can be obtained by performing closed-loop detection on the processing graph 300.
For example, one initial processing subgraph includes: the first node 310, the first node 320, the second node 350, the second edge 312, the first edge 351, and the first edge 352. Since it includes the first edge 351 and the first edge 352, this initial processing subgraph can be taken as the target processing subgraph 301.
For example, another initial processing subgraph includes: the first node 310, the first node 320, the first node 340, the second node 350, the second edge 312, the second edge 324, the first edge 351, and the first edge 354. Since it includes the first edge 351 and the first edge 354, this initial processing subgraph can be taken as the target processing subgraph 302.
As another example, a further initial processing subgraph includes the first node 310, the first node 320, the first node 330, the second edge 312, the second edge 313, and the second edge 323. Since it includes no first edge, it is not taken as a target processing subgraph in this embodiment.
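A minimal sketch of this selection step, reusing the networkx representation from the earlier sketch: cycles of the graph serve as initial processing subgraphs, and those containing at least one first edge are kept. Note that nx.cycle_basis returns only a basis of cycles, so this is a simplification of full closed-loop enumeration.

```python
import networkx as nx

def edge_data(graph, u, v):
    """Edge attributes regardless of direction (cycles are found on the
    undirected form of the directed processing graph)."""
    return graph.get_edge_data(u, v) or graph.get_edge_data(v, u) or {}

def target_processing_subgraphs(graph):
    """Keep the cycles (initial processing subgraphs) that contain at
    least one first edge, as target processing subgraphs."""
    subgraphs = []
    for cycle in nx.cycle_basis(graph.to_undirected()):
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        if any(edge_data(graph, u, v).get('kind') == 'first'
               for u, v in edges):
            subgraphs.append(edges)
    return subgraphs
```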
In some embodiments, fusing the global pose information corresponding to the first edge and the target relative pose information corresponding to the second edge in the target processing subgraph to obtain the target pose information comprises: and determining a target transformation matrix as target pose information according to the transformation matrix corresponding to the target relative pose information and the transformation matrix corresponding to the global pose information.
In an embodiment of the disclosure, determining the target transformation matrix includes: determining a preset direction of a directed edge in a target processing subgraph; determining a target directed edge opposite to the preset direction from the plurality of directed edges according to the directions of the plurality of directed edges and the preset direction of the directed edges; and determining the target transformation matrix according to the inverse matrix of the transformation matrix related to the target directed edge and the transformation matrices related to other directed edges except the target directed edge.
For example, the directed edges include a first edge and a second edge.
For example, as shown in fig. 3B, the target processing sub-graph 301 may include: first node 310, first node 320, and second node 350, second edge 312, first edge 351, and first edge 352. The preset direction of the directed edge in the target processing subgraph may be, for example, a clockwise direction: from the first node 310 to the first node 320, from the first node 320 to the second node 350, and from the second node 350 to the first node 310.
As shown in fig. 3B, the direction of the first edge 352 is from the second node 350 to the first node 320, which is opposite to the preset direction. The first edge 352 may therefore be the target directed edge.
As shown in fig. 3B, the direction of the first edge 351 is from the second node 350 to the first node 310, and the direction of the second edge 312 is from the first node 310 to the first node 320. Both agree with the preset direction, so the first edge 351 and the second edge 312 are the other directed edges.
The second edge 312 may characterize the target relative pose information between the first node 310 and the first node 320, that is, between target frame a and target frame b. This target relative pose information may correspond to a transformation matrix M_ab, the transformation matrix associated with the second edge 312.
The first edge 351 may characterize the global pose information of the first node 310, which may correspond to a transformation matrix M_ga, the transformation matrix associated with the first edge 351.
The first edge 352 may characterize the global pose information of the first node 320, which may correspond to a transformation matrix M_gb, the transformation matrix associated with the first edge 352.
The target transformation matrix is determined from the transformation matrix M_ab, the inverse (M_gb)^-1 of the transformation matrix M_gb, and the transformation matrix M_ga. For example, matrix multiplication of M_ab and (M_gb)^-1 gives a first intermediate transformation matrix; matrix multiplication of the first intermediate transformation matrix and M_ga gives the target transformation matrix M_abg.
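In code, the composition for the subgraph of fig. 3B is a chain of matrix products in which the edge running against the preset direction contributes its inverse; the matrices here are placeholders:

```python
import numpy as np

# Placeholder matrices for target processing subgraph 301:
# second edge 312 -> M_ab, first edge 352 -> M_gb, first edge 351 -> M_ga.
M_ab, M_gb, M_ga = np.eye(4), np.eye(4), np.eye(4)

# First edge 352 runs against the preset (clockwise) direction, so its
# matrix enters the product inverted.
first_intermediate = M_ab @ np.linalg.inv(M_gb)
M_abg = first_intermediate @ M_ga   # target transformation matrix
```

The four-edge subgraph of fig. 3C extends the chain in the same way: M_abdg = M_ab @ M_bd @ inv(M_gd) @ M_ga.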
For another example, as shown in fig. 3C, the target processing sub-graph 302 may include: first node 310, first node 320, first node 340, and second node 350, second edge 312, second edge 324, first edge 351, and first edge 354. The preset direction of the directed edge in the target processing subgraph may be, for example, a clockwise direction: from first node 310 to first node 320, from first node 320 to first node 340, from first node 340 to second node 350, and from second node 350 to first node 310.
As shown in fig. 3C, the direction of the first edge 354 is from the second node 350 to the first node 340, which is opposite to the preset direction. The first edge 354 may therefore be the target directed edge.
As shown in fig. 3C, the direction of the first edge 351 is from the second node 350 to the first node 310, the direction of the second edge 312 is from the first node 310 to the first node 320, and the direction of the second edge 324 is from the first node 320 to the first node 340. All three agree with the preset direction, so the first edge 351, the second edge 312, and the second edge 324 are the other directed edges.
The second edge 312 may characterize the target relative pose information between the first node 310 and the first node 320, that is, between target frame a and target frame b, corresponding to the transformation matrix M_ab associated with the second edge 312.
The second edge 324 may characterize the target relative pose information between the first node 320 and the first node 340, that is, between target frame b and target frame d, corresponding to the transformation matrix M_bd associated with the second edge 324.
The first edge 351 may characterize the global pose information of the first node 310, corresponding to the transformation matrix M_ga associated with the first edge 351.
The first edge 354 may characterize the global pose information of the first node 340, corresponding to the transformation matrix M_gd associated with the first edge 354.
The target transformation matrix is determined from the transformation matrix M_ab, the transformation matrix M_bd, the inverse (M_gd)^-1 of the transformation matrix M_gd, and the transformation matrix M_ga. For example, matrix multiplication of M_ab and M_bd gives a second intermediate transformation matrix; matrix multiplication of the second intermediate transformation matrix and (M_gd)^-1 gives a third intermediate transformation matrix; and matrix multiplication of the third intermediate transformation matrix and M_ga gives the target transformation matrix M_abdg.
In the embodiment of the present disclosure, determining the detection result according to the target pose information includes: determining a translation value and a rotation angle value according to the target transformation matrix; determining a detection result of the first edge according to the translation value, the rotation angle value, the target translation value and the target rotation angle value; and determining a detection result of the map data related to the target area according to the detection result of the first edge.
For example, from the target transformation matrix M_abg, a translation value T_abg and a rotation angle value R_abg may be determined. Likewise, from the target transformation matrix M_abdg, a translation value T_abdg and a rotation angle value R_abdg may be determined.
In this embodiment of the present disclosure, determining the detection result of the first edge according to the translation value, the rotation angle value, the target translation value, and the target rotation angle value further includes: determining edge number values of a first edge and a second edge in the target processing subgraph; and determining a target translation value and a target rotation angle value according to the edge quantity value, the preset translation value and the preset rotation angle value.
For example, as shown in FIG. 3B, a target processing sub-graph 301 may include a second edge 312, a first edge 351, and a first edge 352. The edge number value of the target processing subgraph 301 is 3. Taking the example that the preset translation value is 0.1 meter and the preset rotation angle value is 0.1 degrees, for the target processing sub-graph 301, the target translation value may be 0.3 meter and the target rotation angle value may be 0.3 degrees.
For another example, as shown in FIG. 3C, the target processing subgraph 302 can include a second edge 312, a second edge 324, a first edge 351, and a first edge 354. The edge number value of the target processing subgraph 302 is 4. Taking the example that the preset translation value is 0.1 meter and the preset rotation angle value is 0.1 degrees, for the target processing sub-graph 302, the target translation value may be 0.4 meter and the target rotation angle value may be 0.4 degrees.
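The target thresholds thus scale linearly with the number of edges in the subgraph, as the short sketch below illustrates (values taken from the two examples above):

```python
preset_trans, preset_rot = 0.1, 0.1   # meters, degrees

n_edges_301 = 3                       # edges 312, 351, 352 (fig. 3B)
target_trans_301 = n_edges_301 * preset_trans   # 0.3 m
target_rot_301 = n_edges_301 * preset_rot       # 0.3 deg

n_edges_302 = 4                       # edges 312, 324, 351, 354 (fig. 3C)
target_trans_302 = n_edges_302 * preset_trans   # 0.4 m
target_rot_302 = n_edges_302 * preset_rot       # 0.4 deg
```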
Next, in the embodiment of the disclosure, it may be determined whether the translation value is less than or equal to the target translation value and the rotation angle value is less than or equal to the target rotation angle value, so as to obtain the sub-detection result of the first edge. The sub-detection results include a first class of sub-detection results and a second class of sub-detection results.
For example, in response to determining that the translation value is less than or equal to the target translation value and the rotation angle value is less than or equal to the target rotation angle value, the sub-detection result of the first edge in the target processing sub-graph is determined as the first class of sub-detection result. The first-class sub-detection result is used for indicating that no deviation exists in the global pose information corresponding to the first edge.
For another example, in response to determining that the translation value is greater than the target translation value or the rotation angle value is greater than the target rotation angle value, the sub-detection result of the first edge in the target processing sub-graph is determined as the second type of sub-detection result. And the second type of sub-detection result is used for indicating that the global pose information corresponding to the first edge has deviation.
In one example, if the translation value T_abg is smaller than 0.3 meters and the rotation angle value R_abg is smaller than 0.3 degrees, one sub-detection result of the first edge 351 may be determined to be a first-class sub-detection result, and one sub-detection result of the first edge 352 may likewise be determined to be a first-class sub-detection result.
In one example, if the translation value T_abdg is smaller than 0.4 meters and the rotation angle value R_abdg is smaller than 0.4 degrees, another sub-detection result of the first edge 351 may be determined to be a first-class sub-detection result, and another sub-detection result of the first edge 354 may likewise be determined to be a first-class sub-detection result.
In an embodiment of the present disclosure, determining the detection result of the first edge includes: determining at least one sub-detection result of the first edge according to at least one target processing subgraph related to the first edge; and determining the detection result of the first edge according to at least one sub-detection result.
For example, as shown in fig. 3B and 3C, for the first edge 351, the target processing sub-graph associated with the first edge 351 includes the target processing sub-graph 301 and the target processing sub-graph 302, and so on. Among the plurality of sub-detection results of the first edge 351, if the number of the first-type sub-detection results is greater than the number of the second-type sub-detection results, the obtained detection result of the first edge may indicate that there is no deviation in the global pose information.
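A sketch of this aggregation and of the map-level decision described next; the tie-breaking behaviour is an assumption, since the disclosure only states the majority case:

```python
def edge_has_deviation(sub_results):
    """Majority vote over one first edge's sub-detection results.
    True in sub_results denotes a second-class result (deviation)."""
    bad = sum(1 for r in sub_results if r)
    ok = len(sub_results) - bad   # first-class: no deviation
    return bad >= ok              # assumed: a tie counts as a deviation

def map_has_deviation(edge_results):
    """The target area's point cloud is flagged as biased when at least
    one first edge's detection result indicates a deviation."""
    return any(edge_results)
```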
For another example, it may be determined that the point cloud data of the target area has a global pose deviation if at least one of the detection results of the first edges 351, 352, 353, and 354 indicates that the global pose information has a deviation.
By the embodiment of the disclosure, under the condition that the processing graph is the directed graph, the detection efficiency can be improved, the calculation amount is reduced, and the cost of map data detection is further reduced.
It is to be understood that the process diagram 300 described above is a directed graph, and in the embodiment of the present disclosure, the process diagram may also be an undirected graph, which will be described in detail below.
For example, in the case that the processing graph is an undirected graph, at least one target processing subgraph can be determined from the processing graph. The target processing subgraph includes at least one first edge and at least one second edge.
For one target processing subgraph, the transformation matrix corresponding to each first edge and the transformation matrix corresponding to each second edge are obtained, yielding a plurality of transformation matrices. The inverse of at least one of these transformation matrices is computed, and matrix multiplication is performed on that inverse and the other transformation matrices to obtain an operation result. Performing this operation once with the inverse of each transformation matrix yields a plurality of operation results. The operation result with the smallest difference from the identity matrix is taken as the target pose information, and the detection result is determined from the target pose information.
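A sketch of this undirected variant, assuming the edge matrices of one target processing subgraph are collected in a list and using the Frobenius norm as the difference measure (the disclosure does not specify one):

```python
import numpy as np

def fuse_undirected(mats):
    """For each edge matrix in turn, invert it, multiply through the
    remaining matrices, and keep the product that differs least from
    the identity; that product is taken as the target pose information."""
    best = None
    for i in range(len(mats)):
        product = np.eye(4)
        for j, M in enumerate(mats):
            product = product @ (np.linalg.inv(M) if i == j else M)
        diff = np.linalg.norm(product - np.eye(4))
        if best is None or diff < best[0]:
            best = (diff, product)
    return best[1]
```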
Fig. 4 is a block diagram of a map data detection apparatus according to one embodiment of the present disclosure.
As shown in fig. 4, the apparatus 400 may include a first determination module 410, a second determination module 420, a fusion module 430, and a third determination module 440.
A first determining module 410 is configured to determine a processing graph related to the target frames. For example, the processing graph includes first nodes corresponding to the target frames and a second node corresponding to a preset point; a first node and the second node are connected via a first edge; two first nodes are connected via a second edge; a first edge characterizes the global pose information of a first node; a second edge characterizes the target relative pose information between two first nodes; and the target frames are related to the point cloud data of the target area.
A second determining module 420 for determining at least one target processing subgraph from the processing graph. For example, the target processing subgraph includes at least one first edge.
And the fusion module 430 is configured to fuse the global pose information corresponding to the first edge in the target processing sub-graph with the target relative pose information corresponding to the second edge to obtain target pose information.
And a third determining module 440, configured to determine a detection result according to the target pose information, where the detection result is used to indicate whether there is a deviation in the global pose information.
In some embodiments, the first determining module comprises: the point cloud splicing module is used for performing point cloud splicing on the plurality of target frames according to the initial relative pose information of the plurality of target frames to obtain target relative pose information among the plurality of target frames; and the first determining submodule is used for determining a processing graph according to the relative pose information of the target among the plurality of target frames.
In some embodiments, the plurality of target frames are from at least two point cloud data sets, each of the at least two point cloud data sets being a data acquisition of a target area.
In some embodiments, the second determining module comprises: the closed-loop detection sub-module is used for carrying out closed-loop detection on the processing graph to obtain at least one initial processing subgraph; and a second determining sub-module for determining the initial processing subgraph including the first edge as the target processing subgraph.
In some embodiments, the fusion module comprises: and the third determining submodule is used for determining a target transformation matrix as target pose information according to the transformation matrix corresponding to the target relative pose information and the transformation matrix corresponding to the global pose information.
In some embodiments, the processing graph is a directed graph, and the third determining sub-module includes: a first determining unit, configured to determine the preset direction of the directed edges in the target processing subgraph, where the directed edges include the first edges and the second edges; a second determining unit, configured to determine, from the plurality of directed edges, a target directed edge opposite to the preset direction according to the directions of the directed edges and the preset direction; and a third determining unit, configured to determine the target transformation matrix according to the inverse matrix of the transformation matrix associated with the target directed edge and the transformation matrices associated with the other directed edges.
In some embodiments, the third determining module comprises: the fourth determining submodule is used for determining a translation value and a rotation angle value according to the target transformation matrix; the fifth determining submodule is used for determining the detection result of the first edge according to the translation value, the rotation angle value, the target translation value and the target rotation angle value; and a sixth determining sub-module configured to determine a detection result of the map data related to the target area, according to the detection result of the first edge.
In some embodiments, the fifth determination submodule comprises: a fourth determining unit, configured to determine at least one sub-detection result of the first edge according to at least one target processing sub-graph associated with the first edge, where the sub-detection result includes a first class of sub-detection result and a second class of sub-detection result, the first class of sub-detection result is used to indicate that there is no deviation in the global pose information corresponding to the first edge, and the second class of sub-detection result is used to indicate that there is a deviation in the global pose information corresponding to the first edge; and a fifth determining unit for determining the detection result of the first edge according to the at least one sub-detection result.
In some embodiments, the fifth determination submodule comprises: and a sixth determining unit, configured to determine, in response to determining that the translation value is less than or equal to the target translation value and the rotation angle value is less than or equal to the target rotation angle value, the sub-detection result of the first edge in the target processing sub-graph as the first-class sub-detection result.
In some embodiments, the fifth determination submodule comprises: and the seventh determining unit is used for determining the sub-detection result of the first edge in the target processing sub-image as the second type sub-detection result in response to the fact that the translation value is larger than the target translation value or the rotation angle value is larger than the target rotation angle value.
In some embodiments, the fifth determination sub-module further comprises: an eighth determining unit, configured to determine edge quantity values of the first edge and the second edge in the target processing subgraph; and the ninth determining unit is used for determining a target translation value and a target rotation angle value according to the edge quantity value, the preset translation value and the preset rotation angle value.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, a computer program product, a road side unit and an edge computing platform according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501, which may perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 502 or loaded from a storage unit 508 into a Random Access Memory (RAM) 503. The RAM 503 may also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to one another by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard or a mouse; an output unit 507 such as various types of displays and speakers; a storage unit 508 such as a magnetic disk or an optical disk; and a communication unit 509 such as a network card, a modem, or a wireless communication transceiver. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units that run machine learning model algorithms, Digital Signal Processors (DSPs), and any appropriate processors, controllers, or microcontrollers. The computing unit 501 performs the respective methods and processes described above, such as the map data detection method. For example, in some embodiments, the map data detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the map data detection method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the map data detection method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to an embodiment of the present disclosure, the present disclosure also provides a road side unit, which may include the electronic device provided by the present disclosure. For example, the roadside unit may include the electronic device 500 described above.
According to an embodiment of the present disclosure, the present disclosure also provides an edge computing platform including a plurality of edge computing units, each of which may include the electronic device provided by the present disclosure. For example, each edge computing unit may include the electronic device 500 described above.
It should be understood that the various flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order; this is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (27)

1. A map data detection method, comprising:
determining a processing graph related to a target frame, wherein the processing graph comprises first nodes corresponding to target frames and a second node corresponding to a preset point, a first node and the second node are connected through a first edge, two first nodes are connected through a second edge, the first edge is used for representing global pose information of the first node, the second edge is used for representing target relative pose information between the two first nodes, and the target frame is related to point cloud data of a target area;
determining at least one target processing subgraph from the processing graph, wherein the target processing subgraph comprises at least one first edge;
fusing the global pose information corresponding to the first edge in the target processing subgraph with the target relative pose information corresponding to the second edge to obtain target pose information; and
determining a detection result according to the target pose information, wherein the detection result is used for indicating whether the global pose information has a deviation.
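For illustration only, the processing graph of claim 1 can be sketched as a small data structure. The following Python sketch is not the claimed implementation; it assumes poses are 4x4 homogeneous transformation matrices, and all class and attribute names are hypothetical.

import numpy as np

class ProcessingGraph:
    """Illustrative pose graph: each first node is a target frame, and a
    single second node stands for the preset point."""

    def __init__(self):
        self.first_nodes = []   # target frame identifiers
        self.first_edges = {}   # frame id -> 4x4 global pose (first edge to the preset point)
        self.second_edges = {}  # (frame i, frame j) -> 4x4 target relative pose (second edge)

    def add_frame(self, frame_id, global_pose):
        # A first edge connects the frame's first node to the second node
        # and carries the frame's global pose information.
        self.first_nodes.append(frame_id)
        self.first_edges[frame_id] = np.asarray(global_pose)

    def add_relative_pose(self, frame_i, frame_j, relative_pose):
        # A second edge connects two first nodes and carries the target
        # relative pose information between them.
        self.second_edges[(frame_i, frame_j)] = np.asarray(relative_pose)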
2. The method of claim 1, wherein the determining a processing graph related to a target frame comprises:
performing point cloud splicing on a plurality of target frames according to initial relative pose information of the plurality of target frames to obtain the target relative pose information among the plurality of target frames; and
determining the processing graph according to the target relative pose information among the plurality of target frames.
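Claim 2 does not fix a registration algorithm for the point cloud splicing step. As one hedged illustration, point-to-point ICP, here via the Open3D library, can refine the initial relative pose information into the target relative pose information; the correspondence distance below is an arbitrary placeholder.

import open3d as o3d

def refine_relative_pose(source_cloud, target_cloud, initial_pose):
    # Register the source frame against the target frame, starting from
    # the initial relative pose information between the two frames.
    result = o3d.pipelines.registration.registration_icp(
        source_cloud, target_cloud,
        max_correspondence_distance=0.5,  # placeholder threshold, in metres
        init=initial_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation  # 4x4 target relative pose information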
3. The method of claim 1, wherein the plurality of target frames are from at least two point cloud data sets, each of the at least two point cloud data sets resulting from a data acquisition of the target area.
4. The method of claim 1, wherein said determining at least one target processing subgraph from said processing graph comprises:
performing closed-loop detection on the processing graph to obtain at least one initial processing subgraph; and
determining the initial processing subgraph including the first edge as the target processing subgraph.
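Claim 4 leaves the closed-loop detection method open. One illustrative reading, sketched below with the networkx library, treats every cycle of the processing graph as an initial processing subgraph and keeps only those containing at least one first edge; the graph layout follows the hypothetical ProcessingGraph sketch above.

import networkx as nx

def target_processing_subgraphs(graph, preset_node="preset_point"):
    g = nx.Graph()
    for frame_id in graph.first_nodes:
        g.add_edge(frame_id, preset_node, kind="first")  # first edges
    for frame_i, frame_j in graph.second_edges:
        g.add_edge(frame_i, frame_j, kind="second")      # second edges
    # Closed-loop detection: each cycle is an initial processing subgraph.
    for cycle in nx.cycle_basis(g):
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        # Keep the subgraph only if it includes at least one first edge.
        if any(g.edges[u, v]["kind"] == "first" for u, v in edges):
            yield edges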
5. The method of claim 1, wherein the fusing the global pose information corresponding to the first edge with the object relative pose information corresponding to the second edge in the object processing subgraph to obtain object pose information comprises:
determining a target transformation matrix as the target pose information according to the transformation matrix corresponding to the target relative pose information and the transformation matrix corresponding to the global pose information.
6. The method of claim 5, wherein the processing graph is a directed graph,
the determining a target transformation matrix comprises:
determining a preset direction of directed edges in the target processing subgraph, wherein the directed edges comprise the first edge and the second edge;
determining, from the plurality of directed edges, a target directed edge whose direction is opposite to the preset direction according to the directions of the plurality of directed edges and the preset direction; and
determining the target transformation matrix according to an inverse matrix of the transformation matrix related to the target directed edge and the transformation matrices related to the directed edges other than the target directed edge.
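As a worked sketch of claims 5 and 6, assuming 4x4 homogeneous transforms composed by matrix multiplication (the traversal order is an assumption, not claim language): edges traversed along the preset direction contribute their matrix, and target directed edges opposite to it contribute the inverse matrix.

import numpy as np

def target_transformation_matrix(ordered_edges, transforms, along_preset):
    """ordered_edges: directed edges of the target processing subgraph in
    traversal order; transforms: edge -> 4x4 matrix; along_preset: edge ->
    True when the edge points along the preset direction."""
    target = np.eye(4)
    for edge in ordered_edges:
        matrix = transforms[edge]
        if not along_preset[edge]:
            # Target directed edge opposite to the preset direction:
            # its inverse matrix enters the product (claim 6).
            matrix = np.linalg.inv(matrix)
        target = target @ matrix
    return target  # the target pose information of claim 5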
7. The method of claim 5, wherein the determining a detection result according to the target pose information comprises:
determining a translation value and a rotation angle value according to the target transformation matrix;
determining a detection result of the first edge according to the translation value, the rotation angle value, a target translation value, and a target rotation angle value; and
determining a detection result of the map data related to the target area according to the detection result of the first edge.
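A hedged sketch of claims 7 to 10: the translation value can be taken as the norm of the matrix's translation column, and the rotation angle value recovered from the trace of its rotation block (a standard rotation-matrix identity, not dictated by the claims).

import numpy as np

def first_edge_sub_detection(target_matrix, target_translation, target_angle_deg):
    translation_value = np.linalg.norm(target_matrix[:3, 3])
    cos_theta = (np.trace(target_matrix[:3, :3]) - 1.0) / 2.0
    rotation_angle_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    # Claims 9 and 10: within both thresholds, first type (no deviation);
    # beyond either threshold, second type (deviation).
    if translation_value <= target_translation and rotation_angle_deg <= target_angle_deg:
        return "first_type"  # no deviation in the global pose information
    return "second_type"     # deviation in the global pose information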
8. The method of claim 7, wherein the determining the detection result of the first edge comprises:
determining at least one sub-detection result of the first edge according to at least one target processing subgraph related to the first edge, wherein the sub-detection result comprises a first type of sub-detection result and a second type of sub-detection result, the first type of sub-detection result is used for indicating that no deviation exists in the global pose information corresponding to the first edge, and the second type of sub-detection result is used for indicating that a deviation exists in the global pose information corresponding to the first edge; and
determining the detection result of the first edge according to the at least one sub-detection result.
9. The method of claim 8, wherein the determining the detection result of the first edge according to the translation value, the rotation angle value, a target translation value, and a target rotation angle value comprises:
in response to determining that the translation value is less than or equal to the target translation value and the rotation angle value is less than or equal to the target rotation angle value, determining a sub-detection result of the first edge in the target processing subgraph as the first type of sub-detection result.
10. The method of claim 8, wherein the determining the detection result of the first edge according to the translation value, the rotation angle value, a target translation value, and a target rotation angle value comprises:
in response to determining that the translation value is greater than the target translation value or the rotation angle value is greater than the target rotation angle value, determining a sub-detection result of the first edge in the target processing subgraph as the second type of sub-detection result.
11. The method of claim 7, wherein the determining a detection result of the first edge according to the translation value, the rotation angle value, the target translation value, and the target rotation angle value further comprises:
determining an edge quantity value of the first edge and the second edge in the target processing subgraph; and
determining the target translation value and the target rotation angle value according to the edge quantity value, a preset translation value, and a preset rotation angle value.
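Claim 11 states only that the thresholds are determined from the edge quantity value and the preset values. Scaling linearly with the number of edges, so that drift accumulated along a longer loop is tolerated proportionally, is one plausible reading, flagged here as an assumption. In an ideal loop the composed target transformation matrix is the identity, so both values vanish and the tolerances absorb per-edge noise.

def loop_thresholds(edge_count, preset_translation, preset_angle_deg):
    # Assumption, not claim language: each edge contributes up to one
    # preset tolerance, so the loop-level target values scale linearly
    # with the edge quantity value.
    return edge_count * preset_translation, edge_count * preset_angle_deg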
12. A map data detection apparatus comprising:
a first determining module, configured to determine a processing graph related to a target frame, wherein the processing graph comprises first nodes corresponding to target frames and a second node corresponding to a preset point, a first node and the second node are connected through a first edge, two first nodes are connected through a second edge, the first edge is used for representing global pose information of the first node, the second edge is used for representing target relative pose information between the two first nodes, and the target frame is related to point cloud data of a target area;
a second determining module, configured to determine at least one target processing subgraph from the processing graph, wherein the target processing subgraph includes at least one of the first edges;
a fusion module, configured to fuse the global pose information corresponding to the first edge in the target processing subgraph with the target relative pose information corresponding to the second edge to obtain target pose information; and
a third determining module, configured to determine a detection result according to the target pose information, wherein the detection result is used for indicating whether the global pose information has a deviation.
13. The apparatus of claim 12, wherein the first determining module comprises:
a point cloud splicing sub-module, configured to perform point cloud splicing on a plurality of target frames according to initial relative pose information of the plurality of target frames to obtain the target relative pose information among the plurality of target frames; and
a first determining sub-module, configured to determine the processing graph according to the target relative pose information among the plurality of target frames.
14. The apparatus of claim 12, wherein the plurality of target frames are from at least two point cloud data sets, each of the at least two point cloud data sets resulting from a data acquisition of the target area.
15. The apparatus of claim 12, wherein the second determining module comprises:
a closed-loop detection sub-module, configured to perform closed-loop detection on the processing graph to obtain at least one initial processing subgraph; and
a second determining sub-module, configured to determine the initial processing subgraph including the first edge as the target processing subgraph.
16. The apparatus of claim 12, wherein the fusion module comprises:
a third determining sub-module, configured to determine a target transformation matrix as the target pose information according to the transformation matrix corresponding to the target relative pose information and the transformation matrix corresponding to the global pose information.
17. The apparatus of claim 16, wherein the processing graph is a directed graph,
the third determining sub-module comprises:
a first determining unit, configured to determine a preset direction of a directed edge in the target processing subgraph, where the directed edge includes the first edge and the second edge;
a second determining unit, configured to determine, from the plurality of directed edges, a target directed edge whose direction is opposite to the preset direction according to the directions of the plurality of directed edges and the preset direction; and
a third determining unit, configured to determine the target transformation matrix according to an inverse matrix of the transformation matrix associated with the target directed edge and transformation matrices associated with other directed edges except the target directed edge.
18. The apparatus of claim 16, wherein the third determining module comprises:
a fourth determining sub-module, configured to determine a translation value and a rotation angle value according to the target transformation matrix;
a fifth determining sub-module, configured to determine a detection result of the first edge according to the translation value, the rotation angle value, a target translation value, and a target rotation angle value; and
a sixth determining sub-module, configured to determine a detection result of the map data related to the target area according to the detection result of the first edge.
19. The apparatus of claim 18, wherein the fifth determining sub-module comprises:
a fourth determining unit, configured to determine at least one sub-detection result of the first edge according to at least one target processing subgraph related to the first edge, wherein the sub-detection result comprises a first type of sub-detection result and a second type of sub-detection result, the first type of sub-detection result is used to indicate that there is no deviation in the global pose information corresponding to the first edge, and the second type of sub-detection result is used to indicate that there is a deviation in the global pose information corresponding to the first edge; and
a fifth determining unit, configured to determine a detection result of the first edge according to at least one of the sub-detection results.
20. The apparatus of claim 19, wherein the fifth determining sub-module comprises:
a sixth determining unit, configured to determine, in response to determining that the translation value is less than or equal to the target translation value and the rotation angle value is less than or equal to the target rotation angle value, a sub-detection result of the first edge in the target processing subgraph as the first type of sub-detection result.
21. The apparatus of claim 19, wherein the fifth determining sub-module comprises:
a seventh determining unit, configured to determine, in response to determining that the translation value is greater than the target translation value or the rotation angle value is greater than the target rotation angle value, a sub-detection result of the first edge in the target processing subgraph as the second type of sub-detection result.
22. The apparatus of claim 18, wherein the fifth determining sub-module further comprises:
an eighth determining unit, configured to determine edge quantity values of the first edge and the second edge in the target processing subgraph; and
a ninth determining unit, configured to determine the target translation value and the target rotation angle value according to the edge quantity value, the preset translation value, and the preset rotation angle value.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 11.
24. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 11.
26. A roadside unit comprising the electronic device of claim 23.
27. An edge computing platform comprising a plurality of edge computing units, the edge computing units comprising the electronic device of claim 23.
CN202210777196.XA 2022-06-30 2022-06-30 High-precision map data detection method and equipment, road side unit and edge computing platform Pending CN115147483A (en)

Priority Applications (1)

Application Number: CN202210777196.XA; Priority Date: 2022-06-30; Filing Date: 2022-06-30; Title: High-precision map data detection method and equipment, road side unit and edge computing platform

Publications (1)

Publication Number: CN115147483A; Publication Date: 2022-10-04

Family

ID=83410940

Family Applications (1)

Application Number: CN202210777196.XA; Status: Pending; Publication: CN115147483A (en); Priority Date: 2022-06-30; Filing Date: 2022-06-30

Country Status (1)

Country: CN; Publication: CN115147483A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination