CN117670785A - Ghost detection method of point cloud map - Google Patents

Ghost detection method of point cloud map

Info

Publication number
CN117670785A
CN117670785A
Authority
CN
China
Prior art keywords
point cloud
frame
query key
key frame
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211063801.3A
Other languages
Chinese (zh)
Inventor
辛喆
余丽
陆亚
邱靖烨
王昱杰
任海兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202211063801.3A priority Critical patent/CN117670785A/en
Publication of CN117670785A publication Critical patent/CN117670785A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/32: Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a ghost detection method for a point cloud map, belonging to the technical field of electronic information. The method comprises the following steps: dividing a point cloud map to be detected into a plurality of grids; determining a query key frame from the local point cloud tracks included in a target grid; determining at least one matching frame of the query key frame among the multi-frame point cloud tracks; determining registration results between the query key frame and each of its matching frames; and, in response to the presence of a registration result that satisfies the requirement, determining the target grid to be a grid in which ghosts exist. By dividing the point cloud map into grids for detection, the method achieves full coverage of ghost detection over the point cloud map with a finer detection granularity. Selecting query key frames and matching frames for registration improves detection accuracy, so that an accurate high-precision map can be constructed from the point cloud map after ghost detection, which in turn safeguards the normal operation of autonomous vehicles based on that map. Because no specific scene elements need to be extracted, the constraints that the scene imposes on ghost detection are reduced.

Description

Ghost detection method of point cloud map
Technical Field
The embodiment of the application relates to the technical field of electronic information, in particular to a ghost detection method of a point cloud map.
Background
At present, electronic high-precision maps have become an indispensable part of many travel scenarios. In the field of autonomous driving in particular, the electronic high-precision map is an important prerequisite for ensuring the normal operation of an autonomous vehicle, and constructing a point cloud map is an indispensable link in the production flow of the electronic high-precision map. A point cloud is a set of scattered points with accurate angle and distance information, obtained by collecting information about surrounding objects with a radar; a map constructed from point clouds is called a point cloud map. The quality of the point cloud map determines the accuracy of the electronic high-precision map, and whether ghosts exist in the point cloud map determines the quality of the point cloud map. Therefore, ghost detection needs to be performed on the point cloud map.
Disclosure of Invention
The embodiments of the application provide a ghost detection method, apparatus and device for a point cloud map and a storage medium, which can improve detection accuracy and reduce the constraints that a scene imposes on ghost detection. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a ghost detection method for a point cloud map, where the method includes:
dividing a point cloud map to be detected into a plurality of grids, wherein the point cloud map comprises a plurality of frame point cloud tracks, and each grid comprises a local point cloud track in the plurality of frame point cloud tracks;
Determining a query key frame from a local point cloud track included in a target grid;
at least one matching frame of the query key frame is determined among the multi-frame point cloud tracks, wherein a matching frame is a local point cloud track within a reference range in which the query key frame is located;
determining registration results of the query key frame and each matching frame of the query key frame;
in response to the presence of a registration result satisfying the requirements, the target grid is determined to be a grid in which ghosts are present.
In one possible implementation, determining a query keyframe from a local point cloud trajectory included in a target grid includes:
determining the road layer number in the target grid;
and determining a reference number of point cloud tracks from the local point cloud tracks included in the target grid as query key frames, wherein the reference number is determined based on the road layer number.
In one possible implementation, determining at least one matching frame of the query key frame in the multi-frame point cloud trajectory includes:
screening a local point cloud track in a reference range from multi-frame point cloud tracks;
and determining a local point cloud track meeting the time interval in the plurality of local point cloud tracks as a matching frame of the query key frame in response to the plurality of local point cloud tracks included in the reference range.
In one possible implementation, determining the registration results of the query key frame and respective matching frames of the query key frame includes:
determining a sub-graph of the query key frame, wherein the sub-graph of the query key frame is obtained based on a point cloud track included in the query key frame;
determining a subgraph of any matching frame for the query key frame and any matching frame, wherein the subgraph of any matching frame is obtained based on a point cloud track included in any matching frame;
determining a plurality of pose transformation matrices based on the subgraph of the query key frame and the subgraph of any matching frame, any pose transformation matrix including at least one of a rotation transformation parameter and a translation transformation parameter that transforms the query key frame to any matching frame;
and carrying out mean value calculation on the pose transformation matrix meeting the requirements in the plurality of pose transformation matrices, and taking the calculated result as a registration result of the query key frame and any matching frame.
In one possible implementation, determining a sub-graph of a query key frame includes:
and merging the point cloud tracks included in the query key frame and the relevant point cloud tracks of the query key frame to obtain a sub-graph of the query key frame, wherein the relevant point cloud tracks of the query key frame comprise point cloud tracks adjacent to the time stamp of the query key frame in the target grid.
In one possible implementation, determining a sub-graph of any matching frame includes:
and combining the point cloud track included in any matching frame with the relevant point cloud track of any matching frame to obtain a sub-image of any matching frame, wherein the relevant point cloud track of any matching frame comprises a point cloud track adjacent to the time stamp of any matching frame in the grid where any matching frame is positioned.
In one possible implementation, determining a plurality of pose transformation matrices based on the sub-graph of the query key frame and the sub-graph of any matching frame includes:
determining a plurality of groups of local point cloud data corresponding to the subgraphs of the query key frame and global point cloud data corresponding to the subgraphs of any matching frame;
registering a plurality of groups of local point cloud data with global point cloud data respectively to obtain a plurality of pose transformation matrixes, wherein one group of local point cloud data corresponds to one pose transformation matrix, and the pose transformation matrix corresponding to any group of local point cloud data comprises at least one of rotation transformation parameters and translation transformation parameters for transforming a query key frame into any matching frame through any group of local point cloud data.
In one possible implementation, determining multiple sets of local point cloud data corresponding to sub-graphs of a query key frame includes:
And screening out point cloud data meeting the proportion requirements according to different areas in global point cloud data included in the subgraph of the query key frame, and taking the point cloud data meeting the proportion requirements screened out by each area as a group of local point cloud data corresponding to the subgraph of the query key frame.
In one possible implementation, after determining the registration result of the query key frame and each matching frame of the query key frame, the method further includes:
for any matching frame, calculating pose errors of the query key frame according to registration results of the query key frame and any matching frame and a registration transformation matrix, wherein the registration transformation matrix comprises at least one of rotation transformation parameters and translation transformation parameters for transforming the query key frame into any matching frame through global point cloud data of the query key frame;
and determining that the registration result of any matching frame and the query key frame meets the requirement in response to the pose error being greater than the pose error threshold.
In one possible implementation, after determining the target grid as the grid in which the ghost exists, the method further includes:
merging the grids in which ghosts exist, and determining the ghost area according to the merged grids;
and repairing the point cloud tracks included in the ghost area.
In another aspect, there is provided a ghost detection apparatus of a point cloud map, the apparatus including:
the dividing module is used for dividing the point cloud map to be detected into a plurality of grids, wherein the point cloud map comprises a plurality of frames of point cloud tracks, and each grid comprises a local point cloud track in the plurality of frames of point cloud tracks;
the determining module is used for determining a query key frame from the local point cloud track included in the target grid;
the determining module is further used for determining at least one matching frame of the query key frame in the multi-frame point cloud track, wherein the matching frame is a local point cloud track in a reference range where the query key frame is located;
the determining module is also used for determining registration results of the query key frame and each matching frame of the query key frame;
the determining module is further used for determining the target grid as the grid with ghost in response to the registration result meeting the requirement.
In one possible implementation, the determining module is configured to determine a number of road layers in the target grid; and determining a reference number of point cloud tracks from the local point cloud tracks included in the target grid as query key frames, wherein the reference number is determined based on the road layer number.
In one possible implementation, the apparatus further includes: the screening module is used for screening local point cloud tracks in a reference range from multi-frame point cloud tracks;
And the determining module is used for determining the local point cloud track meeting the time interval in the plurality of local point cloud tracks as a matching frame of the query key frame in response to the fact that the plurality of local point cloud tracks are included in the reference range.
In one possible implementation manner, the determining module is configured to determine a sub-graph of the query key frame, where the sub-graph of the query key frame is obtained based on a point cloud track included in the query key frame; determining a subgraph of any matching frame for the query key frame and any matching frame, wherein the subgraph of any matching frame is obtained based on a point cloud track included in any matching frame; determining a plurality of pose transformation matrices based on the subgraph of the query key frame and the subgraph of any matching frame, any pose transformation matrix including at least one of a rotation transformation parameter and a translation transformation parameter that transforms the query key frame to any matching frame;
the apparatus further comprises: and the calculation module is used for carrying out mean value calculation on the pose transformation matrix meeting the requirements in the plurality of pose transformation matrices, and taking the calculated result as a registration result of the query key frame and any matching frame.
In one possible implementation, the apparatus further includes: and the merging module is used for merging the point cloud tracks included in the query key frame and the relevant point cloud tracks of the query key frame to obtain a sub-graph of the query key frame, wherein the relevant point cloud tracks of the query key frame comprise point cloud tracks adjacent to the time stamp of the query key frame in the target grid.
In one possible implementation manner, the merging module is configured to merge a point cloud track included in any matching frame with a relevant point cloud track of any matching frame to obtain a sub-graph of any matching frame, where the relevant point cloud track of any matching frame includes a point cloud track adjacent to a timestamp of any matching frame in a grid where any matching frame is located.
In one possible implementation manner, the determining module is configured to determine a plurality of groups of local point cloud data corresponding to the sub-graph of the query key frame and global point cloud data corresponding to the sub-graph of any matching frame;
the apparatus further comprises: the registration module is used for registering a plurality of groups of local point cloud data with global point cloud data respectively to obtain a plurality of pose transformation matrixes, one group of local point cloud data corresponds to one pose transformation matrix, and the pose transformation matrix corresponding to any group of local point cloud data comprises at least one of rotation transformation parameters and translation transformation parameters for transforming the query key frame into any matching frame through any group of local point cloud data.
In one possible implementation manner, the screening module is configured to screen out point cloud data meeting the proportion requirement according to different regions in global point cloud data included in the sub-graph of the query key frame, and take the point cloud data meeting the proportion requirement screened out in each region as a set of local point cloud data corresponding to the sub-graph of the query key frame.
In one possible implementation manner, the calculating module is further configured to calculate, for any matching frame, a pose error of the query key frame according to a registration result of the query key frame and any matching frame and a registration transformation matrix, where the registration transformation matrix includes at least one of a rotation transformation parameter and a translation transformation parameter for transforming the query key frame to any matching frame through global point cloud data of the query key frame;
and the determining module is also used for determining that the registration result of any matching frame and the query key frame meets the requirement in response to the pose error being larger than the pose error threshold.
In one possible implementation manner, the determining module is further configured to combine the grids with the ghost, and determine the region of the ghost according to the combined grids;
the apparatus further comprises: and the restoration module is used for restoring the point cloud track included in the ghost area.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where at least one computer program is stored in the memory, where the at least one computer program is loaded and executed by the processor, so that the computer device implements the ghost detection method of the point cloud map of any one of the above.
In another aspect, a computer readable storage medium is provided, in which at least one computer program is stored, where the at least one computer program is loaded and executed by a processor, so that the computer implements the ghost detection method of the point cloud map.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the ghost detection method of the point cloud map.
The technical scheme provided by the embodiment of the application at least brings the following beneficial effects:
By dividing the point cloud map into grids, a large-range point cloud map is converted into point cloud tracks within many small-range grids, achieving full coverage of ghost detection over the point cloud map with a finer detection granularity. Selecting query key frames and matching frames and registering them improves detection accuracy, so that an accurate high-precision map can be constructed from the point cloud map after ghost detection, which in turn safeguards the normal operation of autonomous vehicles based on that map. Because no specific elements need to be extracted, the constraints that the scene imposes on ghost detection are reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by embodiments of the present application;
fig. 2 is a flowchart of a ghost detection method of a point cloud map according to an embodiment of the present application;
fig. 3 is a schematic diagram of a point cloud map meshing provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of generating center local point cloud data according to an embodiment of the present application;
FIG. 5 is a schematic diagram of generating outline local point cloud data according to an embodiment of the present application;
FIG. 6 is a schematic diagram of generating upper and lower local point cloud data according to an embodiment of the present application;
FIG. 7 is a schematic diagram of generating left and right local point cloud data according to an embodiment of the present application;
FIG. 8 is a schematic diagram of generating left diagonal local point cloud data according to an embodiment of the present application;
FIG. 9 is a schematic diagram of generating right diagonal local point cloud data according to an embodiment of the present application;
FIG. 10 is a schematic view of a point cloud map after determining a grid with ghosts present in accordance with an embodiment of the present application;
fig. 11 is a schematic diagram of a ghost detection device of a point cloud map according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a ghost detection device for a point cloud map according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
An embodiment of the application provides a ghost detection method of a point cloud map, please refer to fig. 1, which shows a schematic diagram of an implementation environment of the method provided in the embodiment of the application. The implementation environment may include: a terminal 11 and a server 12.
The terminal 11 is installed with an application program or a web page capable of ghost detection; when ghost detection is required, the method provided by the embodiments of the application can be applied through that application program or web page. The server 12 may store the point cloud map to be detected, and the terminal 11 may obtain the point cloud map to be detected from the server 12. Of course, the terminal 11 may also itself store the acquired point cloud map to be detected.
Alternatively, the terminal 11 may be a smart device such as a cellular phone, tablet computer, personal computer, or the like. The server 12 may be a server, a server cluster comprising a plurality of servers, or a cloud computing service center. The terminal 11 establishes a communication connection with the server 12 through a wired or wireless network.
Alternatively, the terminal 11 may be any electronic product that can perform human-computer interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction or a handwriting device, such as a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC), a tablet computer, a smart in-vehicle terminal, a smart television, a smart speaker, etc.
Those skilled in the art will appreciate that the above-described terminal 11 and server 12 are by way of example only, and that other terminals or servers, either now present or later, may be suitable for use in the present application, and are intended to be within the scope of the present application and are incorporated herein by reference.
The embodiments of the application provide a ghost detection method for a point cloud map; a flowchart of the method is shown in FIG. 2. The method can be implemented based on the implementation environment shown in FIG. 1, and can be executed by a terminal or a server, or implemented through interaction between the terminal and the server. The embodiments of the application are illustrated with the server executing the method; referring to FIG. 2, the method includes steps 201 to 205.
In step 201, a point cloud map to be detected is divided into a plurality of grids.
In the embodiments of the application, a point cloud map is acquired before ghost detection is performed on it. The point cloud map includes multi-frame point cloud tracks, and each grid includes local point cloud tracks among the multi-frame point cloud tracks. The embodiments of the application do not limit how the point cloud map is acquired; for example, a radar scans the area to be mapped multiple times, each completed scan outputs one frame of point cloud track, and the multiple frames of point cloud tracks form the point cloud map of the area.
When dividing the point cloud map into grids, the number and size of the grids may be determined based on, but not limited to, the range spanned by the point cloud map and the accuracy required of it. The point cloud map is then divided into a plurality of grids according to the determined number and size. Each grid includes all local point cloud tracks within its area; the number of local point cloud tracks in each grid is at least one, and may be the same or different across grids.
Dividing the point cloud map to be detected into a plurality of grids reduces a large-range point cloud map to many small-range grids, and ghost detection is performed separately on the small-range point cloud tracks within each grid. The finer detection granularity improves the accuracy of ghost detection and achieves its full coverage.
Taking the point cloud map grid division schematic diagram shown in fig. 3 as an example, all the point cloud tracks in the map form a point cloud map of the area. In the range of the point cloud map, the point cloud map is divided into a plurality of grids according to the length H meters and the width W meters.
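The grid division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frame representation (id plus an x/y position in map coordinates), the function names, and the row/column indexing are assumptions made for the example.

```python
import math
from collections import defaultdict

def divide_into_grids(frames, grid_h, grid_w):
    """Assign each point cloud frame to a grid cell of grid_h x grid_w meters.

    `frames` is a list of (frame_id, x, y) tuples giving each frame's
    position in the map coordinate system. Returns a dict mapping a grid
    index (row, col) to the list of frame ids whose local point cloud
    track falls inside that grid.
    """
    grids = defaultdict(list)
    for frame_id, x, y in frames:
        # Integer cell coordinates, as in the H x W division of FIG. 3.
        cell = (math.floor(y / grid_h), math.floor(x / grid_w))
        grids[cell].append(frame_id)
    return dict(grids)
```

With 50 m cells, a frame at (51, 2) lands in the grid to the right of a frame at (1, 2), so each grid ends up with only the local tracks inside its own area.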
Optionally, after the radar finishes scanning once and outputs a frame of point cloud track, the point cloud track of each frame can be optimized by a mapping optimization algorithm, so that the accuracy of the point cloud pose of the point cloud track of each frame is improved. The map-building optimization algorithm is an algorithm for removing deviation of the position and the posture of the point cloud and obtaining the accurate position and the posture of the point cloud. The pose of the point cloud is used for expressing the transformation relation between the world coordinate system and the camera coordinate system, and comprises two transformation parameters of rotation and translation, which are usually expressed by a matrix. The world coordinate system is a coordinate system defined before scanning and can be used as a reference for scanning and constructing tracks. The camera coordinate system is a coordinate system constructed by taking the radar as an origin, and the camera coordinate system also changes along with the movement of the radar in the scanning process, so that the coordinates of the same frame of point cloud in the world coordinate system and the coordinates in the camera coordinate system are different and have corresponding transformation relations.
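The paragraph above describes a point cloud pose as a rotation-plus-translation transform between the camera and world coordinate systems, usually expressed as a matrix. A minimal 2D sketch of that idea follows; the 2D simplification and the function names are my own (real point cloud poses are 3D, with a 3x3 rotation block in a 4x4 homogeneous matrix).

```python
import math

def make_pose(yaw, tx, ty):
    """Build a 2D homogeneous pose matrix from a rotation (yaw, in radians)
    and a translation (tx, ty): the transform mapping camera-frame
    coordinates into the world frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply_pose(pose, point):
    """Transform a camera-frame point (x, y) into world coordinates."""
    x, y = point
    return (pose[0][0] * x + pose[0][1] * y + pose[0][2],
            pose[1][0] * x + pose[1][1] * y + pose[1][2])
```

Because the camera frame moves with the radar, the same world point has different camera-frame coordinates in different scans; map-building optimization corrects the per-frame pose so these transforms are consistent.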
Step 202, determining a query key frame from a local point cloud track included in the target grid.
The query key frame is any frame of point cloud track in the local point cloud track included in the target grid, and the target grid is any grid in a plurality of grids. In the embodiment of the present application, the number of query key frames in each grid is not necessarily the same, and the number of query key frames in the grid may be determined according to the local point cloud track included in the grid and the number required for detection. Optionally, the method of determining query key frames includes, but is not limited to, steps 2021-2022.
In step 2021, the number of road layers in the target mesh is determined.
The method for determining the number of road layers in the target grid is not limited; it includes, but is not limited to, determining the number of road layers L contained in the radar scanning area corresponding to the point cloud tracks included in the target grid. For example, if a 3-layer overpass and an ordinary bidirectional multi-lane road exist in that radar scanning area, the number of road layers L in the target grid is determined to be 3.
In step 2022, a reference number of point cloud tracks is determined from the local point cloud tracks included in the target grid as the query key frames, the reference number being determined based on the number of road layers.
In one possible implementation, determining a reference number of point cloud trajectories from the local point cloud trajectories included in the target grid as the query key frame includes: and selecting L.K frame point cloud tracks from all point cloud tracks included in the target grid as L.K query key frames of the target grid based on the road layer number L in the target grid. Where K may be an adjustment factor, for example, selected based on the complexity of the point cloud trajectory within the grid.
Optionally, if the target grid includes a point cloud track, and no road exists in the radar scanning area corresponding to the point cloud track, the K frame point cloud track may be directly selected as K query key frames of the target grid.
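Steps 2021 and 2022 can be sketched as below. This is an illustrative sketch under assumptions not stated in the patent: the frames are taken to be sorted by timestamp, and the even-spacing sampling strategy and function name are hypothetical; the patent only specifies that L*K frames are selected (or K frames when no road exists).

```python
def select_query_key_frames(grid_frames, num_road_layers, k):
    """Pick L*K query key frames from a grid's local point cloud tracks.

    `grid_frames` is the list of frames in the target grid, `num_road_layers`
    is L, and `k` is the adjustment factor K. When no road exists
    (num_road_layers == 0), K frames are selected directly. Frames are
    sampled evenly along the trajectory so the key frames spread across
    the grid rather than cluster at one end.
    """
    n = max(1, num_road_layers) * k
    n = min(n, len(grid_frames))   # cannot pick more frames than exist
    step = len(grid_frames) / n
    return [grid_frames[int(i * step)] for i in range(n)]
```

For a grid with 10 frames over a 3-layer road and K = 2, this yields 6 query key frames.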
Step 203, at least one matching frame of the query key frame is determined in the multi-frame point cloud track, wherein the matching frame is a local point cloud track within the reference range of the query key frame.
The method for determining the number of the matching frames of the query key frame is not limited, and may be determined according to the number of point clouds contained in the query key frame, or may be determined by the size of the target grid. Optionally, the method of determining at least one matching frame includes, but is not limited to, steps 2031-2032.
Step 2031, selecting a local point cloud track in the reference range from the multi-frame point cloud tracks.
The reference range is not limited, and may be larger or smaller than the area of the target grid. A first value is set according to a criterion that ensures the multi-frame tracks can be associated; taking the target query key frame as the center and the first value as the radius determines the reference range used to screen local point cloud tracks.
When the local point cloud track in the reference range is screened out from the multi-frame point cloud tracks, the local point cloud track near the query key frame can be screened out in the geometric space of the radar scanning area according to the determined reference range, and the screened out local point cloud track can be the local point cloud track except the query key frame in the target grid or the local point cloud track in the adjacent grid of the target grid.
For example, the first value is set to 50 meters; with the first value as the radius and the query key frame as the center, the reference range for screening local point cloud tracks near the query key frame is determined. A 50-meter radius can cover a common bidirectional multi-lane scene and establish associations between multi-frame tracks, so that the coverage of ghost detection is more complete.
In step 2032, in response to the reference range including the plurality of local point cloud trajectories, determining a local point cloud trajectory satisfying the time interval from the plurality of local point cloud trajectories as a matching frame of the query key frame.
Optionally, before determining the local point cloud trajectory satisfying the time interval, a time interval threshold T for selecting the matching frame is set. If the time interval of any two local point cloud tracks in all the screened local point cloud tracks is smaller than T, the point clouds contained in the two local point cloud tracks are considered to be similar, and therefore when the matching frame of the query key frame is selected, any one of the two local point cloud tracks is selected as the matching frame of the query key frame. If the time interval of any two local point cloud tracks in all the screened local point cloud tracks is larger than T, the point clouds contained in all the local point cloud tracks are considered to be dissimilar, and all the screened local point cloud tracks can be used as matching frames of the query key frames.
In either case, at least one local point cloud track satisfying the time-interval condition is selected from the screened local point cloud tracks near the query key frame as a matching frame of the query key frame.
Illustratively, three local point cloud tracks are screened out within the reference range of the query key frame q: local point cloud track o, local point cloud track s and local point cloud track p. The time interval between track o and track s is smaller than T, while the time interval between track o and track p is larger than T; track s is therefore removed, and tracks o and p are selected as the two matching frames of the query key frame q.
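Steps 2031-2032 can be sketched as follows; the flat dict layout for frames (`x`, `y` position in meters and timestamp `t` in seconds) is an assumption for illustration:

```python
import math

def select_matching_frames(query, candidates, radius=50.0, t_threshold=60.0):
    """Sketch of matching-frame selection for a query key frame.
    1) keep candidates within `radius` meters of the query frame;
    2) among the kept frames, drop any frame whose timestamp is within
       `t_threshold` of an already selected frame (the two are considered
       to contain similar point clouds, so one representative suffices)."""
    nearby = [c for c in candidates
              if math.hypot(c['x'] - query['x'], c['y'] - query['y']) <= radius]
    nearby.sort(key=lambda c: c['t'])
    selected = []
    for c in nearby:
        if all(abs(c['t'] - s['t']) >= t_threshold for s in selected):
            selected.append(c)
    return selected
```

With tracks o, s, p as in the example above (s within the time threshold of o), this keeps o and p and removes s.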
Step 204, determining registration results of the query key frame and each matching frame of the query key frame.
After determining at least one matching frame of the query key frame, a registration result of the query key frame with each matching frame needs to be determined to reflect the degree of matching between the query key frame and each matching frame. The registration result of the query key frame and any matching frame refers to a matrix capable of reflecting the transformation relationship between the query key frame and any matching frame, including at least one of parameters of translational transformation and parameters of rotational transformation. Optionally, the method of determining the registration results includes, but is not limited to, steps 2041-2044.
Step 2041, determining a sub-graph of the query key frame, wherein the sub-graph of the query key frame is obtained based on the point cloud track included in the query key frame.
In one possible implementation, a point cloud track included in the query key frame and a relevant point cloud track of the query key frame are combined to obtain a sub-graph of the query key frame, wherein the relevant point cloud track of the query key frame includes a point cloud track adjacent to a time stamp of the query key frame in the target grid.
A timestamp may refer to the total number of seconds elapsed since 00:00:00 on 1 January 1970 Greenwich Mean Time (08:00:00 on 1 January 1970 Beijing time); it is data generated using digital signature technology, and the signed object includes information such as the original file information, the signature parameters and the signature time.
And merging the query key frame and the local point cloud track adjacent to the time stamp of the query key frame in the target grid to obtain a sub-graph of the query key frame. Compared with a single-frame query key frame, the generated sub-image point cloud of the query key frame is dense, has a complete space structure and a wider range, has more common-view areas and is beneficial to constructing line-plane constraint.
Illustratively, a query key frame q is combined with the local point cloud tracks in the target grid adjacent to its timestamp to generate a sub-graph S_q of the query key frame.
Step 2042, for any matching frame of the query key frame, determining a sub-graph of any matching frame, the sub-graph of any matching frame being obtained based on the point cloud track included in any matching frame.
In a possible implementation manner, consistent with the method for obtaining the sub-graph of the query key frame, the point cloud track included in any matching frame is combined with the relevant point cloud track of any matching frame to obtain the sub-graph of any matching frame, and the relevant point cloud track of any matching frame includes the point cloud track adjacent to the time stamp of any matching frame in the grid where any matching frame is located.
Illustratively, a matching frame p is combined with the local point cloud tracks adjacent to its timestamp in the grid where the matching frame is located, to generate a sub-graph S_p of the matching frame.
Step 2043, determining a plurality of pose transformation matrices based on the sub-graph of the query key frame and the sub-graph of any of the matching frames, any of the pose transformation matrices comprising at least one of rotational transformation parameters and translational transformation parameters for transforming the query key frame to any of the matching frames.
In one possible implementation manner, the pose transformation matrix is determined based on multiple groups of local point cloud data corresponding to the sub-graph of the query key frame and global point cloud data corresponding to the sub-graph of any matching frame, and the method for determining multiple groups of local point cloud data corresponding to the sub-graph of the query key frame and global point cloud data corresponding to the sub-graph of any matching frame is not limited.
The multiple sets of local point cloud data corresponding to the sub-graph of the query key frame refer to point cloud data in any one or any multiple areas meeting the proportion requirement in global point cloud data included in the sub-graph of the query key frame, and the point cloud data include, but are not limited to, the number of point clouds and the pose of each point in the point clouds. For example, the scale requirement may be a scale threshold set in advance, determined according to the required map accuracy. Correspondingly, global point cloud data corresponding to the subgraph of any matching frame includes, but is not limited to, the number of global point clouds of the matching frame and the pose of each point in the point clouds.
In one possible implementation manner, when determining multiple groups of local point cloud data corresponding to the sub-graph of the query key frame, the point cloud data meeting the proportion requirement can be screened out according to different areas from the global point cloud data included in the sub-graph of the query key frame, and the point cloud data meeting the proportion requirement screened out by each area is used as one group of local point cloud data corresponding to the sub-graph of the query key frame.
Generating at least two groups of local point cloud data for the sub-graph of the query key frame after generating the sub-graph of the query key frame and the sub-graph of the matching frame respectively for the query key frame and any matching frame, wherein each group of local point cloud data corresponds to different areas of the sub-graph of the query key frame. Each set of generated local point cloud data corresponds to at least one region of the subgraph of the query key frame, and different regions correspond to different generation modes including, but not limited to, center, outline, up-down, left-right, left-diagonal, right-diagonal generation modes. When the generation mode of the local point cloud data is selected, the point cloud distribution in the generated local point cloud data is ensured to be relatively uniform, and the situation that pose solving deviation is overlarge due to the fact that a certain dimension point cloud is missing is prevented. The local point cloud data is used for registration, so that incorrect registration results caused by the fact that the weight of other areas is reduced due to the fact that a certain area in the point cloud track is rich in structure and dominant in registration can be avoided.
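The region-based generation modes can be illustrated with simple geometric masks over a sub-graph's point coordinates; the exact region geometry below (middle half of each axis for `center`, bounding-box halves for the others) is an assumption for illustration, not the patent's definition:

```python
import numpy as np

def split_regions(points):
    """Illustrative region masks over a sub-graph's points (N x 2 array of
    x, y coordinates). Each mask selects one group of local point cloud
    data; the diagonal mode keeps points on one side of the main diagonal
    of the bounding box."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    mid = (lo + hi) / 2
    span = hi - lo
    x, y = points[:, 0], points[:, 1]
    return {
        'center': (np.abs(x - mid[0]) <= span[0] / 4)
                  & (np.abs(y - mid[1]) <= span[1] / 4),
        'up': y >= mid[1],
        'down': y < mid[1],
        'left': x < mid[0],
        'right': x >= mid[0],
        # points on/below the main diagonal of the bounding box
        'left_diag': (x - lo[0]) * span[1] >= (y - lo[1]) * span[0],
    }
```

Each boolean mask indexes one group of local point cloud data out of the sub-graph's global point cloud.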
Taking the schematic diagram of generating central local point cloud data shown in fig. 4 as an example, the point clouds in the middle of fig. 4 selected by the dashed-line box are a group of central local point clouds generated from the sub-graph of the query key frame, and the data they contain are that group's central local point cloud data. Figs. 5 to 9 show, respectively, the results of generating local point cloud data for the sub-graph of the query key frame using the outer-contour, up-down, left-right, left-diagonal and right-diagonal generation modes; the point cloud data selected by the dashed-line boxes in figs. 5 to 9 are the outer-contour, up-down, left-right, left-diagonal and right-diagonal local point cloud data generated from the sub-graph of the query key frame.
For example, after generating at least two sets of local point cloud data of the sub-graph of the query key frame according to different regions, that is, different generation manners, multiple sets of local point cloud data meeting the ratio requirement are screened out. Optionally, before screening out multiple sets of local point cloud data meeting the proportion requirement, setting an overlapping area proportion threshold value of any set of local point cloud data and the sub-graph of the matching frame as the proportion requirement according to the required map precision.
In one possible implementation, before screening multiple sets of local point cloud data meeting the scale requirement, the overlapping area scale of each set of local point cloud data of the sub-graph of the query key frame and the sub-graph of any matching frame of the query key frame needs to be calculated. The manner in which the proportion of the overlapping region is calculated includes, but is not limited to, steps 1 to 3 as follows.
Step 1, for any group of local point cloud data corresponding to the sub-graph of the query key frame, determine each point in that group, and calculate the projection point of each point in the sub-graph of the matching frame according to the registration transformation matrix corresponding to the query key frame and the matching frame. The registration transformation matrix refers to the transformation relation between the query key frame and each matching frame, calculated through a mapping optimization algorithm after the query key frame and each matching frame are determined, and comprises at least one of rotation transformation parameters and translation transformation parameters. The relationship between the query key frame and each matching frame is not necessarily the same; therefore, the query key frame and each matching frame have their own corresponding registration transformation matrix.
And step 2, determining nearest neighbor points of each projection point in the global point cloud data of the matched frame subgraph. And calculating the spatial distance between each projection point and the nearest neighbor point, comparing each spatial distance with a set spatial distance threshold, and obtaining all spatial distances smaller than the spatial distance threshold through screening. If any calculated spatial distance is larger than a set spatial distance threshold, the nearest neighbor point corresponding to the spatial distance is considered to have no corresponding relation with the projection point corresponding to the nearest neighbor point, and the spatial distance is abandoned.
And 3, determining the number of the screened spatial distances smaller than a spatial distance threshold, and determining the ratio of the number of the screened spatial distances to the number of points in the group of local point cloud data as the ratio of the overlapping areas of the group of local point cloud data and the matching frame subgraph.
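Steps 1 to 3 can be sketched as follows; brute-force nearest-neighbour search is used for clarity (a KD-tree would be typical in practice), and the 4x4 homogeneous transform layout is an assumption:

```python
import numpy as np

def overlap_ratio(local_pts, target_pts, T, dist_threshold=0.5):
    """Overlap ratio of one group of local point cloud data with a
    matching-frame sub-graph. local_pts, target_pts: (N, 3) arrays;
    T: 4x4 registration transformation matrix."""
    # step 1: project each local point into the matching-frame sub-graph
    homo = np.hstack([local_pts, np.ones((len(local_pts), 1))])
    projected = (T @ homo.T).T[:, :3]
    # step 2: nearest-neighbour distance in the target cloud (brute force)
    dists = np.linalg.norm(projected[:, None, :] - target_pts[None, :, :], axis=2)
    nn_dist = dists.min(axis=1)
    # step 3: fraction of projected points with a close correspondence
    return float((nn_dist < dist_threshold).sum()) / len(local_pts)
```

With 4 local points of which 3 find a close neighbour, this returns 3/4, matching the worked example below.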
Illustratively, after the matching relation between the query key frame q and the matching frame p is determined, a mapping optimization algorithm is used to determine the registration transformation matrix T_pq from the query key frame q to the matching frame p. Based on the sub-graph S_q of the query key frame, 4 pieces of local point cloud data are generated using the center, up-down, left-right and outer-contour modes. It is determined that the central local point cloud data S_q1 of the sub-graph S_q of the query key frame contains 4 points, A_1, A_2, A_3 and A_4, and the projection points A_1', A_2', A_3' and A_4' of each point in the sub-graph S_p of the matching frame are calculated according to A' = T_pq · A.
Illustratively, after the calculation of all projection points in the above matching-frame sub-graph S_p is completed, it is determined that in the sub-graph S_p the nearest neighbor of A_1' is B_1, the nearest neighbor of A_2' is B_2, the nearest neighbor of A_3' is B_3 and the nearest neighbor of A_4' is B_4, and the spatial distances between each projection point and its nearest neighbor are calculated as d_1, d_2, d_3 and d_4. Each calculated spatial distance is compared with the set spatial distance threshold, and d_4, which is larger than the threshold, is discarded. The number of screened spatial distances is 3 and the number of points in this group of local point cloud data is 4, so the overlapping-area ratio of the central local point cloud data S_q1 and the matching-frame sub-graph S_p is 3/4.
Optionally, after determining the proportion of all local point cloud data of the sub-graph of the query key frame to the overlapping area of the sub-graph of the matching frame, screening out the local point cloud data meeting the proportion requirement, and determining multiple groups of local point cloud data corresponding to the sub-graph of the query key frame.
In one possible implementation, if the calculated overlapping-area ratio of any group of local point cloud data and the matching-frame sub-graph is greater than the ratio requirement, the result of registration calculation between that group of local point cloud data and the matching-frame sub-graph is considered credible; if the calculated overlapping-area ratio is smaller than the ratio requirement, the registration result is not trusted because the overlapping area is too small. The groups of local point cloud data whose overlapping-area ratio is greater than the ratio requirement are determined as the multiple groups of local point cloud data corresponding to the sub-graph of the query key frame.
The above step of determining the overlapping-area ratio of the local point cloud data and the matching-frame sub-graph is typically repeated 4 times; that is, all 4 groups of local point cloud data generated for the query key frame sub-graph S_q need their overlapping-area ratio with the matching-frame sub-graph calculated. By calculation, the overlapping-area ratios of the central local point cloud data S_q1, the up-down local point cloud data S_q2 and the left-right local point cloud data S_q3 with the matching-frame sub-graph S_p all satisfy the ratio requirement, and these three groups are determined to be the three groups of local point cloud data corresponding to the sub-graph S_q of the query key frame. The overlapping-area ratio of the outer-contour local point cloud data S_q4 and the matching-frame sub-graph S_p is smaller than the overlap threshold, so the outer-contour local point cloud data S_q4 is discarded.
And registering a plurality of groups of local point cloud data corresponding to the determined sub-image of the query key frame and global point cloud data of the sub-image of any matching frame respectively to obtain a plurality of pose transformation matrixes, wherein one group of local point cloud data corresponds to one pose transformation matrix, and the pose transformation matrix corresponding to any group of local point cloud data comprises at least one of rotation transformation parameters and translation transformation parameters for transforming the query key frame into any matching frame through any group of local point cloud data.
For example, for the multiple groups of local point cloud data corresponding to the sub-graph of the query key frame and the global point cloud data of the sub-graph of any matching frame, a registration algorithm is used for point cloud registration. In one possible implementation, registration uses the multiple groups of local point cloud data corresponding to the sub-graph of the query key frame together with the global point cloud data of the matching-frame sub-graph, rather than local point cloud data of the matching-frame sub-graph; this avoids untrustworthy registration results caused by too small an overlapping area between the query key frame's local point cloud data and local point cloud data of the matching frame. The registration method is not limited in the embodiment of the present application; for example, it may be a conventional method such as point-to-plane ICP (Iterative Closest Point), point-to-point ICP or GICP (Generalized Iterative Closest Point), or a deep learning method such as D3Feat (joint dense detection and description of 3D local features), FCGF (Fully Convolutional Geometric Features) or Predator (a registration method for low-overlap three-dimensional point clouds).
And carrying out registration calculation on each group of local point cloud data corresponding to the sub-image of the query key frame and the global point cloud data of any matching frame sub-image by any registration algorithm to obtain a corresponding pose transformation matrix, wherein the pose transformation matrix corresponding to any group of local point cloud data comprises at least one of rotation transformation parameters and translation transformation parameters for transforming the query key frame into any matching frame by any group of local point cloud data.
And 2044, carrying out mean value calculation on the pose transformation matrix meeting the requirements in the plurality of pose transformation matrices, and taking the calculated result as a registration result of the query key frame and any matching frame.
For the pose transformation matrices obtained from the registration calculation between each group of local point cloud data corresponding to the sub-graph of the query key frame and the global point cloud data of the matching-frame sub-graph, a first average transformation matrix over all pose transformation matrices of the query key frame and the matching frame is calculated, and the difference between each pose transformation matrix and the first average transformation matrix is calculated. Each difference is compared with a transformation matrix threshold, the pose transformation matrices whose difference from the first average transformation matrix is smaller than the threshold are screened out, and the mean is calculated again over the screened pose transformation matrices to obtain the registration result of the query key frame and the matching frame.
Illustratively, one registration result is obtained from the query key frame and each of its matching frames, so the number of matching frames of the query key frame equals the number of registration results finally obtained for the query key frame.
For special situations, for example when only two groups of local point cloud data among those corresponding to the sub-graph of the query key frame have an overlapping-area ratio with the global point cloud data of the matching-frame sub-graph that meets the ratio requirement, and the differences between the two resulting pose transformation matrices and the first average transformation matrix both exceed the transformation matrix threshold, the pose transformation matrix corresponding to the local point cloud data with the largest overlapping-area ratio is selected as the registration result.
In one possible implementation, after determining the registration results of the query key frame and the respective matching frames of the query key frame, for any matching frame, a pose error of the query key frame is calculated from the registration results of the query key frame and any matching frame and a registration transformation matrix, the registration transformation matrix comprising transforming the query key frame to at least one of a rotational transformation parameter and a translational transformation parameter of any matching frame by global point cloud data of the query key frame.
In one possible implementation, the inverse matrix of the registration transformation matrix is multiplied by any registration result of the query key frame to obtain the pose error of the matching frame corresponding to the registration result of the query key frame.
Illustratively, based on the registration result T_f of the query key frame q and the matching frame p, the pose error T_diff between the query key frame q and the matching frame p is calculated according to T_diff = (T_pq)^(-1) · T_f, and includes a rotation error and a translation error.
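The pose-error formula T_diff = (T_pq)^(-1) · T_f can be computed directly on 4x4 homogeneous matrices:

```python
import numpy as np

def pose_error(T_pq, T_f):
    """Pose error between the registration transformation matrix T_pq and
    a registration result T_f (both 4x4 homogeneous matrices). Returns the
    rotation error (3x3) and translation error (3,) separately."""
    T_diff = np.linalg.inv(T_pq) @ T_f
    return T_diff[:3, :3], T_diff[:3, 3]
```

When the two transforms agree, the rotation error is the identity and the translation error is zero; any residual reflects a registration discrepancy.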
After pose errors of the query key frame and each matching frame of the query key frame are obtained, determining that the registration result of any matching frame and the query key frame meets the requirement in response to the pose errors being larger than a pose error threshold.
Each pose error of the query key frame is compared with a pose error threshold, and the query key frames whose maximum pose error exceeds the threshold are screened out. The pose error threshold is a value set in advance according to the required map accuracy and includes a rotation threshold and a translation threshold; for example, the rotation threshold is an angle threshold. For the rotation error in an obtained pose error, the rotation matrix is first converted into Euler angles; if the error of one of the Euler angles in the three directions is larger than the rotation threshold, the pose error is considered to exceed the pose error threshold. For the translation error, if the error in one of the three x, y, z directions is larger than the translation threshold, the pose error is considered to exceed the pose error threshold. In one possible implementation, if at least one of the rotation error and the translation error in the obtained pose error is greater than its corresponding rotation or translation threshold, the registration result from which the pose error was calculated is considered to satisfy the requirement.
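The threshold test described above can be sketched as follows; the ZYX Euler-angle convention and the default threshold values are assumptions for illustration:

```python
import numpy as np

def exceeds_pose_threshold(R_err, t_err, rot_deg=1.0, trans_m=0.1):
    """Flag a pose error if any Euler angle of the rotation error exceeds
    `rot_deg` degrees or any x/y/z translation component exceeds `trans_m`
    meters. R_err: 3x3 rotation error; t_err: length-3 translation error."""
    sy = np.hypot(R_err[0, 0], R_err[1, 0])
    euler = np.degrees([np.arctan2(R_err[2, 1], R_err[2, 2]),   # roll
                        np.arctan2(-R_err[2, 0], sy),           # pitch
                        np.arctan2(R_err[1, 0], R_err[0, 0])])  # yaw
    return bool(np.any(np.abs(euler) > rot_deg)
                or np.any(np.abs(t_err) > trans_m))
```

An identity rotation with zero translation passes; a 5-degree yaw error or a 0.5-meter translation error is flagged.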
In response to the presence of a registration result satisfying the requirements, the target grid is determined as a grid with ghost.
In one possible implementation manner, after determining the grid with the ghost, the grid with the ghost may be combined, and the area of the ghost may be determined according to the combined grid.
For example, after determining that the registration result of any matching frame and the query key frame meets the requirement, the target grid, that is, the grid where the query key frame is located, is determined as the grid where ghost exists. And after determining all the grids with the ghost images in the point cloud map, merging adjacent ghost image grids in the point cloud map to obtain the region with the ghost images in the point cloud map.
Taking the point cloud map schematic diagram after determining the grid with ghost as shown in fig. 10 as an example, the grid with cross symbol is the grid with ghost. And merging adjacent grids with ghost images in the point cloud map, and determining the grids as a plurality of areas with ghost images, wherein the areas correspond to areas outlined by thick solid lines in the map.
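Merging adjacent ghost grids into regions is a connected-components problem over grid indices; a minimal flood-fill sketch (4-adjacency assumed):

```python
def merge_ghost_grids(ghost_cells):
    """Merge 4-adjacent ghost grids into regions. `ghost_cells` is a set
    of (row, col) tuples; returns a list of sets, one per connected
    ghost region of the point cloud map."""
    remaining = set(ghost_cells)
    regions = []
    while remaining:
        stack = [remaining.pop()]
        region = set(stack)
        while stack:
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    region.add(nb)
                    stack.append(nb)
        regions.append(region)
    return regions
```

Two adjacent ghost grids plus one isolated ghost grid merge into two regions, as in the thick-outlined areas of fig. 10.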
Optionally, the method provided by the embodiment of the application further includes: repairing the point cloud tracks included in the ghost area. For example, after an area with ghosting in the point cloud map is determined, the corresponding radar scanning area is determined from the ghost area; the radar then scans that area again several times to repair the corresponding point cloud tracks, remove the ghosting, and ensure the quality of the point cloud map.
In summary, according to the method provided by the embodiment of the application, the point cloud map is divided into the grids, the large-range point cloud map is converted into the point cloud tracks in the grids with the small ranges, the granularity is finer, and the full coverage of ghost detection in the point cloud map is realized. Selecting a query key frame and a matching frame, synthesizing a query key frame sub-image and a matching frame sub-image, carrying out multi-round registration calculation on at least one local point cloud data of the query key frame sub-image and the matching frame sub-image, completing ghost detection of a point cloud map, and improving the detection accuracy through multi-time registration, so that an accurate high-precision map can be constructed based on the point cloud map after ghost detection, and further, the normal running of an automatic driving vehicle is ensured based on the accurate high-precision map. The ghost detection is carried out through the registration algorithm, and the restriction of the scene on the ghost detection is reduced because specific elements are not required to be extracted.
Referring to fig. 11, an embodiment of the present application provides a ghost detection apparatus for a point cloud map, including:
the dividing module 1101 is configured to divide a point cloud map to be detected into a plurality of grids, where the point cloud map includes a plurality of frame point cloud tracks, and each grid includes a local point cloud track in the plurality of frame point cloud tracks;
A determining module 1102, configured to determine a query key frame from a local point cloud track included in the target grid;
the determining module 1102 is further configured to determine at least one matching frame of the query key frame in the multi-frame point cloud track, where the matching frame is a local point cloud track within a reference range where the query key frame is located;
a determining module 1102, configured to determine registration results of the query key frame and each matching frame of the query key frame;
the determining module 1102 is further configured to determine, in response to the presence of a registration result satisfying the requirement, the target grid as a grid in which ghost exists.
In one possible implementation, the determining module 1102 is configured to determine the number of road layers in the target grid; and determining a reference number of point cloud tracks from the local point cloud tracks included in the target grid as query key frames, wherein the reference number is determined based on the road layer number.
In one possible implementation, the apparatus further includes: the screening module is used for screening local point cloud tracks in a reference range from multi-frame point cloud tracks;
the determining module 1102 is configured to determine, as a matching frame of the query key frame, a local point cloud track satisfying the time interval among the plurality of local point cloud tracks in response to the reference range including the plurality of local point cloud tracks.
In one possible implementation, the determining module 1102 is configured to determine a sub-graph of the query key frame, where the sub-graph of the query key frame is obtained based on a point cloud track included in the query key frame; determining a subgraph of any matching frame for the query key frame and any matching frame, wherein the subgraph of any matching frame is obtained based on a point cloud track included in any matching frame; determining a plurality of pose transformation matrices based on the subgraph of the query key frame and the subgraph of any matching frame, any pose transformation matrix including at least one of a rotation transformation parameter and a translation transformation parameter that transforms the query key frame to any matching frame;
the apparatus further comprises: and the calculation module is used for carrying out mean value calculation on the pose transformation matrix meeting the requirements in the plurality of pose transformation matrices, and taking the calculated result as a registration result of the query key frame and any matching frame.
In one possible implementation manner, the merging module is configured to merge a point cloud track included in the query key frame and a relevant point cloud track of the query key frame to obtain a sub-graph of the query key frame, where the relevant point cloud track of the query key frame includes a point cloud track adjacent to a timestamp of the query key frame in the target grid.
In one possible implementation manner, the merging module is configured to merge a point cloud track included in any matching frame with a relevant point cloud track of any matching frame to obtain a sub-graph of any matching frame, where the relevant point cloud track of any matching frame includes a point cloud track adjacent to a timestamp of any matching frame in a grid where any matching frame is located.
In a possible implementation manner, the determining module 1102 is configured to determine multiple sets of local point cloud data corresponding to sub-graphs of the query key frame and global point cloud data corresponding to sub-graphs of any matching frame;
the apparatus further comprises: the registration module is used for registering a plurality of groups of local point cloud data with global point cloud data respectively to obtain a plurality of pose transformation matrixes, one group of local point cloud data corresponds to one pose transformation matrix, and the pose transformation matrix corresponding to any group of local point cloud data comprises at least one of rotation transformation parameters and translation transformation parameters for transforming the query key frame into any matching frame through any group of local point cloud data.
In one possible implementation, the screening module is configured to screen out, region by region, point cloud data meeting a proportion requirement from the global point cloud data included in the subgraph of the query key frame, and to take the point cloud data screened out from each region as one group of local point cloud data corresponding to the subgraph of the query key frame.
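A possible reading of the proportion-based, per-region screening is sampling a fixed fraction of the points in each spatial region so that every region contributes to the local group. The XY-cell partition, the fraction, and the function name are illustrative assumptions.

```python
import numpy as np

def sample_by_region(points, cell=10.0, fraction=0.2, seed=0):
    """Partition a cloud into XY grid regions and keep `fraction` of the
    points in each region, yielding one spatially balanced group of
    local point cloud data."""
    rng = np.random.default_rng(seed)
    keys = np.floor(points[:, :2] / cell).astype(int)
    kept = []
    for key in np.unique(keys, axis=0):
        idx = np.where((keys == key).all(axis=1))[0]
        k = max(1, int(round(len(idx) * fraction)))
        kept.append(points[rng.choice(idx, size=k, replace=False)])
    return np.vstack(kept)
```

Calling it several times with different seeds would produce the multiple groups of local point cloud data used for multi-round registration.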
In one possible implementation, the calculation module is further configured to calculate, for any matching frame, a pose error of the query key frame according to the registration result of the query key frame and the matching frame and according to a registration transformation matrix, where the registration transformation matrix includes at least one of a rotation transformation parameter and a translation transformation parameter for transforming the query key frame to the matching frame through the global point cloud data of the query key frame;
the determining module 1102 is further configured to determine that the registration result of any matching frame and the query key frame meets a requirement in response to the pose error being greater than the pose error threshold.
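The pose-error test can be sketched as a rotation-angle and translation-norm comparison between the registration result and the registration transformation matrix; the particular error decomposition and the thresholds are assumptions, since the embodiment only speaks of a pose error threshold.

```python
import numpy as np

def pose_error(T_reg, T_ref):
    """Rotation (radians) and translation errors between a registration
    result and the registration transformation matrix used as reference."""
    dT = np.linalg.inv(T_ref) @ T_reg
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle), np.linalg.norm(dT[:3, 3])

def registration_suspect(T_reg, T_ref, rot_thresh=0.05, trans_thresh=0.2):
    """True when the pose error exceeds a threshold, i.e. the matching
    frame's registration result meets the ghost-detection requirement."""
    rot_err, trans_err = pose_error(T_reg, T_ref)
    return rot_err > rot_thresh or trans_err > trans_thresh
```

A large discrepancy between the two transforms indicates inconsistent geometry between passes, which is exactly the ghosting signal the method looks for.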
In one possible implementation, the determining module 1102 is further configured to merge the grids in which ghosting exists and determine the ghost region according to the merged grids;
the apparatus further comprises: a restoration module configured to repair the point cloud tracks included in the ghost region.
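Merging the grids in which ghosting exists can be read as grouping ghost grid cells into connected regions; a minimal flood-fill sketch under the assumption of 4-neighbour adjacency:

```python
from collections import deque

def merge_ghost_grids(ghost_cells):
    """Group ghost grid cells, given as (row, col) indices, into connected
    ghost regions using 4-neighbour adjacency (a simple flood fill)."""
    remaining = set(ghost_cells)
    regions = []
    while remaining:
        seed = remaining.pop()
        region, queue = {seed}, deque([seed])
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    region.add(nb)
                    queue.append(nb)
        regions.append(region)
    return regions
```

Each returned region delimits one contiguous ghost area whose point cloud tracks would then be handed to the restoration module.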
According to the apparatus provided by the embodiments of the present application, the point cloud map is divided into grids, so that a large-range point cloud map is converted into point cloud tracks within many small-range grids; the granularity is finer, and full coverage of ghost detection over the point cloud map is achieved. A query key frame and matching frames are selected, a query key frame subgraph and matching frame subgraphs are synthesized, and multiple rounds of registration are computed between at least one group of local point cloud data of the query key frame subgraph and the matching frame subgraph, completing ghost detection of the point cloud map. Repeated registration improves detection accuracy, so that an accurate high-precision map can be constructed based on the point cloud map after ghost detection, which in turn ensures the normal operation of autonomous vehicles based on that map. Since ghost detection is performed through a registration algorithm and no specific scene elements need to be extracted, the restriction of the scene on ghost detection is reduced.
It should be noted that when the apparatus provided in the foregoing embodiments performs its functions, the division into the foregoing functional modules is merely used as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
It should be noted that the terms "first," "second," and the like in the description and claims of this application (if any) are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so termed may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
Fig. 12 is a schematic diagram of a server according to an embodiment of the present application. The server may include one or more processors 1201 and one or more memories 1202, and servers may differ considerably from one another depending on configuration and performance. The processor 1201 is, for example, a CPU (Central Processing Unit). The one or more memories 1202 store at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 1201 so that the server implements the ghost detection method for a point cloud map provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for implementing the functions of the device, which are not described here.
Fig. 13 is a schematic structural diagram of a ghost detection device for a point cloud map according to an embodiment of the present application. The device may be a terminal, for example a smartphone, tablet computer, notebook computer, or desktop computer. A terminal may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
Generally, the terminal includes: a processor 1301, and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core or 8-core processor. Processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 1301 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, processor 1301 may integrate a GPU (Graphics Processing Unit) responsible for rendering the content that the display screen needs to display. In some embodiments, processor 1301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. Memory 1302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to cause the terminal to implement the methods of ghost detection provided by the method embodiments herein.
In some embodiments, the terminal may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, a display screen 1305, a camera assembly 1306, audio circuitry 1307, a positioning assembly 1308, and a power supply 1309.
The peripheral interface 1303 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1301 and the memory 1302. In some embodiments, the processor 1301, the memory 1302, and the peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1305 is a touch display, the display 1305 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1301 as a control signal for processing. In this case, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1305, disposed on the front panel of the terminal; in other embodiments, there may be at least two displays 1305, disposed on different surfaces of the terminal or in a folded configuration; in still other embodiments, the display 1305 may be a flexible display disposed on a curved or folded surface of the terminal. The display screen 1305 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display screen 1305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones can be respectively arranged at different parts of the terminal. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1307 may also comprise a headphone jack.
The positioning component 1308 is used to locate the current geographic location of the terminal to enable navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
A power supply 1309 is used to power the various components in the terminal. The power supply 1309 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1309 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal further includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyroscope sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with a terminal. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. Processor 1301 may control display screen 1305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by acceleration sensor 1311. The acceleration sensor 1311 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1312 may detect the body direction and rotation angle of the terminal, and, in cooperation with the acceleration sensor 1311, may collect the user's 3D actions on the terminal. Based on the data collected by the gyro sensor 1312, processor 1301 can implement functions such as motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1313 may be disposed on a side frame of the terminal and/or in a lower layer of the display screen 1305. When the pressure sensor 1313 is disposed on a side frame of the terminal, it can detect the user's grip signal on the terminal, and the processor 1301 performs left-right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed in the lower layer of the display screen 1305, the processor 1301 controls the operable controls on the UI according to the user's pressure operation on the display screen 1305. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1314 is used to collect a user's fingerprint, and the processor 1301 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 itself identifies the user's identity based on the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal. When a physical key or a manufacturer logo is provided on the terminal, the fingerprint sensor 1314 may be integrated with the physical key or the manufacturer logo.
The optical sensor 1315 is used to collect ambient light intensity. In one embodiment, processor 1301 may control the display brightness of display screen 1305 based on the intensity of ambient light collected by optical sensor 1315. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1305 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1305 is turned down. In another embodiment, processor 1301 may also dynamically adjust the shooting parameters of camera assembly 1306 based on the intensity of ambient light collected by optical sensor 1315.
A proximity sensor 1316, also known as a distance sensor, is typically provided on the front panel of the terminal. The proximity sensor 1316 is used to collect the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 1316 detects a gradual decrease in the distance between the user and the front face of the terminal, the processor 1301 controls the display screen 1305 to switch from the bright screen state to the off screen state; when the proximity sensor 1316 detects that the distance between the user and the front surface of the terminal gradually increases, the processor 1301 controls the display screen 1305 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer device is also provided. The computer device comprises a processor and a memory, and the memory has at least one computer program stored therein. The at least one computer program is loaded and executed by one or more processors to cause the computer device to implement any of the ghost detection methods described above.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one computer program stored therein, the at least one computer program being loaded and executed by a processor of a computer device to cause the computer device to implement any of the ghost detection methods described above.
In one possible implementation, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium and executes the computer instructions to cause the computer device to perform any of the methods of ghost detection described above.
It should be noted that, information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the grid, query key frame, and matching frame referred to in this application are all acquired with sufficient authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the objects before and after it are in an "or" relationship.
The foregoing description is merely an exemplary embodiment of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A ghost detection method for a point cloud map, the method comprising:
dividing a point cloud map to be detected into a plurality of grids, wherein the point cloud map comprises a plurality of frame point cloud tracks, and each grid comprises a local point cloud track in the plurality of frame point cloud tracks;
determining a query key frame from a local point cloud track included in a target grid;
determining at least one matching frame of the query key frame in the multi-frame point cloud track, wherein the matching frame is a local point cloud track in a reference range where the query key frame is located;
determining registration results of the query key frame and each matching frame of the query key frame;
in response to the presence of a registration result satisfying the requirements, the target grid is determined to be a grid in which ghost is present.
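As an illustrative sketch (not part of the claimed method), the grid division in the first step of claim 1 can be expressed as assigning each map point to a square cell keyed by its floored XY index; the cell size and function name are assumptions.

```python
import numpy as np

def divide_into_grids(points, cell=50.0):
    """Assign each map point to a square grid keyed by its floored XY cell
    index, turning one large point cloud map into many small local sets."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    grids = {}
    for key, pt in zip(map(tuple, keys), points):
        grids.setdefault(key, []).append(pt)
    return {k: np.asarray(v) for k, v in grids.items()}
```

Each dictionary entry then plays the role of one grid holding a local point cloud track of the multi-frame trajectory.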
2. The method of claim 1, wherein determining the query keyframe from the local point cloud trajectories included in the target grid comprises:
determining the road layer number in the target grid;
and determining a reference number of point cloud tracks from the local point cloud tracks included in the target grid as query key frames, wherein the reference number is determined based on the road layer number.
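A possible sketch of claim 2, under the assumption that the reference number scales linearly with the number of road layers and that key frames are taken evenly spaced from the grid's tracks (neither detail is fixed by the claim):

```python
def select_query_key_frames(tracks, road_layers, per_layer=2):
    """Pick a reference number of evenly spaced query key frames from a
    grid's local point cloud tracks; the linear scaling of the reference
    number with the road layer count is an assumption."""
    n = max(1, min(len(tracks), road_layers * per_layer))
    step = max(1, len(tracks) // n)
    return tracks[::step][:n]
```

Scaling the key-frame count with the road layers gives multi-layer roads (e.g. overpasses) proportionally more detection coverage.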
3. The method of claim 1, wherein said determining at least one matching frame of the query keyframe in the multi-frame point cloud trajectory comprises:
selecting a local point cloud track in the reference range from the multi-frame point cloud tracks;
and determining local point cloud tracks meeting time intervals in the plurality of local point cloud tracks as matching frames of the query key frame in response to the fact that the plurality of local point cloud tracks are included in the reference range.
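Claim 3's time-interval condition can be sketched as keeping only those candidate tracks, already inside the reference range, whose timestamps are far enough from the query's to come from a different mapping pass; the minimum gap value is an assumption.

```python
def select_matching_frames(query_ts, candidates, min_gap=60.0):
    """From candidate (timestamp, frame_id) tracks already inside the query
    frame's reference range, keep those whose timestamps differ from the
    query by at least `min_gap` seconds, i.e. a likely different pass."""
    return [fid for ts, fid in candidates if abs(ts - query_ts) >= min_gap]
```

Requiring a time gap avoids matching a frame against its immediate neighbours, which would trivially register well and hide ghosting between passes.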
4. The method of claim 1, wherein the determining registration results for the query key frame and each matching frame of the query key frame comprises:
determining a subgraph of the query key frame, wherein the subgraph of the query key frame is obtained based on a point cloud track included in the query key frame;
determining a subgraph of any matching frame for any matching frame of the query key frame, wherein the subgraph of any matching frame is obtained based on a point cloud track included by any matching frame;
determining a plurality of pose transformation matrices based on the subgraph of the query key frame and the subgraph of any matching frame, any pose transformation matrix including at least one of rotation transformation parameters and translation transformation parameters that transform the query key frame to any matching frame;
and carrying out mean value calculation on the pose transformation matrices meeting the requirements in the plurality of pose transformation matrices, and taking the calculated result as a registration result of the query key frame and the any matching frame.
5. The method of claim 4, wherein the determining the sub-graph of the query key frame comprises:
and merging the point cloud tracks included in the query key frame and the relevant point cloud tracks of the query key frame to obtain a sub-graph of the query key frame, wherein the relevant point cloud tracks of the query key frame comprise point cloud tracks adjacent to the time stamp of the query key frame in the target grid.
6. The method of claim 4, wherein said determining the sub-picture of any of the matching frames comprises:
and combining the point cloud track included in any matching frame with the relevant point cloud track of any matching frame to obtain a subgraph of the any matching frame, wherein the relevant point cloud track of any matching frame comprises a point cloud track adjacent to the time stamp of any matching frame in a grid where the any matching frame is located.
7. The method of claim 4, wherein the determining a plurality of pose transformation matrices based on the sub-graph of the query key frame and the sub-graph of any matching frame comprises:
determining multiple groups of local point cloud data corresponding to the subgraph of the query key frame and global point cloud data corresponding to the subgraph of the any matching frame;
registering the multiple groups of local point cloud data with the global point cloud data respectively to obtain multiple pose transformation matrixes, wherein one group of local point cloud data corresponds to one pose transformation matrix, and the pose transformation matrix corresponding to any group of local point cloud data comprises at least one of rotation transformation parameters and translation transformation parameters for transforming a query key frame to any matching frame through any group of local point cloud data.
8. The method of claim 7, wherein determining the plurality of sets of local point cloud data corresponding to sub-graphs of the query keyframe comprises:
and screening out point cloud data meeting the proportion requirement according to different areas in global point cloud data included in the subgraph of the query key frame, and taking the point cloud data meeting the proportion requirement screened out by each area as a group of local point cloud data corresponding to the subgraph of the query key frame.
9. The method of claim 1, wherein after determining the registration results for the query key frame and each matching frame of the query key frame, further comprising:
for any matching frame, calculating pose errors of the query key frame according to registration results of the query key frame and the any matching frame and a registration transformation matrix, wherein the registration transformation matrix comprises at least one of rotation transformation parameters and translation transformation parameters for transforming the query key frame to the any matching frame through global point cloud data of the query key frame;
and determining that the registration result of any matching frame and the query key frame meets the requirement in response to the pose error being greater than a pose error threshold.
10. The method according to any one of claims 1-9, wherein after determining the target grid as a grid in which ghosting exists, further comprising:
merging the grids with the ghost, and determining the region of the ghost according to the merged grids;
and repairing the point cloud track included in the ghost area.
CN202211063801.3A 2022-08-31 2022-08-31 Ghost detection method of point cloud map Pending CN117670785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211063801.3A CN117670785A (en) 2022-08-31 2022-08-31 Ghost detection method of point cloud map

Publications (1)

Publication Number Publication Date
CN117670785A true CN117670785A (en) 2024-03-08

Family

ID=90066889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211063801.3A Pending CN117670785A (en) 2022-08-31 2022-08-31 Ghost detection method of point cloud map

Country Status (1)

Country Link
CN (1) CN117670785A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254857A1 (en) * 2014-03-10 2015-09-10 Sony Corporation Image processing system with registration mechanism and method of operation thereof
CN108694882A (en) * 2017-04-11 2018-10-23 百度在线网络技术(北京)有限公司 Method, apparatus and equipment for marking map
CN111771229A (en) * 2019-01-30 2020-10-13 百度时代网络技术(北京)有限公司 Point cloud ghost effect detection system for automatic driving vehicle
US20210327128A1 (en) * 2019-01-30 2021-10-21 Baidu Usa Llc A point clouds ghosting effects detection system for autonomous driving vehicles
EP3731181A1 (en) * 2019-04-24 2020-10-28 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for point cloud registration, server and computer readable medium
CN112241010A (en) * 2019-09-17 2021-01-19 北京新能源汽车技术创新中心有限公司 Positioning method, positioning device, computer equipment and storage medium
CN111462072A (en) * 2020-03-30 2020-07-28 北京百度网讯科技有限公司 Dot cloud picture quality detection method and device and electronic equipment
US20210374904A1 (en) * 2020-05-26 2021-12-02 Baidu Usa Llc Depth-guided video inpainting for autonomous driving
CN111795704A (en) * 2020-06-30 2020-10-20 杭州海康机器人技术有限公司 Method and device for constructing visual point cloud map
CN113866779A (en) * 2020-06-30 2021-12-31 上海商汤智能科技有限公司 Point cloud data fusion method and device, electronic equipment and storage medium
CN113192197A (en) * 2021-05-24 2021-07-30 北京京东乾石科技有限公司 Method, device, equipment and storage medium for constructing global point cloud map

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姜俊奎; 张焱; 李鹏宇: "Research on Simultaneous Localization and Mapping Based on 3D Laser Point Clouds", 测绘与空间地理信息 (Geomatics & Spatial Information Technology), no. 11, 25 November 2018 (2018-11-25) *
殷江; 林建德; 孔令华; 邹诚; 游通飞; 易定容: "3D Mapping and Localization of Mobile Robots Based on LiDAR", 福建工程学院学报 (Journal of Fujian University of Technology), no. 04, 25 August 2020 (2020-08-25) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination