CN116664648A - Point cloud frame and depth map generation method and device, electronic equipment and storage medium - Google Patents

Point cloud frame and depth map generation method and device, electronic equipment and storage medium

Info

Publication number
CN116664648A
Authority
CN
China
Prior art keywords
point cloud
virtual
laser radar
depth map
virtual laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310643979.3A
Other languages
Chinese (zh)
Inventor
刘平
孙金泉
蔡登胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Liugong Machinery Co Ltd
Original Assignee
Guangxi Liugong Machinery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Liugong Machinery Co Ltd filed Critical Guangxi Liugong Machinery Co Ltd
Priority to CN202310643979.3A priority Critical patent/CN116664648A/en
Publication of CN116664648A publication Critical patent/CN116664648A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a point cloud frame and depth map generation method and device, electronic equipment, and a storage medium. The method comprises: acquiring map point cloud data and preset point cloud interception parameters; processing the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, where the intermediate matrix comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values; and generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the three-dimensional point cloud coordinates and the point cloud depth values in the intermediate matrix. In the embodiments of the invention, the map point cloud data is processed according to the preset point cloud interception parameters to obtain the intermediate matrix, and the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map are generated together from the intermediate matrix, which simplifies the generation process of the virtual laser radar point cloud frame and depth map and greatly improves generation efficiency and real-time performance.

Description

Point cloud frame and depth map generation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of lidar technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for generating a point cloud frame and a depth map.
Background
In unmanned driving and unmanned operation technologies and products, a laser radar is an important sensing device. A virtual laser radar point cloud frame and its corresponding depth map generated from a point cloud map can provide important references for application scenarios such as drivable area sensing, obstacle sensing, work object sensing, and positioning in unmanned driving or unmanned operation. In the prior art, generating the virtual laser radar point cloud frame from the point cloud map and generating the depth map from the virtual laser radar point cloud frame are executed separately, so generation efficiency and real-time performance are poor.
Disclosure of Invention
The invention provides a point cloud frame and depth map generation method and device, electronic equipment, and a storage medium, which process map point cloud data according to preset point cloud interception parameters to obtain an intermediate matrix and simultaneously generate, from the intermediate matrix, a virtual laser radar point cloud frame and its corresponding virtual laser radar depth map, thereby simplifying the generation process of the virtual laser radar point cloud frame and depth map and greatly improving generation efficiency and real-time performance.
According to an aspect of the present invention, there is provided a point cloud frame and depth map generating method, the method including:
Acquiring map point cloud data and preset point cloud interception parameters;
processing the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, where the intermediate matrix comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values;
and generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix.
According to another aspect of the present invention, there is provided a point cloud frame and depth map generating apparatus, including:
the data acquisition module is used for acquiring map point cloud data and preset point cloud interception parameters;
the intermediate matrix determining module, configured to process the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, where the intermediate matrix comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values;
and the point cloud frame and depth map generation module is used for generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the point cloud frame and depth map generation method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the point cloud frame and depth map generation method according to any one of the embodiments of the present invention when executed.
According to the technical scheme of the invention, map point cloud data and preset point cloud interception parameters are acquired, and the map point cloud data is processed according to the preset point cloud interception parameters to obtain an intermediate matrix, where the intermediate matrix comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values; a virtual laser radar point cloud frame and a virtual laser radar depth map are then generated according to the three-dimensional point cloud coordinates and the point cloud depth values in the intermediate matrix. In the embodiments of the invention, the map point cloud data is processed according to the preset point cloud interception parameters to obtain the intermediate matrix, and the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map are generated together from the intermediate matrix, which simplifies the generation process of the virtual laser radar point cloud frame and depth map and greatly improves generation efficiency and real-time performance.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a point cloud frame and depth map generating method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a point cloud frame and depth map generating method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a point cloud frame and depth map generating method according to a third embodiment of the present invention;
FIG. 4 is an exemplary diagram of a depth map generated from a real lidar point cloud frame provided in accordance with a third embodiment of the present invention;
FIG. 5 is an exemplary diagram of a depth map generated from a virtual lidar point cloud frame provided in accordance with a third embodiment of the present invention;
FIG. 6 is an exemplary diagram of another depth map generated from a virtual lidar point cloud frame provided in accordance with the third embodiment of the present invention;
FIG. 7 is an exemplary diagram of yet another depth map generated from a real lidar point cloud frame provided in accordance with the third embodiment of the present invention;
FIG. 8 is an exemplary diagram of yet another depth map generated from a virtual lidar point cloud frame provided in accordance with the third embodiment of the present invention;
fig. 9 is a schematic structural diagram of a point cloud frame and depth map generating device according to a fourth embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device implementing a point cloud frame and a depth map generating method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a point cloud frame and depth map generation method according to a first embodiment of the present invention. The method may be performed by a point cloud frame and depth map generating device, which may be implemented in hardware and/or software. As shown in fig. 1, the method provided in the first embodiment specifically includes the following steps:
S110, acquiring map point cloud data and preset point cloud interception parameters.
The map point cloud data may refer to the data source used to generate a virtual laser radar point cloud frame and its corresponding virtual laser radar depth map, and may be derived from laser radar point cloud data of unmanned driving, unmanned operation, or other scenarios. The preset point cloud interception parameters may be understood as preconfigured parameters used to intercept point cloud data within a certain range from the map point cloud data, and may include: the horizontal field angle, vertical field angle, field-angle central-axis orientation, nearest point distance limit, farthest point distance limit, and the like of the virtual laser radar.
In the embodiment of the invention, the map point cloud data used to generate the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map may be acquired from a map point cloud database stored locally or on a cloud server, and the preset point cloud interception parameters may be acquired from a preset configuration file or configuration table, so that the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map can then be generated from the map point cloud data and the preset point cloud interception parameters. It will be appreciated that the embodiments of the present invention do not limit the storage locations of the map point cloud data and the preset point cloud interception parameters. In addition, the preset point cloud interception parameters may be configured according to the type of virtual laser radar; for example, only a solid-state virtual laser radar is configured with a field-angle central-axis orientation, while other types of virtual laser radar may omit this parameter.
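For illustration, the preset point cloud interception parameters described above might be organized as a configuration structure like the following minimal sketch; every name and value here is hypothetical and not a format defined by the patent.

```python
import math

# Hypothetical preset point cloud interception parameters; all names and
# values are illustrative only, not the patent's own configuration format.
preset_params = {
    "origin": (0.0, 0.0, 1.8),     # preset coordinate origin in the map frame (m)
    "yaw": math.radians(90.0),     # preset virtual lidar orientation (field-angle central axis)
    "fov_h": math.radians(120.0),  # virtual lidar horizontal field angle
    "fov_v": math.radians(30.0),   # virtual lidar vertical field angle
    "res_h": math.radians(0.2),    # horizontal direction angle resolution
    "res_v": math.radians(0.2),    # vertical direction angle resolution
    "max_range": 100.0,            # farthest visible distance (m)
    "min_range": 0.5,              # nearest visible distance (m)
}

# Per S260 below, the intermediate matrix dimensions follow from these values:
rows = round(preset_params["fov_h"] / preset_params["res_h"])  # horizontal FOV / horizontal resolution
cols = round(preset_params["fov_v"] / preset_params["res_v"])  # vertical FOV / vertical resolution
```

With the values above this yields a 600 x 150 intermediate matrix; other sensor models would simply change the angles and resolutions.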
S120, processing the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, where the intermediate matrix comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values.
The intermediate matrix may refer to an intermediate data structure designed in the embodiments of the present invention to enable the simultaneous generation of the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map; it comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values.
In the embodiment of the invention, in order to generate the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map from the map point cloud data, an intermediate data structure, namely the intermediate matrix, is designed, which may comprise at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values. The manner of processing the map point cloud data according to the preset point cloud interception parameters to obtain the intermediate matrix may include, but is not limited to, the following: based on the field of view of the virtual laser radar, all point clouds within a certain spatial range may be intercepted from the map point cloud data using the preset point cloud interception parameters, and each intercepted point cloud may first be converted into the virtual laser radar coordinate system and then mapped to the corresponding position of the established intermediate matrix. It should be understood that the foregoing is merely an example; for instance, the type of virtual laser radar coordinate system used and the mapping rule for mapping point clouds to the intermediate matrix may be configured according to the actual situation. It need only be guaranteed that the generated intermediate matrix comprises at least the two attributes of three-dimensional point cloud coordinates and point cloud depth values; the intermediate matrix may further include attribute information such as the reflection intensity, color, or normal vector of the point cloud.
And S130, generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix.
The virtual laser radar point cloud frame may refer to a set of point clouds obtained from the map point cloud data; in general, it is a subset of the map point cloud data. The virtual laser radar depth map may refer to a two-dimensional depth map generated from the map point cloud data, in which the pixel value of each pixel represents the point cloud depth value of the corresponding point cloud; the size of the virtual laser radar depth map may be the same as that of the intermediate matrix.
In the embodiment of the invention, in order to directly separate the corresponding virtual laser radar depth map from the intermediate matrix, after the intermediate matrix is obtained, a virtual laser radar depth map of the same size can be created according to the number of rows and columns of the intermediate matrix; the three-dimensional point cloud coordinates corresponding to each element of the intermediate matrix are then stored into the virtual laser radar point cloud frame, and the point cloud depth values corresponding to each element are filled into the corresponding positions of the virtual laser radar depth map, yielding the required virtual laser radar point cloud frame and its corresponding virtual laser radar depth map. Further, since point cloud data is discrete and sparse, when the point clouds are mapped to the intermediate matrix and then projected to the virtual laser radar depth map, many empty pixels will inevitably remain in the depth map; a hole filling operation can therefore be performed on the obtained virtual laser radar depth map to further increase its visibility and authenticity. The hole filling manner may include, but is not limited to: filling with the average depth value of the nearest neighboring pixels, filling with the minimum depth value of the nearest neighboring pixels, and the like.
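As a rough sketch of this step, the hypothetical code below separates a point cloud frame and a depth map from an intermediate matrix stored as a (rows, cols, 4) array of (x, y, z, depth) with NaN marking empty cells, then fills holes with the minimum depth among the nearest neighbours, one of the filling strategies mentioned above. The array layout and function names are this sketch's assumptions, not the patent's data structure.

```python
import numpy as np

def split_outputs(inter):
    """Separate the virtual lidar point cloud frame and depth map from an
    intermediate matrix of shape (rows, cols, 4) holding (x, y, z, depth),
    where NaN marks an empty cell. Illustrative sketch only."""
    depth = inter[:, :, 3].copy()        # the virtual lidar depth map
    mask = ~np.isnan(depth)              # cells that actually hold a point
    frame = inter[mask][:, :3]           # (N, 3) virtual lidar point cloud frame
    return frame, depth

def fill_holes_min(depth):
    """Fill empty depth pixels with the minimum depth value among the
    surrounding 8-neighbourhood, per the minimum-depth filling strategy."""
    filled = depth.copy()
    rows, cols = depth.shape
    for r in range(rows):
        for c in range(cols):
            if np.isnan(depth[r, c]):
                nb = depth[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
                nb = nb[~np.isnan(nb)]
                if nb.size:
                    filled[r, c] = nb.min()
    return filled
```

A production version would vectorise the neighbourhood scan, but the loop form keeps the per-pixel rule visible.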
According to the technical scheme of this embodiment, map point cloud data and preset point cloud interception parameters are acquired, and the map point cloud data is processed according to the preset point cloud interception parameters to obtain an intermediate matrix, where the intermediate matrix comprises at least the following attribute information: three-dimensional point cloud coordinates and point cloud depth values; a virtual laser radar point cloud frame and a virtual laser radar depth map are then generated according to the three-dimensional point cloud coordinates and the point cloud depth values in the intermediate matrix. In the embodiments of the invention, the map point cloud data is processed according to the preset point cloud interception parameters to obtain the intermediate matrix, and the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map are generated together from the intermediate matrix, which simplifies the generation process of the virtual laser radar point cloud frame and depth map and greatly improves generation efficiency and real-time performance.
Example two
Fig. 2 is a flowchart of a point cloud frame and depth map generating method according to a second embodiment of the present invention, which is further optimized and expanded based on the foregoing embodiments, and may be combined with each of the optional technical solutions in the foregoing embodiments. As shown in fig. 2, the method for generating a point cloud frame and a depth map provided in the second embodiment specifically includes the following steps:
S210, acquiring map point cloud data and preset point cloud interception parameters.
In the embodiment of the present invention, the preset point cloud interception parameters may include at least one of the following: the virtual laser radar comprises a preset coordinate origin, a preset virtual laser radar orientation, a virtual laser radar horizontal view angle, a virtual laser radar vertical view angle, a virtual laser radar horizontal direction angle resolution, a virtual laser radar vertical direction angle resolution, a virtual laser radar farthest visible distance and a virtual laser radar nearest visible distance.
S220, taking a preset origin of coordinates in preset point cloud interception parameters as the origin of coordinates of a virtual laser radar coordinate system, and taking a preset virtual laser radar orientation in the preset point cloud interception parameters as the positive X-axis direction of the virtual laser radar coordinate system to establish the virtual laser radar coordinate system.
The preset origin of coordinates may be an origin of coordinates of a virtual laser radar coordinate system configured in advance, and a certain point cloud position in the map point cloud data may be used as the preset origin of coordinates according to actual needs. The preset virtual laser radar direction can be understood as the direction of the central axis of the angle of view of the virtual laser radar which is preset, and a certain direction of the central axis of the angle of view can be selected from the map point cloud data as the direction of the preset virtual laser radar according to actual needs. The virtual lidar coordinate system may be a three-dimensional coordinate system established from a preset origin of coordinates of the desired virtual lidar and a preset virtual lidar orientation.
In the embodiment of the invention, the preset coordinate origin and preset virtual laser radar orientation related to the virtual laser radar coordinate system can be extracted from the preset point cloud interception parameters; the preset coordinate origin and the preset virtual laser radar orientation are then used as the coordinate origin and the positive X-axis direction of the virtual laser radar coordinate system, respectively, to establish the corresponding virtual laser radar coordinate system. For the selection of the preset coordinate origin and the preset virtual laser radar orientation, a certain point cloud position and a certain field-angle central-axis direction can be selected from the map point cloud data according to actual needs.
And S230, converting all point clouds in the map point cloud data into a virtual laser radar coordinate system to obtain a first virtual point cloud set.
The first virtual point cloud set may refer to a virtual point cloud set obtained by performing coordinate system conversion on all point clouds in map point cloud data.
In the embodiment of the invention, after the virtual laser radar coordinate system is established, coordinate system conversion can be performed on all point clouds in the map point cloud data, and all point clouds are expressed under the virtual laser radar coordinate system and are recorded as a first virtual point cloud set.
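A minimal sketch of this conversion, assuming the preset virtual lidar orientation is a single yaw angle in the map's horizontal plane (a full implementation would use a general 3D rotation):

```python
import numpy as np

def to_virtual_lidar_frame(points, origin, yaw):
    """Express map points (N, 3) in the virtual lidar coordinate system whose
    origin is `origin` and whose +X axis points along `yaw` (radians, measured
    in the map's horizontal plane). Yaw-only orientation is this sketch's
    simplifying assumption."""
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotation taking map-frame vectors into the lidar frame (inverse yaw)
    R_inv = np.array([[ c,  s, 0.0],
                      [-s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return (points - np.asarray(origin)) @ R_inv.T
```

For example, with a yaw of 90 degrees, a map point one metre along the map's +Y axis lands on the lidar's +X axis, i.e. directly ahead of the virtual sensor.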
S240, screening the view field range of the point clouds in the first virtual point cloud set according to the virtual laser radar horizontal view field angle, the virtual laser radar vertical view field angle, the virtual laser radar horizontal direction angle resolution, the virtual laser radar vertical direction angle resolution, the virtual laser radar farthest visible distance and the virtual laser radar nearest visible distance in the preset point cloud intercepting parameters to obtain a second virtual point cloud set.
The second virtual point cloud set may be a virtual point cloud set obtained by performing field-of-view range screening on point clouds in the first virtual point cloud set.
In the embodiment of the invention, the virtual laser radar horizontal view angle, the virtual laser radar vertical view angle, the virtual laser radar horizontal direction angle resolution, the virtual laser radar vertical direction angle resolution, the virtual laser radar farthest visible distance and the virtual laser radar nearest visible distance related to the virtual laser radar can be extracted from preset point cloud interception parameters, then all point clouds in the first virtual point cloud set are subjected to visual field range screening by utilizing the preset point cloud interception parameters, point clouds beyond the visual field range are removed, and the point cloud set subjected to visual field range screening is used as a second virtual point cloud set.
Further, on the basis of the above embodiment of the present invention, S240 specifically includes the following steps:
s2401, determining the distance from each point cloud in the first virtual point cloud set to the origin of coordinates.
S2402, if the distance is larger than the farthest visible distance of the virtual laser radar or the distance is smaller than the nearest visible distance of the virtual laser radar, eliminating the corresponding point cloud in the first virtual point cloud set.
S2403, if the included angle between the vector from a point cloud to the origin of coordinates and the OXZ coordinate plane of the virtual laser radar coordinate system is greater than half of the virtual laser radar horizontal field angle, or less than the negative of half of the virtual laser radar horizontal field angle, eliminating the corresponding point cloud from the first virtual point cloud set.
S2404, if the included angle between the vector from a point cloud to the origin of coordinates and the OXY coordinate plane of the virtual laser radar coordinate system is greater than half of the virtual laser radar vertical field angle, or less than the negative of half of the virtual laser radar vertical field angle, eliminating the corresponding point cloud from the first virtual point cloud set.
S2405, taking the first virtual point cloud set after being removed as a second virtual point cloud set.
Specifically, S2401 to S2404 may be executed in sequence on all point clouds in the first virtual point cloud set to retain only the point clouds located within the field of view, and the screened point cloud set is taken as the second virtual point cloud set.
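Steps S2401 to S2405 can be sketched as follows. The angle conventions (signed angles to the OXZ and OXY planes via arctangents) and the explicit forward-hemisphere check are this sketch's assumptions, since they are not spelled out above.

```python
import numpy as np

def fov_filter(points, min_range, max_range, fov_h, fov_v):
    """Screen points already expressed in the virtual lidar frame by distance
    and by angles to the OXZ and OXY coordinate planes, per S2401-S2404.
    Angles are in radians; illustrative sketch only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d = np.linalg.norm(points, axis=1)           # S2401: distance to the origin
    keep = (d >= min_range) & (d <= max_range)   # S2402: visible-distance limits
    # S2403: angle to the OXZ plane within +/- half the horizontal field angle
    alpha = np.arctan2(y, np.hypot(x, z))
    keep &= np.abs(alpha) <= fov_h / 2
    # S2404: angle to the OXY plane within +/- half the vertical field angle
    beta = np.arctan2(z, np.hypot(x, y))
    keep &= np.abs(beta) <= fov_v / 2
    # Assumption: for a field of view narrower than 180 degrees, points behind
    # the sensor must also be rejected explicitly.
    keep &= x > 0
    return points[keep]                          # S2405: second virtual point cloud set
```

The mask-based form lets all four criteria be applied in one pass over the array rather than point by point.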
S250, determining the depth map row coordinates and the depth map column coordinates corresponding to each point cloud in the second virtual point cloud set by using a preset depth map coordinate mapping formula.
The preset depth map coordinate mapping formula may be a preset formula for determining a depth map row coordinate and a depth map column coordinate corresponding to the point cloud.
In the embodiment of the present invention, a preset depth map coordinate mapping formula may be preconfigured on the electronic device to determine the depth map row coordinate and depth map column coordinate corresponding to each point cloud in the second virtual point cloud set. The preset depth map coordinate mapping formula may be expressed as:

r_i = (α_i + HalfFOV_H) / RES_H
c_i = (β_i + HalfFOV_V) / RES_V

where r_i represents the depth map row coordinate corresponding to the i-th point cloud; i represents the index of the point cloud; α_i represents the included angle between the vector from the i-th point cloud to the origin of coordinates and the OXZ coordinate plane of the virtual laser radar coordinate system; RES_H represents the horizontal direction angle resolution of the virtual laser radar; HalfFOV_H represents half of the virtual laser radar horizontal field angle; c_i represents the depth map column coordinate corresponding to the i-th point cloud; β_i represents the included angle between the vector from the i-th point cloud to the origin of coordinates and the OXY coordinate plane of the virtual laser radar coordinate system; RES_V represents the vertical direction angle resolution of the virtual laser radar; and HalfFOV_V represents half of the virtual laser radar vertical field angle.
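The row/column mapping can be sketched as follows, assuming the common additive-offset convention r = (α + HalfFOV_H) / RES_H truncated to an integer index; the exact rounding rule is this sketch's assumption.

```python
import math

def depth_map_coords(alpha, beta, res_h, half_fov_h, res_v, half_fov_v):
    """Map a point's angles (radians) to intermediate-matrix / depth-map
    indices: alpha is the angle to the OXZ plane, beta the angle to the
    OXY plane. Sign and rounding conventions are illustrative assumptions."""
    r = int((alpha + half_fov_h) / res_h)  # depth map row coordinate r_i
    c = int((beta + half_fov_v) / res_v)   # depth map column coordinate c_i
    return r, c
```

With a 120-degree horizontal field angle (HalfFOV_H of 60 degrees) and 0.2-degree resolution, angles from -60 to +60 degrees map onto rows 0 through 599, matching the matrix dimensions of S260.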
S260, taking the ratio of the horizontal view angle of the virtual laser radar to the horizontal direction angle resolution of the virtual laser radar as the row number of the intermediate matrix, taking the ratio of the vertical view angle of the virtual laser radar to the vertical direction angle resolution of the virtual laser radar as the column number of the intermediate matrix, and creating the intermediate matrix based on the row number and the column number.
In the embodiment of the invention, the virtual laser radar horizontal view angle, the virtual laser radar vertical view angle, the virtual laser radar horizontal direction angle resolution and the virtual laser radar vertical direction angle resolution in preset point cloud interception parameters can be utilized to determine the number of rows and the number of columns of the intermediate matrix, namely, the ratio of the virtual laser radar horizontal view angle to the virtual laser radar horizontal direction angle resolution is used as the number of rows of the intermediate matrix, and the ratio of the virtual laser radar vertical view angle to the virtual laser radar vertical direction angle resolution is used as the number of columns of the intermediate matrix, and a corresponding intermediate matrix is created based on the determined number of rows and columns.
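A minimal sketch of creating the intermediate matrix as described in S260, assuming example field angles and resolutions (in degrees) and NaN as the "empty cell" marker; the concrete values are illustrative only:

```python
import numpy as np

# Assumed example parameters, in degrees: the number of rows is
# FOV_H / RES_H and the number of columns is FOV_V / RES_V, as in S260.
FOV_H, FOV_V = 120.0, 30.0
RES_H, RES_V = 0.25, 0.25

rows = int(FOV_H / RES_H)               # 480 rows
cols = int(FOV_V / RES_V)               # 120 columns
# Each cell holds (x, y, z, range); NaN marks an empty cell.
T = np.full((rows, cols, 4), np.nan)
```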
S270, mapping the corresponding point cloud in the second virtual point cloud set to the intermediate matrix by using a preset view shielding condition, a depth map row coordinate and a depth map column coordinate.
The preset view shielding condition may be understood as a preset judging condition for determining whether a view occlusion relationship exists between the point clouds in the second virtual point cloud set, and may be used for performing visibility screening on those point clouds. For example, the preset view shielding condition may include whether a point cloud depth value exceeds a preset distance threshold, or whether a point cloud depth value is the minimum value, etc.
In the embodiment of the invention, the second virtual point cloud set obtained by field-of-view screening alone does not reflect the occlusion relationships among multiple scene objects; if it were used directly, the generated virtual laser radar point cloud frame and the corresponding virtual laser radar depth map would likewise lack these occlusion relationships, which would seriously reduce their visibility and authenticity.
Further, on the basis of the above embodiment of the present invention, S270 specifically includes the following steps:
s2701, the distance from each point cloud in the second virtual point cloud set to the origin of coordinates is used as a point cloud depth value of the corresponding point cloud.
S2702, when the depth value of the point cloud meets the preset view shielding condition, the row coordinates and the column coordinates of the depth map are used as the position index of the intermediate matrix, and the three-dimensional coordinates of the point cloud corresponding to the point cloud and the depth value of the point cloud are filled into the corresponding positions of the intermediate matrix according to the corresponding position indexes.
In the embodiment of the invention, considering the occlusion relationships among multiple scene objects and the possibility that several point clouds are mapped to the same position of the intermediate matrix, visibility screening can be performed on all point clouds in the second virtual point cloud set by using the preset view shielding condition and the point cloud depth values. The screening strategy is to keep, for each position, only the point cloud with the minimum point cloud depth value. The depth map row coordinate and depth map column coordinate corresponding to that point cloud are then used as the position index of the intermediate matrix, and the point cloud three-dimensional coordinates and point cloud depth value corresponding to the point cloud are filled into the corresponding position of the intermediate matrix according to the position index.
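The per-point update described above (keep only the nearest point mapped to each cell) might be sketched as follows; the function name and the NaN empty-cell convention are illustrative assumptions:

```python
import numpy as np

def fill_intermediate_matrix(points, coords, T):
    """Fill T with the nearest point per cell (sketch of S2701-S2702).

    points: (N, 3) array in the virtual laser radar frame; coords: list of
    precomputed (row, col) indices. The occlusion condition applied here is
    'keep the point with the minimum depth value' per cell (assumption).
    """
    depths = np.linalg.norm(points, axis=1)      # point cloud depth values (S2701)
    for (r, c), p, d in zip(coords, points, depths):
        old = T[r, c, 3]
        if np.isnan(old) or d < old:             # cell empty, or nearer point wins
            T[r, c, :3] = p                      # point cloud three-dimensional coords
            T[r, c, 3] = d                       # point cloud depth value
    return T
```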
It should be understood that, when the point cloud map is created, a large number of laser radar point cloud frames collected in different positions and different directions are comprehensively spliced, and the scene space structure is expressed in a near-complete manner, so that the point cloud missing condition of a rear object caused by shielding generated by a plurality of objects in a single view direction does not exist. According to the technical scheme provided by the embodiment of the invention, after the visual field range screening is carried out on all point clouds in the map point cloud data, the visibility screening step of the intercepted point clouds is added, so that the visibility and the authenticity of the subsequently generated virtual laser radar point cloud frame and the corresponding virtual laser radar depth map are greatly increased.
S280, creating a virtual laser radar depth map with the same size according to the row number and the column number of the intermediate matrix.
In the embodiment of the invention, in order to facilitate the subsequent direct separation of the corresponding virtual lidar depth maps from the intermediate matrix, virtual lidar depth maps with the same size can be created according to the number of rows and columns of the intermediate matrix.
And S290, storing the three-dimensional coordinates of the point clouds corresponding to the elements in the intermediate matrix into a virtual laser radar point cloud frame, filling the point cloud depth values corresponding to the elements in the intermediate matrix into the corresponding positions of the virtual laser radar depth map, and executing the cavity filling operation on the virtual laser radar depth map.
In the embodiment of the invention, each element in the intermediate matrix can be subjected to data separation according to two dimensions of the three-dimensional coordinates of the point cloud and the depth value of the point cloud, namely when the element in the intermediate matrix is a non-null value, the three-dimensional coordinates of the point cloud corresponding to the element are stored in the virtual laser radar point cloud frame, the depth value of the point cloud corresponding to the element is filled to the corresponding position of the virtual laser radar depth map according to the corresponding row coordinates of the depth map and the column coordinates of the depth map, and meanwhile, after the filling of the depth value of the point cloud is carried out on the virtual laser radar depth map, a cavity filling operation can be carried out on the virtual laser radar depth map so as to further increase the visibility and the authenticity of the virtual laser radar depth map.
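The data separation of S290 (the point cloud frame from the x, y, z components, the depth map from the depth values) can be sketched as follows; NaN marks cells that remain holes and will be handled by the hole filling step:

```python
import numpy as np

def separate(T):
    """Split the intermediate matrix into a point cloud frame and a depth map."""
    valid = ~np.isnan(T[:, :, 3])     # cells that received at least one point
    frame = T[valid][:, :3]           # (K, 3) virtual laser radar point cloud frame
    depth_map = T[:, :, 3].copy()     # depth channel; NaN cells are holes
    return frame, depth_map
```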
Further, on the basis of the embodiment of the present invention, a hole filling operation is performed on a virtual lidar depth map, and specifically includes the following steps:
A. acquiring nearest neighbor pixel points of all pixel points in the virtual laser radar depth map, and determining the minimum pixel value in the nearest neighbor pixel points;
B. if the pixel value of the pixel point in the virtual laser radar depth map is a null value, filling the pixel value of the corresponding pixel point into a minimum pixel value;
C. if the pixel value of the pixel point in the virtual laser radar depth map is smaller than or equal to the minimum pixel value, not modifying the pixel value of the corresponding pixel point;
D. and if the pixel value of the pixel point in the virtual laser radar depth map is larger than the minimum pixel value, filling the pixel value of the corresponding pixel point into the minimum pixel value.
It should be understood that, because point cloud data is discrete and sparse, many hole defects necessarily appear when the point cloud data is projected to the two-dimensional depth map. In addition, when visibility screening is performed on the point clouds in S270, a small number of object points of the rear scene may still leak onto the foreground object due to the far distance and the sparseness of the point cloud, even though the rear scene is occluded. Therefore, in S290, after the point cloud depth values are filled into the virtual laser radar depth map, a hole filling operation may be performed on it: the nearest neighbour pixel points of each pixel point in the virtual laser radar depth map, for example the four-neighbourhood or eight-neighbourhood, are acquired, the minimum pixel value (i.e., the minimum point cloud depth value) among the nearest neighbour pixel points is calculated, and the pixel value of the pixel point is compared with the minimum pixel value. If the pixel value of the pixel point is a null value, the pixel value of the corresponding pixel point is filled with the minimum pixel value; if the pixel value of the pixel point is less than or equal to the minimum pixel value, the pixel value of the corresponding pixel point is not modified; if the pixel value of the pixel point is greater than the minimum pixel value, the pixel value of the corresponding pixel point is filled with the minimum pixel value, and the final virtual laser radar depth map after hole filling is obtained. According to the technical scheme provided by the embodiment of the invention, by using the hole filling operation, the blank points in the virtual laser radar depth map can be filled, and a small number of background object points possibly leaked onto a foreground object are filtered out, so that the visibility and the authenticity of the finally generated virtual laser radar depth map are higher.
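Steps A through D above can be sketched as a straightforward (unoptimized) double loop; the 4-neighbourhood choice, the function name, and the NaN null-value convention are assumptions for illustration:

```python
import numpy as np

def fill_holes(depth_map):
    """Nearest-neighbour hole filling (sketch of steps A-D, 4-neighbourhood).

    For each pixel, take the minimum valid neighbour depth; empty pixels
    and pixels farther than that minimum are overwritten with it, which
    both fills holes and removes leaked background points.
    """
    out = depth_map.copy()
    rows, cols = depth_map.shape
    for i in range(rows):
        for j in range(cols):
            neigh = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighbourhood
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and not np.isnan(depth_map[ni, nj]):
                    neigh.append(depth_map[ni, nj])
            if not neigh:
                continue                          # no valid neighbours: leave as-is
            n_min = min(neigh)                    # minimum neighbour depth (step A)
            v = depth_map[i, j]
            if np.isnan(v) or v > n_min:          # steps B and D: fill/clamp to n_min
                out[i, j] = n_min
    return out
```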
According to the technical scheme of the embodiment of the invention, the map point cloud data and the preset point cloud intercepting parameters are obtained, the preset coordinate origin in the preset point cloud intercepting parameters is used as the coordinate origin of a virtual laser radar coordinate system, the preset virtual laser radar in the preset point cloud intercepting parameters faces the positive X-axis direction of the virtual laser radar coordinate system to establish the virtual laser radar coordinate system, all point clouds in the map point cloud data are converted into the virtual laser radar coordinate system to obtain a first virtual point cloud set, and the point clouds in the first virtual point cloud set are subjected to visual field range screening to obtain a second virtual point cloud set according to the virtual laser radar horizontal visual field angle, the virtual laser radar vertical visual field angle, the virtual laser radar horizontal direction angular resolution, the virtual laser radar vertical direction angular resolution, the virtual laser radar farthest visual distance and the virtual laser radar nearest visual distance in the preset point cloud intercepting parameters, determining the row coordinates and the column coordinates of the depth map corresponding to each point cloud in the second virtual point cloud set by using a preset depth map coordinate mapping formula, taking the ratio of the horizontal view angle of the virtual laser radar to the horizontal direction angle resolution of the virtual laser radar as the row number of the intermediate matrix, taking the ratio of the vertical view angle of the virtual laser radar to the vertical direction angle resolution of the virtual laser radar as the column number of the intermediate matrix, creating the intermediate matrix based on the row number and the column number, mapping the corresponding point cloud in the second virtual point cloud set to the intermediate matrix 
by using the preset view shielding condition and the depth map row coordinates and column coordinates, creating a virtual laser radar depth map with the same size according to the row number and the column number of the intermediate matrix, storing the point cloud three-dimensional coordinates corresponding to each element in the intermediate matrix to the virtual laser radar point cloud frame, filling the point cloud depth values corresponding to the elements in the intermediate matrix to the corresponding positions of the virtual laser radar depth map, and executing the hole filling operation on the virtual laser radar depth map. According to the embodiment of the invention, the map point cloud data are sequentially subjected to field-of-view range and visibility screening to generate the intermediate matrix, and the virtual laser radar point cloud frame and its corresponding virtual laser radar depth map are generated by simultaneous separation according to the point cloud three-dimensional coordinates and the point cloud depth values of the elements in the intermediate matrix, so that the generation efficiency and the real-time performance are greatly improved; meanwhile, when the intermediate matrix is generated, the occlusion relationships among multiple scene objects are judged, so that the visibility and the authenticity of the generated virtual laser radar point cloud frame and its corresponding virtual laser radar depth map are greatly increased; in addition, the hole filling operation performed after the virtual laser radar depth map is generated fully considers the light impermeability of laser radar imaging, fills holes in a manner more consistent with physical characteristics, and filters out a small number of rear scene object points possibly leaked onto a foreground object, further improving the visibility and the authenticity of the virtual laser radar depth map.
Example III
Fig. 3 is a flowchart of a point cloud frame and depth map generating method according to a third embodiment of the present invention. On the basis of the above embodiments, this embodiment provides an implementation of the point cloud frame and depth map generating method which, through the ingenious design of an intermediate data structure (the intermediate matrix), can generate the virtual laser radar point cloud frame and the corresponding virtual laser radar depth map simultaneously from map point cloud data, while also considering the occlusion relationships among multiple scene objects and a more reasonable hole filling operation, so that the visibility and the authenticity of the finally generated virtual laser radar point cloud frame and corresponding virtual laser radar depth map are higher. As shown in fig. 3, the method for generating a point cloud frame and a depth map according to the third embodiment of the present invention specifically includes the following steps:
S310, reading in map point cloud data and inputting preset point cloud interception parameters.
Specifically, the map point cloud data may be read in and stored in the set M, and then a map position, i.e., a preset origin of coordinates P0(x0, y0, z0), and a preset virtual laser radar orientation θ are input.
S320, establishing a virtual laser radar coordinate system, and converting all point clouds in the map point cloud data into the virtual laser radar coordinate system to obtain a first virtual point cloud set.
Specifically, the map position, i.e. the preset origin of coordinates P0(x0, y0, z0), is taken as the origin of coordinates, and the preset virtual laser radar orientation θ is taken as the positive direction of the X axis, so as to establish the virtual laser radar coordinate system OLxyz; coordinate system conversion is then performed on all point clouds of the set M, and the resulting first virtual point cloud set ML is expressed in the virtual laser radar coordinate system OLxyz.
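A hedged sketch of the conversion into the OLxyz frame, assuming the orientation θ is a heading angle (a rotation about the map Z axis); the function name and the planar-heading assumption are illustrative:

```python
import numpy as np

def to_lidar_frame(points, p0, theta_deg):
    """Transform map points into the virtual laser radar frame OLxyz.

    Assumes theta is a heading about the map Z axis: translate by -P0,
    then rotate by -theta so +X points along the laser radar heading.
    """
    t = np.radians(theta_deg)
    rot = np.array([[np.cos(t),  np.sin(t), 0.0],
                    [-np.sin(t), np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])  # R_z(-theta)
    return (np.asarray(points) - np.asarray(p0)) @ rot.T
```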
S330, performing visual field range and visibility screening on the point clouds in the first virtual point cloud set to generate an intermediate matrix.
In the embodiment of the invention, the virtual laser radar horizontal field angle FOV_H, the virtual laser radar vertical field angle FOV_V, the virtual laser radar horizontal angular resolution RES_H, the virtual laser radar vertical angular resolution RES_V, the virtual laser radar farthest visible distance D_max and the virtual laser radar nearest visible distance D_min may be preset, and the half field angle ranges may be set as: Half_FOV_H = FOV_H/2, Half_FOV_V = FOV_V/2. Meanwhile, the intermediate matrix T may be designed as a matrix with FOV_H/RES_H rows and FOV_V/RES_V columns,
wherein the intermediate matrix T is a three-dimensional data structure whose elements t_ij = (x, y, z, range) store the point cloud three-dimensional coordinates (x, y, z) and the point cloud depth value range of the corresponding point cloud.
Specifically, performing the field-of-view range and visibility screening on the point clouds in the first virtual point cloud set ML to generate the intermediate matrix may include the following steps:
A1. Calculate the distance Di from each point cloud Pi(xi, yi, zi) to the origin P0 of the virtual laser radar coordinate system OLxyz; if Di > D_max or Di < D_min, eliminate the corresponding point cloud from the first virtual point cloud set ML. The distance Di also represents the point cloud depth value corresponding to the point cloud Pi.
A2. Calculate the included angle α_i between the vector OPi from the origin of coordinates O of the OLxyz coordinate system to each point cloud Pi(xi, yi, zi) and the OXZ coordinate plane; if α_i > Half_FOV_H or α_i < -Half_FOV_H, eliminate the corresponding point cloud from the first virtual point cloud set ML.
A3. Calculate the included angle β_i between the vector OPi and the OXY coordinate plane; if β_i > Half_FOV_V or β_i < -Half_FOV_V, eliminate the corresponding point cloud from the first virtual point cloud set ML.
A4. Determine the depth map row coordinate and the depth map column coordinate corresponding to each point cloud in the second virtual point cloud set by using the preset depth map coordinate mapping formula, which may be expressed as follows:
r_i = ⌊(α_i + Half_FOV_H) / RES_H⌋,  c_i = ⌊(β_i + Half_FOV_V) / RES_V⌋
A5. Update the elements of the intermediate matrix according to the depth map row coordinates and the depth map column coordinates corresponding to each point cloud.
The object to be updated is the element t_rc in the intermediate matrix T, i.e. the point cloud Pi(xi, yi, zi) is mapped to the corresponding position in T. At the same time, check whether t_rc already holds a previously stored or updated element value. The specific steps are as follows:
A51. When the corresponding position of the intermediate matrix T holds no old value, fill the four components of the vector t_rc: x = xi, y = yi, z = zi, range = Di;
A52. When an old value exists at the corresponding position of the intermediate matrix T, further compare the stored point cloud depth value range with the point cloud depth value Di of the current point cloud. If range < Di, the previously stored point cloud is closer to the origin of coordinates than the current point cloud, i.e. the current point cloud is blocked by the old point cloud, so the old value at the corresponding position of T is kept unchanged; if range > Di, the previously stored point cloud is farther from the origin of coordinates than the current point cloud, i.e. the old point cloud is blocked by the current point cloud, so the current point cloud Pi is used to update the components of the vector t_rc: x = xi, y = yi, z = zi, range = Di.
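Steps A1 through A3 above can be sketched as one vectorised filter; the degree-based angle convention and the function name are illustrative assumptions:

```python
import numpy as np

def fov_filter(points, d_min, d_max, half_fov_h, half_fov_v):
    """Field-of-view screening A1-A3 (sketch; angles in degrees).

    Keeps points whose distance lies in [d_min, d_max] and whose
    horizontal/vertical angles fall inside the half field angles.
    """
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts, axis=1)                       # A1: distance to origin
    alpha = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))  # A2: angle vs. OXZ plane
    beta = np.degrees(np.arctan2(pts[:, 2],
                                 np.hypot(pts[:, 0], pts[:, 1])))  # A3: vs. OXY plane
    keep = ((d >= d_min) & (d <= d_max)
            & (np.abs(alpha) <= half_fov_h)
            & (np.abs(beta) <= half_fov_v))
    return pts[keep]
```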
In summary, the point clouds in the first virtual point cloud set may be screened by field-of-view range and visibility to generate the intermediate matrix. The prior art does not consider the occlusion relationships among different scene objects; the technical scheme of the embodiment of the invention uses the designed intermediate matrix T to judge the occlusion relationships, effectively solving the problem that occluded background objects incorrectly remain visible.
And S340, separating the point cloud depth values of the point cloud three-dimensional coordinates corresponding to the elements in the intermediate matrix into virtual laser radar point cloud frames and corresponding virtual laser radar depth maps.
Specifically, if an element t_ij in the intermediate matrix is a null value, skip the element; if the element t_ij is not null, store the x, y, z components of t_ij to the virtual laser radar point cloud frame L = {l_0, l_1, …, l_k}, wherein l_k = (x_k, y_k, z_k) represents the k-th spatial point in the point cloud frame L, and store the range component of t_ij to the corresponding position r_ij of the virtual laser radar depth map R.
S350, executing hole filling operation on the virtual laser radar depth map.
In the embodiment of the invention, the nearest neighbour pixel points of each pixel point r_ij in the virtual laser radar depth map, for example the four-neighbourhood or eight-neighbourhood, may be acquired, and the minimum pixel value n_min (i.e., the minimum point cloud depth value) among the nearest neighbour pixel points is calculated. The pixel value r_ij of the pixel point is then compared with the minimum pixel value n_min: if the pixel value r_ij is a null value, the pixel value of the corresponding pixel point is filled with n_min; if the pixel value r_ij is less than or equal to n_min, the pixel value of the corresponding pixel point is not modified; if the pixel value r_ij is greater than n_min, the pixel value of the corresponding pixel point is filled with n_min. The final virtual laser radar depth map R is obtained after hole filling. In the post-processing of the virtual laser radar depth map, the technical scheme of the embodiment of the invention fully considers the light impermeability of laser radar imaging, performs hole filling in a manner more consistent with physical characteristics, and filters out a small number of background object points possibly leaked onto a foreground object, so that the visibility and the authenticity of the finally generated virtual laser radar depth map are higher.
Fig. 4 is an exemplary diagram of a depth map generated by a real lidar point cloud frame according to a third embodiment of the present invention. As can be seen from fig. 4, there are multiple walls in the map scene, with strict occlusion relationships between each wall. Fig. 5 is an exemplary diagram of a depth map generated by a virtual lidar point cloud frame according to a third embodiment of the present invention. As shown in fig. 5, when the prior art is adopted to directly intercept the corresponding point cloud from the map point cloud data to generate the virtual lidar depth map, it can be obviously seen in the depth map that the shielding relationship between the multiple walls is not correctly reflected, and the point cloud of the rear wall can be seen through to the front wall, so that the virtual lidar depth map is messy, i.e. the visibility and the authenticity of the virtual lidar depth map are seriously affected. Fig. 6 is an exemplary diagram of another depth map generated by a virtual lidar point cloud frame according to the third embodiment of the present invention. As can be seen from fig. 6, when the technical solution of the embodiment of the present invention is adopted and the hole filling operation is not performed, the visibility and the authenticity of the generated virtual lidar depth map are greatly improved compared with those of fig. 5. Fig. 7 is an exemplary diagram of a depth map generated by a real lidar point cloud frame according to still another embodiment of the present invention. Fig. 8 is an exemplary diagram of a depth map generated by a virtual lidar point cloud frame according to still another embodiment of the present invention. As shown in fig. 
7 and 8, with the complete technical scheme of the embodiment of the present invention, that is, after the hole filling operation is performed, the obtained virtual laser radar depth map is very close to the real laser radar depth map, so that the visibility and the authenticity of the virtual laser radar depth map generated by adopting the technical scheme of the embodiment of the present invention are greatly increased compared with the prior art.
The embodiment of the invention belongs to a bottom core technology as an innovative method for point cloud processing, and can be applied to the aspects of sensing, positioning, simulation and the like of unmanned operation and unmanned operation; furthermore, the present invention may be applied to a series of unmanned vehicle products, for example, various unmanned engineering machine products requiring unmanned operation, and the embodiment of the present invention is not limited thereto.
The technical solution of the embodiment of the present invention may have the following application scenarios:
(1) The method is applied to perception in unmanned driving and unmanned operation scenes. The virtual laser radar point cloud frame and the virtual laser radar depth map generated by the embodiment of the invention can be compared in real time with the real laser radar point cloud frame and the real laser radar depth map, so as to distinguish targets inherent in the scene map from dynamically appearing targets. For example, after a perception algorithm compares the virtual laser radar point cloud frame and depth map with the real ones, targets that appear in the current dynamic scene but not in the high-precision point cloud map (the static inherent scene) are more likely to be obstacles or newly added operation objects; on the other hand, it can be clearly known which objects, such as walls and columns, are inherent in the scene, so that they are not treated as operation objects and a safe distance is kept from them;
(2) The method is applied to unmanned driving and unmanned operation simulation scenes. In existing simulation technology, a new technical direction is to construct a virtual scene by using a large amount of real sensor (laser radar, camera, etc.) data; the embodiment of the invention likewise provides a method for visualizing the laser radar field of view and the depth map field of view in the virtual scene and regenerating output data;
(3) The method is applied to unmanned positioning scenes. In the positioning system based on the high-precision point cloud map, the technical scheme of the embodiment of the invention can be used for generating the virtual laser radar point cloud view and the depth map view, and the position and the course angle of the unmanned vehicle in the map at the current moment can be reversely deduced by comparing the virtual laser radar point cloud view and the depth map view with the real laser radar point cloud view and the depth map view at the current moment, so that the positioning of the unmanned vehicle is realized.
It should be understood that the above application scenario is only an example, and the technical solution of the embodiment of the present invention, as a point cloud processing method belonging to the underlying core technology, may be applied to all relevant technical fields, which is not limited by the embodiment of the present invention.
According to the technical scheme, the virtual laser radar coordinate system is established by reading map point cloud data and inputting preset point cloud intercepting parameters, all point clouds in the map point cloud data are converted into the virtual laser radar coordinate system, the point clouds in the first virtual point cloud set are subjected to visual field range and visibility screening to generate an intermediate matrix, point cloud depth values of point cloud three-dimensional coordinates corresponding to elements in the intermediate matrix are separated to virtual laser radar point cloud frames and virtual laser radar depth maps corresponding to the virtual laser radar point cloud frames, and cavity filling operation is carried out on the virtual laser radar depth maps. According to the technical scheme provided by the embodiment of the invention, from the physical imaging essence, the virtual laser radar point cloud frame and the virtual laser radar depth map are generated by utilizing the designed intermediate matrix, so that the technical processing link is simplified, and the point cloud processing efficiency and instantaneity are greatly improved; meanwhile, when the intermediate matrix is generated, the shielding relation among multiple sceneries is judged, the invisible problem of the object with the shielded background is properly solved, and the visibility and the authenticity of the generated virtual laser radar point cloud frame and the virtual laser radar depth map corresponding to the virtual laser radar point cloud frame are greatly increased; in addition, the virtual laser radar depth map is generated and then is subjected to hole filling operation, the light impermeability of laser radar imaging is fully considered, the hole filling is performed in a mode of being more in line with physical characteristics, a small number of rear scene object points possibly missing on a foreground object are filtered, and the visibility and the 
authenticity of the virtual laser radar depth map are further improved.
Example IV
Fig. 9 is a schematic structural diagram of a point cloud frame and depth map generating device according to a fourth embodiment of the present invention. As shown in fig. 9, the apparatus includes:
the data acquisition module 41 is configured to acquire map point cloud data and preset point cloud interception parameters.
The intermediate matrix determining module 42 is configured to process the map point cloud data according to the preset point cloud capturing parameter to obtain an intermediate matrix, where the intermediate matrix includes at least the following attribute information: and (3) a point cloud three-dimensional coordinate and a point cloud depth value.
The point cloud frame and depth map generating module 43 is configured to generate a virtual lidar point cloud frame and a virtual lidar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix.
According to the technical scheme, map point cloud data and preset point cloud interception parameters are acquired through a data acquisition module, and an intermediate matrix determination module processes the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, wherein the intermediate matrix at least comprises the following attribute information: the point cloud three-dimensional coordinates and the point cloud depth values are used for generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix by the point cloud frame and depth map generation module. According to the embodiment of the invention, the map point cloud data is processed according to the preset point cloud interception parameters to obtain the intermediate matrix, and the virtual laser radar point cloud frame and the corresponding virtual laser radar depth map are generated according to the intermediate matrix, so that the generation process of the virtual laser radar point cloud frame and the depth map is simplified, and the generation efficiency and the real-time performance are greatly improved.
Further, on the basis of the above embodiment of the invention, the intermediate matrix determining module 42 includes:
The coordinate system establishment unit is configured to take the preset coordinate origin in the preset point cloud interception parameters as the coordinate origin of a virtual laser radar coordinate system, and to take the preset virtual laser radar orientation in the preset point cloud interception parameters as the positive X-axis direction of that coordinate system, thereby establishing the virtual laser radar coordinate system.
The first point cloud set acquisition unit is used for converting all point clouds in the map point cloud data into a virtual laser radar coordinate system to obtain a first virtual point cloud set.
The second point cloud set acquisition unit is configured to screen the point clouds in the first virtual point cloud set by field of view range according to the virtual laser radar horizontal field of view angle, vertical field of view angle, horizontal direction angle resolution, vertical direction angle resolution, farthest visible distance, and nearest visible distance in the preset point cloud interception parameters, to obtain a second virtual point cloud set.
The coordinate determining unit is configured to determine the depth map row coordinate and depth map column coordinate corresponding to each point cloud in the second virtual point cloud set using a preset depth map coordinate mapping formula.
The intermediate matrix creation unit is configured to take a ratio of a horizontal view angle of the virtual lidar to a horizontal direction angle resolution of the virtual lidar as a number of rows of the intermediate matrix, take a ratio of a vertical view angle of the virtual lidar to a vertical direction angle resolution of the virtual lidar as a number of columns of the intermediate matrix, and create the intermediate matrix based on the number of rows and the number of columns.
The intermediate matrix determining unit is configured to map the corresponding point clouds in the second virtual point cloud set to the intermediate matrix using a preset view occlusion condition, the depth map row coordinates, and the depth map column coordinates.
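The work of the coordinate system establishment unit and the first point cloud set acquisition unit can be sketched as follows (a minimal numpy sketch; the function name, the yaw-angle representation of the preset virtual laser radar orientation, and the Z-up axis convention are assumptions, not taken from the patent):

```python
import numpy as np

def to_virtual_lidar_frame(map_points, origin, yaw):
    """Transform map points (N x 3 array) into a virtual laser radar frame
    whose coordinate origin is `origin` and whose positive X axis points
    along heading `yaw` (radians, measured in the map XY plane).
    All names and conventions here are illustrative assumptions."""
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotation taking map-frame coordinates into the virtual lidar frame
    # (translate to the origin, then rotate by -yaw about the Z axis).
    R = np.array([[  c,   s, 0.0],
                  [ -s,   c, 0.0],
                  [0.0, 0.0, 1.0]])
    return (map_points - np.asarray(origin)) @ R.T
```

A point lying exactly along the preset orientation then lands on the positive X axis of the virtual frame, which is the convention the coordinate system establishment unit describes.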
Further, on the basis of the above embodiment of the present invention, the second point cloud set acquisition unit is specifically configured to:
determining the distance from each point cloud in the first virtual point cloud set to the coordinate origin;
if the distance is greater than the virtual laser radar farthest visible distance, or the distance is less than the virtual laser radar nearest visible distance, removing the corresponding point cloud from the first virtual point cloud set;
if the angle between the vector from a point cloud to the coordinate origin and the OXZ coordinate plane of the virtual laser radar coordinate system is greater than half of the virtual laser radar horizontal field of view angle, or less than the negative of half of the virtual laser radar horizontal field of view angle, removing the corresponding point cloud from the first virtual point cloud set;
if the angle between the vector from a point cloud to the coordinate origin and the OXY coordinate plane of the virtual laser radar coordinate system is greater than half of the virtual laser radar vertical field of view angle, or less than the negative of half of the virtual laser radar vertical field of view angle, removing the corresponding point cloud from the first virtual point cloud set;
and taking the first virtual point cloud set after removal as the second virtual point cloud set.
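The removal steps above can be sketched as one vectorized filter (illustrative; it assumes X forward, Y left, Z up in the virtual laser radar frame, uses the azimuth as the angle to the OXZ plane so that points behind the sensor are also rejected for fields of view under 180°, and the function and parameter names are not from the patent):

```python
import numpy as np

def fov_filter(points, r_min, r_max, h_fov, v_fov):
    """Keep only points inside the virtual laser radar field of view.
    `points` is N x 3 in the virtual lidar frame; `h_fov`/`v_fov` are
    full field-of-view angles in radians. Sketch of the screening steps."""
    y, z = points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    # Range gate: between nearest and farthest visible distance.
    keep = (r >= r_min) & (r <= r_max)
    # Horizontal screening: azimuth limited to +/- h_fov / 2.
    az = np.arctan2(y, points[:, 0])
    keep &= np.abs(az) <= h_fov / 2
    # Vertical screening: elevation (angle to OXY plane) within +/- v_fov / 2.
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    keep &= np.abs(el) <= v_fov / 2
    return points[keep]
```

Points failing any one of the three gates are eliminated, mirroring the three removal clauses above.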
Further, on the basis of the above embodiment of the present invention, the intermediate matrix determining unit is specifically configured to:
taking the distance from each point cloud in the second virtual point cloud set to the coordinate origin as the point cloud depth value of that point cloud;
and, when the point cloud depth value satisfies the preset view occlusion condition, taking the depth map row coordinate and depth map column coordinate as the position index into the intermediate matrix, and filling the point cloud three-dimensional coordinates and point cloud depth value of the corresponding point cloud into the corresponding position of the intermediate matrix according to that position index.
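A sketch of how the intermediate matrix creation and determining units might cooperate: the matrix dimensions come from the field-of-view/resolution ratios, and the occlusion condition keeps, per cell, the point with the smallest depth. The row/column mapping formula and the (rows, cols, 4) array layout are assumptions, since the preset depth map coordinate mapping formula is not reproduced in this excerpt:

```python
import numpy as np

def build_intermediate_matrix(points, h_fov, v_fov, h_res, v_res):
    """Project FOV-filtered points (virtual lidar frame) into an
    intermediate matrix of shape (rows, cols, 4) holding (x, y, z, depth)
    per cell. NaN marks empty cells. Mapping details are illustrative."""
    rows = int(round(h_fov / h_res))   # rows = horizontal FOV / horizontal resolution
    cols = int(round(v_fov / v_res))   # cols = vertical FOV / vertical resolution
    M = np.full((rows, cols, 4), np.nan)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                          # horizontal angle
    el = np.arcsin(z / np.maximum(depth, 1e-9))    # vertical angle
    i = np.clip(((az + h_fov / 2) / h_res).astype(int), 0, rows - 1)
    j = np.clip(((el + v_fov / 2) / v_res).astype(int), 0, cols - 1)
    for k in range(len(points)):
        cell = M[i[k], j[k]]
        # View occlusion condition: fill only if the cell is empty or
        # this point is nearer than the point already stored there.
        if np.isnan(cell[3]) or depth[k] < cell[3]:
            M[i[k], j[k]] = (x[k], y[k], z[k], depth[k])
    return M
```

With this layout, the later depth map is simply the last channel of the matrix, and the point cloud frame is the first three channels.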
Further, on the basis of the above embodiment of the present invention, the point cloud frame and depth map generating module 43 includes:
and the depth map creating unit is used for creating the virtual laser radar depth maps with the same size according to the number of rows and the number of columns of the intermediate matrix.
And the point cloud frame and depth map generating unit is used for storing the point cloud three-dimensional coordinates corresponding to each element in the intermediate matrix into the virtual laser radar point cloud frame, filling the point cloud depth values corresponding to each element in the intermediate matrix into the corresponding positions of the virtual laser radar depth map, and executing the cavity filling operation on the virtual laser radar depth map.
Further, on the basis of the above embodiment of the present invention, the point cloud frame and depth map generating unit is further configured to:
acquiring the nearest neighbor pixels of each pixel in the virtual laser radar depth map, and determining the minimum pixel value among those nearest neighbors;
if the pixel value of a pixel in the virtual laser radar depth map is null, setting the pixel value of that pixel to the minimum pixel value;
if the pixel value of a pixel in the virtual laser radar depth map is less than or equal to the minimum pixel value, leaving the pixel value of that pixel unmodified;
and if the pixel value of a pixel in the virtual laser radar depth map is greater than the minimum pixel value, setting the pixel value of that pixel to the minimum pixel value.
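The hole filling rules above amount to a local minimum filter over each pixel's nearest neighbours. A minimal sketch follows (the 4-connected neighbourhood and the use of NaN as the null value are assumptions; the patent does not specify either):

```python
import numpy as np

def fill_holes(depth):
    """Hole filling as described in the text: for every pixel, compute the
    minimum among its (4-connected, as an assumption) nearest neighbours;
    null pixels become that minimum, and pixels above it are clamped to it.
    Works on a copy so every pixel sees the *original* neighbour values."""
    out = depth.copy()
    rows, cols = depth.shape
    for r in range(rows):
        for c in range(cols):
            neigh = [depth[rr, cc]
                     for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < rows and 0 <= cc < cols
                     and not np.isnan(depth[rr, cc])]
            if not neigh:
                continue  # no valid neighbours: leave the pixel as-is
            m = min(neigh)
            if np.isnan(depth[r, c]) or depth[r, c] > m:
                out[r, c] = m
    return out
```

Filling holes with the neighbourhood minimum is conservative for obstacle avoidance: a gap in the depth map is treated as being at least as close as its closest valid neighbour.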
Further, on the basis of the above embodiment of the present invention, the preset point cloud interception parameters include at least one of the following: a preset coordinate origin, a preset virtual laser radar orientation, a virtual laser radar horizontal field of view angle, a virtual laser radar vertical field of view angle, a virtual laser radar horizontal direction angle resolution, a virtual laser radar vertical direction angle resolution, a virtual laser radar farthest visible distance, and a virtual laser radar nearest visible distance.
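For illustration, the parameter list above could be grouped into a single configuration object (the field names, types, and units are assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class PointCloudInterceptParams:
    """Illustrative container for the preset point cloud interception
    parameters enumerated above; every field name is an assumption."""
    origin: tuple    # preset coordinate origin (x, y, z) in the map frame
    yaw: float       # preset virtual laser radar orientation (radians)
    h_fov: float     # virtual laser radar horizontal field of view angle (radians)
    v_fov: float     # virtual laser radar vertical field of view angle (radians)
    h_res: float     # horizontal direction angle resolution (radians)
    v_res: float     # vertical direction angle resolution (radians)
    r_max: float     # farthest visible distance (meters)
    r_min: float     # nearest visible distance (meters)
```

Grouping the parameters this way keeps the module interfaces above to a single argument rather than eight.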
The point cloud frame and depth map generating device provided by the embodiment of the invention can execute the point cloud frame and depth map generating method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Embodiment Five
Fig. 10 shows a schematic diagram of an electronic device 50 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in Fig. 10, the electronic device 50 includes at least one processor 51 and a memory communicatively connected to the at least one processor 51, such as a read-only memory (ROM) 52 and a random access memory (RAM) 53. The memory stores a computer program executable by the at least one processor, and the processor 51 may perform various appropriate actions and processes according to the computer program stored in the ROM 52 or loaded from the storage unit 58 into the RAM 53. The RAM 53 may also store various programs and data required for the operation of the electronic device 50. The processor 51, the ROM 52, and the RAM 53 are connected to each other via a bus 54. An input/output (I/O) interface 55 is also connected to the bus 54.
Various components in the electronic device 50 are connected to the I/O interface 55, including: an input unit 56 such as a keyboard, a mouse, etc.; an output unit 57 such as various types of displays, speakers, and the like; a storage unit 58 such as a magnetic disk, an optical disk, or the like; and a communication unit 59 such as a network card, modem, wireless communication transceiver, etc. The communication unit 59 allows the electronic device 50 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The processor 51 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 51 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any other suitable processor, controller, or microcontroller. The processor 51 performs the various methods and processes described above, such as the point cloud frame and depth map generation method.
In some embodiments, the point cloud frame and depth map generation method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 58. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 50 via the ROM 52 and/or the communication unit 59. When the computer program is loaded into RAM 53 and executed by processor 51, one or more steps of the point cloud frame and depth map generation method described above may be performed. Alternatively, in other embodiments, the processor 51 may be configured to perform the point cloud frame and depth map generation method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for generating a point cloud frame and a depth map, the method comprising:
acquiring map point cloud data and preset point cloud interception parameters;
processing the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, wherein the intermediate matrix comprises at least the following attribute information: point cloud three-dimensional coordinates and point cloud depth values;
and generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix.
2. The method of claim 1, wherein the preset point cloud interception parameters comprise at least one of the following: a preset coordinate origin, a preset virtual laser radar orientation, a virtual laser radar horizontal field of view angle, a virtual laser radar vertical field of view angle, a virtual laser radar horizontal direction angle resolution, a virtual laser radar vertical direction angle resolution, a virtual laser radar farthest visible distance, and a virtual laser radar nearest visible distance.
3. The method of claim 1, wherein said processing the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix comprises:
taking a preset coordinate origin in the preset point cloud interception parameters as the coordinate origin of a virtual laser radar coordinate system, and taking a preset virtual laser radar orientation in the preset point cloud interception parameters as the positive X-axis direction of the virtual laser radar coordinate system, to establish the virtual laser radar coordinate system;
converting all point clouds in the map point cloud data into the virtual laser radar coordinate system to obtain a first virtual point cloud set;
according to the virtual laser radar horizontal field of view angle, the virtual laser radar vertical field of view angle, the virtual laser radar horizontal direction angle resolution, the virtual laser radar vertical direction angle resolution, the virtual laser radar farthest visible distance, and the virtual laser radar nearest visible distance in the preset point cloud interception parameters, performing field of view range screening on the point clouds in the first virtual point cloud set to obtain a second virtual point cloud set;
determining depth map row coordinates and depth map column coordinates corresponding to each point cloud in the second virtual point cloud set by using a preset depth map coordinate mapping formula;
taking the ratio of the virtual laser radar horizontal view angle to the virtual laser radar horizontal direction angle resolution as the row number of the intermediate matrix, taking the ratio of the virtual laser radar vertical view angle to the virtual laser radar vertical direction angle resolution as the column number of the intermediate matrix, and creating the intermediate matrix based on the row number and the column number;
and mapping the corresponding point clouds in the second virtual point cloud set to the intermediate matrix by using a preset view occlusion condition, the depth map row coordinates, and the depth map column coordinates.
4. The method of claim 3, wherein said performing a field of view screening on the point clouds in the first set of virtual point clouds according to the virtual lidar horizontal angle of view, the virtual lidar vertical angle of view, the virtual lidar horizontal angle resolution, the virtual lidar vertical angle resolution, the virtual lidar furthest visible distance, and the virtual lidar closest visible distance in the preset point cloud interception parameters to obtain a second set of virtual point clouds comprises:
determining the distance from each point cloud in the first virtual point cloud set to the coordinate origin;
if the distance is greater than the furthest visible distance of the virtual laser radar or the distance is less than the closest visible distance of the virtual laser radar, eliminating the corresponding point cloud in the first virtual point cloud set;
if the angle between the vector from a point cloud to the coordinate origin and the OXZ coordinate plane of the virtual laser radar coordinate system is greater than half of the virtual laser radar horizontal field of view angle, or less than the negative of half of the virtual laser radar horizontal field of view angle, removing the corresponding point cloud from the first virtual point cloud set;
if the angle between the vector from a point cloud to the coordinate origin and the OXY coordinate plane of the virtual laser radar coordinate system is greater than half of the virtual laser radar vertical field of view angle, or less than the negative of half of the virtual laser radar vertical field of view angle, removing the corresponding point cloud from the first virtual point cloud set;
and taking the first virtual point cloud set after removal as the second virtual point cloud set.
5. A method according to claim 3, wherein said mapping the corresponding point cloud in the second set of virtual point clouds to the intermediate matrix using a preset view occlusion condition and the depth map row coordinates and the depth map column coordinates comprises:
taking the distance from each point cloud in the second virtual point cloud set to the coordinate origin as the point cloud depth value of that point cloud;
and when the point cloud depth value satisfies the preset view occlusion condition, taking the depth map row coordinate and the depth map column coordinate as the position index into the intermediate matrix, and filling the point cloud three-dimensional coordinates and the point cloud depth value corresponding to each point cloud into the corresponding position of the intermediate matrix according to the corresponding position index.
6. The method of claim 1, wherein the generating a virtual lidar point cloud frame and a virtual lidar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix comprises:
creating a virtual laser radar depth map of the same size according to the number of rows and the number of columns of the intermediate matrix;
storing the point cloud three-dimensional coordinates corresponding to each element in the intermediate matrix into the virtual laser radar point cloud frame, filling the point cloud depth value corresponding to each element in the intermediate matrix into the corresponding position of the virtual laser radar depth map, and performing a hole filling operation on the virtual laser radar depth map.
7. The method of claim 6, wherein performing a hole filling operation on the virtual lidar depth map comprises:
acquiring the nearest neighbor pixels of each pixel in the virtual laser radar depth map, and determining the minimum pixel value among those nearest neighbors;
if the pixel value of a pixel in the virtual laser radar depth map is null, setting the pixel value of that pixel to the minimum pixel value;
if the pixel value of a pixel in the virtual laser radar depth map is less than or equal to the minimum pixel value, leaving the pixel value of that pixel unmodified;
and if the pixel value of a pixel in the virtual laser radar depth map is greater than the minimum pixel value, setting the pixel value of that pixel to the minimum pixel value.
8. A point cloud frame and depth map generation apparatus, the apparatus comprising:
the data acquisition module is used for acquiring map point cloud data and preset point cloud interception parameters;
the intermediate matrix determining module is used for processing the map point cloud data according to the preset point cloud interception parameters to obtain an intermediate matrix, wherein the intermediate matrix comprises at least the following attribute information: point cloud three-dimensional coordinates and point cloud depth values; and
the point cloud frame and depth map generation module is used for generating a virtual laser radar point cloud frame and a virtual laser radar depth map according to the point cloud three-dimensional coordinates and the point cloud depth values in the intermediate matrix.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the point cloud frame and depth map generation method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions which, when executed, cause a processor to implement the point cloud frame and depth map generation method of any one of claims 1-7.
CN202310643979.3A 2023-06-01 2023-06-01 Point cloud frame and depth map generation method and device, electronic equipment and storage medium Pending CN116664648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310643979.3A CN116664648A (en) 2023-06-01 2023-06-01 Point cloud frame and depth map generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310643979.3A CN116664648A (en) 2023-06-01 2023-06-01 Point cloud frame and depth map generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116664648A true CN116664648A (en) 2023-08-29

Family

ID=87709272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310643979.3A Pending CN116664648A (en) 2023-06-01 2023-06-01 Point cloud frame and depth map generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116664648A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination