CN116400372A - Laser radar point cloud extraction method and device, electronic equipment and storage medium - Google Patents

Laser radar point cloud extraction method and device, electronic equipment and storage medium

Info

Publication number
CN116400372A
Authority
CN
China
Prior art keywords
coordinate system
laser radar
point cloud
coordinates
radar point
Prior art date
Legal status
Pending
Application number
CN202310066067.4A
Other languages
Chinese (zh)
Inventor
周文彬
李佳恒
蔡登胜
孙金泉
刘平
李逸岳
Current Assignee
Guangxi Liugong Machinery Co Ltd
Original Assignee
Guangxi Liugong Machinery Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangxi Liugong Machinery Co Ltd filed Critical Guangxi Liugong Machinery Co Ltd
Priority to CN202310066067.4A
Publication of CN116400372A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a laser radar point cloud extraction method, a laser radar point cloud extraction device, electronic equipment and a storage medium. The method comprises the following steps: determining a first formula, wherein the first formula is used for converting a camera coordinate system into a laser radar coordinate system, and constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system; based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points; converting the coordinates of the image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle, and a maximum value and a minimum value in the first angle; traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle; and extracting the laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum and minimum values in the first angle. By introducing the virtual coordinate system and the relation between the coordinates and the angles of the image pixel points, the method enables the point cloud and the image to be fused and perceived in real time.

Description

Laser radar point cloud extraction method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a laser radar point cloud extraction method, a device, electronic equipment and a storage medium.
Background
In the unmanned driving field, fusion perception of cameras and laser radars is increasingly common; a multi-sensor fusion system can provide more accurate and richer environmental information to complete higher-level tasks such as target detection, autonomous positioning and path planning.
In the prior art, fusion perception of a monocular camera and a laser radar works as follows: on the premise that the camera internal parameters and the camera-laser radar external parameters are known, all laser radar point clouds are traversed, each laser radar point cloud coordinate is converted into camera pixel point coordinates according to the formula for converting the laser radar coordinate system into the camera pixel coordinate system, and it is judged whether those pixel point coordinates lie in the image region of interest; if so, the pixel point is retained.
However, this scheme requires a large number of matrix operations, so the amount of calculation for one frame of laser radar point cloud is large and time-consuming, and the more laser radar points there are, the longer the algorithm takes. It is therefore difficult to achieve real-time processing, and in particular it is difficult to obtain the processing result in real time on a vehicle-end processor, so the result cannot be obtained in time while the vehicle is in the unmanned state, which can affect the safe running of the vehicle.
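For comparison, a minimal Python/numpy sketch of this conventional projection-and-check approach might look as follows (the intrinsic matrix K, the extrinsics R, T and the region-of-interest bounds are placeholder example values, not parameters from this application); the per-point matrix multiplications here are the computational burden discussed above:

```python
import numpy as np

# Example calibration values (placeholders only).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])      # camera intrinsic matrix
R = np.eye(3)                        # lidar-to-camera rotation
T = np.array([0.1, -0.2, 0.05])      # lidar-to-camera translation
u_min, u_max, v_min, v_max = 400, 900, 200, 500  # image region of interest (pixels)

def extract_roi_points_by_projection(points_lidar):
    """points_lidar: (N, 3) lidar coordinates; returns the points whose projections
    fall inside the image region of interest (the conventional, matrix-heavy way)."""
    cam = points_lidar @ R.T + T                 # lidar -> camera coordinates
    in_front = cam[:, 2] > 1e-6                  # keep only points in front of the camera
    pix = cam[in_front] @ K.T                    # homogeneous pixel coordinates
    u = pix[:, 0] / pix[:, 2]
    v = pix[:, 1] / pix[:, 2]
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_lidar[in_front][inside]
```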
Disclosure of Invention
The invention provides a laser radar point cloud extraction method, a device, electronic equipment and a storage medium, which are used for solving the problems of large calculation amount and large time consumption of an algorithm in the prior art, and greatly shortening the algorithm time, so that the point cloud and an image can be fused and perceived in real time.
According to an aspect of the present invention, there is provided a laser radar point cloud extraction method, including:
determining a first formula, wherein the first formula is used for converting a camera coordinate system into a laser radar coordinate system, and constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system;
based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points;
converting coordinates of an image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle;
traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud;
and extracting a laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum and minimum values in the first angles.
According to another aspect of the present invention, there is provided a laser radar point cloud extraction apparatus, including:
the first determining module is used for determining a first formula, the first formula is used for converting a camera pixel coordinate system into a laser radar coordinate system, and a virtual coordinate system is built based on the laser radar coordinate system and the camera coordinate system;
the second determining module is used for determining the corresponding relation between the coordinates and the angles of the image pixel points based on the virtual coordinate system;
the conversion module is used for converting the coordinates of the image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle;
the traversing module is used for traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud;
and the extraction module is used for extracting the laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum value and the minimum value in the first angle.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the laser radar point cloud extraction method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the laser radar point cloud extraction method according to any one of the embodiments of the present invention when executed.
The laser radar point cloud extraction method provided by the embodiment of the invention comprises the following steps: determining a first formula, wherein the first formula is used for converting a camera coordinate system into a laser radar coordinate system, and constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system; based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points; converting coordinates of an image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle; traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud; and extracting a laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum and minimum values in the first angles. The method solves the problems of large calculated amount and large time consumption of the algorithm in the prior art, and greatly shortens the algorithm time, so that the point cloud and the image can be fused and perceived in real time.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a laser radar point cloud extraction method according to a first embodiment of the present invention;
FIG. 2 is a schematic view of an angle definition according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a coordinate system translation according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a laser radar point cloud extraction method according to a second embodiment of the present invention;
fig. 5 is a schematic flow chart of a laser radar point cloud extraction method according to an exemplary embodiment of the present invention;
fig. 6 is a schematic structural diagram of a laser radar point cloud extracting device according to a third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device of a laser radar point cloud extraction method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention. It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the devices in the embodiments of the present invention are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Example 1
Fig. 1 is a schematic flow chart of a method for extracting a laser radar point cloud according to an embodiment of the present invention, where the method is applicable to the case where a monocular camera and a laser radar perform fusion sensing. The method may be performed by a laser radar point cloud extraction device, which may be implemented by software and/or hardware and is generally integrated on an electronic device; in this embodiment, the electronic device includes but is not limited to unmanned equipment such as unmanned aerial vehicles, unmanned excavators, unmanned loaders, unmanned road rollers, autonomous vehicles and robots.
As shown in fig. 1, a method for extracting a point cloud of a lidar according to an embodiment of the present invention includes the following steps:
s110, determining a first formula, wherein the first formula is used for converting a camera coordinate system into a laser radar coordinate system, and constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system.
Wherein the camera coordinate system can be converted to the lidar coordinate system by a first formula. The camera coordinate system may be a coordinate system established with the camera as the origin, defined to describe the position of an object from the perspective of the camera and serving as an intermediate link between the world coordinate system and the image/pixel coordinate system. The lidar coordinate system may be a world coordinate system. The world coordinate system is a user-defined three-dimensional coordinate system, introduced with a chosen point as its origin to describe the position of an object in the real world.
Wherein the first formula may be determined based on a formula in which the lidar coordinate system is converted to a camera coordinate system. When converting the point coordinates in the laser radar coordinate system to the coordinates in the camera coordinate system, the camera internal reference matrix is a matrix of 3 rows and 4 columns, and the inverse matrix cannot be solved, so that the coordinates of the point in the laser radar coordinate system cannot be obtained according to the point coordinates in the camera coordinate system, and therefore, a formula for converting the laser radar coordinate system to the camera coordinate system needs to be decomposed and converted to obtain a first formula.
In the present embodiment, the camera coordinate system is converted to the lidar coordinate system by a first formula. For example, the coordinates of a certain point in space in the camera coordinate system may be (u, v), and the coordinates of the certain point (u, v) may be converted into the lidar coordinate system by the first formula, and the coordinates of the point in the lidar coordinate system may be (x, y, z).
The virtual coordinate system may be a virtually existing coordinate system, by which points in the camera coordinate system may be linked to points in the lidar coordinate system. The virtual coordinate system may be constructed based on the lidar coordinate system and the camera coordinate system.
In this embodiment, after determining the first formula, a virtual coordinate system may be constructed based on the lidar coordinate system and the camera coordinate system. Since the camera pixel point coordinates (u, v) in the camera coordinate system are points on a two-dimensional plane, in order to eliminate the influence of the depth value Z1 in the camera coordinate system on the converted coordinates, a conversion formula needs to be found when constructing the virtual coordinate system, so that the virtual coordinate system has a corresponding relation with both the laser radar coordinate system and the camera coordinate system.
And S120, based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points.
Wherein the angles may include α' and β'. The image pixel point coordinates may be the coordinates of a certain point in the camera coordinate system.
In this embodiment, after a virtual coordinate system is built based on a conversion formula of coordinates of a laser radar point cloud under a virtual coordinate system and coordinates of image pixels, a correspondence between an included angle of the laser radar point cloud under the laser radar point cloud coordinate system and an included angle under the virtual coordinate system may be deduced under the virtual coordinate system, and the laser radar point cloud coordinates may be converted into angles under the virtual coordinate system based on the correspondence; based on the angle of the laser radar point cloud under the virtual coordinate system, the corresponding relation between the coordinates and the angles of the image pixel points can be further determined.
S130, converting coordinates of the image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle.
The image interested area can be an area where the corresponding position of the person or the object to be determined by the unmanned equipment is located. The coordinates of the image region of interest may include coordinates of vertices of the region of interest border. The first angle may be an angle of a frame vertex of the region of interest of the image under a virtual coordinate system. The maximum value may be the maximum value of all the first angles and the minimum value may be the minimum value of all the first angles.
In this embodiment, the coordinates of the image region of interest in the camera pixel coordinate system may be converted to the angles in the virtual coordinate system according to the correspondence between the coordinates of the image pixel points and the angles. For example, if four corner points of the image region of interest are taken as four coordinate points, the four coordinate points may be converted into angles under the virtual coordinate system according to the correspondence, a minimum angle of the four angles is taken as a minimum value, and a maximum angle of the four angles is taken as a maximum value.
And S140, traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud.
The point cloud is a massive point set of target surface characteristics, and after the space coordinates of each sampling point on the surface of the object are acquired, a point set is obtained, which is called as the point cloud. The second angle may be an angle of a point in the lidar coordinate system in the virtual coordinate system.
In this embodiment, there are multiple lidar point cloud coordinates in the lidar point cloud coordinate system, and for each lidar point cloud coordinate, the lidar point cloud coordinate may be converted to a virtual coordinate system according to a conversion formula between the lidar point cloud coordinate and the virtual coordinate system to obtain a corresponding second angle, until all the lidar point cloud coordinates have been traversed, and then the traversing may be ended.
And S150, extracting a laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum value and the minimum value in the first angle.
The laser radar point cloud region of interest may be a region formed by points of points in the image region of interest on a laser radar coordinate system.
In this embodiment, after determining the first angle and the second angle corresponding to each lidar point cloud, the lidar point cloud region of interest may be extracted from the image region of interest according to the second angle corresponding to each lidar point cloud and the relationship between the maximum value and the minimum value in the angles. For example, if the second angle corresponding to the laser radar point cloud is between the maximum value and the minimum value in the first angle, the position where the image pixel coordinate corresponding to the laser radar point cloud is located may be considered as the coordinate in the image interested area, and it is determined that the laser radar point cloud needs to be extracted; and after the second angle corresponding to each laser radar point cloud is judged, the laser radar point clouds meeting the conditions can be extracted, and the area formed by all the laser radar point clouds meeting the conditions is used as the laser radar point cloud interested area.
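As an illustration of this extraction step, a minimal Python sketch might look as follows (the to_angles callable stands for the conversion of one laser radar point into its second angle (α'', β'') and is a hypothetical name, not part of this application):

```python
import numpy as np

def extract_roi_pointcloud(points_lidar, alpha_min, alpha_max, beta_min, beta_max, to_angles):
    """points_lidar: (N, 3) lidar coordinates; the alpha/beta bounds are the maximum and
    minimum values of the first angles; to_angles maps one point to its (alpha'', beta'')."""
    kept = []
    for point in points_lidar:
        a2, b2 = to_angles(point)                     # second angle of this point
        if alpha_min <= a2 <= alpha_max and beta_min <= b2 <= beta_max:
            kept.append(point)                        # the point lies inside the image ROI
    return np.asarray(kept)                           # laser radar point cloud region of interest
```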
The laser radar point cloud extraction method provided by the embodiment of the invention comprises the following steps: determining a first formula for converting a camera pixel coordinate system to a lidar coordinate system; constructing a virtual coordinate system based on the laser radar coordinate system and a camera coordinate system; based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points; converting coordinates of an image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle; traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud; and extracting a laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum and minimum values in the first angles. According to the method, the problems of large calculated amount and large time consumption of the algorithm in the prior art are solved by introducing the virtual coordinate system, and the algorithm time is greatly shortened, so that the point cloud and the image can be fused and perceived in real time.
On the basis of the above embodiments, modified embodiments of the above embodiments are proposed, and it is to be noted here that only the differences from the above embodiments are described in the modified embodiments for the sake of brevity of description.
In one embodiment, determining the first formula includes: and decomposing and converting the formula for converting the laser radar coordinate system into the camera coordinate system to obtain a first formula.
The transformation formula for converting the laser radar coordinate system into the camera pixel coordinate system is as follows:
$$Z_1\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x&0&c_x&0\\ 0&f_y&c_y&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\ 0&1\end{bmatrix}\begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$
wherein f_x, f_y, c_x, c_y represent the camera internal parameters, R represents a rotation matrix, T represents a translation matrix, (u, v) represents the coordinates of the point in the camera pixel coordinate system, and Z1 represents the depth of the point in the camera coordinate system, i.e. the coordinate value of the point on the z-axis of the camera coordinate system. In this embodiment, this formula for converting the laser radar coordinate system into the camera pixel coordinate system may be reversely deduced, decomposed and converted to obtain the first formula; the specific decomposition and conversion process is as follows.
The formula is a formula for converting a laser radar coordinate system into a camera coordinate system, and the formula is decomposed and converted into the following formula:
$$Z_1\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=M\left(R\begin{bmatrix}x\\ y\\ z\end{bmatrix}+T\right)$$
wherein
$$M=\begin{bmatrix}f_x&0&c_x\\ 0&f_y&c_y\\ 0&0&1\end{bmatrix}$$
Based on the above formula, a first formula can be derived:
$$\begin{bmatrix}x\\ y\\ z\end{bmatrix}=R^{-1}\left(Z_1M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}-T\right)$$
wherein (x, y, z) represents the laser radar point cloud coordinates, R and T represent the camera external parameters, M represents the camera internal parameter matrix, Z1 represents the depth of the camera image pixel point, and (u, v) represents the image pixel point coordinates.
A point (u, v) in the camera coordinate system may be converted into the lidar coordinate system according to the first formula to obtain the point coordinate as (x, y, z).
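As an illustration, a minimal sketch of applying this first formula might look as follows in Python with numpy (the intrinsic values f_x, f_y, c_x, c_y, the extrinsics R, T and the depth Z1 are example placeholders, not parameters disclosed in this application):

```python
import numpy as np

fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0
M = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])       # 3x3 camera internal parameter matrix (invertible)
R = np.eye(3)                         # rotation part of the external parameters (example)
T = np.array([0.1, -0.2, 0.05])       # translation part of the external parameters (example)

def pixel_to_lidar(u, v, z1):
    """First formula: (x, y, z) = R^-1 (Z1 * M^-1 * [u, v, 1]^T - T)."""
    uv1 = np.array([u, v, 1.0])
    return np.linalg.inv(R) @ (z1 * np.linalg.inv(M) @ uv1 - T)

point_lidar = pixel_to_lidar(700.0, 300.0, z1=10.0)   # example usage with an assumed depth Z1
```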
According to the method, the formula is improved and optimized on the basis of the original formula, and the formula converted from the image coordinate system to the laser radar point cloud coordinate is deduced, so that matrix operation is not needed in the process of extracting the region of interest, the calculated amount of the algorithm is greatly reduced, and the operation is simpler.
According to the first formula, all the matrices involved are square matrices whose inverses can be obtained, so in theory the image pixel point coordinates (u, v) in the camera coordinate system can be converted into the laser radar point cloud coordinates (x, y, z) in the laser radar coordinate system. However, to find the laser radar point cloud coordinates (x, y, z) corresponding to the image pixel coordinates (u, v), it is also necessary to find the Z1 value corresponding to those pixel coordinates. The magnitude of Z1 is related to the distance of the photographed object (i.e., the region of interest): the farther the object, the larger Z1; the closer the object, the smaller Z1. Therefore, the formula needs to be transformed to find a correspondence between the image pixel point coordinates and the laser radar point cloud coordinates that eliminates the influence of Z1. The conversion formula may be derived from the following formula:
$$Z_1M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=R\begin{bmatrix}x\\ y\\ z\end{bmatrix}+T$$
Let
$$\begin{bmatrix}x'\\ y'\\ z'\end{bmatrix}=\begin{bmatrix}x\\ y\\ z\end{bmatrix}+R^{-1}T$$
This is the conversion formula between the laser radar point cloud coordinate system and the virtual coordinate system. From the above formulas, it can be derived that:
$$\begin{bmatrix}x'\\ y'\\ z'\end{bmatrix}=Z_1R^{-1}M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}$$
A relation can thus be found in the coordinate system:
$$\frac{y'}{z'}=\frac{\left(R^{-1}M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}\right)_{2}}{\left(R^{-1}M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}\right)_{3}}$$
$$\frac{x'}{z'}=\frac{\left(R^{-1}M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}\right)_{1}}{\left(R^{-1}M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}\right)_{3}}$$
where the subscripts 1, 2 and 3 denote the components of the vector; the Z1 factor cancels in these ratios.
Further, the correspondence between the coordinates and the angles of the image pixels is as follows:
$$\alpha'=\arctan\frac{y'}{z'}$$
$$\beta'=\arctan\frac{x'}{z'}$$
wherein α' and β' represent the included angles between the laser radar point cloud and the origin in the virtual coordinate system, (x', y', z') represents the coordinates of the laser radar point cloud in the virtual coordinate system, and (u, v) represents the coordinates of the image pixel point.
From the formula
$$\begin{bmatrix}x'\\ y'\\ z'\end{bmatrix}=Z_1R^{-1}M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}$$
it can be seen that the angles α', β' are not affected by the Z1 value, because Z1 only scales the vector (x', y', z') and the ratios y'/z' and x'/z' are unchanged. Fig. 2 is a schematic diagram of an angle definition provided by an embodiment of the present invention. As shown in Fig. 2, the angles α, β of the laser radar point cloud coordinates (x, y, z) in the coordinate system are defined as follows: a point a(x, y, z) is selected and projected onto the yz plane to obtain the point a'(0, y, z); the perpendicular from the point a' to the y axis intersects the y axis at the point a''(0, y, 0); the included angle between the straight line Oa' and the positive direction of the z axis of the coordinate system is α, and the included angle between the straight line a''a and the positive direction of the z axis of the coordinate system is β, so that it can be deduced that
$$\tan\alpha=\frac{y}{z},\qquad \tan\beta=\frac{x}{z}$$
Through the above derivation, the included angles α and β between the laser radar point cloud coordinates and the origin of the coordinate system can be obtained; however, α and β are affected by the Z1 value, while the angles α', β' are not affected by the Z1 value. Therefore, the conversion relationship between the angles α, β and the angles α', β' needs to be derived. In addition, according to the conversion formula between (x, y, z) and (x', y', z'),
$$\begin{bmatrix}x'\\ y'\\ z'\end{bmatrix}=\begin{bmatrix}x\\ y\\ z\end{bmatrix}-\begin{bmatrix}T_x\\ T_y\\ T_z\end{bmatrix}$$
only a translational relation exists between (x, y, z) and (x', y', z'), and therefore the coordinate system needs to be translated.
Fig. 3 is a schematic diagram of a coordinate system translation provided by an embodiment of the present invention. As shown in Fig. 3, the laser radar coordinate system origin O(0, 0, 0) is translated to a new coordinate system origin O'(Tx, Ty, Tz), and the x' axis, y' axis and z' axis are respectively parallel to the x axis, y axis and z axis and point in the same positive directions. Similarly, a point a(x, y, z) is selected, where (x, y, z) are the coordinates in the original coordinate system Oxyz, and projected onto the o'y'z' plane to obtain a point a', whose coordinates in the original coordinate system Oxyz are a'(Tx, y, z); the perpendicular from the point a' to the o'y' axis intersects the o'y' axis at a point a'', whose coordinates in the original coordinate system Oxyz are a''(Tx, y, Tz). The included angle between o'a' and the positive direction of the z' axis is α', and the included angle between the straight line a''a and the positive direction of the z' axis is β'.
The following formula can be obtained according to the new coordinate system in fig. 3, and the laser radar point cloud coordinates are converted into the angles under the virtual coordinate system through the following formula:
$$\alpha'=\arctan\frac{y-T_y}{z-T_z}$$
$$\beta'=\arctan\frac{x-T_x}{z-T_z}$$
wherein α' and β' represent the angles, x, y and z represent the coordinate values of the laser radar point cloud on the x, y and z axes of the laser radar coordinate system, and (Tx, Ty, Tz) represents the coordinates of the origin of the virtual coordinate system in the laser radar coordinate system.
The values of Tx, Ty, Tz can be deduced from the following formulas:
$$\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}=R\begin{bmatrix}T_x\\ T_y\\ T_z\end{bmatrix}+T$$
$$\begin{bmatrix}T_x\\ T_y\\ T_z\end{bmatrix}=-R^{-1}T$$
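For illustration only, a minimal Python/numpy sketch of this conversion, following the reconstruction of the formulas above (R and T are example external parameters; the virtual origin (Tx, Ty, Tz) is computed as -R^-1 T):

```python
import numpy as np

R = np.eye(3)                          # example lidar-to-camera rotation
T = np.array([0.1, -0.2, 0.05])        # example lidar-to-camera translation
T_xyz = -np.linalg.inv(R) @ T          # (Tx, Ty, Tz): virtual origin in lidar coordinates

def lidar_point_to_angles(point):
    """point: (x, y, z) in the lidar coordinate system -> (alpha', beta') in the virtual one."""
    x, y, z = np.asarray(point) - T_xyz   # translate the point into the virtual coordinate system
    # np.arctan2 is used for numerical robustness; for points in front of the sensor
    # it equals arctan(y/z) and arctan(x/z) respectively.
    return np.arctan2(y, z), np.arctan2(x, z)
```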
example two
Fig. 4 is a schematic flow chart of a laser radar point cloud extraction method according to a second embodiment of the present invention, where the second embodiment is optimized based on the above embodiments. For details not yet described in detail in this embodiment, refer to embodiment one. As shown in fig. 4, a method for extracting a point cloud of a lidar according to a second embodiment of the present invention includes the following steps:
s210, determining a first formula for converting a camera coordinate system into a laser radar coordinate system.
S220, constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system.
S230, based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points.
S240, converting coordinates of the image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle.
For example, the specific method for converting the coordinates of the image region of interest into the first angle corresponding to the image region of interest under the virtual coordinate system according to the correspondence may include:
setting Z1 equal to a constant value, such as Z1=10, converting pixel coordinates (u, v) in the region of interest ROI extracted by the image into coordinates (x, y, Z) in a laser radar coordinate system, calculating angles alpha 'and angles beta' of the coordinates (x, y, Z) under a camera coordinate system, and obtaining minimum values alpha 'of angles alpha' and beta 'corresponding to the pixel coordinates (u, v) of four corner points of the region of the image ROI' min 、β′ min Maximum value alpha' max 、β′ max
S250, traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud.
S260, determining whether a second angle corresponding to each laser radar point cloud is within a maximum value interval and a minimum value interval in the first angles or not according to each laser radar point cloud.
In this embodiment, each lidar point cloud corresponds to a second angle, and after obtaining the second angle corresponding to each lidar point cloud, it may be determined whether the second angle is within a range between a maximum value and a minimum value of the first angle.
For example, a specific method for determining whether the second angle (α'', β'') corresponding to the laser radar point cloud is within the maximum and minimum intervals of the first angles may include:
Specifically, it is judged whether α'' in the second angle (α'', β'') corresponding to the laser radar point cloud lies between α'_min and α'_max, and whether β'' in the second angle (α'', β'') lies between β'_min and β'_max. If yes, the laser radar point cloud is retained; if not, the laser radar point cloud is not in the range of interest and is discarded. After one laser radar point cloud has been judged, the traversal moves on to the next laser radar point cloud for judgment, until the complete frame of laser radar point cloud has been traversed; all the remaining laser radar point clouds are then the laser point clouds corresponding to the image ROI region.
And S270, if yes, extracting the laser radar point cloud from the image region of interest.
In this embodiment, if the second angle corresponding to the lidar point cloud is within the interval between the maximum value and the minimum value in the first angle, the lidar point cloud may be determined to be the lidar point cloud to be extracted, and the lidar point cloud may be extracted from the image region of interest.
S280, taking the region formed by all the extracted laser radar point clouds as a laser radar point cloud region of interest.
In this embodiment, after obtaining the second angles corresponding to all the lidar point clouds in the maximum value and the minimum value in the first angle, the area formed by all the lidar point clouds may be used as the lidar point cloud region of interest.
The method for extracting the laser radar point cloud provided by the second embodiment of the invention comprises the following steps: determining a first formula for converting a camera coordinate system to a lidar coordinate system; constructing a virtual coordinate system based on the laser radar coordinate system and a camera coordinate system; based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points; converting coordinates of an image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle; traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud; determining whether a second angle corresponding to each laser radar point cloud is within a maximum value interval and a minimum value interval in the first angles; if yes, extracting laser radar point clouds from the image interested area; and taking the region formed by all the extracted laser radar point clouds as a laser radar point cloud region of interest. According to the method, the problems of large calculated amount and large time consumption of the algorithm in the prior art are solved by introducing the virtual coordinate system, and the algorithm time is greatly shortened, so that the point cloud and the image can be fused and perceived in real time.
The embodiment of the invention provides a specific implementation mode based on the technical scheme of each embodiment.
Fig. 5 is a schematic flow chart of a laser radar point cloud extraction method according to an exemplary embodiment of the present invention, as a specific implementation manner, as shown in fig. 5, the method includes the following steps:
and step 1, determining a formula for converting the laser radar coordinate system into a camera coordinate system.
And step 2, carrying out formula decomposition conversion deduction.
In the formula for converting the laser radar into the image, the camera internal reference matrix is a matrix of 3 rows and 4 columns, so that the inversion cannot be performed. The present technique decomposes and converts the formula from a camera coordinate system to a lidar coordinate system.
And 3, designing a virtual coordinate system and removing the camera depth Z from the included angle.
In this step, it can be found from the relation between the converted camera coordinate system and the laser radar coordinate system that the depth Z corresponding to different image pixels (u, v) under the camera coordinate system is uncertain, because the depth of each camera image pixel has been discarded. Therefore, a virtual coordinate system is designed whose origin is the origin of the camera coordinate system and whose coordinate axes are the same as the laser radar coordinate axes, so that the virtual coordinate system has only a rotation relation with the camera coordinate system and only a translation relation with the laser radar coordinate system. Thereby, the laser radar coordinates (x, y, z) are converted into the angles (α', β'), so that Z in the formula is eliminated and the formula conversion is completed.
And 4, converting the camera coordinate system into a virtual coordinate system and solving the minimum and maximum angle values.
In this step, the obtained image coordinate system coordinates (u, v) are converted into angles (α', β') in the virtual coordinate system, and the maximum and minimum values of the angles α' and β' are obtained.
And 5, traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into a virtual coordinate system for clipping.
The laser radar coordinates (x, y, z) are traversed and converted into angles (α', β') under the virtual coordinate system, and it is judged whether these angles lie between the maximum and minimum angles; if so, the point is retained, otherwise the point is discarded. The method of extracting the point cloud corresponding to the image coordinate system is thereby realized.
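For illustration, steps 4 and 5 might be combined into the following minimal Python/numpy sketch (it reuses the hypothetical pixel_to_lidar and lidar_point_to_angles helpers from the earlier sketches; all numeric values are placeholders):

```python
import numpy as np

def extract_lidar_roi(points_lidar, roi_corners, z1_fixed=10.0):
    """points_lidar: (N, 3) lidar points; roi_corners: four (u, v) corners of the image ROI."""
    # Step 4: convert the ROI corners into virtual-coordinate angles and take min/max.
    angles = [lidar_point_to_angles(pixel_to_lidar(u, v, z1_fixed)) for u, v in roi_corners]
    a_min = min(a for a, _ in angles)
    a_max = max(a for a, _ in angles)
    b_min = min(b for _, b in angles)
    b_max = max(b for _, b in angles)

    # Step 5: traverse the point cloud and keep the points whose angles fall in both intervals.
    kept = []
    for p in points_lidar:
        a, b = lidar_point_to_angles(p)
        if a_min <= a <= a_max and b_min <= b <= b_max:
            kept.append(p)
    return np.asarray(kept)
```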
In this embodiment, formula conversion and derivation are performed, and a virtual coordinate system and the angles α', β' under the virtual coordinate system are designed. The new algorithm overcomes the problem that the image has a lower dimension and extracts the point cloud coordinates corresponding to the image coordinate system, greatly improving the processing speed and reducing the time consumption, so that real-time processing can be met and the method can be applied more widely in perception fusion algorithms.
Example III
Fig. 6 is a schematic structural diagram of a point cloud extraction device for lidar according to a third embodiment of the present invention, where the device may be adapted to a case where a camera and a lidar perform fusion sensing, and the device may be implemented by software and/or hardware and is generally integrated on an electronic device. As shown in fig. 6, the apparatus includes:
A first determining module 310, configured to determine a first formula, where the first formula is used to convert a camera coordinate system to a lidar coordinate system, and construct a virtual coordinate system based on the lidar coordinate system and the camera coordinate system;
a second determining module 320, configured to determine a correspondence between coordinates and angles of the image pixel points based on the virtual coordinate system;
the conversion module 330 is configured to convert coordinates of an image region of interest into the virtual coordinate system according to the correspondence, to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle;
the traversing module 340 is configured to traverse the laser radar point cloud coordinates, and convert the laser radar point cloud coordinates to the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud;
and the extracting module 350 is configured to extract a lidar point cloud region of interest from the image region of interest based on the second angle corresponding to each lidar point cloud and the maximum and minimum values in the first angle.
The third embodiment provides a laser radar point cloud extracting device, first determining, by using a first determining module 310, a first formula, where the first formula is used to convert a camera coordinate system into a laser radar coordinate system; the construction module is used for constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system; secondly, determining the corresponding relation between the coordinates and the angles of the image pixel points based on the virtual coordinate system through a second determining module 320; then, converting the coordinates of the image region of interest into the virtual coordinate system through a conversion module 330 according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle; traversing the laser radar point cloud coordinates through a traversing module 340, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud; finally, extracting a laser radar point cloud region of interest from the image region of interest by an extracting module 350 based on the second angle corresponding to each laser radar point cloud and the maximum value and the minimum value in the first angle. The device solves the problems of large calculated amount and large time consumption of the algorithm in the prior art, and greatly shortens the algorithm time, so that the point cloud and the image can be fused and perceived in real time.
Further, the first determining module 310 is specifically configured to: and decomposing and converting the formula for converting the laser radar coordinate system into the camera coordinate system to obtain a first formula.
Further, the first formula is as follows:
$$\begin{bmatrix}x\\ y\\ z\end{bmatrix}=R^{-1}\left(Z_1M^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}-T\right)$$
wherein (x, y, z) represents the laser radar point cloud coordinates, R and T represent the camera external parameters, M represents the camera internal parameter matrix, Z1 represents the depth of the camera image pixel point, and (u, v) represents the image pixel point coordinates.
Further, the correspondence between the coordinates and the angles of the image pixels is as follows:
$$\alpha'=\arctan\frac{y'}{z'}$$
$$\beta'=\arctan\frac{x'}{z'}$$
wherein α' and β' represent the included angles between the laser radar point cloud and the origin in the virtual coordinate system, (x', y', z') represents the coordinates of the laser radar point cloud in the virtual coordinate system, and (u, v) represents the coordinates of the image pixel point.
Further, the origin of the virtual coordinate system is the same as the origin of the camera coordinate system, and the coordinate axis of the virtual coordinate system is the same as the coordinate axis of the laser radar coordinate system.
In this embodiment, when the origin of the virtual coordinate system is the same as the origin of the camera coordinate system, it may be determined that the virtual coordinate system has only a rotation relationship with the camera coordinate system; when the coordinate axis of the virtual coordinate system is the same as the coordinate axis of the laser radar coordinate system, it can be determined that the virtual coordinate system has only a translation relationship with the laser radar coordinate system.
Further, the laser radar point cloud coordinates are converted into angles under the virtual coordinate system through the following formula:
$$\alpha'=\arctan\frac{y-T_y}{z-T_z}$$
$$\beta'=\arctan\frac{x-T_x}{z-T_z}$$
wherein α' and β' represent the angles, x, y and z represent the coordinate values of the laser radar point cloud on the x, y and z axes of the laser radar coordinate system, and (Tx, Ty, Tz) represents the coordinates of the origin of the virtual coordinate system in the laser radar coordinate system.
Further, the extracting module 350 further includes:
a determining unit, configured to determine, for each lidar point cloud, whether a second angle corresponding to the lidar point cloud is within a maximum value and a minimum value interval in the first angle;
the extraction unit is used for extracting laser radar point clouds from the image interested area if yes;
and the interest determining unit is used for taking the region formed by all the extracted laser radar point clouds as a laser radar point cloud interest region.
The laser radar point cloud extraction device can execute the laser radar point cloud extraction method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 7 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 7, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the lidar point cloud extraction method.
In some embodiments, the lidar point cloud extraction method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the lidar point cloud extraction method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the lidar point cloud extraction method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
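
As an illustration of the coordinate conversions underlying the method described above, the following is a minimal Python sketch. The intrinsic matrix M, the extrinsic rotation R and translation T, the axis convention, and all function names are assumptions chosen for the example, not values or notation taken from the patent.

```python
import numpy as np

# Assumed calibration values for illustration only; real values come from
# a camera/lidar calibration, not from the patent.
M = np.array([[800.0,   0.0, 320.0],   # camera intrinsic matrix (assumed)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # rotation, lidar frame -> camera frame (assumed)
T = np.array([0.1, -0.2, 0.0])         # translation, lidar frame -> camera frame (assumed)

R_inv = np.linalg.inv(R)
M_inv = np.linalg.inv(M)
T_virtual = R_inv @ T                  # lidar origin expressed in the virtual frame

def pixel_to_virtual(u, v, depth=1.0):
    """Back-project an image pixel (u, v) into the virtual frame (camera origin,
    lidar axes). The angles below are scale-invariant, so any positive depth works."""
    cam = depth * (M_inv @ np.array([u, v, 1.0]))   # point in camera coordinates
    return R_inv @ cam                               # rotate onto the lidar axes

def lidar_to_virtual(p_lidar):
    """Shift a lidar-frame point into the virtual frame; since the virtual frame
    shares the lidar axes, this is a pure translation."""
    return np.asarray(p_lidar) + T_virtual

def angles(p):
    """Horizontal (alpha') and vertical (beta') angles of a point seen from the
    virtual-frame origin, assuming x forward, y left, z up."""
    x, y, z = p
    return np.arctan2(y, x), np.arctan2(z, x)
```

In this sketch the depth cancels out of the angle ratios, so an image region of interest can be mapped to angle intervals without knowing per-pixel depth.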

Claims (10)

1. A method for extracting a laser radar point cloud, the method comprising:
determining a first formula, wherein the first formula is used for converting a camera coordinate system into a laser radar coordinate system, and constructing a virtual coordinate system based on the laser radar coordinate system and the camera coordinate system;
based on the virtual coordinate system, determining the corresponding relation between the coordinates and the angles of the image pixel points;
converting coordinates of an image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle;
traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud;
and extracting a laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum and minimum values in the first angles.
2. The method of claim 1, wherein determining the first formula comprises:
and decomposing and converting the formula for converting the laser radar coordinate system into the camera coordinate system to obtain a first formula.
3. The method of claim 2, wherein the first formula is as follows:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R^{-1}\left( Z_1 M^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - T \right)$$
wherein $(x, y, z)$ represents the laser radar point cloud coordinates, $R$ and $T$ represent the external parameter matrices of the laser radar and the camera ($R$ being the rotation matrix and $T$ the translation matrix), $M$ represents the internal parameter matrix of the camera, $Z_1$ represents the depth of the point in the camera coordinate system, and $(u, v)$ represents the image pixel point coordinates.
4. A method according to claim 3, wherein the correspondence between the coordinates and the angles of the image pixels is as follows:
$$\alpha' = \arctan\frac{y'}{x'}, \qquad \beta' = \arctan\frac{z'}{x'}$$
wherein $\alpha'$ and $\beta'$ represent the included angles between the laser radar point cloud and the origin under the virtual coordinate system, $(x', y', z')$ represents the coordinates of the laser radar point cloud under the virtual coordinate system, and $(u, v)$ represents the coordinates of the image pixel points.
5. The method of claim 1, wherein an origin of the virtual coordinate system is the same as an origin of the camera coordinate system, and a coordinate axis of the virtual coordinate system is the same as a coordinate axis of the lidar coordinate system.
6. The method of claim 1, wherein the lidar point cloud coordinates are converted to angles in the virtual coordinate system by the formula:
$$\alpha' = \arctan\frac{y + T_y}{x + T_x}, \qquad \beta' = \arctan\frac{z + T_z}{x + T_x}$$
wherein $\alpha'$ and $\beta'$ represent the angles, $x$, $y$ and $z$ represent the coordinate values of the laser radar point cloud on the x, y and z axes under the laser radar coordinate system, and $(T_x, T_y, T_z)$ represents the translation of the laser radar coordinate system relative to the virtual coordinate system.
7. The method of claim 1, wherein extracting the lidar point cloud region of interest from the image region of interest based on the second angle for each lidar point cloud and the maximum and minimum of the first angles comprises:
determining whether the second angle corresponding to each laser radar point cloud is within the interval formed by the maximum value and the minimum value of the first angles;
if yes, extracting the laser radar point cloud from the image region of interest;
and taking the region formed by all the extracted laser radar point clouds as the laser radar point cloud region of interest.
8. A lidar point cloud extraction device, the device comprising:
the first determining module is used for determining a first formula, the first formula is used for converting a camera coordinate system into a laser radar coordinate system, and a virtual coordinate system is built based on the laser radar coordinate system and the camera coordinate system;
the second determining module is used for determining the corresponding relation between the coordinates and the angles of the image pixel points based on the virtual coordinate system;
the conversion module is used for converting the coordinates of the image region of interest into the virtual coordinate system according to the corresponding relation to obtain a first angle corresponding to the image region of interest, and a maximum value and a minimum value in the first angle;
the traversing module is used for traversing the laser radar point cloud coordinates, and converting the laser radar point cloud coordinates into the virtual coordinate system to obtain a second angle corresponding to each laser radar point cloud;
and the extraction module is used for extracting the laser radar point cloud region of interest from the image region of interest based on the second angle corresponding to each laser radar point cloud and the maximum value and the minimum value in the first angle.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the lidar point cloud extraction method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores computer instructions which, when executed, cause a processor to implement the lidar point cloud extraction method of any one of claims 1 to 7.
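
As a sketch of the extraction step in claims 1 and 7, the following Python fragment assumes the first and second angles have already been computed (for example with the helpers in the earlier sketch); the array layouts and the function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def extract_roi_points(first_angles, second_angles, points_lidar):
    """Keep the lidar points whose angles fall inside the intervals spanned by
    the image region of interest.

    first_angles:  (N, 2) array of (alpha', beta') for the image ROI pixels
    second_angles: (K, 2) array of (alpha', beta') for the lidar points
    points_lidar:  (K, 3) array of lidar points
    """
    a_min, b_min = first_angles.min(axis=0)
    a_max, b_max = first_angles.max(axis=0)
    inside = ((second_angles[:, 0] >= a_min) & (second_angles[:, 0] <= a_max) &
              (second_angles[:, 1] >= b_min) & (second_angles[:, 1] <= b_max))
    return points_lidar[inside]          # the retained points form the point cloud ROI
```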
CN202310066067.4A 2023-01-17 2023-01-17 Laser radar point cloud extraction method and device, electronic equipment and storage medium Pending CN116400372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310066067.4A CN116400372A (en) 2023-01-17 2023-01-17 Laser radar point cloud extraction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310066067.4A CN116400372A (en) 2023-01-17 2023-01-17 Laser radar point cloud extraction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116400372A true CN116400372A (en) 2023-07-07

Family

ID=87018583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310066067.4A Pending CN116400372A (en) 2023-01-17 2023-01-17 Laser radar point cloud extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116400372A (en)

Similar Documents

Publication Publication Date Title
CN115147558B (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method and device
CN115578515B (en) Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device
CN115330940B (en) Three-dimensional reconstruction method, device, equipment and medium
CN115457152A (en) External parameter calibration method and device, electronic equipment and storage medium
CN113421217A (en) Method and device for detecting travelable area
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN117078767A (en) Laser radar and camera calibration method and device, electronic equipment and storage medium
CN116400372A (en) Laser radar point cloud extraction method and device, electronic equipment and storage medium
CN116129422A (en) Monocular 3D target detection method, monocular 3D target detection device, electronic equipment and storage medium
CN113920273B (en) Image processing method, device, electronic equipment and storage medium
CN114266879A (en) Three-dimensional data enhancement method, model training detection method, three-dimensional data enhancement equipment and automatic driving vehicle
CN112561995A (en) Real-time efficient 6D attitude estimation network, construction method and estimation method
CN117612015A (en) Vegetation extraction method, device, equipment and storage medium
CN114463409B (en) Image depth information determining method and device, electronic equipment and medium
CN113312979B (en) Image processing method and device, electronic equipment, road side equipment and cloud control platform
CN116664648A (en) Point cloud frame and depth map generation method and device, electronic equipment and storage medium
CN117726641A (en) Ground segmentation method, device, equipment and storage medium
CN114612544B (en) Image processing method, device, equipment and storage medium
CN115496791A (en) Depth map generation method and device, electronic equipment and storage medium
CN116840791A (en) Calibration and coordinate conversion method for radar and camera
CN117911469A (en) Registration method, device and equipment of point cloud data and storage medium
CN118072078A (en) Target detection method, device, equipment and storage medium
CN116167911A (en) Model training method, three-dimensional scene reconstruction method and device
CN117333873A (en) Instance segmentation method and device, electronic equipment and storage medium
CN112819890A (en) Three-dimensional object detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination