CN115810093A - Laser radar point cloud data correction method and device and storage medium

Info

Publication number
CN115810093A
CN115810093A (application CN202211136133.2A)
Authority
CN
China
Prior art keywords
reference body
point cloud
cloud data
dimensional image
laser radar
Prior art date
Legal status
Pending
Application number
CN202211136133.2A
Other languages
Chinese (zh)
Inventor
李加乐
王哲象
李坚华
汪长青
余其真
张亚博
裘哲涵
王金江
Current Assignee
Hangzhou Mingdu Intelligent Technology Co ltd
Original Assignee
Hangzhou Mingdu Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Mingdu Intelligent Technology Co ltd filed Critical Hangzhou Mingdu Intelligent Technology Co ltd
Priority to CN202211136133.2A
Publication of CN115810093A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method, a device, and a storage medium for correcting laser radar point cloud data, used to adjust the measurement data of a laser radar installed over a measurement area. First point cloud data containing a reference body is acquired in the pan-tilt coordinate system through the laser radar; the first point cloud data is converted into third point cloud data in a first rough world coordinate system according to the installation position of the laser radar; the third point cloud data is then converted into two-dimensional images on the three coordinate planes respectively, a coordinate-axis deflection angle is obtained from each two-dimensional image, the obtained axis deflection angles are combined with the first transformation matrix to serve as a correction matrix of the laser radar, and point cloud data in an accurate world coordinate system of the measurement area is obtained through the correction matrix. The method corrects the pose of the virtual camera that converts three-dimensional data into two-dimensional data to obtain the angular deviation between the pan-tilt coordinate system and the world coordinate system, thereby correcting the pan-tilt installation error.

Description

Laser radar point cloud data correction method and device and storage medium
Technical Field
The invention relates to the technical field of three-dimensional detection, in particular to a method and a device for correcting laser radar point cloud data and a storage medium.
Background
Laser measurement is a surveying and mapping technology that has developed rapidly and found wide application in recent years. The laser radar, a typical product of this technology, is widely used in industrial fields such as aerial surveying and autonomous driving. Compared with traditional surveying and mapping technology, the laser radar offers large data volume, convenient use, and high production efficiency; compared with image photogrammetry, it offers high resolution and high precision, making it difficult to replace. At present, most laser radar scanning pan-tilts adopt a scheme that pairs a two-dimensional laser radar with a one-dimensional scanning pan-tilt. The calibration of the pose relation between the laser radar body and the turntable is currently accurate, and the current rotation angle of the turntable can be obtained by reading an encoder, so when the turntable is at a known angle, the three-dimensional points measured by the laser radar can be transformed into the coordinate system of the turntable base. However, when each pan-tilt is installed independently at a measurement site, only the installation accuracy of its position and angle within a certain range can be guaranteed; ultra-high accuracy of the installation position and angle in a real, complex production environment basically cannot be guaranteed, so the precise relative position and angle between the pan-tilt coordinate system and the target world coordinate system cannot be obtained accurately.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a method for correcting laser radar point cloud data, which is used for adjusting the measurement data of a laser radar arranged on a measurement area and comprises the following steps:
S1, acquiring, through a laser radar, first point cloud data in a pan-tilt coordinate system containing at least a first reference body and a second reference body, wherein the first reference body and the second reference body are placed in sequence in the measurement area of the laser radar along the direction of the central axis of the area;
s2, acquiring a first transformation matrix of the laser radar according to the installation position of the laser radar, and converting the first point cloud data into third point cloud data on a first rough world coordinate system through the first transformation matrix;
s3, respectively converting the third point cloud data into two-dimensional images on three coordinate surfaces, and respectively obtaining included angles of the first reference body and/or the second reference body on each two-dimensional image relative to the edge of the image, wherein the included angles are used as deflection angles around coordinate axes vertical to the image;
and S4, combining the obtained deflection angles of all axes with the first transformation matrix to serve as a correction matrix of the laser radar, and converting point cloud data of the measured object obtained by the laser radar through the correction matrix to obtain point cloud data under an accurate world coordinate system of the measuring area.
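Step S2 amounts to applying a rigid homogeneous transform built from the approximate installation pose of the pan-tilt. A minimal NumPy sketch of such a first transformation matrix follows; the function names and the Z-Y-X Euler angle convention are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def make_rigid_transform(rotation_deg, translation):
    """Build a 4x4 homogeneous transform from approximate installation
    angles (rx, ry, rz, in degrees) and position (tx, ty, tz)."""
    rx, ry, rz = np.radians(rotation_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # assumed composition order
    T[:3, 3] = translation
    return T

def apply_transform(T, points):
    """Transform an (N, 3) point cloud by a 4x4 homogeneous matrix."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]
```

Because the installation pose is only approximately known, the resulting coordinates are those of the "first rough world coordinate system" of step S2; steps S3 and S4 then refine the angular part.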
Preferably, the step S3 specifically includes:
converting the third point cloud data into a first two-dimensional image on an XY coordinate plane, taking the Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image, and identifying and acquiring an included angle between a side connecting line of the first reference body and the second reference body on the first two-dimensional image and an image edge as a Z-axis deflection angle;
converting the third point cloud data into a second two-dimensional image on an XZ coordinate plane, taking the Y-axis data of each point as a gray value corresponding to each point on the second two-dimensional image, and identifying and acquiring an included angle between the bottom edge of the first reference body or the second reference body on the second two-dimensional image and the edge of the image as a Y-axis deflection angle;
and converting the third point cloud data into a third two-dimensional image on a YZ coordinate plane, taking the X-axis data of each point as a gray value corresponding to each point on the third two-dimensional image, and identifying and acquiring an included angle between the bottom edge of the first reference body or the second reference body on the third two-dimensional image and the edge of the image as an X-axis deflection angle.
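Each of the three projections above follows the same pattern: drop one coordinate, quantize the remaining two into pixel positions, and store the dropped coordinate as the gray value. A minimal sketch (the pixel resolution and the overwrite policy for coincident pixels are assumptions of this illustration):

```python
import numpy as np

def project_to_image(points, drop_axis, resolution=0.01):
    """Project an (N, 3) cloud onto the plane perpendicular to
    drop_axis (0 = X -> YZ image, 1 = Y -> XZ image, 2 = Z -> XY image).
    The dropped coordinate becomes the pixel gray value."""
    keep = [a for a in range(3) if a != drop_axis]
    uv = points[:, keep]
    gray = points[:, drop_axis]
    # quantize plane coordinates to pixel indices
    origin = uv.min(axis=0)
    idx = np.floor((uv - origin) / resolution).astype(int)
    h, w = idx.max(axis=0) + 1
    img = np.zeros((h, w))
    img[idx[:, 0], idx[:, 1]] = gray  # coincident points overwrite one another
    return img
```

Calling this with drop_axis 2, 1, and 0 yields the first, second, and third two-dimensional images described above.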
Preferably, the step S3 further specifically includes:
converting the third point cloud data into a first two-dimensional image on an XY coordinate plane, taking the Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image, and acquiring an included angle between a side connecting line of the first reference body and the second reference body on the first two-dimensional image and the edge of the image as a Z-axis deflection angle;
converting the third point cloud data into a second two-dimensional image on an XZ coordinate plane, taking the Y-axis data of each point as a gray value corresponding to each point on the second two-dimensional image, and obtaining an included angle between the ground image and the image edge on the second two-dimensional image as a Y-axis deflection angle;
and converting the third point cloud data into a third two-dimensional image on a YZ coordinate plane, taking the X-axis data of each point as a gray value corresponding to each point on the third two-dimensional image, and obtaining an included angle between the ground image and the image edge on the third two-dimensional image as an X-axis deflection angle.
Preferably, at least two laser radars with intersection areas are installed on the measurement area, and step S1 further includes:
acquiring first point cloud data comprising a first reference body, a second reference body, and a third reference body through a first laser radar, and acquiring fourth point cloud data comprising the second reference body, the third reference body, and a fourth reference body through a second laser radar; the first to fourth reference bodies are placed in sequence in the measurement areas of the two laser radars along the direction of the central axis of the area; the second reference body and the third reference body are located in the measurement intersection area of the two laser radars, with one side of the second reference body and the opposite side of the third reference body each located on the central axis of the area, and the rear side of the second reference body and the front side of the third reference body each located on a vertical plane perpendicular to the central axis of the area; the first reference body and the fourth reference body are located in the non-intersecting parts of the first and second laser radar measurement areas respectively, with the same side surfaces of the first and fourth reference bodies located on the central axis of the area.
Preferably, the step S3 includes:
converting the third point cloud data into a first two-dimensional image on an XY coordinate plane, and taking Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image;
and identifying and acquiring a connecting line between one side of the first reference body in the first two-dimensional image, which is close to the central axis of the area, and the intersection side of the second reference body and the third reference body, and calculating and acquiring an included angle between the connecting line and the edge of the image as a Z-axis deflection angle.
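The included angle of such a connecting line with the image edge is simply the inclination of the line through the two identified pixel positions. A minimal sketch (the pixel coordinates are hypothetical):

```python
import math

def line_edge_angle(p1, p2):
    """Angle in degrees between the line through two identified pixel
    positions and the horizontal image edge (the u axis)."""
    du, dv = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dv, du))
```

For example, a connecting line from pixel (0, 0) to pixel (100, 100) deviates 45 degrees from the image edge, and that value is taken as the Z-axis deflection angle.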
The invention also discloses a device for correcting the point cloud data of the laser radar, which is used for adjusting the measurement data of the laser radar arranged on a measurement area and comprises the following components:
the device comprises a first point cloud acquisition module, a second point cloud acquisition module and a third point cloud acquisition module, wherein the first point cloud acquisition module is used for acquiring first point cloud data under a holder coordinate system at least comprising a first reference body and a second reference body through the laser radar, and the first reference body and the second reference body are sequentially placed in a measurement area of the laser radar along the direction of a central axis of the area;
the first transformation module is used for acquiring a first transformation matrix of the laser radar according to the installation position of the laser radar and converting the first point cloud data into third point cloud data on a first rough world coordinate system through the first transformation matrix;
the deflection angle detection module, used for converting the third point cloud data into two-dimensional images on the three coordinate planes respectively and obtaining, on each two-dimensional image, the included angle of the first reference body and/or the second reference body relative to the image edge as the deflection angle around the coordinate axis perpendicular to the image;
and the correction module is used for combining the obtained deflection angles of all the axes with the first transformation matrix to be used as a correction matrix of the laser radar, and converting the point cloud data of the measured object obtained by the laser radar through the correction matrix to obtain the point cloud data under the accurate world coordinate system of the measurement area.
Preferably, the deflection angle detection module specifically includes:
the first image acquisition module is used for converting the third point cloud data into a first two-dimensional image positioned on an XY coordinate plane, taking the Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image, and identifying and acquiring an included angle between a side connecting line of the first reference body and the second reference body on the first two-dimensional image and an image edge as a Z-axis deflection angle;
the second image acquisition module is used for converting the third point cloud data into a second two-dimensional image on an XZ coordinate plane, taking the Y-axis data of each point as a gray value corresponding to each point on the second two-dimensional image, and identifying and acquiring an included angle between the bottom edge of the first reference body or the second reference body on the second two-dimensional image and the edge of the image as a Y-axis deflection angle;
and the third image acquisition module is used for converting the third point cloud data into a third two-dimensional image positioned on a YZ coordinate plane, taking the X-axis data of each point as a gray value corresponding to each point on the third two-dimensional image, and identifying and acquiring an included angle between the bottom edge of the first reference body or the second reference body on the third two-dimensional image and the edge of the image as an X-axis deflection angle.
Preferably, at least two laser radars with an intersecting area are installed over the measurement area, and the first point cloud acquisition module is further configured to acquire first point cloud data comprising the first, second, and third reference bodies through the first laser radar and acquire fourth point cloud data comprising the second, third, and fourth reference bodies through the second laser radar; the first to fourth reference bodies are placed in sequence in the measurement areas of the two laser radars along the direction of the central axis of the area; the second reference body and the third reference body are located in the measurement intersection area of the two laser radars, with one side of the second reference body and the opposite side of the third reference body each located on the central axis of the area, and the rear side of the second reference body and the front side of the third reference body each located on a vertical plane perpendicular to the central axis of the area; the first reference body and the fourth reference body are located in the non-intersecting parts of the first and second laser radar measurement areas respectively, with the same side surfaces of the first and fourth reference bodies located on the central axis of the area.
The invention also discloses a device for correcting the laser radar point cloud data, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of any one of the methods.
The invention also discloses a computer-readable storage medium in which a computer program is stored which, when executed by a processor, carries out the steps of any one of the methods described above.
The invention discloses a method, a device, and a storage medium for correcting laser radar point cloud data, used to adjust the measurement data of a laser radar installed over a measurement area: first point cloud data containing a reference body is acquired in the pan-tilt coordinate system through the laser radar; a first transformation matrix of the laser radar is acquired according to its installation position, and the first point cloud data is converted into third point cloud data in a first rough world coordinate system; the third point cloud data is converted into two-dimensional images on the three coordinate planes respectively, and on each two-dimensional image the included angle of the first reference body and/or the second reference body relative to the image edge is obtained as the deflection angle around the coordinate axis perpendicular to the image; finally, the obtained axis deflection angles are combined with the first transformation matrix to serve as a correction matrix of the laser radar, and the point cloud data of the measured object obtained by the laser radar is converted through the correction matrix into point cloud data in an accurate world coordinate system of the measurement area. The method corrects the pose of the virtual camera that converts three-dimensional data into two-dimensional data to obtain the angular deviation between the pan-tilt coordinate system and the world coordinate system, thereby correcting the pan-tilt installation error.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic flow chart of a method for correcting laser radar point cloud data according to an embodiment of the present invention.
Fig. 2 and 6 are schematic diagrams of the arrangement of the reference body of the measuring region disclosed in one embodiment of the invention.
Fig. 3-5 are schematic diagrams of first point cloud data and second point cloud data according to an embodiment of the disclosure.
Fig. 7 is a schematic diagram of first point cloud data according to an embodiment of the disclosure.
Fig. 8 is a schematic diagram of a first two-dimensional image according to an embodiment of the disclosure.
Fig. 9 is a schematic diagram of a second two-dimensional image according to an embodiment of the disclosure.
Fig. 10 is a schematic diagram of a third two-dimensional image according to an embodiment of the disclosure.
Fig. 11 is a schematic diagram of a point cloud after stitching and combining disclosed in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, mean fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected or indirectly connected through intervening media; or in internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and similar terms in the description and claims of the present application do not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. Also, the use of the terms "a" or "an" and the like do not denote a limitation of quantity, but rather denote the presence of at least one.
Example 1
A method for correcting laser radar point cloud data, which is used to adjust measurement data of a laser radar installed on a measurement area, as shown in fig. 1, the method may include the following steps:
the method comprises the following steps of S1, obtaining first point cloud data under a holder coordinate system at least comprising a first reference body and a second reference body through a laser radar, wherein the first reference body and the second reference body are sequentially placed in a measuring area of the laser radar along the direction of a central axis of the area.
In this embodiment, the three-dimensional size of the carriage of a vehicle to be loaded under a loader is measured, and the set measurement area of the laser radar is the parking area under the moving loading area of the loader. The equipment central axis can therefore be set as the central axis of the loader above, that is, the central axis of the loader's moving area. A reference line for the equipment central axis on the ground of the measurement area can be obtained by placing a laser scale or the like; the reference bodies are then placed according to this reference line, with the first reference body and the second reference body placed in sequence in the measurement area of the laser radar along the direction of the central axis of the area.
Here the point cloud is a data set in which each point represents a set of X, Y, Z geometric coordinate values, and the data points together represent a 3D shape or object in space. The pan-tilt is a mechanism that produces a point cloud by rotating the radar. The world coordinate system w is first introduced; it has a certain translational relationship with the known equipment coordinate system, with the horizontal ground plane as the xy plane. The x-axis direction points along the equipment central axis toward the tail end of the equipment; the y-axis direction is the x-axis rotated 90 degrees clockwise when looking down at the ground; and the z-axis direction is perpendicular to the xy plane, pointing downward. The position of the origin can be set and adjusted according to the specific situation.
and S2, acquiring a first transformation matrix of the laser radar according to the installation position of the laser radar, and converting the first point cloud data into third point cloud data on a first rough world coordinate system through the first transformation matrix.
And S3, converting the third point cloud data into two-dimensional images on the three coordinate planes respectively, and obtaining, on each two-dimensional image, the included angle of the first reference body and/or the second reference body relative to the image edge as the deflection angle around the coordinate axis perpendicular to the image.
The step S3 may specifically include:
and converting the third point cloud data into a first two-dimensional image positioned on an XY coordinate plane, taking the Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image, and identifying and acquiring an included angle between a side connecting line of the first reference body and the second reference body on the first two-dimensional image and the edge of the image as a Z-axis deflection angle.
Specifically, the projection of each point of the cloud onto the xoy plane gives the position of each pixel in the two-dimensional image, and the gray value of the pixel may be the point's x value, y value, or z value, or a proportional scaling of it. For example, a point n = (x_n, y_n, z_n) in the 3D point cloud is converted into a 2D image pixel by taking z_n as the image gray value, i.e., the 2D image has the gray value z_n at the position x = x_n, y = y_n.
And converting the third point cloud data into a second two-dimensional image on an XZ coordinate plane, taking the Y-axis data of each point as a gray value corresponding to each point on the second two-dimensional image, and identifying and acquiring an included angle between the bottom edge of the first reference body or the second reference body on the second two-dimensional image and the edge of the image as a Y-axis deflection angle.
And converting the third point cloud data into a third two-dimensional image on a YZ coordinate plane, taking the X-axis data of each point as a gray value corresponding to each point on the third two-dimensional image, and identifying and acquiring an included angle between the bottom edge of the first reference body or the second reference body on the third two-dimensional image and the edge of the image as an X-axis deflection angle.
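Identifying a reference-body edge and measuring its included angle with the image edge can be done by fitting a line to the detected edge pixels. A sketch using a least-squares (PCA) fit follows; the normalization of the result to the interval (-90, 90] reflects that an edge line has no preferred direction, and is an assumption of this illustration:

```python
import numpy as np

def edge_deflection_angle(edge_points):
    """Fit a line to the 2D pixel coordinates of a detected reference-body
    edge and return its angle (degrees) to the horizontal image edge."""
    pts = np.asarray(edge_points, dtype=float)
    # principal direction of the edge via SVD of the centered points
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]  # dominant direction of the point set
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    # normalize to (-90, 90]: a line has no orientation sign
    if angle > 90:
        angle -= 180
    elif angle <= -90:
        angle += 180
    return angle
```

Applied to the bottom edge of a reference body in the second or third two-dimensional image, this angle is taken as the Y-axis or X-axis deflection angle respectively.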
In addition, the step S3 may further specifically include:
converting the third point cloud data into a first two-dimensional image on an XY coordinate plane, taking the Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image, and acquiring an included angle between a side connecting line of the first reference body and the second reference body on the first two-dimensional image and the edge of the image as a Z-axis deflection angle;
converting the third point cloud data into a second two-dimensional image on an XZ coordinate plane, taking the Y-axis data of each point as a gray value corresponding to each point on the second two-dimensional image, and obtaining an included angle between the ground image and the image edge on the second two-dimensional image as a Y-axis deflection angle;
and converting the third point cloud data into a third two-dimensional image on a YZ coordinate plane, taking the X-axis data of each point as a gray value corresponding to each point on the third two-dimensional image, and obtaining an included angle between the ground image and the image edge on the third two-dimensional image as an X-axis deflection angle.
And S4, combining the obtained deflection angles of all axes with the first transformation matrix to serve as a correction matrix of the laser radar, and converting point cloud data of the measured object obtained by the laser radar through the correction matrix to obtain point cloud data under an accurate world coordinate system of the measuring area.
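Step S4 can be sketched as composing the three measured deflection angles, as small corrective rotations, with the first transformation matrix. The composition order and the sign convention below are illustrative assumptions, not specified by the disclosure:

```python
import numpy as np

def rot_axis(axis, angle_deg):
    """4x4 homogeneous rotation about a single coordinate axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    R = np.eye(4)
    if axis == 'x':
        R[1:3, 1:3] = [[c, -s], [s, c]]
    elif axis == 'y':
        R[0, 0], R[0, 2], R[2, 0], R[2, 2] = c, s, -s, c
    else:  # 'z'
        R[0:2, 0:2] = [[c, -s], [s, c]]
    return R

def correction_matrix(T_first, ax_deg, ay_deg, az_deg):
    """Undo the measured per-axis deflections in the rough world frame,
    then apply the first transformation matrix (assumed order/signs)."""
    R_fix = rot_axis('x', -ax_deg) @ rot_axis('y', -ay_deg) @ rot_axis('z', -az_deg)
    return R_fix @ T_first
```

Multiplying subsequently measured point clouds (in homogeneous coordinates) by this matrix yields the point cloud data in the accurate world coordinate system of the measurement area.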
In this embodiment, first point cloud data containing the reference bodies is acquired in the pan-tilt coordinate system through the laser radar; a first transformation matrix of the laser radar is acquired according to its installation position, and the first point cloud data is converted into third point cloud data in a first rough world coordinate system; the third point cloud data is converted into two-dimensional images on the three coordinate planes respectively, and on each two-dimensional image the included angle of the first reference body and/or the second reference body relative to the image edge is obtained as the deflection angle around the coordinate axis perpendicular to the image; finally, the obtained axis deflection angles are combined with the first transformation matrix to serve as the correction matrix of the laser radar, and the point cloud data of the measured object obtained by the laser radar is converted through the correction matrix into point cloud data in an accurate world coordinate system of the measurement area. The method corrects the pose of the virtual camera that converts three-dimensional data into two-dimensional data to obtain the angular deviation between the pan-tilt coordinate system and the world coordinate system, thereby correcting the pan-tilt installation error.
Example 2
In this embodiment, at least two laser radars with an intersecting area are installed over the measurement area to acquire at least two point clouds that can be stitched together. However, when two radars are installed independently, only an approximate relative position between them is available, their angular consistency is even harder to guarantee, and most ground point clouds belong to different scenes, making correction more difficult. In this embodiment, again taking a cement slab loader working site as an example, three-dimensional size detection is performed on a long truck to be loaded below the loader to obtain point cloud data of the vehicle, particularly the carriage, and finally the size information of the carriage; two or more pan-tilts carrying laser radars can be installed front and back above the to-be-loaded area where the vehicle parks. In this embodiment, the laser radar point cloud data method specifically includes the following contents.
Step S101, acquiring, through a first laser radar, first point cloud data containing reference bodies in the first pan-tilt coordinate system, and acquiring, through a second laser radar, second point cloud data containing reference bodies in the second pan-tilt coordinate system.
As shown in fig. 2 and 3, in this embodiment the x-axis zero point may be the position where the forward-backward travel distance of the equipment equals zero, the y-axis zero point is the position reached by moving the central axis of the equipment 3 meters in the negative y direction, and the z-axis zero point is the position reached by moving the xy plane 6 meters in the negative z direction.
The direction of the first pan-tilt coordinate system c1 is determined by the synthesized point cloud of the pan-tilt. With the current installation direction shown in fig. 4, its x-axis is approximately aligned with the y-axis of the world coordinate system but has an installation deviation, its y-axis is approximately aligned with the x-axis of the world coordinate system but has an installation deviation, and its z-axis is approximately aligned with the z-axis of the world coordinate system but has an installation deviation; the origin is the actual installation position of the first pan-tilt.
The direction of the second pan-tilt coordinate system c2 is likewise determined by the synthesized point cloud of the pan-tilt. With the current installation direction shown in fig. 5, its x-axis is approximately aligned with the y-axis of the world coordinate system but has an installation deviation, its y-axis is approximately aligned with the x-axis of the world coordinate system but has an installation deviation, and its z-axis is approximately aligned with the z-axis of the world coordinate system but has an installation deviation; the origin is the actual installation position of the second pan-tilt.
In this embodiment, step S101 may further specifically include the following.
Step S1011: first point cloud data containing a first reference body, a second reference body and a third reference body in the first pan-tilt coordinate system is acquired through the first laser radar. The first to fourth reference bodies are placed in sequence in the measurement area along the direction of the area's central axis. The second and third reference bodies lie in the measurement intersection region of the two laser radars, with their mutually facing sides placed on the central axis of the area, and with the rear side of the second reference body and the front side of the third reference body each lying on a vertical plane perpendicular to the central axis. The first and fourth reference bodies lie in the non-intersecting parts of the first and second laser radar measurement areas respectively, with the same side face of each placed on the central axis of the area.
Step S1012: second point cloud data containing the second, third and fourth reference bodies in the second pan-tilt coordinate system is obtained through the second laser radar.
Specifically, this embodiment assists the coordinate system conversion by placing several reference bodies in the measurement area, that is, by rotating and translating the individual point clouds into one uniform coordinate system so that they can form complete environmental point cloud data. As shown in fig. 2 and 6, four reference bodies are precisely placed in the measurement area; they are preferably rectangular parallelepipeds, but other shapes or numbers can be processed similarly. The equipment central axis in this embodiment is the central axis of the overhead loader; a reference line for this axis on the floor of the measurement area can be obtained by placing a laser level or similar tool along the axis of the travel region, and the reference bodies are then placed according to this reference line. The first reference body 1, second reference body 2, third reference body 3 and fourth reference body 4 are arranged in order from the first pan-tilt to the second pan-tilt along the central axis of the area. The first and fourth reference bodies lie in the non-intersecting parts of the first and second laser radar measurement areas respectively, with the same side face of each placed on the central axis of the area. The second and third reference bodies lie in the measurement intersection region of the two laser radars, arranged in a row on the two sides of the central axis, with their mutually facing sides placed on the central axis.
The rear side of the second reference body and the front side of the third reference body each lie on a vertical plane perpendicular to the central axis of the area. Meanwhile, the rear edge of the first reference body may be placed at the position where the forward-backward travel distance of the equipment equals zero, i.e. the x-axis zero point of the world coordinate system w.
Step S102: a first transformation matrix is obtained according to the installation position of the first laser radar, and the first point cloud data is converted through it into third point cloud data in a first rough world coordinate system; a second transformation matrix is obtained according to the installation position of the second laser radar, and the second point cloud data is converted through it into fourth point cloud data in a second rough world coordinate system.
A first transformation matrix is obtained according to the installation position of the first laser radar: according to the installation position of the first pan-tilt carrying the first laser radar, the first pan-tilt coordinate system can be denoted c1. Through the first transformation matrix

Pose(6, 2, 0, 0, 0, 90)

a first rough world coordinate system w' is obtained, whose xyz axes are essentially aligned with those of the first precise world coordinate system, although a small included angle caused by installation error remains. That is, the first point cloud data is rotated 90 degrees about the z-axis, moved 6 m in the positive x direction and 2 m in the positive y direction, converting it into the third point cloud data.
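As a sketch of this coarse transform (rotate 90 degrees about z, then translate 6 m along x and 2 m along y) applied to an N×3 point array; the right-hand-rule (counter-clockwise positive) sign convention for the rotation is an assumption of this illustration, since the patent defines its own rotation convention:

```python
import numpy as np

def coarse_transform(points):
    """Rotate 90 degrees about z, then translate (+6, +2, 0) metres.

    `points` is an (N, 3) array in the pan-tilt coordinate system c1;
    a right-hand-rule (counter-clockwise positive) rotation is assumed.
    """
    theta = np.deg2rad(90.0)
    rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    return points @ rz.T + np.array([6.0, 2.0, 0.0])
```

A point at the pan-tilt origin maps to (6, 2, 0), the installation offset used in this embodiment.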
If the first pan-tilt coordinate system c1 is to be converted into a first precise world coordinate system orthogonal to the world coordinate system with high precision, c1 is transformed through

Pose(0, 0, 0, α, β, γ)

to obtain a coordinate system w″ orthogonal to w with high precision, where α, β and γ are the included angles between the xyz axes of the first rough world coordinate system and those of the first precise world coordinate system.
Similarly, a second transformation matrix is obtained according to the installation position of the second laser radar, and the second point cloud data is converted through it into fourth point cloud data in a second rough world coordinate system; the details are not repeated here.
Step S103, respectively converting the third point cloud data into two-dimensional images in three coordinate planes, acquiring a first angle offset matrix according to the offset angle of a reference body in the two-dimensional images, and combining the first angle offset matrix and the first transformation matrix to form a third transformation matrix; and respectively converting the fourth point cloud data into two-dimensional images in three coordinate planes, acquiring a second angle offset matrix according to the offset angle of the reference body in the two-dimensional images, and combining the second angle offset matrix and the second transformation matrix to form a fourth transformation matrix. In this embodiment, step S103 may specifically include the following.
The third point cloud data is converted into a first two-dimensional image in the XY coordinate plane, with the z-axis value of each point used as the gray value of the corresponding pixel. The line connecting the side of the first reference body nearest the central axis of the area with the facing sides of the second and third reference bodies is identified in the first two-dimensional image, and the included angle between this line and the image edge is calculated as the z-axis deflection angle of the third point cloud data.
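A minimal sketch of this projection step, rendering the cloud onto the XY plane as a grayscale image with each point's z value as intensity; the pixel resolution and the keep-the-highest-point rule for overlapping points are assumptions of this illustration:

```python
import numpy as np

def project_to_image(points, resolution=0.5):
    """Project an (N, 3) cloud onto the XY plane as a grayscale image.

    Each pixel stores the maximum z value of the points that fall in it
    (`resolution`, in metres per pixel, is an assumed parameter).
    """
    xy = np.floor(points[:, :2] / resolution).astype(int)
    xy -= xy.min(axis=0)                      # shift indices to start at 0
    h, w = xy.max(axis=0) + 1
    img = np.zeros((h, w))
    for (ix, iy), z in zip(xy, points[:, 2]):
        img[ix, iy] = max(img[ix, iy], z)     # keep the highest point per pixel
    return img
```

The resulting 2D image is what the edge-detection step below operates on.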
As described in step S102, the coarse conversion relation Pose_front(6, 2, 0, 0, 0, 90) between the first pan-tilt coordinate system c1 and the world coordinate system w can be obtained from the installation position of the first pan-tilt; this is the first transformation matrix. Transforming c1 through Pose_front(6, 2, 0, 0, 0, 90) yields the first rough world coordinate system w', whose xyz axes are essentially aligned with those of w apart from small included angles. To calculate the included angle about each axis between w' and w, the following transformations are used:

through Pose_over(6, 2, 0, 0, 0, 90), the angle measured is the rotation angle about the z-axis of the w coordinate system;

through Pose_side(6, 0, 3, 270, 0, 90), the angle measured is the rotation angle about the y-axis of the w coordinate system;

through Pose_back(6, 0, 10, 270, 0, 180), the angle measured is the rotation angle about the x-axis of the w coordinate system.
First, the rotation angle about the z-axis between w' and w is calculated: the c1 point cloud is transformed through Pose_over(6, 2, 0, 0, 0, 90). FIG. 9 shows c1 in the Pose_over(6, 2, 0, 0, 0, 90) posture, and fig. 10 is the 2D image converted from the 3D point cloud, in which the arrowed horizontal line is the line connecting the side of the first reference body nearest the central axis of the area with the facing sides of the second and third reference bodies. Through a 2D image algorithm, the included angle between this arrowed line and the horizontal axis of the image can be calculated; this is the rotation angle λ' between w' and w. Since the horizontal axis runs in the same direction as the image edge, the included angle between the connecting line and the image edge, i.e. λ', can also be obtained directly.
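The angle measurement itself can be sketched as a least-squares line fit over the pixels (or projected points) belonging to the identified edge; np.polyfit is an illustrative choice here, not the patent's stated 2D image algorithm:

```python
import numpy as np

def edge_angle_deg(xs, ys):
    """Least-squares angle, in degrees, of an edge relative to the horizontal axis."""
    slope, _intercept = np.polyfit(xs, ys, 1)
    return np.degrees(np.arctan(slope))

# Synthetic reference edge rotated by 2 degrees, standing in for the
# identified connecting line in the 2D image:
t = np.linspace(0.0, 10.0, 50)
xs = t * np.cos(np.deg2rad(2.0))
ys = t * np.sin(np.deg2rad(2.0))
```

On the synthetic edge above, `edge_angle_deg(xs, ys)` recovers a deflection close to 2 degrees.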
The third point cloud data is converted into a second two-dimensional image in the XZ coordinate plane, with the y-axis value of each point used as the gray value of the corresponding pixel. The included angle between the image edge and the bottom edge of the first, second or third reference body, or the ground, identified in the second two-dimensional image, is taken as the y-axis deflection angle of the third point cloud data.
Specifically, fig. 11 shows the 2D image converted from the 3D point cloud of c1 in the Pose_side(6, 0, 3, 270, 0, 90) posture. The included angle between the arrowed line in fig. 11 and the horizontal axis can be calculated through various conventional 2D image algorithms; the arrowed line may be the line connecting the bottom edges of the first, second and third reference bodies, or directly the ground identification line in the figure. The y-axis rotation angle between w' and w obtained in this way is β'.
The third point cloud data is converted into a third two-dimensional image in the YZ coordinate plane, with the x-axis value of each point used as the gray value of the corresponding pixel. The included angle between the image edge and the bottom edge of the first, second or third reference body, or the ground, identified in the third two-dimensional image, is taken as the x-axis deflection angle of the third point cloud data.
Specifically, fig. 11 shows the 2D image converted from the 3D point cloud of c1 in the Pose_back(6, 0, 10, 270, 0, 180) posture. The included angle between the arrowed horizontal line and the horizontal axis can be calculated through a 2D image algorithm; the arrowed line may be a line connecting the bottom edges of the reference bodies or directly the ground identification line in the figure. The x-axis rotation angle between w' and w obtained in this way is α'.
The first angle offset matrix and the first transformation matrix are combined to form the third transformation matrix.
The angular deviations of the xyz axes between the first rough world coordinate system w' and the world coordinate system w (the first precise world coordinate system), obtained from the above calculations, are α', β' and λ' respectively. That is, the first point cloud data c1 is transformed through the third transformation matrix, i.e. the first transformation matrix Pose(6, 2, 0, 0, 0, 90) combined with the angular correction Pose(0, 0, 0, α', β', λ'), yielding a coordinate system w″ oriented with high precision relative to w. In addition, as shown in fig. 7, the translation distances from the equipment zero point to the origin of w″ are x', y'+3 and z'+6, from which the transformation of c1 into the pose of w can be obtained.
Specifically, according to the above steps, the pose conversion of the first point cloud data c1 into the first precise world coordinate system w″ in this embodiment may include the following transformation process.
Description of the attitude:

Pose_over(TransX, TransY, TransZ, RotX, RotY, RotZ)

Pose_over: the subscript denotes the view of the transformed 2D image after the pose transformation, where over represents the top view, side the side view, and back the back view.

TransX: translation along the x-axis; movement of the point cloud in the positive x direction is positive, in the negative direction negative;

TransY: translation along the y-axis; movement of the point cloud in the positive y direction is positive, in the negative direction negative;

TransZ: translation along the z-axis; movement of the point cloud in the positive z direction is positive, in the negative direction negative;

RotX: rotation about the x-axis, with the axis extending away from the observer; clockwise rotation is positive, in degrees;

RotY: rotation about the y-axis, with the axis extending away from the observer; clockwise rotation is positive, in degrees;

RotZ: rotation about the z-axis, with the axis extending away from the observer; clockwise rotation is positive, in degrees.
Attitude transformation calculation formula: a point p0 = (x0, y0, z0)^T subjected to the Pose(x_t, y_t, z_t, α, β, γ) attitude transformation is calculated as

p1 = R(α, β, γ) · p0 + (x_t, y_t, z_t)^T

where R(α, β, γ) is the rotation composed of the rotations about the x, y and z axes, giving the transformed coordinates (x1, y1, z1).
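The attitude transformation can be sketched in code as below; the rotation composition order R = Rz·Ry·Rx, the rotate-then-translate order, and the right-hand sign convention are assumptions of this sketch, since the source formula image is not reproduced:

```python
import numpy as np

def pose_transform(points, tx, ty, tz, rot_x, rot_y, rot_z):
    """Apply Pose(tx, ty, tz, rot_x, rot_y, rot_z) to an (N, 3) point cloud.

    Rotations are in degrees about the x, y and z axes; the composition
    order R = Rz @ Ry @ Rx and the rotate-then-translate order are
    assumptions of this illustration.
    """
    a, b, g = np.deg2rad([rot_x, rot_y, rot_z])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(a), -np.sin(a)],
                   [0.0, np.sin(a),  np.cos(a)]])
    ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(b), 0.0, np.cos(b)]])
    rz = np.array([[np.cos(g), -np.sin(g), 0.0],
                   [np.sin(g),  np.cos(g), 0.0],
                   [0.0, 0.0, 1.0]])
    r = rz @ ry @ rx                      # composed rotation
    return points @ r.T + np.array([tx, ty, tz], dtype=float)
```

For example, Pose(6, 2, 0, 0, 0, 90) moves the pan-tilt origin to (6, 2, 0), matching the coarse transform of this embodiment.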
Each point p_n = (x_n, y_n, z_n), n = 0, 1, 2, 3, …, in c1 is converted by the above formula through the Pose(x_t, y_t, z_t, α, β, γ) attitude, which changes the coordinate system of the point cloud. After c1 is transformed through Pose_front(6, 2, 0, 0, 0, 90), the resulting w' has its xyz axes essentially aligned with those of w, apart from small included angles caused by installation error.
Method of measuring the angle and position deviation between c1 and w: with each coordinate system specified as above, the coarse conversion relation Pose_front(6, 2, 0, 0, 0, 90) between the first pan-tilt coordinate system c1 and the world coordinate system w is obtained from the installation position of the first pan-tilt. Transforming c1 through Pose_front(6, 2, 0, 0, 0, 90) yields w', whose xyz axes are essentially aligned with those of w but with small included angles. To correct w' and obtain w″, c1 is transformed through Pose(0, 0, 0, α, β, γ), giving a coordinate system w″ orthogonal to w with high precision.
In this embodiment, step S103 further includes the following installation-error angle correction of the fourth point cloud data obtained by the second laser radar on the second pan-tilt:
the fourth point cloud data is converted into a fourth two-dimensional image in the XY coordinate plane, with the z-axis value of each point used as the gray value of the corresponding pixel; the line connecting the facing sides of the second and third reference bodies with the side of the fourth reference body nearest the central axis of the area is identified in the fourth two-dimensional image, and the included angle between this line and the image edge is calculated as the z-axis deflection angle of the fourth point cloud data;

the fourth point cloud data is converted into a fifth two-dimensional image in the XZ coordinate plane, with the y-axis value of each point used as the gray value of the corresponding pixel; the included angle between the image edge and the bottom edge of the second, third or fourth reference body, or the ground, identified in the fifth two-dimensional image, is taken as the y-axis deflection angle of the fourth point cloud data;

the fourth point cloud data is converted into a sixth two-dimensional image in the YZ coordinate plane, with the x-axis value of each point used as the gray value of the corresponding pixel; the included angle between the image edge and the bottom edge of the second, third or fourth reference body, or the ground, identified in the sixth two-dimensional image, is taken as the x-axis deflection angle of the fourth point cloud data;
the axis deflection angles obtained for the fourth point cloud data are combined into a second angle offset matrix, and the second angle offset matrix and the second transformation matrix are combined to form a fourth transformation matrix.
The above steps process and coordinate-transform the point cloud data obtained by the second laser radar on the second pan-tilt; they are essentially the same as those for the point cloud data obtained by the first laser radar on the first pan-tilt, so they are not repeated, and the specific steps and effects can be found above.
Step S104, converting the third point cloud data into fifth point cloud data in the first precise world coordinate system through a third transformation matrix; converting the fourth point cloud data into sixth point cloud data under a second precise world coordinate system through a fourth transformation matrix; and acquiring an offset matrix according to the positions of the same reference body in the fifth point cloud data and the sixth point cloud data.
As shown in fig. 6, region (1) is the measurement region of c1, region (2) that of c2, and region (3) their intersection. The four black rectangular reference bodies in the figure span regions (1) and (2), tying together the x-axes of the two regions and ensuring the consistency of the x-axis directions of w and w2; since the ground is level, once the xy axes are unified the z-axis is naturally unified as well. The 3D point cloud is then converted into a 2D image again, and the coordinate of the intersection point o' in w is measured as x″2 by conventional means such as two-dimensional image processing algorithms.
In another embodiment, step S104 may further include: acquiring the position offset of the second or third reference body between the fifth and sixth point cloud data, and obtaining the offset matrix from the offset along each axis, specifically as follows.
step S1041, transforming the sixth point cloud data by using a fifth transformation matrix to obtain seventh point cloud data, where third reference volume data in the seventh point cloud data is the same as first reference volume data in the fifth point cloud data.
Step S1042, obtaining an offset distance of the first reference body and the second reference body on the X axis, and adding the X axis offset distance to the fifth transformation matrix to form the offset matrix.
In this embodiment of step S104, the coordinate system is corrected by reference to the positions of the reference bodies: with the first and third reference bodies as references, the sixth point cloud data in the second precise coordinate system is translated into point cloud data in the first precise coordinate system, and the seventh point cloud data is then offset-compensated by the x-axis offset distance x″2 of the first and second reference bodies, yielding two point clouds in the same coordinate system.
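As an illustrative sketch of the offset compensation; comparing centroids of pre-segmented reference-body point subsets is an assumption of this sketch, not the patent's stated estimator:

```python
import numpy as np

def x_offset(ref_in_cloud_a, ref_in_cloud_b):
    """Residual x-axis offset between the same reference body seen in two clouds.

    Both inputs are (N, 3) point subsets already segmented for that body;
    centroid comparison is an assumed, simple estimator.
    """
    return ref_in_cloud_a[:, 0].mean() - ref_in_cloud_b[:, 0].mean()

def compensate(cloud, dx):
    """Shift a whole cloud along x so the two clouds share one coordinate system."""
    return cloud + np.array([dx, 0.0, 0.0])
```

The computed offset plays the role of x″2 above: it is added to the translation that brings the second cloud into the first precise coordinate system.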
Step S105: after the point cloud data of the measured object obtained by the first laser radar is transformed through the third transformation matrix, it is superimposed with the point cloud data of the measured object obtained by the second laser radar after transformation through the fourth transformation matrix and the offset matrix, obtaining the complete point cloud data of the measured object.
Finally, the first and second point cloud data are converted into the same world coordinate system w and then stitched, combined and calibrated. Through the various posture transformations above, the 3D data is converted into 2D images, and the measured angles and displacements are compensated through the coordinate transformations and formula calculations, giving the pose that converts the first point cloud data c1 into the world coordinate system w and the pose that converts the second point cloud data c2 into the world coordinate system w. The first point cloud data c1 and the second point cloud data c2, each passed through its respective transformation, complete the stitching and calibration of the dual pan-tilt point clouds; in this embodiment the point clouds are converted into the w coordinate system corresponding to the actual physical space.
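Once each cloud has its own correction transform into w, the stitching step reduces to applying the transforms and concatenating; the sketch below uses placeholder identity transforms in place of the embodiment's actual matrices:

```python
import numpy as np

def stitch(cloud_a, transform_a, cloud_b, transform_b):
    """Merge two corrected point clouds into one array in the shared frame w.

    Each transform is a callable mapping an (N, 3) array into w; after
    correction, simple concatenation is all the stitching requires.
    """
    return np.vstack([transform_a(cloud_a), transform_b(cloud_b)])

# Identity transforms stand in for the third and fourth transformation matrices:
merged = stitch(np.zeros((4, 3)), lambda p: p, np.ones((2, 3)), lambda p: p)
```

In practice the two callables would apply the third transformation matrix and the combined fourth transformation and offset matrices described above.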
In the measurement region with the first and second pan-tilts installed as disclosed above, the third transformation matrix, and the combination of the fourth transformation matrix and the offset matrix, are obtained as described. When the carriage of the vehicle to be loaded in the measurement area is scanned and detected, the point cloud data c1 obtained by the first laser radar on the first pan-tilt is converted into w through the third transformation matrix, and the point cloud data c2 obtained by the second laser radar on the second pan-tilt is converted into w through the combination of the fourth transformation matrix and the offset matrix. After the original point clouds c1 and c2 are converted into the same world coordinate system w, superimposing them yields a point cloud vehicle with the same size as the actual vehicle, as restored in fig. 11.
In this method, the local point cloud data of the front and rear parts of a large-span object, acquired separately by at least two laser radars in the measurement area, are stitched together: coordinate system conversion is corrected in advance through the reference bodies arranged in the measurement area, the angular deviation between each pan-tilt coordinate system and the world coordinate system is obtained by correcting the pose of the 3D-to-2D virtual camera, and the relation between the two pan-tilt coordinate systems is obtained through a common object reference plane. Finally the coordinate systems of the two pan-tilts are converted into the same equipment coordinate system, and the point clouds acquired independently by the two pan-tilts are stitched into point cloud data of an oversized object; only a small overlapping portion between the point clouds acquired by the two laser radars is needed to stitch them, thereby obtaining the point cloud data of the oversized object.
Example 3
In another embodiment, a device for correcting laser radar point cloud data is further disclosed, used for adjusting the measurement data of a laser radar installed over a measurement area, comprising: a first point cloud acquisition module, used to acquire, through the laser radar, first point cloud data in the pan-tilt coordinate system containing at least a first reference body and a second reference body, the first and second reference bodies being placed in sequence in the measurement area of the laser radar along the direction of the area's central axis; a first transformation module, used to obtain a first transformation matrix of the laser radar according to its installation position and to convert the first point cloud data through it into third point cloud data in a first rough world coordinate system; a deflection angle detection module, used to convert the third point cloud data into two-dimensional images in the three coordinate planes and to obtain, on each two-dimensional image, the included angle of the first and/or second reference body relative to the image edge as the deflection angle about the coordinate axis perpendicular to that image; and a correction module, used to combine the obtained axis deflection angles with the first transformation matrix into a correction matrix of the laser radar, and to convert the point cloud data of the measured object obtained by the laser radar through the correction matrix into point cloud data in the precise world coordinate system of the measurement area.
In this embodiment, the deflection angle detection module specifically includes: a first image acquisition module, used to convert the third point cloud data into a first two-dimensional image in the XY coordinate plane, with the z-axis value of each point as the gray value of the corresponding pixel, and to identify the included angle between the image edge and the line connecting the sides of the first and second reference bodies on the first two-dimensional image as the z-axis deflection angle; a second image acquisition module, used to convert the third point cloud data into a second two-dimensional image in the XZ coordinate plane, with the y-axis value of each point as the gray value of the corresponding pixel, and to identify the included angle between the image edge and the bottom edge of the first or second reference body on the second two-dimensional image as the y-axis deflection angle; and a third image acquisition module, used to convert the third point cloud data into a third two-dimensional image in the YZ coordinate plane, with the x-axis value of each point as the gray value of the corresponding pixel, and to identify the included angle between the image edge and the bottom edge of the first or second reference body on the third two-dimensional image as the x-axis deflection angle.
In this embodiment, at least two laser radars with an intersection region are installed over the measurement area, and the first point cloud acquisition module is further used to acquire first point cloud data containing the first, second and third reference bodies through the first laser radar, and fourth point cloud data containing the second, third and fourth reference bodies through the second laser radar. The first to fourth reference bodies are placed in sequence in the measurement areas of the two laser radars along the direction of the area's central axis. The second and third reference bodies lie in the measurement intersection region of the two laser radars, with their mutually facing sides placed on the central axis of the area, and with the rear side of the second reference body and the front side of the third reference body each lying on a vertical plane perpendicular to the central axis. The first and fourth reference bodies lie in the non-intersecting parts of the first and second laser radar measurement areas respectively, with the same side face of each placed on the central axis of the area.
It should be noted that the embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and the same or similar parts can be referred to between embodiments. Since the laser radar point cloud data correction device disclosed in this embodiment corresponds to the laser radar point cloud data correction method disclosed above, its description is brief, and the relevant points can be found in the description of the method.
The invention also discloses a device for correcting laser radar point cloud data, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the laser radar point cloud data correction method of any of the above embodiments.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the server.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and the like. The general-purpose processor may be a microprocessor, or any conventional processor; it is the control center of the server device and connects the various parts of the overall server device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the server device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored therein. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like. The memory may include high-speed random access memory, and may further include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the lidar point cloud data correction method is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method in the above embodiments may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably added to or removed from as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals or telecommunication signals.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some technical features equivalently replaced, without departing in essence from the scope of the technical solutions of the embodiments of the present invention.
In summary, the above-mentioned embodiments are only preferred embodiments of the present invention, and all equivalent changes and modifications made in the claims of the present invention should be covered by the claims of the present invention.

Claims (10)

1. A method for correcting laser radar point cloud data, for adjusting the measurement data of a laser radar installed over a measurement area, characterized by comprising the following steps:
S1, acquiring, through the laser radar, first point cloud data in a holder coordinate system containing at least a first reference body and a second reference body, wherein the first reference body and the second reference body are placed in sequence within the measurement area of the laser radar along the direction of the central axis of the area;
S2, obtaining a first transformation matrix of the laser radar according to its installation position, and converting the first point cloud data through the first transformation matrix into third point cloud data in a first, coarse world coordinate system;
S3, converting the third point cloud data into two-dimensional images on each of the three coordinate planes, and obtaining on each two-dimensional image the included angle of the first reference body and/or the second reference body relative to the image edge, taken as the deflection angle about the coordinate axis perpendicular to that image;
S4, combining the obtained deflection angles about all axes with the first transformation matrix to form a correction matrix for the laser radar, and converting point cloud data of a measured object obtained by the laser radar through the correction matrix to obtain point cloud data in an accurate world coordinate system of the measurement area.
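Purely as an illustrative sketch, and not the patented implementation, the matrix composition of steps S2 and S4 can be written in Python with NumPy: a coarse 4x4 installation transform is composed with small corrective rotations built from the deflection angles of step S3. The function names and the sign convention for the angles are assumptions of this sketch.

```python
import numpy as np

def rotation_about_axis(axis: str, angle_rad: float) -> np.ndarray:
    """3x3 rotation matrix about a single coordinate axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    if axis == "z":
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    raise ValueError(f"unknown axis: {axis}")

def correction_matrix(first_transform: np.ndarray,
                      ax: float, ay: float, az: float) -> np.ndarray:
    """Compose the coarse 4x4 installation transform with rotations that
    cancel the measured deflection angles about the X, Y and Z axes."""
    R = (rotation_about_axis("z", -az)
         @ rotation_about_axis("y", -ay)
         @ rotation_about_axis("x", -ax))
    fix = np.eye(4)
    fix[:3, :3] = R
    return fix @ first_transform

def correct_points(points: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply a 4x4 correction matrix to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ M.T)[:, :3]
```

For example, a cloud skewed by a small yaw about Z is restored by a correction matrix built with that same deflection angle, which is the intent of step S4.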
2. The lidar point cloud data correction method according to claim 1, wherein the step S3 specifically comprises:
converting the third point cloud data into a first two-dimensional image on the XY coordinate plane, taking the Z-axis data of each point as the gray value of the corresponding point on the first two-dimensional image, and identifying the included angle between the line connecting the sides of the first reference body and the second reference body on the first two-dimensional image and the image edge as the Z-axis deflection angle;
converting the third point cloud data into a second two-dimensional image on the XZ coordinate plane, taking the Y-axis data of each point as the gray value of the corresponding point on the second two-dimensional image, and identifying the included angle between the bottom edge of the first reference body or the second reference body on the second two-dimensional image and the image edge as the Y-axis deflection angle;
and converting the third point cloud data into a third two-dimensional image on the YZ coordinate plane, taking the X-axis data of each point as the gray value of the corresponding point on the third two-dimensional image, and identifying the included angle between the bottom edge of the first reference body or the second reference body on the third two-dimensional image and the image edge as the X-axis deflection angle.
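The projection-and-angle idea of claim 2 can be sketched as follows; this is a hypothetical illustration, not the claimed implementation. The cloud is rasterized onto the XY plane with Z as the gray value, and the angle of a reference-body edge relative to the image axis is estimated from the principal direction of points along that edge. The helper names and the PCA-based angle estimate are assumptions of this sketch; a real implementation might instead run a Hough transform or line fit on the rendered image.

```python
import numpy as np

def rasterize_xy(points: np.ndarray, resolution: float = 0.05) -> np.ndarray:
    """Project an (N, 3) cloud onto the XY plane: each cell stores the
    maximum Z value of the points falling into it (the 'gray value')."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / resolution).astype(int)
    h, w = idx.max(axis=0) + 1
    img = np.zeros((h, w))
    for (i, j), z in zip(idx, points[:, 2]):
        img[i, j] = max(img[i, j], z)
    return img

def edge_angle(edge_points_xy: np.ndarray) -> float:
    """Angle between a reference-body edge (given as 2-D points along it)
    and the image X axis, from the first principal component."""
    centered = edge_points_xy - edge_points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dx, dy = vt[0]
    ang = np.arctan2(dy, dx)
    # an edge direction is sign-ambiguous; normalize into (-pi/2, pi/2]
    if ang <= -np.pi / 2:
        ang += np.pi
    elif ang > np.pi / 2:
        ang -= np.pi
    return float(ang)
```

The same two helpers apply to the XZ and YZ projections by permuting the coordinate columns before rasterizing.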
3. The lidar point cloud data correction method according to claim 1, wherein the step S3 specifically comprises:
converting the third point cloud data into a first two-dimensional image on the XY coordinate plane, taking the Z-axis data of each point as the gray value of the corresponding point on the first two-dimensional image, and obtaining the included angle between the line connecting the sides of the first reference body and the second reference body on the first two-dimensional image and the image edge as the Z-axis deflection angle;
converting the third point cloud data into a second two-dimensional image on the XZ coordinate plane, taking the Y-axis data of each point as the gray value of the corresponding point on the second two-dimensional image, and obtaining the included angle between the imaged ground line on the second two-dimensional image and the image edge as the Y-axis deflection angle;
and converting the third point cloud data into a third two-dimensional image on the YZ coordinate plane, taking the X-axis data of each point as the gray value of the corresponding point on the third two-dimensional image, and obtaining the included angle between the imaged ground line on the third two-dimensional image and the image edge as the X-axis deflection angle.
4. The lidar point cloud data correction method according to claim 2 or 3, wherein at least two laser radars with overlapping measurement regions are installed over the measurement area, and the step S1 further comprises:
acquiring, through a first laser radar, first point cloud data comprising a first reference body, a second reference body and a third reference body, and acquiring, through a second laser radar, fourth point cloud data comprising the second reference body, the third reference body and a fourth reference body; wherein the first to fourth reference bodies are placed in sequence along the direction of the central axis of the area within the measurement regions of the two laser radars; the second and third reference bodies lie in the overlapping measurement region of the two laser radars, one side of the second reference body and the opposing side of the third reference body each lying on the central axis of the area, while the rear side of the second reference body and the front side of the third reference body each lie on a vertical plane perpendicular to the central axis; and the first and fourth reference bodies lie in the non-overlapping parts of the first and second laser radar measurement regions, respectively, with the same side surface of each lying on the central axis of the area.
5. The lidar point cloud data correction method according to claim 4, wherein the step S3 comprises:
converting the third point cloud data into a first two-dimensional image on an XY coordinate plane, and taking Z-axis data of each point as a gray value corresponding to each point on the first two-dimensional image;
and identifying, in the first two-dimensional image, the line connecting the side of the first reference body close to the central axis of the area with the intersection side of the second reference body and the third reference body, and calculating the included angle between that line and the image edge as the Z-axis deflection angle.
6. A lidar point cloud data correction device for adjusting the measurement data of a laser radar installed over a measurement area, characterized by comprising:
a first point cloud acquisition module, configured to acquire, through the laser radar, first point cloud data in a holder coordinate system containing at least a first reference body and a second reference body, the first reference body and the second reference body being placed in sequence within the measurement area of the laser radar along the direction of the central axis of the area;
a first transformation module, configured to obtain a first transformation matrix of the laser radar according to its installation position, and to convert the first point cloud data through the first transformation matrix into third point cloud data in a first, coarse world coordinate system;
a deflection angle detection module, configured to convert the third point cloud data into two-dimensional images on each of the three coordinate planes, and to obtain on each two-dimensional image the included angle of the first reference body and/or the second reference body relative to the image edge as the deflection angle about the coordinate axis perpendicular to that image;
and a correction module, configured to combine the obtained deflection angles about all axes with the first transformation matrix to form a correction matrix for the laser radar, and to convert point cloud data of a measured object obtained by the laser radar through the correction matrix into point cloud data in an accurate world coordinate system of the measurement area.
7. The lidar point cloud data correction apparatus according to claim 6, wherein the deflection angle detection module specifically comprises:
a first image acquisition module, configured to convert the third point cloud data into a first two-dimensional image on the XY coordinate plane, take the Z-axis data of each point as the gray value of the corresponding point on the first two-dimensional image, and identify the included angle between the line connecting the sides of the first reference body and the second reference body on the first two-dimensional image and the image edge as the Z-axis deflection angle;
a second image acquisition module, configured to convert the third point cloud data into a second two-dimensional image on the XZ coordinate plane, take the Y-axis data of each point as the gray value of the corresponding point on the second two-dimensional image, and identify the included angle between the bottom edge of the first reference body or the second reference body on the second two-dimensional image and the image edge as the Y-axis deflection angle;
and a third image acquisition module, configured to convert the third point cloud data into a third two-dimensional image on the YZ coordinate plane, take the X-axis data of each point as the gray value of the corresponding point on the third two-dimensional image, and identify the included angle between the bottom edge of the first reference body or the second reference body on the third two-dimensional image and the image edge as the X-axis deflection angle.
8. The lidar point cloud data correction device according to claim 7, wherein at least two laser radars with overlapping measurement regions are installed over the measurement area, and the first point cloud acquisition module is further configured to acquire, through the first laser radar, first point cloud data comprising the first, second, and third reference bodies, and to acquire, through the second laser radar, fourth point cloud data comprising the second, third, and fourth reference bodies; wherein the first to fourth reference bodies are placed in sequence along the direction of the central axis of the area within the measurement regions of the two laser radars; the second and third reference bodies lie in the overlapping measurement region of the two laser radars, one side of the second reference body and the opposing side of the third reference body each lying on the central axis of the area, while the rear side of the second reference body and the front side of the third reference body each lie on a vertical plane perpendicular to the central axis; and the first and fourth reference bodies lie in the non-overlapping parts of the first and second laser radar measurement regions, respectively, with the same side surface of each lying on the central axis of the area.
9. A lidar point cloud data correction apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-5.
CN202211136133.2A 2022-09-19 2022-09-19 Laser radar point cloud data correction method and device and storage medium Pending CN115810093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211136133.2A CN115810093A (en) 2022-09-19 2022-09-19 Laser radar point cloud data correction method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211136133.2A CN115810093A (en) 2022-09-19 2022-09-19 Laser radar point cloud data correction method and device and storage medium

Publications (1)

Publication Number Publication Date
CN115810093A true CN115810093A (en) 2023-03-17

Family

ID=85482645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211136133.2A Pending CN115810093A (en) 2022-09-19 2022-09-19 Laser radar point cloud data correction method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115810093A (en)

Similar Documents

Publication Publication Date Title
CN110021046B (en) External parameter calibration method and system for camera and laser radar combined sensor
CN107564069B (en) Method and device for determining calibration parameters and computer readable storage medium
Luhmann et al. Close range photogrammetry
US6822748B2 (en) Calibration for 3D measurement system
CN111754573B (en) Scanning method and system
Oh et al. A piecewise approach to epipolar resampling of pushbroom satellite images based on RPC
CN111486864B (en) Multi-source sensor combined calibration method based on three-dimensional regular octagon structure
González-Jorge et al. Photogrammetry and laser scanner technology applied to length measurements in car testing laboratories
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
US20230348237A1 (en) Mapping of a Crane Spreader and a Crane Spreader Target
CN111208493A (en) Rapid calibration method of vehicle-mounted laser radar in whole vehicle coordinate system
CN112102375B (en) Point cloud registration reliability detection method and device and mobile intelligent equipment
CN111179351B (en) Parameter calibration method and device and processing equipment thereof
CN114723825A (en) Camera coordinate mapping method, system, medium and electronic terminal used in unmanned driving scene
Boehm et al. Accuracy of exterior orientation for a range camera
Zexiao et al. Study on a full field of view laser scanning system
CN112525106B (en) Three-phase machine cooperative laser-based 3D detection method and device
CN115810093A (en) Laser radar point cloud data correction method and device and storage medium
CN115856924A (en) Large-span object three-dimensional point cloud data combination method and device
CN114004949B (en) Airborne point cloud-assisted mobile measurement system placement parameter checking method and system
CN117876502B (en) Depth calibration method, depth calibration equipment and depth calibration system
CN118500359B (en) Method and system for rapidly compensating observation errors of optical satellite of rational function model
EP4379662A1 (en) Provision of real world and image sensor correspondence points for use in calibration of an imaging system for three dimensional imaging based on light triangulation
CN117173256B (en) Calibration method and device of line dynamic laser system with double vibrating mirrors
Nakini et al. Distortion correction in 3d-modeling of roots for plant phenotyping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 17th Floor, Building A, Building 10, No. 611, Dongguan Road, Puyan Street, Binjiang District, Hangzhou City, Zhejiang Province 310000

Applicant after: Hangzhou Mingdu Intelligent Manufacturing Co.,Ltd.

Address before: 17th Floor, Building A, Building 10, No. 611, Dongguan Road, Puyan Street, Binjiang District, Hangzhou City, Zhejiang Province 310000

Applicant before: Hangzhou Mingdu Intelligent Technology Co.,Ltd.

Country or region before: China