WO2021197345A1 - Method and device for detecting the remaining volume in a closed space based on lidar - Google Patents


Publication number
WO2021197345A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2021/084109
Inventor
胡荣东
李敏
李雅盟
彭清
曾钰廷
Original Assignee
长沙智能驾驶研究院有限公司
Application filed by 长沙智能驾驶研究院有限公司
Publication of WO2021197345A1

Classifications

    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067 Reshaping or unfolding 3D tree structures onto 2D planes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • This application relates to the field of lidar technology, and in particular to a method and device for detecting the remaining volume in a closed space based on lidar.
  • At present, the amount of cargo loaded in a closed space such as a carriage is mainly estimated by weight, using a weight sensor.
  • However, goods of the same weight may differ greatly in volume.
  • As a result, the actual total volume of the cargo may exceed the total capacity of the carriage even though the weight limit has not been reached.
  • Other methods for estimating the volume of goods in a closed space are basically limited to manual visual inspection or rough estimation from images captured by sensors.
  • To address this, the present invention proposes a lidar-based method for detecting the remaining volume in a closed space, including: obtaining a three-dimensional point cloud of the closed space; filtering out the point clouds of the inner walls of the closed space from the three-dimensional point cloud to obtain a cargo surface point cloud; projecting the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud; performing gray-scale filling of the areas occluded by cargo in the two-dimensional grayscale image to obtain a gray-scale filled image, where the filled gray value is the gray value of the pixel corresponding to the point cloud of the cargo that occludes the area; and determining, at least according to the gray-scale filled image or a transformation thereof, the remaining height of the closed space not occupied by goods, so as to obtain the remaining volume of the closed space.
  • acquiring the three-dimensional point cloud in the enclosed space further includes: acquiring the attitude angle of the lidar; and performing coordinate conversion on the three-dimensional point cloud in the enclosed space according to the attitude angle.
  • In some embodiments, projecting the three-dimensional point cloud into a two-dimensional grayscale image includes: projecting the points of the three-dimensional point cloud into a two-dimensional image based on a preset mapping relationship between points in the three-dimensional point cloud and pixels in the two-dimensional image; and determining the gray value of each pixel in the two-dimensional image based on a preset mapping relationship between unit gray level and height value and on the height values of the points in the three-dimensional point cloud, to obtain the two-dimensional grayscale image.
  • In some embodiments, performing gray-scale filling on the areas occluded by cargo in the two-dimensional grayscale image to obtain a gray-scale filled image includes: traversing the pixels of the two-dimensional grayscale image row by row; when a pixel with a gray value of zero is detected, traversing a preset number of pixels along its row in the direction of the lidar; and, if among the traversed pixels there are pixels whose gray value is not zero, filling the zero-valued pixel according to the maximum gray value of those pixels.
  • In some embodiments, determining the remaining height of the closed space not occupied by cargo at least according to the gray-scale filled image or a transformation thereof, and obtaining the remaining volume of the closed space, includes: determining the gray-level difference between each pixel and the highest point; converting the gray-level difference into an unoccupied volume for each pixel according to a preset volume value corresponding to a unit gray level; and accumulating the unoccupied volumes of all pixels to obtain the remaining volume of the closed space.
  • In some embodiments, after obtaining the gray-scale filled image, the method further includes performing median filtering and bilateral filtering operations on the gray-scale filled image.
  • The application further provides a lidar-based device for detecting the remaining volume in a closed space, including: a point cloud acquisition module configured to acquire a three-dimensional point cloud of the closed space; a segmentation module configured to filter out the point clouds of the inner walls of the closed space from the three-dimensional point cloud to obtain a cargo surface point cloud; a projection module configured to project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud; a filling module configured to perform gray-scale filling on the areas occluded by cargo in the two-dimensional grayscale image to obtain a gray-scale filled image, wherein the filled gray value is the gray value of the pixel corresponding to the point cloud of the cargo that occludes the area; and a detection module configured to determine, at least according to the gray-scale filled image or a transformation thereof, the remaining height of the closed space not occupied by goods, so as to obtain the remaining volume of the closed space.
  • In some embodiments, the device further includes: an attitude angle acquisition module configured to acquire the attitude angle of the lidar; and a conversion module configured to perform coordinate transformation on the three-dimensional point cloud of the closed space according to the attitude angle.
  • The present application further provides a computer device including a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the aforementioned method when executing the computer program.
  • the present application further includes a detection system for the remaining volume in a closed space based on lidar, including lidar, and the device as described above.
  • The application further provides an intelligent loading device, including: a closed space; a processor and a memory coupled with the processor; and a lidar configured to obtain a three-dimensional point cloud of the closed space; wherein the processor is configured to perform the method as described above.
  • Fig. 1 is an application environment diagram of a method for detecting remaining volume in a closed space based on lidar in an embodiment of the present invention
  • FIG. 2 is a schematic flow chart of a method for detecting the remaining volume in a closed space based on lidar according to an embodiment of the present invention
  • Figure 3 is a front view of a closed space according to an embodiment of the present invention.
  • Figure 4 is a perspective view of a closed space according to an embodiment of the present invention.
  • Figure 5 is a front view of a closed space according to an embodiment of the present invention.
  • Figure 6 is a top view of a closed space according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of the steps of obtaining the attitude angle of the lidar according to an embodiment of the present invention.
  • Fig. 8 is a schematic diagram of a point cloud on the surface of a cargo according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a two-dimensional grayscale image obtained by projecting the point cloud on the surface of the cargo shown in FIG. 8.
  • FIG. 10 is a schematic diagram of an image obtained after filling and post-processing the two-dimensional grayscale image shown in FIG. 9 according to an embodiment of the present invention.
  • FIG. 11 is a structural block diagram of a laser radar-based residual volume detection device in a closed space according to an embodiment of the present invention
  • Fig. 12 is a diagram of the internal structure of a computer device according to an embodiment of the present invention.
  • Fig. 1 is an application environment diagram of a method for detecting remaining volume in a closed space based on lidar according to an embodiment of the present invention.
  • The lidar-based method for detecting the remaining volume in a closed space provided in this application can be applied to the application environment shown in FIG. 1.
  • the application scenario may be a cargo distribution/transit point
  • the enclosed space 101 may be a movable container or carriage, or a fixed warehouse such as a closed warehouse.
  • the load may be goods with a regular shape, such as boxed goods.
  • the load may be goods with an irregular shape.
  • the lidar 103 is installed inside the enclosed space 101.
  • the placement of the goods usually follows the rules of placement from the inside out and bottom to top.
  • the lidar 103 may be installed on the side of the enclosed space 101 close to the door.
  • the lidar 103 can be installed at a position in the closed space that is not easily blocked, for example, near the top of the closed space.
  • the lidar 103 can also be arranged in other positions.
  • the lidar 103 inside the enclosed space 101 is communicatively connected with the main control device 105 of the application scenario.
  • the main control device 105 receives the three-dimensional point cloud collected by each lidar 103, and detects the enclosed space 101 based on the three-dimensional point cloud.
  • the main control device 105 may also be communicatively connected with the display device 107, send the detection result for the enclosed space 101 to the display device 107, and issue prompts concerning the enclosed space 101.
  • the display device 107 can be a mobile phone terminal of a manager (or a driver), or a display screen of a cargo distribution/transfer point (such as a display screen set in a cargo distribution/transfer point site or in an office).
  • In some embodiments, a reminder threshold may be set for the remaining volume of the enclosed space.
  • When the remaining volume falls below the threshold, the main control device sends a reminder message to the display device; for example, a prompt indicating that the remaining volume of the enclosed space is insufficient is displayed on the display device.
  • each enclosed space 101 additionally includes a computing unit.
  • the calculation unit is configured to receive the three-dimensional point cloud collected by the lidar 103, detect the remaining volume in the enclosed space 101 based on the three-dimensional point cloud, and then send the detection result to the main control device 105.
  • In an embodiment, a lidar-based method for detecting the remaining volume in a closed space includes the following steps:
  • Step 201 Obtain a three-dimensional point cloud of the current enclosed space through a lidar.
  • the lidar is installed at the top of the enclosed space on the side close to the door in the enclosed space.
  • the lidar is used to collect the three-dimensional point cloud inside the enclosed space.
  • the collection frequency can be set for the lidar, such as collection according to a preset time interval, so as to perform periodic detection of the enclosed space.
  • the lidar acquisition may be triggered every time the door of the enclosed space is closed, so as to detect the remaining volume of the enclosed space.
  • the manager can also send collection instructions to the main control device through a display device (such as a mobile phone terminal) as needed, and the main control device sends the collection instructions to the lidar, and the lidar responds to the collection instructions to perform collection.
  • the lidar sends the collected 3D point cloud to the main control device.
  • Step 202 Obtain the attitude angle of the lidar.
  • Due to the position where the lidar is installed in actual operation, its attitude may not be aligned with the coordinate axes of the coordinate system of the enclosed space. Therefore, for the convenience of subsequent calculations, the attitude angle of the lidar can be obtained and the lidar-based coordinate system can be converted accordingly.
  • the attitude angle of the lidar refers to the installation angle of the lidar relative to the reference object, including but not limited to roll angle, pitch angle, and yaw angle.
  • The attitude angle of the lidar can be determined according to the three-dimensional point cloud of the enclosed space. In practical applications, since the position of the lidar is basically fixed, the attitude angle only needs to be calculated once, and the attitude angle obtained the first time can be reused in subsequent point cloud coordinate conversions.
  • the ideal operation scenario for the calibration of the lidar attitude angle is a closed space with no load, that is, calibration is performed to determine the attitude angle of the lidar when no objects are loaded in the enclosed space.
  • the calibration of the lidar can also be calibrated for each detection in real time, so that the calibrated point cloud will be more accurate.
  • the detection system coordinate system is established with the lidar as the origin.
  • Fig. 3 is a front view of a closed space according to an embodiment of the present invention
  • Fig. 4 is a perspective view of a closed space according to an embodiment of the present invention.
  • The detection system coordinate system is shown in Figure 3 and Figure 4, where the origin o of the coordinate system is the position of the lidar, the X axis is parallel to the long side of the closed space (e.g., a container), the Y axis is parallel to the short side of the container, and the Z axis is parallel to the height of the container.
  • Fig. 5 is a front view of a closed space according to an embodiment of the present invention
  • Fig. 6 is a top view of a closed space according to an embodiment of the present invention.
  • the lidar installation position is shown in Fig. 5 and Fig. 6 and is set at the position o.
  • The distances from the lidar along the Y axis to the two side walls of the enclosed space are b1 and b2; the distance from the lidar along the Z axis to the top wall is b3, and the distance to the bottom is a; along the X axis, the distance from the lidar to the door is b5, and the distance to the wall opposite the door is b4.
  • the attitude angle may include a roll angle, a pitch angle, and a yaw angle.
  • Fig. 7 shows a method for obtaining the attitude angle of a lidar according to an embodiment of the present application. In this process, the enclosed space is empty or in a so-called calibration state. The method includes:
  • Step 701 Acquire the three-dimensional point cloud inside the enclosed space collected by the lidar when the enclosed space is empty. All points obtained in this state belong to the inner surfaces of the closed space; for example, if the closed space is a cuboid, they form the point clouds of its six inner planes.
  • Step 702 Determine the bottom point cloud of the closed space from the three-dimensional point cloud inside the closed space according to the vertical distance of the lidar from the bottom surface of the closed space.
  • The bottom point cloud of the enclosed space refers to the point cloud on its bottom wall. Given that the vertical distance of the lidar from the bottom plane of the enclosed space is a, the set of points whose z coordinate value is less than -a in the three-dimensional point cloud inside the enclosed space is taken as the bottom point cloud of the enclosed space.
  • Step 703 Calculate the plane normal vector of the point cloud at the bottom of the closed space.
  • the normal vector is a concept of space analytic geometry.
  • the vector represented by a straight line perpendicular to the plane is the normal vector of the plane.
  • The normal vector can be calculated by first computing the covariance matrix of the bottom point cloud of the closed space and then performing singular value decomposition on the covariance matrix.
  • The singular vectors obtained by the singular value decomposition describe the three main directions of the point cloud data; the direction perpendicular to the plane is the one with the smallest variance, and the smallest variance corresponds to the smallest singular value, so the singular vector with the smallest singular value is selected as the normal vector of the plane.
  • C = (1/n) Σ_i (s_i − s̄)(s_i − s̄)^T (1)
  • where C is the covariance matrix, s_i is a point in the bottom point cloud of the closed space, s̄ is the centroid of the bottom point cloud, and n is the number of points.
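The covariance-and-SVD procedure above can be sketched in Python with NumPy (a minimal illustration; the function and variable names are chosen here, not taken from the patent):

```python
import numpy as np

def plane_normal(points):
    # points: (N, 3) array of a roughly planar point cloud.
    # Build the covariance matrix C of the centered points, then take
    # the singular vector with the smallest singular value: it spans
    # the direction of least variance, i.e. the plane normal.
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, _, vt = np.linalg.svd(cov)
    return vt[-1]  # unit normal (sign is arbitrary)
```

For the bottom wall, the input would be the set of points with z < -a selected in step 702.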
  • Step 704 Calculate the roll angle and the pitch angle of the lidar according to the plane normal vector of the point cloud at the bottom of the enclosed space.
  • The pitch angle is the angle between the X axis of the lidar coordinate system and the bottom plane of the enclosed space, and the roll angle is related to the angle between the Y axis of the lidar coordinate system and the plane normal vector of the bottom point cloud.
  • The roll angle and the pitch angle can be calculated from the plane normal vector as:
  • T_1 = (a_1, b_1, c_1) (2)
  • pitch = arcsin(a_1 / ||T_1||), roll = arcsin(b_1 / ||T_1||) (3)
  • where T_1 is the plane normal vector of the bottom point cloud of the enclosed space, ||T_1|| = sqrt(a_1^2 + b_1^2 + c_1^2), pitch is the pitch angle of the lidar, and roll is the roll angle of the lidar (equal to 90° minus the angle between the Y axis and T_1).
  • Step 705 Determine the side wall point cloud of the enclosed space from the three-dimensional point cloud inside the enclosed space according to the vertical distance of the lidar from the bottom of the enclosed space and the distance of the lidar relative to the side wall.
  • The side wall point cloud of the enclosed space refers to the point cloud of the wall portions perpendicular to the bottom surface. Taking the left wall as an example, given that the vertical distance of the lidar from the bottom of the enclosed space is a, the bottom point cloud is first removed, and the points whose z coordinate lies in the range (-a, 0] are taken as a filtered point cloud (that is, the point cloud remaining after removing the bottom of the enclosed space).
  • From the filtered point cloud, the points whose y coordinate falls within [b1, b1 + Δb) are taken as the side wall point cloud, where Δb is a distance threshold with 0 < Δb < 1.
  • The point clouds of the other side walls are determined similarly.
  • Step 706 Calculate the plane normal vector of the point cloud on the side wall of the enclosed space.
  • Step 707 Calculate the yaw angle of the lidar according to the plane normal vector of the point cloud on the side wall of the enclosed space.
  • The yaw angle is the rotation about the Z axis of the lidar coordinate system, measured as the angle between the lidar coordinate axes and the side wall of the enclosed space.
  • The yaw angle can be calculated as:
  • T_2 = (a_2, b_2, c_2) (4)
  • yaw = arcsin(a_2 / ||T_2||) (5)
  • where T_2 is the plane normal vector of the side wall point cloud of the enclosed space and yaw is the yaw angle.
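Equation (5) is a one-liner; the sign convention below is an assumption, since the text does not specify it:

```python
import numpy as np

def yaw_from_sidewall_normal(t2):
    # t2 = (a2, b2, c2): normal vector of a side wall in the lidar
    # frame; with zero yaw this normal lies along the Y axis, and its
    # X component measures the yaw angle: yaw = arcsin(a2 / ||T2||).
    t2 = np.asarray(t2, dtype=float)
    return np.arcsin(t2[0] / np.linalg.norm(t2))
```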
  • Step 203 Perform coordinate conversion on a three-dimensional point cloud in a closed space according to the attitude angle of the lidar.
  • the attitude angle includes roll angle, pitch angle and yaw angle.
  • The roll angle and pitch angle are obtained from the bottom point cloud of the enclosed space and its plane normal vector, and the yaw angle is obtained from the side wall point cloud of the enclosed space and its plane normal vector.
  • Therefore, in this embodiment, the three-dimensional point cloud of the closed space is converted so that, after conversion, the bottom point cloud of the closed space is parallel to the XOY plane of the detection system coordinate system, and the converted side wall point clouds of the closed space are parallel to the XOZ and YOZ planes of the detection system coordinate system, respectively.
  • The point cloud conversion according to the roll angle and the pitch angle converts the bottom point cloud of the three-dimensional point cloud so that it becomes parallel to the XOY plane of the detection system coordinate system. Specifically, according to the pitch angle of the lidar, the three-dimensional point cloud is rotated around the X axis of the lidar coordinate system, and according to the roll angle of the lidar, it is rotated around the Y axis of the lidar coordinate system, so that the converted bottom point cloud of the enclosed space is parallel to the XOY plane of the detection system coordinate system:
  • p_g = R_y · R_x · p_c (6)
  • where R_x and R_y are the rotation matrices around the x axis and around the y axis, p_g is the converted point cloud (whose bottom is parallel to the XOY plane), and p_c is the original three-dimensional point cloud of the entire closed space.
  • The point cloud conversion according to the yaw angle converts the side wall point clouds of the enclosed space so that they become parallel to the XOZ and YOZ planes of the detection system coordinate system. Specifically, according to the yaw angle of the lidar, the converted point cloud whose bottom is parallel to the XOY plane is rotated around the Z axis of the lidar coordinate system, so that the side wall point clouds in the converted three-dimensional point cloud are parallel to the XOZ plane of the detection system coordinate system:
  • p = R_z · p_g (7)
  • where R_z is the rotation matrix around the z axis, p_g is the point cloud after the previous conversion, and p is the point cloud after the yaw conversion.
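The two conversions of equations (6) and (7) amount to applying elementary rotation matrices. A sketch follows; the rotation signs depend on how the angles are measured, so they are an assumption here:

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def level_cloud(cloud, roll, pitch, yaw):
    # cloud: (N, 3). First undo pitch (about X) and roll (about Y) so
    # the bottom becomes parallel to XOY, then undo yaw (about Z) so
    # the side walls align with XOZ/YOZ, as in equations (6) and (7).
    r = rot_z(-yaw) @ rot_y(-roll) @ rot_x(-pitch)
    return cloud @ r.T
```

Applying `level_cloud` with the measured attitude angles to the raw cloud `p_c` yields the leveled cloud used in the remaining steps.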
  • Step 204 Filter out the point clouds on the inner walls of the enclosed space in the three-dimensional point cloud to obtain a point cloud on the surface of the goods.
  • the collected 3D point cloud includes the inner wall point cloud of the enclosed space and the cargo surface point cloud in the enclosed space.
  • the inner wall point cloud of the enclosed space refers to the point cloud data of the top, side walls, and bottom of the enclosed space (determined by the aforementioned similar method)
  • the cargo surface point cloud refers to the point cloud data of the cargo in the enclosed space.
  • It should be noted that the point cloud collected by the lidar often covers only the unobstructed goods closest to the lidar.
  • the purpose of filtering out the point cloud of the inner wall of the enclosed space is to reduce the amount of calculation and to avoid the interference of the point cloud data of the inner wall with the estimation of the cargo volume.
  • Step 205 Project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the point in the corresponding cargo surface point cloud.
  • Specifically, the points of the cargo surface point cloud in the enclosed space are converted into pixels of a two-dimensional image (for example, one or more points of the point cloud may correspond to, or fall into, a pixel or unit area of size u_r × v_r in the two-dimensional image).
  • the gray value of the corresponding pixel in the two-dimensional image is determined according to the height value of the point cloud in the Z direction, and the gray value of the pixel in the two-dimensional image is related to the height value of the point cloud, such as positive correlation and negative correlation Or index correlation, etc.
  • For example, the greater the height of a point in the point cloud, the greater the gray value of the corresponding pixel; the smaller the height, the smaller the gray value; but the mapping is not limited to this. If the same pixel in the two-dimensional image corresponds to multiple points of the point cloud, the gray value corresponding to the greatest height among those points can be taken as the gray value of the pixel.
  • The coordinates in the two-dimensional image can be calculated by the following formula:
  • u_i = (x_i − x_min) / u_r, v_i = (y_i − y_min) / v_r (8)
  • where u_i and v_i are the row and column coordinates of the pixel onto which the i-th point of the point cloud is projected; x_i and y_i are the X-axis and Y-axis coordinates of the i-th point of the cargo surface point cloud; x_min and y_min are the minimum values of the cargo surface point cloud p_r on the X axis and Y axis; and u_r and v_r are the sizes, on the two-dimensional image, of the pixel onto which a single point of the cargo surface point cloud is projected. These two values can be set as required.
  • Specifically, the gray value of a point can be determined according to the relative height of the point and the height value corresponding to a unit gray level.
  • The relative height of a point refers to the difference between the height value of the point and the minimum height.
  • The minimum height is the minimum height value in the cargo surface point cloud, usually the height value of the bottom of the enclosed space.
  • The relative height of a point in the point cloud therefore indicates its height relative to the bottom of the enclosed space.
  • The ratio of the relative height of a point to the height value corresponding to a unit gray level is the gray value of the point:
  • g_i = (z_i − z_min) / G_resolution (9)
  • where g_i is the gray value of the pixel corresponding to the i-th point, z_i is its height value, z_min is the minimum height, and G_resolution is the height value corresponding to a unit gray level.
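Equations (8) and (9) together define the projection. A compact sketch (names are assumed; the max rule for pixels hit by several points follows the text above):

```python
import numpy as np

def project_to_gray(points, u_r, v_r, g_resolution):
    # points: (N, 3) cargo-surface cloud. Pixel indices follow
    # equation (8); gray values follow equation (9). When several
    # points fall into one pixel, the greatest height wins.
    x, y, z = points.T
    u = ((x - x.min()) / u_r).astype(int)
    v = ((y - y.min()) / v_r).astype(int)
    img = np.zeros((u.max() + 1, v.max() + 1))
    gray = (z - z.min()) / g_resolution
    np.maximum.at(img, (u, v), gray)  # unbuffered max-scatter
    return img
```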
  • FIG. 9 is a schematic diagram of a two-dimensional gray image obtained by projecting a point cloud on the surface of the cargo shown in FIG. 8.
  • Step 206 Perform gray-scale filling on the areas occluded by cargo in the two-dimensional grayscale image to obtain a gray-scale filled image, where the filled gray value is the gray value of the pixel corresponding to the point cloud of the cargo that occludes the area.
  • Because the laser can only scan the goods on the exposed surface, goods in areas blocked by the surface goods cannot be scanned. Under normal circumstances, however, the goods in the enclosed space are arranged neatly, in order from the inside to the outside (the outside refers to the doorway of the enclosed space, and the inside to the end farthest from the door) and from bottom to top, so if an area is blocked by surface cargo, it is very likely that cargo is also placed there. Because the laser cannot scan these blocked areas, no surface point cloud can be obtained for them, and the two-dimensional image cannot show the gray values the corresponding areas should have. If this part of the area were simply ignored, the computed remaining volume of the enclosed space would not match reality, reducing detection accuracy.
  • Therefore, gray value filling is performed to supplement the gray values of the areas covered by the surface cargo.
  • Specifically, the two-dimensional grayscale image is traversed row by row, in order from the inside to the outside (that is, toward the door of the enclosed space). When the gray value of the i-th pixel is 0, the pixels along its row toward the lidar are examined; if a point j with a nonzero gray value is found, the gray value of point j is assigned to point i. If there is more than one point whose gray value is not 0, the largest of those gray values is assigned to point i. In this way, gray value filling of the areas covered by surface cargo in the two-dimensional grayscale image is realized.
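A direct sketch of this row-wise filling (the image is assumed oriented so that larger column indices are closer to the lidar/door, and the search window length is a free parameter):

```python
import numpy as np

def fill_occluded(img, max_search):
    # For every zero pixel, look up to max_search pixels further along
    # the row toward the lidar; if any nonzero gray value is found
    # there, fill the pixel with the largest such value.
    out = img.copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            if out[r, c] == 0:
                window = img[r, c + 1 : c + 1 + max_search]
                nonzero = window[window > 0]
                if nonzero.size:
                    out[r, c] = nonzero.max()
    return out
```

Filling from the original image (rather than the partially filled one) keeps filled values from propagating beyond the search window, which matches the fixed traversal budget described above.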
  • the method further includes optimization processing for the gray-scale filled image.
  • For example, median filtering and bilateral filtering post-processing operations can be performed on the gray-scale filled image.
  • Median filtering suppresses noise while protecting edge information, and bilateral filtering smooths the image while preserving edges; a morphological dilation operation is then performed.
  • the distance between some adjacent points in the three-dimensional point cloud will be greater than the pixel distance of the two-dimensional image obtained by the projection, resulting in holes in the two-dimensional image. If the pixel size is increased, the resolution of the two-dimensional image will be reduced.
  • The dilation operation on the two-dimensional image can effectively reduce such holes.
  • The image post-processing method is not limited to morphological dilation; a morphological closing operation can also be performed on the image to fill black hole areas, followed by a morphological opening operation to enhance edge information and filter out discrete interference pixels.
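The dilation and median steps can be sketched with NumPy alone (in practice library routines such as OpenCV's filters would be used; the 3×3 kernel size here is an illustrative choice, not from the patent):

```python
import numpy as np

def _windows(img):
    # All nine 3x3-shifted views of an edge-padded image.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return np.stack([p[r:r + h, c:c + w] for r in range(3) for c in range(3)])

def dilate3(img):
    # Morphological dilation: each pixel takes the neighborhood
    # maximum, which closes small holes left by sparse projection.
    return _windows(img).max(axis=0)

def median3(img):
    # 3x3 median filter: suppresses isolated outlier pixels while
    # largely preserving edges.
    return np.median(_windows(img), axis=0)
```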
  • the optimized grayscale filled image is a deformation of the grayscale filled image.
  • FIG. 10 shows a gray-scale filled image obtained after filling and optimizing the two-dimensional gray-scale image shown in FIG. 9 according to an embodiment of the present invention.
  • Step 208 Determine the remaining height of the enclosed space not occupied by cargo at least according to the grayscale filling image or its deformation, and obtain the remaining volume of the enclosed space.
  • The gray values in the two-dimensional grayscale image are determined from the height values of the three-dimensional point cloud and thus represent its height information. Therefore, the remaining height of the space not occupied by goods can be determined from the gray value of each pixel. Specifically, the gray value of each pixel is compared with the maximum gray value, and the gray-level difference between the two can be converted into the remaining height of the space not occupied by goods above that pixel. Here, the maximum gray value corresponds to the maximum value of the Z axis in the three-dimensional point cloud and represents the maximum height of goods that can be accommodated in the enclosed space. From the gray level of the unoccupied height and the preset volume value corresponding to a unit gray level, the unoccupied volume of each pixel is obtained; the unoccupied volumes of all pixels are accumulated to obtain the remaining volume of the enclosed space.
  • the grayscale difference between each pixel and the highest point is the difference between the height of the highest point and the height value corresponding to the pixel/the height value corresponding to the unit gray value, which represents the gray value of the height of the space not occupied by the goods.
  • ⁇ h represents the remaining height of the closed space not occupied by the goods at the i-th point in the three-dimensional point cloud
  • G resolution is the height value corresponding to the unit gray value
  • H v is the i-th point and the highest point in the three-dimensional point cloud (closed space The grayscale difference between the highest height of the goods that can be accommodated.
  • v h represents the volume value of the preset unit gray value (u r , v r are the size of the two-dimensional pixel corresponding to a three-dimensional point cloud), and v is the unoccupied volume of the ith point in the three-dimensional point cloud .
  • V is the remaining volume in the enclosed space, which is the sum of the unoccupied volume represented by each pixel.
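The accumulation described above (equations (14)–(18) of the description) can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation; the function name, the toy image, and the assumption that the ceiling gray value defaults to the image maximum are all my own:

```python
import numpy as np

def remaining_volume(gray_img, g_resolution, u_r, v_r, g_max=None):
    """Estimate the unoccupied volume from a grayscale-filled height image.

    gray_img     : 2-D array of gray values (height / g_resolution)
    g_resolution : height (m) represented by one gray level
    u_r, v_r     : real-world size (m) of one pixel along X and Y
    g_max        : gray value of the highest loadable point (defaults to the
                   image maximum -- an assumption for this sketch)
    """
    if g_max is None:
        g_max = gray_img.max()
    h_v = g_max - gray_img          # per-pixel gray difference to the ceiling (H_v)
    v_h = g_resolution * u_r * v_r  # volume of one gray level in one pixel (v_h)
    return float(h_v.sum() * v_h)   # V = sum of the per-pixel free columns

# toy example: 2x2 image, ceiling gray 10, 1 gray level = 0.1 m, 0.05 m pixels
img = np.array([[10, 8], [6, 10]])
vol = remaining_volume(img, g_resolution=0.1, u_r=0.05, v_r=0.05)
```

Per-pixel free columns of 0, 2, 4, and 0 gray levels give 6 × 0.00025 m³ = 0.0015 m³ of free space in this toy case.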
  • In the above lidar-based method for detecting the remaining volume in an enclosed space, the inner-wall point clouds are filtered out of the current three-dimensional point cloud collected by the lidar to accurately segment the cargo surface point cloud; the cargo surface point cloud is then converted into a two-dimensional grayscale image that records the height information of the cargo surface points, and the regions occluded by cargo are filled with the gray values of neighboring pixels, so that the two-dimensional grayscale image reflects the actual cargo loading situation.
  • From an image-processing standpoint, the height of the space not occupied by goods is determined from the gray value of each pixel, and the remaining volume in the enclosed space is detected. Because the lidar collects three-dimensional data, the data source is highly accurate.
  • After projecting the three-dimensional point cloud into a two-dimensional grayscale image, the algorithm of this application fills the regions occluded by cargo with the gray values of neighboring pixels, so that the image reflects the actual cargo loading and the accuracy of the loading-space detection is improved.
  • The algorithm of this application is simple to operate and can be used for real-time monitoring of the cargo situation in the space.
  • The lidar-based method of the present application for detecting the remaining volume in an enclosed space can be applied widely wherever the remaining volume must be measured, and is suitable for standard-size containers (any size, e.g. 20 ft, 40 ft, and 45 ft), truck compartments, and train carriages.
  • a lidar-based detection device for the remaining volume in a closed space including:
  • the point cloud acquisition module 1101 is configured to acquire the three-dimensional point cloud of the current enclosed space through the lidar.
  • the attitude angle acquisition module 1102 is configured to acquire the attitude angle of the lidar. In some embodiments, the attitude angle acquisition module 1102 is not necessary.
  • the conversion module 1103 is configured to perform coordinate conversion on a three-dimensional point cloud in a closed space according to the attitude angle of the lidar. In some embodiments, the conversion module 1103 is not necessary.
  • The segmentation module 1104 is configured to filter the inner-wall point clouds out of the coordinate-converted scan point cloud of the current space, obtaining the detectable cargo surface point cloud.
  • The projection module 1105 is configured to project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud.
  • The filling module 1106 is configured to perform grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region.
  • The detection module 1107 is configured to determine, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, to obtain the remaining volume of the enclosed space.
  • the projection module includes:
  • The image conversion module is used to project the cargo surface point cloud into a two-dimensional image according to the abscissas and ordinates of the cargo surface points.
  • The gray value processing module is used to determine the gray value of each point cloud pixel in the two-dimensional image according to the height values of the cargo surface points, obtaining the two-dimensional grayscale image corresponding to the cargo surface point cloud.
  • The gray value filling module is configured to traverse each pixel of the two-dimensional grayscale image in order; when the current pixel has a gray value of zero, a preset number of pixels are traversed in a set direction. If a pixel with a non-zero gray value exists among them, the current pixel is filled according to that pixel's gray value; if several pixels with non-zero gray values exist among them, the current pixel is filled with the largest of those gray values.
  • The detection module is configured to obtain the height difference between any point of the three-dimensional point cloud and the highest point; determine from the height difference the gray difference between the corresponding pixel and the highest point; obtain the unoccupied volume of the pixel from the gray difference and the volume value corresponding to a preset unit gray level; and accumulate the unoccupied volumes of all pixels to obtain the remaining volume of the enclosed space.
  • Each module in the above-mentioned lidar-based remaining volume detection device in a closed space can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • In the above lidar-based device for detecting the remaining volume in an enclosed space, the inner-wall point clouds are filtered out of the current three-dimensional point cloud collected by the lidar to accurately segment the cargo surface point cloud; the cargo surface point cloud is then converted into a two-dimensional grayscale image that records the height information of the cargo surface points, and the regions occluded by cargo are filled with the gray values of neighboring pixels, so that the two-dimensional grayscale image reflects the actual cargo loading situation.
  • From an image-processing standpoint, the height of the space not occupied by goods is determined from the gray value of each pixel, and the remaining volume in the enclosed space is detected. Because the lidar collects three-dimensional data, the data source is highly accurate.
  • After projecting the three-dimensional point cloud into a two-dimensional grayscale image, the algorithm of this application fills the regions occluded by cargo with the gray values of neighboring pixels, so that the image reflects the actual cargo loading and the accuracy of the detection of the carriage's loadable space is improved.
  • The algorithm of this application is simple to operate and can be used for real-time monitoring of the cargo situation in the space.
  • the application further includes a lidar-based detection system for remaining volume in a closed space, including lidar, and the aforementioned lidar-based detection device for remaining volume in a closed space.
  • The application further includes an intelligent loading device, comprising: an enclosed space; a processor and a memory coupled to the processor; and a lidar configured to acquire a three-dimensional point cloud of the enclosed space, wherein the processor is configured to execute the lidar-based method for detecting the remaining volume in an enclosed space described above.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 12.
  • The computer device includes a processor, a memory, and a communication interface connected through a system bus. The processor of the computer device provides computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • The communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner; the wireless manner can be implemented through Wi-Fi, a carrier network, NFC (near-field communication), or other technologies.
  • FIG. 12 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied.
  • A specific computer device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method in each of the foregoing embodiments are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A lidar-based method, device, and system for detecting the remaining volume in an enclosed space, together with a computer device and an intelligent loading device. The method includes: acquiring a three-dimensional point cloud of the enclosed space (201); filtering the point clouds of the inner walls of the enclosed space out of the three-dimensional point cloud to obtain a cargo surface point cloud (204); projecting the cargo surface point cloud into a two-dimensional grayscale image (205), wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud; performing grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image (206), wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region; and determining, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, thereby obtaining the remaining volume of the enclosed space (208).

Description

Lidar-based method and device for detecting the remaining volume in an enclosed space — Technical Field
This application relates to the field of lidar technology, and in particular to a lidar-based method and device for detecting the remaining volume in an enclosed space.
Background
In modern logistics, enclosed spaces such as box bodies are commonly used to carry goods, for example the containers of container trucks, truck compartments, and train carriages. The amount of cargo loaded into a closed box body is one of the key factors affecting logistics efficiency. During transport or internal transfer, the amount of cargo currently in a carriage or container must be known so that logistics staff can dispatch suitable vehicles for loading and unloading.
The amount of cargo already loaded in an enclosed space such as a box body is usually estimated by weight, using weight sensors. However, because different goods have different densities, goods of the same weight can differ greatly in volume, so even when the cargo does not exceed the vehicle's maximum transport weight, the actual total cargo volume may already exceed the total capacity of the carriage. Other methods of estimating the amount of cargo in an enclosed space rely essentially on manual visual inspection or on sensor-captured images. These approaches suffer from two problems: (1) while keeping the weight within limits, the estimated total cargo volume may be smaller than the actual total volume, so that not all goods can be shipped as planned; and (2) while keeping the weight within limits, the estimated total cargo volume may be larger than the actual volume, wasting space.
Summary of the Invention
In view of the technical problems in the prior art, the present invention provides a lidar-based method for detecting the remaining volume in an enclosed space, comprising: acquiring a three-dimensional point cloud of the enclosed space; filtering the point clouds of the inner walls of the enclosed space out of the three-dimensional point cloud to obtain a cargo surface point cloud; projecting the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud; performing grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region; and determining, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, thereby obtaining the remaining volume of the enclosed space.
In particular, acquiring the three-dimensional point cloud of the enclosed space further comprises: acquiring the attitude angles of the lidar; and performing coordinate conversion on the three-dimensional point cloud of the enclosed space according to the attitude angles.
In particular, projecting the three-dimensional point cloud into a two-dimensional grayscale image comprises: projecting the three-dimensional point cloud into a two-dimensional image based on a preset mapping between points of the three-dimensional point cloud and pixels of the two-dimensional grayscale image, together with the abscissa and ordinate of each point; and determining the gray value of each pixel of the two-dimensional image based on a preset mapping between unit gray level and height value, together with the height value of each point of the three-dimensional point cloud, so as to obtain the two-dimensional grayscale image.
In particular, performing grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image comprises: traversing the pixels of the two-dimensional grayscale image row by row; when a pixel with a gray value of zero is detected, traversing a preset number of pixels horizontally along its row in the direction of the lidar; and if a pixel with a non-zero gray value exists among the preset number of traversed pixels, filling the zero-valued pixel according to the gray value of that pixel.
In particular, if several pixels with non-zero gray values exist among the preset number of traversed pixels, the zero-valued pixel is filled with the largest of their gray values.
In particular, if no pixel with a non-zero gray value exists among the preset number of traversed pixels, the traversal continues along the row toward the lidar, and the gray value of the first non-zero pixel encountered is used as the basis for filling.
In particular, determining, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo to obtain the remaining volume of the enclosed space comprises: determining the gray difference between any pixel and the highest point; obtaining the unoccupied volume corresponding to each pixel from the gray difference and the volume value corresponding to a preset unit gray level; and accumulating the unoccupied volumes of all pixels to obtain the remaining volume of the enclosed space.
In particular, after the grayscale-filled image is obtained, the method further comprises applying median filtering and bilateral filtering to the grayscale-filled image.
The present application further includes a lidar-based device for detecting the remaining volume in an enclosed space, comprising: a point cloud acquisition module configured to acquire a three-dimensional point cloud of the enclosed space; a segmentation module configured to filter the inner-wall point clouds of the enclosed space out of the three-dimensional point cloud to obtain a cargo surface point cloud; a projection module configured to project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud; a filling module configured to perform grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region; and a detection module configured to determine, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, thereby obtaining the remaining volume of the enclosed space.
In particular, the device further comprises: an attitude angle acquisition module configured to acquire the attitude angles of the lidar; and a conversion module configured to perform coordinate conversion on the three-dimensional point cloud of the current space according to the attitude angles.
The present application further includes a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method described above when executing the computer program.
The present application further includes a lidar-based system for detecting the remaining volume in an enclosed space, comprising a lidar and any of the devices described above.
The present application further includes an intelligent loading device comprising: an enclosed space; a processor and a memory coupled to the processor; and a lidar configured to acquire a three-dimensional point cloud of the enclosed space, wherein the processor is configured to execute the method described above.
Brief Description of the Drawings
Preferred embodiments of the present invention are described in further detail below with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of an application environment of a lidar-based method for detecting the remaining volume in an enclosed space according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a lidar-based method for detecting the remaining volume in an enclosed space according to an embodiment of the present invention;
FIG. 3 is a front view of an enclosed space according to an embodiment of the present invention;
FIG. 4 is a perspective view of an enclosed space according to an embodiment of the present invention;
FIG. 5 is a front view of an enclosed space according to an embodiment of the present invention;
FIG. 6 is a top view of an enclosed space according to an embodiment of the present invention;
FIG. 7 is a schematic flowchart of the steps of acquiring the attitude angles of the lidar according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a cargo surface point cloud according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the two-dimensional grayscale image obtained by projecting the cargo surface point cloud shown in FIG. 8;
FIG. 10 is a schematic diagram of the image obtained after filling and post-processing the two-dimensional grayscale image shown in FIG. 9 according to an embodiment of the present invention;
FIG. 11 is a structural block diagram of a lidar-based device for detecting the remaining volume in an enclosed space according to an embodiment of the present invention;
FIG. 12 is a diagram of the internal structure of a computer device according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the following detailed description, reference may be made to the accompanying drawings, which form a part of this application and illustrate specific embodiments of the application. In the drawings, like reference numerals denote substantially similar components in different figures. The specific embodiments of the application are described below in sufficient detail to enable a person of ordinary skill with the relevant knowledge and technology to implement the technical solutions of the application. It should be understood that other embodiments may also be used, and that structural, logical, or electrical changes may be made to the embodiments of the application.
FIG. 1 is a diagram of an application environment of a lidar-based method for detecting the remaining volume in an enclosed space according to an embodiment of the present invention. The lidar-based detection method provided by the present application can be applied in the environment shown in FIG. 1, where the application scenario contains several enclosed spaces 101 for holding loads. In some embodiments, the scenario may be a cargo distribution or transfer point, and the enclosed space 101 may be movable, such as a container or a vehicle carriage, or fixed, such as a closed warehouse. The load may be regularly shaped goods, for example boxed goods; in some embodiments, the load may be irregularly shaped goods.
According to one embodiment, the lidar 103 is mounted inside the enclosed space 101. During loading, goods are usually stacked from the inside outward and from the bottom up. In some embodiments, the lidar 103 may be mounted on the side of the enclosed space 101 near the door. Preferably, the lidar 103 may be mounted at a position in the enclosed space that is relatively unlikely to be occluded, for example near its top. Of course, in other embodiments the lidar 103 may also be placed elsewhere.
According to one embodiment, the lidar 103 inside the enclosed space 101 is communicatively connected to a master control device 105 of the application scenario. The master control device 105 receives the three-dimensional point clouds collected by each lidar 103 and inspects the enclosed spaces 101 based on them. In specific application scenarios, the master control device 105 may also be communicatively connected to a display device 107 and send the detection result for the enclosed space 101 to the display device 107 as a prompt. The display device 107 may be the mobile terminal of a manager (or driver), or a display screen at the cargo distribution/transfer point (for example, a screen in the yard or in an office). In some embodiments, an alert threshold may be set for the remaining volume of the enclosed space: when the detected remaining volume falls below the threshold, the master control device sends a prompt to the display device, for example a notice on the display device that the remaining volume of the enclosed space is insufficient.
In some embodiments, each enclosed space 101 additionally includes a computing unit configured to receive the three-dimensional point cloud collected by the lidar 103, detect the remaining volume in the enclosed space 101 based on that point cloud, and then send the detection result to the master control device 105.
In some embodiments, as shown in FIG. 2, a lidar-based method for inspecting an enclosed space is provided, comprising the following steps:
Step 201: acquire a three-dimensional point cloud of the current enclosed space by means of the lidar.
In some embodiments, the lidar is mounted near the top of the enclosed space on the side close to the door. During loading, the lidar collects a three-dimensional point cloud of the interior of the enclosed space. In some embodiments, a collection frequency may be set for the lidar, for example collection at preset time intervals, so that the enclosed space is inspected periodically. According to other embodiments, a collection can also be triggered each time the door of the enclosed space is closed, so as to detect the remaining volume of the space. According to other embodiments, a manager may, as needed, send a collection instruction through a display device (for example a mobile terminal) to the master control device, which forwards it to the lidar; the lidar collects in response to the instruction and sends the collected three-dimensional point cloud to the master control device.
Step 202: acquire the attitude angles of the lidar.
Because of where the lidar is actually installed, its attitude may not be parallel to the coordinate axes of the enclosed space's coordinate system. For convenience of subsequent computation, the attitude angles of the lidar can therefore be acquired and a conversion based on the lidar's coordinate system performed.
The attitude angles of the lidar are the mounting angles of the lidar relative to a reference, including but not limited to the roll angle, pitch angle, and yaw angle. The attitude angles can be determined from the three-dimensional point cloud of the current space. In practice, since the lidar's position is essentially fixed, the attitude angles only need to be computed once; the first set of attitude angles can then be reused for subsequent point cloud coordinate conversions.
In some embodiments, the ideal scenario for calibrating the lidar attitude angles is an empty enclosed space, i.e., calibration is performed to determine the attitude angles with nothing loaded in the space. In some embodiments, the lidar may instead be recalibrated in real time for every detection, which yields a more accurately calibrated point cloud.
In some embodiments, a detection system coordinate system is established with the lidar as origin. FIG. 3 is a front view of an enclosed space according to an embodiment of the present invention; FIG. 4 is a perspective view of an enclosed space according to an embodiment of the present invention. As shown in FIGS. 3 and 4, the origin o of the detection system coordinate system is the lidar position, the X axis is parallel to the long side of the enclosed space (for example a container), the Y axis is parallel to its short side, and the Z axis is parallel to its height.
FIG. 5 is a front view of an enclosed space according to an embodiment of the present invention, and FIG. 6 is a top view of an enclosed space according to an embodiment of the present invention. In some embodiments, the lidar is mounted at point o, as shown in FIGS. 5 and 6. The distances from the lidar along the Y axis to the two side walls are b1 and b2; the distance along the Z axis to the ceiling is b3, and to the floor is a; the distance along the X axis to the door is b5, and to the wall opposite the door is b4.
In some embodiments, the attitude angles may include the roll angle, pitch angle, and yaw angle. FIG. 7 shows a method of acquiring the attitude angles of the lidar according to an embodiment of the present application; throughout this process the enclosed space is empty, i.e., in the so-called calibration state. The method comprises:
Step 701: with the enclosed space empty, acquire the three-dimensional point cloud of the interior of the enclosed space collected by the lidar. All points obtained in this state belong to the inner surfaces of the enclosed space; for example, if the space is a cuboid, they are the point clouds of its six interior planes.
Step 702: determine the floor point cloud of the enclosed space from the interior three-dimensional point cloud according to the vertical distance between the lidar and the floor of the enclosed space.
The floor point cloud of the current space is the point cloud of its floor portion. Given that the vertical distance from the lidar to the floor plane of the enclosed space is a, the set of points of the interior three-dimensional point cloud whose z coordinate is less than −a is taken as the floor point cloud of the enclosed space.
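The floor selection of step 702 is a simple threshold on the z coordinate. A minimal numpy sketch (the function name and the toy data are illustrative, not from the patent):

```python
import numpy as np

def bottom_points(cloud, a):
    """Select the floor point cloud: points whose z coordinate lies below
    the lidar origin by more than its mounting height `a` (i.e. z < -a)."""
    cloud = np.asarray(cloud, dtype=float)
    return cloud[cloud[:, 2] < -a]

# two of the three points lie below z = -2.0 and are kept as floor points
pts = np.array([[0.0, 0.0, -2.1], [1.0, 0.5, -0.3], [2.0, 1.0, -2.4]])
floor = bottom_points(pts, a=2.0)
```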
Step 703: compute the plane normal vector of the floor point cloud of the enclosed space.
A normal vector is a concept from spatial analytic geometry: the vector along a line perpendicular to a plane is the normal vector of that plane.
To compute the normal vector, first compute the covariance matrix of the floor point cloud, then apply singular value decomposition to the covariance matrix. The singular vectors obtained describe the three principal directions of the point cloud data; the normal perpendicular to the plane represents the direction of least variance, and least variance corresponds to the smallest singular value, so the singular vector with the smallest singular value is taken as the plane normal:
C = (1/n) · Σᵢ (sᵢ − s̄)(sᵢ − s̄)ᵀ   (1)
where C is the covariance matrix, sᵢ is a point of the floor point cloud of the enclosed space, and
s̄ = (1/n) · Σᵢ sᵢ
is the mean of the point cloud of the enclosed space.
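The covariance-plus-SVD normal estimation of step 703 can be sketched with numpy. This is a minimal illustration under the step's own definitions, not the patent's implementation; the function name and sample points are my own:

```python
import numpy as np

def plane_normal(points):
    """Normal of a roughly planar point set: the singular vector of the
    covariance matrix associated with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)          # s_i - s_bar
    cov = centered.T @ centered / len(pts)     # covariance matrix C, eq. (1)
    _, _, vt = np.linalg.svd(cov)              # singular value decomposition
    return vt[-1]                              # direction of least variance

# noiseless points on the plane z = 0: the normal is (0, 0, +/-1)
xy = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 1, 0]])
n = plane_normal(xy)
```

`numpy.linalg.svd` returns singular values in descending order, so the last row of `vt` is the least-variance direction, i.e. the plane normal (up to sign).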
Step 704: compute the roll and pitch angles of the lidar from the plane normal vector of the floor point cloud of the enclosed space.
Here the pitch angle is the angle between the X axis of the lidar coordinate system and the floor plane of the enclosed space, and the roll angle is the angle between the Y axis of the lidar coordinate system and the plane normal vector of the floor point cloud.
Specifically, the roll and pitch angles are computed as:
T1 = (a1, b1, c1)   (2)
α = arctan(a1/c1),  β = arctan(b1/c1)   (3)
where T1 is the plane normal vector of the floor point cloud of the enclosed space, α is the roll angle of the lidar, and β is the pitch angle of the lidar.
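Equation (3) appears only as an image placeholder in the source, so the arctangent form used below is one reading that is consistent with the rotations of equations (6)–(8): the pitch β (rotation about X) cancels the b1 component of the floor normal and the roll α (rotation about Y) cancels a1. Treat this as an assumption rather than the patent's exact formula:

```python
import numpy as np

def roll_pitch_from_floor_normal(t1):
    """Roll and pitch angles that level the floor normal T1 = (a1, b1, c1).

    Assumption (source equation is an image): pitch beta = arctan(b1/c1)
    pairs with the rotation about X, roll alpha = arctan(a1/c1) with the
    rotation about Y, so a level floor normal (0, 0, 1) gives zero angles.
    """
    a1, b1, c1 = t1
    beta = np.arctan2(b1, c1)    # pitch
    alpha = np.arctan2(a1, c1)   # roll
    return alpha, beta

alpha, beta = roll_pitch_from_floor_normal((0.0, 0.0, 1.0))  # level: both 0
a2, b2 = roll_pitch_from_floor_normal((0.0, np.tan(0.1), 1.0))
```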
Step 705: determine the side-wall point cloud of the enclosed space from the interior three-dimensional point cloud, according to the vertical distance from the lidar to the floor of the enclosed space and the distance from the lidar to the side wall.
The side-wall point cloud of the current space is the point cloud of the wall portions of the enclosed space perpendicular to the floor. Taking the left wall as reference, for example: given that the vertical distance from the lidar to the floor of the enclosed space is a, the points whose z coordinates fall in the range [0, −a) are removed, giving the once-filtered point cloud (i.e., the cloud remaining after the floor of the current space is removed). Since the distance between the left face of the current space and the lidar is the known distance b1, on the basis of the once-filtered cloud, and to avoid interference from distant noise points, the points whose y coordinates relative to the lidar fall in the range [b1, b1+Δb) are taken as the side-face point cloud of the current space, where Δb is a distance threshold with 1 > Δb > 0. The point clouds of the other side walls are determined analogously.
Step 706: compute the plane normal vector of the side-wall point cloud of the enclosed space.
The plane normal vector of the side-wall point cloud of the current space is computed in the same way as in step 703 and is not repeated here.
Step 707: compute the yaw angle of the lidar from the plane normal vector of the side-wall point cloud of the enclosed space.
Here the yaw angle is the angle between the Z axis of the lidar coordinate system and the side face of the current space.
Specifically, the yaw angle is computed as:
T2 = (a2, b2, c2)   (4)
γ = arctan(a2/b2)   (5)
where T2 is the plane normal vector of the side-wall point cloud of the enclosed space and γ is the yaw angle.
Step 203: perform coordinate conversion on the three-dimensional point cloud of the enclosed space according to the attitude angles of the lidar.
As mentioned above, the attitude angles include the roll angle, pitch angle, and yaw angle, where the roll and pitch angles are obtained from the floor point cloud of the enclosed space and its plane normal vector, and the yaw angle from the side-wall point cloud and its plane normal vector. In this embodiment, the three-dimensional point cloud of the enclosed space is therefore converted so that, after conversion, the floor point cloud is parallel to the XOY plane of the detection system coordinate system and the side-wall point clouds are parallel to its XOZ and YOZ planes, respectively.
Point cloud conversion by the roll and pitch angles serves to make the floor point cloud within the three-dimensional point cloud of the enclosed space parallel to the XOY plane of the detection system coordinate system. Specifically, the three-dimensional point cloud of the current space is rotated around the X axis of the lidar coordinate system according to the pitch angle, and around the Y axis according to the roll angle, after which the floor point cloud of the enclosed space is parallel to the XOY plane of the detection system coordinate system, as follows:
R_x = [[1, 0, 0], [0, cos β, −sin β], [0, sin β, cos β]]   (6)
R_y = [[cos α, 0, sin α], [0, 1, 0], [−sin α, 0, cos α]]   (7)
p_g = R_y · R_x · p_c   (8)
where R_x and R_y are the rotation matrices about the x and y axes, p_g is the converted floor point cloud of the enclosed space, and p_c is the original three-dimensional point cloud of the whole enclosed space.
Point cloud conversion by the yaw angle serves to make the side-wall point clouds of the enclosed space parallel to the XOY and XOZ planes of the detection system coordinate system. Specifically, the point cloud already leveled parallel to the floor is rotated around the Z axis of the lidar coordinate system according to the yaw angle, after which the side-wall point clouds within the three-dimensional point cloud are parallel to the XOZ plane of the detection system coordinate system, as follows:
R_z = [[cos γ, −sin γ, 0], [sin γ, cos γ, 0], [0, 0, 1]]   (9)
p = R_z · p_g   (10)
where R_z is the rotation matrix about the z axis, p_g is the converted floor point cloud, and p is the point cloud with converted side walls.
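Applying the chain of rotations p = R_z · R_y · R_x · p_c can be sketched with numpy. The matrix entries are the standard right-handed rotation matrices, which is an assumption here because equations (6), (7), and (9) appear only as image placeholders in the source:

```python
import numpy as np

def level_cloud(cloud, alpha, beta, gamma):
    """Apply R_z @ R_y @ R_x to every point of an (N, 3) cloud, matching
    p = R_z * R_y * R_x * p_c; alpha is roll, beta is pitch, gamma is yaw."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    r_x = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])   # pitch about X
    r_y = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])   # roll about Y
    r_z = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # yaw about Z
    return (r_z @ r_y @ r_x @ np.asarray(cloud, float).T).T

# yaw of 90 degrees maps the x-axis onto the y-axis
pts = np.array([[1.0, 0.0, 0.0]])
out = level_cloud(pts, 0.0, 0.0, np.pi / 2)
```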
Step 204: filter the point clouds of the inner walls of the enclosed space out of the three-dimensional point cloud to obtain the cargo surface point cloud.
Since the lidar is mounted inside the enclosed space, the collected three-dimensional point cloud includes both the inner-wall point clouds of the enclosed space and the cargo surface point cloud inside it. The inner-wall point clouds are the point data of the ceiling, side walls, and floor of the enclosed space (determined by methods similar to those described above), while the cargo surface point cloud is the point data of the goods inside the space. Because goods occlude one another, what the lidar captures is usually the point cloud of the foremost, unoccluded goods. Filtering out the inner-wall point clouds reduces the amount of computation and prevents the wall point data from interfering with the cargo volume estimate.
Step 205: project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud.
In some embodiments, the cargo surface point cloud in the enclosed space is converted into pixels of a two-dimensional image (for example, one or more points of the cloud may correspond to, or fall into, a pixel, i.e. a unit region, of size Ur*Vr in the two-dimensional image). Meanwhile, the gray value of the corresponding pixel is determined from the point cloud's height value in the Z direction, so that the gray values of the pixels of the two-dimensional image are related to the height values of the point cloud, for example positively, negatively, or exponentially correlated. In some embodiments, the greater the height value of a point in the cloud, the greater the gray value of its corresponding pixel, and the smaller the height, the smaller the gray value, though this is not limiting. If a single pixel of the two-dimensional image corresponds to several points of the cloud, the gray value corresponding to the greatest height among those points can be taken as the gray value of that pixel.
According to one embodiment, the two-dimensional image coordinates can be computed with the following formulas:
u_i = [(x_i − x_min)/u_r]   (11)
v_i = [(y_i − y_min)/v_r]   (12)
where u_i and v_i are the row and column coordinates of the pixel onto which the i-th point of the cloud is projected, x_i and y_i are the X and Y coordinates of the i-th point of the cargo surface point cloud, x_min and y_min are the minima of the cargo surface point cloud p_r on the X and Y axes, and u_r and v_r are the dimensions of the pixel onto which a single point of the cargo surface point cloud is projected. In some embodiments, u_r and v_r represent the size of each pixel within the two-dimensional image; both values can be set as required.
Specifically, the gray value of a point can be determined from the point's relative height and the height value corresponding to a unit gray level. The relative height of a point is the difference between its height value and the minimum height, where the minimum height is the smallest height value in the cargo surface point cloud, usually the height of the floor of the enclosed space. The relative height of a point in the cloud represents the height of that cargo surface point relative to the floor of the enclosed space.
The gray value of a point in the cloud is the ratio of its relative height to the height value corresponding to a unit gray level. Specifically, height is converted to gray value as follows:
G = [(z_i − z_min)/G_resolution]   (13)
where z_i is the Z coordinate, i.e. the height value, of the i-th point of the cargo surface point cloud, z_min is the minimum of the cargo surface point cloud p_r on the Z axis, and G_resolution is the height value corresponding to a unit gray level. FIG. 9 is a schematic diagram of the two-dimensional grayscale image obtained by projecting the cargo surface point cloud shown in FIG. 8.
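Formulas (11)–(13), together with the keep-the-highest-point rule of step 205, can be sketched as follows with numpy (a minimal illustration, not the patent's implementation; the toy point cloud is my own):

```python
import numpy as np

def project_to_gray(points, u_r, v_r, g_resolution):
    """Project an (N, 3) cargo surface point cloud onto a 2-D gray image.
    Row/column indices follow eqs. (11)-(12), gray values eq. (13); each
    pixel keeps the gray value of the highest point that falls into it."""
    pts = np.asarray(points, dtype=float)
    u = ((pts[:, 0] - pts[:, 0].min()) / u_r).astype(int)          # eq. (11)
    v = ((pts[:, 1] - pts[:, 1].min()) / v_r).astype(int)          # eq. (12)
    g = ((pts[:, 2] - pts[:, 2].min()) / g_resolution).astype(int) # eq. (13)
    img = np.zeros((u.max() + 1, v.max() + 1), dtype=int)
    for ui, vi, gi in zip(u, v, g):
        img[ui, vi] = max(img[ui, vi], gi)   # highest point wins the pixel
    return img

# two points share pixel (0, 0); the higher one (z = 0.5) sets its gray value
pts = np.array([[0.0, 0.0, 0.0], [0.04, 0.0, 0.5], [0.11, 0.0, 1.0]])
img = project_to_gray(pts, u_r=0.1, v_r=0.1, g_resolution=0.1)
```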
Step 206: perform grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region.
In practice, the laser can only scan goods placed on the surface; goods in regions occluded by surface goods cannot be scanned. Normally, however, goods in an enclosed space are stacked tidily, from the inside outward ("outside" being the door of the enclosed space, "inside" the end farthest from the door) and from the bottom up, so a region occluded by surface goods is very likely also occupied by goods. Since the occluded region cannot be reached by the laser, no cargo surface point cloud is obtained for it, and the corresponding area of the two-dimensional image cannot show the gray values it should have. Yet if this region were ignored, the computed remaining volume of the enclosed space would not match reality, degrading the detection accuracy.
Grayscale filling supplies gray values to the regions occluded by surface goods.
In some embodiments, the two-dimensional grayscale image is first traversed row by row from top to bottom, each row in the inside-to-outside order (i.e., toward the door of the enclosed space). When the gray value of the i-th pixel is 0, a further e pixels are traversed outward along the row from that pixel. If among these e pixels there is a point j with a non-zero gray value, the gray value of point j is assigned to point i; if more than one point has a non-zero gray value, the largest of those gray values is assigned to point i, thereby filling the regions of the two-dimensional grayscale image occluded by surface goods. If none of the e traversed pixels has a non-zero gray value, the traversal continues along the current row toward the lidar until a point k with a non-zero gray value is found, and the gray value of point k is assigned to point i. If the edge of the image is reached and still no point with a non-zero gray value exists, point i is assigned the gray value corresponding to the floor of the enclosed space.
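The filling traversal above can be sketched row by row with numpy. This is an illustration under one assumption of my own: the column index increases toward the lidar/door side, so the look-ahead window scans toward the lidar. The edge fallback to the floor gray value is represented here by the image's zero background:

```python
import numpy as np

def fill_occluded(img, e=3):
    """Fill zero-gray (occluded) pixels row by row.  A zero pixel takes the
    largest non-zero gray within the next `e` pixels toward the lidar
    (assumed to be increasing column index); if none is found there, the
    scan continues along the row and uses the first non-zero gray found."""
    out = np.asarray(img).copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            if out[r, c] != 0:
                continue
            window = out[r, c + 1:c + 1 + e]
            nonzero = window[window != 0]
            if nonzero.size:                  # fill with the window maximum
                out[r, c] = nonzero.max()
            else:                             # keep searching along the row
                rest = out[r, c + 1 + e:]
                rest_nz = rest[rest != 0]
                if rest_nz.size:
                    out[r, c] = rest_nz[0]    # first non-zero gray found
    return out

img = np.array([[5, 0, 0, 3, 0]])
filled = fill_occluded(img, e=2)   # the two occluded pixels take gray 3
```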
Optionally, at step 207, in some embodiments the method further includes optimization of the grayscale-filled image. Specifically, median filtering and bilateral filtering can be applied to the grayscale-filled image as post-processing — the median filter to protect edge information, the bilateral filter to denoise while preserving edges — followed by a morphological dilation operation.
Because of the laser sensor's scanning pattern, the distance between some adjacent points of the three-dimensional point cloud can exceed the pixel pitch of the projected two-dimensional image, producing holes in the two-dimensional image; enlarging the pixel size, on the other hand, would lower the resolution of the two-dimensional image. A dilation operation on the two-dimensional image effectively reduces the holes. The image post-processing method is not limited to morphological dilation: a morphological closing can also be applied to the image to fill hole regions, followed by a morphological opening to enhance edge information and filter out discrete interference pixels. In some embodiments, the optimized grayscale-filled image is a variant of the grayscale-filled image. FIG. 10 shows the grayscale-filled image obtained after filling and optimizing the two-dimensional grayscale image of FIG. 9 according to an embodiment of the present invention.
Step 208: determine, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, and obtain the remaining volume of the enclosed space.
As mentioned above, the gray values of the two-dimensional grayscale image are determined from the height values of the three-dimensional point cloud and thus encode its height information. The remaining height of the space not occupied by cargo can therefore be determined from the gray value of each pixel. Specifically, the gray value of each pixel is compared with the maximum gray value, and the difference between the two can be converted into the remaining height of the unoccupied space. The maximum gray value corresponds to the maximum Z value of the three-dimensional point cloud and represents the greatest cargo height the enclosed space can accommodate. From the gray value representing the unoccupied height and the volume value corresponding to a preset unit gray level, the unoccupied volume of each pixel is obtained; the unoccupied volumes of all pixels are accumulated to give the remaining volume of the enclosed space.
The gray difference between a pixel and the highest point — that is, the difference between the height of the highest point and the height corresponding to the pixel, divided by the height value of a unit gray level — represents, in gray levels, the height of the space not occupied by cargo. Specifically:
Δh = z_max − z_i   (14)
H_v = [Δh/G_resolution]   (15)
v_h = G_resolution · u_r · v_r   (16)
v = H_v · v_h   (17)
V = Σ v   (18)
where Δh is the remaining height of the enclosed space not occupied by cargo at the i-th point of the three-dimensional point cloud, G_resolution is the height value corresponding to a unit gray level, and H_v is the gray difference between the i-th point of the three-dimensional point cloud and the highest point (the greatest cargo height the enclosed space can accommodate).
Here v_h denotes the volume value of a preset unit gray level (u_r and v_r being the dimensions of the two-dimensional pixel corresponding to a point of the three-dimensional cloud), and v is the unoccupied volume at the i-th point of the three-dimensional point cloud.
V is the remaining volume of the enclosed space, i.e. the sum of the unoccupied volumes represented by all pixels.
In the above lidar-based method for detecting the remaining volume in an enclosed space, the inner-wall point clouds are filtered out of the current three-dimensional point cloud collected by the lidar to accurately segment the cargo surface point cloud; the cargo surface point cloud is then converted into a two-dimensional grayscale image that records the height information of the cargo surface points, and the regions occluded by cargo are filled with the gray values of neighboring pixels, so that the two-dimensional grayscale image reflects the actual cargo loading situation. From an image-processing standpoint, the height of the space not occupied by goods is determined from the gray value of each pixel, and the remaining volume of the enclosed space is detected. Because the lidar collects three-dimensional data, the data source is highly accurate; and because, after the three-dimensional point cloud is projected into a two-dimensional grayscale image, the occluded regions are filled with the gray values of neighboring pixels so that the image reflects the actual cargo loading, the accuracy of the loadable-space detection is improved. In addition, the algorithm of this application is simple to operate and can monitor the cargo situation in the space in real time.
In some embodiments, the lidar-based method of this application for detecting the remaining volume in an enclosed space can be applied widely wherever the remaining volume must be measured, and is suitable for standard-size containers (any size, e.g. 20 ft, 40 ft, and 45 ft), truck compartments, and train carriages.
It should be understood that although the steps in the flowcharts above are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise several sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times; their order of execution is not necessarily sequential, and they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
In some embodiments, as shown in FIG. 11, a lidar-based device for detecting the remaining volume in an enclosed space is provided, comprising:
a point cloud acquisition module 1101 configured to acquire the three-dimensional point cloud of the current enclosed space by means of the lidar;
an attitude angle acquisition module 1102 configured to acquire the attitude angles of the lidar (in some embodiments, the attitude angle acquisition module 1102 is optional);
a conversion module 1103 configured to perform coordinate conversion on the three-dimensional point cloud of the enclosed space according to the attitude angles of the lidar (in some embodiments, the conversion module 1103 is optional);
a segmentation module 1104 configured to filter the inner-wall point clouds out of the coordinate-converted scan point cloud of the current space, obtaining the detectable cargo surface point cloud;
a projection module 1105 configured to project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud;
a filling module 1106 configured to perform grayscale filling on the regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region; and
a detection module 1107 configured to determine, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, thereby obtaining the remaining volume of the enclosed space.
In one embodiment, the projection module comprises:
an image conversion module for projecting the cargo surface point cloud into a two-dimensional image according to the abscissas and ordinates of the cargo surface points; and
a gray value processing module for determining the gray value of each point cloud pixel of the two-dimensional image according to the height values of the cargo surface points, obtaining the two-dimensional grayscale image corresponding to the cargo surface point cloud.
In another embodiment, the gray value filling module is configured to traverse the pixels of the two-dimensional grayscale image in order; when the current pixel has a gray value of zero, a preset number of pixels are traversed in a set direction; if a pixel with a non-zero gray value exists among them, the current pixel is filled according to that pixel's gray value, and if several pixels with non-zero gray values exist among them, the current pixel is filled with the largest of those gray values.
In one embodiment, the detection module is used to obtain the height difference between any point of the three-dimensional point cloud and the highest point; determine from that height difference the gray difference between the corresponding pixel and the highest point; obtain the unoccupied volume of the pixel from the gray difference and the volume value corresponding to a preset unit gray level; and accumulate the unoccupied volumes of all pixels to obtain the remaining volume of the enclosed space.
For the specific limitations of the lidar-based device for detecting the remaining volume in an enclosed space, reference may be made to the limitations of the lidar-based detection method above, which are not repeated here. Each module of the above device may be implemented wholly or partly by software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in the processor of a computer device or be independent of it, or may be stored, in software form, in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In the above lidar-based device for detecting the remaining volume in an enclosed space, the inner-wall point clouds are filtered out of the current three-dimensional point cloud collected by the lidar to accurately segment the cargo surface point cloud; the cargo surface point cloud is then converted into a two-dimensional grayscale image that records the height information of the cargo surface points, and the regions occluded by cargo are filled with the gray values of neighboring pixels, so that the image reflects the actual cargo loading. From an image-processing standpoint, the unoccupied height is determined from the gray value of each pixel, and the remaining volume of the enclosed space is detected. Because the lidar collects three-dimensional data, the data source is highly accurate; and because, after projection into a two-dimensional grayscale image, occluded regions are filled from neighboring pixels so that the image reflects the actual cargo loading, the accuracy of the detection of the carriage's loadable space is improved. In addition, the algorithm of this application is simple to operate and can monitor the cargo situation in the space in real time.
The present application further includes a lidar-based system for detecting the remaining volume in an enclosed space, comprising a lidar and the lidar-based device for detecting the remaining volume in an enclosed space described above.
The present application further includes an intelligent loading device comprising: an enclosed space; a processor and a memory coupled to the processor; and a lidar configured to acquire the three-dimensional point cloud of the enclosed space, wherein the processor is configured to execute the lidar-based method for detecting the remaining volume in an enclosed space described above.
In some embodiments, a computer device is provided; it may be a terminal, and its internal structure may be as shown in FIG. 12. The computer device comprises a processor, a memory, and a communication interface connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and computer program in the non-volatile storage medium. The communication interface of the computer device is used to communicate with external terminals in a wired or wireless manner; the wireless manner may be implemented through Wi-Fi, a carrier network, NFC (near-field communication), or other technologies. When executed by the processor, the computer program implements a lidar-based method for detecting the remaining volume in an enclosed space.
A person skilled in the art can understand that the structure shown in FIG. 12 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the methods of the above embodiments are implemented.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope described in this specification.
The embodiments described above express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its protection scope. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (13)

  1. A lidar-based method for detecting the remaining volume in an enclosed space, comprising:
    acquiring a three-dimensional point cloud of the enclosed space;
    filtering point clouds of the inner walls of the enclosed space out of the three-dimensional point cloud to obtain a cargo surface point cloud;
    projecting the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud;
    performing grayscale filling on regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region; and
    determining, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, to obtain the remaining volume of the enclosed space.
  2. The method according to claim 1, wherein acquiring the three-dimensional point cloud of the enclosed space comprises:
    acquiring the attitude angles of the lidar; and
    performing coordinate conversion on the three-dimensional point cloud of the enclosed space according to the attitude angles.
  3. The method according to claim 1, wherein projecting the three-dimensional point cloud into a two-dimensional grayscale image comprises:
    projecting the three-dimensional point cloud into a two-dimensional image based on a preset mapping between points of the three-dimensional point cloud and pixels of the two-dimensional grayscale image, and on the abscissas and ordinates of the points of the three-dimensional point cloud; and
    determining the gray value of each pixel of the two-dimensional image based on a preset mapping between unit gray level and height value and on the height values of the points of the three-dimensional point cloud, to obtain the two-dimensional grayscale image.
  4. The method according to claim 1, wherein performing grayscale filling on regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image comprises:
    traversing the pixels of the two-dimensional grayscale image row by row, and when a pixel with a gray value of zero is detected, traversing a preset number of pixels along its row in the direction of the lidar; and
    if a pixel with a non-zero gray value exists among the preset number of traversed pixels, filling the zero-valued pixel according to the gray value of that pixel.
  5. The method according to claim 4, wherein when several pixels with non-zero gray values exist among the preset number of traversed pixels, the zero-valued pixel is filled with the largest gray value among those pixels.
  6. The method according to claim 4, wherein if no pixel with a non-zero gray value exists among the preset number of traversed pixels, the traversal continues along the row toward the lidar, and the gray value of the first non-zero pixel encountered is used as the basis for filling.
  7. The method according to claim 1, wherein determining, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo to obtain the remaining volume of the enclosed space comprises:
    determining the gray difference between any pixel and the highest point;
    obtaining the unoccupied volume corresponding to the pixel from the gray difference and the volume value corresponding to a preset unit gray level; and
    accumulating the unoccupied volumes corresponding to all pixels to obtain the remaining volume of the enclosed space.
  8. The method according to claim 1 or 2, further comprising, after the grayscale-filled image is obtained:
    applying median filtering and bilateral filtering to the grayscale-filled image.
  9. A lidar-based device for detecting the remaining volume in an enclosed space, comprising:
    a point cloud acquisition module configured to acquire a three-dimensional point cloud of the enclosed space;
    a three-dimensional point cloud segmentation module configured to filter point clouds of the inner walls of the enclosed space out of the three-dimensional point cloud to obtain a cargo surface point cloud;
    a projection module configured to project the cargo surface point cloud into a two-dimensional grayscale image, wherein the gray value of each pixel in the two-dimensional grayscale image is related to the height value of the corresponding point in the cargo surface point cloud;
    a filling module configured to perform grayscale filling on regions of the two-dimensional grayscale image occluded by cargo to obtain a grayscale-filled image, wherein the filled gray value is the gray value of the pixels corresponding to the point cloud of the cargo occluding the region; and
    a detection module configured to determine, at least from the grayscale-filled image or a variant thereof, the remaining height of the enclosed space not occupied by cargo, to obtain the remaining volume of the enclosed space.
  10. The device according to claim 9, further comprising:
    an attitude angle acquisition module configured to acquire the attitude angles of the lidar; and
    a conversion module configured to perform coordinate conversion on the three-dimensional point cloud of the current space according to the attitude angles.
  11. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
  12. A lidar-based system for detecting the remaining volume in an enclosed space, comprising a lidar and the device according to claim 9 or 10.
  13. An intelligent loading device, comprising:
    an enclosed space;
    a processor and a memory coupled to the processor; and
    a lidar configured to acquire a three-dimensional point cloud of the enclosed space;
    wherein the processor is configured to execute the method according to any one of claims 1 to 8.
PCT/CN2021/084109 2020-03-30 2021-03-30 Lidar-based method and device for detecting the remaining volume in an enclosed space WO2021197345A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010236021.9A CN113469871B (zh) 2020-03-30 2020-03-30 Method and device for detecting the loadable space of a carriage based on three-dimensional laser
CN202010236021.9 2020-03-30

Publications (1)

Publication Number Publication Date
WO2021197345A1 true WO2021197345A1 (zh) 2021-10-07

Family

ID=77865931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/084109 WO2021197345A1 (zh) 2020-03-30 2021-03-30 Lidar-based method and device for detecting the remaining volume in an enclosed space

Country Status (2)

Country Link
CN (1) CN113469871B (zh)
WO (1) WO2021197345A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119710A (zh) * 2021-11-23 2022-03-01 燕山大学 一种计算敞车车厢剩余冻煤体积的方法及系统
CN114842323A (zh) * 2022-07-04 2022-08-02 山东西曼克技术有限公司 基于分类识别的智能机器人分拣优化方法
CN115294105A (zh) * 2022-09-28 2022-11-04 南京理工大学 一种多层多道焊接余高预测方法
CN115631329A (zh) * 2022-12-08 2023-01-20 杭州明度智能科技有限公司 一种用于开放式车厢的装载控制方法、系统和存储介质
CN116307985A (zh) * 2023-03-06 2023-06-23 中天建设集团有限公司 建筑材料节能运输方法、计算机设备与介质
CN116681748A (zh) * 2023-06-13 2023-09-01 上海频准激光科技有限公司 一种激光器稳频组件的匹配方法
CN116843742A (zh) * 2023-03-13 2023-10-03 武汉理工大学 一种针对装载黑色煤车辆的点云配准后堆料体积的计算方法及系统

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862001A (zh) * 2023-03-02 2023-03-28 青岛慧拓智能机器有限公司 一种基于体积计量的露天矿山车厢残留物检测方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207600384U (zh) * 2017-11-24 2018-07-10 深古安地智能科技(武汉)有限公司 一种基于线激光的集装箱容积占用率测算系统
CN109146952A (zh) * 2018-09-06 2019-01-04 北京京东尚科信息技术有限公司 估计车厢空闲体积的方法、装置及计算机可读存储介质
CN109916301A (zh) * 2019-03-27 2019-06-21 青岛小鸟看看科技有限公司 一种体积测量方法和深度相机模组
CN109916302A (zh) * 2019-03-27 2019-06-21 青岛小鸟看看科技有限公司 一种载货箱体的体积测量方法和系统
US20190197719A1 (en) * 2017-12-22 2019-06-27 Symbol Technologies, Llc Container use estimation
US20190195617A1 (en) * 2017-12-22 2019-06-27 Symbol Technologies, Llc Container auto-dimensioning
CN110411530A (zh) * 2019-03-21 2019-11-05 重庆大学 一种货箱剩余体积的智能识别方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106017320B (zh) * 2016-05-30 2018-06-12 燕山大学 一种基于图像处理的散杂货堆体积测量方法及实现所述方法的系统
CN107314741A (zh) * 2017-03-01 2017-11-03 秦皇岛燕大燕软信息系统有限公司 货物体积测量方法
AU2018247343A1 (en) * 2017-10-16 2019-05-02 Flex Ltd. Method and system for tracking and optimizing cargo utilization and volume measurement and imaging sensing using lidars and video cameras
CN109029254B (zh) * 2018-07-03 2020-06-16 秦皇岛燕大燕软信息系统有限公司 一种基于点云数据处理的列车车厢载货体积及体密度质量检测方法
CN109696663B (zh) * 2019-02-21 2021-02-09 北京大学 一种车载三维激光雷达标定方法和系统
CN110057292B (zh) * 2019-05-27 2021-05-18 杭州亚美利嘉科技有限公司 车厢装载率的确定方法和装置
CN110488308A (zh) * 2019-07-05 2019-11-22 北京国泰新能科技发展有限公司 一种车厢定位检测方法和装置
CN110837080B (zh) * 2019-10-28 2023-09-05 武汉海云空间信息技术有限公司 激光雷达移动测量系统的快速标定方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207600384U (zh) * 2017-11-24 2018-07-10 深古安地智能科技(武汉)有限公司 一种基于线激光的集装箱容积占用率测算系统
US20190197719A1 (en) * 2017-12-22 2019-06-27 Symbol Technologies, Llc Container use estimation
US20190195617A1 (en) * 2017-12-22 2019-06-27 Symbol Technologies, Llc Container auto-dimensioning
CN109146952A (zh) * 2018-09-06 2019-01-04 北京京东尚科信息技术有限公司 估计车厢空闲体积的方法、装置及计算机可读存储介质
CN110411530A (zh) * 2019-03-21 2019-11-05 重庆大学 一种货箱剩余体积的智能识别方法
CN109916301A (zh) * 2019-03-27 2019-06-21 青岛小鸟看看科技有限公司 一种体积测量方法和深度相机模组
CN109916302A (zh) * 2019-03-27 2019-06-21 青岛小鸟看看科技有限公司 一种载货箱体的体积测量方法和系统

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119710B (zh) * 2021-11-23 2024-05-07 燕山大学 一种计算敞车车厢剩余冻煤体积的方法及系统
CN114119710A (zh) * 2021-11-23 2022-03-01 燕山大学 一种计算敞车车厢剩余冻煤体积的方法及系统
CN114842323A (zh) * 2022-07-04 2022-08-02 山东西曼克技术有限公司 基于分类识别的智能机器人分拣优化方法
CN114842323B (zh) * 2022-07-04 2022-09-13 山东西曼克技术有限公司 基于分类识别的智能机器人分拣优化方法
CN115294105A (zh) * 2022-09-28 2022-11-04 南京理工大学 一种多层多道焊接余高预测方法
CN115294105B (zh) * 2022-09-28 2023-04-07 南京理工大学 一种多层多道焊接余高预测方法
CN115631329A (zh) * 2022-12-08 2023-01-20 杭州明度智能科技有限公司 一种用于开放式车厢的装载控制方法、系统和存储介质
CN116307985A (zh) * 2023-03-06 2023-06-23 中天建设集团有限公司 建筑材料节能运输方法、计算机设备与介质
CN116307985B (zh) * 2023-03-06 2024-01-26 北京中天北方建设有限公司 建筑材料节能运输方法、计算机设备与介质
CN116843742A (zh) * 2023-03-13 2023-10-03 武汉理工大学 一种针对装载黑色煤车辆的点云配准后堆料体积的计算方法及系统
CN116843742B (zh) * 2023-03-13 2024-02-02 武汉理工大学 一种针对装载黑色煤车辆的点云配准后堆料体积的计算方法及系统
CN116681748B (zh) * 2023-06-13 2023-12-15 上海频准激光科技有限公司 一种激光器稳频组件的匹配方法
CN116681748A (zh) * 2023-06-13 2023-09-01 上海频准激光科技有限公司 一种激光器稳频组件的匹配方法

Also Published As

Publication number Publication date
CN113469871B (zh) 2023-07-14
CN113469871A (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
WO2021197345A1 (zh) 一种基于激光雷达的封闭空间内剩余体积的检测方法和装置
US10692236B2 (en) Container use estimation
AU2018388705B2 (en) Systems and methods for determining commercial trailer fullness
WO2021179983A1 (zh) 基于三维激光的集卡防吊起检测方法、装置和计算机设备
US11430104B2 (en) Three-dimensional (3D) imaging systems and methods for detecting and dimensioning a vehicle storage area
CN112417591A (zh) 基于云台与扫描仪的车辆建模方法、系统、介质及设备
US10697757B2 (en) Container auto-dimensioning
CN113963038A (zh) 车厢点云校准方法及装置
CN113173502B (zh) 一种基于激光视觉融合和深度学习的防撞方法、系统
US20240135566A1 (en) System and Method for Automatic Container Configuration using Fiducial Markers
CN112432596B (zh) 空间测量方法、装置、电子设备及计算机存储介质
US11009604B1 (en) Methods for detecting if a time of flight (ToF) sensor is looking into a container
CN117011362A (zh) 货物体积的计算方法、容积率的动态计算方法
CN115631329B (zh) 一种用于开放式车厢的装载控制方法、系统和存储介质
CN116215520A (zh) 基于超声波和3d环视的车辆碰撞预警及处理方法、装置
CN113988740A (zh) 车厢装卸率计算方法及装置
US11763439B2 (en) Systems and methods for assessing trailer utilization
CN113443555B (zh) 确定抓斗位置的方法、抓斗位置检测方法及存储介质
US11436835B2 (en) Method for detecting trailer status using combined 3D algorithms and 2D machine learning models
CN113129354A (zh) 车辆车厢剩余容积的测量方法及测量系统
CN115077385B (zh) 无人集卡集装箱位姿测量方法及其测量系统
JP7491615B2 (ja) 積載容積率計測装置、システム、方法、及び、プログラム
CN113933817A (zh) 车厢点云姿态的校正方法及装置
CN117132633A (zh) 基于单目相机的装载率估计方法、装置、设备及介质
CN114200415A (zh) 车厢点云姿态自动校正方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21780519

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21780519

Country of ref document: EP

Kind code of ref document: A1