WO2021179983A1 - Three-dimensional laser-based container truck anti-lifting detection method, device and computer equipment - Google Patents

Three-dimensional laser-based container truck anti-lifting detection method, device and computer equipment

Info

Publication number
WO2021179983A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
dimensional
lidar
container
truck
Prior art date
Application number
PCT/CN2021/079043
Other languages
English (en)
French (fr)
Inventor
胡荣东
文驰
彭清
李雅盟
李敏
Original Assignee
长沙智能驾驶研究院有限公司
Priority date
Filing date
Publication date
Application filed by 长沙智能驾驶研究院有限公司 filed Critical 长沙智能驾驶研究院有限公司
Publication of WO2021179983A1 publication Critical patent/WO2021179983A1/zh

Classifications

    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S7/48 Details of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • B65G67/02 Loading or unloading land vehicles
    • B66C13/18 Control systems or devices
    • B66C15/06 Arrangements or use of warning devices
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces

Definitions

  • This application relates to the field of laser radar technology, and in particular to a three-dimensional laser-based method, device and computer equipment for anti-lifting detection of trucks.
  • In the process of unloading a container from a truck under the yard gantry crane, if the truck's lock pins are not completely unlocked, the spreader may lift the truck, wholly or on one side, together with the container, which may cause a truck-lifting accident.
  • The traditional detection method uses a 2D laser scanner to obtain the outline of the truck and judges from that outline whether the truck has separated from the container. This method depends on the installation position of the laser scanner and the parking position of the truck, is limited by the accuracy of the data, and therefore has low anti-lifting detection accuracy.
  • a three-dimensional laser-based detection method for anti-lifting of trucks comprising:
  • Image detection is performed on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
  • the method of obtaining the attitude angle of the lidar includes:
  • the converting the three-dimensional point cloud according to the attitude angle includes:
  • the converted three-dimensional point cloud is converted according to the yaw angle of the lidar, and the side surface point cloud of the container in the converted three-dimensional point cloud is parallel to the side plane of the lidar coordinate system.
  • the projecting the converted three-dimensional point cloud into a two-dimensional image includes:
  • Image preprocessing is performed on the binary image to obtain a two-dimensional image.
  • determining the position range of the gap between the truck and the container in the two-dimensional image includes:
  • According to the height of the lidar, the lifting height of the container and the height of the truck tray, the position range of the gap between the truck and the container in the two-dimensional image is determined.
  • performing image detection according to the position range to obtain the detection result of the truck anti-lifting includes:
  • the counter is increased by a preset value
  • performing image detection according to the position range to obtain the detection result of the truck anti-lifting includes:
  • intersection of the straight lines is determined, and if the intersection is within the position range, a detection result that the truck is hoisted is obtained.
  • a three-dimensional laser-based truck pick-up anti-lifting detection device includes:
  • the point cloud acquisition module is used to acquire the three-dimensional point cloud of the container operation collected by the lidar;
  • An attitude angle acquisition module for acquiring the attitude angle of the lidar
  • a conversion module configured to convert the three-dimensional point cloud according to the attitude angle
  • a projection module for projecting the converted three-dimensional point cloud into a two-dimensional image
  • the gap position determination module is used to determine the position range of the gap between the truck and the container in the two-dimensional image
  • the detection module is configured to perform image detection based on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the method described in any one of the foregoing embodiments when the computer program is executed.
  • a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the method described in any one of the foregoing embodiments are implemented.
  • the above-mentioned three-dimensional laser-based truck anti-lifting detection method, device, computer equipment and storage medium use lidar to collect three-dimensional data of container operations with high data accuracy.
  • On the basis of the lidar's high-precision three-dimensional point cloud data, the three-dimensional point cloud is converted according to the attitude angle without being affected by the installation position and angle of the lidar; the point cloud data is then projected to a two-dimensional image, and the gap between the truck and the container is detected in the two-dimensional image to determine whether the truck and the container are effectively separated.
  • the data source of this method has high accuracy, and the detection method is not affected by the installation position and angle of the lidar, which greatly improves the accuracy of the anti-lifting detection.
  • FIG. 1 is an application environment diagram of a three-dimensional laser-based pickup truck anti-lifting detection method in an embodiment
  • Figure 2 is a schematic diagram of the installation position of the lidar in an embodiment
  • FIG. 3 is a schematic flow diagram of a three-dimensional laser-based truck pick-up anti-lifting detection method in an embodiment
  • FIG. 4 is a schematic flowchart of the steps of obtaining the attitude angle of the lidar in an embodiment
  • FIG. 5 is a schematic flow chart of the steps of converting the three-dimensional point cloud according to the attitude angle in another embodiment
  • FIG. 6 is a flowchart of the steps of projecting a converted three-dimensional point cloud into a two-dimensional image in an embodiment
  • FIG. 7 is a two-dimensional image of a three-dimensional point cloud of a container operation in which the truck is not hoisted during a container operation in an embodiment
  • FIG. 8 is a schematic diagram of the position range of the gap between the truck and the container in an embodiment
  • FIG. 9 is a schematic flow diagram of the steps of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result in an embodiment
  • FIG. 10 is a two-dimensional image of a three-dimensional point cloud of a container operation in which a truck is hoisted during a container operation in an embodiment
  • 11 is a schematic flow chart of the steps of performing image detection on the position range in the two-dimensional image to obtain the detection result of the truck pick-up prevention in another embodiment
  • FIG. 12 is a structural block diagram of a three-dimensional laser-based pickup anti-lifting detection device in an embodiment
  • Fig. 13 is an internal structure diagram of a computer device in an embodiment.
  • the three-dimensional laser-based pickup anti-lifting detection method provided in this application can be applied to the application environment as shown in FIG. 1.
  • the laser radar 101 is installed on the side of the operation lane of the container operation gantry crane 102 to collect laser point clouds.
  • the installation position of the lidar is set according to the height of the truck.
  • the main control device 103 is in communication connection with the lidar 101.
  • the main control device is also connected to the gantry crane control device 104.
  • When the gantry crane control device 104 controls the spreader 105 to lift the container 106 on the truck, it sends a control signal to the main control device 103. The main control device 103 sends a collection signal to the lidar 101; the lidar 101 collects a three-dimensional point cloud of the container operation according to the collection signal and sends it to the main control device 103. The main control device 103 then obtains the three-dimensional point cloud of the container operation collected by the lidar; obtains the attitude angle of the lidar; converts the three-dimensional point cloud according to the attitude angle; projects the converted three-dimensional point cloud into a two-dimensional image; determines the position range of the gap between the truck and the container in the two-dimensional image; and performs image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result.
  • the installation position of the lidar is shown in Figure 2.
  • the coordinate system is the lidar coordinate system
  • the origin O represents the position of the lidar
  • the X-axis direction represents the front of the lidar
  • the Y-axis direction represents the left of the lidar.
  • the Z-axis direction represents directly above the lidar.
  • the cube in the picture represents the position of the container and the truck.
  • a three-dimensional laser-based pickup anti-lifting detection method is provided. Taking the method applied to the main control device in FIG. 1 as an example for description, the method includes the following steps:
  • The lidar collects a three-dimensional laser point cloud at the container operation site.
  • the main control device sends a collection signal to the lidar, and controls the lidar to scan to obtain a three-dimensional point cloud of the container operation.
  • the attitude angle of the lidar refers to the installation angle of the lidar relative to the reference object, including but not limited to roll angle, pitch angle, and yaw angle.
  • the attitude angle of the lidar can be determined according to the three-dimensional point cloud of the container operation.
  • Since the position where the truck carries the container is basically fixed, the attitude angle of the lidar only needs to be calculated once; thereafter the first attitude angle can be reused for point cloud calibration. Alternatively, the attitude angle can be recalculated for each detection so that the calibrated point cloud is more accurate.
  • the attitude angle includes a roll angle, a pitch angle, and a yaw angle.
  • the steps of obtaining the attitude angle of the lidar include:
  • S402 Acquire a three-dimensional calibration point cloud of the container operation collected by the lidar in the calibration state.
  • the container operation that uses the method of the application for anti-lifting detection for the first time can be regarded as the calibration state.
  • calibration can also be carried out at regular intervals.
  • the container operation that uses the method of this application for anti-lifting detection for the first time every week is regarded as the calibration state.
  • S404 Determine the ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar.
  • The ground point cloud refers to the points located on the ground in the collected three-dimensional calibration point cloud of the container operation site. Specifically, according to the installation height a of the lidar, the points in the three-dimensional calibration point cloud whose Z coordinate value is less than -a are taken as the ground point cloud. The container operation is carried out on the ground, so extracting the ground point cloud removes points unrelated to the operation scene from the 3D point cloud and reduces their impact on the subsequent anti-lifting detection.
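As an illustration, the ground-point extraction just described can be sketched in Python with NumPy; the function name and the toy values are ours, not the patent's:

```python
import numpy as np

def extract_ground_points(points, lidar_height):
    """Keep the points whose Z coordinate is less than -a.

    points: (N, 3) array in the lidar coordinate frame (Z pointing up).
    lidar_height: installation height a of the lidar above the ground.
    """
    # Per the text, ground points satisfy z < -a in the calibration cloud.
    return points[points[:, 2] < -lidar_height]

# Toy cloud: two ground hits and one container hit, lidar mounted at a = 5 m.
cloud = np.array([[1.0, 2.0, -5.1],
                  [3.0, 1.0, -5.2],
                  [2.0, 0.5, -1.0]])
ground = extract_ground_points(cloud, 5.0)  # keeps the first two points
```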
  • the normal vector is a concept of space analytic geometry, and the vector represented by a straight line perpendicular to the plane is the normal vector of the plane.
  • the method of calculating the normal vector is to first calculate the covariance matrix of the ground point cloud, and then perform singular value decomposition on the covariance matrix.
  • The singular vectors obtained by the singular value decomposition describe the three principal directions of the point cloud data. The plane normal is the direction with the smallest variance, and the smallest variance corresponds to the smallest singular value, so the singular vector associated with the smallest singular value is selected as the normal vector of the plane.
  • The covariance matrix is C = (1/n)·Σ_i (s_i − s̄)(s_i − s̄)^T, where C is the covariance matrix, s_i is the i-th point in the point cloud, s̄ is the centroid of the point cloud, and n is the number of points.
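A minimal NumPy sketch of this covariance-plus-SVD normal estimation (our naming, not the patent's implementation):

```python
import numpy as np

def plane_normal(points):
    """Estimate a plane normal: covariance matrix, then SVD.

    The singular vector belonging to the smallest singular value is the
    direction of least variance, i.e. the plane normal described above.
    """
    centered = points - points.mean(axis=0)       # subtract the centroid
    cov = centered.T @ centered / len(points)     # 3x3 covariance matrix C
    _, singular_values, vt = np.linalg.svd(cov)
    return vt[np.argmin(singular_values)]         # smallest-variance direction

# Points lying on the plane z = 0: the normal should be (0, 0, ±1).
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 1, 0]], float)
n = plane_normal(pts)
```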
  • S408 Calculate the roll angle and the pitch angle of the lidar according to the plane normal vector of the ground point cloud.
  • the pitch angle is the angle between the X axis of the lidar coordinate system and the horizontal plane
  • the roll angle is the angle between the lidar coordinate Y axis and the lidar vertical plane.
  • The roll angle and the pitch angle are calculated from the ground normal vector T1(a1, b1, c1), where T1 is the normal vector of the ground.
  • S410 Determine the side point cloud of the container from the three-dimensional point cloud of the container operation according to the installation height of the lidar, the height of the truck tray, the height of the container, and the distance from the lidar.
  • the container side point cloud refers to the point cloud representing the side part of the container in the collected three-dimensional laser point cloud of the container operation site. It can be determined according to the height of the point cloud and the distance between the point cloud and the lidar.
  • the height of the lidar is a
  • the height of the truck tray is b
  • the height of the container is c.
  • A point cloud whose z coordinate lies in the range [-a+b, -a+b+c] is taken as the first-pass filtered point cloud. Since the side of the container is close to the lidar, a distance threshold t is set, and from the first-pass filtered point cloud the points whose distance to the lidar is less than t are taken as the side point cloud of the container.
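The two-stage filter just described (a z band, then a distance threshold) can be sketched as follows; the variable names mirror the text's a, b, c and t, everything else is our assumption:

```python
import numpy as np

def container_side_points(points, a, b, c, t):
    """Select the container-side point cloud from the operation cloud.

    a: lidar installation height, b: truck tray height,
    c: container height, t: distance threshold to the lidar.
    """
    z = points[:, 2]
    band = points[(z >= -a + b) & (z <= -a + b + c)]  # z in [-a+b, -a+b+c]
    dist = np.linalg.norm(band, axis=1)               # distance to the lidar origin
    return band[dist < t]                             # the side faces the sensor

# Toy example: a = 5, b = 1.5, c = 2.5, t = 6.
pts = np.array([[2.0, 0.0, -2.0],    # in band, close -> kept
                [10.0, 0.0, -2.0],   # in band, too far -> dropped
                [2.0, 0.0, -5.0]])   # below the band -> dropped
side = container_side_points(pts, 5.0, 1.5, 2.5, 6.0)
```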
  • S412 Calculate the plane normal vector of the point cloud on the side of the container.
  • The calculation method of the plane normal vector of the point cloud on the side of the container is the same as in step S406 and will not be repeated here.
  • S414 Calculate the yaw angle of the lidar according to the plane normal vector of the point cloud on the side of the container.
  • the yaw angle is the angle between the Z axis of the lidar coordinate system and the side of the container.
  • The yaw angle is calculated from the plane normal vector T2(a2, b2, c2) of the point cloud on the side of the container.
  • the roll angle, pitch angle and yaw angle of the lidar are calculated by the method of plane normal vector.
  • After step S304, the method further includes:
  • the attitude angle includes roll angle, pitch angle and yaw angle.
  • The roll angle and pitch angle are obtained from the plane normal vector of the ground point cloud in the three-dimensional point cloud, and the yaw angle is obtained from the plane normal vector of the container side point cloud in the three-dimensional point cloud. Therefore, in this embodiment, after conversion the ground point cloud in the three-dimensional point cloud is parallel to the bottom plane of the lidar coordinate system, and the converted container side point cloud is parallel to the side plane of the lidar coordinate system. The converted point cloud data is thus unaffected by the installation angle and position of the lidar and by the truck's parking position, and a ground point cloud with a frontal viewing angle is obtained.
  • the step of transforming the three-dimensional point cloud according to the attitude angle includes:
  • S502 Convert the 3D point cloud according to the roll angle and the pitch angle of the lidar, and the ground point cloud in the 3D point cloud after the conversion is parallel to the bottom plane of the lidar coordinate system.
  • Specifically, the ground point cloud is rotated around the X axis of the lidar coordinate system according to the pitch angle of the lidar and around the Y axis of the lidar coordinate system according to the roll angle of the lidar, so that the converted ground point cloud is parallel to the bottom plane of the lidar coordinate system.
  • Here R_x and R_y are the rotation matrices around the x-axis and the y-axis, p_g is the converted ground point cloud parallel to the XOY plane of the lidar coordinate system, and p_c is the original ground point cloud.
  • S504 Convert the converted three-dimensional point cloud according to the yaw angle of the lidar, and the side point cloud of the container in the converted three-dimensional point cloud is parallel to the side plane of the lidar coordinate system.
  • the converted 3D point cloud is rotated around the Z axis of the lidar coordinate system, and the container side point cloud in the converted 3D point cloud is parallel to the side plane of the lidar coordinate system.
  • Here R_z is the rotation matrix around the z-axis, p_g is the converted ground point cloud parallel to the XOY plane of the lidar coordinate system, and p is the converted point cloud whose container-side points are parallel to the XOZ plane of the lidar coordinate system.
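The three rotations can be sketched with standard rotation matrices; the composition order below (pitch about X, roll about Y, then yaw about Z) follows the two conversion steps above, but is otherwise our assumption:

```python
import numpy as np

def rotation_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rotation_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rotation_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def level_and_align(points, pitch, roll, yaw):
    """Apply the roll/pitch correction first, then the yaw correction."""
    R = rotation_z(yaw) @ rotation_y(roll) @ rotation_x(pitch)
    return points @ R.T   # rotate each row vector of the (N, 3) cloud

# A yaw of 90 degrees turns the X direction into the Y direction.
out = level_and_align(np.array([[1.0, 0.0, 0.0]]), 0.0, 0.0, np.pi / 2)
```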
  • After step S306, the method further includes:
  • For a three-dimensional point cloud, each point is expressed as a pixel to obtain a two-dimensional image.
  • the step of projecting the converted 3D point cloud into a 2D image includes:
  • S602 Calculate the two-dimensional coordinates of each three-dimensional point cloud for the converted three-dimensional point cloud.
  • The two-dimensional image coordinates can be calculated as u = (x_i − x_min)/u_r and v = (z_i − z_min)/v_r, where u and v are the row and column coordinates of the two-dimensional image, x_i and z_i are the x-axis and z-axis coordinates of the i-th ground point, x_min and z_min are the minimum values of the ground point cloud on the x-axis and z-axis, and u_r and v_r are the accuracy of the projection of the ground point cloud onto the two-dimensional image, that is, the metric distance represented by each pixel.
  • S604 Convert the point cloud into pixel points according to the two-dimensional coordinates of each three-dimensional point cloud.
  • the ground point cloud is represented by pixels, and the coordinates of the pixel points are the two-dimensional coordinates of the ground point cloud.
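Steps S602 and S604 amount to quantizing x and z by the pixel pitch; a NumPy sketch, with our function name, that writes gray value 255 for point-cloud pixels:

```python
import numpy as np

def project_to_image(points, u_r, v_r):
    """Project a leveled 3D cloud onto the XOZ plane as a binary image.

    u_r, v_r: projection accuracy, i.e. the metric size of one pixel.
    """
    x, z = points[:, 0], points[:, 2]
    u = ((x - x.min()) / u_r).astype(int)   # row coordinate u = (x - x_min)/u_r
    v = ((z - z.min()) / v_r).astype(int)   # column coordinate v = (z - z_min)/v_r
    img = np.zeros((u.max() + 1, v.max() + 1), dtype=np.uint8)
    img[u, v] = 255                         # point-cloud pixels get gray value 255
    return img

# Two points 0.2 m apart in x and 0.1 m apart in z, with 0.1 m pixels.
img = project_to_image(np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.1]]), 0.1, 0.1)
```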
  • S606 Binarize the pixel points of the point cloud and the pixel points of the non-point cloud to obtain a binary image.
  • the binarization process refers to the process of setting the gray value of the pixel on the image to 0 or 255, that is, the process of presenting the entire image with a clear black and white effect.
  • One way may be to set the gray value of the pixel converted from the point cloud to 255, and set the gray value of other non-point cloud converted pixels to 0 to obtain a binary image.
  • Another way can be to set the gray value of the pixel converted from the point cloud to 0, and set the gray value of other non-point cloud converted pixels to 255 to obtain a binary image.
  • S608 Perform image preprocessing on the binary image to obtain a two-dimensional image.
  • the image preprocessing includes: firstly perform median filtering and bilateral filtering preprocessing operations on the two-dimensional image.
  • the median filtering is to protect the edge information
  • The bilateral filtering preserves edges while denoising. A morphological dilation operation is then performed: due to the scanning pattern of the laser sensor, the spacing between some adjacent points is greater than the pixel pitch of the image, which produces holes in the image. Increasing the pixel pitch would lower the image resolution, whereas applying the dilation operation to the image effectively reduces the holes.
  • Image preprocessing methods are not limited to morphological expansion. It is also possible to perform morphological closing operations on the image to fill the black hole area, and then perform morphological opening operations to enhance edge information and filter discrete interference pixels. As shown in Figure 7, it is a two-dimensional image of a three-dimensional point cloud of a container operation where the truck is not lifted during the container operation.
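A rough sketch of this preprocessing using scipy.ndimage; the bilateral filter mentioned above has no direct scipy equivalent and is omitted here, so this is an approximation of the pipeline, not the patent's implementation:

```python
import numpy as np
from scipy import ndimage

def preprocess(binary_img):
    """Median-filter the projected binary image, then dilate to close holes."""
    filtered = ndimage.median_filter(binary_img, size=3)           # despeckle, keep edges
    dilated = ndimage.binary_dilation(filtered > 0, iterations=1)  # fill scan gaps
    return (dilated * 255).astype(np.uint8)

# A 3x3 block of point-cloud pixels with a one-pixel hole in the middle:
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 255
img[2, 2] = 0
out = preprocess(img)   # the hole at (2, 2) is filled
```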
  • After step S306, the method further includes:
  • S308 Determine the position range of the gap between the truck and the container in the two-dimensional image.
  • The gap between the truck and the container refers to the gap between the truck and the bottom of the container when the container is lifted off the truck. The gap is therefore related to the height to which the container has been hoisted and to the height of the truck tray. Specifically, determining the position range of the gap between the truck and the container in the two-dimensional image includes: determining the position range of the gap in the two-dimensional image according to the height of the lidar, the lifted height of the container, and the height of the truck tray.
  • the known installation height of the lidar is a
  • the height of the truck tray is b
  • the height of the container being lifted is d
  • the z coordinate range of the gap position in the laser sensor coordinate system is [b−a, b−a+d].
  • The position range of the gap on the two-dimensional image can then be determined using the coordinate formula, given above, for converting the three-dimensional point cloud into the two-dimensional image.
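Putting the two statements together, the gap's z range and its image-coordinate range can be sketched as follows (function names and the mapping convention are ours):

```python
def gap_z_range(a, b, d):
    """Z range of the truck/container gap in the lidar frame.

    a: lidar installation height, b: truck tray height,
    d: height to which the container has been lifted.
    """
    return (b - a, b - a + d)   # the range [b-a, b-a+d] from the text

def gap_column_range(z_low, z_high, z_min, v_r):
    """Map the z range to image columns via v = (z - z_min)/v_r."""
    return (z_low - z_min) / v_r, (z_high - z_min) / v_r

# a = 5 m, tray b = 1.5 m, container lifted d = 0.3 m:
lo, hi = gap_z_range(5.0, 1.5, 0.3)   # roughly (-3.5, -3.2)
```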
  • S310 Perform image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result.
  • Specifically, image detection is performed on the pixel points within the position range to obtain the truck anti-lifting detection result.
  • The step of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result includes:
  • S902 Traverse each row in the position range in the two-dimensional image, and count the number of point cloud pixels in each row.
  • the point cloud pixel refers to the pixel converted from the point cloud.
  • the gray value of point cloud pixels can be 255
  • the gray value of non-point cloud pixels can be 0.
  • the gray value of the point cloud pixel can be 0, and the gray value of the non-point cloud pixel is 255.
  • According to the gray value used for point cloud pixels, count in each row within the position range of the two-dimensional image the number of pixels whose gray value equals that value. For example, if the gray value of a point cloud pixel is 255, count the number of pixels with a gray value of 255 in each row, thereby obtaining the number of point cloud pixels in each row within the position range of the two-dimensional image.
  • If the number of point cloud pixels in the current row is greater than or equal to the first threshold, step S906 is executed; if the number of point cloud pixels in the current row is less than the first threshold, return to step S902.
  • S906 The counter is increased by a preset value.
  • Step S908 is executed after step S906.
  • S908 Determine whether the traversal of each row in the position range is completed.
  • If yes, go to step S910; if no, go back to step S902.
  • If the statistical value of the counter is greater than or equal to the second threshold, step S912 is executed; if the statistical value of the counter is less than the second threshold, step S914 is executed.
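The traversal in steps S902 through S914 reduces to counting, inside the gap region, the rows that contain enough point-cloud pixels. A sketch follows; the increment of 1 per row and the "hoisted" interpretation of the final comparison are our reading of the flow, not stated outcomes:

```python
import numpy as np

def detect_by_row_count(img, row_range, first_threshold, second_threshold):
    """Return True ('hoisted' reading) when enough gap rows contain point-cloud pixels.

    img: binary image with point-cloud pixels set to 255.
    row_range: inclusive (start, end) rows of the gap position range.
    """
    counter = 0
    for row in img[row_range[0]:row_range[1] + 1]:
        if np.count_nonzero(row == 255) >= first_threshold:
            counter += 1          # the counter's preset increment, assumed to be 1
    return counter >= second_threshold

img = np.zeros((5, 10), dtype=np.uint8)
img[1, :5] = 255                  # 5 point-cloud pixels in row 1
img[3, :6] = 255                  # 6 point-cloud pixels in row 3
flag = detect_by_row_count(img, (1, 3), 4, 2)   # two qualifying rows
```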
  • The step of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result includes:
  • S1101 Perform boundary extraction on the two-dimensional image to obtain a boundary line image.
  • The boundary extraction can use image edge extraction methods such as Canny or Sobel edge detection; alternatively, a connected-domain method such as findContours can be used to find the contour boundary of the image.
  • S1104 Perform straight line detection on the boundary line image, and reserve straight lines with slopes within a preset range.
  • the pixels in the boundary line image within the position range can be retained to obtain the image to be detected.
  • Hough line detection is performed on the image to be detected, and straight lines with slopes within a certain range are retained.
  • straight line detection can also be performed on the entire boundary line image, and straight lines with slopes within a certain range are retained, that is, pixels of straight lines whose slopes are not within the range are removed.
  • If the intersection is within the position range, step S1110 is executed and a detection result that the truck is hoisted is obtained; if not, step S1112 is executed and a detection result that the truck is not hoisted is obtained.
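The final check (do the two detected boundary lines meet inside the gap's position range?) can be sketched with slope-intercept lines; representing the position range as an axis-aligned box is our simplification:

```python
def line_intersection(l1, l2):
    """Intersection of two lines given as (slope, intercept); None if parallel."""
    (m1, b1), (m2, b2) = l1, l2
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

def hoisted_by_intersection(l1, l2, x_range, y_range):
    """Return True ('hoisted') when the intersection lies in the position range."""
    p = line_intersection(l1, l2)
    if p is None:
        return False
    x, y = p
    return x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]

# Lines y = x and y = -x + 4 meet at (2, 2), inside the box [0, 3] x [0, 3].
hit = hoisted_by_intersection((1.0, 0.0), (-1.0, 4.0), (0.0, 3.0), (0.0, 3.0))
```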
  • the projected image of the three-dimensional point cloud on the two-dimensional plane is used to perform image detection according to the position range to obtain the detection result of the truck anti-lifting, which ensures the accuracy of the detection and reduces the amount of calculation.
  • the above-mentioned three-dimensional laser-based truck anti-lifting detection method uses lidar to collect three-dimensional data of container operations with high data accuracy.
  • the three-dimensional point cloud is converted according to the attitude angle.
  • the point cloud data is projected to a two-dimensional image, and the gap between the truck and the container is detected in the two-dimensional image, so as to determine whether the truck and the container are effectively separated.
  • the data source of this method has high accuracy, and the detection method is not affected by the installation position and angle of the lidar, which greatly improves the accuracy of anti-lifting detection.
  • a three-dimensional laser-based truck pick-up anti-lifting detection device which includes:
  • the point cloud acquisition module 1202 is used to acquire the three-dimensional point cloud of the container operation collected by the lidar.
  • the attitude angle acquisition module 1204 is used to acquire the attitude angle of the lidar.
  • the conversion module 1206 is used to convert the three-dimensional point cloud according to the attitude angle.
  • the projection module 1208 is used to project the converted three-dimensional point cloud into a two-dimensional image.
  • the gap position determination module 1210 is used to determine the position range of the gap between the truck and the container in the two-dimensional image.
  • the detection module 1212 is used to perform image detection on the position range in the two-dimensional image and obtain the truck anti-lifting detection result.
  • the above-mentioned three-dimensional laser-based truck anti-lifting detection device uses lidar to collect three-dimensional data of container operations with high data accuracy.
  • the three-dimensional point cloud is converted according to the attitude angle.
  • Without being affected by the lidar installation position and installation angle, the point cloud data is then projected to a two-dimensional image; the gap between the truck and the container is detected in the two-dimensional image to determine whether the truck and the container are effectively separated.
  • the data source of this method has high accuracy, and the detection method is not affected by the installation position and angle of the lidar, which greatly improves the accuracy of anti-lifting detection.
  • the three-dimensional laser-based truck anti-lifting detection device further includes:
  • the calibration point cloud acquisition module is used to acquire the three-dimensional calibration point cloud of the container operation collected by the lidar in the calibration state;
  • the ground point cloud determination module is used to determine the ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar.
  • the first normal vector calculation module is used to calculate the plane normal vector of the ground point cloud.
  • the first angle calculation module is used to calculate the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud;
  • the side point cloud determination module is used to determine the side point cloud of the container from the three-dimensional point cloud according to the installation height of the lidar, the height of the truck tray, the height of the container, and the distance from the lidar;
  • the second normal vector calculation module is used to calculate the plane normal vector of the point cloud on the side of the container;
  • the second angle calculation module is used to calculate the yaw angle of the laser radar according to the plane normal vector of the point cloud on the side of the container.
  • the attitude angle includes the roll angle, the pitch angle and the yaw angle.
  • the conversion module is used to convert the 3D point cloud according to the roll angle and pitch angle of the lidar, so that after the conversion the ground point cloud in the 3D point cloud is parallel to the bottom plane of the lidar coordinate system, and to convert the converted 3D point cloud according to the yaw angle of the lidar, so that the container side point cloud in the converted 3D point cloud is parallel to the side plane of the lidar coordinate system.
  • the projection module includes:
  • the coordinate calculation module is used to calculate, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point.
  • the pixel point conversion module is used to convert the point cloud into pixel points according to the two-dimensional coordinates of each three-dimensional point cloud.
  • the binarization processing module is used to binarize the pixels of the point cloud and the pixels of the non-point cloud to obtain a binary image;
  • the preprocessing module is used to perform image preprocessing on the binary image to obtain a two-dimensional image.
  • the gap position determining module is used to determine the position range of the gap between the truck and the container in the two-dimensional image according to the height of the lidar, the height of the container being hoisted, and the height of the truck tray.
  • the detection module includes:
  • the traversal module is used to traverse the rows in the position range of the two-dimensional image and count the number of point cloud pixels in each row.
  • a counter module, configured to increase the counter by a preset value if the number of point-cloud pixels in the current row is greater than a first threshold;
  • the comparison module is used to compare the statistical value of the counter with the second threshold after the traversal of each row in the position range is completed;
  • the detection and analysis module is used to obtain the detection result that the truck is hoisted if the statistical value of the counter is greater than the second threshold.
  • the detection module includes:
  • the edge detection module is used to extract the boundary of the two-dimensional image to obtain the boundary line image;
  • the straight line detection module is used to detect the straight line of the boundary line image, and keep the straight line whose slope is within the preset range;
  • the detection and analysis module is used to determine the intersection of the straight lines and, if the intersection is within the position range, obtain the detection result that the truck has been lifted.
  • the various modules in the above-mentioned three-dimensional laser-based truck anti-lifting detection device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 13.
  • the computer equipment includes a processor, a memory and a communication interface connected through a system bus. Among them, the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be implemented through WIFI, an operator's network, NFC (near field communication) or other technologies.
  • FIG. 13 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the specific computer device may include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
  • a computer device including a memory and a processor, a computer program is stored in the memory, and the processor implements the following steps when the processor executes the computer program:
  • the image detection is performed on the position range in the two-dimensional image, and the detection result of the truck anti-lifting is obtained.
  • the manner of obtaining the attitude angle of the lidar includes:
  • the yaw angle of the lidar is calculated.
  • the attitude angle includes the roll angle, the pitch angle and the yaw angle.
  • converting the three-dimensional point cloud according to the attitude angle includes:
  • the 3D point cloud is converted, and the ground point cloud in the 3D point cloud after the conversion is parallel to the bottom plane of the lidar coordinate system;
  • the converted 3D point cloud is converted, and the side point cloud of the container in the converted 3D point cloud is parallel to the side plane of the lidar coordinate system.
  • projecting the converted three-dimensional point cloud into a two-dimensional image includes:
  • determining the position range of the gap between the truck and the container in the two-dimensional image includes:
  • according to the height of the lidar, the lifting height of the container and the height of the truck tray, the position range of the gap between the truck and the container in the two-dimensional image is determined.
  • image detection is performed on the position range in the two-dimensional image to obtain the detection result of the truck anti-lifting, including:
  • the counter is increased by a preset value
  • the statistical value of the counter is compared with the second threshold
  • image detection is performed according to the position range to obtain the detection result of truck anti-lifting, including:
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
  • the image detection is performed on the position range in the two-dimensional image, and the detection result of the truck anti-lifting is obtained.
  • the manner of obtaining the attitude angle of the lidar includes:
  • the yaw angle of the lidar is calculated.
  • the attitude angle includes the roll angle, the pitch angle and the yaw angle.
  • converting the three-dimensional point cloud according to the attitude angle includes:
  • the 3D point cloud is converted, and the ground point cloud in the 3D point cloud after the conversion is parallel to the bottom plane of the lidar coordinate system;
  • the converted 3D point cloud is converted, and the side point cloud of the container in the converted 3D point cloud is parallel to the side plane of the lidar coordinate system.
  • projecting the converted three-dimensional point cloud into a two-dimensional image includes:
  • determining the position range of the gap between the truck and the container in the two-dimensional image includes:
  • according to the height of the lidar, the lifting height of the container and the height of the truck tray, the position range of the gap between the truck and the container in the two-dimensional image is determined.
  • image detection is performed on the position range in the two-dimensional image to obtain the detection result of the truck anti-lifting, including:
  • the statistical value of the counter is compared with the second threshold
  • image detection is performed according to the position range to obtain the detection result of truck anti-lifting, including:
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A three-dimensional laser-based truck anti-lifting detection method, device, computer equipment and storage medium. The method includes: acquiring a three-dimensional point cloud of a container operation collected by a lidar (S302); acquiring the attitude angle of the lidar (S304); converting the three-dimensional point cloud according to the attitude angle (S306); projecting the converted three-dimensional point cloud into a two-dimensional image (S308); determining the position range, in the two-dimensional image, of the gap between the truck and the container (S310); and performing image detection on the position range in the two-dimensional image to obtain a truck anti-lifting detection result (S312). The data source of the method has high accuracy, and the detection is unaffected by the installation position and installation angle of the lidar, which greatly improves the accuracy of anti-lifting detection.

Description

Three-dimensional laser-based truck anti-lifting detection method, device and computer equipment — Technical field
The present application relates to the technical field of lidar, and in particular to a three-dimensional laser-based truck anti-lifting detection method, device and computer equipment.
Background art
During the unloading of a container from a truck by a yard crane, if the truck's lock pins are not fully unlocked, the spreader may lift the container together with the truck, or lift one side of it, causing a truck-lifting accident.
To avoid such accidents, during the unloading operation it is necessary to detect whether the truck and the container have separated, i.e. truck anti-lifting detection. The traditional detection method uses a 2D laser scanner to obtain the outline of the truck and judges from the outline whether the truck and the container have separated. This method depends on the installation position of the laser scanner and the parking position of the truck, and, limited by the data accuracy, its anti-lifting detection accuracy is low.
Summary of the invention
On this basis, in view of the above technical problems, it is necessary to provide a three-dimensional laser-based truck anti-lifting detection method, device, computer equipment and storage medium capable of improving the detection accuracy.
A three-dimensional laser-based truck anti-lifting detection method, the method including:
acquiring a three-dimensional point cloud of a container operation collected by a lidar;
acquiring the attitude angle of the lidar;
converting the three-dimensional point cloud according to the attitude angle;
projecting the converted three-dimensional point cloud into a two-dimensional image;
determining the position range, in the two-dimensional image, of the gap between the truck and the container;
performing image detection on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
In one embodiment, the manner of acquiring the attitude angle of the lidar includes:
acquiring a three-dimensional calibration point cloud of a container operation collected by the lidar in a calibration state;
determining a ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar;
calculating the plane normal vector of the ground point cloud;
calculating the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud;
determining a container side point cloud from the three-dimensional point cloud according to the installation height of the lidar, the truck tray height, the container height and the distance from the lidar;
calculating the plane normal vector of the container side point cloud;
calculating the yaw angle of the lidar according to the plane normal vector of the container side point cloud, the attitude angle including the roll angle, the pitch angle and the yaw angle.
In one embodiment, converting the three-dimensional point cloud according to the attitude angle includes:
converting the three-dimensional point cloud according to the roll angle and pitch angle of the lidar, the ground point cloud in the converted three-dimensional point cloud being parallel to the bottom plane of the lidar coordinate system;
converting the converted three-dimensional point cloud according to the yaw angle of the lidar, the container side point cloud in the converted three-dimensional point cloud being parallel to the side plane of the lidar coordinate system.
In one embodiment, projecting the converted three-dimensional point cloud into a two-dimensional image includes:
calculating, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point;
converting the point cloud into pixels according to the two-dimensional coordinates of each point;
binarizing the point-cloud pixels and the non-point-cloud pixels to obtain a binary image;
performing image preprocessing on the binary image to obtain the two-dimensional image.
In one embodiment, determining the position range of the gap between the truck and the container in the two-dimensional image includes:
determining the position range of the gap between the truck and the container in the two-dimensional image according to the height of the lidar, the height to which the container is lifted, and the truck tray height.
In one embodiment, performing image detection according to the position range to obtain the truck anti-lifting detection result includes:
traversing the rows within the position range and counting the number of point-cloud pixels in each row;
if the number of point-cloud pixels in the current row is greater than a first threshold, increasing a counter by a preset value;
after all rows within the position range have been traversed, comparing the statistical value of the counter with a second threshold;
if the statistical value of the counter is greater than the second threshold, obtaining a detection result that the truck has been lifted.
In one embodiment, performing image detection according to the position range to obtain the truck anti-lifting detection result includes:
performing boundary extraction on the two-dimensional image to obtain a boundary line image;
performing straight-line detection on the boundary line image and retaining the straight lines whose slope is within a preset range;
determining the intersection of the straight lines and, if the intersection is within the position range, obtaining a detection result that the truck has been lifted.
A three-dimensional laser-based truck anti-lifting detection device, including:
a point cloud acquisition module, used to acquire a three-dimensional point cloud of a container operation collected by a lidar;
an attitude angle acquisition module, used to acquire the attitude angle of the lidar;
a conversion module, used to convert the three-dimensional point cloud according to the attitude angle;
a projection module, used to project the converted three-dimensional point cloud into a two-dimensional image;
a gap position determination module, used to determine the position range, in the two-dimensional image, of the gap between the truck and the container;
a detection module, used to perform image detection on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
A computer device, including a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method of any of the above embodiments when executing the computer program.
A computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the method of any of the above embodiments when executed by a processor.
The above three-dimensional laser-based truck anti-lifting detection method, device, computer equipment and storage medium use a lidar to collect three-dimensional data of the container operation, with high data accuracy. On the basis of the high-precision three-dimensional point cloud data, the three-dimensional point cloud is converted according to the attitude angle, unaffected by the lidar installation position and angle; the point cloud data is then projected into a two-dimensional image, in which the gap between the truck and the container is detected to judge whether the truck and the container have effectively separated. The data source of the method has high accuracy, and the detection is unaffected by the installation position and angle of the lidar, greatly improving the accuracy of anti-lifting detection.
Brief description of the drawings
FIG. 1 is an application environment diagram of the three-dimensional laser-based truck anti-lifting detection method in one embodiment;
FIG. 2 is a schematic diagram of the lidar installation position in one embodiment;
FIG. 3 is a schematic flowchart of the three-dimensional laser-based truck anti-lifting detection method in one embodiment;
FIG. 4 is a schematic flowchart of the step of acquiring the attitude angle of the lidar in one embodiment;
FIG. 5 is a schematic flowchart of the step of converting the three-dimensional point cloud according to the attitude angle in another embodiment;
FIG. 6 is a schematic flowchart of the step of projecting the converted three-dimensional point cloud into a two-dimensional image in one embodiment;
FIG. 7 is a two-dimensional image of the three-dimensional point cloud of a container operation in which the truck has not been lifted, in one embodiment;
FIG. 8 is a schematic diagram of the position range of the gap between the truck and the container in one embodiment;
FIG. 9 is a schematic flowchart of the step of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result in one embodiment;
FIG. 10 is a two-dimensional image of the three-dimensional point cloud of a container operation in which the truck has been lifted, in one embodiment;
FIG. 11 is a schematic flowchart of the step of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result in another embodiment;
FIG. 12 is a structural block diagram of the three-dimensional laser-based truck anti-lifting detection device in one embodiment;
FIG. 13 is an internal structure diagram of the computer device in one embodiment.
Detailed description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
The three-dimensional laser-based truck anti-lifting detection method provided by the present application can be applied in the application environment shown in FIG. 1. A lidar 101 is installed on one side of the working lane of a container-operation gantry crane 102 and collects laser point clouds; the installation position of the lidar is set according to the truck height. A master control device 103 is in communication connection with the lidar 101, and is also connected with a gantry crane control device 104. When the gantry crane control device 104 controls the spreader 105 to lift the container 106 on the truck, it sends a control signal to the master control device 103; the master control device 103 sends a collection signal to the lidar 101; the lidar 101 collects a three-dimensional point cloud of the container operation according to the collection signal and sends the collected point cloud to the master control device 103. The master control device 103 acquires the three-dimensional point cloud of the container operation collected by the lidar; acquires the attitude angle of the lidar; converts the three-dimensional point cloud according to the attitude angle; projects the converted three-dimensional point cloud into a two-dimensional image; determines the position range, in the two-dimensional image, of the gap between the truck and the container; and performs image detection on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
The lidar installation position is shown in FIG. 2, where the coordinate system is the lidar coordinate system: the origin O represents the position of the lidar, the X axis points straight ahead of the lidar, the Y axis points to its left, and the Z axis points straight up. The cube in the figure represents the container and truck position.
In one embodiment, as shown in FIG. 3, a three-dimensional laser-based truck anti-lifting detection method is provided. Taking the method applied to the master control device in FIG. 1 as an example, it includes the following steps:
S302: acquire the three-dimensional point cloud of the container operation collected by the lidar.
Specifically, the lidar collects a three-dimensional laser point cloud of the container operation site. When the spreader lifts the container on the truck, the master control device sends a collection signal to the lidar and controls the lidar to scan and obtain the three-dimensional point cloud of the container operation.
S304: acquire the attitude angle of the lidar.
The attitude angle of the lidar refers to the installation angle of the lidar relative to a reference, including but not limited to the roll angle, pitch angle and yaw angle. The attitude angle can be determined from the three-dimensional point cloud of the container operation. In practice, since the position at which the truck carries the container is basically fixed, the attitude angle only needs to be calculated once, and the first attitude angle can be used for subsequent point cloud calibration; alternatively, calibration can be performed for every detection in real time, which yields a more accurate calibrated point cloud.
In one embodiment, the attitude angle includes the roll angle, pitch angle and yaw angle. As shown in FIG. 4, the step of acquiring the attitude angle of the lidar includes:
S402: acquire the three-dimensional calibration point cloud of the container operation collected by the lidar in the calibration state.
The container operation in which the method of the present application is first used for anti-lifting detection can be taken as the calibration state. To ensure the accuracy of the attitude angle data, calibration can also be performed periodically, for example taking the first such container operation of each week as the calibration state.
S404: determine the ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar.
The ground point cloud refers to the points located on the ground in the collected three-dimensional calibration point cloud of the container operation site. Specifically, given the installation height a of the lidar, the points in the three-dimensional calibration point cloud whose Z coordinate is less than -a are taken as the ground point cloud of the container operation. Since the container operation takes place on the ground, taking the ground point cloud removes points unrelated to the operation scene and reduces their influence on subsequent anti-lifting detection.
S406: calculate the plane normal vector of the ground point cloud.
The normal vector is a concept from analytic geometry: a vector along a line perpendicular to a plane is a normal vector of that plane.
To compute the normal vector, the covariance matrix of the ground point cloud is first computed and then decomposed by singular value decomposition. The singular vectors describe the three principal directions of the point cloud data; the normal vector perpendicular to the plane corresponds to the direction of smallest variance, i.e. the smallest singular value, so the singular vector with the smallest singular value is taken as the plane normal vector:
C = (1/n) Σ_{i=1}^{n} (s_i − s̄)(s_i − s̄)^T
where C is the covariance matrix, s_i is a point in the point cloud, and s̄ is the mean of the point cloud.
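The ground filtering and plane-normal estimation described above can be sketched in numpy. This is a minimal illustration, not the patent's implementation; the function names and all numeric values (mount height, noise level) are assumptions chosen for the example:

```python
import numpy as np

def ground_points(cloud, a):
    """Step S404: keep points whose Z coordinate is below -a,
    where a is the lidar installation height (points on the ground)."""
    return cloud[cloud[:, 2] < -a]

def plane_normal(points):
    """Step S406: build the covariance matrix C of the cloud, then SVD;
    the singular vector with the smallest singular value spans the
    direction of least variance, i.e. the plane normal."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)   # 3x3 covariance matrix C
    u, s, _ = np.linalg.svd(cov)
    return u[:, np.argmin(s)]                   # unit normal vector

rng = np.random.default_rng(0)
# Synthetic scene: a ground plane at z = -3 plus clutter near the lidar.
ground = np.column_stack([rng.uniform(-5, 5, 500),
                          rng.uniform(-5, 5, 500),
                          -3 + rng.normal(0, 0.01, 500)])
clutter = rng.uniform(-1, 1, (100, 3))
cloud = np.vstack([ground, clutter])
n = plane_normal(ground_points(cloud, a=2.0))
print(abs(n[2]) > 0.99)  # True: the estimated normal is near-vertical
```

`np.linalg.svd` returns the singular values in descending order, so the last singular vector is the one of least variance; `argmin` is used here only to make the intent explicit.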
S408: calculate the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud.
Here the pitch angle is the angle between the X axis of the lidar coordinate system and the horizontal plane, and the roll angle is the angle between the Y axis of the lidar coordinate system and the vertical plane of the lidar.
Specifically, the roll and pitch angles are computed from the ground normal vector:
T_1 = (a_1, b_1, c_1)
[formula given as an image in the original publication: α and β computed from the components of T_1]
where T_1 is the normal vector of the ground, α is the roll angle and β is the pitch angle.
S410: determine the container side point cloud from the three-dimensional point cloud of the container operation according to the installation height of the lidar, the truck tray height, the container height and the distance from the lidar.
The container side point cloud refers to the part of the collected three-dimensional laser point cloud that represents the side of the container. It can be determined from the point height and the distance of the points from the lidar.
Specifically, with the lidar height a, the truck tray height b and the container height c known, the points whose z coordinate lies in the range [-a+b, -a+b+c] are taken as the once-filtered point cloud. Since the container side is close to the lidar, a distance threshold t is set, and from the once-filtered point cloud the points whose distance from the lidar is less than t are taken as the container side point cloud.
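The two-stage side-cloud filter above can be written directly as boolean masks. A minimal sketch; the heights and the threshold t below are illustrative values, not the patent's parameters:

```python
import numpy as np

def container_side_points(cloud, a, b, c, t):
    """Stage 1 keeps z in [-a+b, -a+b+c] (a: lidar mount height,
    b: truck tray height, c: container height); stage 2 keeps points
    whose Euclidean range from the lidar origin is below threshold t,
    since the container side is close to the lidar."""
    z = cloud[:, 2]
    stage1 = cloud[(z >= -a + b) & (z <= -a + b + c)]
    dist = np.linalg.norm(stage1, axis=1)
    return stage1[dist < t]

cloud = np.array([[1.0, 0.0, -1.0],    # inside z band, close: kept
                  [1.0, 0.0, -4.0],    # below z band: dropped
                  [30.0, 0.0, -1.0]])  # inside z band, too far: dropped
side = container_side_points(cloud, a=3.0, b=1.5, c=2.5, t=10.0)
print(len(side))  # 1
```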
S412: calculate the plane normal vector of the container side point cloud.
The calculation is the same as in step S406 and is not repeated here.
S414: calculate the yaw angle of the lidar according to the plane normal vector of the container side point cloud.
The yaw angle is the angle between the Z axis of the lidar coordinate system and the side of the container.
Specifically, the yaw angle is computed from the side normal vector:
T_2 = (a_2, b_2, c_2)
[formula given as an image in the original publication: γ computed from the components of T_2]
where T_2 is the plane normal vector of the container side point cloud and γ is the yaw angle.
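The exact angle formulas appear as images in the original, so the sketch below uses one consistent convention as an assumption: pitch rotates about X to null the ground normal's Y component, roll rotates about Y to null its X component, and yaw rotates about Z to align the container-side normal with the Y axis. Other sign conventions are equally possible:

```python
import numpy as np

def attitude_angles(ground_normal, side_normal):
    """Recover roll, pitch, yaw from the two plane normals.
    Assumed convention (the patent's formulas are images):
    pitch about X, roll about Y from T1 = (a1, b1, c1);
    yaw about Z from T2 = (a2, b2, c2)."""
    a1, b1, c1 = ground_normal
    pitch = np.arctan2(b1, c1)                 # about the X axis
    roll = np.arctan2(a1, np.hypot(b1, c1))    # about the Y axis
    a2, b2, _ = side_normal
    yaw = np.arctan2(a2, b2)                   # about the Z axis
    return roll, pitch, yaw

# A perfectly level lidar whose side view is already aligned:
roll, pitch, yaw = attitude_angles((0.0, 0.0, 1.0), (0.0, 1.0, 0.0))
print(roll, pitch, yaw)  # 0.0 0.0 0.0
```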
In this embodiment, the roll, pitch and yaw angles of the lidar are calculated by the plane-normal-vector method.
After step S304, the method further includes:
S306: convert the three-dimensional point cloud according to the attitude angle.
As mentioned above, the attitude angle includes the roll, pitch and yaw angles, where the roll and pitch angles are obtained from the plane normal vector of the ground point cloud, and the yaw angle from the plane normal vector of the container side point cloud. Therefore, in this embodiment, after conversion the ground point cloud is parallel to the bottom plane of the lidar coordinate system, and the converted container side point cloud is parallel to the side plane of the lidar coordinate system. After conversion, the point cloud data is unaffected by the lidar installation angle, installation position and truck parking position, and a front, eye-level ground point cloud is obtained.
Specifically, as shown in FIG. 5, the step of converting the three-dimensional point cloud according to the attitude angle includes:
S502: convert the three-dimensional point cloud according to the roll and pitch angles of the lidar, the ground point cloud in the converted three-dimensional point cloud being parallel to the bottom plane of the lidar coordinate system.
Specifically, according to the pitch angle, the ground point cloud is rotated about the X axis of the lidar coordinate system, and according to the roll angle it is rotated about the Y axis; after conversion the ground point cloud is parallel to the bottom plane of the lidar coordinate system:
[rotation matrices R_x and R_y given as images in the original publication]
p_g = R_y · R_x · p_c
where R_x and R_y are the rotation matrices about the x and y axes, p_g is the converted ground point cloud parallel to the XOY plane of the lidar coordinate system, and p_c is the original ground point cloud.
S504: convert the converted three-dimensional point cloud according to the yaw angle of the lidar, the container side point cloud in the converted point cloud being parallel to the side plane of the lidar coordinate system.
Specifically, according to the yaw angle, the converted three-dimensional point cloud is rotated about the Z axis of the lidar coordinate system, after which the container side point cloud is parallel to the side plane of the lidar coordinate system:
[rotation matrix R_z given as an image in the original publication]
p = R_z · p_g
where R_z is the rotation matrix about the z axis, p_g is the point cloud whose ground points are parallel to the XOY plane of the lidar coordinate system, and p is the converted point cloud whose container side is parallel to the XOZ plane of the lidar coordinate system.
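The two-step conversion p_g = R_y · R_x · p_c, p = R_z · p_g can be sketched with standard right-handed rotation matrices. The matrices in the original are images, so the standard forms below are an assumption, and the angle signs depend on the measurement convention:

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def align_cloud(cloud, roll, pitch, yaw):
    """Level the ground (R_y @ R_x), then align the container side
    (R_z). cloud is N x 3 with points as rows, hence the transpose."""
    R = rot_z(yaw) @ rot_y(roll) @ rot_x(pitch)
    return cloud @ R.T

pt = np.array([[1.0, 0.0, 0.0]])
out = align_cloud(pt, roll=0.0, pitch=0.0, yaw=np.pi / 2)
print(np.round(out, 6).tolist())  # [[0.0, 1.0, 0.0]]
```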
After step S306, the method further includes:
S308: project the converted three-dimensional point cloud into a two-dimensional image.
Specifically, the three-dimensional point cloud is represented as pixels to obtain a two-dimensional image.
As shown in FIG. 6, the step of projecting the converted three-dimensional point cloud into a two-dimensional image includes:
S602: calculate, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point.
Specifically, for each three-dimensional point of the ground point cloud, its coordinates in the two-dimensional image can be computed as:
u = [(x_i − x_min) / u_r]
v = [(z_i − z_min) / v_r]
where u and v are the row and column coordinates of the two-dimensional image, x_i and z_i are the x-axis and z-axis coordinates of the i-th point, x_min and z_min are the minimum values of the point cloud on the x and z axes, and u_r and v_r are the resolutions with which the point cloud is projected onto the two-dimensional image, i.e. the distance between adjacent pixels.
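The projection formulas above translate directly to vectorized numpy. A minimal sketch; the 0.05 m-per-pixel resolutions are illustrative assumptions:

```python
import numpy as np

def project_to_pixels(cloud, u_r=0.05, v_r=0.05):
    """Map each 3D point's (x, z) to integer image coordinates with
    u = floor((x - x_min)/u_r), v = floor((z - z_min)/v_r);
    u_r and v_r are the metres-per-pixel projection resolutions."""
    x, z = cloud[:, 0], cloud[:, 2]
    u = np.floor((x - x.min()) / u_r).astype(int)
    v = np.floor((z - z.min()) / v_r).astype(int)
    return u, v

cloud = np.array([[0.0, 0.0, 0.0], [0.10, 0.0, 0.20]])
u, v = project_to_pixels(cloud)
print(u.tolist(), v.tolist())  # [0, 2] [0, 4]
```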
S604: convert the point cloud into pixels according to the two-dimensional coordinates of each point.
Specifically, the ground point cloud is represented by pixels whose coordinates are the two-dimensional coordinates of the points.
S606: binarize the point-cloud pixels and the non-point-cloud pixels to obtain a binary image.
Specifically, binarization sets the gray value of each pixel to 0 or 255, giving the whole image a distinct black-and-white appearance. One way is to set the gray value of pixels converted from the point cloud to 255 and all other pixels to 0; alternatively, the point-cloud pixels may be set to 0 and all other pixels to 255.
S608: perform image preprocessing on the binary image to obtain the two-dimensional image.
The preprocessing first applies median filtering and bilateral filtering to the image (median filtering protects edge information; bilateral filtering denoises while preserving edges), then applies a morphological dilation. Owing to the scanning pattern of the laser sensor, the distance between some neighboring points exceeds the pixel pitch of the image, producing holes; increasing the pixel resolution would lower the image resolution, whereas dilation effectively reduces the holes.
The preprocessing is not limited to morphological dilation. A morphological closing can also be applied to fill hole regions, followed by a morphological opening to enhance edge information and filter out isolated interfering pixels. FIG. 7 shows the two-dimensional image of the three-dimensional point cloud of a container operation in which the truck has not been lifted.
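The rasterization and hole-closing dilation can be sketched without an image library; the shifted-maximum loop below is a numpy stand-in for a 3x3 `cv2.dilate`, used here only to keep the example self-contained:

```python
import numpy as np

def rasterize_and_dilate(u, v, shape):
    """Build the binary image (point-cloud pixels = 255, background = 0)
    and apply one pass of 3x3 morphological dilation: each output pixel
    takes the maximum over its 3x3 neighborhood, closing small scan holes."""
    img = np.zeros(shape, dtype=np.uint8)
    img[v, u] = 255
    out = img.copy()
    for dv in (-1, 0, 1):
        for du in (-1, 0, 1):
            out = np.maximum(out, np.roll(np.roll(img, dv, 0), du, 1))
    return out

img = rasterize_and_dilate(np.array([2]), np.array([2]), (5, 5))
print(int(img.sum() // 255))  # 9: the single pixel grew to a 3x3 block
```

In a real pipeline the median/bilateral filtering and the closing/opening variants would come from an image library such as OpenCV.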
After step S308, the method further includes:
S310: determine the position range, in the two-dimensional image, of the gap between the truck and the container.
The gap between the truck and the container refers to the space between the truck and the bottom of the container once the container has been lifted away from the truck. The gap is therefore related to the height to which the container is lifted and to the truck tray height. Specifically, determining the position range of the gap in the two-dimensional image includes: determining it according to the height of the lidar, the height to which the container is lifted, and the truck tray height. As shown in FIG. 8, with lidar installation height a, truck tray height b and container lift height d, the z-coordinate range of the gap in the laser sensor coordinate system is [b-a, b-a+d]. Once this coordinate range in the lidar coordinate system is obtained, the position range of the gap in the two-dimensional image can be determined with the coordinate formulas given above for projecting the three-dimensional point cloud into the two-dimensional image.
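The gap's z-range arithmetic maps to image rows with the same projection formula. A minimal sketch; the heights and the 0.25 m-per-pixel resolution are illustrative assumptions:

```python
def gap_row_range(a, b, d, z_min, v_r):
    """The gap spans z in [b - a, b - a + d] in the lidar frame
    (a: lidar height, b: truck tray height, d: container lift height);
    it is mapped to image rows with v = (z - z_min) / v_r."""
    z_low, z_high = b - a, b - a + d
    return int((z_low - z_min) / v_r), int((z_high - z_min) / v_r)

lo, hi = gap_row_range(a=3.0, b=1.5, d=0.5, z_min=-3.0, v_r=0.25)
print(lo, hi)  # 6 8
```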
S312: perform image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result.
Specifically, image detection is performed on the pixels within the position range to obtain the truck anti-lifting detection result.
In one implementation, as shown in FIG. 9, the step of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result includes:
S902: traverse the rows within the position range of the two-dimensional image and count the number of point-cloud pixels in each row.
Point-cloud pixels are the pixels converted from the point cloud. Depending on the binarization rule, point-cloud pixels may have gray value 255 and non-point-cloud pixels gray value 0, or vice versa. Specifically, according to the gray value of the point-cloud pixels, the number of pixels with the corresponding gray value is counted in each row within the position range. For example, if point-cloud pixels have gray value 255, the number of pixels with gray value 255 is counted in each row within the position range, giving the number of point-cloud pixels per row.
S904: compare the number of point-cloud pixels in the current row with a first threshold.
If the number of point-cloud pixels in the current row is greater than the first threshold, go to step S906; if it is less than the first threshold, return to step S902.
S906: increase the counter by a preset value.
Specifically, with a preset value of 1, the counter is incremented by 1 whenever the number of point-cloud pixels in the current row exceeds the first threshold. Step S908 follows step S906.
S908: judge whether all rows within the position range have been traversed.
If yes, go to step S910; if not, return to step S902.
S910: compare the statistical value of the counter with a second threshold.
If the statistical value of the counter is greater than the second threshold, go to step S912; if it is less, go to step S914.
S912: obtain the detection result that the truck has been lifted.
S914: obtain the detection result that the truck has not been lifted.
As shown in FIG. 7, when the container has fully separated and the truck has not been lifted, there is a gap everywhere between the container and the truck, so no three-dimensional points should be collected at the gap position and the number of point-cloud pixels in each row of the position range is 0. As shown in FIG. 10, when the truck is lifted, the number of pixels in every row of the gap region is greater than 0; if the number of rows exceeding the threshold T1 — i.e. the statistical value of the counter — is greater than T2, it can be judged that the truck has been lifted. The first and second thresholds can be set according to accuracy requirements and empirical values.
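The row-traversal counting of steps S902 to S914 can be condensed into a few lines. A minimal sketch; the image size and thresholds are illustrative assumptions:

```python
import numpy as np

def lifted_by_row_count(img, row_lo, row_hi, t1, t2):
    """Traverse rows [row_lo, row_hi) of the gap band, count the
    point-cloud pixels (value 255) per row, bump a counter for each
    row exceeding t1, and report 'lifted' when the counter exceeds t2."""
    counter = 0
    for row in img[row_lo:row_hi]:
        if int((row == 255).sum()) > t1:
            counter += 1
    return counter > t2

img = np.zeros((10, 20), dtype=np.uint8)
img[4:8, :] = 255  # the gap band is filled with points: no real gap
print(lifted_by_row_count(img, 3, 9, t1=5, t2=3))  # True
```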
In another embodiment, as shown in FIG. 11, the step of performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result includes:
S1101: perform boundary extraction on the two-dimensional image to obtain a boundary line image.
Boundary extraction can use image edge-extraction methods such as Canny or Sobel edge detection, or connected-component methods for finding image contours, such as findContours.
S1104: perform straight-line detection on the boundary line image and retain the straight lines whose slope is within a preset range.
Specifically, according to the position range of the truck-container gap in the two-dimensional image, only the pixels of the boundary line image within the position range may be retained to obtain the image to be detected, on which Hough line detection is performed, keeping the lines whose slope lies within a certain range. In other implementations, line detection can also be performed on the whole boundary line image, retaining lines whose slope is in range, i.e. clearing the pixels of lines whose slope is out of range.
S1106: determine the intersections of the straight lines.
Specifically, the intersections of the retained lines whose slope is within the preset range are computed.
S1108: judge whether an intersection is within the position range.
If yes, go to step S1110: the intersection is within the position range, and the detection result that the truck has been lifted is obtained. If not, go to step S1112: the intersection is not within the position range, and the detection result that the truck has not been lifted is obtained.
As shown in FIG. 7, when the container has fully separated and the truck has not been lifted, there is no line intersection within the position range between the container and the truck, and it can be judged that the truck has not been lifted. When the truck is lifted, the container usually has not fully separated: part of the truck is still in contact with the container while the rest is pulled up by the spreader, so the outer contour lines of the truck and the container intersect. As shown in FIG. 10, when the truck is lifted, the container and the truck have an intersection within the position range of the gap, and it can be judged that the truck has been lifted.
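The intersection test of steps S1106 to S1112 reduces to elementary line geometry once the lines are in slope-intercept form. A minimal sketch under the assumption that the lines have already been detected (e.g. by a Hough transform) and slope-filtered:

```python
def line_intersection(l1, l2):
    """Intersect two boundary lines given as (slope, intercept) pairs
    in image coordinates; returns None for parallel lines."""
    (m1, b1), (m2, b2) = l1, l2
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

def lifted(l1, l2, row_lo, row_hi):
    """Report 'lifted' when the truck and container contour lines
    intersect inside the gap's row range [row_lo, row_hi]."""
    p = line_intersection(l1, l2)
    return p is not None and row_lo <= p[1] <= row_hi

print(lifted((0.5, 0.0), (-0.5, 10.0), 4, 6))  # True: they cross at y = 5
```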
In this embodiment, using the projection of the three-dimensional point cloud onto a two-dimensional plane and performing image detection according to the position range to obtain the truck anti-lifting detection result ensures detection accuracy and reduces the amount of computation.
The above three-dimensional laser-based truck anti-lifting detection method uses a lidar to collect three-dimensional data of the container operation, with high data accuracy. On the basis of the high-precision three-dimensional point cloud data, the point cloud is converted according to the attitude angle, unaffected by the lidar installation position and angle; the point cloud data is then projected into a two-dimensional image, in which the gap between the truck and the container is detected to judge whether they have effectively separated. The data source has high accuracy and the detection is unaffected by the lidar installation position and angle, greatly improving the accuracy of anti-lifting detection.
It should be understood that although the steps in the above flowcharts are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on their execution and they may be executed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 12, a three-dimensional laser-based truck anti-lifting detection device is provided, including:
a point cloud acquisition module 1202, used to acquire the three-dimensional point cloud of the container operation collected by the lidar;
an attitude angle acquisition module 1204, used to acquire the attitude angle of the lidar;
a conversion module 1206, used to convert the three-dimensional point cloud according to the attitude angle;
a projection module 1208, used to project the converted three-dimensional point cloud into a two-dimensional image;
a gap position determination module 1210, used to determine the position range, in the two-dimensional image, of the gap between the truck and the container;
a detection module 1212, used to perform image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result.
The above three-dimensional laser-based truck anti-lifting detection device uses a lidar to collect three-dimensional data of the container operation, with high data accuracy. On the basis of the high-precision three-dimensional point cloud data, the point cloud is converted according to the attitude angle, unaffected by the lidar installation position and angle; the point cloud data is then projected into a two-dimensional image, in which the gap between the truck and the container is detected to judge whether they have effectively separated. The data source has high accuracy and the detection is unaffected by the lidar installation position and angle, greatly improving the accuracy of anti-lifting detection.
In one embodiment, the three-dimensional laser-based truck anti-lifting detection device further includes:
a calibration point cloud acquisition module, used to acquire the three-dimensional calibration point cloud of the container operation collected by the lidar in the calibration state;
a ground point cloud determination module, used to determine the ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar;
a first normal vector calculation module, used to calculate the plane normal vector of the ground point cloud;
a first angle calculation module, used to calculate the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud;
a side point cloud determination module, used to determine the container side point cloud from the three-dimensional point cloud according to the installation height of the lidar, the truck tray height, the container height and the distance from the lidar;
a second normal vector calculation module, used to calculate the plane normal vector of the container side point cloud;
a second angle calculation module, used to calculate the yaw angle of the lidar according to the plane normal vector of the container side point cloud, the attitude angle including the roll angle, pitch angle and yaw angle.
In another implementation, the conversion module is used to convert the three-dimensional point cloud according to the roll and pitch angles of the lidar, the ground point cloud in the converted point cloud being parallel to the bottom plane of the lidar coordinate system, and to convert the converted point cloud according to the yaw angle of the lidar, the container side point cloud in the converted point cloud being parallel to the side plane of the lidar coordinate system.
In another embodiment, the projection module includes:
a coordinate calculation module, used to calculate, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point;
a pixel conversion module, used to convert the point cloud into pixels according to the two-dimensional coordinates of each point;
a binarization module, used to binarize the point-cloud pixels and the non-point-cloud pixels to obtain a binary image;
a preprocessing module, used to perform image preprocessing on the binary image to obtain the two-dimensional image.
In another embodiment, the gap position determination module is used to determine the position range of the gap between the truck and the container in the two-dimensional image according to the height of the lidar, the height to which the container is lifted, and the truck tray height.
In another embodiment, the detection module includes:
a traversal module, used to traverse the rows within the position range of the two-dimensional image and count the number of point-cloud pixels in each row;
a counter module, used to increase the counter by a preset value if the number of point-cloud pixels in the current row is greater than the first threshold;
a comparison module, used to compare the statistical value of the counter with the second threshold after all rows within the position range have been traversed;
a detection analysis module, used to obtain the detection result that the truck has been lifted if the statistical value of the counter is greater than the second threshold.
In another embodiment, the detection module includes:
an edge detection module, used to perform boundary extraction on the two-dimensional image to obtain a boundary line image;
a straight-line detection module, used to perform straight-line detection on the boundary line image and retain the straight lines whose slope is within a preset range;
a detection analysis module, used to determine the intersection of the straight lines and obtain the detection result that the truck has been lifted if the intersection is within the position range.
For the specific limitations of the three-dimensional laser-based truck anti-lifting detection device, reference may be made to the limitations of the corresponding method above, which are not repeated here. Each module in the above device may be implemented in whole or in part by software, hardware or a combination thereof. The modules may be embedded in, or independent of, the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal, whose internal structure diagram may be as shown in FIG. 13. The computer device includes a processor, a memory and a communication interface connected through a system bus. The processor of the computer device is used to provide calculation and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner; the wireless manner can be implemented through WiFi, an operator network, NFC (near-field communication) or other technologies. When executed by the processor, the computer program implements a three-dimensional laser-based truck anti-lifting detection method.
Those skilled in the art can understand that the structure shown in FIG. 13 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; the specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the following steps:
acquiring the three-dimensional point cloud of the container operation collected by the lidar;
acquiring the attitude angle of the lidar;
converting the three-dimensional point cloud according to the attitude angle;
projecting the converted three-dimensional point cloud into a two-dimensional image;
determining the position range, in the two-dimensional image, of the gap between the truck and the container;
performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result.
In one embodiment, the manner of acquiring the attitude angle of the lidar includes:
acquiring the three-dimensional calibration point cloud of the container operation collected by the lidar in the calibration state;
determining the ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar;
calculating the plane normal vector of the ground point cloud;
calculating the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud;
determining the container side point cloud from the three-dimensional point cloud according to the installation height of the lidar, the truck tray height, the container height and the distance from the lidar;
calculating the plane normal vector of the container side point cloud;
calculating the yaw angle of the lidar according to the plane normal vector of the container side point cloud, the attitude angle including the roll angle, pitch angle and yaw angle.
In one embodiment, converting the three-dimensional point cloud according to the attitude angle includes:
converting the three-dimensional point cloud according to the roll and pitch angles of the lidar, the ground point cloud in the converted point cloud being parallel to the bottom plane of the lidar coordinate system;
converting the converted point cloud according to the yaw angle of the lidar, the container side point cloud in the converted point cloud being parallel to the side plane of the lidar coordinate system.
In one embodiment, projecting the converted three-dimensional point cloud into a two-dimensional image includes:
calculating, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point;
converting the point cloud into pixels according to the two-dimensional coordinates of each point;
binarizing the point-cloud pixels and the non-point-cloud pixels to obtain a binary image;
performing image preprocessing on the binary image to obtain the two-dimensional image.
In one embodiment, determining the position range of the gap between the truck and the container in the two-dimensional image includes:
determining the position range of the gap between the truck and the container in the two-dimensional image according to the height of the lidar, the height to which the container is lifted, and the truck tray height.
In one embodiment, performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result includes:
traversing the rows within the position range of the two-dimensional image and counting the number of point-cloud pixels in each row;
if the number of point-cloud pixels in the current row is greater than a first threshold, increasing a counter by a preset value;
after all rows within the position range have been traversed, comparing the statistical value of the counter with a second threshold;
if the statistical value of the counter is greater than the second threshold, obtaining the detection result that the truck has been lifted.
In one embodiment, performing image detection according to the position range to obtain the truck anti-lifting detection result includes:
performing boundary extraction on the two-dimensional image to obtain a boundary line image;
performing straight-line detection on the boundary line image and retaining the straight lines whose slope is within a preset range;
determining the intersection of the straight lines and, if the intersection is within the position range, obtaining the detection result that the truck has been lifted.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented:
acquiring the three-dimensional point cloud of the container operation collected by the lidar;
acquiring the attitude angle of the lidar;
converting the three-dimensional point cloud according to the attitude angle;
projecting the converted three-dimensional point cloud into a two-dimensional image;
determining the position range, in the two-dimensional image, of the gap between the truck and the container;
performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result.
In one embodiment, the manner of acquiring the attitude angle of the lidar includes:
acquiring the three-dimensional calibration point cloud of the container operation collected by the lidar in the calibration state;
determining the ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar;
calculating the plane normal vector of the ground point cloud;
calculating the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud;
determining the container side point cloud from the three-dimensional point cloud according to the installation height of the lidar, the truck tray height, the container height and the distance from the lidar;
calculating the plane normal vector of the container side point cloud;
calculating the yaw angle of the lidar according to the plane normal vector of the container side point cloud, the attitude angle including the roll angle, pitch angle and yaw angle.
In one embodiment, converting the three-dimensional point cloud according to the attitude angle includes:
converting the three-dimensional point cloud according to the roll and pitch angles of the lidar, the ground point cloud in the converted point cloud being parallel to the bottom plane of the lidar coordinate system;
converting the converted point cloud according to the yaw angle of the lidar, the container side point cloud in the converted point cloud being parallel to the side plane of the lidar coordinate system.
In one embodiment, projecting the converted three-dimensional point cloud into a two-dimensional image includes:
calculating, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point;
converting the point cloud into pixels according to the two-dimensional coordinates of each point;
binarizing the point-cloud pixels and the non-point-cloud pixels to obtain a binary image;
performing image preprocessing on the binary image to obtain the two-dimensional image.
In one embodiment, determining the position range of the gap between the truck and the container in the two-dimensional image includes:
determining the position range of the gap between the truck and the container in the two-dimensional image according to the height of the lidar, the height to which the container is lifted, and the truck tray height.
In one embodiment, performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result includes:
traversing the rows within the position range of the two-dimensional image and counting the number of point-cloud pixels in each row; if the number of point-cloud pixels in the current row is greater than a first threshold, increasing a counter by a preset value;
after all rows within the position range have been traversed, comparing the statistical value of the counter with a second threshold;
if the statistical value of the counter is greater than the second threshold, obtaining the detection result that the truck has been lifted.
In one embodiment, performing image detection according to the position range to obtain the truck anti-lifting detection result includes:
performing boundary extraction on the two-dimensional image to obtain a boundary line image;
performing straight-line detection on the boundary line image and retaining the straight lines whose slope is within a preset range;
determining the intersection of the straight lines and, if the intersection is within the position range, obtaining the detection result that the truck has been lifted.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the program may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in the present application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical storage. Volatile memory may include random access memory (RAM) or external cache memory. As an illustration and not a limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that, for those of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

  1. A three-dimensional laser-based truck anti-lifting detection method, the method comprising:
    acquiring a three-dimensional point cloud of a container operation collected by a lidar;
    acquiring the attitude angle of the lidar;
    converting the three-dimensional point cloud according to the attitude angle;
    projecting the converted three-dimensional point cloud into a two-dimensional image;
    determining the position range, in the two-dimensional image, of the gap between the truck and the container;
    performing image detection on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
  2. The method according to claim 1, wherein the manner of acquiring the attitude angle of the lidar comprises:
    acquiring a three-dimensional calibration point cloud of a container operation collected by the lidar in a calibration state;
    determining a ground point cloud from the three-dimensional calibration point cloud according to the installation height of the lidar;
    calculating the plane normal vector of the ground point cloud;
    calculating the roll angle and pitch angle of the lidar according to the plane normal vector of the ground point cloud;
    determining a container side point cloud from the three-dimensional point cloud according to the installation height of the lidar, the truck tray height, the container height and the distance from the lidar;
    calculating the plane normal vector of the container side point cloud;
    calculating the yaw angle of the lidar according to the plane normal vector of the container side point cloud, the attitude angle comprising the roll angle, the pitch angle and the yaw angle.
  3. The method according to claim 2, wherein converting the three-dimensional point cloud according to the attitude angle comprises:
    converting the three-dimensional point cloud according to the roll angle and pitch angle of the lidar, the ground point cloud in the converted three-dimensional point cloud being parallel to the bottom plane of the lidar coordinate system;
    converting the converted three-dimensional point cloud according to the yaw angle of the lidar, the container side point cloud in the converted three-dimensional point cloud being parallel to the side plane of the lidar coordinate system.
  4. The method according to claim 1, wherein projecting the converted three-dimensional point cloud into a two-dimensional image comprises:
    calculating, for the converted three-dimensional point cloud, the two-dimensional coordinates of each point;
    converting the point cloud into pixels according to the two-dimensional coordinates of each point;
    binarizing the point-cloud pixels and the non-point-cloud pixels to obtain a binary image;
    performing image preprocessing on the binary image to obtain the two-dimensional image.
  5. The method according to claim 1, wherein determining the position range of the gap between the truck and the container in the two-dimensional image comprises:
    determining the position range of the gap between the truck and the container in the two-dimensional image according to the height of the lidar, the height to which the container is lifted, and the truck tray height.
  6. The method according to claim 1, wherein performing image detection on the position range in the two-dimensional image to obtain the truck anti-lifting detection result comprises:
    traversing the rows within the position range of the two-dimensional image and counting the number of point-cloud pixels in each row;
    if the number of point-cloud pixels in the current row is greater than a first threshold, increasing a counter by a preset value;
    after all rows within the position range have been traversed, comparing the statistical value of the counter with a second threshold;
    if the statistical value of the counter is greater than the second threshold, obtaining a detection result that the truck has been lifted.
  7. The method according to claim 1, wherein performing image detection according to the position range to obtain the truck anti-lifting detection result comprises:
    performing boundary extraction on the two-dimensional image to obtain a boundary line image;
    performing straight-line detection on the boundary line image and retaining straight lines whose slope is within a preset range;
    determining the intersection of the straight lines and, if the intersection is within the position range, obtaining a detection result that the truck has been lifted.
  8. A three-dimensional laser-based truck anti-lifting detection device, comprising:
    a point cloud acquisition module, used to acquire a three-dimensional point cloud of a container operation collected by a lidar;
    an attitude angle acquisition module, used to acquire the attitude angle of the lidar;
    a conversion module, used to convert the three-dimensional point cloud according to the attitude angle;
    a projection module, used to project the converted three-dimensional point cloud into a two-dimensional image;
    a gap position determination module, used to determine the position range, in the two-dimensional image, of the gap between the truck and the container;
    a detection module, used to perform image detection on the position range in the two-dimensional image to obtain a truck anti-lifting detection result.
  9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program implements the steps of the method of any one of claims 1 to 7 when executed by a processor.
PCT/CN2021/079043 2020-03-09 2021-03-04 Three-dimensional laser-based truck anti-lifting detection method and device, and computer equipment WO2021179983A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010157795.2 2020-03-09
CN202010157795.2A CN113376651B (zh) Three-dimensional laser-based truck anti-lifting detection method and device, and computer equipment

Publications (1)

Publication Number Publication Date
WO2021179983A1 true WO2021179983A1 (zh) 2021-09-16

Family

ID=77568479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079043 WO2021179983A1 (zh) 2021-03-04 Three-dimensional laser-based truck anti-lifting detection method and device, and computer equipment

Country Status (2)

Country Link
CN (1) CN113376651B (zh)
WO (1) WO2021179983A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066847A (zh) * 2021-11-16 2022-02-18 Beijing Guotai Xingyun Technology Co., Ltd. Truck lifting state detection method based on fusion of 2D laser and image data
CN115817538A (zh) * 2023-01-09 2023-03-21 Hangzhou Fabu Technology Co., Ltd. Control method, apparatus, device and medium for an unmanned container truck
CN115937069A (zh) * 2022-03-24 2023-04-07 Beijing Xiaomi Mobile Software Co., Ltd. Part detection method and apparatus, electronic device and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114647011B (zh) * 2022-02-28 2024-02-02 Sany Marine Heavy Industry Co., Ltd. Truck anti-lifting monitoring method, apparatus and system
CN115077385B (zh) * 2022-07-05 2023-09-26 Beijing Sinian Zhijia Technology Co., Ltd. Container pose measurement method and measurement system for unmanned container trucks
CN115641553B (zh) * 2022-12-26 2023-03-10 Taiyuan University of Technology Online detection device and method for objects intruding into the working environment of a roadheader

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006160402A (ja) * 2004-12-03 2006-06-22 Mitsui Eng & Shipbuild Co Ltd Chassis position detection device for a container crane
CN203976241U (zh) * 2014-06-12 2014-12-03 Shanghai Hailei Laser Technology Co., Ltd. Tyre-crane travel positioning and correction and truck alignment anti-lifting system
US20150191333A1 (en) * 2011-05-10 2015-07-09 Cargotec Finland Oy System for determination of a container's position in a vehicle and/or its trailer to be loaded with containers
CN105606023A (zh) * 2015-12-18 2016-05-25 Wuhan Wanji Information Technology Co., Ltd. Vehicle outline dimension measurement method and system
CN207293963U (zh) * 2017-08-03 2018-05-01 Nantong Tonglei Software Co., Ltd. Automatic container-landing and anti-lifting system for automated loading and unloading operations
CN109384151A (zh) * 2017-08-03 2019-02-26 Nantong Tonglei Software Co., Ltd. Automatic container-landing and anti-lifting method for automated loading and unloading operations
CN109752726A (zh) * 2019-01-23 2019-05-14 Shanghai Maritime University Container attitude detection device and method
CN110431101A (zh) * 2017-03-16 2019-11-08 Konecranes Global Corporation Monitoring of a container transfer device when lowering a container onto, or lifting it away from, a transport platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109959352A (zh) * 2019-03-01 2019-07-02 Wuhan Kotei Technology Co., Ltd. Method and system for calculating the angle between a truck tractor and its trailer using laser point clouds


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066847A (zh) * 2021-11-16 2022-02-18 Beijing Guotai Xingyun Technology Co., Ltd. Truck lifting state detection method based on fusion of 2D laser and image data
CN115937069A (zh) * 2022-03-24 2023-04-07 Beijing Xiaomi Mobile Software Co., Ltd. Part detection method and apparatus, electronic device and storage medium
CN115937069B (zh) * 2022-03-24 2023-09-19 Beijing Xiaomi Mobile Software Co., Ltd. Part detection method and apparatus, electronic device and storage medium
CN115817538A (zh) * 2023-01-09 2023-03-21 Hangzhou Fabu Technology Co., Ltd. Control method, apparatus, device and medium for an unmanned container truck
CN115817538B (zh) * 2023-01-09 2023-05-16 Hangzhou Fabu Technology Co., Ltd. Control method, apparatus, device and medium for an unmanned container truck

Also Published As

Publication number Publication date
CN113376651A (zh) 2021-09-10
CN113376651B (zh) 2022-11-29

Similar Documents

Publication Publication Date Title
WO2021179983A1 (zh) Three-dimensional laser-based truck anti-lifting detection method and device, and computer equipment
CN113376654B (zh) Three-dimensional laser-based truck anti-smashing detection method and device, and computer equipment
US20220270293A1 (en) Calibration for sensor
CN107463918B (zh) Lane line extraction method based on fusion of laser point cloud and image data
WO2021197345A1 (zh) Lidar-based method and device for detecting the remaining volume in an enclosed space
CN110826499A (zh) Object spatial parameter detection method and apparatus, electronic device and storage medium
CN111508027B (zh) Method and apparatus for calibrating camera extrinsic parameters
CN109410264B (zh) Forward vehicle distance measurement method based on fusion of laser point cloud and image
US20160343143A1 (en) Edge detection apparatus, edge detection method, and computer readable medium
US20220270294A1 (en) Calibration methods, apparatuses, systems and devices for image acquisition device, and storage media
CN112597846B (zh) Lane line detection method and apparatus, computer equipment and storage medium
WO2021000948A1 (zh) Counterweight weight detection method and system, acquisition method and system, and crane
CN113205604A (zh) Feasible region detection method based on camera and lidar
KR20180098945A (ko) Method and apparatus for detecting vehicle speed using a fixed single camera
CN114862929A (zh) Three-dimensional target detection method and apparatus, computer-readable storage medium and robot
CN111179184B (zh) Method for extracting the effective region of fisheye images based on random sample consensus
CN112116644B (zh) Vision-based obstacle detection method and apparatus, and obstacle distance calculation method and apparatus
JP6844223B2 (ja) Information processing device, imaging device, device control system, information processing method and program
JP7315216B2 (ja) Corrected distance calculation device, corrected distance calculation program and corrected distance calculation method
US10970592B2 (en) Adhering substance detection apparatus and adhering substance detection method
CN113450335B (zh) Road edge detection method, road edge detection apparatus and road construction vehicle
JP6492603B2 (ja) Image processing device, system, image processing method and program
CN114581753A (zh) Occupancy-grid-based method, system and device for completing negative obstacles in blind areas
CN113744200A (zh) Camera contamination detection method, apparatus and device
CN112364693A (zh) Binocular-vision-based obstacle recognition method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21767769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21767769

Country of ref document: EP

Kind code of ref document: A1