CN114314345A - Intelligent sensing system of bridge crane and working method thereof

Info

Publication number
CN114314345A
CN114314345A (application CN202111630303.8A)
Authority
CN
China
Prior art keywords
crane; target; point cloud; data; cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111630303.8A
Other languages
Chinese (zh)
Inventor
刘财喜
马开辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baigong Huizhi Shanghai Industrial Technology Co ltd
Original Assignee
Baigong Huizhi Shanghai Industrial Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baigong Huizhi Shanghai Industrial Technology Co ltd
Priority to CN202111630303.8A
Publication of CN114314345A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an intelligent sensing system of a bridge crane and a working method thereof. The system comprises a sensor subsystem and a data processing subsystem. The sensor subsystem comprises a displacement sensor for acquiring position height data of the lifting sling, a first image acquisition sensor for acquiring first image point cloud data of the environment above the crane, and a second image acquisition sensor for acquiring second image point cloud data of the environment below the crane. The data processing subsystem comprises a perception positioning module for calculating the position information of the crane, a perception identification module for performing target identification on target objects below the crane, and a perception fusion module for calculating the position information of the target objects. With the position information of a target object, the invention can avoid obstacles automatically and thus prevent accidents; the hardware is simple to install, and the sensors are not worn during use, which prolongs their service life.

Description

Intelligent sensing system of bridge crane and working method thereof
Technical Field
The invention belongs to the technical field of automatic detection, and particularly relates to an intelligent sensing system of a bridge crane and a working method thereof.
Background
The bridge crane is a widely used industrial lifting and hoisting device. As a special type of large-scale machinery, it is applied in a variety of industrial environments, but its operating efficiency and degree of automation remain low, and manual operation is still required. With the development of technology, unmanned crane control has become the future trend; during unmanned operation, the crane must be positioned accurately and must also be able to intelligently sense the surrounding environment.
In the prior art, crane positioning is mainly realized with Gray bus lines, barcode scales and similar means, but these approaches suffer from high installation difficulty, long implementation periods and high equipment costs. There are also methods that detect the crane's operating position through sensors installed on the crane running track; however, in some working scenarios, for example when the track beam is unbalanced, crane operation wears the sensors until the positioning requirement can no longer be met, leaving them essentially in a failure state. In addition, existing bridge cranes can neither sense the surrounding environment nor avoid obstacles automatically, so during unmanned operation the crane easily collides with the goods below it or with other equipment, causing accidents.
Disclosure of Invention
The invention aims to provide an intelligent sensing system of a bridge crane and a working method thereof, so as to solve at least one of the technical problems in the prior art.
To achieve this purpose, the invention adopts the following technical solutions:
in a first aspect, the invention provides an intelligent sensing system of a bridge crane, which comprises a sensor subsystem and a data processing subsystem;
the sensor subsystem comprises a displacement sensor arranged on the lifting sling of the crane, for acquiring position height data of the lifting sling; a first image acquisition sensor arranged above the crane trolley, for acquiring first image point cloud data of the environment above the crane; and a second image acquisition sensor arranged below the crane trolley, for acquiring second image point cloud data of the environment below the crane;
the data processing subsystem comprises a perception positioning module, a perception identification module and a perception fusion module; the perception positioning module calculates the position information of the crane from the position height data and the first image point cloud data based on a positioning algorithm; the perception identification module performs target identification on target objects below the crane based on a target detection algorithm according to the second image point cloud data; and the perception fusion module performs feature fusion of the position information of the crane with the target identification result of the target object and calculates the position information of the target object.
In one possible design, the first image acquisition sensor and the second image acquisition sensor each include one of, or a combination of, a two-dimensional lidar, a three-dimensional lidar, a monocular camera, a binocular camera and a depth camera; the displacement sensor includes an absolute displacement encoder.
In one possible design, the sensor subsystem comprises a first absolute displacement encoder arranged on the lifting sling, a first three-dimensional lidar arranged vertically on the trolley, and two second three-dimensional lidars arranged on two opposite sides of the bottom of the trolley;
the horizontal field angle of the first three-dimensional lidar is 360 degrees and its vertical field angle α1 is not less than 25 degrees, while the vertical field angle α2 and the horizontal field angle β1 of the second three-dimensional lidars are each not less than 60 degrees.
In one possible design, the sensor subsystem comprises a second absolute displacement encoder arranged on the lifting sling, a third three-dimensional lidar arranged vertically on the trolley, and two first binocular cameras arranged on a first pair of opposite sides of the bottom of the trolley;
the horizontal field angle of the third three-dimensional lidar is 360 degrees and its vertical field angle α3 is not less than 25 degrees, while the vertical field angle α4 and the horizontal field angle β2 of the first binocular cameras are each not less than 60 degrees.
In one possible design, the sensor subsystem further comprises two fourth three-dimensional lidars arranged on a second pair of opposite sides of the bottom of the trolley;
wherein the vertical field angle α5 and the horizontal field angle β3 of the fourth three-dimensional lidars are each not less than 60 degrees.
In a second aspect, the invention provides a working method of the intelligent sensing system of a bridge crane as set forth in any one of the possible designs of the first aspect, including:
acquiring first image point cloud data of an environment above a crane, second image point cloud data of an environment below the crane and position height data of a lifting sling;
calculating to obtain the position information of the crane based on a positioning algorithm according to the first image point cloud data and the position height data;
performing target identification on a target object below the crane based on a target detection algorithm according to the second image point cloud data;
and performing feature fusion of the position information of the crane with the target identification result of the target object, and calculating the position information of the target object.
In one possible design, calculating the position information of the crane based on a positioning algorithm according to the first image point cloud data and the position height data includes:
when the crane cart and trolley move, positioning the cart and trolley in real time based on a SLAM positioning algorithm according to the acquired first image point cloud data, and calculating the coordinate position information of the cart and trolley in a grid map;
and when the lifting sling of the crane moves up or down, directly obtaining the position height information of the lifting sling from the acquired position height data.
In one possible design, the target identification of the target object under the crane based on the target detection algorithm according to the second image point cloud data includes:
based on a 3D point cloud target detection algorithm, respectively and independently extracting first characteristic data from second image point cloud data acquired by each second image acquisition sensor, and respectively carrying out target identification on each first characteristic data to obtain respective initial target identification results;
and performing multi-modal recognition fusion on all the initial target identification results, and obtaining a first target identification result through a recognition decision.
In one possible design, after performing target identification on each piece of first feature data to obtain the respective initial target identification results, the method further includes:
performing data fusion on the second image point cloud data acquired by all the second image acquisition sensors, extracting second feature data from the fused point cloud data, and performing target identification on the second feature data to obtain a target fusion identification result;
and performing hybrid multi-modal recognition fusion on the initial target identification results and the target fusion identification result, and obtaining a second target identification result through a recognition decision.
In one possible design, performing feature fusion of the position information of the crane with the target identification result of the target object and calculating the position information of the target object includes:
performing feature fusion of the coordinate position information of the cart and the trolley and the position height information of the lifting sling with the first target identification result or the second target identification result, and calculating the position information of the target object in the grid map.
Beneficial effects:
According to the invention, image acquisition sensors arranged on the crane acquire image point cloud data above and below the crane respectively, and a displacement sensor acquires the position height data of the lifting sling. The perception positioning module then positions the crane in real time, the perception identification module performs target identification on target objects below the crane, and the perception fusion module performs feature fusion of the crane position information with the target identification result and calculates the position information of the target object. The crane can therefore avoid obstacles automatically according to the position information of the target object, preventing accidents; moreover, the hardware is simple to install, the sensors are not worn during use, and their service life is prolonged.
Drawings
Fig. 1 is a block diagram of the intelligent sensing system of the bridge crane in this embodiment;
FIG. 2 is a schematic view of a first sensor installation of this embodiment;
FIG. 3 is a schematic view of a second sensor installation of this embodiment;
FIG. 4 is a schematic view of a third sensor installation of this embodiment;
fig. 5 is a flowchart of the intelligent sensing method for the bridge crane in this embodiment.
Reference numerals: 1 - trolley; 2 - cart; 3 - lifting sling.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments in the present description fall within the protection scope of the present invention.
Embodiments
In order to solve the prior-art problems of high installation difficulty, long implementation periods, high equipment cost and the inability of a crane to sense its surroundings, this embodiment provides an intelligent sensing system of a bridge crane and a working method thereof. Image acquisition sensors arranged on the crane acquire image point cloud data above and below the crane respectively, and a displacement sensor acquires the position height data of the lifting sling 3. The perception positioning module then positions the crane in real time, the perception identification module performs target identification on target objects below the crane, and the perception fusion module performs feature fusion of the crane position information with the target identification result and calculates the position information of the target object, so that the crane can avoid obstacles automatically according to the position information of the target object and accidents are prevented. The hardware is simple to install, the sensors are not worn during use, and their service life is prolonged.
As shown in fig. 1-4, in a first aspect, the present invention provides an intelligent sensing system for a bridge crane, including a sensor subsystem and a data processing subsystem;
the sensor subsystem comprises a displacement sensor arranged on the lifting sling 3 of the crane, for acquiring position height data of the lifting sling 3; a first image acquisition sensor arranged above the crane trolley 1, for acquiring first image point cloud data of the environment above the crane; and a second image acquisition sensor arranged below the crane trolley 1, for acquiring second image point cloud data of the environment below the crane;
the data processing subsystem comprises a perception positioning module, a perception identification module and a perception fusion module; the perception positioning module calculates the position information of the crane from the position height data and the first image point cloud data based on a positioning algorithm; the perception identification module performs target identification on target objects below the crane based on a target detection algorithm according to the second image point cloud data; and the perception fusion module performs feature fusion of the position information of the crane with the target identification result of the target object and calculates the position information of the target object.
It should be noted that the numbers of first image acquisition sensors, second image acquisition sensors and displacement sensors may be configured according to the actual industrial environment of the crane and are not specifically limited.
Based on the above disclosure, in this embodiment the image acquisition sensors arranged on the crane acquire image point cloud data above and below the crane respectively, and the displacement sensor acquires the position height data of the lifting sling 3. The perception positioning module positions the crane in real time, the perception identification module identifies target objects below the crane, and the perception fusion module fuses the crane position information with the target identification result and calculates the position information of the target object, so that obstacles can be avoided automatically and accidents prevented. The hardware is simple to install, the sensors are not worn during use, and their service life is prolonged.
In a specific embodiment, the first image acquisition sensor and the second image acquisition sensor each include one of, or a combination of, a two-dimensional lidar, a three-dimensional lidar, a monocular camera, a binocular camera and a depth camera; the displacement sensor includes an absolute displacement encoder. Of course, it is understood that the image acquisition sensors of this embodiment are not limited to the sensors listed above, and the displacement sensor is likewise not limited to an absolute displacement encoder; the possibilities are not enumerated here.
In a specific embodiment, as shown in fig. 2, the sensor subsystem comprises a first absolute displacement encoder arranged on the lifting sling 3, a first three-dimensional lidar arranged vertically on the trolley 1, and two second three-dimensional lidars arranged on two opposite sides of the bottom of the trolley 1;
the horizontal field angle of the first three-dimensional lidar is 360 degrees and its vertical field angle α1 is not less than 25 degrees, while the vertical field angle α2 and the horizontal field angle β1 of the second three-dimensional lidars are each not less than 60 degrees.
It should be noted that, with a 360-degree horizontal field angle and a vertical field angle α1 of not less than 25 degrees, the first three-dimensional lidar can scan a very wide area of the environment above the crane; arranged vertically on the trolley 1, it scans the environment above the crane in real time while the cart 2 and the trolley 1 move, yielding three-dimensional point cloud data of that environment. The two second three-dimensional lidars are installed at the front and rear sides, or the left and right sides, of the bottom of the trolley 1; the specific installation angles may be adjusted to the actual industrial scene and are not limited here. Setting the vertical field angle α2 and the horizontal field angle β1 to not less than 60 degrees ensures that the two second three-dimensional lidars have large scanning fields of view, so that a wide area below the crane can be detected and identified; a single second three-dimensional lidar covers a scanning range of at least 8 × 8 meters.
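As a quick plausibility check of the 8 × 8 m figure, the width of the ground area seen by a downward-looking sensor with field angle θ mounted at height h is 2·h·tan(θ/2). A minimal sketch in Python (the 7 m mounting height is an assumed value for illustration, not taken from the patent):

```python
import math

def footprint_width_m(mount_height_m: float, field_angle_deg: float) -> float:
    """Width of the ground area seen by a downward-looking sensor."""
    half_angle = math.radians(field_angle_deg / 2.0)
    return 2.0 * mount_height_m * math.tan(half_angle)

# With the 60-degree minimum field angle at an assumed 7 m mounting height:
print(footprint_width_m(7.0, 60.0))  # ≈ 8.08 m, consistent with 8 × 8 m coverage
```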
Preferably, the first absolute displacement encoder is arranged on the motor of the lifting sling 3 and can directly measure the position height of the lifting sling 3.
In a specific embodiment, as shown in fig. 3, the sensor subsystem comprises a second absolute displacement encoder arranged on the lifting sling 3, a third three-dimensional lidar arranged vertically on the trolley 1, and two first binocular cameras arranged on a first pair of opposite sides of the bottom of the trolley 1;
the horizontal field angle of the third three-dimensional lidar is 360 degrees and its vertical field angle α3 is not less than 25 degrees, while the vertical field angle α4 and the horizontal field angle β2 of the first binocular cameras are each not less than 60 degrees.
It should be noted that, with a 360-degree horizontal field angle and a vertical field angle α3 of not less than 25 degrees, the third three-dimensional lidar can scan a very wide area of the environment above the crane; arranged vertically on the trolley 1, it scans the environment above the crane in real time while the cart 2 and the trolley 1 move, yielding three-dimensional point cloud data of that environment. The two first binocular cameras are installed at the front and rear sides, or the left and right sides, of the bottom of the trolley 1; the specific installation angles may be adjusted to the actual industrial scene and are not limited here. Setting the vertical field angle α4 and the horizontal field angle β2 to not less than 60 degrees ensures that the two first binocular cameras have large scanning fields of view, so that a wide area below the crane can be detected and identified. Preferably, a single binocular camera covers a scanning range of at least 8 × 8 meters, and the depth or distance detected by the binocular camera is not less than 20 meters.
In a specific embodiment, as shown in fig. 4, the sensor subsystem further includes two fourth three-dimensional lidars arranged on a second pair of opposite sides of the bottom of the trolley 1;
wherein the vertical field angle α5 and the horizontal field angle β3 of the fourth three-dimensional lidars are each not less than 60 degrees.
It should be noted that, building on the previous embodiment, two fourth three-dimensional lidars are added at the bottom of the trolley 1; that is, the sensor subsystem in this embodiment includes a second absolute displacement encoder arranged on the lifting sling 3, a third three-dimensional lidar arranged vertically on the trolley 1, two first binocular cameras arranged on a first pair of opposite sides of the bottom of the trolley 1, and two fourth three-dimensional lidars arranged on a second pair of opposite sides of the bottom of the trolley 1. The two fourth three-dimensional lidars are installed at the front and rear sides, or the left and right sides, of the bottom of the trolley 1; the specific installation angles may be adjusted to the actual industrial scene and are not limited here. Setting the vertical field angle α5 and the horizontal field angle β3 to not less than 60 degrees ensures that the two fourth three-dimensional lidars have large scanning fields of view, so that a wide area below the crane can be detected and identified; a single fourth three-dimensional lidar covers at least 8 × 8 meters.
In a specific embodiment, the data processing subsystem is deployed on industrial computer equipment based on a CPU + GPU architecture. Two industrial computers of identical configuration are provided, forming an active-standby working mode (one in use, one on standby). The industrial computer processes, analyzes and computes the data collected by the sensor subsystem and shares the results with external systems, including but not limited to a PLC (Programmable Logic Controller) motion control system, a WCS (Warehouse Control System) and a WMS (Warehouse Management System).
As shown in fig. 5, in a second aspect, the present embodiment provides a working method of the intelligent sensing system of a bridge crane according to any one of the possible designs of the first aspect, including but not limited to the following steps S101 to S104:
S101, acquiring first image point cloud data of the environment above the crane, second image point cloud data of the environment below the crane, and position height data of the lifting sling 3;
Preferably, the first image point cloud data of this embodiment is acquired by the three-dimensional lidar arranged above the trolley 1 and is radar point cloud data; the second image point cloud data is acquired by the three-dimensional lidars or binocular cameras arranged below the trolley 1 and is radar point cloud data or visual point cloud data; the position height data is acquired by the absolute displacement encoder arranged on the lifting sling 3 and is read directly from the encoder.
Step S102, calculating to obtain position information of the crane based on a positioning algorithm according to the first image point cloud data and the position height data;
in a specific implementation manner of step S102, calculating, according to the first image point cloud data and the position height data, position information of the crane based on a positioning algorithm, including:
s1021, when the crane cart 2 and the crane cart 1 move, real-time positioning is carried out on the cart 2 and the crane cart 1 on the basis of a Slam positioning algorithm according to the collected first image point cloud data, and coordinate position information of the cart 2 and the crane cart 1 in a grid map is calculated;
preferably, the Slam positioning algorithm includes, but is not limited to, a Slam algorithm based on graph optimization (such as Cartographer, etc.), and is not limited herein.
Before step S1021, the method further includes:
(1) Adjust the data acquisition precision of the three-dimensional lidar arranged above the trolley 1: scanning the crane's working area multiple times yields three-dimensional point cloud data of the environment and improves the acquisition precision of the radar point cloud data.
(2) Calibrate the relative coordinates between the mounting position of the three-dimensional lidar above the trolley 1 and the center point of the lifting sling 3, and convert coordinates between the two; the conversion can be realized by a coordinate translation (see the sketch after this list).
(3) Register the point cloud data of the three-dimensional lidar above the trolley 1 and match it against the environment above the trolley, thereby calibrating the crane's physical position against the virtual grid map.
(4) Build the virtual grid map of the crane's working area with the SLAM algorithm, then prune the map and divide it into functional areas.
It should be noted that steps (1) to (4) calibrate the system only before the first image point cloud data is acquired for the first time; they need not be repeated for subsequent acquisitions.
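A minimal sketch of the coordinate translation of step (2), assuming the lidar-to-sling offset has already been measured during calibration; the offset values are placeholders, not calibration data from the patent:

```python
import numpy as np

# Position of the lidar origin expressed in the sling-centered frame
# (placeholder values; measured once during the calibration of step (2)).
LIDAR_TO_SLING_T = np.array([0.80, -0.35, 2.10])  # meters

def lidar_to_sling_frame(points_lidar: np.ndarray) -> np.ndarray:
    """Translate an (N, 3) point cloud from the lidar frame into the
    sling-center frame; a pure translation suffices when the lidar axes
    are aligned with the sling frame, as step (2) assumes."""
    return points_lidar + LIDAR_TO_SLING_T
```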
And S1022, when the lifting sling 3 of the crane moves up or down, directly obtaining the position height information of the lifting sling 3 from the acquired position height data.
Before step S1022, the method further includes:
calibrating the initial zero position of the absolute displacement encoder at the hoisting stop limit of the lifting sling 3; the initial zero position is calibrated only before position height data is acquired for the first time, and this step need not be repeated for subsequent acquisitions.
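A minimal sketch of the direct height readout, assuming a simple count-based encoder interface; the counts-per-meter scale and the zero count captured at the stop limit are illustrative values only:

```python
class SlingHeightReader:
    """Convert absolute encoder counts into the sling's height offset
    from the calibrated zero position (the hoisting stop limit)."""

    def __init__(self, zero_count: int, counts_per_meter: float):
        self.zero_count = zero_count              # captured once at the limit
        self.counts_per_meter = counts_per_meter  # from drum and gear geometry

    def height_m(self, raw_count: int) -> float:
        """Distance the sling has travelled below the zero position."""
        return (raw_count - self.zero_count) / self.counts_per_meter

# Illustrative values: zero calibrated at 1000 counts, 4096 counts per meter.
reader = SlingHeightReader(zero_count=1000, counts_per_meter=4096.0)
print(reader.height_m(9192))  # -> 2.0, i.e. the sling is 2 m below the limit
```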
Based on the above disclosure, this embodiment uses the radar point cloud data acquired by the three-dimensional lidar and a SLAM algorithm to perform real-time positioning and map coordinate construction for the environment above the crane, calculating the position information of the cart 2 and the trolley 1, while the position height data acquired by the absolute displacement encoder is read directly. The crane cart 2, trolley 1 and lifting sling 3 are thus positioned in real time, which facilitates monitoring of the crane's working state.
S103, performing target identification on a target object below the crane based on a target detection algorithm according to the second image point cloud data;
preferably, the target detection algorithm adopts a 3D point cloud target detection algorithm, including but not limited to algorithms such as PointNet + +, PointCRNN and Yolo3D, which are not limited herein.
Target identification of the target objects below the crane based on the target detection algorithm yields a target identification result comprising the target object category and local position information, where local position information means the position of the target object within the field of view of a second image acquisition sensor, i.e., the position of the target object relative to that sensor. The second image acquisition sensors include, but are not limited to, the two three-dimensional lidars arranged on opposite sides of the trolley bottom and/or the two binocular cameras arranged on the first pair of opposite sides of the trolley bottom.
Preferably, before step S103, the method further includes:
and (I) performing information space-time synchronization on all sensors (including a radar sensor, a vision sensor and a displacement sensor) and performing calibration of reference time.
And (II) selecting a calibration position coordinate according to the virtual grid map obtained by the sensing and positioning module, operating the crane to the calibration position coordinate, and placing a calibration object on the calibration position.
(III) when a three-dimensional laser radar is arranged above the trolley 1 and two three-dimensional laser radars or two binocular cameras are arranged below the trolley 1, respectively registering point cloud data of the two three-dimensional laser radars or the two binocular cameras below the trolley 1, then carrying out coordinate transformation on a plurality of pieces of point cloud data, and unifying the point cloud data to the same coordinate system; when a three-dimensional laser radar is arranged above the trolley 1 and two three-dimensional laser radars and two binocular cameras are arranged below the trolley 1, the three-dimensional laser radar point cloud data and the binocular camera point cloud data are simultaneously registered, coordinate calibration is carried out according to the relative posture positions of the three-dimensional laser radar and the binocular cameras, and finally the radar point cloud data and the visual point cloud data are unified under a coordinate system.
It should be noted that, the steps (one) to (three) just need to be set before the first time of acquiring the second image data, and the steps (one) to (three) do not need to be operated when the second image data is acquired subsequently.
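A minimal sketch of the coordinate unification of step (III), assuming each sensor's rigid transform (rotation R, translation t) into a common trolley-centered frame has already been found by registration; all numeric extrinsics below are placeholders:

```python
import numpy as np

def to_common_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply a calibrated rigid transform to an (N, 3) point cloud:
    p_common = R @ p_sensor + t."""
    return points @ R.T + t

def unify_clouds(clouds, extrinsics):
    """Merge per-sensor clouds into one cloud in the common frame.
    `extrinsics` is a list of (R, t) pairs, one per sensor."""
    return np.vstack([to_common_frame(c, R, t) for c, (R, t) in zip(clouds, extrinsics)])

# Illustrative: two sensors; identity extrinsics for the first, a pure
# translation for the second (placeholder calibration values).
I = np.eye(3)
clouds = [np.random.rand(100, 3), np.random.rand(100, 3)]
merged = unify_clouds(clouds, [(I, np.zeros(3)), (I, np.array([1.5, 0.0, 0.0]))])
print(merged.shape)  # -> (200, 3)
```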
In a specific implementation of step S103, performing target identification on target objects below the crane based on a target detection algorithm according to the second image point cloud data includes:
S1031, based on a 3D point cloud target detection algorithm, independently extracting first feature data from the second image point cloud data acquired by each second image acquisition sensor, and performing target identification on each piece of first feature data to obtain a respective initial target identification result;
Specifically, when three-dimensional lidars and/or binocular cameras are arranged below the trolley 1, feature data is extracted independently from the single-radar point cloud acquired by each lidar and/or the single-vision point cloud acquired by each camera, and target identification and detection is then performed on each single-radar and/or single-vision point cloud separately, producing multiple initial target identification results;
S1032, performing multi-modal recognition fusion on all the initial target identification results, and obtaining a first target identification result through a recognition decision.
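The patent does not fix a concrete fusion rule, so the following is only one plausible reading of the multi-modal recognition fusion and recognition decision: detections from different sensors are clustered when their centers fall within a distance gate, the class is decided by a confidence-weighted vote, and the position is confidence-averaged. All names and thresholds are illustrative:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    cls: str     # e.g. "cargo", "person", "agv"
    conf: float  # detector confidence in [0, 1]
    xyz: tuple   # target center in the common frame, meters

def fuse_detections(per_sensor, gate_m=0.5):
    """Greedy multi-modal late fusion: cluster detections from all sensors
    whose centers lie within gate_m of a higher-confidence seed, vote the
    class by summed confidence, and confidence-average the position."""
    pool = sorted((d for dets in per_sensor for d in dets), key=lambda d: -d.conf)
    used = [False] * len(pool)
    fused = []
    for i, seed in enumerate(pool):
        if used[i]:
            continue
        used[i] = True
        cluster = [seed]
        for j in range(i + 1, len(pool)):
            if used[j]:
                continue
            dist = sum((a - b) ** 2 for a, b in zip(pool[j].xyz, seed.xyz)) ** 0.5
            if dist <= gate_m:
                used[j] = True
                cluster.append(pool[j])
        votes = defaultdict(float)
        for d in cluster:
            votes[d.cls] += d.conf
        weight = sum(d.conf for d in cluster)
        center = tuple(sum(d.conf * d.xyz[k] for d in cluster) / weight for k in range(3))
        fused.append(Detection(max(votes, key=votes.get), max(d.conf for d in cluster), center))
    return fused

# Example: the same cargo seen by two lidars, plus a person seen by a camera.
result = fuse_detections([
    [Detection("cargo", 0.90, (1.00, 1.00, 0.0))],
    [Detection("cargo", 0.80, (1.10, 1.05, 0.0))],
    [Detection("person", 0.70, (6.00, 2.00, 0.0))],
])
print([(d.cls, d.xyz) for d in result])  # one fused cargo + one person
```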
In order to make the collected point cloud data richer and the subsequent target detection results more accurate, in another specific implementation of step S103, after performing target identification on each piece of first feature data to obtain the respective initial target identification results, the method further includes:
S1031a, performing data fusion on the second image point cloud data acquired by all the second image acquisition sensors, extracting second feature data from the fused point cloud data, and performing target identification on the second feature data to obtain a target fusion identification result;
Specifically, when three-dimensional lidars and/or binocular cameras are arranged below the trolley 1, the single-radar point clouds and/or single-vision point clouds acquired by all of them are fused, and feature data is extracted from the fused point cloud to obtain the target fusion identification result;
S1032a, performing hybrid multi-modal recognition fusion on the initial target identification results and the target fusion identification result, and obtaining a second target identification result through a recognition decision.
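Continuing the sketch above (and reusing its Detection and fuse_detections definitions), the hybrid fusion of S1032a can be read as feeding the fused-cloud detections into the same decision as one more, typically more reliable, source; the up-weighting factor is an assumption:

```python
def hybrid_fuse(per_sensor, fused_cloud_dets, fused_weight=1.5, gate_m=0.5):
    """Hybrid multi-modal fusion: up-weight the detections obtained from
    the fused point cloud, then reuse the late-fusion recognition decision."""
    boosted = [Detection(d.cls, min(1.0, d.conf * fused_weight), d.xyz)
               for d in fused_cloud_dets]
    return fuse_detections(per_sensor + [boosted], gate_m=gate_m)
```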
And S104, performing feature fusion of the position information of the crane with the target identification result of the target object, and calculating the position information of the target object.
In a specific implementation of step S104, this includes:
performing feature fusion of the coordinate position information of the cart 2 and the trolley 1 and the position height information of the lifting sling 3 with the first target identification result or the second target identification result, and calculating the position information of the target object in the grid map.
It should be noted that target objects include static objects, such as fixed equipment or goods, as well as dynamic objects, such as mobile equipment or people; identifying a target object and calculating its position in the grid map enables the crane to avoid it automatically and prevents safety accidents.
It should also be noted that the calculated position of the target object in the grid map corresponds to its global position in the actual physical space, so the crane can plan a correct path according to this global position information and achieve reasonable obstacle avoidance.
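A minimal sketch of the composition performed in S104, assuming the detection offset is expressed in a trolley-centered frame; all numbers are illustrative:

```python
def target_global_position(cart_xy, trolley_offset_xy, sling_height_m, target_local_xyz):
    """Compose the crane's positioning result with a target's offset
    measured relative to the trolley into a global position."""
    gx = cart_xy[0] + trolley_offset_xy[0] + target_local_xyz[0]
    gy = cart_xy[1] + trolley_offset_xy[1] + target_local_xyz[1]
    gz = sling_height_m + target_local_xyz[2]
    return gx, gy, gz

# Illustrative numbers only: cart 40 m along the runway, trolley 6.5 m
# across the bridge, target detected 1.2 m ahead of and 3 m below the trolley.
print(target_global_position((40.0, 0.0), (0.0, 6.5), 2.0, (1.2, 0.3, -3.0)))
# -> (41.2, 6.8, -1.0); the (x, y) pair can then be mapped to a grid-map
#    cell with a world_to_cell() helper like the one sketched earlier.
```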
Beneficial effects:
In this embodiment, image acquisition sensors arranged on the crane acquire image point cloud data above and below the crane respectively, and the displacement sensor acquires the position height data of the lifting sling 3. The perception positioning module positions the crane in real time, the perception identification module performs target identification on target objects below the crane, and the perception fusion module performs feature fusion of the crane position information with the target identification result and calculates the position information of the target object. Obstacles can therefore be avoided automatically according to the position information of the target object, preventing accidents; the hardware is simple to install, the sensors are not worn during use, and their service life is prolonged.
Finally, it should be noted that: the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent sensing system of a bridge crane is characterized by comprising a sensor subsystem and a data processing subsystem;
the sensor subsystem comprises a displacement sensor arranged on the lifting sling of the crane, for acquiring position height data of the lifting sling; a first image acquisition sensor arranged above the crane trolley, for acquiring first image point cloud data of the environment above the crane; and a second image acquisition sensor arranged below the crane trolley, for acquiring second image point cloud data of the environment below the crane;
the data processing subsystem comprises a perception positioning module, a perception identification module and a perception fusion module; the perception positioning module calculates the position information of the crane from the position height data and the first image point cloud data based on a positioning algorithm; the perception identification module performs target identification on target objects below the crane based on a target detection algorithm according to the second image point cloud data; and the perception fusion module performs feature fusion of the position information of the crane with the target identification result of the target object and calculates the position information of the target object.
2. The intelligent sensing system of a bridge crane according to claim 1, wherein the first image acquisition sensor and the second image acquisition sensor each comprise one of, or a combination of, a two-dimensional lidar, a three-dimensional lidar, a monocular camera, a binocular camera and a depth camera; the displacement sensor comprises an absolute displacement encoder.
3. The intelligent sensing system of a bridge crane according to claim 2, wherein the sensor subsystem comprises a first absolute displacement encoder mounted on the lifting sling, a first three-dimensional lidar mounted vertically on the trolley, and two second three-dimensional lidars mounted on two opposite sides of the bottom of the trolley;
the horizontal field angle of the first three-dimensional lidar is 360 degrees and its vertical field angle α1 is not less than 25 degrees, while the vertical field angle α2 and the horizontal field angle β1 of the second three-dimensional lidars are each not less than 60 degrees.
4. The intelligent sensing system of a bridge crane according to claim 2, wherein the sensor subsystem comprises a second absolute displacement encoder mounted on the lifting sling, a third three-dimensional lidar mounted vertically on the trolley, and two first binocular cameras mounted on a first pair of opposite sides of the bottom of the trolley;
the horizontal field angle of the third three-dimensional lidar is 360 degrees and its vertical field angle α3 is not less than 25 degrees, while the vertical field angle α4 and the horizontal field angle β2 of the first binocular cameras are each not less than 60 degrees.
5. The intelligent sensing system of a bridge crane according to claim 4, wherein the sensor subsystem further comprises two fourth three-dimensional lidars mounted on a second pair of opposite sides of the bottom of the trolley;
wherein the vertical field angle α5 and the horizontal field angle β3 of the fourth three-dimensional lidars are each not less than 60 degrees.
6. An operating method of the intelligent sensing system of the bridge crane according to any one of claims 1 to 5, comprising the following steps:
acquiring first image point cloud data of an environment above a crane, second image point cloud data of an environment below the crane and position height data of a lifting sling;
calculating to obtain the position information of the crane based on a positioning algorithm according to the first image point cloud data and the position height data;
performing target identification on a target object below the crane based on a target detection algorithm according to the second image point cloud data;
and performing feature fusion of the position information of the crane with the target identification result of the target object, and calculating the position information of the target object.
7. The working method according to claim 6, wherein the calculating of the position information of the crane based on the positioning algorithm according to the first image point cloud data and the position height data comprises:
when the crane cart and trolley move, positioning the cart and trolley in real time based on a SLAM positioning algorithm according to the acquired first image point cloud data, and calculating the coordinate position information of the cart and trolley in a grid map;
and when the lifting sling of the crane moves upwards or downwards, directly acquiring the position height information of the lifting sling according to the acquired position height data.
8. The working method according to claim 7, wherein the target identification of the target object under the crane based on the target detection algorithm according to the second image point cloud data comprises:
based on a 3D point cloud target detection algorithm, respectively and independently extracting first characteristic data from second image point cloud data acquired by each second image acquisition sensor, and respectively carrying out target identification on each first characteristic data to obtain respective initial target identification results;
and performing multi-modal recognition fusion on all the initial target identification results, and obtaining a first target identification result through a recognition decision.
9. The working method according to claim 8, wherein after performing target identification on each piece of first feature data to obtain the respective initial target identification results, the method further comprises:
performing data fusion on the second image point cloud data acquired by all the second image acquisition sensors, extracting second feature data from the fused point cloud data, and performing target identification on the second feature data to obtain a target fusion identification result;
and performing hybrid multi-modal recognition fusion on the initial target identification results and the target fusion identification result, and obtaining a second target identification result through a recognition decision.
10. The working method according to claim 9, wherein performing feature fusion of the position information of the crane with the target identification result of the target object and calculating the position information of the target object comprises:
performing feature fusion of the coordinate position information of the cart and the trolley and the position height information of the lifting sling with the first target identification result or the second target identification result, and calculating the position information of the target object in the grid map.
CN202111630303.8A (filed 2021-12-28, priority 2021-12-28) — Intelligent sensing system of bridge crane and working method thereof — Pending — published as CN114314345A

Priority Applications (1)

CN202111630303.8A (priority and filing date 2021-12-28) — Intelligent sensing system of bridge crane and working method thereof


Publications (1)

CN114314345A — published 2022-04-12

Family

ID=81015835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111630303.8A Pending CN114314345A (en) 2021-12-28 2021-12-28 Intelligent sensing system of bridge crane and working method thereof

Country Status (1)

Country Link
CN (1) CN114314345A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782483A (en) * 2022-06-17 2022-07-22 广州港数据科技有限公司 Intelligent tallying tracking method and system for quayside crane
CN114782483B (en) * 2022-06-17 2022-09-16 广州港数据科技有限公司 Intelligent tallying tracking method and system for quayside crane


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination