CN114634112A - Personnel collision avoidance system for a crane hoisting area based on AI vision and UWB technology - Google Patents

Personnel collision avoidance system for a crane hoisting area based on AI vision and UWB technology

Info

Publication number
CN114634112A
Authority
CN
China
Prior art keywords
crane
unit
alarm
personnel
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210293776.1A
Other languages
Chinese (zh)
Inventor
杨宁
李保平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weichuang Safety Technology Co ltd
Original Assignee
Shenzhen Weichuang Safety Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weichuang Safety Technology Co ltd
Priority to CN202210293776.1A
Publication of CN114634112A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66C: CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 15/00: Safety gear
    • B66C 15/04: Safety gear for preventing collisions, e.g. between cranes or trolleys operating on the same track
    • B66C 15/045: Safety gear for preventing collisions, e.g. between cranes or trolleys operating on the same track, electrical
    • B66C 13/00: Other constructional features or details
    • B66C 13/16: Applications of indicating, registering, or weighing devices
    • B66C 13/18: Control systems or devices
    • B66C 13/46: Position indicators for suspended loads or for crane elements
    • B66C 15/06: Arrangements or use of warning devices
    • B66C 15/065: Arrangements or use of warning devices, electrical
    • B66C 23/00: Cranes comprising essentially a beam, boom, or triangular structure acting as a cantilever and mounted for translatory or swinging movements in vertical or horizontal planes or a combination of such movements, e.g. jib-cranes, derricks, tower cranes
    • B66C 23/62: Constructional features or details
    • B66C 23/64: Jibs

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Control And Safety Of Cranes (AREA)

Abstract

The invention discloses a personnel anti-collision system for a crane lifting area based on AI vision and UWB technology, relates to the technical field of cranes, and addresses the problems in the prior art that workers frequently move through the lifting area, that workers are small targets easily occluded, and that a worker struck by a heavy object being lifted can suffer fatal injury. The system comprises an AI visual identification and UWB positioning module and an anti-collision control module, wherein the AI visual identification and UWB positioning module is used for identifying people and objects in the lifting area and detecting the positions of the crane boom and of personnel. In this system, the crane boom camera moves together with the crane boom, and a crane boom UWB probe is mounted on the crane boom, so that both the position of the crane boom and the environmental changes around it can be known, reducing the risk of collision.

Description

Personnel collision avoidance system for a crane hoisting area based on AI vision and UWB technology
Technical Field
The invention relates to the field of intelligent equipment, and in particular to a personnel collision avoidance system for a crane lifting area based on AI vision and UWB technology.
Background
Artificial intelligence is a branch of computer science and an emerging interdisciplinary field drawing on computer science, psychology, philosophy and other disciplines. It researches and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence, attempts to understand the essence of intelligence, and aims to produce intelligent machines that can react in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems. During crane hoisting operations, when an operator notices personnel or articles at risk around the hoisted object, the operator can press the crane's emergency stop button to stop the operation. However, this conventional mode of operation depends heavily on the operator's state, the right moment for an emergency stop is difficult to judge, the safety risk is high, an obstacle avoidance scheme cannot be planned in advance, and intelligent control cannot be achieved.
Chinese patent CN106495005B discloses a crane anti-collision detection control system for the interactive operation of an overhead bridge crane operated by a person and an unmanned, automatically operated gantry crane on the ground; the upper bridge crane is manually operated and its movement follows the random movement of a ground dispatcher. The system uses a laser scanner, trolley safety limit switches and several PLC control units working together. The laser scanner profiles objects in the scanned operating area to set early-warning deceleration and anti-collision alarm profiles for the scanned objects; the upper crane's hoisting and trolley safety signals are linked to realize active detection and active anti-collision control of the upper crane, while the lower crane is controlled to passively avoid collisions or stop during autonomous automatic operation. A central control room comprehensively monitors the anti-collision control and can intervene in emergencies, giving higher safety and a control method better suited to field process requirements.
Although that application solves the problems in the background art to some extent, it still has the following shortcomings: 1. the system does not perform anti-collision detection for personnel in the lifting area; it mainly detects collisions between cranes operating on two levels and cannot intelligently judge the positions of personnel; 2. workers frequently move through the lifting area, worker targets are small and easily occluded, and a worker struck by a heavy object being lifted can suffer fatal injury; the obstacle avoidance scheme cannot be intelligently optimized, and obstacle avoidance efficiency is low.
Disclosure of Invention
The invention aims to provide a personnel collision avoidance system for a crane lifting area based on AI vision and UWB technology. The system is provided with an AI visual identification and UWB positioning module and an anti-collision control module. The AI visual identification and UWB positioning module uses an artificial intelligence terminal to read camera image data in real time and performs inference on the images with a hardware-accelerated deep learning network model to detect people or objects in the images and give their initial coordinate positions and area ranges; the positioning unit collects the position information of the crane boom and of personnel, and the distance to a person or object in an image is calculated by applying a perspective transformation to a previously calibrated area. By combining AI visual identification with UWB positioning, whether a person is at risk of being struck can be judged visually, and a person occluded by objects can still be observed through UWB positioning, so that collisions with personnel are avoided. The crane boom camera moves together with the crane boom and captures the environmental changes along the boom's path, and the crane boom UWB probe mounted on the boom provides the boom's position and the environmental changes around it, so that the positions of personnel are obtained at the earliest moment and the risk of collision is reduced. The system is also provided with a lifting area camera and a working environment camera, which capture the flow of personnel in the environment so that preventive action can be taken before personnel approach the crane boom, providing an anti-collision protective effect and thereby solving the problems raised in the background art above.
In order to achieve the above purpose, the invention provides the following technical scheme: the system comprises an AI visual identification and UWB positioning module and an anti-collision control module, wherein the AI visual identification and UWB positioning module is used for identifying people and objects in the lifting area, detecting the position of the crane boom and the positions of personnel, completing the construction of a dynamic three-dimensional model of the lifting space, detecting moving obstacles, calculating the position information, scale information and operation information of the load and the obstacles online, and predicting collisions;
the anti-collision control module is used for acquiring prediction information about possible collisions, controlling the starting, stopping and speed adjustment of the crane, giving timely early warning of possible collisions and notifying personnel to evacuate; the anti-collision control module comprises a controller, a frequency converter and an acousto-optic alarm unit, and the controller is electrically connected with the frequency converter and the acousto-optic alarm unit respectively.
Preferably, the operation process of the AI visual identification and UWB positioning module includes the following steps:
s11: the acquisition unit acquires image information of the lifting area around the crane boom and of the whole working environment; the artificial intelligence terminal reads the camera image data in real time, performs inference on the images with a hardware-accelerated deep learning network model, detects people or objects present in the images, and gives their initial coordinate positions and area ranges;
s12: the positioning unit collects the position information of the crane boom and of personnel, a perspective transformation is applied to the previously calibrated area, and the distance to the person or object in the image is calculated;
s13: the modeling unit obtains the three-dimensional coordinates of the load, the dynamic obstacles and the static obstacles respectively according to the coordinate positions and distances of the people or objects, and obtains a dynamic model diagram;
s14: the prediction unit predicts whether the load will collide with a dynamic or static obstacle according to the dynamic model diagram, and obtains a prediction result;
s15: the alarm unit is used for transmitting the prediction information to the anti-collision control module.
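The following is a minimal, illustrative sketch of how steps S11 to S15 could be chained for each camera frame. The function and attribute names (detector, uwb, modeler, predictor, alarm_unit and their methods) are hypothetical placeholders, not identifiers from the patent.

```python
# Illustrative sketch of the S11-S15 pipeline; all names are assumptions.

def process_frame(frame, detector, uwb, modeler, predictor, alarm_unit):
    # S11: AI visual identification gives pixel positions and area ranges
    detections = detector.detect(frame)          # e.g. list of (label, bounding_box)

    # S12: UWB positions for the boom and personnel
    boom_pos = uwb.boom_position()               # (x, y, z) in metres
    person_pos = uwb.personnel_positions()       # {tag_id: (x, y, z)}

    # S13: fuse detections and UWB coordinates into the dynamic 3D model
    model = modeler.update(detections, boom_pos, person_pos)

    # S14: predict whether the load will collide with any dynamic or static obstacle
    prediction = predictor.predict(model)

    # S15: forward the prediction to the anti-collision control module
    alarm_unit.notify(prediction)
    return prediction
```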
Preferably, the acquisition unit comprises a crane boom camera, a hoisting area camera, a working environment camera and an AI visual recognition terminal; the crane boom camera, the hoisting area camera and the working environment camera are all electrically connected with the AI visual recognition terminal. The crane boom camera is installed on the side wall of the crane boom, the hoisting area cameras are arranged at the edge of the hoisting area in two or more groups, each hoisting area camera is provided with a position sensor for sensing the position of the crane boom, the working environment camera covers the whole working area, and the AI visual recognition terminal identifies the position and flow direction of personnel in the images shot by the crane boom camera, the hoisting area cameras and the working environment camera respectively.
Preferably, the workflow of the acquisition unit includes the following steps:
s111: the crane boom camera follows the crane boom as it moves and shoots the whole lifting path, and the AI visual recognition terminal identifies whether personnel or obstacles appear;
s112: when a hoisting area camera senses that the crane boom is approaching, it automatically starts shooting; the AI visual recognition terminal identifies whether personnel enter the lifting area, and if personnel enter the lifting area, the prediction unit and the alarm unit perform prediction and alarm processing;
s113: the working environment camera shoots the whole working environment, and the AI visual recognition terminal identifies the flow of personnel; when personnel approaching the lifting area or moving toward the lifting area are found, the prediction unit and the alarm unit perform prediction and alarm processing.
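As a minimal sketch of the camera activation logic in S112, assuming each hoisting area camera knows its own position and has a fixed sensing radius; the names and the 15 m radius are illustrative assumptions, not values from the patent:

```python
import math

SENSING_RADIUS_M = 15.0  # assumed activation radius for a hoisting area camera

def cameras_to_activate(boom_xy, camera_positions):
    """Return the ids of hoisting area cameras whose sensing range contains the boom."""
    active = []
    for cam_id, (cx, cy) in camera_positions.items():
        if math.hypot(boom_xy[0] - cx, boom_xy[1] - cy) <= SENSING_RADIUS_M:
            active.append(cam_id)
    return active

# Example: boom at (12, 3); two cameras at the edge of the hoisting area
print(cameras_to_activate((12.0, 3.0), {"cam_A": (10.0, 0.0), "cam_B": (40.0, 0.0)}))
```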
Preferably, the positioning unit comprises a crane boom UWB probe, a personnel UWB probe and a UWB base station, and the crane boom UWB probe and the personnel UWB probe are respectively in communication connection with the UWB base station.
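A minimal sketch of how the boom and personnel positions reported by the UWB probes to the base station could be compared to flag persons within a danger radius of the boom. The positioning itself is assumed to be done by the UWB base station; the 5 m radius and all names are illustrative assumptions.

```python
import math

DANGER_RADIUS_M = 5.0  # assumed distance at which a person counts as near the boom

def personnel_near_boom(boom_pos, personnel_positions, radius=DANGER_RADIUS_M):
    """boom_pos: (x, y, z); personnel_positions: {tag_id: (x, y, z)} from the UWB base station."""
    near = {}
    for tag_id, pos in personnel_positions.items():
        distance = math.dist(boom_pos, pos)
        if distance <= radius:
            near[tag_id] = distance
    return near

# Example: the tag 2.3 m from the boom is flagged, the distant one is not
print(personnel_near_boom((0, 0, 10), {"tag_07": (2, 1, 9.5), "tag_12": (20, 5, 0)}))
```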
Preferably, the working process of the anti-collision control module comprises the following steps:
s21: the controller receives the prediction or alarm information transmitted by the prediction unit and the alarm unit; the information is reviewed by a remote commander, and either the remote commander issues an instruction or the controller issues an instruction according to the alarm level;
s22: the controller transmits the instruction to the frequency converter and the acousto-optic alarm unit; the frequency converter controls the operation of the crane motor, and the acousto-optic alarm unit gives an acousto-optic alarm to warn personnel to keep clear.
Preferably, the alarm levels are divided into five levels: when a person approaches the lifting area or keeps moving toward the lifting area and the prediction unit predicts that no collision is possible, the alarm unit gives a first-level alarm; when a person enters the lifting area and the prediction unit predicts that no collision is possible, the alarm unit gives a second-level alarm; when personnel are moving and working within the lifting area and the prediction unit predicts that no collision is possible, the alarm unit gives a third-level alarm; when personnel are moving and working within the lifting area and the prediction unit predicts that a collision is possible, the alarm unit gives a fourth-level alarm; and when a person is under the load of the crane boom or on the path the crane boom must travel, and the prediction unit predicts that a collision is possible, the alarm unit gives a fifth-level alarm.
Preferably, the work flow of the controller comprises the following steps:
s221: after receiving a first-level or second-level alarm, the controller sends information to the acousto-optic alarm unit, and the acousto-optic alarm unit gives an acousto-optic alarm to warn personnel to keep clear;
s222: on receiving a third-level alarm, the controller sends information to the frequency converter and the acousto-optic alarm unit; the frequency converter applies first-stage deceleration to the crane motor, and the acousto-optic alarm unit gives an acousto-optic alarm to warn personnel to keep clear;
s223: on receiving a fourth-level alarm, the controller sends information to the frequency converter and the acousto-optic alarm unit; the frequency converter applies second-stage deceleration to the crane motor, the motor speed after second-stage deceleration being lower than the speed after first-stage deceleration, and the acousto-optic alarm unit gives an acousto-optic alarm to warn personnel to keep clear;
s224: on receiving a fifth-level alarm, the controller sends information to the frequency converter and the acousto-optic alarm unit; the frequency converter stops the crane motor, and the acousto-optic alarm unit gives an acousto-optic alarm to warn personnel to keep clear.
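A minimal sketch of the level-based dispatch in S221 to S224. The enum and interface names are illustrative assumptions, and S224 is read here as the fifth-level response (a full stop of the crane motor):

```python
from enum import IntEnum

class AlarmLevel(IntEnum):
    LEVEL_1 = 1   # person approaching or moving toward the lifting area, no collision predicted
    LEVEL_2 = 2   # person entered the lifting area, no collision predicted
    LEVEL_3 = 3   # personnel working in the lifting area, no collision predicted
    LEVEL_4 = 4   # personnel working in the lifting area, collision predicted
    LEVEL_5 = 5   # person under the load or on the boom's required path, collision predicted

def dispatch(level, inverter, siren):
    """Map an alarm level to frequency converter and acousto-optic alarm actions (S221-S224)."""
    siren.warn(level)                      # every level triggers an acousto-optic warning
    if level == AlarmLevel.LEVEL_3:
        inverter.decelerate(stage=1)       # first-stage deceleration
    elif level == AlarmLevel.LEVEL_4:
        inverter.decelerate(stage=2)       # second-stage deceleration, slower than stage 1
    elif level == AlarmLevel.LEVEL_5:
        inverter.stop()                    # assumed full stop of the crane motor
```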
Preferably, the prediction unit includes:
a coordinate acquisition subunit, configured to determine a three-dimensional coordinate value of the load and a three-dimensional coordinate value of a hook of the crane;
the position processing subunit is used for determining a target angle of the load relative to the vertical direction of the hook based on the three-dimensional coordinate value of the load and the three-dimensional coordinate value of the hook;
the parameter obtaining subunit is used for obtaining the current length value of the steel wire rope connected with the hook and obtaining the weight value of the load;
the position analysis subunit is used for inputting the weight value of the load, the target angle of the load relative to the vertical direction of the hook and the current length value of the steel wire rope connected with the hook into the dynamic model diagram;
the position analyzing subunit is further configured to determine a first position point and a second position point of the motion trajectory in the dynamic model map, where the first position point is a starting position point of the load in the dynamic model map, and the second position point is a position point of the load farthest from the first position point in the dynamic model map;
the track generation subunit is used for acquiring a third position point on one side of a connecting line of the first position point and the second position point, and connecting the first position point, the second position point and the third position point to acquire a first motion track;
the track generation subunit is further configured to perform mirror symmetry processing on the first motion track to obtain a second motion track;
the track generation subunit is further configured to synthesize the first motion track and the second motion track to obtain a target motion track of the load when the crane lifts the load;
a point cloud set confirmation subunit, configured to synthesize the target motion trajectory into a motion region of the load according to a preset trajectory processing method in the dynamic model map, and determine a first point cloud set of the motion region and a second point cloud set of the dynamic obstacle or the static obstacle in the dynamic model map;
the analysis subunit is configured to analyze the first point cloud set and the second point cloud set respectively, and determine a set relationship between the first point cloud set and the second point cloud set;
the prediction subunit is used for determining a prediction result according to the set relationship between the first point cloud set and the second point cloud set;
when the first point cloud set and the second point cloud set intersect, the prediction result is that the load will collide with the static obstacle or the dynamic obstacle;
when the first point cloud set and the second point cloud set do not intersect, the prediction result is that the load will not collide with the static obstacle or the dynamic obstacle;
the obstacle avoidance scheme generating subunit is configured to, when it is predicted that the load and the static obstacle or the dynamic obstacle will collide with each other, obtain an association node between the first point cloud set and the second point cloud set, and formulate an obstacle avoidance scheme according to the association node;
and the prediction report generating subunit is used for generating a prediction report according to the prediction result and the obstacle avoidance scheme, and transmitting the prediction report to the alarm unit.
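A minimal sketch of the set-relationship test performed by the analysis and prediction subunits: if the point cloud of the load's motion region shares any point with the obstacle point cloud, a collision is predicted and the shared points serve as the association nodes for the obstacle avoidance scheme. Snapping to a voxel grid (the 0.1 m resolution is an illustrative assumption) stands in for whatever matching tolerance the real system uses.

```python
VOXEL_M = 0.1  # assumed grid resolution used to compare 3D points

def voxelize(points, voxel=VOXEL_M):
    """Snap (x, y, z) points to a coarse grid so nearby points compare equal."""
    return {(round(x / voxel), round(y / voxel), round(z / voxel)) for x, y, z in points}

def predict_collision(motion_region_points, obstacle_points):
    """Return (collision_predicted, association_nodes) from the two point cloud sets."""
    first_set = voxelize(motion_region_points)    # first point cloud set: load motion region
    second_set = voxelize(obstacle_points)        # second point cloud set: static/dynamic obstacle
    association_nodes = first_set & second_set    # non-empty intersection means a predicted collision
    return bool(association_nodes), association_nodes
```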
Preferably, the obstacle avoidance scheme generating subunit includes:
the scheme reading subunit is used for reading the obstacle avoidance scheme, determining an obstacle avoidance path of the crane and determining a road coordinate point of the obstacle avoidance path;
the model building subunit is used for building a path track model of the obstacle avoidance path based on the adjacent coordinate points of the road coordinate points;
[Path trajectory model M: formula given in the original as an image, Figure BDA0003561216730000071]
wherein M represents the path trajectory model; the smoothness factor of the path trajectory (its symbol appears in the original only as an image) has a value range of (0.98, 0.99); θ represents a turning angle of the obstacle avoidance path; Σθ_j represents the total turning angle of the obstacle avoidance path; J represents the total number of turns encountered in the obstacle avoidance path; θ_j represents the current turning angle in the obstacle avoidance path; (x_{i+1}, y_{i+1}, z_{i+1}) represents the (i+1)-th road coordinate point; (x_i, y_i, z_i) represents the i-th road coordinate point, the (i+1)-th road coordinate point being adjacent to the i-th road coordinate point; Σl_j represents the total path length of the obstacle avoidance path; l_j represents the distance between adjacent road coordinate points in the j-th segment, where j takes values up to the total number of road coordinate points minus 1;
the calculation subunit is configured to calculate a fitness value of the obstacle avoidance scheme based on the path trajectory model;
F = M × (m + 1) × Σ|f_t − f_{t−1}|;
wherein F represents the fitness value of the obstacle avoidance scheme; m represents the weight of the path trajectory model, with a value range of (0, 1]; f_t represents the fitness value of the obstacle avoidance scheme at the current moment; f_{t−1} represents the fitness value of the obstacle avoidance scheme at the previous moment; t represents the current moment; t−1 represents the previous moment;
the comparison subunit is configured to compare the fitness value of the obstacle avoidance scheme with a preset fitness threshold value, and determine whether the obstacle avoidance scheme needs to be optimized;
when the fitness value is equal to or greater than the preset fitness threshold value, judging that the obstacle avoidance scheme does not need to be optimized;
otherwise, judging that the obstacle avoidance scheme needs to be optimized;
and the optimization subunit is configured to, when the obstacle avoidance scheme needs to be optimized, take the difference between the fitness value and the fitness threshold, determine an optimization factor based on the difference, and optimize the obstacle avoidance scheme according to the optimization factor.
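A minimal sketch of the fitness evaluation and the optimization decision described above, assuming the path trajectory model value M has already been computed (its formula is only available as an image in the original). The threshold value, sample values and function names are illustrative assumptions.

```python
def fitness_value(M, m, fitness_history):
    """F = M * (m + 1) * sum(|f_t - f_{t-1}|) over consecutive fitness samples.

    M: path trajectory model value; m: model weight in (0, 1];
    fitness_history: fitness samples of the scheme at successive moments.
    """
    total_change = sum(abs(f_t - f_prev)
                       for f_prev, f_t in zip(fitness_history, fitness_history[1:]))
    return M * (m + 1) * total_change

def needs_optimization(F, threshold):
    """The scheme is kept as-is when F >= threshold, otherwise it must be optimized."""
    return F < threshold

# Example with assumed numbers
F = fitness_value(M=0.95, m=0.6, fitness_history=[0.40, 0.55, 0.52])
print(F, needs_optimization(F, threshold=0.30))
```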
Compared with the prior art, the invention has the beneficial effects that:
1. In the AI vision and UWB based personnel anti-collision system for the crane lifting area, the prior art uses a laser scanner to profile objects in the scanned operating area and to set early-warning deceleration and anti-collision alarm profiles for them, and therefore cannot identify personnel who are occluded; by combining AI visual identification with UWB positioning, the present system can still locate occluded personnel;
2. In the AI vision and UWB based personnel anti-collision system for the crane lifting area, the prior art uses a single identification technology and cannot determine the specific position of personnel. In the present system, the AI visual identification and UWB positioning module uses an artificial intelligence terminal to read camera image data in real time and performs inference on the images with a hardware-accelerated deep learning network model, detecting the people or objects in the images and giving their initial coordinate positions and area ranges; the positioning unit collects the position information of the crane boom and of personnel, and the distance to the person or object in the image is calculated through a perspective transformation of the previously calibrated area. Combining AI visual identification with UWB positioning makes it possible to judge visually whether a person is at risk of being struck and to observe, through UWB positioning, personnel occluded by objects, so that collisions with personnel are avoided;
3. In the AI vision and UWB based personnel anti-collision system for the crane lifting area, the prior art only checks personnel and obstacles already near the crane and gives no early warning for personnel who may enter the lifting area; the present system, with its lifting area camera and working environment camera, monitors personnel flow and warns before personnel approach the crane boom;
4. In the AI vision and UWB based personnel anti-collision system for the crane lifting area, the motion trajectory of the load when it is lifted is determined from the dynamic model diagram, the motion range formed during lifting is determined from that trajectory, and the positional relationship between the motion range and static or dynamic obstacles is determined, so that whether a collision will occur is predicted accurately and a corresponding alarm is given when a collision is predicted, improving the intelligence of collision avoidance while ensuring the safety of personnel in the lifting area;
5. In the AI vision and UWB based personnel anti-collision system for the crane lifting area, when a static or dynamic obstacle cannot be moved, the obstacle avoidance scheme is read to determine the obstacle avoidance path of the crane, a path trajectory model is constructed from it, and the fitness value of the obstacle avoidance scheme is determined from that model, so that whether the scheme needs to be optimized can be evaluated, improving obstacle avoidance efficiency while better ensuring personnel safety.
Drawings
FIG. 1 is an overall block diagram of the present invention;
FIG. 2 is a flow chart of the AI visual identification and UWB positioning module operation of the present invention;
FIG. 3 is a block diagram of an acquisition unit of the present invention;
FIG. 4 is a flow chart of the acquisition unit operation of the present invention;
FIG. 5 is a schematic view of the acquisition unit installation of the present invention;
FIG. 6 is a block diagram of a positioning unit of the present invention;
FIG. 7 is a schematic view of a crane boom construction of the present invention;
FIG. 8 is a flowchart of the crash control module operation of the present invention;
fig. 9 is a flow chart of the controller operation of the present invention.
In the figure: 1. AI visual identification and UWB positioning module; 11. acquisition unit; 111. crane boom camera; 112. lifting area camera; 113. working environment camera; 114. AI visual identification terminal; 12. positioning unit; 121. crane boom UWB probe; 122. personnel UWB probe; 123. UWB base station; 13. modeling unit; 14. prediction unit; 15. alarm unit; 2. anti-collision control module; 21. controller; 22. frequency converter; 23. acousto-optic alarm unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the personnel collision avoidance system of a crane lifting area based on AI vision and UWB technology comprises an AI vision recognition and UWB positioning module 1 and a collision avoidance control module 2, wherein the AI vision recognition and UWB positioning module 1 is used for recognizing people and objects in the lifting area, detecting the position of a crane boom and the position of the people, completing the construction of a dynamic three-dimensional model of a lifting space, detecting moving obstacles, calculating the position information, the dimension information and the operation information of loads and the obstacles on line, and performing collision prediction, the AI vision recognition and UWB positioning module 1 comprises an acquisition unit 11, a positioning unit 12, a modeling unit 13, a prediction unit 14 and an alarm unit 15, and the acquisition unit 11, the positioning unit 12, the modeling unit 13, the prediction unit 14 and the alarm unit 15 are sequentially connected.
Referring to fig. 2 to 7, the operation process of the AI visual identification and UWB positioning module 1 includes the following steps:
s11: the acquisition unit 11 acquires image information of the lifting area around the crane boom and of the whole working environment; the artificial intelligence terminal reads the camera image data in real time, performs inference on the images with a hardware-accelerated deep learning network model, detects people or objects present in the images, and gives their initial coordinate positions and area ranges. The acquisition unit 11 comprises a crane boom camera 111, a lifting area camera 112, a working environment camera 113 and an AI visual identification terminal 114, and the crane boom camera 111, the lifting area camera 112 and the working environment camera 113 are all electrically connected with the AI visual identification terminal 114. The crane boom camera 111 is installed on the side wall of the crane boom and moves together with the boom to capture the environmental changes along the boom's path. The lifting area cameras 112 are arranged at the edge of the lifting area in two or more groups to capture the environment and personnel changes of the whole lifting area, paying close attention to personnel flow within it; each lifting area camera 112 is provided with a position sensor for sensing the position of the crane boom, and when the crane boom moves into the shooting range of a given lifting area camera 112, that camera starts automatically. The full shooting range of the lifting area cameras 112 constitutes the lifting area, which is divided into sub-areas on the dynamic model diagram, and the positions of personnel detected by the positioning unit 12 are highlighted within it. The working environment camera 113 covers the whole working area, shooting the entire working environment and the flow of personnel and paying close attention to personnel moving toward the lifting area. The AI visual identification terminal 114 identifies the position and flow direction of personnel in the images shot by the crane boom camera 111, the lifting area camera 112 and the working environment camera 113 respectively;
the workflow of the acquisition unit 11 comprises the following steps:
s111: the crane boom camera 111 follows the crane boom as it moves and shoots the whole lifting path, and the AI visual identification terminal 114 identifies whether personnel or obstacles appear;
s112: when a lifting area camera 112 senses that the crane boom is approaching, it automatically starts shooting; the AI visual identification terminal 114 identifies whether personnel enter the lifting area, and if personnel enter the lifting area, the prediction unit 14 and the alarm unit 15 perform prediction and alarm processing;
s113: the working environment camera 113 shoots the whole working environment and the AI visual identification terminal 114 identifies the flow of personnel; when personnel approaching or moving toward the lifting area are found, the prediction unit 14 and the alarm unit 15 perform prediction and alarm processing;
s12: the positioning unit 12 collects the position information of the crane boom and of personnel, and the distance to the person or object in the image is calculated through a perspective transformation of the previously calibrated area (a sketch of this transformation follows after these steps). The positioning unit 12 comprises a crane boom UWB probe 121, a personnel UWB probe 122 and a UWB base station 123; the crane boom UWB probe 121 and the personnel UWB probe 122 are respectively in communication connection with the UWB base station 123, and the UWB base station 123 receives the position information of the crane boom UWB probe 121 and the personnel UWB probe 122 and displays it to the remote controller in real time;
s13: the modeling unit 13 obtains the three-dimensional coordinates of the load, the dynamic obstacles and the static obstacles respectively according to the coordinate positions and distances of the people or objects, and obtains a dynamic model diagram;
s14: the prediction unit 14 predicts whether the load will collide with a dynamic or static obstacle according to the dynamic model diagram, and obtains a prediction result;
s15: the alarm unit 15 is used to transmit the prediction information to the collision avoidance control module 2.
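A minimal sketch of the distance calculation via perspective transformation of a previously calibrated area, using OpenCV's homography utilities. The four calibration correspondences and the pixel coordinates are illustrative assumptions; the real system would use its own calibration of the lifting area.

```python
import numpy as np
import cv2

# Calibration: four image pixels and the matching ground-plane points in metres (assumed values)
img_pts = np.float32([[100, 600], [1180, 600], [900, 250], [380, 250]])
gnd_pts = np.float32([[0, 0], [20, 0], [20, 30], [0, 30]])
H = cv2.getPerspectiveTransform(img_pts, gnd_pts)   # image plane -> ground plane

def ground_distance(pixel_a, pixel_b):
    """Map two image points through the homography and return their ground distance in metres."""
    pts = np.float32([pixel_a, pixel_b]).reshape(-1, 1, 2)
    ga, gb = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return float(np.linalg.norm(ga - gb))

# Example: distance between a detected person and the projected boom tip (pixel coordinates assumed)
print(ground_distance((640, 500), (700, 430)))
```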
Referring to fig. 8-9, the collision avoidance control module 2 is configured to obtain predicted information of a possible collision, control start and stop of the crane and speed adjustment, early warn of the possible collision in time, and notify people of evacuation, where the collision avoidance control module 2 includes a controller 21, a frequency converter 22, and an acousto-optic alarm unit 23, and the controller 21 is electrically connected to the frequency converter 22 and the acousto-optic alarm unit 23, respectively;
the working process of the anti-collision control module 2 comprises the following steps:
s21: the controller 21 receives the prediction or alarm information transmitted by the prediction unit 14 and the alarm unit 15; the information is reviewed by a remote commander, and either the remote commander issues an instruction or the controller 21 issues an instruction according to the alarm level;
s22: the controller 21 transmits the instruction to the frequency converter 22 and the acousto-optic alarm unit 23; the frequency converter 22 controls the operation of the crane motor, and the acousto-optic alarm unit 23, which comprises a loudspeaker and an alarm lamp that give a combined alarm, issues an acousto-optic alarm to warn personnel to keep clear.
The alarm levels are divided into five levels: when a person approaches the lifting area or keeps moving toward the lifting area and the prediction unit 14 predicts that no collision is possible, the alarm unit 15 gives a first-level alarm; when a person enters the lifting area and the prediction unit 14 predicts that no collision is possible, the alarm unit 15 gives a second-level alarm; when personnel are moving and working within the lifting area and the prediction unit 14 predicts that no collision is possible, the alarm unit 15 gives a third-level alarm; when personnel are moving and working within the lifting area and the prediction unit 14 predicts that a collision is possible, the alarm unit 15 gives a fourth-level alarm; and when a person is under the load of the crane boom or on the path the crane boom must travel and the prediction unit 14 predicts that a collision is possible, the alarm unit 15 gives a fifth-level alarm.
The workflow of the controller 21 includes the steps of:
s221: after receiving a first-level or second-level alarm, the controller 21 sends information to the acousto-optic alarm unit 23, and the acousto-optic alarm unit 23 gives an acousto-optic alarm to warn personnel to keep clear;
s222: on receiving a third-level alarm, the controller 21 sends information to the frequency converter 22 and the acousto-optic alarm unit 23; the frequency converter 22 applies first-stage deceleration to the crane motor, and the acousto-optic alarm unit 23 gives an acousto-optic alarm to warn personnel to keep clear;
s223: on receiving a fourth-level alarm, the controller 21 sends information to the frequency converter 22 and the acousto-optic alarm unit 23; the frequency converter 22 applies second-stage deceleration to the crane motor, the motor speed after second-stage deceleration being lower than the speed after first-stage deceleration, and the acousto-optic alarm unit 23 gives an acousto-optic alarm to warn personnel to keep clear;
s224: on receiving a fifth-level alarm, the controller 21 sends information to the frequency converter 22 and the acousto-optic alarm unit 23; the frequency converter 22 stops the crane motor, and the acousto-optic alarm unit 23 gives an acousto-optic alarm to warn personnel to keep clear.
In summary: the crane lifting area personnel collision avoidance system based on AI vision and UWB technology is provided with an AI visual identification and UWB positioning module 1 and an anti-collision control module 2. The AI visual identification and UWB positioning module 1 uses an artificial intelligence terminal to read camera image data in real time, performs inference on the images with a hardware-accelerated deep learning network model, detects people or objects in the images and gives their initial coordinate positions and area ranges; the positioning unit 12 collects the position information of the crane boom and of personnel and, through a perspective transformation of the previously calibrated area, calculates the distance to the people or objects in the image. Combining AI visual identification with UWB positioning makes it possible to judge visually whether a person is at risk of being struck and to observe, through UWB positioning, personnel occluded by objects, so that collisions with personnel are avoided. The crane boom camera 111 moves together with the crane boom to capture the environmental changes along the boom's path, and the crane boom UWB probe 121 mounted on the boom provides the boom's position and the environmental changes around it, so that the positions of personnel are obtained at the earliest moment and the risk of collision is reduced. The system is also provided with the lifting area camera 112 and the working environment camera 113, which capture personnel flow in the environment, so that preventive action can be taken before personnel approach the crane boom, providing an anti-collision protective effect.
In the above embodiment, the prediction unit 14 includes:
a coordinate acquisition subunit, configured to determine a three-dimensional coordinate value of the load and a three-dimensional coordinate value of a hook of the crane;
the position processing subunit is used for determining a target angle of the load relative to the vertical direction of the hook based on the three-dimensional coordinate value of the load and the three-dimensional coordinate value of the hook;
the parameter acquisition subunit is used for acquiring the current length value of the steel wire rope connected with the hook and acquiring the weight value of the load;
the position analysis subunit is used for inputting the weight value of the load, the target angle of the load relative to the vertical direction of the hook and the current length value of the steel wire rope connected with the hook into the dynamic model diagram;
the position analyzing subunit is further configured to determine a first position point and a second position point of the motion trajectory in the dynamic model map, where the first position point is a starting position point of the load in the dynamic model map, and the second position point is a position point of the load farthest from the first position point in the dynamic model map;
the track generation subunit is used for acquiring a third position point on one side of a connecting line of the first position point and the second position point, and connecting the first position point, the second position point and the third position point to acquire a first motion track;
the track generation subunit is further configured to perform mirror symmetry processing on the first motion track to obtain a second motion track;
the track generation subunit is further configured to synthesize the first motion track and the second motion track to obtain a target motion track of the load when the crane lifts the load;
a point cloud set confirmation subunit, configured to synthesize the target motion trajectory into a motion region of the load according to a preset trajectory processing method in the dynamic model map, and determine a first point cloud set of the motion region and a second point cloud set of the dynamic obstacle or the static obstacle in the dynamic model map;
the analysis subunit is configured to analyze the first point cloud set and the second point cloud set respectively, and determine a set relationship between the first point cloud set and the second point cloud set;
the prediction subunit is used for determining a prediction result according to the set relationship between the first point cloud set and the second point cloud set;
when the first point cloud set and the second point cloud set intersect, the prediction result is that the load will collide with the static obstacle or the dynamic obstacle;
when the first point cloud set and the second point cloud set do not intersect, the prediction result is that the load will not collide with the static obstacle or the dynamic obstacle;
the obstacle avoidance scheme generating subunit is used for acquiring the associated nodes of the first point cloud set and the second point cloud set when the collision between the load and the static obstacle or the dynamic obstacle is predicted, and formulating an obstacle avoidance scheme according to the associated nodes;
and the prediction report generating subunit is configured to generate a prediction report according to the prediction result and the obstacle avoidance scheme, and transmit the prediction report to the alarm unit 15.
In this embodiment, the three-dimensional coordinate value of the hook may be a coordinate value of the hook in a vertical state in space.
In this embodiment, the target angle may be an included angle formed between the load and the vertical direction of the hook before hoisting.
In this embodiment, the first position point is a starting position point of the load in the dynamic model map, that is, a position of the load before hoisting.
In this embodiment, the second position point is the position point of the load farthest from the first position point in the dynamic model map, that is, the farthest position to which the load can swing relative to its placement position after being hoisted.
In this embodiment, obtaining the third position point on one side of the line connecting the first position point and the second position point works as follows: without considering the arc of the load's swing, the first and second position points are connected by a straight line; because the load inevitably swings when lifted while the load and the hook are not on the same vertical line, a certain swing range is formed along the load's swing trajectory, and determining the third position point on one side of the connecting line, namely the point on that side farthest from the line, facilitates analysis of the swing range formed when the load swings.
In this embodiment, the first motion trajectory may be a side swing trajectory during a swing process when the load is lifted.
In this embodiment, the mirror symmetry processing may be performed such that a straight line connecting the first position point and the second position point is a symmetry axis.
In this embodiment, the second motion trajectory may be the motion trajectory on the other side of the load's swing range obtained by the symmetry processing; by default the load is assumed to form a closed, roughly circular area when swinging.
In this embodiment, the target motion trajectory may be all paths that the load may pass through after the first motion trajectory is connected with the second motion trajectory.
In this embodiment, the preset trajectory processing may be a smoothing processing method.
In this embodiment, the motion region may be a range in which the load is formed during the swing, and may be a circle or an irregular shape.
In this embodiment, the first point cloud set may be a point set composed of all points included in the motion region.
In this embodiment, the second point cloud set may be a set of points where a static obstacle or a dynamic obstacle is located.
In this embodiment, the node associated with the first point cloud set and the second point cloud set may be, for example, a coordinate point where the first point cloud set and the second point cloud set intersect.
In this embodiment, the set relationship includes an intersection or disjointness of the first point cloud set with the second point cloud set.
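A minimal sketch of how the first, second and third position points could be turned into the closed motion region described in this embodiment: the third point is mirrored across the line through the first and second points, and the two resulting trajectories are joined into one closed polygon. All coordinates and the 2D simplification are illustrative assumptions.

```python
import numpy as np

def mirror_across_line(p, a, b):
    """Mirror point p across the line through a and b (2D)."""
    a, b, p = map(np.asarray, (a, b, p))
    d = (b - a) / np.linalg.norm(b - a)
    foot = a + np.dot(p - a, d) * d          # foot of the perpendicular from p onto the line
    return 2 * foot - p

def swing_region(p1, p2, p3):
    """Closed polygon approximating the load's swing region.

    p1: starting position point, p2: farthest position point, p3: farthest point on one
    side of line p1-p2. The first trajectory is p1 -> p3 -> p2; the second is its mirror
    image; together they bound the target motion region.
    """
    p3_mirror = mirror_across_line(p3, p1, p2)
    return [tuple(p1), tuple(p3), tuple(p2), tuple(map(float, p3_mirror))]

# Example with assumed coordinates (metres)
print(swing_region((0.0, 0.0), (8.0, 0.0), (4.0, 2.5)))
```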
The beneficial effects of the above technical scheme are: the method has the advantages that the motion trail of the load when the load is lifted is determined according to the dynamic model diagram, the motion range formed by the load when the load is lifted is determined according to the motion trail, the position relation between the motion range and the static barrier or the dynamic barrier is determined, whether collision occurs or not is accurately predicted, corresponding alarm is given when the collision occurs is predicted, the anti-collision intelligence is improved, and meanwhile the safety of personnel in a lifting area is guaranteed.
In the above embodiment, the obstacle avoidance scheme generating subunit includes:
the scheme reading subunit is used for reading the obstacle avoidance scheme, determining an obstacle avoidance path of the crane and determining a road coordinate point of the obstacle avoidance path;
the model building subunit is used for building a path track model of the obstacle avoidance path based on the adjacent coordinate points of the road coordinate points;
[Path trajectory model M: formula given in the original as an image, Figure BDA0003561216730000171]
wherein M represents the path trajectory model; the smoothness factor of the path trajectory (its symbol appears in the original only as an image) has a value range of (0.98, 0.99); θ represents a turning angle of the obstacle avoidance path; Σθ_j represents the total turning angle of the obstacle avoidance path; J represents the total number of turns encountered in the obstacle avoidance path; θ_j represents the current turning angle in the obstacle avoidance path; (x_{i+1}, y_{i+1}, z_{i+1}) represents the (i+1)-th road coordinate point; (x_i, y_i, z_i) represents the i-th road coordinate point, the (i+1)-th road coordinate point being adjacent to the i-th road coordinate point; Σl_j represents the total path length of the obstacle avoidance path; l_j represents the distance between adjacent road coordinate points in the j-th segment, where j takes values up to the total number of road coordinate points minus 1;
the calculation subunit is configured to calculate a fitness value of the obstacle avoidance scheme based on the path trajectory model;
F = M × (m + 1) × Σ|f_t − f_{t−1}|;
wherein F represents the fitness value of the obstacle avoidance scheme; m represents the weight of the path trajectory model, with a value range of (0, 1]; f_t represents the fitness value of the obstacle avoidance scheme at the current moment; f_{t−1} represents the fitness value of the obstacle avoidance scheme at the previous moment; t represents the current moment; t−1 represents the previous moment;
the comparison subunit is configured to compare the fitness value of the obstacle avoidance scheme with a preset fitness threshold, and determine whether the obstacle avoidance scheme needs to be optimized;
when the fitness value is equal to or greater than the preset fitness threshold value, judging that the obstacle avoidance scheme does not need to be optimized;
otherwise, judging that the obstacle avoidance scheme needs to be optimized;
and the optimizing subunit is configured to, when the obstacle avoidance scheme needs to be optimized, perform a difference between the fitness value and the fitness threshold, determine an optimization factor based on a difference result, and optimize the obstacle avoidance scheme according to the optimization factor.
In this embodiment, the fitness value may be an execution capability of the characterization obstacle avoidance scheme.
In this embodiment, the preset fitness threshold may be set in advance and is used to measure whether the obstacle avoidance scheme needs to be optimized; when the fitness value is smaller than the preset fitness threshold, it is determined that the obstacle avoidance scheme needs to be optimized (that is, the execution capability of the obstacle avoidance scheme is weak).
In this embodiment, the optimization factor may be a factor determined by a difference between a preset fitness threshold and a fitness value, and is used to optimize the obstacle avoidance scheme.
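A minimal sketch of how the optimization factor could be derived from the gap between the fitness value and the preset threshold, as described in this embodiment. The normalization used here is an illustrative assumption; the patent does not specify the exact mapping.

```python
def optimization_factor(fitness, threshold):
    """Derive an optimization factor from the shortfall of the fitness value.

    Returns 0.0 when no optimization is needed (fitness >= threshold); otherwise an
    assumed value in (0, 1] that grows with the gap and scales how strongly the
    obstacle avoidance scheme is adjusted.
    """
    if fitness >= threshold:
        return 0.0
    return min(1.0, (threshold - fitness) / threshold)

# Example with assumed numbers: a large shortfall yields a large optimization factor
print(optimization_factor(fitness=0.12, threshold=0.30))   # 0.6
```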
The beneficial effects of the above technical scheme are: when the static barrier or the dynamic barrier cannot move, the obstacle avoidance scheme is read to determine the obstacle avoidance path of the crane, so that a path track model is constructed, the fitness value of the obstacle avoidance scheme is determined based on the path track model, and whether the obstacle avoidance scheme needs to be optimized can be evaluated, so that the obstacle avoidance efficiency is improved, and meanwhile, the safety of personnel is better guaranteed.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any modification or replacement that a person skilled in the art could readily conceive within the technical scope and inventive concept disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A personnel collision avoidance system for a crane hoisting area based on AI vision and UWB technology, comprising an AI visual identification and UWB positioning module (1) and an anti-collision control module (2), characterized in that: the AI visual identification and UWB positioning module (1) is used for identifying people and objects in the hoisting area, detecting the position of the crane boom and the positions of personnel, completing the construction of a dynamic three-dimensional model of the hoisting space, detecting moving obstacles, calculating the position information, dimension information and operation information of the load and the obstacles online, and predicting collisions; the AI visual identification and UWB positioning module (1) comprises an acquisition unit (11), a positioning unit (12), a modeling unit (13), a prediction unit (14) and an alarm unit (15), and the acquisition unit (11), the positioning unit (12), the modeling unit (13), the prediction unit (14) and the alarm unit (15) are connected in sequence;
the anti-collision control module (2) is used for acquiring prediction information about possible collisions, controlling the starting, stopping and speed adjustment of the crane, giving timely early warning of possible collisions and notifying personnel to evacuate; the anti-collision control module (2) comprises a controller (21), a frequency converter (22) and an acousto-optic alarm unit (23), and the controller (21) is electrically connected with the frequency converter (22) and the acousto-optic alarm unit (23) respectively.
2. The AI vision and UWB technology based personnel collision avoidance system of crane lifting area of claim 1 wherein: the working process of the AI visual identification and UWB positioning module (1) comprises the following steps:
s11: the acquisition unit (11) acquires image information of the lifting area around the crane boom and of the whole working environment; the artificial intelligence terminal reads the camera image data in real time, performs inference on the images with a hardware-accelerated deep learning network model, detects people or objects present in the images, and gives their initial coordinate positions and area ranges;
s12: the positioning unit (12) collects the position information of a crane boom and personnel, perspective transformation is carried out through a previously calibrated area, and the distance between the personnel or objects in the image is calculated;
s13: the modeling unit (13) respectively obtains three-dimensional coordinates of the load, the dynamic barrier and the static barrier according to the coordinate position and the distance of the person or the object to obtain a dynamic model diagram;
s14: the prediction unit (14) predicts whether the load collides with the dynamic barrier or the static barrier according to the dynamic model diagram to obtain a prediction result;
s15: the alarm unit (15) is used for transmitting the prediction information to the anti-collision control module (2).
3. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area of claim 2, wherein: the acquisition unit (11) comprises a crane boom camera (111), a lifting area camera (112), a working environment camera (113) and an AI visual identification terminal (114); the crane boom camera (111), the lifting area camera (112) and the working environment camera (113) are electrically connected with the AI visual identification terminal (114); the crane boom camera (111) is installed on the side wall of the crane boom, the lifting area cameras (112) are arranged at the edge of the lifting area in two or more groups, each lifting area camera (112) is provided with a position sensor for sensing the position of the crane boom, the working environment camera (113) covers the whole working area, and the AI visual identification terminal (114) identifies the position and flow direction of personnel in the images shot by the crane boom camera (111), the lifting area camera (112) and the working environment camera (113) respectively.
4. The AI-vision and UWB technology based personnel collision avoidance system for crane lifting areas of claim 3 wherein: the workflow of the acquisition unit (11) comprises the following steps:
s111: the crane boom camera (111) shoots the whole lifting path along with the moving position of the crane boom, and the AI vision identification terminal (114) identifies whether people and obstacles appear;
s112: when a lifting area camera (112) senses that the crane boom is approaching, it automatically starts shooting; the AI visual identification terminal (114) identifies whether personnel enter the lifting area, and if personnel enter the lifting area, the prediction unit (14) and the alarm unit (15) perform prediction and alarm processing;
s113: the working environment camera (113) shoots the whole working environment, the AI visual recognition terminal (114) recognizes the personnel flow situation, the personnel close to the lifting area or the personnel flowing to the lifting area are found, and the prediction unit (14) and the alarm unit (15) carry out prediction and alarm processing.
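As a rough illustration of the trigger logic in S112 (not part of the claim), the lifting-area camera can stay idle until the boom position sensor reports the boom within a threshold distance, after which any person detected inside the calibrated zone is handed to the prediction and alarm units. The threshold value, the Detection class and the camera/detector/zone/predictor/alarm interfaces below are all assumptions made for the sketch.

```python
# Illustrative sketch (not from the patent) of the S112 trigger logic.
from dataclasses import dataclass

BOOM_NEAR_THRESHOLD_M = 15.0   # assumed trigger radius around the lifting area

@dataclass
class Detection:
    label: str          # "person", "obstacle", ...
    ground_xy: tuple    # position on the calibrated ground plane

def lifting_area_cycle(boom_distance_m, camera, detector, zone, predictor, alarm):
    if boom_distance_m > BOOM_NEAR_THRESHOLD_M:
        camera.stop()                       # boom far away: stay idle
        return
    camera.start()                          # S112: auto-start capture
    frame = camera.read()
    for det in detector(frame):             # S111/S113: AI recognition
        if det.label == "person" and zone.contains(det.ground_xy):
            level = predictor.assess(det)   # hand over to prediction unit (14)
            alarm.raise_level(level)        # and alarm unit (15)
```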
5. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area according to claim 1, wherein the positioning unit (12) comprises a crane boom UWB probe (121), a personnel UWB probe (122) and a UWB base station (123), and the crane boom UWB probe (121) and the personnel UWB probe (122) are respectively in communication connection with the UWB base station (123).
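The claim specifies only that the boom and personnel UWB probes communicate with the UWB base station; one common way such range measurements become coordinates is least-squares multilateration from three or more base stations with known positions. The sketch below assumes a 2-D deployment and near-ideal ranges, and is not taken from the patent.

```python
# Hedged sketch: least-squares multilateration of a UWB tag (boom or person)
# from ranges to base stations with known coordinates.
import numpy as np

def multilaterate(anchors, ranges):
    """Solve for a 2-D tag position from anchor coordinates and measured ranges.

    anchors: (N, 2) base-station positions, N >= 3
    ranges:  (N,)  distances reported for the tag
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Linearise by subtracting the first anchor's circle equation from the rest.
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (30.0, 0.0), (0.0, 20.0)]      # example base stations
print(multilaterate(anchors, [12.8, 21.5, 15.6]))     # -> approx. (10, 8)
```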
6. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area according to claim 1, wherein the working process of the anti-collision control module (2) comprises the following steps:
S21: the controller (21) receives the prediction or alarm information forwarded by the prediction unit (14) and the alarm unit (15); the information is viewed by a remote commander, and either the remote commander issues an instruction or the controller (21) issues an instruction according to the alarm level;
S22: the controller (21) transmits the instruction to the frequency converter (22) and the acousto-optic alarm unit (23); the frequency converter (22) controls the operation of the crane motor, and the acousto-optic alarm unit (23) gives an acousto-optic alarm to warn personnel to take avoiding action.
7. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area according to claim 6, wherein the alarm levels are divided into five levels: when a person approaches the lifting area or keeps moving toward the lifting area and the prediction unit (14) predicts that no collision is possible, the alarm unit (15) gives a first-level alarm; when a person enters the lifting area and the prediction unit (14) predicts that no collision is possible, the alarm unit (15) gives a second-level alarm; when a person is moving about and working within the lifting area and the prediction unit (14) predicts that no collision is possible, the alarm unit (15) gives a third-level alarm; when a person is moving about and working within the lifting area and the prediction unit (14) predicts that a collision is possible, the alarm unit (15) gives a fourth-level alarm; and when a person is under the load of the crane boom or on the necessary travel path of the crane boom and the prediction unit (14) predicts that a collision is possible, the alarm unit (15) gives a fifth-level alarm.
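Read as a decision rule, the five levels of claim 7 combine where the person is (approaching, inside, moving about inside, under the load or on the boom path) with whether the prediction unit (14) reports a possible collision. The sketch below encodes that mapping; the boolean flag names are assumptions for illustration only.

```python
# Illustrative mapping of the five alarm levels in claim 7 (flag names assumed).
def alarm_level(approaching, inside, moving_inside, under_load_or_on_path,
                collision_predicted):
    if under_load_or_on_path and collision_predicted:
        return 5          # under the load or on the boom's necessary travel path
    if moving_inside and collision_predicted:
        return 4          # moving work inside the area, collision predicted
    if moving_inside:
        return 3          # moving work inside the area, no collision predicted
    if inside:
        return 2          # entered the lifting area
    if approaching:
        return 1          # approaching or drifting toward the lifting area
    return 0              # no alarm
```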
8. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area according to claim 7, wherein the workflow of the controller (21) comprises the following steps:
S221: after receiving a first-level or second-level alarm, the controller (21) sends information to the acousto-optic alarm unit (23), and the acousto-optic alarm unit (23) gives an acousto-optic alarm to warn personnel to take avoiding action;
S222: after receiving a third-level alarm, the controller (21) sends information to the frequency converter (22) and the acousto-optic alarm unit (23); the frequency converter (22) controls the crane motor to perform a first-stage deceleration, and the acousto-optic alarm unit (23) gives an acousto-optic alarm to warn personnel to take avoiding action;
S223: after receiving a fourth-level alarm, the controller (21) sends information to the frequency converter (22) and the acousto-optic alarm unit (23); the frequency converter (22) controls the crane motor to perform a second-stage deceleration, the acousto-optic alarm unit (23) gives an acousto-optic alarm to warn personnel to take avoiding action, and the motor speed after the second-stage deceleration is lower than the motor speed after the first-stage deceleration;
S224: after receiving a fifth-level alarm, the controller (21) sends information to the frequency converter (22) and the acousto-optic alarm unit (23); the frequency converter (22) controls the crane motor to stop, and the acousto-optic alarm unit (23) gives an acousto-optic alarm to warn personnel to take avoiding action (a dispatch sketch of steps S221-S224 follows this claim).
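The dispatch in S221-S224 can be summarised as a mapping from alarm level to frequency-converter action. The sketch below assumes a VFD interface with set_frequency/stop methods and picks arbitrary first-stage and second-stage frequencies; the patent specifies only that the second stage is slower than the first.

```python
# Sketch of the controller (21) dispatch in S221-S224; the frequency-converter
# and sounder interfaces are assumptions, not disclosed APIs.
FIRST_STAGE_HZ = 25.0    # assumed "first-stage" slowed motor frequency
SECOND_STAGE_HZ = 10.0   # assumed "second-stage" frequency, lower than the first

def dispatch(level, vfd, sounder):
    sounder.alert(level)                     # every level triggers the acousto-optic warning
    if level == 3:
        vfd.set_frequency(FIRST_STAGE_HZ)    # first-stage deceleration
    elif level == 4:
        vfd.set_frequency(SECOND_STAGE_HZ)   # second-stage, slower than the first
    elif level == 5:
        vfd.stop()                           # fifth level: stop the crane motor
```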
9. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area according to claim 1, wherein the prediction unit (14) comprises:
a coordinate acquisition subunit, configured to determine a three-dimensional coordinate value of the load and a three-dimensional coordinate value of a hook of the crane;
the position processing subunit is used for determining a target angle of the load relative to the vertical direction of the hook based on the three-dimensional coordinate value of the load and the three-dimensional coordinate value of the hook;
the parameter obtaining subunit is used for obtaining the current length value of the steel wire rope connected with the hook and obtaining the weight value of the load;
the position analysis subunit is used for inputting the weight value of the load, the target angle of the load relative to the vertical direction of the hook and the current length value of the steel wire rope connected with the hook into the dynamic model diagram;
the position analyzing subunit is further configured to determine a first position point and a second position point of the motion trajectory in the dynamic model map, where the first position point is a starting position point of the load in the dynamic model map, and the second position point is a position point of the load farthest from the first position point in the dynamic model map;
the track generation subunit is used for acquiring a third position point on one side of a connecting line of the first position point and the second position point, and connecting the first position point, the second position point and the third position point to acquire a first motion track;
the track generation subunit is further configured to perform mirror symmetry processing on the first motion track to obtain a second motion track;
the track generation subunit is further configured to synthesize the first motion track and the second motion track to obtain a target motion track of the load when the crane lifts the load;
a point cloud set confirmation subunit, configured to synthesize the target motion trajectory into a motion region of the load according to a preset trajectory processing method in the dynamic model map, and determine a first point cloud set of the motion region and a second point cloud set of the dynamic obstacle or the static obstacle in the dynamic model map;
the analysis subunit is configured to analyze the first point cloud set and the second point cloud set respectively, and determine a set relationship between the first point cloud set and the second point cloud set;
the prediction subunit is used for determining a prediction result according to the set relationship between the first point cloud set and the second point cloud set;
when the first point cloud set and the second point cloud set intersect, the prediction result is that the load will collide with the static obstacle or the dynamic obstacle;
when the first point cloud set and the second point cloud set do not intersect, the prediction result is that the load will not collide with the static obstacle or the dynamic obstacle (a voxel-based sketch of this set test follows this claim);
the obstacle avoidance scheme generating subunit is configured to, when it is predicted that the load and the static obstacle or the dynamic obstacle will collide with each other, obtain an association node between the first point cloud set and the second point cloud set, and formulate an obstacle avoidance scheme according to the association node;
and the prediction report generating subunit is used for generating a prediction report according to the prediction result and the obstacle avoidance scheme, and transmitting the prediction report to the alarm unit (15).
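The set relationship test in claim 9 amounts to asking whether the point cloud of the load's swept motion region shares any space with the obstacle point cloud. One simple (assumed) way to realise it is to quantise both clouds into voxels and intersect the voxel sets; the shared voxels then play the role of the association nodes handed to the obstacle avoidance scheme generating subunit. The voxel size and helper names below are illustrative.

```python
# Minimal sketch of the set test in claim 9: voxelise the load's swept motion
# region and the obstacle points, then check whether any voxel is shared.
import numpy as np

def voxel_keys(points, voxel=0.25):
    """Map 3-D points (N, 3) to a set of integer voxel indices."""
    return set(map(tuple, np.floor(np.asarray(points) / voxel).astype(int)))

def collision_predicted(motion_region_pts, obstacle_pts, voxel=0.25):
    """True when the two point clouds intersect (share at least one voxel)."""
    shared = voxel_keys(motion_region_pts, voxel) & voxel_keys(obstacle_pts, voxel)
    return bool(shared), shared          # 'shared' stands in for the association nodes

# Example: a load region sweeping near a static obstacle
region = np.array([[1.0, 2.0, 3.0], [1.2, 2.1, 3.0], [1.4, 2.2, 3.1]])
obstacle = np.array([[1.21, 2.05, 3.02], [5.0, 5.0, 5.0]])
hit, nodes = collision_predicted(region, obstacle)
print(hit, nodes)
```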
10. The AI vision and UWB technology based personnel collision avoidance system for a crane lifting area according to claim 9, wherein the obstacle avoidance scheme generating subunit comprises:
the scheme reading subunit is used for reading the obstacle avoidance scheme, determining an obstacle avoidance path of the crane and determining the road coordinate points of the obstacle avoidance path;
the model building subunit is used for building a path trajectory model of the obstacle avoidance path based on adjacent road coordinate points:
M = [path trajectory model formula, reproduced in the original publication only as image FDA0003561216720000061];
wherein M represents the path trajectory model; the smoothness factor of the path trajectory (its symbol reproduced only as image FDA0003561216720000062) has a value range of (0.98, 0.99); θ represents a turning angle of the obstacle avoidance path; Σθ_j represents the total turning angle of the obstacle avoidance path; J represents the total number of turns encountered in the obstacle avoidance path; θ_j represents the current turning angle in the obstacle avoidance path; (x_{i+1}, y_{i+1}, z_{i+1}) represents the (i+1)-th road coordinate point; (x_i, y_i, z_i) represents the i-th road coordinate point, the (i+1)-th road coordinate point being adjacent to the i-th road coordinate point; Σl_j represents the total path length of the obstacle avoidance path; and l_j represents the distance between adjacent road coordinate points in the j-th segment, where j ranges over the total number of road coordinate points minus 1;
the calculating subunit is configured to calculate a fitness value of the obstacle avoidance scheme based on the path trajectory model:
F = M * (m + 1) * Σ|f_t - f_{t-1}|;
wherein F represents the fitness value of the obstacle avoidance scheme; m represents the weight of the path trajectory model, with a value range of (0, 1]; f_t represents the fitness value of the obstacle avoidance scheme at the current moment; f_{t-1} represents the fitness value of the obstacle avoidance scheme at the previous moment; t represents the current moment; and t-1 represents the previous moment (a computational sketch of these quantities follows this claim);
the comparison subunit is configured to compare the fitness value of the obstacle avoidance scheme with a preset fitness threshold value, and determine whether the obstacle avoidance scheme needs to be optimized;
when the fitness value is greater than or equal to the preset fitness threshold value, it is judged that the obstacle avoidance scheme does not need to be optimized;
otherwise, it is judged that the obstacle avoidance scheme needs to be optimized;
and the optimization subunit is configured to, when the obstacle avoidance scheme needs to be optimized, compute the difference between the fitness value and the fitness threshold value, determine an optimization factor based on the difference, and optimize the obstacle avoidance scheme according to the optimization factor.
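The exact expression for the path trajectory model M appears only as an image in the published text, so the sketch below computes just its stated inputs, the segment lengths l_j and the turning angles θ_j between adjacent road coordinate points, and then evaluates the printed fitness formula F = M * (m + 1) * Σ|f_t - f_{t-1}| for a given M, weight m and fitness history. Everything beyond those printed quantities is an assumption for illustration.

```python
# Sketch of the quantities behind claim 10: path geometry from road coordinate
# points, plus the printed fitness recursion. Not a reproduction of the patented
# formula for M, which is only given as an image in the original filing.
import numpy as np

def path_geometry(points):
    """Segment lengths l_j and turning angles theta_j from (N, 3) waypoints."""
    pts = np.asarray(points, dtype=float)
    segs = np.diff(pts, axis=0)                       # vectors between adjacent points
    lengths = np.linalg.norm(segs, axis=1)            # l_j for each segment
    angles = []
    for a, b in zip(segs[:-1], segs[1:]):             # angle between consecutive segments
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return lengths, np.asarray(angles)                # sums give total length and turning angle

def fitness(M, m, f_history):
    """F = M * (m + 1) * sum over t of |f_t - f_{t-1}|, with m in (0, 1]."""
    f = np.asarray(f_history, dtype=float)
    return M * (m + 1.0) * np.sum(np.abs(np.diff(f)))

waypoints = [(0, 0, 0), (4, 0, 0), (4, 3, 0), (7, 3, 1)]
lengths, angles = path_geometry(waypoints)
print(lengths.sum(), angles.sum(), fitness(M=0.9, m=0.5, f_history=[2.0, 2.4, 2.1]))
```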
CN202210293776.1A 2022-03-23 2022-03-23 Personnel collision avoidance system based on AI vision and UWB technology hoist area Pending CN114634112A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210293776.1A CN114634112A (en) 2022-03-23 2022-03-23 Personnel collision avoidance system based on AI vision and UWB technology hoist area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210293776.1A CN114634112A (en) 2022-03-23 2022-03-23 Personnel collision avoidance system based on AI vision and UWB technology hoist area

Publications (1)

Publication Number Publication Date
CN114634112A true CN114634112A (en) 2022-06-17

Family

ID=81948804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210293776.1A Pending CN114634112A (en) 2022-03-23 2022-03-23 Personnel collision avoidance system based on AI vision and UWB technology hoist area

Country Status (1)

Country Link
CN (1) CN114634112A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035458A (en) * 2022-07-06 2022-09-09 中国安全生产科学研究院 Safety risk evaluation method and system
CN115035458B (en) * 2022-07-06 2023-02-03 中国安全生产科学研究院 Safety risk evaluation method and system
CN116281664A (en) * 2023-03-06 2023-06-23 中海福陆重工有限公司 Crawler crane bearing device based on SPMT and control system
CN116281664B (en) * 2023-03-06 2024-01-23 中海福陆重工有限公司 Crawler crane bearing device based on SPMT and control system
CN116675118A (en) * 2023-08-03 2023-09-01 常州常欣起重物联科技有限公司 Crane inspection detection safety protection device and application method thereof
CN117671646A (en) * 2024-01-30 2024-03-08 深圳唯创安全技术有限公司 Anti-collision auxiliary system and method for forklift based on AI image
CN117671646B (en) * 2024-01-30 2024-04-09 深圳唯创安全技术有限公司 Anti-collision auxiliary system and method for forklift based on AI image

Similar Documents

Publication Publication Date Title
CN114634112A (en) Personnel collision avoidance system based on AI vision and UWB technology hoist area
CN109095356B (en) Engineering machinery and operation space dynamic anti-collision method, device and system thereof
CN109933064A (en) Multisensor secure path system for autonomous vehicle
CN106516990B (en) Container terminal field bridge anti-collision control system and method based on object contour tracking
CN107285206A (en) A kind of collision-proof method based on derrick crane collision prevention early warning system
CN110733983B (en) Tower crane safety control system and control method thereof
CN111226178A (en) Monitoring device, industrial system, method for monitoring, and computer program
CN108706469A (en) Crane intelligent anti-collision system based on millimetre-wave radar
CN114132842A (en) Real-time monitoring system and monitoring method for operation state of container gantry crane storage yard
KR102623060B1 (en) Accident prevention monitoring method and system for tower crane
JP2003118981A (en) Crane approach alarm device
CN116038684A (en) Robot collision early warning method based on vision
US11756427B1 (en) Traffic signal system for congested trafficways
CN111708356B (en) Automatic path planning system and method for crane
EP3592903A1 (en) Method for monitoring movement of a cantilever structure of an offshore platform, monitoring system, offshore platform
CN109461328B (en) Bridge anticollision monitoring device based on laser scanning
CN113194284B (en) Intelligent monitoring system and method for tower crane
CN105776042B (en) A kind of crane collision resistant monitoring method on dock platform
CN116129340A (en) Safety monitoring method for dangerous area based on action track prediction
CN111348559B (en) Control system and control method for predicting and avoiding collision between cable crane and gantry crane
EP3882199A2 (en) Specialized, personalized and enhanced elevator calling for robots & co-bots
CN117058211A (en) Grab bucket anti-shake collision strategy control method and system based on laser positioning
CN116281636B (en) Anti-collision method and system for group tower operation
CN117474321B (en) BIM model-based construction site risk intelligent identification method and system
CN115258869B (en) Elevator early warning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination