CN113233336B - Intelligent tower crane robot pick-and-place control method and system based on scene target recognition - Google Patents

Intelligent tower crane robot pick-and-place control method and system based on scene target recognition

Info

Publication number
CN113233336B
Authority
CN
China
Prior art keywords
space
placing
material taking
target
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110782585.7A
Other languages
Chinese (zh)
Other versions
CN113233336A (en)
Inventor
陈德木
蒋云
赵晓东
陆建江
陈曦
顾姣燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dajie Intelligent Transmission Technology Co Ltd
Original Assignee
Hangzhou Dajie Intelligent Transmission Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dajie Intelligent Transmission Technology Co Ltd filed Critical Hangzhou Dajie Intelligent Transmission Technology Co Ltd
Priority to CN202110782585.7A
Publication of CN113233336A
Application granted
Publication of CN113233336B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00 Other constructional features or details
    • B66C13/18 Control systems or devices
    • B66C13/48 Automatic control of crane drives for producing a single or repeated working cycle; Programme control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00 Other constructional features or details
    • B66C13/16 Applications of indicating, registering, or weighing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00 Other constructional features or details
    • B66C13/18 Control systems or devices
    • B66C13/40 Applications of devices for transmitting control pulses; Applications of remote control devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C23/00 Cranes comprising essentially a beam, boom, or triangular structure acting as a cantilever and mounted for translatory or swinging movements in vertical or horizontal planes or a combination of such movements, e.g. jib-cranes, derricks, tower cranes
    • B66C23/88 Safety gear
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an intelligent tower crane robot pick-and-place control method and system based on scene target recognition. For the space involved in material taking and placing, the invention performs grid-based space segmentation and target extraction and, on that basis, identifies and classifies target features; it then performs mode judgment on the space environment of the material taking and placing operation surface based on the scene state formed by the targets in the material taking and placing space, and adopts a control mode matched with that mode.

Description

Intelligent tower crane robot pick-and-place control method and system based on scene target recognition
Technical Field
The invention relates to the technical field of intelligent tower crane robots, and in particular to a pick-and-place control method and system for an intelligent tower crane robot based on scene target recognition.
Background
A tower crane is an important piece of engineering equipment used for the vertical lifting and horizontal movement of large materials; it supports transport, hoisting and other kinds of engineering operations and is widely used on construction sites and in ports, logistics and factories. A traditional tower crane must be driven and operated manually and relies on the experience and skill of the driver and related operators, so on-site operation still carries a certain degree of risk.
At present, intelligent robots are already used extensively in a considerable number of industrial fields, for example in logistics and on assembly lines. For large engineering facilities such as tower cranes, however, no intelligent robot that is completely unmanned, makes decisions autonomously and is automatically controlled has yet been realized.
In recent years, with the continuous improvement of automation and digitization in the engineering field, research and development of intelligent tower cranes has gradually started and made some progress, but it still remains at the relatively elementary level of manually commanded remote control, auxiliary operation prompts, abnormality alarms and the like; truly unmanned operation has not yet been achieved.
Specifically, transporting materials with a tower crane cannot be separated from the material taking and placing link, that is, grabbing the material to be transported and releasing it after it reaches the transport destination by means of a lifting hook, gripper or the like. In this taking and placing link, however, a relatively complex spatial scene is faced: static and dynamic obstacles are often present, risks of collision, interference and the like must be predicted and avoided, and at the same time the working-face space environment for grabbing and releasing the material must undergo mode recognition so that a suitable action timing and mechanism can be selected.
In the prior art, cameras, laser point cloud radars and the like are installed on the lifting hook, gripper and other components of an intelligent tower crane to expand the field of view, eliminate observation blind spots, and provide anti-collision braking. However, because the tower crane cannot make completely unmanned, autonomous decisions and judgments, these means only assist the material taking and placing link, which remains mainly manual operation; ground assistants and the driver generally have to communicate and coordinate through traditional means such as interphones, which is inefficient, costly in labor and prone to error.
Disclosure of Invention
(I) Objects of the invention
In view of the above problems, the invention aims to provide an intelligent tower crane robot pick-and-place control method and system based on scene target recognition. For the space involved in material taking and placing, the invention performs grid-based space segmentation and target extraction and, on that basis, identifies and classifies target features; it then performs mode judgment on the space environment of the material taking and placing operation surface based on the scene state formed by the targets in the material taking and placing space, and adopts a control mode matched with that mode.
The invention discloses the following technical scheme.
(II) technical scheme
As a first aspect of the invention, the invention discloses an intelligent tower crane robot pick-and-place control method based on scene target recognition, which comprises the following steps:
s101, acquiring three-dimensional scene data of a material taking and placing related space by the intelligent tower crane;
step S102, aiming at three-dimensional scene data, carrying out space segmentation and large-scale target segmentation on the basis of grids, and establishing mapping topology of the segmented space grids and targets;
step S103, mode judgment is carried out on the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space;
and S104, determining a control mode matched with the space environment mode of the material taking and placing operation surface, and issuing a control instruction to a material taking and placing component of the intelligent tower crane.
Preferably, in step S101, the three-dimensional scene data is obtained by any one or a combination of multiple means of the modes of Beidou GPS, UWB, depth camera positioning, 3D multi-line laser scanning radar, millimeter wave, centimeter wave radar, laser ranging, barometric ranging, ultrasonic ranging, optical flow ranging, encoder ranging, multi-camera synthesis video map, and the like; for the obtained data, performing the steps of registration, denoising, simplification, segmentation and the like to obtain three-dimensional coordinates of each target distribution space in the space; and forming the three-dimensional scene data by using the three-dimensional coordinates of each target and the distribution space thereof in the material taking and placing related space.
Preferably, in step S102, the grid-based spatial segmentation specifically includes: carrying out multiple rounds of space segmentation on the material taking and placing related space; wherein the 1st round of space division divides the space into 8 subspaces of the same size; the 2nd round of space division again divides each of those subspaces into 8 subspaces of the same size; and the division iterates in turn until a preset number of rounds K of space division is reached.
Preferably, in step S102, the mapping topology of the segmented spatial grid and the target specifically includes: an identifier of a grid cell, a set of adjacent grids of the grid cell, a description index of a grid cell association target.
Preferably, the step S103 of performing mode determination on the space environment of the material taking and placing working surface based on the scene state formed by each target in the material taking and placing space specifically includes: converting the description indexes of the targets related to the grid units adjacent to the grid unit where the operation surface is located and the indirectly adjacent grid units into description state symbols of the targets so as to generate a scene state formed by all targets in a space environment where the material taking and placing operation surface is located; in order to carry out mode judgment on the scene state of the space environment where the material taking and placing operation surface is located, a plurality of groups of scene state templates are preset; calculating the scene state of the current material taking and placing working face in the space environment and the matching coefficient of each scene state template; and determining a scene state template with the highest scene state matching value in the space environment of the current material taking and placing operation surface.
As a second aspect of the present invention, the present invention discloses an intelligent tower crane robot pick-and-place control system based on scene target identification, comprising: the system comprises a space three-dimensional scene perception module, a space segmentation and target association module, an operation space environment scene mode judgment module and a pick-and-place operation control module;
the spatial three-dimensional scene sensing module is used for acquiring three-dimensional scene data of a material taking and placing related space;
the space division and target association module is used for carrying out space division and large-scale target division on a grid basis aiming at three-dimensional scene data and establishing mapping topology of the divided space grids and the targets;
the operation space environment scene mode judging module is used for judging the mode of the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space;
and the taking and placing operation control module is used for determining a control mode matched with the space environment mode of the material taking and placing operation surface and issuing a control instruction to the material taking and placing component of the intelligent tower crane.
Preferably, the spatial three-dimensional scene sensing module obtains the three-dimensional scene data through any one or a plurality of comprehensive means of the modes of Beidou GPS, UWB and depth camera positioning, 3D multi-line laser scanning radar, millimeter wave and centimeter wave radar, laser ranging, air pressure ranging, ultrasonic ranging, optical flow ranging, encoder ranging, multi-camera synthesis video map and the like; and for the obtained data, steps of registration, denoising, simplification, segmentation and the like are executed to obtain three-dimensional coordinates of each target distribution space in the space; and forming the three-dimensional scene data by using the three-dimensional coordinates of each target and the distribution space thereof in the material taking and placing related space.
Preferably, the space segmentation and target association module performs multiple rounds of space segmentation on the material taking and placing related space; wherein the 1st round of space division divides the space into 8 subspaces of the same size; the 2nd round of space division again divides each of those subspaces into 8 subspaces of the same size; and the division iterates in turn until a preset number of rounds K of space division is reached.
Preferably, the step of establishing the mapping topology of the segmented space grid and the target by the space segmentation and target association module specifically includes: an identifier of a grid cell, a set of adjacent grids of the grid cell, a description index of a grid cell association target.
Preferably, the operation space environment scene mode determination module specifically determines the mode of the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space: converting the description indexes of the targets related to the grid units adjacent to the grid unit where the operation surface is located and the indirectly adjacent grid units into description state symbols of the targets so as to generate a scene state formed by all targets in a space environment where the material taking and placing operation surface is located; in order to carry out mode judgment on the scene state of the space environment where the material taking and placing operation surface is located, a plurality of groups of scene state templates are preset; calculating the scene state of the current material taking and placing working face in the space environment and the matching coefficient of each scene state template; and determining a scene state template with the highest scene state matching value in the space environment of the current material taking and placing operation surface.
(III) advantageous effects
The invention discloses an intelligent tower crane robot pick-and-place control method and system based on scene target identification, which have the following beneficial effects:
The intelligent tower crane robot can realize unmanned operation, autonomous decision-making and automatic control in the material taking and placing link; the control mode of the material taking and placing operation can be adapted based on the scene state of the space environment in which the material taking and placing operation surface is located.
The invention can adapt to relatively complex scenes in material taking and placing operation, avoids risks in collision, interference and the like, and adopts proper action time and mechanism. The control method and the system of the invention have simple and efficient algorithm, low calculation amount and no need of matching high-capacity software and hardware, and can be realized on the basis of the existing industrial control equipment and network.
Drawings
The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining and illustrating the present invention and should not be construed as limiting the scope of the present invention.
FIG. 1 is a flow chart of an intelligent tower crane robot pick-and-place control method based on scene target identification disclosed by the invention;
FIG. 2 is a structural diagram of an intelligent tower crane robot pick-and-place control system based on scene target recognition.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the embodiments of the present invention.
It should be noted that: in the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described are some embodiments of the present invention, not all embodiments, and features in embodiments and embodiments in the present application may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and for simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the present invention.
The following describes in detail a control method for taking and placing an intelligent tower crane robot based on scene target recognition, which is disclosed by the invention, with reference to fig. 1.
For the space involved in material taking and placing, the invention performs grid-based space segmentation and target extraction and, on that basis, identifies and classifies target features; it then performs mode judgment on the space environment of the material taking and placing operation surface based on the scene state formed by the targets in the material taking and placing space, and adopts a control mode matched with that mode.
Firstly, in step S101, the intelligent tower crane obtains three-dimensional scene data of a material taking and placing relevant space.
As used herein, the "material taking and placing related space" refers to the space occupied by the operation surface on which material is loaded before transport or unloaded after transport, as well as the space within a certain distance around that operation surface. Within this space there will obviously be various static and dynamic targets, including the material target being handled and static and dynamic obstacle targets (for example, building structures, engineering facilities and other materials in the space are static obstacle targets, while people or transport equipment such as trolleys travelling or moving in the space constitute dynamic obstacle targets).
The intelligent tower crane can acquire the three-dimensional scene data in the material taking and placing relevant space by any one or comprehensive multiple means of modes such as Beidou GPS, UWB and depth camera positioning, 3D multi-line laser scanning radar, millimeter wave, centimeter wave radar, laser ranging, air pressure ranging, ultrasonic ranging, optical flow ranging, encoder ranging, multi-camera synthetic video map and the like. For example, for laser point cloud, a laser point cloud radar can be installed on a lifting hook, a hand grip and other parts of the intelligent tower crane, and one or more independent laser point cloud radars can be arranged in a related space. And the laser point cloud radar emits a large number of laser beams to the material taking and placing related space and induces the reflected laser signals so as to obtain point cloud data formed by the reflected laser signals. And for the obtained point cloud data, performing the steps of registration, denoising, simplification, segmentation and the like to obtain the three-dimensional coordinates of each target distribution space in the space. And forming the three-dimensional scene data by using the three-dimensional coordinates of each target and the distribution space thereof in the material taking and placing related space.
And judging whether the target belongs to a static target or a dynamic target according to the three-dimensional coordinate of the distribution space of each target, and determining the scale grade of the target. According to whether the three-dimensional coordinate of the distribution space of the target changes along with time, whether the target belongs to a static target or a dynamic target can be judged. And a spatial scale threshold for distinguishing large-scale and small-scale targets is set in advance, and whether the scale of the distribution space of the targets is larger than the threshold is judged according to the three-dimensional coordinates of the distribution space of the targets, so that the scale grade of the targets is determined.
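As an illustration of this static/dynamic and scale-grade classification, the following Python sketch is given; the function name classify_target, the motion tolerance and the use of axis-aligned bounding boxes sampled over time are assumptions made for illustration, not details fixed by the patent.

```python
import numpy as np

# Minimal sketch: classify a detected target as static/dynamic and as
# large-scale/small-scale from the time series of its distribution-space
# coordinates. Names and thresholds are illustrative assumptions.

def classify_target(bboxes_over_time, scale_threshold, motion_tolerance=0.05):
    """bboxes_over_time: list of (bbox_min, bbox_max) pairs, one per time step,
    each a length-3 sequence of x, y, z coordinates (metres)."""
    mins = np.array([b[0] for b in bboxes_over_time], dtype=float)
    maxs = np.array([b[1] for b in bboxes_over_time], dtype=float)

    # Dynamic if the distribution-space coordinates change over time.
    centers = (mins + maxs) / 2.0
    displacement = np.abs(np.diff(centers, axis=0)).max() if len(centers) > 1 else 0.0
    dynamic = displacement > motion_tolerance

    # Large-scale if the spatial extent exceeds the preset scale threshold.
    extent = (maxs - mins).max()
    large_scale = extent > scale_threshold

    return dynamic, large_scale
```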
Step S102, aiming at three-dimensional scene data, space segmentation and large-scale target segmentation on the basis of grids are carried out, and mapping topology of the segmented space grids and the targets is established.
For the material taking and placing related space, the space is divided into space grids of appropriate size. The material taking and placing related space is taken as a cubic space whose side length is L. Multiple rounds of space segmentation are carried out on this space: in the 1st round the space is divided into 8 subspaces of equal size; in the 2nd round each of those subspaces is again divided into 8 subspaces of equal size; and so on iteratively until a preset number of rounds K of space division is reached.
The preset number of rounds K is determined as follows. All the dynamic small-scale targets determined in step S101 are selected, and the maximum size among them is determined, denoted d_max. After K rounds of space division, the side length of each grid cell is L/2^K. K is taken as the largest number of rounds for which L/2^K ≥ d_max still holds, i.e. the grid cells are made as fine as possible while remaining no smaller than the largest dynamic small-scale target.
Obviously, after the above spatial segmentation, the dynamic, small-sized target may occupy 1 or 2 spatial meshes, so as to associate the dynamic small-sized target with the segmented 1 or 2 spatial meshes.
After the space division, the condition that a static large-scale target occupies a plurality of space grids exists; in order to realize the identification of the target characteristics and the scene state based on the spatial grid subsequently, the static and large-scale target is also divided into a plurality of sub-targets based on the spatial grid, and each sub-target is associated with 1 spatial grid occupied by the sub-target. And establishing a directory in which the static and large-scale targets are divided into sub-targets according to the space grid, taking the static and large-scale targets as a root directory, and taking the divided sub-targets as sub-directory items under the root directory.
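The grid division and the target-to-grid association just described can be sketched as follows in Python. The Target fields, the axis-aligned bounding boxes and the helper names (choose_rounds, cell_index, associate_targets) are illustrative assumptions, and the stopping rule follows the reconstruction of the K condition given above.

```python
from dataclasses import dataclass

# A sketch of the K-round grid division and the target-to-grid association
# described above. Names are illustrative assumptions, not the patent's own.

@dataclass
class Target:
    tid: int
    bbox_min: tuple           # (x, y, z) lower corner of the target's distribution space
    bbox_max: tuple           # (x, y, z) upper corner
    dynamic: bool             # True if its coordinates change over time
    large_scale: bool         # True if its extent exceeds the preset scale threshold

def choose_rounds(space_side_L: float, targets: list) -> int:
    """Choose K as the largest number of rounds for which the cell side L/2^K
    is still no smaller than the largest dynamic small-scale target."""
    small_dynamic = [t for t in targets if t.dynamic and not t.large_scale]
    d_max = max(max(hi - lo for lo, hi in zip(t.bbox_min, t.bbox_max))
                for t in small_dynamic)
    K = 0
    while space_side_L / 2 ** (K + 1) >= d_max:
        K += 1
    return max(K, 1)

def cell_index(point, origin, cell_size):
    """Map a 3-D point to the (i, j, k) index of the grid cell containing it."""
    return tuple(int((p - o) // cell_size) for p, o in zip(point, origin))

def associate_targets(targets, origin, space_side_L, K):
    """Associate each target with every grid cell its bounding box overlaps."""
    cell_size = space_side_L / 2 ** K
    cells = {}
    for t in targets:
        lo = cell_index(t.bbox_min, origin, cell_size)
        hi = cell_index(t.bbox_max, origin, cell_size)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    cells.setdefault((i, j, k), []).append(t.tid)
    return cells
```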
Then, on the basis of the space division and the target-to-space-grid association, the mapping topology of the divided space grid and the targets is established. The mapping topology is organized per grid cell and includes: the identifier of the grid cell, denoted g_i; the adjacent grid set of the grid cell, denoted N(g_i), whose elements are the identifiers of the grids adjacent to g_i; and the description index of the target associated with the grid cell. As described above, if the target or sub-target associated with the grid cell can be determined, its description index is formed from the category of the associated target, its target number and, where applicable, its sub-target number; the target category covers the static and dynamic categories as well as the large-scale and small-scale categories. The targets in the material taking and placing related space are numbered sequentially, and the sub-targets segmented from static large-scale targets are numbered likewise.
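A minimal data-structure sketch of this per-cell mapping topology is shown below; the field names are illustrative assumptions rather than the patent's own identifiers.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal per-cell mapping-topology sketch; field names are assumptions.

@dataclass
class TargetDescriptor:
    dynamic: bool                          # static / dynamic category
    large_scale: bool                      # large-scale / small-scale category
    target_no: int                         # sequential number of the target
    sub_target_no: Optional[int] = None    # set only for sub-targets of static large-scale targets

@dataclass
class GridCell:
    identifier: tuple                                    # e.g. the (i, j, k) cell index g_i
    neighbors: set = field(default_factory=set)          # identifiers of adjacent cells, N(g_i)
    descriptors: list = field(default_factory=list)      # TargetDescriptor entries of associated targets
```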
And S103, judging the mode of the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space.
And determining the grid unit where the operation surface is located for the operation surface for loading materials before transportation or the operation surface for unloading the materials after transportation, namely the material taking and placing operation surface. Extracting an adjacent grid set of the grid unit where the operation surface is located according to the mapping topological structure; and aiming at the grid cells in the adjacent grid set, obtaining the description indexes of the grid cell association targets.
Further, an adjacent grid set of any one or more grid cells in an adjacent grid set of the grid cell where the working surface is located may also be extracted, which is called an indirectly adjacent grid cell, and a description index of an indirectly adjacent grid cell association target is obtained.
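Collecting the adjacent and indirectly adjacent grid cells of the working-face cell, together with their target description indexes, can be sketched as follows; it assumes the hypothetical GridCell structure from the sketch above, a `cells` dictionary mapping each identifier to its GridCell, and a neighbourhood depth of 2.

```python
def neighborhood_descriptors(cells, work_cell_id, depth=2):
    """Gather the description indexes of targets associated with the grid cells
    adjacent (depth 1) and indirectly adjacent (depth 2) to the working-face cell."""
    frontier, seen = {work_cell_id}, {work_cell_id}
    for _ in range(depth):
        frontier = {n for c in frontier for n in cells[c].neighbors} - seen
        seen |= frontier
    ordered = sorted(seen - {work_cell_id})            # the M neighbouring cells
    return {c: cells[c].descriptors for c in ordered}
```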
The description indexes of the targets associated with the grid cells adjacent to, and indirectly adjacent to, the grid cell in which the operation surface is located are converted into description state symbols of those targets, so as to generate the scene state formed by the targets in the space environment of the material taking and placing operation surface. The description state symbol obtained for the target associated with the i-th of these grid cells is denoted e_i, where i is the grid cell sequence number; the value of e_i differs according to whether the associated target is large-scale or small-scale and whether it is static or dynamic. If the number of grid cells adjacent to and indirectly adjacent to the grid cell in which the operation surface is located is M, the scene state formed by the targets in the space environment of the material taking and placing operation surface is represented as
E = (e_1, e_2, ..., e_M).
To carry out mode judgment on this scene state, several groups of scene state templates are preset. The j-th group of scene state templates is denoted F_j, where j is the scene state template number; each scene state template likewise contains the target description state symbols of M adjacent and indirectly adjacent grid cells, denoted f_{j,1}, f_{j,2}, ..., f_{j,M}. The L preset groups of scene state templates can therefore be written as the scene state pattern criteria matrix
F = [f_{j,i}], with j = 1, ..., L and i = 1, ..., M.
The current scene state E = (e_1, ..., e_M) of the space environment of the material taking and placing operation surface is added to the scene state pattern criteria matrix F as an additional row to form the initial scene state mode judgment matrix D.
Dimension normalization is then carried out on the initial scene state mode judgment matrix D. Each normalized entry has a value range of [0, 1]; in particular, the normalized template symbols and the normalized current-state symbols are
f'_{j,i} = (f_{j,i} - f_i^min) / (f_i^max - f_i^min) and e'_i = (e_i - f_i^min) / (f_i^max - f_i^min),
where f_i^min denotes the minimum value of the target description state symbol of grid cell sequence number i over all L groups of scene state templates, and f_i^max denotes the corresponding maximum value.
After normalization, the matching coefficient between the scene state of the current material taking and placing operation surface in its space environment and each scene state template is calculated as
ξ_{j,i} = (Δ_min + ρ·Δ_max) / (Δ_{j,i} + ρ·Δ_max),
where ξ_{j,i} is the matching coefficient between the normalized target description state symbol f'_{j,i} of grid cell sequence number i in the j-th group scene state template and the normalized target description state symbol e'_i of grid cell sequence number i in the space environment of the current material taking and placing operation surface; the value range of j is 1 to L and the value range of i is 1 to M; Δ_{j,i} = |e'_i - f'_{j,i}| is the absolute difference between these two normalized symbols; Δ_min denotes the minimum value of Δ_{j,i} over the whole range j = 1, ..., L and i = 1, ..., M, and Δ_max denotes the maximum value of Δ_{j,i} over the same range; ρ is the adjustment factor, with a value range between 0 and 1.
From the matching coefficients ξ_{j,i}, the matching matrix Ξ = [ξ_{j,i}] with L rows and M columns is obtained.
Further, a weight vector for the M grid cells is determined,
W = (w_1, w_2, ..., w_M),
where w_i is the weight assigned to the grid cell with sequence number i. The matching value between the scene state of the current material taking and placing operation surface in its space environment and the j-th group scene state template is then
R_j = Σ_{i=1}^{M} w_i · ξ_{j,i}.
Thereby, among the L groups of scene state templates, the scene state template with the highest matching value to the scene state of the space environment of the current material taking and placing operation surface is determined.
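The step S103 computation can be illustrated with the Python sketch below, which follows the formulas as reconstructed above (min-max normalization, a grey-relational-style matching coefficient and a weighted matching value). The uniform default weights, the adjustment factor default of 0.5 and the function name match_scene_state are illustrative assumptions.

```python
import numpy as np

# Sketch of the step S103 template matching under the reconstructed formulas.

def match_scene_state(E, F, weights=None, rho=0.5):
    """E: length-M vector of state symbols for the current working-face environment.
    F: (L, M) matrix of state symbols, one row per preset scene state template.
    Returns the index of the best-matching template and all matching values."""
    E = np.asarray(E, dtype=float)
    F = np.asarray(F, dtype=float)
    L, M = F.shape

    # Dimension normalization using the per-column min/max of the templates.
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)   # guard against zero division
    E_n = (E - f_min) / span
    F_n = (F - f_min) / span

    # Matching coefficients xi[j, i].
    delta = np.abs(E_n[None, :] - F_n)                   # (L, M) absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)

    # Weighted matching value per template.
    w = np.full(M, 1.0 / M) if weights is None else np.asarray(weights, dtype=float)
    R = xi @ w
    return int(np.argmax(R)), R
```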
And S104, determining a control mode matched with the space environment mode of the material taking and placing operation surface, and issuing a control instruction to a material taking and placing component of the intelligent tower crane.
For the L groups of scene state templates, the intelligent tower crane presets a control mode corresponding to each scene state template. For a given group of scene state templates F_j, the corresponding control mode is preset according to whether the targets associated with the M grid cells adjacent to and indirectly adjacent to the grid cell in which the operation surface is located are large-scale or small-scale, whether they are static or dynamic, and how the associated targets are distributed among the grid cells. The control mode specifies the longitudinal moving speed, the transverse moving amplitude and the transverse moving speed of the material taking and placing component of the tower crane during the taking and placing action. For example, when the M adjacent and indirectly adjacent grid cells contain many dynamic, small-scale associated targets, the material taking and placing component uses smaller longitudinal and transverse moving speeds and a larger transverse moving amplitude, so as to avoid collision with the dynamic targets and to leave sufficient time for them to move clear. When the M adjacent and indirectly adjacent grid cells contain static, large-scale associated targets, the material taking and placing component uses relatively large longitudinal and transverse moving speeds and a small transverse moving amplitude.
Then, according to the scene state template determined in step S103 to have the highest matching value to the scene state of the space environment of the current material taking and placing operation surface, the control mode preset for that group of scene state templates is selected as the actual control mode for the material taking and placing component of the intelligent tower crane, and the corresponding control instruction is issued according to the parameters of the actual control mode.
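A minimal sketch of this template-to-control-mode mapping is given below; the ControlMode fields and the numeric parameter values are purely illustrative assumptions about what such preset modes might contain.

```python
from dataclasses import dataclass

# Sketch of step S104: each preset scene state template maps to a control mode
# for the pick-and-place component. Values below are illustrative assumptions.

@dataclass
class ControlMode:
    longitudinal_speed: float    # m/s during the pick/place action
    transverse_speed: float      # m/s
    transverse_amplitude: float  # m, allowed lateral travel

# Hypothetical preset table: one control mode per scene state template index.
CONTROL_MODES = {
    0: ControlMode(longitudinal_speed=0.2, transverse_speed=0.2, transverse_amplitude=1.5),  # many dynamic small targets nearby
    1: ControlMode(longitudinal_speed=0.8, transverse_speed=0.6, transverse_amplitude=0.4),  # mostly static large targets nearby
}

def select_control_mode(best_template_index: int) -> ControlMode:
    """Return the preset control mode for the best-matching scene state template."""
    return CONTROL_MODES[best_template_index]
```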
The intelligent tower crane robot pick-and-place control system based on scene target recognition disclosed by the invention is described in detail with reference to fig. 2. As shown in fig. 2, the system includes a spatial three-dimensional scene sensing module, a spatial segmentation and target association module, an operation space environment scene mode determination module, and a pick-and-place operation control module.
For the space involved in material taking and placing, the system performs grid-based space segmentation and target extraction and, on that basis, identifies and classifies target features; it then performs mode judgment on the space environment of the material taking and placing operation surface based on the scene state formed by the targets in the material taking and placing space, and adopts a control mode matched with that mode.
And the spatial three-dimensional scene perception module is used for acquiring three-dimensional scene data of the material taking and placing related space.
As used herein, the "material taking and placing related space" refers to the space occupied by the operation surface on which material is loaded before transport or unloaded after transport, as well as the space within a certain distance around that operation surface. Within this space there will obviously be various static and dynamic targets, including the material target being handled and static and dynamic obstacle targets (for example, building structures, engineering facilities and other materials in the space are static obstacle targets, while people or transport equipment such as trolleys travelling or moving in the space constitute dynamic obstacle targets).
The spatial three-dimensional scene sensing module can acquire three-dimensional scene data in the material taking and placing related space by any one or a plurality of comprehensive means of modes such as Beidou GPS, UWB and depth camera positioning, 3D multi-line laser scanning radar, millimeter wave, centimeter wave radar, laser ranging, air pressure ranging, ultrasonic ranging, optical flow ranging, encoder ranging, multi-camera synthetic video map and the like; for example, various sensors discussed above can be installed on the lifting hook, the gripping handle and other parts of the intelligent tower crane, and independent one or more sensors can be arranged in the relevant space. And performing the steps of registration, denoising, simplification, segmentation and the like to obtain the three-dimensional coordinates of each target distribution space in the space. And forming the three-dimensional scene data by using the three-dimensional coordinates of each target and the distribution space thereof in the material taking and placing related space.
And according to the distribution space three-dimensional coordinate of each target, the space three-dimensional scene perception module judges whether the target belongs to a static target or a dynamic target and determines the scale grade of the target. According to whether the three-dimensional coordinate of the distribution space of the target changes along with time, whether the target belongs to a static target or a dynamic target can be judged. And a spatial scale threshold for distinguishing large-scale and small-scale targets is set in advance, and whether the scale of the distribution space of the targets is larger than the threshold is judged according to the three-dimensional coordinates of the distribution space of the targets, so that the scale grade of the targets is determined.
And the space division and target association module is used for carrying out space division and large-scale target division on a grid basis aiming at the three-dimensional scene data and establishing the mapping topology of the divided space grid and the target.
For the material taking and placing related space, the space is divided into space grids of appropriate size. The material taking and placing related space is taken as a cubic space whose side length is L. Multiple rounds of space segmentation are carried out on this space: in the 1st round the space is divided into 8 subspaces of equal size; in the 2nd round each of those subspaces is again divided into 8 subspaces of equal size; and so on iteratively until a preset number of rounds K of space division is reached.
The preset number of rounds K is determined as follows. All the dynamic small-scale targets determined by the spatial three-dimensional scene perception module are selected, and the maximum size among them is determined, denoted d_max. After K rounds of space division, the side length of each grid cell is L/2^K. K is taken as the largest number of rounds for which L/2^K ≥ d_max still holds, i.e. the grid cells are made as fine as possible while remaining no smaller than the largest dynamic small-scale target.
Obviously, after the above spatial segmentation, the dynamic, small-sized target may occupy 1 or 2 spatial meshes, so as to associate the dynamic small-sized target with the segmented 1 or 2 spatial meshes.
After the space division, the condition that a static large-scale target occupies a plurality of space grids exists; in order to realize the identification of the target characteristics and the scene state based on the spatial grid subsequently, the static and large-scale target is also divided into a plurality of sub-targets based on the spatial grid, and each sub-target is associated with 1 spatial grid occupied by the sub-target. And establishing a directory in which the static and large-scale targets are divided into sub-targets according to the space grid, taking the static and large-scale targets as a root directory, and taking the divided sub-targets as sub-directory items under the root directory.
Then, on the basis of the space division and the target-to-space-grid association, the mapping topology of the divided space grid and the targets is established. The mapping topology is organized per grid cell and includes: the identifier of the grid cell, denoted g_i; the adjacent grid set of the grid cell, denoted N(g_i), whose elements are the identifiers of the grids adjacent to g_i; and the description index of the target associated with the grid cell. As described above, if the target or sub-target associated with the grid cell can be determined, its description index is formed from the category of the associated target, its target number and, where applicable, its sub-target number; the target category covers the static and dynamic categories as well as the large-scale and small-scale categories. The targets in the material taking and placing related space are numbered sequentially, and the sub-targets segmented from static large-scale targets are numbered likewise.
And the operation space environment scene mode judging module is used for judging the mode of the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space.
And determining the grid unit where the operation surface is located for the operation surface for loading materials before transportation or the operation surface for unloading the materials after transportation, namely the material taking and placing operation surface. Extracting an adjacent grid set of the grid unit where the operation surface is located according to the mapping topological structure; and aiming at the grid cells in the adjacent grid set, obtaining the description indexes of the grid cell association targets.
Further, an adjacent grid set of any one or more grid cells in an adjacent grid set of the grid cell where the working surface is located may also be extracted, which is called an indirectly adjacent grid cell, and a description index of an indirectly adjacent grid cell association target is obtained.
The operation space environment scene mode judging module converts the description indexes of the targets associated with the grid cells adjacent to, and indirectly adjacent to, the grid cell in which the operation surface is located into description state symbols of those targets, so as to generate the scene state formed by the targets in the space environment of the material taking and placing operation surface. The description state symbol obtained for the target associated with the i-th of these grid cells is denoted e_i, where i is the grid cell sequence number; the value of e_i differs according to whether the associated target is large-scale or small-scale and whether it is static or dynamic. If the number of grid cells adjacent to and indirectly adjacent to the grid cell in which the operation surface is located is M, the scene state formed by the targets in the space environment of the material taking and placing operation surface is represented as
E = (e_1, e_2, ..., e_M).
To carry out mode judgment on this scene state, the operation space environment scene mode judging module is preset with several groups of scene state templates. The j-th group of scene state templates is denoted F_j, where j is the scene state template number; each scene state template likewise contains the target description state symbols of M adjacent and indirectly adjacent grid cells, denoted f_{j,1}, f_{j,2}, ..., f_{j,M}. The L preset groups of scene state templates can therefore be written as the scene state pattern criteria matrix
F = [f_{j,i}], with j = 1, ..., L and i = 1, ..., M.
The current scene state E = (e_1, ..., e_M) of the space environment of the material taking and placing operation surface is added to the scene state pattern criteria matrix F as an additional row to form the initial scene state mode judgment matrix D.
Dimension normalization is then carried out on the initial scene state mode judgment matrix D. Each normalized entry has a value range of [0, 1]; in particular, the normalized template symbols and the normalized current-state symbols are
f'_{j,i} = (f_{j,i} - f_i^min) / (f_i^max - f_i^min) and e'_i = (e_i - f_i^min) / (f_i^max - f_i^min),
where f_i^min denotes the minimum value of the target description state symbol of grid cell sequence number i over all L groups of scene state templates, and f_i^max denotes the corresponding maximum value.
After normalization, the matching coefficient between the scene state of the current material taking and placing operation surface in its space environment and each scene state template is calculated as
ξ_{j,i} = (Δ_min + ρ·Δ_max) / (Δ_{j,i} + ρ·Δ_max),
where ξ_{j,i} is the matching coefficient between the normalized target description state symbol f'_{j,i} of grid cell sequence number i in the j-th group scene state template and the normalized target description state symbol e'_i of grid cell sequence number i in the space environment of the current material taking and placing operation surface; the value range of j is 1 to L and the value range of i is 1 to M; Δ_{j,i} = |e'_i - f'_{j,i}| is the absolute difference between these two normalized symbols; Δ_min denotes the minimum value of Δ_{j,i} over the whole range j = 1, ..., L and i = 1, ..., M, and Δ_max denotes the maximum value of Δ_{j,i} over the same range; ρ is the adjustment factor, with a value range between 0 and 1.
From the matching coefficients ξ_{j,i}, the matching matrix Ξ = [ξ_{j,i}] with L rows and M columns is obtained.
further, weight vectors for the M grid cells are determined:
Figure DEST_PATH_IMAGE076
wherein
Figure DEST_PATH_IMAGE077
The scene state and the first scene state of the current material taking and placing operation surface in the space environment
Figure DEST_PATH_IMAGE078
The matching values of the group scene state template are:
Figure DEST_PATH_IMAGE079
therefore, in the L groups of scene state templates, the scene state template with the highest scene state matching value in the space environment where the current material taking and placing operation surface is located is determined.
And the taking and placing operation control module is used for determining a control mode matched with the space environment mode of the material taking and placing operation surface and issuing a control instruction to the material taking and placing component of the intelligent tower crane.
For the L groups of scene state templates, the intelligent tower crane presets a control mode corresponding to each scene state template. For a given group of scene state templates F_j, the corresponding control mode is preset according to whether the targets associated with the M grid cells adjacent to and indirectly adjacent to the grid cell in which the operation surface is located are large-scale or small-scale, whether they are static or dynamic, and how the associated targets are distributed among the grid cells. The control mode specifies the longitudinal moving speed, the transverse moving amplitude and the transverse moving speed of the material taking and placing component of the tower crane during the taking and placing action. For example, when the M adjacent and indirectly adjacent grid cells contain many dynamic, small-scale associated targets, the material taking and placing component uses smaller longitudinal and transverse moving speeds and a larger transverse moving amplitude, so as to avoid collision with the dynamic targets and to leave sufficient time for them to move clear. When the M adjacent and indirectly adjacent grid cells contain static, large-scale associated targets, the material taking and placing component uses relatively large longitudinal and transverse moving speeds and a small transverse moving amplitude.
Then, according to the scene state template determined to have the highest matching value to the scene state of the space environment of the current material taking and placing operation surface, the taking and placing operation control module selects the control mode preset for that group of scene state templates as the actual control mode for the material taking and placing component of the intelligent tower crane, and issues the corresponding control instruction according to the parameters of the actual control mode.
Therefore, the intelligent tower crane robot can realize unmanned, autonomous decision-making and automatic control in the material taking and placing link; the control mode of the material taking and placing operation can be adapted based on the scene state of the space environment where the material taking and placing operation surface is located.
The invention can adapt to relatively complex scenes in material taking and placing operation, avoids risks in collision, interference and the like, and adopts proper action time and mechanism. The control method and the system of the invention have simple and efficient algorithm, low calculation amount and no need of matching high-capacity software and hardware, and can be realized on the basis of the existing industrial control equipment and network.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (2)

1. An intelligent tower crane robot taking and placing control method based on scene target recognition is characterized by comprising the following steps:
s101, acquiring three-dimensional scene data of a material taking and placing related space by the intelligent tower crane;
step S102, aiming at three-dimensional scene data, carrying out space segmentation and large-scale target segmentation on the basis of grids, and establishing mapping topology of the segmented space grids and targets;
step S103, mode judgment is carried out on the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space;
step S104, determining a control mode matched with the space environment mode of the material taking and placing operation surface, issuing a control instruction to a material taking and placing component of the intelligent tower crane,
in the step S101, the three-dimensional scene data is obtained through any one or a plurality of comprehensive means of the modes of Beidou GPS, UWB and depth camera positioning, 3D multi-line laser scanning radar, millimeter wave and centimeter wave radar, laser ranging, air pressure ranging, ultrasonic ranging, optical flow ranging, encoder ranging, multi-camera video map synthesis and the like; for the obtained three-dimensional scene data, steps of registration, denoising, simplification, segmentation and the like are executed, and three-dimensional coordinates of each target distribution space in the space are obtained; three-dimensional coordinates of each target and the distribution space thereof in the material taking and placing related space form the three-dimensional scene data,
in step S102, the grid-based spatial segmentation specifically includes: carrying out multiple rounds of space segmentation on the material taking and placing related space; wherein the 1st round of space division divides the space into 8 subspaces of the same size; the 2nd round of space division again divides each of those subspaces into 8 subspaces of the same size; and the division iterates in turn until a preset number of rounds K of space division is reached,
in step S102, the mapping topology of the segmented spatial grid and the target specifically includes: an identifier of a grid cell, a set of adjacent grids of the grid cell, a description index of a grid cell association objective,
step S103 is to determine a mode of the space environment of the material taking and placing working surface based on the scene state of each object in the material taking and placing space, specifically including: converting the description indexes of the targets related to the grid units adjacent to the grid unit where the operation surface is located and the indirectly adjacent grid units into description state symbols of the targets so as to generate a scene state formed by all targets in a space environment where the material taking and placing operation surface is located; in order to carry out mode judgment on the scene state of the space environment where the material taking and placing operation surface is located, a plurality of groups of scene state templates are preset; calculating the scene state of the current material taking and placing working face in the space environment and the matching coefficient of each scene state template; and determining a scene state template with the highest scene state matching value in the space environment of the current material taking and placing operation surface.
2. An intelligent tower crane robot pick-and-place control system based on scene target recognition, characterized by comprising: a spatial three-dimensional scene perception module, a space segmentation and target association module, an operation space environment scene mode judgment module and a pick-and-place operation control module;
the spatial three-dimensional scene sensing module is used for acquiring three-dimensional scene data of a material taking and placing related space;
the space division and target association module is used for carrying out space division and large-scale target division on a grid basis aiming at three-dimensional scene data and establishing mapping topology of the divided space grids and the targets;
the operation space environment scene mode judging module is used for judging the mode of the space environment of the material taking and placing operation surface based on the scene state formed by each target in the material taking and placing space;
the system comprises a taking and placing operation control module, a three-dimensional scene sensing module and a multi-camera synthetic video map, wherein the taking and placing operation control module is used for determining a control mode matched with a space environment mode of a material taking and placing operation surface and issuing a control command to a material taking and placing component of an intelligent tower crane, and the three-dimensional scene sensing module obtains three-dimensional scene data through any one or a plurality of comprehensive means of modes such as Beidou GPS, UWB and depth camera positioning, 3D multi-line laser scanning radar, millimeter wave, centimeter wave radar, laser ranging, air pressure ranging, ultrasonic ranging, optical flow ranging, encoder ranging, multi-camera synthetic video map and the like; for the obtained three-dimensional scene data, steps of registration, denoising, simplification, segmentation and the like are executed, and three-dimensional coordinates of each target distribution space in the space are obtained; the three-dimensional scene data is formed by three-dimensional coordinates of each target and a distribution space thereof in the material taking and placing related space, and the space segmentation and target association module is used for performing multi-round space segmentation on the material taking and placing related space; wherein, the 1 st round of space division divides the space into 8 subspaces with the same size; dividing the 2 nd round of space into 8 subspaces with the same size; sequentially iterating until a preset number of turns K of space division is reached, and specifically, the step of establishing a mapping topology of the divided space grid and the target by the space division and target association module comprises the following steps: the identifier of the grid unit, the adjacent grid set of the grid unit and the description index of the grid unit associated target, and the operation space environment scene mode judging module specifically comprises the following steps of based on the scene state formed by each target in the material taking and placing space, and carrying out mode judgment on the space environment of the material taking and placing operation surface: converting the description indexes of the targets related to the grid units adjacent to the grid unit where the operation surface is located and the indirectly adjacent grid units into description state symbols of the targets so as to generate a scene state formed by all targets in a space environment where the material taking and placing operation surface is located; in order to carry out mode judgment on the scene state of the space environment where the material taking and placing operation surface is located, a plurality of groups of scene state templates are preset; calculating the scene state of the current material taking and placing working face in the space environment and the matching coefficient of each scene state template; and determining a scene state template with the highest scene state matching value in the space environment of the current material taking and placing operation surface.
CN202110782585.7A 2021-07-12 2021-07-12 Intelligent tower crane robot pick-and-place control method and system based on scene target recognition Active CN113233336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110782585.7A CN113233336B (en) 2021-07-12 2021-07-12 Intelligent tower crane robot pick-and-place control method and system based on scene target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110782585.7A CN113233336B (en) 2021-07-12 2021-07-12 Intelligent tower crane robot pick-and-place control method and system based on scene target recognition

Publications (2)

Publication Number Publication Date
CN113233336A CN113233336A (en) 2021-08-10
CN113233336B true CN113233336B (en) 2021-11-16

Family

ID=77135346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110782585.7A Active CN113233336B (en) 2021-07-12 2021-07-12 Intelligent tower crane robot pick-and-place control method and system based on scene target recognition

Country Status (1)

Country Link
CN (1) CN113233336B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114572845B (en) * 2022-01-24 2023-06-02 杭州大杰智能传动科技有限公司 Intelligent auxiliary robot for detecting working condition of intelligent tower crane and control method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112249902B (en) * 2019-07-21 2022-11-29 长沙智能驾驶研究院有限公司 Tower crane control method, device and system for smart construction site and tower crane virtual control cabin
CN112429647B (en) * 2020-11-16 2021-11-09 湖南三一塔式起重机械有限公司 Control method and control device of crane
CN112862887A (en) * 2020-12-31 2021-05-28 卓喜龙 Building construction safety management early warning system and safety cloud platform based on artificial intelligence and big data analysis
CN112758824A (en) * 2021-01-21 2021-05-07 宜昌市创星电子技术发展有限公司 Unmanned control system of tower crane

Also Published As

Publication number Publication date
CN113233336A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
Tang et al. Recognition and localization methods for vision-based fruit picking robots: A review
CN111243017B (en) Intelligent robot grabbing method based on 3D vision
CN103049912B (en) Random trihedron-based radar-camera system external parameter calibration method
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
CN109202885B (en) Material carrying and moving composite robot
CN110450153A Mechanical arm active article pick-up method based on deep reinforcement learning
CN109255302A (en) Object recognition methods and terminal, mobile device control method and terminal
CN104331894A (en) Robot unstacking method based on binocular stereoscopic vision
CN113233336B (en) Intelligent tower crane robot pick-and-place control method and system based on scene target recognition
WO2022095067A1 (en) Path planning method, path planning device, path planning system, and medium thereof
CN110969660A (en) Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning
CN113415728B (en) Automatic planning method and system for lifting path of tower crane
CN114241269B Container truck vision fusion positioning system for automatic quay crane control
CN111679690A (en) Method for routing inspection unmanned aerial vehicle nest distribution and information interaction
CN113674355A (en) Target identification and positioning method based on camera and laser radar
CN113433949A (en) Automatic following object conveying robot and object conveying method thereof
Beinschob et al. Advances in 3d data acquisition, mapping and localization in modern large-scale warehouses
Lim et al. Three-dimensional (3D) dynamic obstacle perception in a detect-and-avoid framework for unmanned aerial vehicles
CN114167866A (en) Intelligent logistics robot and control method
CN117021059A (en) Picking robot, fruit positioning method and device thereof, electronic equipment and medium
CN113341978A Intelligent trolley path planning method based on trapezoidal obstacles
Beinschob et al. Strategies for 3D data acquisition and mapping in large-scale modern warehouses
CN116477505A (en) Tower crane real-time path planning system and method based on deep learning
CN104143189A (en) Method for extracting spatial features of 3D point cloud data of power transmission equipment
CN116700228A (en) Robot path planning method, electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant