CN111776203A - Multi-wing unmanned aerial vehicle with grabbing function and working method

Multi-wing unmanned aerial vehicle with grabbing function and working method

Info

Publication number
CN111776203A
CN111776203A (application CN202010598464.2A)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
grabbing
movable support
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010598464.2A
Other languages
Chinese (zh)
Other versions
CN111776203B (en)
Inventor
曹贺
陈宣友
张倩
谷全祥
吴强
范雅婕
郑宇�
赵雪冬
Current Assignee
Aviation Industrial Information Center
Original Assignee
Aviation Industrial Information Center
Priority date
Filing date
Publication date
Application filed by Aviation Industrial Information Center
Priority to CN202010598464.2A
Publication of CN111776203A
Application granted
Publication of CN111776203B
Active legal status
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
      • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
        • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
          • B25J9/00: Programme-controlled manipulators
            • B25J9/003: Programme-controlled manipulators having parallel kinematics
            • B25J9/16: Programme controls
              • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                • B25J9/1697: Vision controlled systems
          • B25J11/00: Manipulators not otherwise provided for
      • B64: AIRCRAFT; AVIATION; COSMONAUTICS
        • B64C: AEROPLANES; HELICOPTERS
          • B64C27/00: Rotorcraft; Rotors peculiar thereto
            • B64C27/04: Helicopters
              • B64C27/08: Helicopters with two or more rotors
        • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
          • B64D1/00: Dropping, ejecting, releasing, or receiving articles, liquids, or the like, in flight
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00: Computing arrangements based on biological models
            • G06N3/02: Neural networks
              • G06N3/04: Architecture, e.g. interconnection topology
                • G06N3/045: Combinations of networks
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00: Arrangements for image or video recognition or understanding
            • G06V10/20: Image preprocessing
              • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
            • G06V10/40: Extraction of image or video features
              • G06V10/56: Extraction of image or video features relating to colour
          • G06V20/00: Scenes; Scene-specific elements
            • G06V20/10: Terrestrial scenes


Abstract

The invention discloses a multi-wing unmanned aerial vehicle with a grabbing function and a working method thereof. The multi-wing unmanned aerial vehicle with the grabbing function comprises: an unmanned aerial vehicle body, a grabbing mechanical arm arranged below the unmanned aerial vehicle, and an image recognition system fixedly connected to the underside of the unmanned aerial vehicle. The grabbing mechanical arm comprises a driving mechanism and a driven mechanism arranged at the bottom of the driving mechanism; a clamping jaw assembly is arranged at the other end of the driven mechanism. A controller, a remote-control communication module and a position sensing module are arranged in the unmanned aerial vehicle. Compared with the traditional single-layer parallel mechanical arm and the traditional serial mechanical arm, a multi-degree-of-freedom end effector is fixed in series on the basis of a parallel mechanical arm and cooperates with the image recognition system to recognize, classify and grab the articles in the flight area; the grabbing process can be completed dynamically in the air or statically on the ground.

Description

Multi-wing unmanned aerial vehicle with grabbing function and working method
Technical Field
The invention belongs to the field of unmanned aerial vehicle equipment, and particularly relates to a multi-wing unmanned aerial vehicle with a grabbing function and a working method.
Background
Most existing unmanned aerial vehicles with a grabbing function use serial mechanical arms, so that the unmanned aerial vehicle can grab a target object in the air. Because a serial arm offers a large working space, its use in actual work tasks relaxes the requirements of the task on the positioning accuracy and flight accuracy of the unmanned aerial vehicle. However, a serial mechanical arm inevitably accumulates error at each joint, and this error sums at the end effector; low working efficiency and poor bearing capacity also remain problems during grabbing.
Disclosure of Invention
The purpose of the invention is as follows: to provide a multi-wing unmanned aerial vehicle with a grabbing function and a working method, so as to solve the above problems of the prior art.
The technical scheme is as follows: a multi-wing unmanned aerial vehicle with a grabbing function, characterized by comprising:
an unmanned aerial vehicle body and a grabbing mechanical arm arranged below the unmanned aerial vehicle;
the grabbing mechanical arm comprises: a driving mechanism, a driven mechanism arranged at the bottom of the driving mechanism, and an image recognition system;
a clamping jaw assembly is arranged at the other end of the driven mechanism;
the driving mechanism is fixedly installed at the bottom of the unmanned aerial vehicle body; a controller, a remote-control communication module, a position sensing module and an airborne computer are arranged in the unmanned aerial vehicle.
The image recognition system comprises four cameras, fixedly installed at the corners of the fuselage and on the undercarriage, and the airborne computer, fixedly installed in the equipment cabin of the unmanned aerial vehicle.
In a further embodiment, the driving mechanism comprises: a base, a first platform plate fixedly connected to the base, three steering engine fixing frames fixedly arranged at the bottom of the first platform plate, a driver fixedly arranged in each steering engine fixing frame, and a driving arm group arranged at the power output end of each steering engine fixing frame;
The base is fixedly connected to the bottom of the unmanned aerial vehicle and moves along with the flight of the unmanned aerial vehicle;
the driver is electrically connected with the unmanned aerial vehicle controller.
In a further embodiment, the driven mechanism comprises: a first driven arm group connected to the other end of the driving arm group, a first movable support hinged to the other end of the first driven arm group, a second driven arm group hinged to the bottom of the first movable support, a second movable support hinged to the other end of the second driven arm group, a third driven arm group hinged to the bottom of the second movable support, and a third movable support hinged to the other end of the third driven arm group.
In a further embodiment, a mounting hole is formed in the center of the first platform plate, and three special-shaped support plates, arranged clockwise around it, extend outwards from the edge of the first platform plate; lugs extend from one side of the bottom of each of the three special-shaped support plates, and lug grooves are formed in the lugs;
arc-shaped lugs are arranged at the two end sides of the steering engine fixing frame, and a square through hole is formed in the inner side of the steering engine fixing frame; the driver shell is fixedly inserted into the square through hole;
the lug grooves are embedded with the top edge of the steering engine fixing frame, and the arc-shaped lug on one side of the bottom of the steering engine fixing frame is matched with a positioning groove formed in the bottom platform plate;
three bottom platform positioning grooves are provided, arranged corresponding to the lug grooves at the bottom of the special-shaped support plates;
the driving arm group comprises a connecting vertical frame in interference fit with the output shaft end of each of the three drivers, a triangular truss integrally connected to the other end of the connecting vertical frame, and three hinged ball grooves respectively arranged at the vertices of the triangular truss; ball head fasteners rotatably connected with the hinged ball grooves are arranged at the two ends of the driven arm group.
In a further embodiment, the centers of the first platform plate, the first movable support, the second movable support and the third movable support are located on the same axis, and the driving arm group and the driven arm groups connected to the first movable support, the second movable support and the third movable support are each located on the same axis;
the first movable support and the second movable support each comprise three connecting arms, extending outwards from the edge and vertically corresponding to the driving arm group; a hinged ball groove with an upward opening is fixedly connected to one side of the first movable support and of the second movable support; and a hinged ball groove with a downward opening is arranged at the bottom of each connecting arm.
In a further embodiment, the clamping jaw assembly is fixedly arranged on one side of the bottom of the third movable bracket.
In a further embodiment, the grabbing mechanical arm has 3 degrees of freedom, which can be calculated by the following formulas:

n = 3 + 6 + 2 × 6 = 21; (1)

g = 3 × 1 + 3 × 1 + 6 × 4 = 30; (2)

M = 6(n - g - 1) + Σ(i=1..g) f_i; (3)

M = 6(21 - 30 - 1) + 3 × 1 + 3 × 1 + 3 × 24 = 18; (4)

F = 18 - 12 - 3 = 3; (5)

wherein M represents the degree of freedom of the mechanism, n the number of all components in the mechanism (including the frame), g the total number of kinematic pairs in the mechanism, Σ(i=1..g) f_i the sum of the degrees of freedom of all kinematic pairs of the mechanism, and F the final result after elimination of the redundant degrees of freedom.
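Under the Kutzbach-Grübler convention used in formulas (1) to (5), the mobility count can be checked with a few lines of Python. This is a sketch for verification only; the link and joint counts are taken directly from the formulas above (6 one-degree-of-freedom pairs and 24 three-degree-of-freedom spherical pairs):

```python
# Kutzbach-Gruebler mobility count for the three-layer grabbing arm,
# using the link and joint counts from formulas (1)-(5) above.
def gruebler_mobility(n_links, joints):
    """joints: list of (count, dof) pairs, one entry per kinematic-pair type."""
    g = sum(count for count, _ in joints)               # total number of pairs
    f_sum = sum(count * dof for count, dof in joints)   # sum of pair DOFs
    return 6 * (n_links - g - 1) + f_sum

n = 3 + 6 + 2 * 6                    # n = 21 components, frame included
joints = [(3, 1), (3, 1), (24, 3)]   # 6 one-DOF pairs, 24 spherical pairs
M = gruebler_mobility(n, joints)     # raw mobility M = 18
F = M - 12 - 3                       # remove redundant DOFs as in formula (5)
print(M, F)                          # 18 3
```

The helper `gruebler_mobility` reproduces formula (3); the final subtraction of redundant degrees of freedom follows formula (5) as stated in the text.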
In a further embodiment, a plurality of laser sensing units are arranged on the clamping jaw assembly, and the laser sensing units are in communication with the unmanned aerial vehicle controller; each laser sensing unit comprises a transparent cover arranged on the clamping jaw, a laser sensor arranged in the cover, and a signal amplifier connected with the laser sensor.
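As an illustration only, the jaw-close decision driven by these laser sensing units might look like the following sketch; the function name and the 5 cm threshold are assumptions for the example, not values given in the patent:

```python
# Hypothetical sketch: deciding when the clamping jaws may close based on
# the jaw-mounted laser distance readings. The threshold is an assumption.
CLOSE_THRESHOLD_M = 0.05  # assumed: allow closing when target is within 5 cm

def jaws_may_close(laser_distances_m):
    """Require every laser unit on the jaw to see the target within threshold."""
    return all(d <= CLOSE_THRESHOLD_M for d in laser_distances_m)

print(jaws_may_close([0.03, 0.04, 0.02]))  # True
print(jaws_may_close([0.03, 0.40, 0.02]))  # False
```

Requiring agreement among all units is one simple way to reject spurious readings from a single sensor before committing to a grab.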
In a further embodiment, the following working steps are included:
S1, the N kinds of articles are classified through the ground control system (not every article needs to be grabbed; grabbing is selective, especially when many articles are present), and the task grabbing targets are set accordingly;
S2, the user sends an autonomous task execution instruction through the remote controller; the flight control instruction sent by the remote controller is transmitted to the controller through the remote-control communication module, and the unmanned aerial vehicle controller receives the task instruction and controls the unmanned aerial vehicle to fly to the area above the workpieces to be clamped;
S3, the image recognition system recognizes and classifies the workpieces in the area; the workpiece to be grabbed is determined by calculation on the airborne computer, and the relative distance between the grabbing mechanism below the parallel mechanical arm and the workpiece to be grabbed is measured and sent to the airborne computer, which calculates the required control quantity and sends it to the flight controller;
S4, the flight controller respectively controls the flight of the unmanned aerial vehicle and the rotating speed and angle of the drivers in the steering engine fixing frames, so as to make the driving arm groups drive the driven arm groups and adjust the position and angle, in three-dimensional space, of the first movable support, the second movable support and the third movable support connected between the driven arm groups;
S5, the relative distance between the grabbing mechanism and the workpiece to be grabbed is continuously calculated by the visual recognition system, so that the clamping jaw connected to the bottom of the third movable support is positioned above the target piece and adjusted to the optimal grabbing angle, and the object is grabbed;
S6, after the image recognition system confirms that the workpiece has been grabbed, the unmanned aerial vehicle flies to the next working area and puts the object down, whereupon the task is finished or another round of workpiece grabbing begins.
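The working steps S1 to S6 can be summarized as a control-loop sketch. Every class, method and threshold below is a hypothetical placeholder for the corresponding subsystem (flight controller, image recognition system, mechanical arm driver); none of these interfaces is defined by the patent:

```python
# Illustrative sketch of the S1-S6 autonomous grasping loop. All names here
# (drone, vision, arm and their methods) are hypothetical placeholders.
GRASP_TOLERANCE_M = 0.02  # assumed alignment tolerance before closing the jaws

def run_grasp_mission(targets, drone, vision, arm):
    drone.takeoff()
    for target_class in targets:                        # S1: targets set on ground
        drone.fly_to(vision.search_area(target_class))  # S2: fly above work area
        while True:
            detection = vision.detect(target_class)     # S3: recognize, classify
            if detection is None:
                break                                   # nothing left of this class
            # S4/S5: visual servoing - adjust drone and arm pose until the
            # clamping jaw sits above the workpiece within tolerance.
            while vision.offset_to(detection) > GRASP_TOLERANCE_M:
                drone.adjust(vision.offset_vector(detection))
                arm.adjust_pose(vision.grasp_angle(detection))
            arm.close_jaws()                            # grab the object
            drone.fly_to(drone.drop_zone)               # S6: carry to next area
            arm.open_jaws()                             # put the object down
    drone.land()
```

The nested loop mirrors S5's continuous distance recalculation: the pose is corrected until the measured offset falls below the tolerance, and only then do the jaws close.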
In a further embodiment, a computer algorithm built into the airborne computer identifies and classifies the workpiece and solves the relative pose of the clamping jaw and the workpiece, identifying and classifying with the YOLOv3 algorithm;
the YOLOv3 algorithm divides the input image into S × S grids, each of which is responsible for detecting the target object whose center point falls within it. Each grid holds B target frames, and each target frame consists of five prediction parameters: the coordinates of its center point (x, y), its width and height (w, h), and a confidence score S_i.
The confidence score is calculated by formula 3-1
S_i = Pr(O) × IoU (3-1)
In the formula, Pr(O) represents the probability that an object exists in the target frame of the current grid, O representing a target object. IoU (Intersection over Union) measures the accuracy of the target frame position predicted by the current model. Assume that the predicted target frame is p and the true target frame is t; box_t represents the bounding box of the real object in the image, and box_p represents the predicted target bounding box. IoU is calculated by formula 3-2:

IoU = area(box_p ∩ box_t) / area(box_p ∪ box_t) (3-2)
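For axis-aligned frames, formula 3-2 can be implemented directly. This is the standard IoU computation, shown here for illustration rather than taken from the patent:

```python
def iou(box_p, box_t):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_p[0], box_t[0]), max(box_p[1], box_t[1])
    ix2, iy2 = min(box_p[2], box_t[2]), min(box_p[3], box_t[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # zero if no overlap
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_t = (box_t[2] - box_t[0]) * (box_t[3] - box_t[1])
    union = area_p + area_t - inter
    return inter / union if union > 0 else 0.0

# A box shifted by half its width overlaps its original by one third:
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 0.3333...
```

A predicted frame that exactly matches the ground truth gives IoU = 1, and disjoint frames give IoU = 0, which is why IoU serves as the localization term in the confidence score of formula 3-1.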
Pr(C_i|O) represents the posterior probability that the target belongs to object class i given that a target exists in the frame. Assuming that the target detection task covers K kinds of objects in total, each grid predicts for the i-th object C_i the conditional probability Pr(C_i|O), i = 1, 2, ..., K;
after Pr(C_i|O) is obtained by calculation, the confidence that an object of a certain class exists in a given target frame can be calculated during testing, as shown in formula 3-3:

Pr(C_i|O) × Pr(O) × IoU = Pr(C_i) × IoU (3-3)
In the YOLOv3 algorithm as configured here, the input image is divided into 7 × 7 grids, each grid predicts 2 target frames, and there are 20 kinds of targets to be detected, i.e. S = 7, B = 2 and K = 20. The algorithm finally outputs a prediction vector of length S × S × (B × 5 + K) = 7 × 7 × 30 = 1470.
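The prediction-vector size and the test-time confidence of formula 3-3 can be checked numerically. This is a sketch; the sample probabilities at the end are arbitrary values chosen for illustration:

```python
# Size of the prediction vector for the grid layout described above,
# and the test-time class confidence of formula 3-3.
S, B, K = 7, 2, 20               # grid size, frames per grid, object classes

# Each frame carries 5 parameters (x, y, w, h, confidence); each grid adds
# K class probabilities, giving (B * 5 + K) = 30 values per grid cell.
pred_len = S * S * (B * 5 + K)
print(pred_len)                  # 1470

def class_confidence(p_class_given_obj, p_obj, iou):
    """Pr(C_i|O) * Pr(O) * IoU, which equals Pr(C_i) * IoU (formula 3-3)."""
    return p_class_given_obj * p_obj * iou

print(class_confidence(0.9, 0.8, 0.75))  # ~0.54
```

Multiplying the class posterior by the box confidence folds localization quality (IoU) and objectness into a single per-class score that can be thresholded at test time.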
Advantageous effects: compared with the traditional single-layer parallel mechanical arm and the traditional serial mechanical arm, the invention has the following advantages:
1) The invention adopts the technical scheme of fixing a multi-degree-of-freedom end effector in series on the basis of parallel mechanical arms, to match the dynamic and static clamping environments of the aerial clamping process. Compared with the traditional single-layer parallel mechanical arm and the traditional serial mechanical arm, the designed multi-layer mechanical arm has stronger flexibility, independent kinematic chains, smaller error and smaller occupied space, as well as stronger rigidity.
2) The image recognition system cooperates with the YOLOv3 algorithm to recognize and classify the articles in the flight area and to grab both static and dynamic articles, improving the grabbing precision of the grabbing mechanical arm.
3) The joint extensibility of the multi-layer mechanical arm is strong: the structures of its serial joint layers are completely similar, so several serial joint layers can be stacked interchangeably. Compared with the traditional parallel mechanical arm, the layers do not interfere with one another within the range of motion, the number of joint layers can easily be increased or decreased, and the configuration can be chosen according to the actual situation. Moreover, the serial joint layers amplify the motion stage by stage, which in practice reduces the requirements on the carrying platform and on operation, lowering the operating cost.
Drawings
Fig. 1 is a schematic structural diagram of a single-layer parallel mechanical arm in the prior art.
Fig. 2 is a schematic structural diagram of the multi-wing drone with grabbing function of the present invention.
Fig. 3 is a schematic structural view of the grasping robot arm of the present invention.
Figure 4 is a top view of the grasping robot arm of the present invention.
Fig. 5 is a schematic structural diagram of the active mechanism of the present invention.
Fig. 6 is a schematic diagram of the operation of the visual laser sensing unit of the present invention.
Fig. 7 is a schematic diagram of the operation of the multi-wing drone with grabbing function of the present invention.
FIG. 8 is a diagram of the YOLOv3 network model of the present invention.
FIG. 9 is a flow chart of the detection algorithm of the present invention.
Reference numerals: unmanned aerial vehicle body 1, grabbing mechanical arm 2, driving mechanism 20, base 200, first platform plate 201, special-shaped support plate 2010, lug groove 2011, steering engine fixing frame 202, arc-shaped lug 2020, square through hole 2021, positioning groove 2022, driver 203, driving arm group 204, connecting vertical frame 2040, triangular truss 2041, hinged ball groove 2042, driven mechanism 21, first driven arm group 210, first movable support 211, second driven arm group 212, second movable support 213, third driven arm group 214, third movable support 215, ball head fastener 216, connecting arm 217, bottom platform plate 218, clamping jaw assembly 23, laser sensing unit 230, camera 24, driving arm 1a, static platform 2a, driven arm 3a, movable platform 4a, driver 5a.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
The applicant has found that most existing unmanned aerial vehicles with a grabbing function use serial mechanical arms to grab a target object in the air. Because such an arm offers a large working space, its use in actual work tasks relaxes the requirements of the task on the positioning accuracy and flight accuracy of the unmanned aerial vehicle. However, a serial mechanical arm inevitably accumulates error at each joint, and this error sums at the end effector; low working efficiency and poor bearing capacity also remain problems during grabbing.
A multi-wing drone with a grabbing function, as shown in figures 2 to 5, comprising: the unmanned aerial vehicle comprises an unmanned aerial vehicle body 1, a grabbing mechanical arm 2, a driving mechanism 20, a base 200, a first platform plate 201, a special-shaped support plate 2010, a lug groove 2011, a steering engine fixing frame 202, an arc-shaped lug 2020, a square through hole 2021, a positioning groove 2022, a driver 203, a driving arm group 204, a connecting vertical frame 2040, a triangular truss 2041, a hinged ball groove 2042, a driven mechanism 21, a first driven arm group 210, a first movable support 211, a second driven arm group 212, a second movable support 213, a third driven arm group 214, a third movable support 215, a ball head fastener 216, a connecting arm 217, a bottom platform plate 218, a clamping jaw assembly 23, a laser sensing unit 230 and a camera 24.
A grabbing mechanical arm 2 and a clamping jaw assembly 23 are arranged below the unmanned aerial vehicle body 1; the grabbing mechanical arm 2 comprises: a driving mechanism 20, a driven mechanism 21 arranged at the bottom of the driving mechanism 20, and an image recognition system; the driving mechanism 20 is fixedly installed at the bottom of the unmanned aerial vehicle body 1; a controller, a remote-control communication module, a position sensing module and an airborne computer are arranged in the unmanned aerial vehicle.
The image recognition system includes four cameras 24 and the airborne computer. The controller is electrically connected with the driving devices in the mechanical arm 2 and in the clamping jaw assembly 23, and thereby executes the aerial clamping work.
In the prior art, a parallel mechanical arm has been proposed to address the disadvantages of serial mechanisms; referring to fig. 1, a conventional single-layer parallel mechanical arm comprises: a driving arm 1a, a static platform 2a, a driven arm 3a, a movable platform 4a and a driver 5a. During assembly, the transmission chain of the mechanical arm is formed by connecting the static platform 2a, on which the drivers 5a are mounted and to which one end of each of the three driving arms 1a is connected, through the three driven arms 3a to the movable platform 4a carrying the end effector; the three driving arms are identical to one another, as are the three driven arms. The driving arm 1a is located at the upper part of the transmission chain; a driver 5a arranged at one end of the static platform 2a drives the driving arm 1a, and the other end of the driving arm 1a is connected through a spherical hinge to one end of a driven arm 3a of parallelogram structure. The other end of the driven arm 3a is connected with the movable platform 4a through a spherical hinge, which ensures that the movable platform 4a and the static platform 2a do not rotate relative to each other and can only translate, realizing a closed kinematic loop. Each driving arm 1a drives its driven arm 3a to transmit 3 degrees of freedom in three-dimensional space, and the three driven arms 3a together drive the movable platform 4a connected to them to realize 6-degree-of-freedom movement, enlarging the working space. However, because of the coupling characteristic of the parallel mechanism, the 6-degree-of-freedom movable platform 4a inevitably affects the clamping accuracy.
The grabbing mechanical arm 2 in the invention is composed of a driving mechanism 20 and a driven mechanism 21 arranged at the bottom of the driving mechanism 20; different from the traditional parallel mechanical arm, the multi-freedom-degree end effector is connected in series and fixed on the basis of the parallel mechanical arm, and the clamping jaw assembly 23 is arranged at the bottom of the driven mechanism 21 and is used for matching with the dynamic and static clamping environment in the air clamping process.
The driving mechanism 20 is fixedly installed at the bottom of the unmanned aerial vehicle body; the active mechanism 20 includes: the device comprises a base 200, a first platform plate 201 fixedly connected to the base 200, three steering engine fixing frames 202 fixedly arranged at the bottom of the first platform plate 201, a driver 203 fixedly arranged in the steering engine fixing frames 202, and a driving arm group 204 arranged at the power output end of the steering engine fixing frames 202; the base 200 is fixedly connected to the bottom of the unmanned aerial vehicle and moves along with the flight of the unmanned aerial vehicle;
the traditional static platform adopts a flat plate type member similar to a circle or a hexagon, but the stroke of the driving arm is limited in the actual working process; three clockwise-surrounding special-shaped support plates 2010 extend outwards from the edge of the first platform plate 201, and the first platform plate 201 is similar to the top plane of a fan structure when seen in a top view; a mounting hole is formed in the center of the first platform plate 201, and the first platform plate 201 is connected with the base 200 through a bolt; a bottom flat plate is vertically and oppositely arranged below the other side of the first flat plate 201; the bottom flat plate and the first flat plate 201 are in interference connection through a steering engine fixing frame 202; three bumps extending from one side of the bottom of the special-shaped support plate 2010, wherein grooves are formed in the bumps;
arc-shaped lugs 2020 are arranged at the two end sides of the steering engine fixing frame 202, and a square through hole 2021 is formed in the inner side of the steering engine fixing frame; the shell of the driver 203 is fixedly inserted into the square through hole 2021. The lug grooves are embedded with the top edge of the steering engine fixing frame 202, the arc-shaped lug 2020 on one side of the bottom of the steering engine fixing frame 202 is matched with the positioning groove 2022 formed in the bottom platform plate 218, and the three bottom platform positioning grooves 2022 are arranged corresponding to the lug grooves 2011 at the bottom of the special-shaped support plates 2010, thereby realizing the parallel installation of the bottom platform plate and the first platform plate 201 with the power output shafts of the drivers 203;
the active arm group 204 comprises a connecting vertical frame 2040 in interference fit with the output shaft ends of the three drivers 203, a triangular truss 2041 integrally connected with the other end of the connecting vertical frame 2040, and three hinged ball grooves 2042 respectively arranged at the end points of the triangular truss 2041; ball fasteners 216 rotatably connected with the hinged ball grooves 2042 are arranged at the two ends of the driven arm group.
The driver 203 is electrically connected with the unmanned aerial vehicle controller and drives the connecting vertical frames 2040 in the active arm group 204 to rotate by an angle according to the instruction of the controller.
Furthermore, the key to realizing the multi-layer mechanical arm provided by the invention is the interlayer movable platform, which mainly undertakes the transmission between the first movable support 211 and the second movable support 213, and between the second movable support 213 and the third movable support 215. In each of these transmissions, three identical transmission chains are connected synchronously, in series and in parallel; the transmission of the serial-layer arm still follows parallel logic, and the motion of each driving arm is independent, forming an independent closed kinematic chain. The driven mechanism 21 comprises: a first driven arm group 210 connected to the other end of the driving arm group 204, a first movable support 211 hinged to the other end of the first driven arm group 210, a second driven arm group 212 hinged to the bottom of the first movable support 211, a second movable support 213 hinged to the other end of the second driven arm group 212, a third driven arm group 214 hinged to the bottom of the second movable support 213, and a third movable support 215 hinged to the other end of the third driven arm group 214. The third movable support 215 serves as the endmost movable platform, and the clamping jaw assembly 23 is fixedly arranged on one side of its bottom. Similar to the motion of a traditional single-layer parallel mechanical arm, it can translate in three-dimensional Euclidean space and can carry end effectors of different structures and functions, of which the clamping jaw assembly 23 is the preferred form.
Meanwhile, the device can be fitted with a claw, a sucker, and the like, enabling operation in different working environments. Compared with a traditional mechanical arm design, it is free of the coupling characteristic of a purely parallel mechanism while gaining the independent-transmission advantage of a serial structure, so the translational precision of the carried end effector in three-dimensional Euclidean space is higher than that of the moving platform of a traditional mechanical arm.
In a further embodiment, the centers of the first platform plate 201, the first movable support 211, the second movable support 213, and the third movable support 215 lie on the same axis, and the driving arm group 204 and the driven arm groups connected to the first movable support 211, the second movable support 213, and the third movable support 215 are each aligned on that axis. If the three independently connected driving mechanisms 20 and driven mechanisms 21 in the serial structure did not share the same axis, the desired motion could not be achieved: when the connecting vertical frame 2040 is driven to rotate, the driving arm connected to it rotates and in turn pushes the first driven arm group 210 to move the first movable support 211, but the three motion triangles formed by the driving arm group 204 and the three formed by the first driven arm group 210 would place the first movable support 211 in an over-constrained state, so the first driven arm group 210 could not move the first movable support 211, and the second driven arm group hinged to it and the end effector arranged below could not be precisely driven.
The first movable support 211 and the second movable support 213 each include: three connecting arms 217 extending outward from the edge and corresponding vertically to the driving arm group 204; a hinge ball groove 2042 with an upward opening fixedly connected to one side of the first movable support 211 and the second movable support 213; and a hinge ball groove 2042 with a downward opening at the bottom of each connecting arm 217. Ball fasteners 216 at both ends of the driving arm and the driven arm groups rotatably engage these hinge ball grooves 2042, so that the driving arm and the driven arm groups connect to the three movable supports through revolute pairs. Therefore, during movement, the three-layer movable support bodies stay horizontal, and the motion triangles among the three driven arm groups always remain synchronized.
Further, the degree of freedom of the grabbing mechanical arm is 3, and it can be calculated by the following formulas:
n = 3 + 6 + 2 × 6 = 21 (1)
g = 3 × 1 + 3 × 1 + 6 × 4 = 30 (2)
M = 6(n − g − 1) + Σ(i=1..g) f_i (3)
M = 6 × (21 − 30 − 1) + 3 × 1 + 3 × 1 + 3 × 24 = 18 (4)
F = 18 − 12 − 3 = 3 (5)
wherein M represents the degree of freedom of the mechanism, n represents the number of all components (including the frame) in the mechanism, g represents the total number of kinematic pairs in the mechanism, Σ(i=1..g) f_i represents the sum of the degrees of freedom contained in all kinematic pairs of the mechanism, and F is the final result after the redundant degrees of freedom are eliminated.
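The count above can be checked numerically. A minimal sketch of the Kutzbach–Grübler calculation, using the component and pair tallies from equations (1)–(5) (the function name `mobility` is illustrative):

```python
# Spatial Kutzbach-Gruebler mobility count for the three-layer grabbing arm,
# using the link/pair tallies from equations (1)-(5) of the text.

def mobility(n, g, f_sum):
    """M = 6(n - g - 1) + sum of the degrees of freedom of all pairs."""
    return 6 * (n - g - 1) + f_sum

n = 3 + 6 + 2 * 6               # 21 components, frame included   (1)
g = 3 * 1 + 3 * 1 + 6 * 4       # 30 kinematic pairs              (2)
f_sum = 3 * 1 + 3 * 1 + 3 * 24  # 78 pair freedoms in total

M = mobility(n, g, f_sum)       # raw mobility, equation (4): 18
F = M - 12 - 3                  # remove redundant/local freedoms, equation (5): 3
```
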
Further, referring to fig. 6, a plurality of laser sensing units 230 are arranged on the clamping jaw assembly 23 and communicate with the unmanned aerial vehicle controller; each laser sensing unit 230 comprises a transparent cover on the clamping jaw, a laser sensor arranged in the cover, and a signal amplifier connected with the laser sensor.
The working principle is as follows: the N kinds of articles are classified through a ground control system (not every article needs to be grabbed; grabbing is selective, especially when multiple article types are present), and the grabbing targets of the task are set accordingly. A user sends an autonomous task-execution instruction through a remote controller; the flight-control instruction is transmitted to the controller through the remote-control communication module, and the unmanned aerial vehicle controller receives the task instruction and controls the unmanned aerial vehicle to fly to the area above the workpiece to be clamped. The image recognition system recognizes and classifies the workpieces in the area; the onboard computer determines the workpiece to be grabbed, measures the relative distance between the grabbing mechanism below the parallel mechanical arm and that workpiece, calculates the required control quantity, and sends it to the flight controller. The controller then controls the rotating speed and rotating angle of each driver 203 in the steering engine fixing frames 202, driving the driving arm group 204, the driven arm groups, and the position and angle adjustment in three-dimensional space of the first movable support 211, the second movable support 213, and the third movable support 215 connected between the driven arm groups, so that the clamping jaw connected to the bottom of the third movable support 215 is positioned above the target piece and can be adjusted to the optimal grabbing angle.
The visual recognition system continuously calculates the relative distance between the grabbing mechanism and the workpiece to be grabbed, so that the clamping jaw connected to the bottom of the third movable support 215 stays above the target piece and can be adjusted to the optimal grabbing angle before grabbing the object. After the recognition system confirms that the workpiece has been grabbed, the unmanned aerial vehicle flies to the next working area and puts the object down; the task can then end or another round of workpiece grabbing can begin.
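The measure-adjust-re-measure loop in these paragraphs can be sketched as a simple convergence iteration; the function name, gain, and tolerance below are illustrative assumptions, not values from the patent:

```python
# Closed-loop positioning sketch: the jaw-to-workpiece offset is re-measured
# each cycle and the movable supports are driven by a fraction of the error
# until the jaw sits within tolerance above the target.

def servo_to_target(offset, gain=0.5, tol=1e-3, max_iter=100):
    """Returns the number of correction cycles needed to converge."""
    x, y, z = offset
    for i in range(max_iter):
        if (x * x + y * y + z * z) ** 0.5 < tol:
            return i
        # each cycle removes a fraction `gain` of the measured error
        x, y, z = x * (1 - gain), y * (1 - gain), z * (1 - gain)
    return max_iter

cycles = servo_to_target((0.2, -0.1, 0.05))  # hypothetical initial offsets, metres
```
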
The precision of the grabbing task is improved by adding a visual recognition system. Images are acquired by cameras 24 fixedly attached at the corners; the relative pose of the claw and the workpiece is identified, classified, and solved by the onboard computer. The identification and classification algorithm is yolo-v3, and the relative pose is solved with a pose-solution method based on the monocular camera 24. The calculated information is sent to the flight controller to control the flight position and attitude of the unmanned aerial vehicle and the rotating speed and angle of the output shaft of each driver 203; the gripper position is fed back to the onboard computer through visual feedback, and the solve-and-execute cycle repeats until the task is completed. The vision system identifies and classifies workpieces with the yolo-v3 algorithm, whose process is as follows:
the input layer of the YOLO algorithm is data obtained by processing an input image in modes of clipping, normalization, data enhancement and the like. In CNN, a geometric feature obtained by subjecting data of a sample image to some processing is referred to as a feature map. The input layer may be considered as the initial feature map, and since finer information is required for target detection, YOLO uniformly fixes the size of the processed feature map to 448 x 3. 448 x 448 are picture pixel values of a single dimension, which are obtained by superimposing three color channels of red, green and blue on each other because the picture is colored.
The input layer is followed by 24 convolutional layers, whose main operation is to convolve the feature map produced by the input layer, essentially extracting the feature information of the input for subsequent classification and localization. As shown in fig. 8, YOLO uses two convolution kernel sizes, 3 × 3 and 1 × 1. The 1 × 1 kernels are used mainly to reduce the number of channels and thereby the parameters generated by the network.
Between the convolutional layers are pooling layers, whose main operation is to down-sample the input feature map in the spatial dimensions. According to the spatial position of the feature matrix, the features are partitioned into blocks of a set granularity, a new feature value is computed within each small block, and it replaces the information of the original block. Depending on the replacement rule, the common down-sampling operations are mean pooling and max pooling; YOLO adopts max pooling, i.e., the maximum value in a block replaces the original feature block.
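The max-pooling rule described above (replace each block with its maximum) can be shown in a few lines:

```python
import numpy as np

# 2x2 max pooling: partition the feature map into 2x2 blocks and keep the
# maximum of each block, halving the spatial resolution.

def max_pool_2x2(fmap):
    h, w = fmap.shape               # h and w assumed even
    blocks = fmap.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))  # maximum over each 2x2 block

fm = np.array([[1, 2, 5, 0],
               [3, 4, 1, 1],
               [0, 0, 9, 2],
               [7, 6, 3, 8]])
pooled = max_pool_2x2(fm)  # [[4, 5], [7, 9]]
```
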
The YOLO algorithm has two fully connected layers between the last pooling layer and the output layer, used mainly to convert the two-dimensional feature matrix into a one-dimensional vector. Because every input is connected to every network parameter, these are the layers with the most parameters and the largest computation in the network.
The last layer of the network is the output layer, which acts as a classifier. It classifies the one-dimensional vector output by the fully connected layers, and the number of output feature maps equals the number of target classes. The final output of the network is a 7 × 7 × 30 tensor flattened into a one-dimensional vector, which encodes the classification results of the objects in the picture and their position information; decoding this vector according to the agreed convention finally draws the detection results in the original picture.
The algorithm detection flow is as follows:
YOLO divides the input image into S × S grids, and each grid is responsible for detecting the target objects whose center points fall within it. A single grid holds B target frames, and each target frame is described by five prediction parameters: the coordinates (x, y) of its center point, its width and height (w, h), and a confidence score S_i.
The confidence score is calculated by formula (3-1):
S_i = Pr(O) × IoU (3-1)
In the formula, Pr(O) represents the probability that an object exists in the target frame of the current grid, and O denotes a target object. IoU (Intersection over Union) measures the accuracy of the target-frame position predicted by the current model. Let the predicted target be p and the real target be t, with box_t denoting the bounding box of the real target in the image and box_p the predicted target bounding box; IoU is calculated by equation (3-2):
IoU = area(box_p ∩ box_t) / area(box_p ∪ box_t) (3-2)
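Equation (3-2) amounts to the usual intersection-over-union of two axis-aligned boxes; a minimal sketch with boxes given as (x_min, y_min, x_max, y_max):

```python
# IoU of a predicted box and a ground-truth box, equation (3-2).

def iou(box_p, box_t):
    ix1 = max(box_p[0], box_t[0]); iy1 = max(box_p[1], box_t[1])
    ix2 = min(box_p[2], box_t[2]); iy2 = min(box_p[3], box_t[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_p) + area(box_t) - inter
    return inter / union if union else 0.0

v = iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap area 1, union 7 -> 1/7
```
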
Pr(C_i | O) represents the posterior probability that the target belongs to object class i, given that a target exists in the frame. Assuming the target detection task covers K object classes in total, each grid predicts the conditional probability Pr(C_i | O) for the i-th object class C_i, i = 1, 2, ..., K;
Having calculated Pr(C_i | O), the confidence that an object of class i exists in a given target frame can be computed at test time, as shown in formula (3-3):
Pr(C_i | O) × Pr(O) × IoU = Pr(C_i) × IoU (3-3)
In the YOLO algorithm, the input image is divided into 7 × 7 grids, each grid predicts 2 target frames, and there are 20 target classes to be detected in total, i.e., S = 7, B = 2, and K = 20. The algorithm finally outputs a prediction vector of length S × S × (B × 5 + K) = 7 × 7 × 30. The general flow of the detection model is shown in fig. 9.
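The bookkeeping for the S = 7, B = 2, K = 20 configuration, together with the class-specific confidence of formula (3-3), can be written out directly:

```python
# Output-tensor size for YOLO's 7x7 grid: each cell carries B boxes of
# 5 parameters (x, y, w, h, confidence) plus K class probabilities.

S, B, K = 7, 2, 20
cell_len = B * 5 + K           # 30 values per grid cell
total_len = S * S * cell_len   # length of the flattened prediction vector

def class_confidence(pr_class_given_obj, pr_obj, iou):
    """Class-specific confidence, formula (3-3): Pr(C_i|O) * Pr(O) * IoU."""
    return pr_class_given_obj * pr_obj * iou
```
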
Compared with a traditional single-layer parallel mechanical arm and a traditional serial mechanical arm, the serial mechanical arm designed by the invention offers stronger flexibility, independent kinematic chains, smaller error, a smaller footprint, and greater rigidity. The image recognition system and the yolo-v3 algorithm together recognize and classify articles in the flight area and grab both static and dynamic articles, improving the grabbing precision of the mechanical arm. In addition, the joints of the serial mechanical arm are highly expandable: the serial joint layers are structurally identical, so multiple layers can be stacked interchangeably. Compared with a traditional mechanical arm, the serial mechanical arm does not interfere within its motion range, and joint layers can be added or removed easily, their number chosen according to the actual situation. Moreover, the serial joint layers amplify motion stage by stage, which in practice reduces the demands on the carrier platform and on operation, lowering operating costs.
It should be noted that in the above embodiments the parameters can be freely chosen according to different situations as long as no contradiction arises. To avoid unnecessary repetition, the various possible parameter schemes are not further described.

Claims (10)

1. A multi-wing drone with a grabbing function, characterized in that it comprises:
the unmanned aerial vehicle comprises an unmanned aerial vehicle body and a grabbing mechanical arm arranged below the unmanned aerial vehicle;
the grabbing mechanical arm comprises: a driving mechanism, a driven mechanism arranged at the bottom of the driving mechanism, and an image recognition system;
the other end of the driven mechanism is provided with a clamping jaw assembly;
the driving mechanism is fixedly arranged at the bottom of the unmanned aerial vehicle body; the unmanned aerial vehicle is provided with a controller, a remote control communication module, a position sensing module and an airborne computer;
the image recognition system comprises four cameras and an on-board computer.
2. The multi-wing drone with grabbing function according to claim 1, characterized in that the driving mechanism comprises: a base, a first platform plate fixedly connected to the base, three steering engine fixing frames fixedly arranged at the bottom of the first platform plate, drivers fixedly arranged in the steering engine fixing frames, and a driving arm group arranged at the power output ends of the drivers; and that the image recognition system comprises four cameras fixedly installed at the corners of the fuselage and undercarriage and an onboard computer fixedly installed in the unmanned aerial vehicle equipment cabin;
The base is fixedly connected to the bottom of the unmanned aerial vehicle and moves along with the flight of the unmanned aerial vehicle;
the driver is electrically connected with the unmanned aerial vehicle controller.
3. The multi-wing drone with a grabbing function according to claim 1, characterized in that the driven mechanism comprises: a first driven arm group connected to the other end of the driving arm group, a first movable support hinged to the other end of the first driven arm group, a second driven arm group hinged to the bottom of the first movable support, and a second movable support hinged to the other end of the second driven arm group; a third driven arm group hinged to the bottom of the second movable support, and a third movable support hinged to the other end of the third driven arm group.
4. The multi-wing unmanned aerial vehicle with the grabbing function according to claim 2, wherein a mounting hole is formed in the center of the first platform plate; three special-shaped support plates, arranged clockwise, extend outward from the edge of the first platform plate; three protrusions extend from one side of the bottom of each special-shaped support plate, and lug grooves are formed in the protrusions;
arc-shaped convex blocks are arranged at the two end sides of each steering engine fixing frame, and a square through hole is formed in the inner side of the steering engine fixing frame; the driver housing is fixedly inserted into the square through hole;
the lug grooves are embedded with the top edges of the steering engine fixing frames, and the arc-shaped convex block on one side of the bottom of each steering engine fixing frame fits into a positioning groove formed in the bottom platform plate;
there are three bottom platform positioning grooves, arranged to correspond to the lug grooves at the bottoms of the special-shaped support plates;
the driving arm group comprises connecting vertical frames in interference fit with the output shaft ends of the three drivers, a triangular truss integrally connected to the other end of each connecting vertical frame, and three hinge ball grooves respectively arranged at the end points of the triangular truss; ball head fasteners at the two ends of the driven arm groups are rotatably connected with the hinge ball grooves.
5. The multi-wing drone with grabbing function of claim 3 or 4, wherein the first platform board and the centers of the first movable support, the second movable support and the third movable support are located on the same axis, and the driving arm set and the driven arm set connected to the first movable support, the second movable support and the third movable support are located on the same axis respectively;
the first movable support and the second movable support both comprise: three connecting arms vertically corresponding to the active arm group extend outwards from the edge; the hinge ball groove is fixedly connected with one side of the first movable support and one side of the second movable support, and the opening of the hinge ball groove is upward; and the bottom of the connecting arm is provided with a hinge ball groove with a downward opening.
6. The multi-wing drone with grabbing function of claim 3, wherein the clamping jaw assembly is fixedly mounted on one side of the bottom of the third movable support.
7. The multi-wing drone with grabbing function of claim 1, wherein the degree of freedom of the grabbing mechanical arm is 3 and can be calculated by the following formulas:
n = 3 + 6 + 2 × 6 = 21 (1)
g = 3 × 1 + 3 × 1 + 6 × 4 = 30 (2)
M = 6(n − g − 1) + Σ(i=1..g) f_i (3)
M = 6 × (21 − 30 − 1) + 3 × 1 + 3 × 1 + 3 × 24 = 18 (4)
F = 18 − 12 − 3 = 3 (5)
wherein M represents the degree of freedom of the mechanism, n represents the number of all components (including the frame) in the mechanism, g represents the total number of kinematic pairs in the mechanism, Σ(i=1..g) f_i represents the sum of the degrees of freedom contained in all kinematic pairs of the mechanism, and F is the final result after the redundant degrees of freedom are eliminated.
8. The multi-wing unmanned aerial vehicle with the grabbing function according to claim 1, wherein a plurality of laser sensing units are arranged on the clamping jaw assembly, and the laser sensing units communicate with the unmanned aerial vehicle controller; each laser sensing unit comprises a transparent cover arranged on the clamping jaw, a laser sensor arranged in the cover, and a signal amplifier connected with the laser sensor.
9. The method for operating a multi-wing drone with a capture function according to claim 1, characterized in that it comprises the following operating steps:
S1, classifying the N kinds of articles through a ground control system (not every article needs to be grabbed; grabbing is selective, especially when multiple article types are present), and setting the grabbing targets of the task accordingly;
S2, a user sending an autonomous task-execution instruction through a remote controller, the flight-control instruction being transmitted to the controller through the remote-control communication module, and the unmanned aerial vehicle controller receiving the task instruction and controlling the unmanned aerial vehicle to fly to the area above the workpiece to be clamped;
S3, the image recognition system recognizing and classifying the workpieces in the area, the onboard computer determining the workpiece to be grabbed, measuring the relative distance between the grabbing mechanism below the parallel mechanical arm and that workpiece, calculating the required control quantity, and sending it to the flight controller;
S4, the flight controller respectively controlling the flight of the unmanned aerial vehicle and the rotating speed and rotating angle of each driver in the steering engine fixing frames, thereby driving the driving arm group, the driven arm groups, and the position and angle adjustment in three-dimensional space of the first movable support, the second movable support, and the third movable support connected between the driven arm groups;
S5, continuously calculating, through the visual recognition system, the relative distance between the grabbing mechanism and the workpiece to be grabbed, so that the clamping jaw connected to the bottom of the third movable support is positioned above the target piece and can be adjusted to the optimal grabbing angle to grab the object;
and S6, after the recognition system confirms that the workpiece has been grabbed, flying to the next working area and putting the object down, whereupon the task can end or another round of workpiece grabbing can begin.
10. The multi-wing drone with grabbing function of claim 1, wherein the algorithm built into the onboard computer uses the yolo-v3 algorithm to recognize and classify the workpieces and to solve the relative pose of the paw and the workpieces;
the yolo-v3 algorithm divides the input image into S × S meshes, each mesh being responsible for detecting the target object with the center point falling therein; b target frames exist in a single grid, and each target frame is composed of five-dimensional prediction parameters including the coordinates (x, y) of the center point of the target frame, the width and the height (w, h) and the confidence score Si
the confidence score is calculated by formula (3-1):
S_i = Pr(O) × IoU (3-1)
In the formula, Pr (O) represents the current netThe probability of an object existing in the lattice target frame, and O represents a target object; IoU (Intersection over Union) shows the accuracy of the target border position predicted by the current model; assume that the predicted target bounding box is p and the true target bounding box is t, boxtRepresenting the bounding box of the real object in the image, boxpRepresenting a predicted target bounding box; IoU is calculated by equation 3-2:
IoU = area(box_p ∩ box_t) / area(box_p ∪ box_t) (3-2)
Pr(C_i | O) represents the posterior probability that the target belongs to object class i, given that a target exists in the frame; assuming the target detection task covers K object classes in total, each grid predicts the conditional probability Pr(C_i | O) for the i-th object class C_i, i = 1, 2, ..., K;
having calculated Pr(C_i | O), the confidence that an object of class i exists in a given target frame can be computed at test time, as shown in formula (3-3):
Pr(C_i | O) × Pr(O) × IoU = Pr(C_i) × IoU (3-3)
in the yolo-v3 algorithm, the input image is divided into 7 × 7 grids, each grid predicts 2 target frames, and there are 20 target classes to be detected in total, i.e., S = 7, B = 2, and K = 20; the algorithm finally outputs a prediction vector of length S × S × (B × 5 + K) = 7 × 7 × 30.
CN202010598464.2A 2020-06-28 2020-06-28 Multi-wing unmanned aerial vehicle with grabbing function and working method Active CN111776203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010598464.2A CN111776203B (en) 2020-06-28 2020-06-28 Multi-wing unmanned aerial vehicle with grabbing function and working method

Publications (2)

Publication Number Publication Date
CN111776203A true CN111776203A (en) 2020-10-16
CN111776203B CN111776203B (en) 2022-06-14

Family

ID=72761518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010598464.2A Active CN111776203B (en) 2020-06-28 2020-06-28 Multi-wing unmanned aerial vehicle with grabbing function and working method

Country Status (1)

Country Link
CN (1) CN111776203B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120065779A1 (en) * 2010-09-15 2012-03-15 Seiko Epson Corporation Robot
CN105945910A (en) * 2016-05-16 2016-09-21 安庆米锐智能科技有限公司 Clamping mechanical arm of unmanned aerial vehicle
CN106725855A (en) * 2016-06-08 2017-05-31 中国矿业大学 A kind of series-parallel connection six degree of freedom minimally invasive surgical operation robot
CN107380420A (en) * 2017-08-23 2017-11-24 南京市特种设备安全监督检验研究院 A kind of vibrative mechanism detection means and method based on unmanned plane mechanical arm
CN108170160A (en) * 2017-12-21 2018-06-15 中山大学 It is a kind of to utilize monocular vision and the autonomous grasping means of airborne sensor rotor wing unmanned aerial vehicle
CN110321775A (en) * 2019-04-08 2019-10-11 武汉理工大学 A kind of drowning man's autonomous classification method waterborne based on multi-rotor unmanned aerial vehicle
CN111137464A (en) * 2019-12-16 2020-05-12 北京大学 Environment-friendly robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, Qing et al.: "Configuration Design and Kinematic Analysis of a Novel Parallel Robot", Packaging Engineering *

Also Published As

Publication number Publication date
CN111776203B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US11691277B2 (en) Grasping of an object by a robot based on grasp strategy determined using machine learning model(s)
Paul et al. A multirotor platform employing a three-axis vertical articulated robotic arm for aerial manipulation tasks
CN109397249A (en) The two dimensional code positioning crawl robot system algorithm of view-based access control model identification
CN108415460B (en) Combined and separated rotor wing and foot type mobile operation robot centralized-distributed control method
CN111462154A (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
US11945106B2 (en) Shared dense network with robot task-specific heads
CN109623815A (en) A kind of compensation of undulation double SCM and method for unmanned pick-up boat
Cong et al. Design and development of robot arm system for classification and sorting using machine vision
CN2645862Y (en) Mobile mechanical arm system
CN111776203B (en) Multi-wing unmanned aerial vehicle with grabbing function and working method
Cong Visual servoing control of 4-DOF palletizing robotic arm for vision based sorting robot system
CN112207839A (en) Mobile household service robot and method
Shaikat et al. Computer vision based industrial robotic arm for sorting objects by color and height
CN117340895A (en) Mechanical arm 6-DOF autonomous grabbing method based on target detection
Hossain et al. Object recognition and robot grasping: A deep learning based approach
CN113878578B (en) Dynamic self-adaptive positioning method and system suitable for composite robot
CN215149139U (en) Logistics carrier based on visual identification
CN114998573A (en) Grabbing pose detection method based on RGB-D feature depth fusion
CN115870973A (en) Vision-based aircraft mechanical arm maneuvering grabbing system
Lee et al. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios
Wu et al. Intelligent explosive ordnance disposal UAV system based on manipulator and real-time object detection
Yu et al. Leader-follower formation for UAVS with fovs constraint
Wang et al. Highly Maneuverable Ground Reconnaissance Robot Based on Machine Learning
CN217039972U (en) Outdoor independent work's rubbish cleans machine people
US20230150151A1 (en) End of Arm Sensing Device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant