CN117530053A - Picking robot and tail end grabbing device and method thereof - Google Patents

Picking robot and tail end grabbing device and method thereof

Info

Publication number
CN117530053A
Authority
CN
China
Prior art keywords
fruit
picking
finger
picking robot
camera
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202311483492.XA
Other languages
Chinese (zh)
Inventor
柴秀娟
樊湘鹏
张凝
张文蓉
孙坦
萨茹拉
王凯
Current Assignee (listed assignees may be inaccurate)
Inner Mongolia Zhongnong North Agriculture And Animal Husbandry Technology Co ltd
Agricultural Information Institute of CAAS
Original Assignee
Inner Mongolia Zhongnong North Agriculture And Animal Husbandry Technology Co ltd
Agricultural Information Institute of CAAS
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Inner Mongolia Zhongnong North Agriculture And Animal Husbandry Technology Co ltd, Agricultural Information Institute of CAAS filed Critical Inner Mongolia Zhongnong North Agriculture And Animal Husbandry Technology Co ltd
Priority to CN202311483492.XA priority Critical patent/CN117530053A/en
Publication of CN117530053A publication Critical patent/CN117530053A/en
Pending legal-status Critical Current


Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D46/00 Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/30 Robotic devices for individually picking crops

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Manipulator (AREA)

Abstract

A picking robot, and an end gripping device and method thereof. The picking robot comprises an end gripping device comprising: an end effector, which comprises a housing, a fixed block, a driving mechanism and a clamping mechanism, the driving mechanism being arranged in the housing and connected to the fixed block and the clamping mechanism respectively; the clamping mechanism is arranged at one end of the housing, and the other end of the housing is mounted on the fixed block; a coupler, mounted below the fixed block and connected with the driving mechanism, which drives the coupler to rotate the fixed block within ±360°; a binocular depth camera, mounted on a camera support frame that is mounted on the coupler through a camera flange; and an end cover, connected to the coupler and to the mechanical arm of the picking robot, the end cover carrying a communication interface through which the driving mechanism and the binocular depth camera are connected to the controller of the picking robot. The invention also provides an end grabbing method of the picking robot. The invention has a simple structure, is convenient to operate and is efficient.

Description

Picking robot and tail end grabbing device and method thereof
Technical Field
The invention relates to agricultural intelligent picking robot technology, in particular to a fruit and vegetable picking robot and an end grabbing device and method thereof.
Background
The quality of fruit and vegetable picking directly affects storage, processing and sales, and ultimately market price and economic benefit. Owing to the unstructured nature of fruit and vegetable planting environments and the complexity of growth, picking today still relies mainly on manual labor; the degree of automation remains very low, labor intensity is high, and high cost has become a major obstacle to the mechanization and modernization of the fruit and vegetable industry. Branches and leaves in the growing environment are typically interlaced, fruit targets are easily occluded by foliage, fruit growth postures vary widely, and overlap and occlusion among fruits are severe, making picking operations extremely challenging for an agricultural robot. How to accurately acquire fruit and vegetable pose and pick without damage has therefore long been the core problem and bottleneck in the field of picking automation.
The end effector is a key part of a fruit and vegetable picking robot: after the produce is identified and positioned and a picking route is determined, the end effector separates the fruit from its stalk or branch, an essential link in picking. As the core executing part of the whole picking action, the end grabbing device of the picking robot is particularly important. Visual perception and attitude estimation are the precondition and basis for accurate grabbing by the end effector. The depth image mimics human stereoscopic perception in its visual representation of depth information and carries internal consistency and target shape priors. The rise of depth cameras such as RealSense and Kinect has laid a data foundation for RGB-D visual perception tasks. Currently, picking execution devices are evolving toward target generalization, intelligence and clustered operation.
During picking, the force the end effector applies to the fruit is critical: if too small, the fruit cannot be detached, its surface rubs against the end effector, or the fruit falls during picking and collection and is scratched or bruised; if too large, the fruit is crushed. In the prior art, grabbing fruits and vegetables relies on establishing an accurate mathematical model of the end effector; the control process is complex and some unmodeled dynamic factors may be ignored, which degrades the robustness and stability of the end grabbing device. Traditional rigid end grabbing devices are highly crop-specific, poorly utilized and costly; in addition, traditional wrapping grippers are made of rigid materials whose wrapping force is difficult to control and whose shape is fixed, lacking sufficient adaptive capacity. In the recognition of overlapping fruits, positioning errors cause the end actuating mechanism to deviate during grabbing, easily damaging the fruit and causing the pick to fail.
In the prior art, fruit and vegetable recognition and positioning mostly determine only the target center, without identifying orientation and posture. After the target position is obtained, the mechanical arm carries the end effector toward the fruit in a fixed direction; the end effector cannot rotate or pull the target in a specific direction relative to the stalk, so fruits are damaged during picking. Moreover, because fruit growth postures differ, the end effector easily interferes with the stalks during picking, causing grabbing deviation and picking errors that severely reduce the robot's picking success rate and harvesting efficiency.
To sum up, prior-art picking robot end grabbing devices have the following problems:
1) Low grabbing accuracy and easy grabbing damage. Existing end actuating mechanisms are heavy and complex to control, and the robustness and stability of the end grabbing device are poor; when the picking robot grabs fragile produce, the skin is easily damaged, affecting picking quality;
2) Poor universality and applicability. Universality of the picking end covers both operation in different environments and similar operations across different fruits and vegetables, yet most existing picking ends are profile-designed for a specific crop and cannot be transferred to other kinds of fruits and vegetables;
3) High cost and difficult popularization. Existing end actuating mechanisms are expensive and hard to maintain, which hinders the commercialization and spread of agricultural picking robots;
4) Poor use of effective information on the target fruit. The complementary advantages of the two-dimensional image and the three-dimensional point cloud are not fully exploited when acquiring the fruit posture, so positioning and posture-estimation accuracy is low and the success rate drops.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a picking robot and a tail end grabbing device and method thereof.
In order to achieve the above object, the present invention provides an end gripping device of a picking robot, comprising:
the end effector comprises a shell, a fixed block, a driving mechanism and a clamping mechanism, wherein the driving mechanism is arranged in the shell and is respectively connected with the fixed block and the clamping mechanism; the clamping mechanism is arranged at one end of the shell; the other end of the shell is arranged on the fixed block;
the coupler is arranged below the fixed block and is connected with the driving mechanism; the driving mechanism drives the coupler to rotate the fixed block within ±360°;
the binocular depth camera is arranged on a camera support frame, and the camera support frame is arranged on the coupler through a camera flange; and
and the end cover is respectively connected with the coupler and the mechanical arm of the picking robot, a communication interface is arranged on the end cover, and the driving mechanism and the binocular depth camera are connected with the controller of the picking robot through the communication interface.
The tail end grabbing device of the picking robot is characterized in that the clamping mechanism is a two-finger clamping mechanism and comprises a clamping jaw first finger and a clamping jaw second finger which are oppositely arranged, and bottoms of the clamping jaw first finger and the clamping jaw second finger are connected with the driving mechanism through a first fixing piece and a second fixing piece respectively.
The tail end grabbing device of the picking robot, wherein the clamping jaw first finger and the clamping jaw second finger are soft silicone rubber flexible material pieces, and concave-convex textures of bionic fingerprints are arranged on their surfaces.
The tail end grabbing device of the picking robot is characterized in that a thin film pressure sensor is arranged on the first finger and/or the second finger of the clamping jaw, and the thin film pressure sensor is connected with a controller of the picking robot.
The tail end grabbing device of the picking robot comprises a driving mechanism, wherein the driving mechanism comprises a direct-current servo motor and a gear rack transmission mechanism, the gear rack transmission mechanism is respectively connected with the direct-current servo motor, the first finger of the clamping jaw and the second finger of the clamping jaw, and the direct-current servo motor drives the gear rack transmission mechanism to drive the first finger of the clamping jaw and the second finger of the clamping jaw to move oppositely or reversely.
The tail end grabbing device of the picking robot, wherein the strokes of the first clamping jaw finger and the second clamping jaw finger are 0–50 mm; the single-finger clamping force of the first clamping jaw finger and the second clamping jaw finger is 40–100 N, and the position repeatability is ±0.02 mm.
The tail end grabbing device of the picking robot, wherein the driving mechanism further comprises a stepping motor, the stepping motor is connected with the fixed block and drives the fixed block to rotate so as to drive the clamping mechanism to rotate so as to separate fruits.
The tail end grabbing device of the picking robot is characterized in that the binocular depth camera is located above the rear of the clamping mechanism and mounted on the camera connecting piece, the camera connecting piece is connected with one end of the camera support frame through the rotary locating pin, the other end of the camera support frame is mounted on the camera flange, and the camera flange is mounted on the coupler.
In order to better achieve the above object, the present invention also provides a picking robot, which includes the above end gripping device.
In order to better achieve the above object, the present invention further provides an end gripping method of a picking robot, wherein the picking robot includes the end gripping device described above, the end gripping method including the steps of:
s100, shooting and obtaining RGB images and fruit depth images of fruits by a binocular depth camera of the tail end grabbing device in the moving process of the picking robot;
s200, a deep learning-based target detection algorithm is built in a controller of the picking robot, real-time detection and coarse positioning are carried out on fruit targets in the RGB images of the fruits, meanwhile, the fruit maturity is judged according to the fruit characteristics, and the center coordinates of the mature fruits are returned;
s300, utilizing the fruit depth image to reconstruct the mature fruit in a three-dimensional mode, estimating the posture of the fruit by combining the center coordinates of the mature fruit and the 3D point cloud information, and correcting the center coordinates of the mature fruit according to the posture of the fruit;
s400, dividing point clouds of a single mature fruit from the range of a fruit growing area, slicing the point clouds, measuring the posture of the fruit according to fruit section data obtained after slicing, and simultaneously fusing RGB (red, green and blue) images of the fruit and 3D point cloud information to accurately position the fruit; and
s500, obtaining accurate center coordinates and space postures of mature fruits, calibrating by hands and eyes to obtain a coordinate system conversion matrix, converting coordinates of grabbing points to the coordinate systems of the mechanical arm and the tail end grabbing device of the picking robot, sending the coordinates to the mechanical arm and the tail end grabbing device through a transmission control protocol, and controlling the mechanical arm and the tail end grabbing device to finish picking of the mature fruits.
The terminal grabbing method of the picking robot further comprises the following steps:
s600, after picking of single mature fruits is completed, the mechanical arm returns to a pre-picking point, whether the fruit is the last fruit is judged, if yes, the mechanical arm returns to an initial position, the tail end grabbing device returns to an initial state, and the current task is ended; if not, picking the next mature fruit.
According to the end grabbing method of the picking robot, the picking sequence is planned in the energy-consumption-optimal mode: all adapted inverse kinematics solutions are obtained, and the picking order of multiple fruit targets is determined by sorting according to an energy consumption function.
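The energy-consumption-optimal ordering can be illustrated with a toy cost function. The patent does not specify the energy function, so this sketch assumes squared Cartesian distance from the arm's current pose as a stand-in proxy; `plan_picking_order` and `home` are hypothetical names.

```python
def plan_picking_order(targets, home):
    """Sort fruit targets by an assumed energy proxy (squared Cartesian
    distance from the arm's current position `home`); the real method would
    rank full inverse-kinematics solutions by an energy consumption function."""
    def cost(p):
        return sum((a - b) ** 2 for a, b in zip(p, home))
    return sorted(targets, key=cost)
```

With `home = (0, 0, 0)`, nearer fruits are scheduled first, which is the qualitative behavior an energy-minimizing planner would exhibit.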
The invention has the technical effects that:
according to the invention, automatic picking and harvesting of fruits and vegetables just needed in da Zong in an agricultural scene is realized, the two-finger clamping mechanism, the binocular depth camera and the film pressure sensor are integrated into the tail end grabbing device, and functions of fruit target acquisition, maturity discrimination, pose estimation, grabbing of mature targets to be picked and the like can be realized under the control of the picking robot controller, so that the automatic picking and harvesting device is simple in structure, convenient to operate and high in efficiency. In order to reduce the grabbing damage to mature and breakable fruits, the surface of the two-finger clamping mechanism is provided with a concave-convex soft silicone rubber type flexible material, so that the contact area of the end effector and the fruits can be increased, the damage rate of the fruits can be reduced, the friction coefficient of the clamping fingers and the fruits can be increased, and the fruits are prevented from falling off in the picking grabbing and moving processes; in order to acquire the pressure condition in the fruit grabbing process in real time and reduce the damage to fruit grabbing, a film pressure sensor is integrated on the two-finger clamping mechanism, a resistance signal is converted into a high-low level signal through a special linear voltage conversion module, the threshold value of the film pressure sensor can be set for fruits and vegetables with different hardness, and when the receiving pressure is larger than the set threshold value, the controller outputs a high level, so that the grabbing of an end effector is controlled more accurately and effectively; the binocular depth camera and the film pressure sensor are integrated on the end effector to form complementary advantages, so that the positioning accuracy can be improved, nondestructive picking of fruits and vegetables can be ensured, and the adaptability of 
the fruit and vegetable picking robot to complex operation environments is enhanced. Under the support of RGB-D images acquired by a binocular depth camera, a fruit target is identified through a deep learning target detection algorithm, maturity judgment is carried out, three-dimensional reconstruction and posture estimation are carried out on the mature target, two-dimensional images and 3D information are fused, more accurate center coordinates of fruit picking points are obtained, and the grabbing success rate is greatly improved.
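The threshold logic of the film pressure sensor and its linear voltage-conversion module can be sketched as follows. A film (force-sensing resistor) sensor's resistance falls as grip pressure rises, so the module outputs a high level once resistance drops to a per-crop threshold; the specific resistance values here are illustrative assumptions, not figures from the patent.

```python
def grip_level(resistance_ohm, threshold_ohm):
    """Sketch of the linear voltage-conversion module described above:
    film-sensor resistance decreases with pressure, so a reading at or
    below the crop-specific threshold means 'pressure exceeded' and the
    module outputs a high level (1); otherwise a low level (0)."""
    return 1 if resistance_ohm <= threshold_ohm else 0
```

A softer crop would be given a higher resistance threshold (i.e. a lower pressure limit) so the controller stops the jaws sooner.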
The invention will now be described in more detail with reference to the drawings and specific examples, which are not intended to limit the invention thereto.
Drawings
FIG. 1 is a schematic diagram of the connection between the end gripping device and the mechanical arm according to an embodiment of the present invention;
FIG. 2 is a perspective view of the end gripping device according to an embodiment of the present invention;
FIG. 3 is a schematic structural view of the end gripping device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the operation of an embodiment of the present invention;
FIG. 5 is a schematic diagram of the visual-perception positioning of the end gripping device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the pose estimation of the end gripping device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of grabbing and picking by the end gripping device according to an embodiment of the present invention.
Reference numerals:
1. Mechanical arm
11. Base seat
12. First arm
13. Second arm
14. Shoulder joint
15. Elbow joint
16. First wrist joint
17. Second wrist joint
18. Third wrist joint
2. Terminal grabbing device
21. Binocular depth camera
211. Camera support
212. Camera connector
213. Rotary positioning pin
214. Camera flange
22. End effector
221. Fixed block
222. Shell body
223. First finger of clamping jaw
224. Second finger of clamping jaw
23. End cap
24. Coupling device
25. Communication interface
Detailed Description
The structural and operational principles of the present invention are described in detail below with reference to the accompanying drawings:
the picking of the fruit and vegetable in the facility agriculture is a typical labor-intensive operation link, and has very high requirements on the number of labor personnel and the working quality. Tomatoes, strawberries and the like are used as a large number of fresh fruit and vegetable products meeting the basic demands of residents, effective supply must be ensured, and higher requirements are put on the timeliness and quality of picking. Because the outer skins of fruits and vegetables such as tomatoes and strawberries are fragile, the growing environment is complex, and the fruits and vegetables are extremely easy to damage in the picking process, the structure, the flexible mode and the grabbing execution strategy of the end execution piece become important factors influencing the fruit and vegetable picking damage.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a connection between an end gripping device 2 and a robot arm 1 according to an embodiment of the present invention. The picking robot comprises a body (not shown), a mechanical arm 1 and a tail end grabbing device 2, wherein the mechanical arm 1 is arranged on the body, the tail end grabbing device 2 is arranged at the tail end of the mechanical arm 1 and is an end executing mechanism of the picking robot, and the picking robot is integrated with the mechanical arm 1 with multiple degrees of freedom of the picking robot to achieve a picking function. The multi-degree-of-freedom mechanical arm 1 of the embodiment comprises a base 11, a first arm 12 and a second arm 13, wherein the first arm 12 is connected with the base 11 through a shoulder joint 14; the second arm 13 is connected to the first arm 12 by an elbow joint 15; the end grasping device 2 is connected to the second arm 13 through a wrist joint, and the wrist joint of the present embodiment includes a first wrist joint 16, a second wrist joint 17, and a third wrist joint 18 connected in sequence, the first wrist joint 16 being connected to the second arm 13 and the second wrist joint 17, respectively, and the third wrist joint 18 being connected to the second wrist joint 17 and the end grasping device 2, respectively. The mechanical arm 1 comprises 6 rotating joints, the tail end grabbing device 2 is fixedly connected with a 6 th joint, namely a third wrist joint 18, of the mechanical arm 1, and a communication interface 25 of the tail end grabbing device 2 is connected with a communication interface 25 of the mechanical arm 1 so as to realize data communication and control. 
The mechanical arm 1 is mounted on a moving or fixed platform through the base 11; the shoulder joint 14 and elbow joint 15 execute large-amplitude motions, the first wrist joint 16 and second wrist joint 17 execute finer motions, and the third wrist joint 18 connects the end grabbing device 2. During operation, the end grabbing device 2 cooperates with the NVIDIA Jetson AGX development kit, the embedded microcontroller and the mechanical arm 1 to complete the grabbing of fruits. The structure, relative positions, connections and functions of the other parts of the picking robot are mature prior art and are not described here. Only the end grabbing device 2 and the end grabbing method of the invention are described in detail below.
Referring to fig. 2 and 3: fig. 2 is a perspective view of the end gripping device 2 according to an embodiment of the present invention, and fig. 3 is a schematic structural view of the end gripping device 2. The end gripping device 2 of the picking robot of the present invention includes: the end effector 22, comprising a housing 222, a fixing block 221, a driving mechanism (not shown) and a clamping mechanism, the driving mechanism being disposed in the housing 222 and connected to the fixing block 221 and the clamping mechanism respectively; the clamping mechanism is arranged at one end of the housing 222, and the other end of the housing 222 is mounted on the fixing block 221; a coupling 24, installed below the fixing block 221 and connected to the driving mechanism, which drives the coupling 24 to rotate the fixing block 221 within ±360°; a binocular depth camera 21, mounted on a camera support 211 that is mounted on the coupling 24 by a camera flange 214; and the end cover 23, connected to the coupling 24 and to the mechanical arm 1 of the picking robot, the end cover 23 carrying a communication interface 25 through which the driving mechanism and the binocular depth camera 21 are connected to the controller of the picking robot. The end grabbing device 2 is connected to the mechanical arm 1 through the end cover 23; a fixing mechanism allows the tightness of the connection to be adjusted and the device to be mounted and removed.
In this embodiment, the clamping mechanism is preferably a two-finger clamping mechanism comprising a first clamping jaw finger 223 and a second clamping jaw finger 224 disposed opposite each other, their bottoms connected to the driving mechanism through a first fixing member and a second fixing member respectively. The first clamping jaw finger 223 and the second clamping jaw finger 224 are preferably soft silicone rubber flexible pieces, with concave-convex bionic-fingerprint textures on their surfaces to prevent fruits and vegetables from slipping or being damaged during grabbing or carrying. The first clamping jaw finger 223 and/or the second clamping jaw finger 224 also carry a film pressure sensor connected to the controller of the picking robot to precisely control the grabbing force, laying the foundation for nondestructive picking.
The binocular depth camera 21 is located above and behind the clamping mechanism and is mounted on a camera connecting piece 212; the camera connecting piece 212 is connected with one end of the camera support frame 211 through a rotary positioning pin 213, the other end of the camera support frame 211 is mounted on a camera flange 214, and the camera flange 214 is mounted on the coupler 24. The angle of the camera connecting piece 212 can be adjusted manually about the rotary positioning pin 213, which greatly enlarges the field of view the binocular depth camera can perceive of the surrounding environment and improves performance across different picking scenes. The binocular depth camera 21 preferably adopts an Intel RealSense D camera as the 'eyes' of the end effector 22, and the controller preferably adopts the NVIDIA Jetson AGX development kit, which serves as the embedded deployment platform of the picking robot and processes fruit and vegetable images for identification and positioning. The end effector 22 preferably employs an STM32F106 embedded microcontroller as its core control system.
The driving mechanism of this embodiment comprises a DC servo motor and a rack-and-pinion transmission mechanism; the rack-and-pinion mechanism is connected to the DC servo motor, the first clamping jaw finger 223 and the second clamping jaw finger 224, and the motor drives it to move the two fingers toward or away from each other, realizing two-finger motion and gap adjustment. The stroke of the first clamping jaw finger 223 and the second clamping jaw finger 224 is preferably 0–50 mm; the single-finger clamping force is preferably 40–100 N, and the position repeatability is preferably ±0.02 mm.
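The rack-and-pinion kinematics can be made concrete: the rack (and thus each finger) travels one pinion circumference per motor revolution, clamped to the quoted 0–50 mm stroke. The pinion radius below is an illustrative assumption; the patent gives no gear dimensions.

```python
import math

def finger_travel_mm(motor_turns, pinion_radius_mm=5.0, stroke_mm=50.0):
    """Rack-and-pinion sketch: linear finger travel equals pinion
    circumference times motor turns, clamped to the 0-50 mm stroke quoted
    above. The 5 mm pinion radius is an assumed example value."""
    travel = 2.0 * math.pi * pinion_radius_mm * motor_turns
    return max(0.0, min(travel, stroke_mm))
```

One revolution of a 5 mm pinion moves a finger about 31.4 mm; a second revolution would exceed the stroke, so the result saturates at 50 mm.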
The driving mechanism may further include a stepper motor connected to the fixing block 221; it rotates the fixing block 221 to rotate the clamping mechanism and separate the fruit. That is, the fixing block 221 is fixed to the coupling 24 immediately behind the camera flange 214. Driven by the internal motor through the coupling 24, the fixing block 221 can rotate forward and backward and can turn the clamping mechanism through 360°, so fruits of different growth postures can be grabbed and picked, greatly improving the flexibility and adaptability of the clamping mechanism.
Referring to fig. 4 to 7: fig. 4 is a working schematic diagram of an embodiment of the present invention, fig. 5 is a visual-perception positioning schematic diagram of the end gripping device 2, fig. 6 is a pose-estimation schematic diagram of the end gripping device 2, and fig. 7 is a grabbing-and-picking schematic diagram of the end gripping device 2. When a fruit is picked, the picking point is its centroid or central area. Based on the binocular depth camera 21, the invention identifies and accurately positions ripe fruits to be picked by fusing two-dimensional RGB images with 3D point cloud information. After the binocular depth camera 21 acquires the RGB image, fruit recognition and ripe-fruit discrimination and screening are first performed by the YOLOv8 deep-learning target detection algorithm built into the vision system, and the center coordinates of the ripe fruits to be picked are returned; judging maturity first greatly reduces the workload of the 3D reconstruction algorithm and improves picking efficiency. After three-dimensional reconstruction of the ripe fruit, its posture is estimated by combining the fruit center coordinates returned from the two-dimensional RGB image with the 3D point cloud information, the center coordinates are corrected according to the posture, and the two-dimensional image and 3D point cloud information are fused for accurate positioning to obtain the center coordinates of the grabbing point of the end grabbing device 2.
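The fusion of the 2D detection with depth reduces, at its core, to pinhole back-projection: the fruit-center pixel plus its depth value yields a 3D point in the camera frame. A minimal sketch, assuming calibrated intrinsics (fx, fy, cx, cy); the function name and numbers are illustrative.

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: convert a detected fruit-center pixel
    (u, v) and its depth (meters) into camera-frame coordinates, as when
    fusing the RGB detection with the depth image. Intrinsics come from
    camera calibration and are assumed here."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

A pixel at the principal point maps straight onto the optical axis; off-center pixels scale linearly with depth.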
Finally, a coordinate-system conversion matrix is obtained by hand-eye calibration, the grabbing-point coordinates are converted into the coordinate system of the mechanical arm 1 and the end effector 22, and the coordinates are sent to the controllers of the mechanical arm 1 and the end grabbing device 2 through the transmission control protocol, controlling the end grabbing device 2 to complete picking and grabbing of the ripe fruit.
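The coordinate conversion itself is a homogeneous transform: the 4×4 matrix from hand-eye calibration maps a camera-frame grasp point into the arm's base frame. A pure-Python sketch; the example matrix in the test is illustrative, not a calibrated value.

```python
def to_base_frame(T_base_cam, p_cam):
    """Apply a 4x4 hand-eye calibration matrix (camera frame -> arm base
    frame) to a camera-frame grasp point, as in the coordinate conversion
    described above. T_base_cam is a nested list; returns (x, y, z)."""
    x, y, z = p_cam
    p = (x, y, z, 1.0)
    return tuple(sum(T_base_cam[i][j] * p[j] for j in range(4)) for i in range(3))
```

In practice the matrix combines the camera-to-flange transform from calibration with the arm's current flange pose; here a pure translation suffices to show the mechanics.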
As shown in fig. 7, in this embodiment an NVIDIA Jetson AGX development kit is preferably used as the main controller of the picking robot and realizes the functions of a host computer, with an ROS system mounted in the controller. The NVIDIA Jetson AGX master control communicates with the mechanical arm 1 and the end gripping device 2 over a CAN bus, and control of the end gripping device 2 is realized by an embedded microcontroller. When the binocular depth camera 21 located above and behind the clamping mechanism acquires an image of a fruit and vegetable target in the field of view, the visual analysis model (trained YOLO v8) of the Jetson AGX Orin detects the fruit target and judges its maturity, and the spatial pose information of the mature target to be picked is obtained through coordinate conversion. The NVIDIA Jetson AGX master control sends an instruction to the servo drive controller of the mechanical arm 1 through the CAN bus, and each joint motor then drives the mechanical arm 1 to move to the pre-picking point. After the mechanical arm 1 reaches the preset position, the angle, angular velocity, and angular acceleration of each joint are detected by the joint inertial sensors and fed back to the NVIDIA Jetson AGX master control to realize stable control. After the NVIDIA Jetson AGX master control sends an instruction to the end gripping device 2, the embedded microcontroller drives the clamping jaws of the end gripping device 2, namely the clamping jaw first finger 223 and the clamping jaw second finger 224, to grab the target fruit through a direct current motor; the film pressure sensors at the fingertips sense and feed back the gripping pressure; and finally the clamping jaw first finger 223 and the clamping jaw second finger 224 are driven to rotate via the fixing block 221 and twist off the fruit to finish picking.
In this embodiment, the tip grabbing method of the picking robot can plan the picking sequence based on the optimal energy consumption mode, obtain all the adapted inverse kinematics solutions, and order the picking sequence of the fruit targets according to the energy consumption function. It specifically includes the following steps:
step S100, RGB images and depth images of fruits are shot and obtained by the binocular depth camera 21 of the end gripping device 2 as the picking robot moves;
step S200, a target detection algorithm based on deep learning is built into the controller of the picking robot, real-time detection and coarse positioning are carried out on fruit targets in the RGB image of the fruit, the fruit maturity is simultaneously judged according to the fruit characteristics, and the center coordinates of the mature fruit are returned;
step S300, three-dimensional reconstruction is carried out on the mature fruit by utilizing the obtained fruit depth image, the fruit posture is estimated by combining the center coordinates of the mature fruit returned by the two-dimensional RGB image and the 3D point cloud information, and the center coordinates of the mature fruit are corrected according to the fruit posture;
step S400, the point cloud of a single mature fruit is segmented from the range of the fruit growing area, the point cloud is sliced, the posture of the fruit is measured according to the fruit cross-section data obtained after slicing, and the RGB image of the fruit and the 3D point cloud information are simultaneously fused to accurately position the fruit; and
step S500, the accurate center coordinates and spatial posture of the mature fruit are obtained, a coordinate system conversion matrix is obtained by hand-eye calibration, the coordinates of the grabbing point are converted into the coordinate systems of the mechanical arm 1 and the end gripping device 2 of the picking robot and sent to them through a transmission control protocol, and the mechanical arm 1 and the end gripping device 2 are controlled to finish picking of the mature fruit.
The present embodiment may further include: step S600, after picking of one mature fruit is completed, the mechanical arm returns to the pre-picking point and judges whether that fruit was the last fruit; if yes, the mechanical arm 1 returns to the initial position, the end gripping device 2 returns to the initial state, and the current task ends; if not, the next mature fruit is picked according to the optimal sequence until the last fruit of the current task is picked.
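The step S100–S600 cycle above can be sketched as a pure function over already-localized targets; every name here is illustrative, and the real system would issue CAN-bus commands to the arm and gripper at each step:

```python
# Hypothetical sketch of the S100-S600 cycle as a pure function: given the
# grasp points already detected (S200) and converted to the arm frame
# (S500), emit the action log the text describes - returning to the
# pre-pick point between fruits and home after the last one (S600).
def picking_cycle(targets):
    log = []
    for i, t in enumerate(targets):
        log.append(("move", t))           # approach the grasp point
        log.append(("grasp_twist", t))    # clamp, then twist off the fruit
        last = (i == len(targets) - 1)
        log.append(("return_home" if last else "return_prepick", None))
    return log

actions = picking_cycle([(0.3, 0.1, 0.5), (0.4, -0.1, 0.5)])
print([a[0] for a in actions])
# ['move', 'grasp_twist', 'return_prepick', 'move', 'grasp_twist', 'return_home']
```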
Specifically, after the visual perception system successfully acquires the prediction frame of the target fruit in the image, the pixel width and centroid (fruit center point) position of the target in the image are calculated, and the depth camera outputs the coordinate position of the target in the camera coordinate system (P_camera). Since the picked fruit is approximately ellipsoidal, and depth information is lost when a three-dimensional object is projected onto the two-dimensional imaging plane, a compensation value related to the target width is added to P_camera in the Z-axis direction, giving the approximate centroid position of the target (P'_camera). The position of the target in the camera coordinate system is then converted into its position in the mechanical arm 1 coordinate system (P_robot) through the coordinate transformation matrix T, with the formula as follows:

P_robot = T · P'_camera
After the position of the fruit center point is obtained, the posture of the mature fruit to be picked must be further estimated to determine the grabbing posture of the end gripping device 2. As shown in fig. 6, the intrinsic and extrinsic parameters of the camera are obtained through system calibration, and the three-dimensional reconstruction of the fruit can be completed from the calibration parameters and the fruit images, giving the point cloud data of the fruit. The method is as follows: first, the point cloud of a single mature fruit is segmented from the range of the fruit growing area; then the point cloud is sliced, and the posture of the fruit is measured according to the fruit cross-section data obtained after slicing. When slicing the fruit point cloud, since a point cloud is a discretized representation of the surface shape of an object, the point cloud density strongly influences the cross-section profile determined from the points on the slice plane; therefore, the points within 0.2 mm before and after the slice plane are projected onto the slice plane to generate the cross-section profile. The calculation process is as follows:
N = ⌈(y_max − y_min) / h⌉,  y_i = y_min + i·h

P_i = {p_0, p_1, …, p_a, …, p_n},  p_a = (x_a, y_a, z_a) ∈ R³

where y_min and y_max are the minimum and maximum values of the point cloud of the fruit to be picked in the y direction; h is the cross-section interval, with a size of 1 mm; ⌈·⌉ ("Ceiling") is the upward rounding function; i is the slice sequence number; y_i is the slice plane position; P_i is the contour point set segmented from the fruit point cloud; and p_a is one of its points, with coordinates (x_a, y_a, z_a).
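A minimal sketch of the slicing computation above, under the stated parameters (slice interval h = 1 mm, and points within 0.2 mm of each slice plane projected onto it):

```python
import numpy as np

# Sketch of the slicing step above, under the stated parameters: slice
# planes every h = 1 mm along y, with points lying within 0.2 mm of each
# plane projected onto it to form the cross-section contour.
def slice_contours(points, h=1.0, tol=0.2):
    points = np.asarray(points, dtype=float)
    y_min, y_max = points[:, 1].min(), points[:, 1].max()
    n = int(np.ceil((y_max - y_min) / h))          # "Ceiling" in the text
    contours = []
    for i in range(n + 1):
        y_i = y_min + i * h                        # slice plane position
        near = points[np.abs(points[:, 1] - y_i) <= tol].copy()
        near[:, 1] = y_i                           # project onto the plane
        contours.append(near)
    return contours

pts = [[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 1.0, 1.0]]
contours = slice_contours(pts)
print(len(contours), contours[0].shape[0])  # 2 slices; first holds 2 points
```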
Specifically, when measuring the roll angle of the fruit, the two junction points A and B of the fruit cavity and the outer contour are found from the point cloud section data cut by the plane passing through the fruit center point O, and the expression of the fruit feature line in the short-axis direction, L_X, is calculated from the coordinates of A and B; L_X is then projected onto the XOZ plane of the fruit, and its angle with the X-axis gives the roll angle. When measuring the pitch angle of the fruit, each section's data is traversed to obtain the two junction points A_i and B_i of the fruit center and the fruit contour; the midpoint of the line connecting the two points is calculated and denoted M_i (i = 1, 2, 3, …, n). From the point set {M_i}, the fruit feature line in the long-axis direction, L_Y, is fitted by the least-squares method. Projecting L_Y onto the YOZ plane, its angle with the Y-axis gives the pitch angle. Similarly, projecting L_Y onto the XOY plane gives the yaw angle.
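The pitch-angle step above (least-squares fit of the long-axis feature line L_Y through the midpoints M_i, then projection onto the YOZ plane) can be sketched as follows; the function name and midpoint layout are assumptions:

```python
import numpy as np

# Sketch (assumed interface): fit the long-axis feature line L_Y through
# the slice midpoints M_i by least squares, then take the angle between
# its projection onto the YOZ plane and the y-axis as the pitch angle.
def pitch_from_midpoints(mids):
    mids = np.asarray(mids, dtype=float)
    slope, _ = np.polyfit(mids[:, 1], mids[:, 2], 1)  # fit z = slope*y + b
    return float(np.degrees(np.arctan(slope)))

mids = [(0.0, 0.0, 0.0), (0.0, 1.0, 1.0), (0.0, 2.0, 2.0)]
print(pitch_from_midpoints(mids))  # approx 45.0 degrees for this line
```

The yaw angle would follow the same pattern with the XOY projection (fitting x against y instead of z).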
After the position of the fruit target in the mechanical arm 1 coordinate system and the spatial posture of the fruit (roll angle, pitch angle, and yaw angle) are obtained, the NVIDIA Jetson AGX development kit and the embedded microcontroller drive the mechanical arm 1 to carry the end effector 22 toward the spatial position of the fruit target. When the end effector approaches the fruit, the clamping mechanism of the end gripping device 2 opens and then clamps the mature fruit; the fixing block 221 of the end effector 22 is then driven by the internal motor to rotate forward or backward by 90 degrees to twist off the fruit, and the picking action is completed under the drive of the mechanical arm 1.
To pick mature fruit targets, besides the picking robot platform accurately navigating and moving to the fruit and vegetable plant position and the binocular depth camera 21 obtaining the fruit target posture coordinates, the spatial movement trajectory of the mechanical arm 1 must be planned in real time according to the fruit target coordinates, and the end effector 22 driven to complete picking of the fruit in the optimal picking posture; this mainly involves forward and inverse kinematic analysis of the mechanical arm 1 and spatial planning of its joints. With the end coordinates known, the rotation angle required by each joint is obtained through matrix operations. After the position and posture of the end gripping device 2 are given, the joint angles of all reachable given positions and postures of the mechanical arm 1 are calculated, and the mechanical arm 1 finishes the picking actions according to the inverse kinematics and the picking sequence.
In the present embodiment, the pose of the end gripping device 2 is calculated as follows: the pose of the end gripping device 2, that is, the position and posture of the finger coordinate system {T} of the end gripping device 2 relative to the base coordinate system {B} of the mechanical arm 1, is described using Euler angles: first rotating by the angle α around the x-axis of the base coordinate system {B} to obtain {T'}, then rotating by the angle β around the y-axis of its own coordinate system to obtain {T''}, and finally rotating by the angle γ around the z-axis of its own coordinate system to obtain the finger coordinate system {T}. From this Euler-angle rotation and the principle of moving coordinate system transformation, the posture of {T} relative to the base coordinate system {B} of the mechanical arm 1 is described as follows:
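For the x–y'–z'' moving-axis Euler sequence described above, the posture is the product of the three elementary rotations; a standard reconstruction (the patent's original formula is not reproduced in this text), writing c and s for cosine and sine:

```latex
{}^{B}_{T}R = R_x(\alpha)\,R_y(\beta)\,R_z(\gamma)
= \begin{pmatrix}
c\beta\, c\gamma & -c\beta\, s\gamma & s\beta \\
s\alpha\, s\beta\, c\gamma + c\alpha\, s\gamma & -s\alpha\, s\beta\, s\gamma + c\alpha\, c\gamma & -s\alpha\, c\beta \\
-c\alpha\, s\beta\, c\gamma + s\alpha\, s\gamma & c\alpha\, s\beta\, s\gamma + s\alpha\, c\gamma & c\alpha\, c\beta
\end{pmatrix}
```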
When picking fruits and vegetables in a facility environment, the end picking and grabbing device cannot rotate arbitrarily, and grabbing of the fruit must be completed in the picking plane; therefore, the rotations around the x-axis and z-axis must be consistent with the base coordinates, and by properly adjusting the rotation of the end gripping device 2 around the y-axis, the end gripping device 2 can grab the fruit target in a better posture.
Specifically, taking the 6-degree-of-freedom mechanical arm 1 of this embodiment as an example, the inverse kinematics equations are nonlinear transcendental equations, and all inverse kinematics solutions can be solved algebraically. After the binocular depth camera 21 obtains the spatial coordinates of the fruit and vegetable targets, the intrinsic parameters of the binocular camera are first calibrated; since the mechanical arm 1 and the binocular camera in this embodiment are fixedly mounted, they are calibrated in a fixed hand-eye calibration mode, and the spatial coordinate system of the fruit is converted into the coordinate system of the mechanical arm 1. When the end gripping device 2 picks a fruit target, the picking pose of the end gripping device 2 is mapped to the pose of the 6th joint of the picking mechanical arm 1 according to the structure and pose of the end gripping device 2, and the motion equation of the 6th joint of the mechanical arm 1, namely the third wrist joint 18, is obtained as follows:
the inverse transformation and homogeneous transformation are carried out on the above, and the angle values of all joints are obtained according to the equality of matrix rows and columns on two sides of the equation as follows:
in the above formula, θ 1 ~θ 6 Is the angle of each joint of the 6-degree-of-freedom mechanical arm. According to the results of the angles of the jointsGiven the fixed pose of the end gripping device 2, the inverse kinematics combination of the 6-degree-of-freedom robotic arm 1 to achieve this pose is as follows:
as can be seen from the above inverse kinematics analysis, when the robot arm 1 gives the picking motion of the fruit target position by the fixed pose of the end gripping device 2, 8 sets of inverse kinematics solutions can be obtained at most. After the inverse kinematics of the mechanical arm 1 are obtained, the trajectory planning is performed on the mechanical arm 1. Thereby obtaining the time histories of the position, the speed and the acceleration of each degree of freedom motor of the mechanical arm 1, and obtaining the angular displacement, the angular speed and the angular acceleration of each joint by combining constraint conditions.
Since the 6-degree-of-freedom mechanical arm 1 has 8 sets of closed-form inverse kinematics solutions for a fixed posture of the end gripping device 2, and to reduce repeated paths and excessive energy consumption when the picking robot faces multi-target picking, this embodiment adopts an inverse kinematics solution selection and picking sequence planning method based on the "optimal energy consumption" mode.
Specifically, the "energy consumption optimal" mode is measured by the angle at which each joint of the robot arm 1 rotates in the joint space, not the moving distance in the cartesian coordinate system. Firstly, calculating the sum of weighted rotation angle differences of all inverse solutions of picking the current fruit point in a fixed posture and all inverse solutions of the fixed posture of all next candidate points, and then sorting according to a calculation result to obtain a globally optimal picking sequence and an energy consumption optimal inverse kinematics solution, wherein the energy consumption function is as follows:
wherein,
in the functions, P is an energy consumption function; n is the end gripping device 2 to be passed throughThe number of picking points; m is the number of joints of the mechanical arm 1; θ is the articulation angle; omega i The weight corresponding to the ith joint is the percentage of the torque provided by each joint motor;to pick the kth fruit, select the nth fruit picking point k The inverse kinematics of the group resolves, the angle value of the i-th joint.
Specifically, for the mechanical arm 1, the set of inverse solutions for which the weighted rotation angle of each joint when completing the picking task is smallest is selected; the energy consumed by the rotation is then optimal. The joint angles of the mechanical arm 1 are obtained from the coordinates of the end gripping device 2, and the model is written into the control program, so that in the picking link the binocular depth camera 21 computes the fruit coordinates, from which the rotation angles of the joint motors are inferred and output. Following the same idea, the energy consumption functions of all suitable inverse solutions are solved and sorted according to the "optimal energy consumption" mode, obtaining the minimum-energy-consumption picking sequence and the corresponding fixed-posture inverse kinematics solution of the mechanical arm 1 for each target point, thus completing picking sequence planning for multiple fruit targets.
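The weighted joint-rotation cost above can be sketched for a single step, choosing among candidate inverse solutions for the next picking point; the weights shown are illustrative torque percentages, not the patent's values:

```python
import numpy as np

# Sketch of the "optimal energy consumption" selection above: among the
# candidate inverse-kinematics solutions for the next picking point, choose
# the one minimizing the weighted joint-rotation cost sum_i w_i*|dtheta_i|.
def best_ik_solution(current, candidates, weights):
    current = np.asarray(current, dtype=float)
    costs = [float(np.sum(weights * np.abs(np.asarray(c, float) - current)))
             for c in candidates]
    k = int(np.argmin(costs))
    return k, costs[k]

w = np.array([0.3, 0.3, 0.2, 0.1, 0.05, 0.05])  # illustrative, sums to 1.0
cur = np.zeros(6)
cands = [np.full(6, 0.5),                   # rotates every joint: cost 0.5
         np.array([0.1, 0.1, 0, 0, 0, 0])]  # small base motion: cost 0.06
print(best_ik_solution(cur, cands, w))
```

Extending this greedy step to score whole candidate picking orders gives the globally sorted sequence the text describes.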
In order to reduce the damage rate in the fruit and vegetable picking link, realize more accurate and efficient fruit and vegetable picking, and promote the popularization and application of fruit and vegetable picking robots, the invention designs the structure, materials, grabbing strategy, and implementation of the end effector 22. The end effector 22, the binocular depth camera 21, an internal driving motor, and other parts together form the end gripping device 2 of the picking robot: fruit spatial positioning is carried out through binocular vision, the end effector 22 is guided to the target fruit, the two-finger clamping jaws close to clamp the fruit, and separation of the fruit from the fruit stalk is then achieved by torsion. The end effector 22 includes a two-finger clamping mechanism, a film pressure sensor, a direct current motor, a flexible surface, a coupler 24, internal gears, and the like, with the binocular camera as the sensor for fruit identification and positioning. The concave-convex textures of the bionic fingerprints on the surfaces of the two clamping jaws of the end effector 22 prevent fruits and vegetables from sliding off during grabbing or carrying; the film pressure sensor arranged on the end effector 22 accurately controls the grabbing force, achieving the purpose of nondestructive picking. The film pressure sensor is the key control sensor for the clamping force with which the two clamping jaws clamp the fruit; the direct current motor drives the clamping jaw first finger 223 and the clamping jaw second finger 224 to clamp the fruit, and the stepper motor drives the fixing block 221 to rotate the fruit along the longitudinal axis so as to separate it.
In the picking process, the flexible materials of the first clamping jaw finger 223 and the second clamping jaw finger 224 deform when contacting with fruits, so that the contact area is enlarged, the characteristic of wrapping is generated, and the damage to the fruits and the falling of the fruits in the picking process can be reduced. In the operation process, the driving mechanism adopts flexible driving, so that the provided driving force is non-rigid, and certain self-adaptability can be generated in the clamping link, so that the picking success rate is increased, and the clamping damage rate is reduced. In order to realize continuous picking and grabbing, a three-dimensional visual perception-based fruit and vegetable pose estimation and grabbing method is adopted, so that the picking effect and the operation performance of the picking robot are effectively improved.
The invention acquires RGB-D images based on the binocular depth camera 21, fuses two-dimensional RGB image information and three-dimensional point cloud information with each other, detects fruit targets and judges the maturity, coarsely positions the mature targets to be picked, and combines the three-dimensional point cloud information to obtain precise pose information of fruit picking points, thereby determining the central coordinates of the picking points of the tail end grabbing device 2, and effectively improving the positioning and grabbing precision of the fruit targets. The height and angle of the binocular depth camera 21 are adjustable, and meanwhile, the fixing block 221 where the clamping mechanism is located can rotate infinitely, so that the capability of the tail end grabbing device 2 for adaptively grabbing and picking fruits in different shapes, postures and directions is improved. Through the co-fusion technology of fusing the rigid and flexible materials, the multi-dimensional information fusion theory is utilized to establish the fruit and vegetable pose estimation and grabbing strategy, so that flexible picking is realized, and meanwhile, the problems that the end effector 22 is easy to bend and cause grabbing failure and fruit damage due to too high flexibility are solved, so that the picking robot can work in a complex environment and in a narrow space.
By adopting the end effector 22 with a flexible grabbing function, the binocular depth camera 21, and a fruit grabbing method based on multi-source information fusion perception, the invention realizes automatic picking and harvesting of staple fruits and vegetables in high demand in agricultural scenarios. To improve the reliability of the end effector 22 and reduce damage to the grabbed object by the picking robot during grabbing and carrying, a lightweight flexible two-finger clamping mechanism is provided, and the surfaces of the two clamping jaws are provided with concave-convex textures of bionic fingerprints to prevent fruits and vegetables from sliding off during grabbing or carrying; meanwhile, a film pressure sensor is arranged on the end effector 22 to accurately control the grabbing force and achieve nondestructive picking; and the binocular depth camera 21 is used to acquire the position and posture information of the fruit, providing data support for accurate grabbing by the end effector 22. The device has the characteristics of simple structure, light weight, simple control, and accurate and efficient grabbing, and is of great significance for the commercial deployment, popularization, and application of agricultural picking robots.
Of course, the present invention is capable of various other embodiments, and those skilled in the art can make corresponding changes and modifications according to the present invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A tip grasping device of a picking robot, comprising:
the end effector comprises a shell, a fixed block, a driving mechanism and a clamping mechanism, wherein the driving mechanism is arranged in the shell and is respectively connected with the fixed block and the clamping mechanism; the clamping mechanism is arranged at one end of the shell; the other end of the shell is arranged on the fixed block;
the coupler is arranged below the fixed block and is connected with the driving mechanism; the driving mechanism drives the coupler to drive the fixed block to rotate by ±360 degrees;
the binocular depth camera is arranged on a camera support frame, and the camera support frame is arranged on the coupler through a camera flange; and
and the end cover is respectively connected with the coupler and the mechanical arm of the picking robot, a communication interface is arranged on the end cover, and the driving mechanism and the binocular depth camera are connected with the controller of the picking robot through the communication interface.
2. The end gripping device of a picking robot of claim 1, wherein the gripping mechanism is a two-finger gripping mechanism comprising a first finger of a gripping jaw and a second finger of a gripping jaw which are arranged opposite to each other, and bottoms of the first finger of the gripping jaw and the second finger of the gripping jaw are connected with the driving mechanism through a first fixing piece and a second fixing piece respectively.
3. The end gripping device of a picking robot of claim 2, wherein the first finger of the clamping jaw and the second finger of the clamping jaw are soft silicon rubber flexible material pieces, and the surfaces of the first finger of the clamping jaw and the second finger of the clamping jaw are provided with concave-convex textures of bionic fingerprints.
4. The end gripping device of a picking robot of claim 2, wherein the first finger and/or the second finger of the gripping jaw are provided with a film pressure sensor, the film pressure sensor being connected to a controller of the picking robot.
5. The end grasping device of a picking robot according to claim 2, wherein the driving mechanism comprises a direct current servo motor and a gear rack transmission mechanism, the gear rack transmission mechanism is respectively connected with the direct current servo motor, the first finger of the clamping jaw and the second finger of the clamping jaw, and the direct current servo motor drives the gear rack transmission mechanism to drive the first finger of the clamping jaw and the second finger of the clamping jaw to move towards or away from each other.
6. The end grasping device of a picking robot of claim 5 wherein the travel of the first finger of the jaw and the second finger of the jaw is 0-50mm; the single-finger clamping force of the first clamping jaw finger and the second clamping jaw finger is 40-100N, and the position repetition precision is +/-0.02 mm.
7. The end grasping device of a picking robot of claim 5, wherein the drive mechanism further comprises a stepper motor, the stepper motor being coupled to the fixed block and driving the fixed block to rotate, thereby rotating the grasping mechanism to separate the fruit.
8. The tip grabbing device of a picking robot of claim 1, wherein the binocular depth camera is located above and behind the clamping mechanism and mounted on a camera connector, the camera connector is connected to one end of the camera support frame by a rotary locating pin, the other end of the camera support frame is mounted on the camera flange, and the camera flange is mounted on the coupler.
9. A picking robot comprising an end gripping device according to any one of claims 1-8.
10. A tip grabbing method of a picking robot, characterized in that the picking robot comprises a tip grabbing device as claimed in any one of claims 1-8, the tip grabbing method comprising the steps of:
s100, shooting and obtaining RGB images and fruit depth images of fruits by a binocular depth camera of the tail end grabbing device in the moving process of the picking robot;
s200, a deep learning-based target detection algorithm is built in a controller of the picking robot, real-time detection and coarse positioning are carried out on fruit targets in the RGB images of the fruits, meanwhile, the fruit maturity is judged according to the fruit characteristics, and the center coordinates of the mature fruits are returned;
s300, utilizing the fruit depth image to reconstruct the mature fruit in a three-dimensional mode, estimating the posture of the fruit by combining the center coordinates of the mature fruit and the 3D point cloud information, and correcting the center coordinates of the mature fruit according to the posture of the fruit;
s400, dividing point clouds of a single mature fruit from the range of a fruit growing area, slicing the point clouds, measuring the posture of the fruit according to fruit section data obtained after slicing, and simultaneously fusing RGB (red, green and blue) images of the fruit and 3D point cloud information to accurately position the fruit; and
s500, obtaining accurate center coordinates and space postures of mature fruits, calibrating by hands and eyes to obtain a coordinate system conversion matrix, converting coordinates of grabbing points to the coordinate systems of the mechanical arm and the tail end grabbing device of the picking robot, sending the coordinates to the mechanical arm and the tail end grabbing device through a transmission control protocol, and controlling the mechanical arm and the tail end grabbing device to finish picking of the mature fruits.
11. The tip grabbing method of a picking robot of claim 10, further comprising:
s600, after picking of single mature fruits is completed, the mechanical arm returns to a pre-picking point, whether the fruit is the last fruit is judged, if yes, the mechanical arm returns to an initial position, the tail end grabbing device returns to an initial state, and the current task is ended; if not, picking the next mature fruit.
12. The tip grabbing method of the picking robot of claim 10, wherein the picking sequence is planned based on an optimal energy consumption mode, all adapted inverse kinematics solutions are obtained, and picking sequence planning for a plurality of fruit targets is achieved by sorting according to the energy consumption function.
CN202311483492.XA 2023-11-09 2023-11-09 Picking robot and tail end grabbing device and method thereof Pending CN117530053A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination