CN115553132A - Litchi recognition method based on visual algorithm and bionic litchi picking robot - Google Patents


Info

Publication number
CN115553132A
CN115553132A
Authority
CN
China
Prior art keywords
litchi
image
bionic
target
picking
Prior art date
Legal status
Pending
Application number
CN202211284014.1A
Other languages
Chinese (zh)
Inventor
唐昀超
邹湘军
汤威
阙天顺
严植玮
龙泽政
邹天龙
苏超云
Current Assignee
Foshan Zhongke Agricultural Robot And Intelligent Agricultural Innovation Research Institute
Original Assignee
Foshan Zhongke Agricultural Robot And Intelligent Agricultural Innovation Research Institute
Priority date
Filing date
Publication date
Application filed by Foshan Zhongke Agricultural Robot And Intelligent Agricultural Innovation Research Institute
Priority to CN202211284014.1A
Publication of CN115553132A

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D46/00 Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/30 Robotic devices for individually picking crops
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D67/00 Undercarriages or frames specially adapted for harvesters or mowers; Mechanisms for adjusting the frame; Platforms
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D91/00 Methods for harvesting agricultural products
    • A01D91/04 Products growing above the soil
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13 Receivers
    • G01S19/14 Receivers specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position


Abstract

The invention discloses a litchi recognition method based on a visual algorithm, and a bionic litchi picking robot. The recognition method comprises the steps of weight training, image preprocessing, target recognition, maturity judgment and target positioning: point cloud information of the litchi fruit target is obtained based on the binocular stereo matching principle, and the three-dimensional position of the target is calculated from it. The bionic litchi picking robot comprises a mobile platform, a positioning device, a mechanical arm, an autonomous navigation binocular camera, an autonomous recognition and positioning binocular camera and a bionic end effector. The robot can accurately recognize litchi images in a complex field environment, is equipped with a bionic end effector that integrates clamping and shearing, achieves accurate autonomous navigation and positioning, picks efficiently, and causes little damage to the litchi fruit.

Description

Litchi recognition method based on visual algorithm and bionic litchi picking robot
Technical Field
The invention belongs to the field of agricultural intelligent machinery, and particularly relates to a litchi recognition method based on a visual algorithm and a bionic litchi picking robot.
Background
China is a major agricultural country, litchi is regarded as one of the four famous fruits of South China, and litchi picking is a key production link. Because of the particularity of litchi picking, picking robots still face several technical problems. First, a litchi picking robot identifies litchi by comparing an image of the candidate target, captured by a binocular camera, against deep-learning training weights and judging from the match whether the target is litchi. Under outdoor illumination, however, the camera acquires overexposed or over-dark images, and occlusion between fruits introduces further interference, so target litchis in complex scenes cannot be effectively classified and recognized and picking fails. How to process target litchi images in a complex field environment and improve visual recognition accuracy is therefore a key technical problem that litchi picking robots urgently need to solve. Second, existing litchi picking robots grab litchi with clamping-finger end effectors; litchi grows in clusters and the fruit is soft, so finger grasping is inefficient and the shearing process can damage the fruit. The end effector therefore needs to be improved so that clamping and shearing are integrated, the fruit damage rate is reduced and picking efficiency is raised. Third, current litchi picking robots must first be placed beside a fruit tree; moving the robot manually and frequently consumes a great deal of labour and makes true automation difficult, so automatic navigation is needed to improve picking efficiency.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a litchi recognition method based on a visual algorithm and a bionic litchi picking robot that can accurately recognize litchi images in a complex field environment, is equipped with an end effector integrating bionic clamping and shearing, achieves accurate autonomous navigation and positioning, and picks efficiently with little damage to the litchi fruit.
The purpose of the invention is realized by the following technical scheme:
a bionic litchi picking robot comprises a mobile platform, a positioning device 4, a mechanical arm 6, an autonomous navigation binocular camera 8, an autonomous recognition and positioning binocular camera 9 and a bionic end effector 10; the positioning device 4 and the mechanical arm 6 are installed on the mobile platform, the autonomous navigation binocular camera 8 is installed on the mobile platform, and the autonomous recognition positioning binocular camera 9 is installed on the mechanical arm 6; the bionic end effector 10 is mounted at the end of the mechanical arm 6.
The positioning device 4 is a satellite/BeiDou navigation positioning device.
The mobile platform comprises a crawler-type walking chassis 1 and a supporting platform 2. The supporting platform 2 is arranged above the crawler-type walking chassis 1, bolted to it, and supports the mechanical arm. Compared with traditional walking wheels, the tracks of the crawler-type walking chassis 1 have a larger ground-contact area and lower pressure per unit area, adapt flexibly to various terrains, and work stably in an orchard environment.
A litchi collecting box 3 is arranged on the side edge of the mobile platform and used for storing picked litchi; a litchi receiving device 11 is arranged below the bionic end effector 10, and an outlet of the litchi receiving device 11 is communicated with the litchi collecting box 3.
The autonomous recognition and positioning binocular camera 9 is used for collecting litchi images, preprocessing the images, and recognizing and three-dimensionally positioning target litchi; it is installed on the mechanical arm through the support 7. The autonomous navigation binocular camera 8 is used for building and loading an orchard map and is installed on the mobile platform through the support 5.
The bionic end effector comprises an end connecting seat 12, an L-shaped clamping finger seat 13, a spring 14, a movable clamping finger 15, a bionic olecranon-type movable blade 16, a movable blade pressing plate 17, a fixed blade upper seat 18, an electromagnet 19, a base 20, a fixed clamping finger 21 and a bionic olecranon-type fixed blade 22. The end connecting seat 12 is fixed at the end of the mechanical arm and is connected with the base 20 through an aluminium profile. The electromagnet 19 and the fixed clamping finger 21 are arranged on the upper surface of the base 20, and the spring 14 is mounted on the upper part of the electromagnet to reset it. The bionic olecranon-type fixed blade 22 is fixed on the upper surface of the fixed clamping finger 21 through the fixed blade upper seat 18. The two sides of the L-shaped clamping finger seat 13 are bolted to the electromagnet 19; the bionic olecranon-type movable blade 16 is fixed on one side of the inner L of the clamping finger seat 13 by the movable blade pressing plate 17, and the movable clamping finger 15 is fixed on the other side.
The bionic olecranon-type fixed blade 22 corresponds to the bionic olecranon-type movable blade 16. The tip of the fixed blade 22 is curved like an eagle's upper beak; the movable blade 16 is shaped like the lower beak, and the outer end of its cutting edge is chamfered at 30-40 degrees.
The movable clamping finger corresponds to the fixed clamping finger; both are 3D-printed from PC-ABS material, and their surfaces carry small bionic octopus-style suckers made of CFRP. Each small sucker is funnel-shaped with its wide end outwards; the suckers are laid out in multiple rows left and right, with adjacent rows staggered.
The receiving device 11 adopts a bionic frog-mouth design; the mouth is opened and closed by an electromagnet.
The working principle of the picking robot is as follows. (1) The bionic end effector uses an electromagnet 19 as its power device; the electromagnet 19 satisfies the required shearing force and stroke, and during picking it is energized to pull the clamping finger seat 13, driving the movable clamping finger 15 and the bionic olecranon-type movable blade 16 to complete the integrated clamp-and-shear operation. (2) The blades are of bionic eagle-beak design: the fixed blade is the bionic upper beak, whose cutting edge carries an arc-shaped vertical protrusion that enlarges the biting area so the target does not easily slip out when gripped. (3) The movable and fixed clamping fingers are both 3D-printed from PC-ABS, a thermoplastic alloy of polycarbonate (PC) and acrylonitrile-butadiene-styrene (ABS) that combines the excellent properties of both materials: it withstands temperatures up to 135 °C, so it does not deform in a hot field environment, and its density of about 1.2 g/cm³ effectively reduces the overall mass of the bionic end effector. (4) The surfaces of the movable and fixed clamping fingers are designed as small bionic octopus-tentacle suckers made of CFRP, whose high friction coefficient, light weight, high strength and corrosion resistance suit the complex field environment; when the fingers close, the suckers squeeze out air and generate suction, making the grip more stable and preventing the litchi fruit from dropping. (5) The receiving device adopts a bionic frog-mouth design: the mouth stays closed during picking so that an open mouth cannot knock the target litchi away, and when picking finishes and the arm returns to the recognition posture, the mouth opens to receive the picked litchi, which then falls steadily into the collecting box through a serpentine pipe.
A litchi recognition method based on an image visual algorithm comprises the following steps:
(A) Training the weights: collecting litchi sample images, and rotating, offsetting, mirroring and cropping them to expand the image data set; then labelling the litchi in the images and training the images and labels with the optimized YOLOv7 deep learning network to obtain the training weights;
(B) Image preprocessing: in the picking mode, litchi fruit images are acquired with the autonomous recognition and positioning binocular camera, and overexposed-image preprocessing or over-dark-image preprocessing is then applied according to the field illumination conditions;
(C) Target recognition: pushing the preprocessed litchi image into the optimized YOLOv7 recognition network, matching it against the training weights obtained in step (A), and recognizing litchi fruit targets in the image;
(D) Judging the maturity: sorting the identified litchi fruit targets from left to right, collecting the RGB (red, green, blue) characteristics of each target, calculating the RGB variance and the difference between the R and G components, judging litchi maturity from them, feeding the maturity information back to the control center, and deciding whether the fruit is to be picked;
(E) Positioning the target: when the control center judges that a litchi fruit target is ripe, the autonomous recognition and positioning binocular camera obtains the target information, a depth map of the target is obtained through the camera's binocular stereo matching principle, and the point cloud information of the litchi fruit target is then obtained through three-dimensional reconstruction.
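Step (E) can be illustrated with a minimal sketch of the binocular stereo positioning stage. The code below is not the patent's implementation: it assumes rectified images, a dense disparity map already produced by stereo matching, and known focal length f, baseline B and principal point (cx, cy); it converts disparity to a point cloud and reduces a detected target to a 3-D centroid (a stand-in for the picking point handed to the arm).

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Convert a disparity map (pixels) to an (H, W, 3) point cloud in
    camera coordinates using the pinhole stereo model:
        Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f.
    Invalid (non-positive) disparities yield NaN points."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(disparity > 0, f * baseline / disparity, np.nan)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.dstack([x, y, z])

def target_centroid(points, mask):
    """3-D centroid of the cloud inside a detection mask, ignoring
    invalid (NaN) points."""
    pts = points[mask]
    pts = pts[~np.isnan(pts).any(axis=1)]
    return pts.mean(axis=0)
```

In practice the disparity map would come from a stereo matcher (e.g. semi-global matching) on the rectified image pair, and the mask from the YOLOv7 detection box.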
In the step (A), the optimized YOLOv7 deep learning network is obtained by optimizing the neural network structure and training parameters of YOLOv7: the structure is optimized around a CNN-Transformer module that replaces the original backbone of the YOLOv7 network, and the feature extraction part of the CNN-Transformer module is modified with down-sampling and an increased number of global pooling operations so that the module supports feature reuse and feature propagation; the Head module of the YOLOv7 structure is replaced with a DetectX Head module, whose Conv modules are modified to reduce the parameter count.
In the step (B), the preprocessing of an overexposed litchi image is as follows: a bilateral filter is applied for edge-preserving denoising; the image is converted into the HSV color space, histogram equalization is applied to the V component to obtain a V component with smoothly varying illumination intensity, a power transform is applied to the V component to uniformly lower the overall brightness, the processed V component replaces the original one to obtain an image with reduced illumination intensity, and the image is converted back into the RGB color space; finally, an optimized multi-scale Retinex algorithm with color restoration (MSRCR) is applied to recover the colors, yielding an image with uniform illumination and colors that match the true values. The invention's optimization of MSRCR lies in the choice of scales and in replacing Gaussian filtering with adaptive bilateral filtering.
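A minimal sketch of the V-channel stage of this overexposure pipeline (histogram equalization followed by a darkening power transform) is given below. The bilateral filtering, HSV conversion and MSRCR color restoration steps are omitted, and the gamma value is an illustrative choice, not the patent's.

```python
import numpy as np

def equalize_hist(v):
    """Histogram-equalize an 8-bit intensity channel (e.g. the HSV V
    component) to smooth abrupt illumination changes.
    Note: a perfectly constant channel is not handled (degenerate CDF)."""
    hist = np.bincount(v.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[v]

def power_transform(v, gamma=1.5):
    """Power-law transform; gamma > 1 uniformly lowers the overall
    brightness of an overexposed V channel."""
    norm = v.astype(np.float64) / 255.0
    return np.round(255.0 * norm ** gamma).astype(np.uint8)

def preprocess_overexposed_v(v, gamma=1.5):
    """Sketch of the V-channel stage for overexposed images:
    equalize, then darken with a power transform."""
    return power_transform(equalize_hist(v), gamma)
```

The processed V channel would then replace the original one before converting back to RGB and applying MSRCR color restoration.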
In the step (B), the preprocessing of an over-dark litchi image is as follows: a bilateral filter is applied for edge-preserving denoising; the image is converted into the HSV color space and logarithmic and wavelet transforms are applied to the V component to uniformly stretch the illumination intensity; gamma correction is then applied to the converted RGB image; finally, an HSL-space color saturation adaptive enhancement algorithm yields an image with improved illumination intensity and colors matching the true values. The HSL-space algorithm computes a gain parameter in HSL space and then adaptively enhances the R, G and B components to strengthen the original color of each pixel.
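The illumination-stretching stage of this over-dark pipeline can be sketched as below; the bilateral filtering, wavelet transform and HSL saturation enhancement steps are omitted, and the gamma value is illustrative.

```python
import numpy as np

def log_stretch(v):
    """Logarithmic transform on an 8-bit V channel: expands dark tones
    and compresses highlights, uniformly stretching the illumination of
    an under-exposed image."""
    c = 255.0 / np.log1p(255.0)  # scale so 255 maps to 255
    return np.round(c * np.log1p(v.astype(np.float64))).astype(np.uint8)

def gamma_correct(rgb, gamma=0.6):
    """Gamma correction applied to the RGB image after the HSV-domain
    stretch; gamma < 1 brightens mid-tones."""
    norm = rgb.astype(np.float64) / 255.0
    return np.round(255.0 * norm ** gamma).astype(np.uint8)
```

In the full pipeline the stretched V channel would be recombined with H and S before the RGB-domain gamma correction and HSL saturation enhancement.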
In the step (D), litchi maturity is judged by sorting the recognition targets from left to right, calculating the RGB variance of each target and the difference between its R and G components, and setting a maturity threshold i from the statistical R-G difference of ripe litchi fruits: if R - G >= i, the litchi is judged ripe and can be picked; if R - G < i, it is judged unripe and the calculation skips to the next recognition target.
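The maturity rule of step (D) can be sketched as follows; the threshold value i used here is an illustrative placeholder, not the patent's calibrated figure.

```python
import numpy as np

def is_ripe(roi_rgb, threshold_i=30.0):
    """Maturity test for one detected litchi region: compare the mean
    R-G difference against the ripeness threshold i
    (R - G >= i -> ripe). Returns (ripe, difference)."""
    r = roi_rgb[..., 0].astype(np.float64)
    g = roi_rgb[..., 1].astype(np.float64)
    diff = r.mean() - g.mean()
    return diff >= threshold_i, diff

def sort_targets_left_to_right(boxes):
    """Order detections left to right by the x coordinate of each
    bounding box (x, y, w, h) before maturity scoring."""
    return sorted(boxes, key=lambda b: b[0])
```

A ripe litchi region is predominantly red (large R - G), an unripe one predominantly green (negative R - G), which is why a single threshold on the mean difference separates them.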
A litchi picking method uses the above bionic litchi picking robot and the litchi recognition method based on an image visual algorithm, and comprises the following steps:
(1) Calibrating the cameras: the autonomous navigation binocular camera 8 and the autonomous recognition and positioning binocular camera 9 are each calibrated; monocular calibration yields the two intrinsic matrices and distortion matrices of the left and right cameras; binocular calibration then yields the reprojection matrix and the rectified binocular camera; finally the conversion matrix from the pixel coordinate system to the space coordinate system is solved, giving the transformation between the camera coordinate system and the robot coordinate system;
(2) Mapping and autonomous navigation at the orchard site: the autonomous navigation binocular camera 8 and the optimized ORB-SLAM3 algorithm are used to map the litchi orchard; the litchi picking robot then runs the optimized ORB-SLAM3 algorithm and a path planning algorithm, positioning and navigating autonomously with the information fed back by the satellite/BeiDou navigation positioning device;
(3) Target recognition: when the autonomous recognition and positioning binocular camera 9 detects litchi fruits on the roadside, it feeds the information back to the control center, the travelling route is replanned, and the robot autonomously navigates to, and stops in front of, the target litchi tree and enters the picking mode; the litchi recognition method based on the image visual algorithm is then applied (weight training, image preprocessing, target recognition, maturity judgment and target positioning), the point cloud information of the litchi fruit target is obtained by the binocular stereo matching principle, and the three-dimensional position of the target is calculated;
(4) Picking: according to the three-dimensional position of the litchi fruit target, the control center plans the motion track of the mechanical arm to the target and drives the bionic end effector to the stem picking point of the litchi; the control center then signals the electromagnet, which is energized and pulls the clamping finger seat of the bionic end effector so that the movable and fixed clamping fingers clamp the fruit stalk, and the bionic olecranon-type movable and fixed blades close to execute the shearing action, integrating clamping and shearing; the picked litchis reach the collecting box through the receiving device.
In the step (1), monocular and binocular calibration of each binocular camera solves the camera parameters: a geometric model of camera imaging is established, monocular calibration yields the intrinsic matrices, extrinsic matrices and distortion matrices of the left and right cameras, and binocular calibration then yields parameters such as the reprojection matrix and the rectification mapping tables. The calibration method used by the invention is Zhang Zhengyou's checkerboard calibration method.
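Once the intrinsic matrix K and the hand-eye transform (R, t) are available from this calibration, a pixel with known depth can be converted into robot coordinates. The sketch below assumes a standard pinhole camera model; it is not the patent's exact implementation.

```python
import numpy as np

def pixel_to_camera(u, v, depth, K):
    """Back-project a pixel (u, v) with known depth Z into camera
    coordinates using the intrinsic matrix K from monocular
    (e.g. Zhang checkerboard) calibration:
        p_cam = Z * K^-1 @ [u, v, 1]^T."""
    uv1 = np.array([u, v, 1.0])
    return depth * (np.linalg.inv(K) @ uv1)

def camera_to_robot(p_cam, R, t):
    """Hand-eye transform from the camera frame to the robot base
    frame: p_robot = R @ p_cam + t, with R, t from hand-eye
    calibration."""
    return R @ p_cam + t
```

A pixel at the principal point maps onto the optical axis, so its camera-frame X and Y are zero regardless of depth, which is a quick sanity check on K.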
In the step (2), the optimized ORB-SLAM3 algorithm optimizes the ORB-SLAM3 tracking thread with the optimized YOLOv7 deep learning network: regions of interest are identified with the optimized YOLOv7 network, a distance threshold L is set in combination with the point cloud information, and if the inter-frame point-cloud displacement l of an ORB feature point satisfies l >= L, the point is judged dynamic and deleted from the ORB feature points, leaving only static points.
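The dynamic-point test can be sketched as below, assuming matched ORB features with per-frame 3-D point-cloud positions; the threshold value is illustrative, not the patent's.

```python
import numpy as np

def filter_dynamic_points(prev_pts, curr_pts, L=0.05):
    """Remove dynamic ORB feature points: a matched feature whose 3-D
    point-cloud position moved by l >= L between frames is judged
    dynamic and dropped, keeping only static points for mapping.
    L (in the point cloud's units, e.g. metres) is illustrative."""
    prev_pts = np.asarray(prev_pts, dtype=np.float64)
    curr_pts = np.asarray(curr_pts, dtype=np.float64)
    l = np.linalg.norm(curr_pts - prev_pts, axis=1)  # per-point displacement
    static = l < L
    return curr_pts[static], static
```

In the full system this filter would run inside the tracking thread after feature matching, so that dynamic points never enter the map or the BoW dictionary.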
Compared with the prior art, the invention has the advantages and effects that:
(1) The optimized YOLOv7 deep learning network solves the problem of gradient repetition, reduces the number of channel traversals, and improves the efficiency of convolution and global pooling, making the module lighter while maintaining accuracy. The existing CNN-Transformer can capture long-distance dependencies between patches and enlarge the receptive field, but it relies on a global attention mechanism; to decouple this part, the invention replaces and modifies the Head of the YOLOv7 network structure with a DetectX Head module, reducing the parameter count, improving computing capability and network real-time performance, and, while maintaining speed, improving the decoupling ability of the neural network and strengthening the independence and function of each module. Training with the optimized YOLOv7 network yields the litchi recognition weights with a lighter network structure, shorter training time and a larger receptive field; it can fully extract the detailed features of litchi and also improves recognition accuracy and label smoothing.
(2) The litchi image preprocessing methods target images that are overexposed or over-dark under field illumination; they effectively reduce the influence of the field illumination environment on the litchi images the camera collects and improve the robustness of image recognition and the accuracy of litchi fruit recognition in the field.
(3) The invention adopts an optimized ORB-SLAM3 algorithm. Because the orchard is a field environment, dynamic objects appear during mapping and navigation, and conventional ORB-SLAM3 adds dynamic points to the BoW dictionary, making mapping and navigation inaccurate. The optimized ORB-SLAM3 algorithm keeps the map information in the region of interest accurate and dense and achieves robust, accurate mapping in the dynamic orchard environment; combined with the information feedback of the satellite/BeiDou navigation device, the picking robot navigates and positions with higher precision while autonomously following the planned path, realizing more stable autonomous navigation.
(4) Litchi maturity is judged with OpenCV functions. Because litchi shows an obvious red-green difference between the unripe and ripe states, whether a litchi is ripe is judged by calculating its RGB variance and R - G value, which prevents unripe litchi from being picked by mistake.
Drawings
Fig. 1 is a schematic structural diagram of a bionic litchi picking robot.
Fig. 2 is a schematic structural diagram of the bionic end effector.
Fig. 3 is a schematic structural view of the robot arm.
Fig. 4 is a work flow chart of the litchi picking method.
FIG. 5 is a diagram illustrating the pre-processing effect of an overexposed litchi image, wherein FIG. 5 (1) is a litchi overexposed image; fig. 5 (2) is a processing result diagram of the litchi overexposure image.
Fig. 6 is a diagram illustrating the preprocessing effect of the excessively dark litchi image, wherein fig. 6 (1) is an excessively dark litchi image; fig. 6 (2) is a processing result diagram of the litchi excessively dark image.
In the figure: 1-walking chassis, 2-supporting platform, 3-collecting box, 4-positioning device, 5-support, 6-mechanical arm (6a-base, 6b-base, 6c-shoulder, 6d-elbow, 6e-wrist 1, 6f-wrist 2, 6g-wrist 3), 7-support, 8-autonomous navigation binocular camera, 9-autonomous recognition and positioning binocular camera, 10-bionic end effector, 11-receiving device, 12-end connecting seat, 13-L-shaped clamping finger seat, 14-spring, 15-movable clamping finger, 16-bionic olecranon-type movable blade, 17-movable blade pressing plate, 18-fixed blade upper seat, 19-electromagnet, 20-base, 21-fixed clamping finger, 22-bionic olecranon-type fixed blade.
Detailed Description
In order that the invention may be readily understood, reference will now be made to specific embodiments of the invention. The following examples help those skilled in the art to further understand the invention but do not limit it in any way. It should be noted that a person skilled in the art can make many variations and modifications without departing from the inventive concept, and all of them fall within the scope of the invention.
Example 1
As shown in figure 1, the bionic litchi picking robot comprises a mobile crawler chassis 1 and a supporting platform 2 connected to it; the supporting platform 2 carries the satellite/BeiDou navigation positioning device 4 and the mechanical arm 6, and the litchi collecting box 3 is installed on its side. As shown in fig. 3, the mechanical arm 6 is a six-degree-of-freedom arm comprising a base 6a, a base 6b, a shoulder 6c, an elbow 6d and wrists 6e, 6f and 6g, connected in sequence; the base 6a is fixed on the mobile platform, the wrist 6g is connected to the bionic end effector through a flange, and the autonomous recognition and positioning binocular camera 9 is fixed on the wrist 6f. The autonomous navigation camera support 5 is installed on the supporting platform 2 and carries the autonomous navigation binocular camera 8; the camera support 7 is installed on the wrist 6f of the mechanical arm and carries the autonomous recognition and positioning binocular camera 9; the bionic end effector 10 is carried at the end of the mechanical arm, with the receiving device 11 installed below it.
As shown in fig. 2, the bionic end effector includes an end connection seat 12, a clamping finger seat 13, a spring 14, a movable clamping finger 15, a bionic olecranon-type movable blade 16, a movable blade pressing plate 17, a fixed blade upper seat 18, an electromagnet 19, a base 20, a fixed clamping finger 21 and a bionic olecranon-type fixed blade 22. The end connection seat 12 connects the bionic end effector to the arm wrist 6g and at the same time fixes and positions the end effector 10. The movable clamping finger 15 and the fixed clamping finger 21 are both 3D-printed from PC-ABS, a thermoplastic alloy of polycarbonate (PC) and acrylonitrile-butadiene-styrene (ABS); the finger surfaces are designed as small bionic octopus suckers made of CFRP, which increases the clamping stability on the litchi fruit stalk. The electromagnet 19 and the fixed clamping finger 21 are fixed on the base 20, and the clamping finger seat 13 is connected to the electromagnet 19; the movable clamping finger 15 is mounted on one side of the inner L of the clamping finger seat 13, and the bionic olecranon-type movable blade 16 is fixed on the other side by the movable blade pressing plate 17. When energized, the electromagnet 19 generates a magnetic force that pulls the clamping finger seat 13 and opens or closes the bionic olecranon-type movable blade 16 and the movable clamping finger 15; the olecranon-type cutter design effectively prevents the litchi stalk from sliding out of the shearing range during cutting. The bionic retractable receiving device catches litchis quickly, reducing the operating time of the mechanical arm and speeding up picking, and the mechanical arm adopts an integrated light-drive-control network controller.
As shown in fig. 4, before the picking operation, monocular and binocular calibration is performed on each binocular camera. A checkerboard calibration plate is erected, and the robot moves so that the cameras photograph the plate from all angles for monocular calibration, which yields the intrinsic and extrinsic parameters and distortion matrices of the left and right cameras. Binocular calibration is then performed to obtain the reprojection matrix of the rectified binocular camera and the relation between pixel coordinates and object coordinates. Finally, hand-eye calibration determines the conversion matrix from the pixel coordinate system to the spatial manipulator coordinate system.
As shown in fig. 4, during picking the autonomous navigation binocular camera 8 plans a path by combining the map established by the optimized ORB-SLAM3 algorithm of the invention, which fuses binocular vision with an IMU, with information fed back by satellite and Beidou devices, realizing high-precision autonomous navigation in a dynamic environment. The autonomous recognition and positioning binocular camera 9 detects litchi on roadside fruit trees and, through signal communication, controls the walking chassis 1 to navigate autonomously to the vicinity of the fruit tree and enter the picking posture. Litchi images are then captured by the autonomous recognition and positioning binocular camera 9 and filtered to remove noise interference. The illumination processing algorithm developed by the invention reduces the risk that overexposure or underexposure of the image lowers the recognition accuracy; the image processing effect is shown in figs. 5 (1), 5 (2), 6 (1) and 6 (2). The preprocessed image is pushed into the YOLOv7 training model, whose neural network structure and parameters have been optimized, to obtain the picking targets. Whether a picking target is ripe is judged by calculating the variance of its RGB values and the difference between the R and G component values, which determines whether it is picked. Depth information is obtained by the binocular stereo matching principle of the camera, and litchi point cloud information is obtained through three-dimensional reconstruction. The point cloud information of the litchi is transmitted to the control center, and the motion trajectory of the mechanical arm 6 is planned by computing the forward and inverse kinematic solutions of each joint motor. The mechanical arm drives the bionic end effector 10 to the picking position; the control center sends a signal to the electromagnet 19, which is energized and pulls the clamping finger seat 13, so that the movable clamping finger 15 and the fixed clamping finger 21 first clamp the fruit stalk and the movable blade 16 and the fixed blade 22 then close to execute the shearing action, integrating clamping and shearing. After picking, the mechanical arm returns to the recognition pose, the bionic frog-mouth receiving device opens, the clamping fingers release the litchi, and the picked litchi is transferred to the litchi collecting box 3. After all targets have been picked, the robot re-enters the autonomous navigation state and continues the operation.
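The "positive and negative solutions of each joint motor" mentioned above are the forward and inverse kinematic solutions. For a simplified planar two-link arm the closed-form solutions can be sketched as follows; the link lengths are hypothetical, and the real mechanical arm 6 has more joints, but the round-trip idea is the same.

```python
import math

# Hypothetical link lengths (m) of a simplified planar 2-link arm.
L1, L2 = 0.45, 0.35

def inverse_kinematics(x, y):
    """Negative (inverse) solution: analytic elbow angles for a reachable (x, y)."""
    cos_q2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

def forward_kinematics(q1, q2):
    """Positive (forward) solution: joint angles to end-effector position."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

# Round trip: the forward solution of the inverse solution recovers the target.
check = forward_kinematics(*inverse_kinematics(0.5, 0.3))
```

A trajectory planner repeats the inverse solution along the planned path and checks each configuration against joint limits.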
The above description is only an example of the present invention, but the present invention is not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention are all equivalent substitutions and are intended to be included within the scope of the present invention.

Claims (10)

1. A litchi recognition method based on an image visual algorithm is characterized by comprising the following steps:
(A) Weight training: collecting litchi sample images, and rotating, offsetting, mirroring and cropping them to expand the image data set; then labeling the litchi in the images, and training the images and labels with the optimized YOLOv7 deep learning network to obtain the training weight;
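The four augmentation operations of step (A) can be sketched on a toy 3x3 "image" represented as nested lists; a real pipeline would operate on image arrays, but the transformations are the same.

```python
def mirror(image):
    """Horizontal mirror: reverse each row."""
    return [row[::-1] for row in image]

def rotate90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*image[::-1])]

def offset(image, dx, fill=0):
    """Shift pixels right by dx columns, padding the left edge."""
    return [[fill] * dx + row[:len(row) - dx] for row in image]

def crop(image, top, left, h, w):
    """Take an h-by-w window starting at (top, left)."""
    return [row[left:left + w] for row in image[top:top + h]]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
augmented = [mirror(img), rotate90(img), offset(img, 1), crop(img, 0, 0, 2, 2)]
```

Each augmented copy keeps the same label after the corresponding geometric transform is applied to its bounding boxes.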
(B) Image preprocessing: in the picking mode, acquiring images of litchi fruits with the autonomous recognition and positioning binocular camera, and then performing overexposed-image preprocessing or over-dark-image preprocessing on the litchi image according to the field illumination conditions;
(C) Target identification: pushing the preprocessed litchi image into an optimized YOLOv7 neural recognition network, comparing and matching the litchi image with the training weight obtained in the step (A), and recognizing a litchi fruit target from the litchi image;
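The claim does not spell out how overlapping detections from the YOLOv7 network are resolved, but YOLO-family detectors conventionally post-process their raw boxes with non-maximum suppression (NMS). A minimal sketch under that assumption, with hypothetical box coordinates and confidences:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_thresh=0.5):
    """Keep the highest-confidence box among heavily overlapping detections."""
    detections = sorted(detections, key=lambda d: d["conf"], reverse=True)
    kept = []
    for det in detections:
        if all(iou(det["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(det)
    return kept

dets = [
    {"box": (10, 10, 50, 50), "conf": 0.9},
    {"box": (12, 12, 52, 52), "conf": 0.8},   # overlaps the first box
    {"box": (100, 10, 140, 50), "conf": 0.7},
]
kept = nms(dets)
```

The surviving boxes are the litchi fruit targets passed on to the maturity-judgment step.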
(D) Maturity judgment: sorting the recognized litchi fruit targets from left to right, collecting the RGB (red, green, blue) features of each target, calculating the variance of the RGB values and the difference between the R and G component values, judging the litchi maturity from the variance and the R-G difference, feeding the maturity information back to the control center, and deciding whether the litchi fruit is to be picked;
(E) Positioning a target: when the control center judges that the litchi fruit target is mature, the autonomous recognition and positioning binocular camera obtains the litchi fruit target information, the depth map of the litchi fruit target is obtained through the camera binocular stereo matching principle, and then the point cloud information of the litchi fruit target is obtained through three-dimensional reconstruction.
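Per matched pixel pair, the binocular stereo matching principle reduces to the triangulation formula Z = f·B/d, where f is the focal length in pixels, B the baseline and d the disparity. A sketch with hypothetical calibration values; real values come from the calibration of claim 9, step (1):

```python
# Hypothetical stereo parameters for illustration only.
FOCAL_PX = 800.0       # focal length in pixels
BASELINE_M = 0.06      # distance between the left and right cameras (m)
CX, CY = 320.0, 240.0  # principal point

def disparity_to_point(u, v, disparity):
    """Triangulate one matched pixel pair into a camera-frame 3-D point."""
    z = FOCAL_PX * BASELINE_M / disparity   # depth from disparity
    x = (u - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return (x, y, z)

# A few matched pixels on the litchi target become a small point cloud.
matches = [(330, 250, 40.0), (335, 248, 40.5), (328, 255, 39.5)]
cloud = [disparity_to_point(u, v, d) for (u, v, d) in matches]
```

Three-dimensional reconstruction repeats this over the whole depth map of the target to produce the point cloud information handed to the control center.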
2. The litchi recognition method based on image vision algorithm as claimed in claim 1, wherein: in the step (A), the optimized YOLOv7 deep learning network is obtained by optimizing the neural network structure and the training network parameters of YOLOv7, that is: the structure is optimized on the basis of a CNN-Transformer module, which replaces the original backbone module of the YOLOv7 neural network; the feature-extraction part of the CNN-Transformer module is modified with down-sampling and the number of global pooling operations is increased, so that the module supports feature reuse and feature propagation; and the Head module of the YOLOv7 neural network structure is replaced with a DetectX Head module, in which the Conv module is modified to reduce the number of parameters.
3. The litchi recognition method based on image vision algorithm of claim 1, wherein: in the step (B), the preprocessing of the overexposed litchi image is to apply a bilateral filter function for edge-preserving denoising, then convert the image into the HSV color space and apply a histogram equalization operation to the V component to obtain a V-component image with smooth illumination-intensity variation; a power transformation is applied to the V-component image to uniformly reduce the overall brightness, the processed V component replaces the original one to obtain an image of reduced illumination intensity, and the image is converted back into the RGB color space; color recovery is then performed with an optimized multi-scale Retinex algorithm fused with color recovery factors, yielding an image with uniform illumination and colors conforming to the true values;
the method comprises the steps of firstly adopting a bilateral filter function to carry out edge protection and denoising on an image, then converting the image into an HSV color space, adopting logarithmic transformation and wavelet transformation operations on a V component to obtain an image with uniform illumination intensity, then using gamma correction on the converted RGB image, and then adopting a color saturation self-adaptive enhancement algorithm based on an HSL space to obtain an image with improved illumination intensity and color according with a true value.
4. The litchi identification method based on image vision algorithm as claimed in claim 1, wherein: in the step (D), the litchi maturity judgment is to sort the identification targets from left to right, calculate the RGB variance values of the identification targets and the difference between the R component and the G component, set a maturity threshold i according to the statistical difference between the R and the G of the ripe litchi fruits, and judge that the litchi is ripe and can be picked if R-G > = i; and if R-G < i, judging the litchi to be immature litchi, and skipping to the calculation of the next recognition target.
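The R-G threshold test of claim 4 can be sketched directly; the threshold value i = 40 and the sample pixels are hypothetical, since the claim derives i statistically from ripe-fruit data.

```python
from statistics import pvariance

RIPENESS_THRESHOLD_I = 40  # hypothetical; the real i is set statistically

def judge_maturity(pixels, i=RIPENESS_THRESHOLD_I):
    """pixels: (R, G, B) samples of one recognized target.
    Returns (pickable, overall RGB variance)."""
    mean_r = sum(p[0] for p in pixels) / len(pixels)
    mean_g = sum(p[1] for p in pixels) / len(pixels)
    variance = pvariance([c for p in pixels for c in p])
    return mean_r - mean_g >= i, variance

ripe_sample = [(180, 60, 50), (190, 70, 55)]    # reddish pericarp samples
unripe_sample = [(90, 140, 60), (85, 150, 70)]  # greenish pericarp samples

ripe_ok, _ = judge_maturity(ripe_sample)
unripe_ok, _ = judge_maturity(unripe_sample)
```

Targets failing the test are skipped and the next left-to-right target is evaluated, matching the claim's control flow.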
5. A bionic litchi picking robot, characterized in that: the litchi recognition method based on the image vision algorithm of any one of claims 1 to 4 is adopted; the picking robot comprises a mobile platform, a positioning device, a mechanical arm, an autonomous navigation binocular camera, an autonomous recognition and positioning binocular camera and a bionic end effector; the positioning device and the mechanical arm are installed on the mobile platform, the autonomous navigation binocular camera is installed on the mobile platform, and the autonomous recognition and positioning binocular camera is installed on the mechanical arm; the bionic end effector is arranged at the end of the mechanical arm.
6. The biomimetic litchi picking robot according to claim 5, wherein: the bionic end effector comprises an end connecting seat, an L-shaped clamping finger seat, a movable clamping finger, a bionic olecranon-shaped movable blade, a movable blade pressing plate, a fixed blade upper seat, an electromagnet, a base, a fixed clamping finger and a bionic olecranon-shaped fixed blade; the tail end connecting seat is fixed at the tail end of the mechanical arm; the tail end connecting seat is connected with the base through an aluminum profile; the electromagnet and the fixed clamping finger are arranged on the upper surface of the base; the bionic olecranon-shaped fixed blade is fixed on the upper surface of the fixed clamping finger through the fixed blade upper seat; two surfaces of the L-shaped clamping finger seat are connected with the electromagnet; the bionic olecranon-shaped movable blade is fixed on one side of the inner side of the L shape of the clamping finger seat through a movable blade pressing plate, and the movable clamping finger is fixed on the other side of the inner side of the L shape of the clamping finger seat.
7. The biomimetic litchi picking robot according to claim 6, wherein: the bionic olecranon-shaped fixed blade corresponds to the bionic olecranon-shaped movable blade; the tool tip of the bionic olecranon-shaped fixed blade is a bent olecranon shape; the bionic olecranon-shaped movable blade has the shape of an olecranon lower jaw, with a chamfer of 30-40 degrees at the outer end of the blade; the movable clamping finger corresponds to the fixed clamping finger, both are formed by 3D printing of PC-ABS material, their surfaces adopt CFRP material, and the surface structure imitates the small suckers of octopus tentacles; the small suckers are funnel-shaped with the large end facing outwards, distributed in multiple rows in the left-right direction, and adjacent rows of small suckers are arranged in a staggered manner.
8. The biomimetic litchi picking robot according to claim 5, wherein: a litchi collecting box is arranged on the side of the mobile platform; a litchi receiving device of bionic frog-mouth design is arranged below the bionic end effector, the opening and closing of the frog mouth being realized by an electromagnet; and the outlet of the litchi receiving device communicates with the litchi collecting box.
9. A litchi picking method, characterized in that picking is performed with the bionic litchi picking robot of any one of claims 5 to 8, comprising the following steps:
(1) Camera calibration: calibrating the autonomous navigation binocular camera and the autonomous recognition and positioning binocular camera respectively; performing monocular calibration on the cameras to acquire the intrinsic matrices and distortion matrices of the left and right cameras; then performing binocular calibration on the binocular camera to obtain the reprojection matrix and a corrected binocular camera; and solving the transformation matrix from the pixel coordinate system to the space coordinate system to obtain the transformation relation between the camera coordinate system and the robot coordinate system;
(2) Building a picture and performing autonomous navigation on an orchard site: using an autonomous navigation binocular camera and an optimized ORB-SLAM3 algorithm to map a litchi orchard site; then, the litchi picking robot runs an optimized ORB-SLAM3 algorithm and a path planning algorithm, and carries out positioning and autonomous navigation through information fed back by a satellite and a Beidou navigation positioning device;
(3) Target recognition: when the autonomous recognition and positioning binocular camera detects litchi fruits at the roadside in real time, information is fed back to the control center, the traveling route is re-planned, and the litchi picking robot enters the picking mode after autonomously navigating to and stopping in front of the target litchi fruit tree; weight training, image preprocessing, target recognition, maturity judgment and target positioning are performed with the litchi recognition method based on the image vision algorithm of any one of claims 1 to 4, the point cloud information of the litchi fruit target is obtained based on the binocular stereo matching principle, and the three-dimensional position of the litchi fruit target is calculated;
(4) Picking: the control center of the litchi picking robot plans a motion track from the mechanical arm to a litchi fruit target according to the three-dimensional position of the litchi fruit target, and drives the bionic end effector to the position of a stem picking point of a litchi fruit; then the control center transmits a signal to the electromagnet, the electromagnet is electrified and pulls a clamping finger seat of the bionic end effector to enable the movable clamping finger and the fixed clamping finger to clamp the fruit stalks, and the bionic olecranon-type movable blade and the bionic olecranon-type fixed blade are closed to execute shearing action, so that clamping and shearing are integrated; the picked litchis reach the litchis collecting box through the receiving device.
10. The litchi picking method according to claim 9, wherein: in the step (2), the optimized ORB-SLAM3 algorithm optimizes the tracking-thread algorithm of ORB-SLAM3 with the optimized YOLOv7 deep learning network, that is: a region of interest is identified by the optimized YOLOv7 deep learning network, and a distance threshold L is then set in combination with the point cloud information; if the inter-frame point-cloud position difference l of an ORB feature point satisfies l >= L, the point is judged to be a dynamic point and is deleted from the ORB feature points, leaving only the static points.
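The dynamic-point filter of claim 10 can be sketched as follows; the threshold L and the point coordinates are hypothetical, and a real implementation would associate points by ORB descriptor matching rather than by index alignment.

```python
import math

DIST_THRESHOLD_L = 0.05  # hypothetical threshold L, in metres

def filter_dynamic_points(prev_points, cur_points, L=DIST_THRESHOLD_L):
    """Keep only feature points whose 3-D position moved less than L between
    frames; points whose displacement reaches L are dynamic and are dropped."""
    static = []
    for p, q in zip(prev_points, cur_points):
        if math.dist(p, q) < L:
            static.append(q)
    return static

prev = [(1.0, 2.0, 3.0), (0.5, 0.5, 1.0), (2.0, 2.0, 2.0)]
cur  = [(1.0, 2.0, 3.0), (0.9, 0.5, 1.0), (2.0, 2.01, 2.0)]
static_points = filter_dynamic_points(prev, cur)
```

Removing such moving points keeps the tracking thread from anchoring the map to swaying branches or passing workers.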
CN202211284014.1A 2022-10-20 2022-10-20 Litchi recognition method based on visual algorithm and bionic litchi picking robot Pending CN115553132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211284014.1A CN115553132A (en) 2022-10-20 2022-10-20 Litchi recognition method based on visual algorithm and bionic litchi picking robot


Publications (1)

Publication Number Publication Date
CN115553132A true CN115553132A (en) 2023-01-03

Family

ID=84746258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211284014.1A Pending CN115553132A (en) 2022-10-20 2022-10-20 Litchi recognition method based on visual algorithm and bionic litchi picking robot

Country Status (1)

Country Link
CN (1) CN115553132A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116021526A (en) * 2023-02-07 2023-04-28 台州勃美科技有限公司 Agricultural robot control method and device and agricultural robot
CN116210459A (en) * 2023-05-08 2023-06-06 山西工学院 Fruit picking robot
CN116267236A (en) * 2023-05-23 2023-06-23 四川省机械研究设计院(集团)有限公司 Cluster fruit picking robot
CN116508493A (en) * 2023-04-27 2023-08-01 仲恺农业工程学院 Gantry crawler type tea picking robot and picking method thereof
CN117397463A (en) * 2023-11-21 2024-01-16 广东省农业科学院设施农业研究所 Working method of litchi picking machine
CN118058074A (en) * 2024-02-20 2024-05-24 广东若铂智能机器人有限公司 Method for judging burst interference in string-type fruit picking process
CN118266325A (en) * 2024-05-31 2024-07-02 中慧高芯技术(山东)有限公司 Full-automatic asparagus picking machine and picking method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination