CN116117807A - Chilli picking robot and control method - Google Patents
- Publication number
- CN116117807A CN116117807A CN202211742467.4A CN202211742467A CN116117807A CN 116117807 A CN116117807 A CN 116117807A CN 202211742467 A CN202211742467 A CN 202211742467A CN 116117807 A CN116117807 A CN 116117807A
- Authority
- CN
- China
- Prior art keywords
- picking
- pepper
- fruit
- path
- potential field
- Prior art date
- Legal status: Pending (assumed by Google; not a legal conclusion)
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01D—HARVESTING; MOWING
- A01D46/00—Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
- A01D46/30—Robotic devices for individually picking crops
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/1605—Simulation of manipulator lay-out, design, modelling of manipulator
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/188—Vegetation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Abstract
The invention discloses a pepper picking robot and a control method, belonging to the field of automatic picking. The control system of the pepper picking robot first builds a real-time three-dimensional map of the farm to be picked through a GIS (geographic information system), determines the crop yield distribution from a crop spectrum database, and obtains an optimal driving path through a potential field function. The picking robot then performs a power-on self-check. After the self-check passes, a lower computer controls the chassis to move to the initial position, acquires images of the pepper fruits to be picked through a depth camera, performs recognition calculations on those images to obtain pepper fruit information, and controls the picking end effector to move to the designated position for grabbing and picking; the lower computer determines the optimal picking sequence from the three-dimensional coordinates of the pepper fruits by means of an ant colony algorithm. Fully automatic, intelligent remote control of the pepper picking robot and high-precision grasping of pepper fruit targets are thereby realized, simplifying the picking process and saving picking time.
Description
Technical Field
The invention belongs to the field of automatic picking, and particularly relates to a pepper picking robot and a control method.
Background
With the research and promotion of artificial intelligence in agriculture in recent years, agricultural picking robots have begun to move from research and development into the testing stage. Such robots can improve picking efficiency, reduce damage to fruits and vegetables, lower agricultural production costs, save labor, and increase farmers' incomes, while also effectively relieving the shortage of agricultural labor in China, thereby providing a new mode for automatic fruit picking.
At present, research on fruit-picking control systems at home and abroad is still in its infancy, and most recent work targets semi-automatic picking in structured environments. In a complex, unstructured environment, fruit yield and picking area cannot be accurately delineated, so mature peppers cannot all be picked in one pass and manual intervention is needed. Moreover, peppers are irregular in shape, grow on complex branches, are heavily occluded, and present small targets, so the fruits cannot be identified rapidly and accurately. Fully automatic, accurate, rapid, and efficient picking is therefore difficult, leading to missed fruit, inaccurate positioning, and damage to the peppers during picking.
Therefore, a pepper picking robot and a control method are provided to realize intelligent identification and accurate picking of pepper fruits, improve picking efficiency, and reduce the missed-picking rate.
Disclosure of Invention
In order to overcome the problems in the background art, the invention provides a pepper picking robot and a control method, aiming to solve the problems that a traditional picking robot facing a complex, unstructured environment cannot accurately delineate pepper fruit yield and picking area, cannot rapidly and accurately identify heavily occluded, small-target pepper fruits, and cannot pick the fruits fully automatically, stably, and efficiently.
In order to achieve the above purpose, the present invention is implemented by the following technical scheme:
the control method of the pepper picking robot comprises the following steps:
step one, establishing a crop spectrum database;
step two, constructing a three-dimensional area map to determine the crop yield distribution: determining the vegetation distribution and the vegetation coverage ratio according to the database established in step one, and drawing a three-dimensional crop distribution map with the ArcScene module of ArcGIS software;
step three, planning the optimal movement path of the robot: establishing an artificial potential field by means of a potential field function, obtaining the global potential field function of the whole running space from the attractive potential field function and the repulsive potential field function, and solving the optimal running path according to this potential field function;
step four, performing a system power-on self-check, after which the picking robot moves to the designated position;
step five, identifying and positioning pickable objects: constructing a YOLOv7 multi-modal attention-fusion lightweight deep network model, performing target identification on the images taken by the depth camera through the network, and obtaining the fruit growth direction and the pixel coordinates of the fruit centroid;
and step six, executing the harvesting program: determining the optimal harvesting sequence from the obtained three-dimensional fruit-centroid coordinates by an ant colony algorithm.
Preferably, establishing the crop spectrum database in step one includes:
step 1.1, collecting multispectral data of the peppers in each growing period of the picking field with a multispectral lens;
step 1.2, using the normalized difference vegetation index (NDVI) on the acquired data to comprehensively reflect the growth distribution and coverage of the peppers. The NDVI is calculated from the measured values of two bands, the visible red band and the near-infrared band, as follows:
NDVI = (ρ_NIR − ρ_R) / (ρ_NIR + ρ_R) (1)
wherein NDVI is the normalized difference vegetation index, ρ_NIR is the reflectance in the near-infrared band, and ρ_R is the reflectance in the red band.
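As an illustration, formula (1) can be evaluated per pixel from the two band reflectances. This is a minimal sketch, not part of the patent; the example reflectance values are invented:

```python
def ndvi(rho_nir: float, rho_r: float) -> float:
    """Normalized difference vegetation index, formula (1)."""
    denom = rho_nir + rho_r
    if denom == 0.0:
        return 0.0  # guard against division by zero on dark pixels
    return (rho_nir - rho_r) / denom

# A healthy canopy reflects strongly in NIR and absorbs red light:
print(round(ndvi(0.50, 0.08), 3))  # high NDVI: dense pepper growth
print(round(ndvi(0.30, 0.25), 3))  # low NDVI: sparse cover or bare soil
```

High NDVI marks vigorous vegetation, which is why the patent uses it as a proxy for pepper yield distribution.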
Preferably, the optimal-path calculation in step three proceeds as follows:
the environment is modeled as an abstract artificial force field in which the target point exerts an "attraction" on the mobile robot and obstacles exert a "repulsion"; the resultant of the two controls the motion of the mobile robot. The artificial potential field is established with the potential field function U(q):
U(q) = U_att(q) + U_rep(q) (2)
attractive potential field function:

U_att(q) = (1/2) ζ ρ²(q, q_goal) (3)

repulsive potential field function:

U_rep(q) = (1/2) η (1/ρ(q, q_obs) − 1/ρ_0)² if ρ(q, q_obs) ≤ ρ_0, and U_rep(q) = 0 if ρ(q, q_obs) > ρ_0 (4)

In formulas (3) and (4), U(q) represents the sum of the attractive and repulsive potential field functions, U_att(q) represents the attractive potential field function, and U_rep(q) represents the repulsive potential field function; ζ is the proportional position gain coefficient, positively correlated with the NDVI value; ρ(q, q_goal) is the Euclidean distance |q − q_goal| from the current point to the target point; η is the repulsive gain; ρ(q, q_obs) is the distance from the current point to the obstacle; and ρ_0 is the obstacle action-distance threshold;
According to the three-dimensional area map constructed in step two, once the pepper picking robot enters the influence range of an obstacle, the larger the distance between the two, the smaller the repulsive potential field value; conversely, the smaller the distance, the larger the potential field value. The pepper picking robot moves toward the target point along the direction of fastest descent of the potential field value U(q).
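The descent along the potential field described above can be sketched as a simple loop: at each step the robot moves a fixed distance along the direction of the negative gradient of U(q). All gains, the obstacle position, and the step size below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def apf_step(q, q_goal, obstacles, zeta=1.0, eta=5.0, rho0=2.0, step=0.05):
    """One steepest-descent step on U(q) = U_att(q) + U_rep(q), eqs. (2)-(4)."""
    force = -zeta * (q - q_goal)                        # attraction toward the goal
    for q_obs in obstacles:
        d = np.linalg.norm(q - q_obs)
        if 0.0 < d <= rho0:                             # obstacle within action distance rho0
            force += eta * (1.0/d - 1.0/rho0) * (1.0/d**2) * (q - q_obs) / d
    return q + step * force / (np.linalg.norm(force) + 1e-9)

q, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.0, 3.5])]                      # illustrative obstacle position
for _ in range(600):
    q = apf_step(q, goal, obstacles)
    if np.linalg.norm(q - goal) < 0.1:                  # close enough to the target point
        break
print(np.linalg.norm(q - goal) < 0.1)
```

The repulsive term vanishes smoothly at d = ρ_0, matching the piecewise definition in formula (4), so the trajectory bends around the obstacle and then continues to the goal.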
Preferably, step five is implemented as follows:
according to the collected image characteristics of the fruits to be picked, a YOLOv7 multi-modal attention-fusion lightweight deep network model is constructed. The network performs target identification on the RGB images captured by the depth camera to obtain the fruit growth direction and the pixel coordinates of the fruit centroid. Combining the depth information from the depth camera with the camera intrinsic parameters yields the three-dimensional coordinates of each target pepper-fruit centroid relative to the optical center of the depth camera, calculated as follows:
X[mm] = (x_0 − c_x) · Z[mm] / f_x (5)

Y[mm] = (y_0 − c_y) · Z[mm] / f_y (6)

X′[mm] = X[mm] − N (7)

Z[mm] = D(x_0, y_0) (8)

In formulas (5) to (8), X[mm] represents the projection on the x-axis of the distance of the fruit centroid relative to the image center, Y[mm] the projection on the y-axis, and Z[mm] the projection on the z-axis; D(x_0, y_0) is the value of the depth map at the point (x_0, y_0); c_x represents the pixel coordinate of the image center along the x-axis and c_y the coordinate along the y-axis; f_x and f_y are the focal lengths along the x-axis and y-axis of the camera optical system, obtained by calibration; (x_0, y_0) are the pixel coordinates of the center of the object detected in the image. Formula (7) corrects for the offset of the RGB camera module from the true center, where X′[mm] is the unbiased projection along the x-axis of the distance from the camera center to the object and N is the offset of the depth camera lens;
Through the above calculation, the pixel coordinates of the fruit centroid obtained by the lightweight network are converted into three-dimensional coordinates of the fruit centroid relative to the camera's optical center.
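The back-projection above is the standard pinhole model. A minimal sketch follows; the intrinsics (f_x, f_y, c_x, c_y), the detected pixel, the depth value, and the lens offset N are all invented example values, not calibration data from the patent:

```python
def pixel_to_camera_xyz(x0, y0, depth_mm, fx, fy, cx, cy, n_offset_mm=0.0):
    """Back-project a detected centroid pixel into camera-frame coordinates (mm)."""
    z = depth_mm                        # depth map value at (x0, y0), formula (8)
    x = (x0 - cx) * z / fx              # projection on the camera x-axis, formula (5)
    y = (y0 - cy) * z / fy              # projection on the camera y-axis, formula (6)
    return x - n_offset_mm, y, z        # x corrected for the lens offset N, formula (7)

# Hypothetical intrinsics (focal lengths in pixels, principal point near image centre):
x, y, z = pixel_to_camera_xyz(x0=400, y0=300, depth_mm=500.0,
                              fx=615.0, fy=615.0, cx=320.0, cy=240.0,
                              n_offset_mm=25.0)
print(round(x, 1), round(y, 1), z)  # → 40.0 48.8 500.0
```

Dividing by f_x and f_y converts the pixel offset from the principal point into a metric offset at the measured depth, which is exactly what formulas (5) and (6) express.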
Preferably, after the device receives the list of three-dimensional fruit-centroid coordinates obtained in step five, determining the optimal pepper picking sequence from these coordinates by the ant colony algorithm in step six comprises the following steps:
step 6.1, simulating the picking path as an ant movement path and initializing the parameters: the number of ants m is 1.5 times the total number of three-dimensional coordinates, the pheromone factor α is 1, the heuristic function factor β is 1, the pheromone volatilization factor ρ is 0.2, the pheromone constant Q is 10, and the maximum number of iterations t is 200;
step 6.2, simulating the movement path of the picking gripper: randomly placing the m ants at different starting points and calculating the next three-dimensional coordinate to visit until each ant has visited all the three-dimensional coordinates;
step 6.3, determining the next movement point according to the pheromone concentration on the path connecting the points; at time t, the probability that the k-th ant transfers from point i to point j is:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij(t)]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α · [η_is(t)]^β, j ∈ allowed_k (9)

In formula (9), i and j respectively represent the start point and the end point of each path segment, α and β represent the regulating factors, τ_ij(t) represents the pheromone concentration from i to j at time t, η_ij(t) equals the reciprocal of the path length d_ij, and allowed_k represents the set of coordinates not yet visited by the k-th ant;
The greater the pheromone concentration on the path from i to j, the greater the value of τ_ij(t) and the greater the probability that this picking path is selected; likewise, the shorter the path length, the greater η_ij(t) = 1/d_ij and the greater the probability that the path is selected;
During the movement of the picking gripper, the path length L of each ant's traversal is calculated, the historical best solution within the current number of iterations is recorded, and the pheromone concentration on the path connecting each pair of three-dimensional coordinates is updated. The pheromone update is expressed as:
τ_ij(t+n) = ρ · τ_ij(t) + Δτ_ij (10)
In formula (10), ρ is the volatilization factor and Δτ_ij represents the total amount of pheromone left on the path from i to j by all ants after traversing all three-dimensional coordinates, namely:
Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k (11)

In formula (11), Δτ_ij^k represents the amount of pheromone left by the k-th ant on the path from i to j;
If the k-th ant passes through the path from i to j, then

Δτ_ij^k = Q / L_k, and Δτ_ij^k = 0 otherwise (12)

In formula (12), Q is the pheromone constant and L_k is the total length of the path travelled by the k-th ant;
When the maximum number of iterations is reached, the recorded historical best solution is output; this optimal solution is the optimal pepper picking path.
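Steps 6.1 to 6.3 can be sketched compactly in code. The following is an illustrative implementation using the parameters listed in step 6.1 (m = 1.5n, α = β = 1, ρ = 0.2, Q = 10); the fruit-centroid coordinates are invented test data, and this is a sketch of the technique, not the patent's own code:

```python
import math, random

def tour_length(order, pts):
    """Total Euclidean length of an open path visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[i + 1]]) for i in range(len(order) - 1))

def ant_colony_order(pts, n_iter=200, alpha=1.0, beta=1.0, rho=0.2, Q=10.0, seed=0):
    """Ant-colony search for a short visiting order over fruit centroids."""
    rng = random.Random(seed)
    n = len(pts)
    m = int(1.5 * n)                                   # ants = 1.5 x number of coordinates
    tau = [[1.0] * n for _ in range(n)]                # pheromone matrix
    best_order, best_len = list(range(n)), float("inf")
    for _ in range(n_iter):
        delta = [[0.0] * n for _ in range(n)]
        for _ in range(m):
            start = rng.randrange(n)
            order, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = order[-1]
                # transition probability ~ tau_ij^alpha * (1/d_ij)^beta, formula (9)
                weights = [(j, tau[i][j] ** alpha
                            * (1.0 / max(math.dist(pts[i], pts[j]), 1e-9)) ** beta)
                           for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in weights), 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        break
                order.append(j)
                unvisited.discard(j)
            L = tour_length(order, pts)
            if L < best_len:                           # record the historical best solution
                best_order, best_len = order, L
            for a, b in zip(order, order[1:]):         # delta_tau_ij^k = Q / L_k, formula (12)
                delta[a][b] += Q / L
                delta[b][a] += Q / L
        for i in range(n):                             # tau_ij <- rho*tau_ij + delta, formula (10)
            for j in range(n):
                tau[i][j] = rho * tau[i][j] + delta[i][j]
    return best_order, best_len

pts = [(0, 0, 0), (1, 0, 0), (2, 1, 0), (0, 2, 1), (3, 3, 1)]  # made-up centroids (mm scale omitted)
order, length = ant_colony_order(pts, n_iter=50)
print(len(order) == len(pts), length > 0)
```

Each ant builds a full visiting order, the shortest tour seen so far is kept as the picking sequence, and pheromone reinforcement biases later ants toward short edges.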
The chilli picking robot comprises a depth camera, a laser radar, a gyroscope sensor, an obstacle avoidance sensor, a film pressure sensor, linear motors, sliding rails I, a lead screw, a sliding rail II, a steering engine, a cylinder, gear motors, and a frame. The gear motors are mounted on the four corner travelling wheels at the bottom of the frame; the sliding rails I are mounted on both sides of the bottom of the frame, with a linear motor mounted at the end of each; the sliding rail II is mounted on the sliders of the two sliding rails I, with a linear motor at its end; the lead screw is mounted on the sliding rail II, and the steering engine is mounted at the lower end of the lead screw; the cylinder is mounted on the output shaft at the bottom of the steering engine; the obstacle avoidance sensor and the film pressure sensor are mounted on the side of the steering engine; the depth camera is mounted on the side of the steering engine and used to photograph pepper images for target information identification; the laser radar is mounted on the frame through a bracket, and the gyroscope sensor is mounted on the bracket of the laser radar.
Preferably, the depth camera and the laser radar are connected to the lower computer through USB interfaces, and together with the GIS module they form a fusion sensing system. The gyroscope sensor, the obstacle avoidance sensor, and the film pressure sensor are connected to the input of the lower computer through serial interface circuits. Each linear motor is connected to a linear motor controller, the lead screw to a lead screw controller, and the steering engine to a steering engine controller; these controllers are connected to the output of the lower computer through serial interface circuits. The cylinder is connected to an end effector controller, the end effector controller is connected to a chassis controller through a serial interface circuit, and the chassis controller is connected to the output of the lower computer through Ethernet. The lower computer is connected to a LoRa wireless module, through which the upper computer and the user control end are connected over the local area network.
The invention has the following beneficial effects:
A GIS module is used to determine the crop yield distribution map, realizing remote-sensing monitoring of the pepper yield in the planting area, and the optimal picking path is solved from the yield distribution map with an artificial potential field algorithm. By means of a YOLOv7 multi-modal attention-fusion lightweight deep network model and a depth camera, intelligent recognition, accurate positioning, and orientation of the fruits are achieved on a micro processing system, realizing high-precision localization of pepper fruit targets. The picking path is planned with an ant colony algorithm, and user-control-end commands and machine state are transmitted in real time through a LoRa wireless module. Fully automatic, intelligent remote control of the pepper picking robot and high-precision grasping of pepper fruit targets are thereby realized, simplifying the picking procedure and saving picking time.
Drawings
FIG. 1 is a schematic diagram of a control system hardware connection according to the present invention;
FIG. 2 is a flow chart of determining crop yield distribution using GIS according to the present invention;
FIG. 3 is a flow chart of the system power-on self-test of the present invention;
FIG. 4 is a flow chart of the invention for identifying and locating a pickable object;
FIG. 5 is a flow chart of the invention for determining the optimal picking order by the ant colony algorithm;
FIG. 6 is a flow chart of a harvesting procedure according to the present invention;
FIG. 7 is a schematic diagram of a control device according to the present invention;
fig. 8 is a schematic view of the structure of the picking grip of the present invention.
The system is characterized by comprising a 1-lower computer, a 2-fusion sensing system, a 201-depth camera, a 202-laser radar, a 203-GIS module, a 3-sensor, a 301-gyroscope sensor, a 302-obstacle avoidance sensor, a 303-film pressure sensor, a 4-linear motor controller, a 401-linear motor, a 402-sliding rail I, a 5-screw controller, a 501-screw, a 502-sliding rail II, a 6-steering engine controller, a 601-steering engine, a 7-end effector controller, a 701-cylinder, an 8-chassis controller, a 801-speed reducing motor, a 9-LoRa wireless module, a 10-local area network, a 11-user control end, a 12-upper computer and a 13-frame.
Detailed Description
In order to make the objects, technical solutions, and beneficial effects of the present invention clearer, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings, so as to facilitate understanding by those skilled in the art.
As shown in fig. 1, the control system of the pepper picking robot comprises a lower computer 1. The lower computer 1 is connected with a fusion sensing system 2 through two-way USB communication; it is connected with a linear motor controller 4, a lead screw controller 5, a steering engine controller 6, an end effector controller 7, a chassis controller 8, and a LoRa wireless module 9 through serial-port communication; and the LoRa wireless module 9 is connected with a user control end 11 and an upper computer 12 through a local area network 10.
As shown in figs. 1, 7, and 8, the chilli picking robot comprises a depth camera 201, a laser radar 202, a gyroscope sensor 301, an obstacle avoidance sensor 302, a film pressure sensor 303, linear motors 401, sliding rails I 402, a lead screw 501, a sliding rail II 502, a steering engine 601, a cylinder 701, gear motors 801, and a frame 13. The gear motors 801 are mounted on the four corner travelling wheels at the bottom of the frame 13; travelling and steering of the picking robot are controlled by starting, stopping, and regulating the speed of each motor. The gear motors 801 are connected with the chassis controller 8, which is connected to the output of the lower computer 1 through a serial interface circuit. The sliding rails I 402 are mounted on both sides of the bottom of the frame 13, with a linear motor 401 at the end of each; the sliding rails I 402 and the linear motors 401 control the longitudinal movement of the picking gripper, and the linear motor controller 4 is connected to the output of the lower computer 1 through a serial interface circuit. The sliding rail II 502 is mounted on the sliders of the two sliding rails I 402, with a linear motor 401 at its end for controlling the transverse movement of the picking gripper. The lead screw 501 is mounted on the sliding rail II 502 and connected with the lead screw controller 5, which is connected to the output of the lower computer 1 through a serial interface circuit and controls the up-and-down movement of the picking gripper. The steering engine 601 is mounted at the lower end of the lead screw 501 and connected with the steering engine controller 6; the steering engine 601 controls the turning of the picking gripper, and the steering engine controller 6 is connected to the output of the lower computer 1 through a serial interface circuit. The cylinder 701 is mounted on the output shaft at the bottom of the steering engine 601 and connected with the end effector controller 7, which is connected to the output of the lower computer 1 through a serial interface circuit. The obstacle avoidance sensor 302 and the film pressure sensor 303 are mounted on the side of the steering engine 601; together with the gyroscope sensor 301 they are connected to the input of the lower computer 1 through serial interface circuits. The obstacle avoidance sensor 302 detects obstacles during picking, and the film pressure sensor 303 monitors the gripping force so that picking targets are not damaged. The depth camera 201 is mounted on the side of the steering engine 601 and photographs pepper images for target information identification, and the laser radar 202 is mounted on the frame 13 through a bracket and measures the distance to picking targets. The depth camera 201, the laser radar 202, and the GIS module 203 form the fusion sensing system 2, which accurately calculates, positions, and identifies target information; the GIS module 203 acquires farmland information system data by satellite remote sensing and telemetry of the farmland area and analyzes the spatial crop distribution. The fusion sensing system 2 is connected to the lower computer 1 through two-way USB communication, and the gyroscope sensor 301 is mounted on the bracket of the laser radar 202 to judge the posture of the picking robot.
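The patent does not specify the serial protocol spoken between the lower computer and the various controllers. Purely as a hypothetical illustration of how such serial-interface commands are typically framed, the sketch below builds a command packet with a header, a device/opcode payload, and an additive checksum; every field layout and value here is an assumption:

```python
def build_command(device_id: int, opcode: int, value: int) -> bytes:
    """Frame a hypothetical lower-computer command: header, payload, checksum."""
    # payload: 1-byte device id, 1-byte opcode, signed 16-bit big-endian value
    payload = bytes([device_id, opcode]) + value.to_bytes(2, "big", signed=True)
    checksum = sum(payload) & 0xFF               # simple additive checksum byte
    return b"\xAA\x55" + payload + bytes([checksum])

# e.g. ask a screw-rod controller (hypothetical id 0x05) to move the gripper down 120 mm
frame = build_command(device_id=0x05, opcode=0x02, value=-120)
print(frame.hex())  # → aa550502ff888e
```

A fixed two-byte header lets the controller resynchronize on the byte stream, and the checksum rejects frames corrupted on the serial line.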
A control method of a pepper picking robot comprises the following steps:
step one, establishing a crop spectrum database;
step two, constructing a three-dimensional area map to determine the crop yield distribution: determining the vegetation distribution and vegetation coverage ratio according to the database established in step one, and drawing a three-dimensional crop distribution map with the ArcScene module of ArcGIS software according to the vegetation distribution and vegetation coverage ratio;
step three, planning the optimal movement path of the robot: establishing an artificial potential field by means of a potential field function, obtaining the global potential field function of the whole running space from the attractive potential field function and the repulsive potential field function, and solving the optimal running path according to the potential field function;
step four, the system is powered on for self-checking, and the picking robot moves to the designated position;
step five, picking target identification and positioning: constructing a Yolov7 multi-mode attention fusion lightweight depth network model, performing target identification on the images captured by the depth camera through the network, and obtaining the fruit growth direction and the fruit centroid pixel coordinates;
and step six, executing the harvesting program, and determining the optimal harvesting sequence from the obtained three-dimensional coordinates of the fruit centroids by an ant colony algorithm.
As shown in fig. 2, when the GIS system constructs a three-dimensional area map to determine the crop yield distribution, an unmanned aerial vehicle carrying five multispectral lenses (near-infrared, red, green, red-edge and RGB) is first used to collect multispectral data for each growing period of the capsicum in the picking field, and the normalized difference vegetation index (Normalized Difference Vegetation Index, NDVI) is selected to comprehensively reflect the growth distribution and coverage of the capsicum from the collected data. Plant leaf surfaces have strong absorption characteristics in the red band of visible light and strong reflection characteristics in the near-infrared band, which is the physical basis of vegetation remote sensing monitoring. The NDVI index, obtained by combining the measured values of these two bands, is sensitive to changes in the soil background, and vegetation and water can be well distinguished. At medium and low coverage the index increases rapidly with increasing coverage and grows slowly after a certain coverage is reached, so the method is suitable for dynamic monitoring of the early and middle growth stages of vegetation. The NDVI index is calculated as follows:
NDVI= (ρNIR-ρR)/(ρNIR+ρR) (1)
wherein: ρNIR is the reflectivity in the near infrared band and ρR is the reflectivity in the red band.
The red band of visible light (0.58-0.68 μm) lies in the chlorophyll absorption band, and the near-infrared band (0.75-1.10 μm) lies in the high-reflection region of the green plant spectrum. NDVI values range from -1 to 1. An NDVI value near 0 represents a bare soil area without vegetation; positive values indicate vegetation coverage and increase with increasing coverage, with values greater than 0.7 indicating high vegetation density in the area; ground covered by snow or water has negative NDVI. NDVI is a good indicator of plant spatial density and growth state, is linearly related to the distribution density of vegetation cover, and is widely applied in fields such as vegetation growth monitoring and vegetation cover estimation.
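As a minimal sketch, the NDVI computation of equation (1) and the rough cover classes described above can be written over per-pixel reflectance arrays as follows; the function names and classification thresholds are illustrative choices, not part of the patent:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Equation (1): NDVI = (rho_NIR - rho_R) / (rho_NIR + rho_R).
    Inputs are per-pixel reflectance arrays for the near-infrared and
    red bands; output values lie in [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Avoid division by zero on pixels with no signal in either band.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

def classify(v: float) -> str:
    """Rough cover class per the text (thresholds are illustrative)."""
    if v < 0:
        return "water/snow"
    if v < 0.1:
        return "bare soil"
    if v > 0.7:
        return "dense vegetation"
    return "partial vegetation cover"
```

For example, a pixel with near-infrared reflectance 0.5 and red reflectance 0.1 gives NDVI = 0.4/0.6 ≈ 0.67, i.e. moderately dense vegetation.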
After the NDVI index of each growing period is calculated, the GIS module acquires farmland information system data, performs satellite remote sensing and telemetry on the processed farmland data area, synthesizes the five bands (near-infrared, red, green, red-edge and RGB) to calculate the vegetation index, determines the vegetation distribution and vegetation coverage ratio, analyses the spatial crop distribution, and draws a three-dimensional crop distribution map with the ArcScene module of ArcGIS software according to the plant distribution and vegetation coverage ratio.
In step three of the invention, the upper computer plans the moving path of the robot. After the GIS module transmits the spatial crop distribution information to the upper computer, the upper computer solves the optimal picking path with an artificial potential field algorithm according to the three-dimensional crop distribution map. When the robot moves in its environment, the environment is modelled as an abstract artificial force field: the target point exerts an 'attraction' on the mobile robot, obstacles exert a 'repulsion' on it, and the motion of the mobile robot is controlled by the resultant of the two. An artificial potential field is established by means of a potential field function U; the potential field function is differentiable, and the value of the potential field function at a point in space represents the potential field strength at that point. The sum of the attractive potential field function and the repulsive potential field function is denoted U(q):
U(q) = U_att(q) + U_rep(q) (2)
Attractive potential field function:
U_att(q) = (1/2) ξ ρ²(q, q_goal) (3)
where ξ is the proportional position gain coefficient, taken positively correlated with the NDVI value since the robot travel path is related to the crop vegetation coverage of the current area, and ρ(q, q_goal) is the Euclidean distance |q - q_goal| from the current point to the target point.
Repulsive potential field function:
U_rep(q) = (1/2) η (1/ρ(q, q_obs) - 1/ρ0)² if ρ(q, q_obs) ≤ ρ0, and U_rep(q) = 0 otherwise (4)
where η is the repulsive gain coefficient, ρ(q, q_obs) is the distance from the current point to the obstacle, and ρ0 is the obstacle influence distance threshold.
From the attractive field function and the repulsive field function, the global potential field function of the whole running space can be obtained over the three-dimensional area map built in step two. After the pepper picking robot enters the influence range of an obstacle, the larger the distance between the two, the smaller the repulsive potential field value; conversely, the smaller the distance, the larger the potential field value. The pepper picking robot moves towards the target point along the direction of steepest descent of the potential field value U(q), yielding the optimal running path. The obtained optimal path is stored in a readable list, and the list is transmitted to the lower computer through the LoRa wireless module.
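A minimal numerical sketch of this planning step is gradient descent on the combined potential of equations (2)-(4). The gains `xi` and `eta`, the threshold `rho0`, the step size, and the 2-D setting are all illustrative assumptions here, not the patent's implementation (in the text, the attractive gain is scaled with the NDVI value):

```python
import numpy as np

def apf_path(start, goal, obstacles, xi=1.0, eta=100.0, rho0=2.0,
             step=0.05, max_iters=5000, tol=0.1):
    """Descend the artificial potential U(q) = U_att(q) + U_rep(q).
    The negative gradient of (3) is xi * (goal - q); the negative
    gradient of (4) is eta * (1/rho - 1/rho0) * d / rho**3 for an
    obstacle within the influence threshold rho0, zero otherwise."""
    q = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = [q.copy()]
    for _ in range(max_iters):
        force = xi * (goal - q)  # attractive force toward the target
        for obs in obstacles:
            d = q - np.asarray(obs, dtype=float)
            rho = np.linalg.norm(d)
            if 0 < rho <= rho0:
                # Repulsive force pushing away from the obstacle.
                force += eta * (1.0 / rho - 1.0 / rho0) * d / rho**3
        # Take a fixed-length step along the resultant force direction.
        q = q + step * force / (np.linalg.norm(force) + 1e-9)
        path.append(q.copy())
        if np.linalg.norm(goal - q) < tol:
            break
    return np.array(path)
```

With one obstacle inside its influence radius, the returned path bends around it and terminates near the goal; the waypoint list corresponds to the "readable list" transmitted to the lower computer.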
In the invention, as shown in fig. 3, during the power-on self-check, the picking end effector returns to its initial position and the hardware is checked for abnormalities; the posture of the chilli picking robot is then judged by the gyroscope sensor and the program is initialized. If a system component cannot run normally, a corresponding error prompt is output; if the picking robot is unbalanced, the chassis controller drives the gear motors to level the posture. After the self-check passes, the system enters a standby state and waits for the user control end to issue a picking instruction.
As shown in fig. 4, for picking target identification and positioning, after the user control end issues a picking instruction, the instruction is synchronized to the lower computer through the LoRa wireless module of the local area network connection system. After the lower computer receives the picking instruction, a Yolov7 multi-mode attention fusion lightweight depth network model is constructed according to the image characteristics of the fruits to be picked; the network performs target identification on the RGB images captured by the depth camera to obtain the fruit growth direction and the fruit centroid pixel coordinates, which are combined with the depth information obtained by the depth camera and the camera intrinsic parameters to obtain the three-dimensional coordinates of all target pepper fruit centroids relative to the optical center of the depth camera:
X[mm] = Depth(x0, y0) · (x0 - c_x) / f_x (5)
Y[mm] = Depth(x0, y0) · (y0 - c_y) / f_y (6)
where X[mm] is the projection (in mm) of the distance on the x-axis relative to the image center, Y[mm] is the projection (in mm) of the distance on the y-axis relative to the image center, Depth(x0, y0) is the value of the depth map at point (x0, y0) (in mm), c_x is the x-coordinate of the image center (in pixels), c_y is the y-coordinate of the image center (in pixels), f_x and f_y are the intrinsic parameters of the camera optics (focal lengths along the x-axis and y-axis) obtained by calibration, and (x0, y0) are the pixel coordinates of the center of the detected object in the image. When calculating the distance to the object along the x-axis, the offset of the RGB camera module from the RealSense center needs to be considered:
X′[mm]=X[mm]-35 (7)
where X'[mm] is the unbiased projection (in mm) of the distance along the x-axis from the camera center to the object, and 35 (mm) is the RGB-module offset of the Intel RealSense Depth Camera D435i.
Finally, from the data obtained by the above formulas, the z-axis datum of the three-dimensional coordinates is obtained as the depth-map value itself:
Z[mm] = Depth(x0, y0) (8)
where Z[mm] is the unbiased projection (in mm) of the distance along the z-axis from the camera center to the object;
Through the above calculation, the fruit centroid pixel coordinates obtained by the lightweight network are converted into three-dimensional coordinates of the fruit centroid relative to the camera optical center.
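The back-projection described above can be sketched in a few lines. The function name and argument layout are illustrative; taking the z-coordinate as the raw depth-map value, and the 35 mm RGB-module offset of the D435i, follow the text's description:

```python
def pixel_to_camera_xyz(x0, y0, depth_mm, cx, cy, fx, fy,
                        rgb_offset_mm=35.0):
    """Back-project a detected fruit centroid (x0, y0) in pixels, with
    its depth-map value depth_mm, into camera-frame coordinates in mm.
    (cx, cy): image centre in pixels; (fx, fy): calibrated focal
    lengths in pixels; rgb_offset_mm: offset of the RGB module from
    the RealSense centre (35 mm for the D435i, per the text)."""
    X = depth_mm * (x0 - cx) / fx   # projection on the x-axis, eq. (5)
    Y = depth_mm * (y0 - cy) / fy   # projection on the y-axis, eq. (6)
    X_prime = X - rgb_offset_mm     # RGB-module offset correction, eq. (7)
    Z = depth_mm                    # z-axis datum: the depth value (assumed)
    return X_prime, Y, Z
```

For instance, a centroid 60 pixels right of centre with a 1000 mm depth reading and fx = 600 px projects to X = 100 mm, i.e. X' = 65 mm after the offset correction.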
As shown in fig. 5, when the harvesting procedure is executed, the optimal picking sequence is determined from the obtained three-dimensional coordinates of the fruit centroids. After the lower computer receives the list of fruit centroid three-dimensional coordinates, the picking path is modelled as an ant moving path. First, the number of ants m is set to 1.5 times the total number of three-dimensional coordinates, the pheromone factor α to 1, the heuristic function factor β to 1, the pheromone volatilization factor ρ to 0.2, the pheromone constant Q to 10, and the maximum number of iterations t to 200. Then m ants are randomly placed at different departure points, and the next three-dimensional coordinate to be visited is calculated until each ant has visited all the three-dimensional coordinates. The next point to move to is determined from the pheromone concentration on the paths connecting the points; at time t, the probability that the kth ant transfers from point i to point j is:
p_ij^k(t) = [τ_ij(t)]^α [n_ij(t)]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α [n_is(t)]^β, j ∈ allowed_k (9)
where i and j denote the start and end points of each path, α and β are adjusting factors regulating the influence of τ_ij(t) and n_ij(t), τ_ij(t) is the pheromone concentration on path i to j at time t, n_ij(t) equals the reciprocal of the path length d_ij, and allowed_k is the set of coordinates not yet visited. The greater the pheromone concentration on path i to j, the greater the value of τ_ij(t) and the greater the probability that the path is selected; likewise, the shorter the path length, the larger n_ij(t) = 1/d_ij and the greater the probability that the path is selected. During the ant movement, the path length L of each ant is calculated, the historical optimal solution within the current number of iterations is recorded, and the pheromone concentration of the paths connecting the three-dimensional coordinates is updated. The pheromone update expression is:
τ_ij(t+n) = ρ τ_ij(t) + Δτ_ij (10)
where ρ is the volatilization factor and Δτ_ij is the total amount of pheromone left on path i to j by all ants after traversing all three-dimensional coordinates, namely:
Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k (11)
where Δτ_ij^k is the amount of pheromone left by the kth ant on path i to j.
If the kth ant passes through path i to j, then
Δτ_ij^k = Q / L_k, and Δτ_ij^k = 0 otherwise (12)
where Q is the pheromone constant and L_k is the total length of the path travelled by the kth ant;
When the maximum number of iterations is reached, the recorded historical optimal solution is output.
As shown in fig. 6, when the harvesting procedure is executed, the optimal harvesting sequence is first calculated by the ant colony algorithm. Guided by the obstacle avoidance sensor, the lower computer controls the linear motor controller and the lead screw controller to run the linear motors and the lead screw respectively so that the picking end effector reaches the fruit picking point. According to the fruit growth direction obtained by the fusion sensing system, the lower computer controls the steering engine above the clamping jaw to rotate the picking gripper parallel to the fruit growth direction, and then controls the end effector controller to drive the cylinder so that the picking gripper grabs the target pepper fruit. When the signal transmitted to the lower computer by the film pressure sensor on the picking gripper reaches the set threshold, the cylinder driving the clamping jaw stops; the lower computer then controls the steering engine above the clamping jaw to rotate 90 degrees so that the target pepper fruit is detached, after which the linear motor controller and the lead screw controller run the linear motors and the lead screw to place the fruit at the designated position and reset. If target pepper fruits remain within the identification range of the fusion sensing system, the picking procedure is executed cyclically; otherwise the lower computer controls the chassis controller to drive the gear motors and move to the next area for picking.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto; any changes or substitutions made without inventive effort shall be construed as falling within the scope of the present invention.
Claims (7)
1. The control method of the pepper picking robot is characterized by comprising the following steps of:
step one, establishing a crop spectrum database;
step two, constructing a three-dimensional area map to determine the crop yield distribution: determining the vegetation distribution and vegetation coverage ratio according to the database established in step one, and drawing a three-dimensional crop distribution map with the ArcScene module of ArcGIS software according to the vegetation distribution and vegetation coverage ratio;
step three, planning the optimal movement path of the robot: establishing an artificial potential field by means of a potential field function, obtaining the global potential field function of the whole running space from the attractive potential field function and the repulsive potential field function, and solving the optimal running path according to the potential field function;
step four, the system is powered on for self-checking, and the picking robot moves to the designated position;
step five, picking target identification and positioning: constructing a Yolov7 multi-mode attention fusion lightweight depth network model, performing target identification on the images captured by the depth camera through the network, and obtaining the fruit growth direction and the fruit centroid pixel coordinates;
and step six, executing the harvesting program, and determining the optimal harvesting sequence from the obtained three-dimensional coordinates of the fruit centroids by an ant colony algorithm.
2. A method of controlling a pepper picking robot as claimed in claim 1, wherein said creating a crop spectrum database in step one comprises: step 1.1, collecting multispectral data of capsicum in each growing period of a picking field by using a multispectral lens;
step 1.2, comprehensively reflecting the growth distribution and coverage condition of the peppers from the acquired data by using the normalized difference vegetation index, wherein the normalized difference vegetation index is calculated from the measured values of two wave bands, namely the visible red band and the near-infrared band, as follows:
NDVI=(ρNIR-ρR)/(ρNIR+ρR) (1)
wherein: NDVI is the normalized difference vegetation index, ρNIR is the reflectance in the near-infrared band, and ρR is the reflectance in the red band.
3. The method for controlling a pepper picking robot as claimed in claim 1, wherein the moving optimal path calculating process in the step three is as follows:
setting the environment as an abstract artificial gravitational field, generating 'attraction' on the mobile robot by a target point, generating 'repulsion' on the mobile robot by an obstacle, controlling the motion of the mobile robot by the resultant force of the two, and establishing an artificial potential field by utilizing a potential field function U (q):
U(q) = U_att(q) + U_rep(q) (2)
attractive potential field function:
U_att(q) = (1/2) ξ ρ²(q, q_goal) (3)
repulsive potential field function:
U_rep(q) = (1/2) η (1/ρ(q, q_obs) - 1/ρ0)² if ρ(q, q_obs) ≤ ρ0, and U_rep(q) = 0 otherwise (4)
in formulas (2), (3) and (4), U(q) denotes the sum of the attractive potential field function and the repulsive potential field function, U_att(q) denotes the attractive potential field function, U_rep(q) denotes the repulsive potential field function, ξ denotes the proportional position gain coefficient, positively correlated with the NDVI value, ρ(q, q_goal) denotes the Euclidean distance |q - q_goal| from the current point to the target point, η denotes the repulsive gain, ρ(q, q_obs) denotes the distance from the current point to the obstacle, and ρ0 denotes the obstacle influence distance threshold;
when the pepper picking robot enters the influence range of an obstacle, the larger the distance between the two, the smaller the repulsive potential field value; conversely, the smaller the distance, the larger the potential field value; the pepper picking robot moves towards the target point along the direction of steepest descent of the potential field value U(q).
4. The method for controlling the pepper picking robot as claimed in claim 1, wherein the fifth implementation step comprises the following steps:
according to the collected image characteristics of the fruits to be picked, a Yolov7 multi-mode attention fusion lightweight depth network model is constructed; the network performs target identification on the RGB images captured by the depth camera, obtains the fruit growth direction and the fruit centroid pixel coordinates, and combines them with the depth information obtained by the depth camera and the camera intrinsic parameters to obtain the three-dimensional coordinates of all target pepper fruit centroids relative to the optical center of the depth camera, calculated as follows:
X[mm] = Depth(x0, y0) · (x0 - c_x) / f_x (5)
where X[mm] represents the projection on the x-axis of the distance of the fruit centroid relative to the image center, Depth(x0, y0) represents the value of the depth map at point (x0, y0), c_x represents the x-coordinate of the image center, f_x is the focal length along the x-axis of the camera optics obtained by calibration, and (x0, y0) are the pixel coordinates of the center of the detected object in the image;
Y[mm] = Depth(x0, y0) · (y0 - c_y) / f_y (6)
where Y[mm] represents the projection on the y-axis of the distance of the fruit centroid relative to the image center, c_y represents the y-coordinate of the image center, and f_y is the focal length along the y-axis of the camera optics obtained by calibration;
X'[mm]=X[mm]-n (7)
formula (7) compensates the offset of the depth camera RGB module from the RealSense center, where X'[mm] represents the unbiased projection of the distance along the x-axis from the camera center to the object, and n represents the lens offset of the depth camera;
Z[mm] = Depth(x0, y0) (8)
where Z[mm] represents the projection on the z-axis of the distance of the fruit centroid relative to the image center;
through the above calculation, the fruit centroid pixel coordinates obtained by the lightweight network are converted into three-dimensional coordinates of the fruit centroid relative to the camera optical center.
5. The method for controlling a pepper picking robot according to claim 1, wherein after the device receives the list of fruit centroid three-dimensional coordinates obtained in step five, the determination in step six of the optimal pepper picking sequence from the three-dimensional coordinates by the ant colony algorithm comprises the following steps:
step 6.1, modelling the picking path as an ant moving path, and initializing the number of ants m to 1.5 times the total number of three-dimensional coordinates, the pheromone factor α to 1, the heuristic function factor β to 1, the pheromone volatilization factor ρ to 0.2, the pheromone constant Q to 10, and the maximum number of iterations t to 200;
step 6.2, simulating a picking gripper moving path, randomly placing m ants at different departure places, and calculating a next three-dimensional coordinate to be accessed until each ant accesses all the three-dimensional coordinates;
step 6.3, determining the next movement point from the pheromone concentration on the paths connecting the points, wherein at time t the probability that the kth ant transfers from point i to point j is:
p_ij^k(t) = [τ_ij(t)]^α [n_ij(t)]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α [n_is(t)]^β, j ∈ allowed_k (9)
in formula (9), i and j denote the start and end points of each path, α and β denote the adjusting factors, τ_ij(t) denotes the pheromone concentration on path i to j at time t, n_ij(t) equals the reciprocal of the path length d_ij, and allowed_k denotes the set of coordinates not yet visited;
the greater the pheromone concentration on path i to j, the greater the value of τ_ij(t) and the greater the probability that the picking path is selected; likewise, the shorter the path length, the larger n_ij(t) = 1/d_ij and the greater the probability that the picking path is selected;
during the movement of the picking gripper, calculating the path length L of each ant, recording the historical optimal solution within the current number of iterations, and simultaneously updating the pheromone concentration of the paths connecting the three-dimensional coordinates, wherein the pheromone update expression is:
τ_ij(t+n) = ρ τ_ij(t) + Δτ_ij (10)
in formula (10), ρ is the volatilization factor and Δτ_ij represents the total amount of pheromone left on path i to j by all ants after traversing all three-dimensional coordinates, namely:
Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k (11)
in formula (11), Δτ_ij^k represents the amount of pheromone left by the kth ant on path i to j;
if the kth ant passes through paths i to j, then
In the formula (12), Q is a pheromone constant, L k Is the total length of the path which has been travelled;
when the maximum number of iterations is reached, outputting the recorded historical optimal solution, which is the optimal pepper picking path.
6. A pepper picking robot to which the control method as claimed in any one of claims 1-5 is applied, characterized in that: the chilli picking robot comprises a depth camera (201), a laser radar (202), a gyroscope sensor (301), an obstacle avoidance sensor (302), a film pressure sensor (303), linear motors (401), sliding rails I (402), a lead screw (501), a sliding rail II (502), a steering engine (601), a cylinder (701), gear motors (801) and a frame (13), wherein the gear motors (801) are arranged at the four corners of the bottom of the frame (13), the sliding rails I (402) are arranged on two sides of the bottom of the frame (13), a linear motor (401) is arranged at the end of each of the two sliding rails I (402), the sliding rail II (502) is arranged on the sliders of the two sliding rails I (402), a linear motor (401) is arranged at each end of the sliding rail II (502), the lead screw (501) is arranged on the sliding rail II (502), the steering engine (601) is arranged at the lower end of the lead screw (501), the cylinder (701) is arranged on the output shaft at the bottom of the steering engine (601), the obstacle avoidance sensor (302) and the film pressure sensor (303) are arranged on the side of the steering engine (601), the depth camera (201) is arranged on the side of the steering engine (601) for capturing pepper images for target information identification, the laser radar (202) is mounted on the frame (13) through a bracket, and the gyroscope sensor (301) is mounted on the bracket of the laser radar (202).
7. The pepper picking robot as claimed in claim 6, characterized in that: the depth camera (201) and the laser radar (202) are connected to the lower computer (1) through a USB interface, the depth camera (201), the laser radar (202) and the GIS module (203) form a fusion sensing system (2), the gyroscope sensor (301), the obstacle avoidance sensor (302) and the film pressure sensor (303) are connected to the input end of the lower computer (1) through serial interface circuits, the linear motors (401) are connected to a linear motor controller (4), the linear motor controller (4) is connected to the output end of the lower computer (1) through a serial interface circuit, the lead screw (501) is connected to a lead screw controller (5), the lead screw controller (5) is connected to the output end of the lower computer (1) through a serial interface circuit, the steering engine (601) is connected to a steering engine controller (6), the steering engine controller (6) is connected to the output end of the lower computer (1) through a serial interface circuit, the cylinder (701) is connected to an end effector controller (7), the end effector controller (7) is connected to the output end of the lower computer (1) through a serial interface circuit, the gear motors (801) are connected to a chassis controller (8), the chassis controller (8) is connected to the output end of the lower computer (1) through a serial interface circuit, the lower computer (1) is connected with a LoRa wireless module (9), the LoRa wireless module (9) is connected with the upper computer (12) and the user control end (11) through the Ethernet, and the upper computer (12) and the user control end (11) are connected with the PC end through the Ethernet.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211742467.4A CN116117807A (en) | 2022-12-30 | 2022-12-30 | Chilli picking robot and control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211742467.4A CN116117807A (en) | 2022-12-30 | 2022-12-30 | Chilli picking robot and control method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116117807A true CN116117807A (en) | 2023-05-16 |
Family
ID=86300322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211742467.4A Pending CN116117807A (en) | 2022-12-30 | 2022-12-30 | Chilli picking robot and control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116117807A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118003340A (en) * | 2024-04-08 | 2024-05-10 | 厦门熠明机器人自动化有限公司 | Visual mechanical arm material grabbing control method and system based on deep learning |
CN118411100A (en) * | 2024-07-02 | 2024-07-30 | 浙江鸟潮供应链管理有限公司 | Task request processing method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116117807A (en) | Chilli picking robot and control method | |
EP2772814B1 (en) | Tree metrology system | |
Stefas et al. | Vision-based monitoring of orchards with UAVs | |
CN102368158B (en) | Navigation positioning method of orchard machine | |
CN109792951B (en) | Unmanned aerial vehicle air route correction system for pollination of hybrid rice and correction method thereof | |
CN108827297B (en) | Image-based real-time planning method for agricultural inspection track of unmanned aerial vehicle | |
CN108881825A (en) | Rice weed monitoring unmanned system and its monitoring method based on Jetson TK1 | |
CN111178148B (en) | Ground target geographic coordinate positioning method based on unmanned aerial vehicle vision system | |
CN108901366B (en) | Heaven and earth integrated orange picking method | |
CN109255302A (en) | Object recognition methods and terminal, mobile device control method and terminal | |
CN113920474B (en) | Internet of things system and method for intelligently supervising citrus planting situation | |
CN106155082B (en) | A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream | |
CN106708075B (en) | Wide-range rape field SPAD value remote sensing system based on fixed-wing unmanned aerial vehicle and acquisition method | |
CN109739133A (en) | Tomato picking robot system and its control method based on radar fix | |
WO2023204243A1 (en) | Forestry management system and forestry management method | |
Wang et al. | Research advance in phenotype detection robots for agriculture and forestry | |
Fan et al. | A high-throughput phenotyping robot for measuring stalk diameters of maize crops | |
CN111931832A (en) | Optimal data acquisition method and system for substation inspection equipment | |
CN115280960B (en) | Combined harvester steering control method based on field vision SLAM | |
Ji et al. | Performance analysis of target information recognition system for agricultural robots | |
Nguyen et al. | Characteristics of optical flow from aerial thermal imaging,“thermal flow” | |
Li et al. | UAVs-Based Smart Agriculture IoT Systems: An Application-Oriented Design | |
Conejero et al. | Collaborative smart-robot for yield mapping and harvesting assistance | |
CN112461362B (en) | System and method for monitoring space illuminance by using unmanned aerial vehicle | |
Xin et al. | Key Issues and Countermeasures of Machine Vision for Fruit and Vegetable Picking Robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||