CN114485667B - Lightweight intelligent orchard headland navigation method - Google Patents

Lightweight intelligent orchard headland navigation method

Info

Publication number
CN114485667B
CN114485667B
Authority
CN
China
Prior art keywords
point cloud
fruit tree
navigation
orchard
headland
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210036882.1A
Other languages
Chinese (zh)
Other versions
CN114485667A (en)
Inventor
Zheng Yongjun (郑永军)
Li Wenwei (李文伟)
Liu Weihong (刘伟洪)
Jiang Shijie (江世界)
Yang Shenghui (杨圣慧)
Xu Jiawei (许家伟)
Su Daobilige (苏道毕力格)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Agricultural University
Original Assignee
China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Agricultural University
Priority to CN202210036882.1A
Publication of CN114485667A
Application granted
Publication of CN114485667B
Active legal status
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 - Non-hierarchical techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0014 - Image feed-back for automatic industrial control, e.g. robot with camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Robotics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a lightweight, intelligent headland navigation method for orchards, belonging to the technical field of agricultural intelligence. The method comprises the following steps: S1, acquiring original three-dimensional point cloud data of an orchard with a three-dimensional lidar, and filtering the original point cloud; S2, extracting fruit tree trunk information using the PCL point cloud library; S3, performing end-of-row positioning by identifying trunks, and determining the headland navigation start point; S4, extracting the orchard headland features; S5, matching an effective navigation path with a neural network; S6, controlling the robot to turn, completing headland navigation. The invention collects orchard point cloud data with lidar, which is better suited to hilly orchards with strong illumination and complex terrain, and matches an effective headland turning path with a neural network, achieving high matching precision and strong anti-interference capability.

Description

Lightweight intelligent orchard headland navigation method
Technical Field
The invention belongs to the field of automatic robot navigation, and particularly relates to a lightweight, intelligent orchard headland navigation method.
Background
China's high-quality fruit orchards are mainly distributed in hilly areas such as Shandong, Sichuan, Guangdong and Guangxi Provinces. With the continuous progress of technology, automated equipment has begun to replace traditional manual operation and is gradually being applied to production, management, protection and harvesting in hilly orchards, reducing labour intensity and improving production efficiency.
Inter-row navigation and headland navigation are both necessary for an orchard robot to operate autonomously over the whole process. However, current research on orchard robot navigation is mainly directed at the inter-row environment; headland turning has received less attention, and most headland navigation research targets open-field environments. The difficulties are: first, the closed orchard canopy blocks GNSS signals, so real-time absolute positioning cannot be achieved; second, headland conditions vary greatly with terrain and planting pattern, making headland recognition difficult and automatic row changing hard. As a result, such robots are generally switched to a manual driving mode when turning. Solving the difficulty of orchard headland navigation therefore has practical significance for autonomous orchard robot operation.
Orchard headland navigation mainly comprises two parts: end-of-row positioning and headland turning. For end-of-row positioning in orchards, most domestic research adopts one of two methods, GNSS or machine vision. However, machine-vision navigation is easily affected by illumination intensity and prone to large detection errors, giving it certain limitations, while most orchard environments are closed, so GNSS signals are easily lost and navigation accuracy suffers. Compared with visual navigation and GNSS navigation, lidar is less affected by the environment and offers more stable automatic navigation, making it better suited to hilly orchards with strong illumination, complex terrain and dense foliage.
Machine learning methods are widely used for their high precision and strong classification capability. For headland turning, if the robot's steering motion during manual driving is recorded and used as a data set to train a network model, an optimal path can be effectively matched from candidate turning paths according to the orchard headland features, determining the robot's headland turning strategy and finally realizing headland navigation.
Disclosure of Invention
The invention aims to provide a lightweight, intelligent orchard headland navigation method to solve the above problems.
In order to achieve the above object, the present invention provides the following technical solutions:
A lightweight intelligent orchard headland navigation method comprises the following steps:
S1, acquiring original three-dimensional point cloud data of an orchard with a three-dimensional lidar, and filtering the original three-dimensional point cloud;
S2, extracting fruit tree trunk information using the PCL point cloud library;
S3, performing end-of-row positioning by identifying trunks, and determining the headland navigation start point;
S4, extracting the orchard headland features;
S5, matching an effective navigation path using a neural network;
S6, controlling the robot to turn, completing headland navigation.
The method comprises the following specific steps:
S1, three-dimensional point cloud filtering of the orchard;
Filter the original three-dimensional point cloud: remove outlier points and ground points, and remove points farther from the lidar than three times the plant spacing, ensuring that the forward-looking range on each side of the lidar contains at least three fruit trees;
S2, extracting fruit tree trunk information;
Segment the filtered three-dimensional point cloud, extract single fruit trees with a K-means clustering algorithm, then extract the trunk point cloud from each single tree with a random sample consensus algorithm, completing the extraction of fruit tree trunk information;
S3, when the lidar detects that fewer than three trunks remain on one side within the forward-looking range and the trunk count keeps decreasing as the robot moves, the farthest trunk in the forward-looking range is judged to be the end-of-row trunk, and the midpoint of the line connecting the left and right end-of-row trunks is the headland navigation start point;
S4, after the robot reaches the headland navigation start point, extracting the tree-row width, the headland length and the fruit tree canopy diameter from the point cloud data;
S5, matching an effective navigation path with a BP neural network according to the tree-row width, the headland length and the fruit tree canopy diameter;
S6, after the effective navigation path is matched, controlling the robot to navigate: the robot's actual travel path is obtained in real time with the encoder and IMU, and the actual path is adjusted toward the planned path, completing headland navigation.
In step S1, first a pass-through filter is applied to the point cloud along the X, Y and Z axes: points beyond three times the plant spacing in the X direction are removed according to the plant spacing Lp of the fruit trees, points beyond 1.5 times the row spacing in the Y direction are removed according to the row spacing L of the fruit trees, and ground points are removed according to the minimum Z coordinate z_min of the point cloud; then a StatisticalOutlierRemoval filter is applied: the neighbourhood of each point is analysed, the mean distance from each point to all its neighbours is computed, and points whose mean distance falls outside the standard range are removed, thereby filtering out outliers;
In step S2, the filtered point cloud data are examined with a K-means clustering algorithm: the points are grouped into n classes by inter-point distance, the two closest classes are merged into one, the inter-class distances are recomputed, and this is iterated until the point cloud data of each single fruit tree are extracted; the fruit tree trunk is then simplified to a cylindrical model, and the trunk point cloud is extracted from each single-tree point cloud with a random sample consensus (RANSAC) algorithm;
In step S3, when the lidar detects that fewer than three trunks remain on one side within the forward-looking range and the trunk count keeps decreasing as the robot moves, the farthest trunk in the forward-looking range is judged to be the end-of-row trunk; the midpoint of the line connecting the left and right end-of-row trunks is taken as the headland navigation start point, and the robot is controlled to reach this start point and stop;
In step S4, the distance between the trunk point clouds of the two end-of-row trunks is computed as the tree-row width; the original three-dimensional point cloud data is projected onto the XOY plane, the projected contour of the last fruit tree is extracted with the BoundaryEstimation algorithm, the projection is approximated as a circle, and the circle's diameter is computed from curvature to give the fruit tree canopy diameter; the distance from the robot to the orchard headland boundary is measured, and the canopy radius is subtracted from it to obtain the headland length;
Step S5 comprises the steps of:
In an early stage, a large amount of end-of-row fruit tree point cloud data is collected while the robot is manually driven through turns and the turning trajectories are recorded; a BP neural network is trained with the tree-row width, headland length and fruit tree canopy diameter as inputs and with turning success and the turning path as outputs, constructing a 'position information - turning state' network model;
the tree-row width, headland length and fruit tree canopy diameter acquired in step S4 are input into the 'position information - turning state' network model, and the optimal turning path is obtained by neural network matching.
Compared with the prior art, the invention has the following beneficial effects:
The method collects orchard point cloud data with lidar, which is better suited to hilly orchards with strong illumination and complex terrain, and matches an effective headland turning path with a neural network, achieving high matching precision and strong anti-interference capability.
Drawings
The invention includes the following drawings:
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 (a) shows the original three-dimensional orchard point cloud data of the present invention;
FIG. 2 (b) shows the three-dimensional point cloud data after ground filtering according to the present invention;
FIG. 2 (c) shows the fruit tree trunk point cloud data of the present invention;
FIG. 3 (a) is a first schematic diagram of end-of-row positioning according to the present invention;
FIG. 3 (b) is a second schematic diagram of end-of-row positioning according to the present invention;
FIG. 3 (c) is a third schematic diagram of end-of-row positioning according to the present invention;
FIG. 4 is a schematic diagram of headland feature extraction according to the present invention;
FIG. 5 is a schematic diagram of the network model construction of the present invention;
FIG. 6 (a) is a first schematic diagram of an optimal turning path of the present invention;
FIG. 6 (b) is a second schematic diagram of an optimal turning path of the present invention;
FIG. 6 (c) is a third schematic diagram of an optimal turning path of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the lightweight intelligent orchard headland navigation method of the invention comprises the following steps:
S1, acquiring the original three-dimensional point cloud data of the orchard;
The original three-dimensional point cloud data of the orchard is acquired with a three-dimensional lidar and converted from a spherical coordinate system to a Cartesian coordinate system OXYZ, where the robot centre is defined as the origin O, the X axis points toward the vehicle head in the horizontal plane, the Z axis is perpendicular to the ground and points upward, and the Y axis is determined by the right-hand rule;
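By way of illustration only, this coordinate conversion may be sketched as follows (a minimal NumPy sketch; the angle conventions and function name are assumptions, not prescribed by the invention):

import numpy as np

def spherical_to_cartesian(r, azimuth, elevation):
    # r: range (m); azimuth: rotation about Z from the X (vehicle-head) axis;
    # elevation: angle above the horizontal plane; all angles in radians.
    x = r * np.cos(elevation) * np.cos(azimuth)   # X: toward the vehicle head
    y = r * np.cos(elevation) * np.sin(azimuth)   # Y: by the right-hand rule
    z = r * np.sin(elevation)                     # Z: up, perpendicular to ground
    return np.stack([x, y, z], axis=-1)           # (N, 3) point cloud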
Three-dimensional point cloud filtering of the orchard;
Fig. 2 (a) shows the original three-dimensional orchard point cloud data acquired by the lidar. As shown in fig. 2 (b), the original lidar point cloud is filtered: outlier points and ground points are removed, and points farther from the lidar than three times the plant spacing are removed, so that the forward-looking range on each side of the lidar contains at least three fruit trees. The specific process is as follows:
First, a pass-through filter is applied to the point cloud along the X, Y and Z axes: points beyond three times the plant spacing in the X direction are removed according to the plant spacing Lp of the fruit trees, points beyond 1.5 times the row spacing in the Y direction are removed according to the row spacing L of the fruit trees, and ground points are removed according to the minimum Z coordinate z_min of the point cloud; then a StatisticalOutlierRemoval filter is applied: the neighbourhood of each point is analysed, the mean distance from each point to all its neighbours is computed, and points whose mean distance falls outside the standard range are removed, thereby filtering out outliers;
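By way of illustration, the two filtering stages may be sketched as follows (the invention uses PCL's pass-through and StatisticalOutlierRemoval filters; this NumPy/SciPy sketch mirrors their logic, with the ground margin dz, neighbour count k and the 'standard range' rule mean + std_mul * std taken as assumptions):

import numpy as np
from scipy.spatial import cKDTree

def passthrough_filter(cloud, Lp, L, z_min, dz=0.05):
    # Keep points within 3x plant spacing (X), 1.5x row spacing (Y),
    # and more than dz above the lowest point (ground removal).
    x, y, z = cloud[:, 0], cloud[:, 1], cloud[:, 2]
    keep = (np.abs(x) <= 3 * Lp) & (np.abs(y) <= 1.5 * L) & (z > z_min + dz)
    return cloud[keep]

def statistical_outlier_removal(cloud, k=20, std_mul=1.0):
    # Drop points whose mean distance to their k nearest neighbours
    # lies beyond mean + std_mul * std over the whole cloud.
    tree = cKDTree(cloud)
    d, _ = tree.query(cloud, k=k + 1)      # column 0 is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_mul * mean_d.std()
    return cloud[keep]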
S2, extracting the fruit tree trunk point cloud;
The filtered fruit tree point cloud is segmented: the filtered point cloud data are examined with a K-means clustering algorithm, the points are grouped into n classes by inter-point distance, the two closest classes are merged into one, the inter-class distances are recomputed, and this is iterated until the point cloud data of each single fruit tree are extracted; the fruit tree trunk is then simplified to a cylindrical model, and the trunk point cloud is extracted from each single-tree point cloud with a random sample consensus (RANSAC) algorithm; the extracted trunk point cloud is shown in fig. 2 (c);
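By way of illustration, the two-stage trunk extraction may be sketched as follows, with scikit-learn's KMeans standing in for the clustering step and a hand-rolled RANSAC circle fit on a trunk-height slice standing in for PCL's cylinder model (slice heights, radius limit and inlier tolerance are assumptions):

import numpy as np
from sklearn.cluster import KMeans

def split_into_trees(cloud, n_trees):
    # Cluster on XY only, so each cluster corresponds to one fruit tree.
    labels = KMeans(n_clusters=n_trees, n_init=10).fit_predict(cloud[:, :2])
    return [cloud[labels == i] for i in range(n_trees)]

def ransac_trunk(tree_cloud, z_lo=0.2, z_hi=0.8, r_max=0.15,
                 iters=500, tol=0.02, seed=0):
    # Fit a vertical cylinder (a circle in XY) to the trunk-height slice.
    rng = np.random.default_rng(seed)
    sl = tree_cloud[(tree_cloud[:, 2] > z_lo) & (tree_cloud[:, 2] < z_hi)][:, :2]
    if len(sl) < 3:
        return None
    best, best_n = None, 0
    for _ in range(iters):
        p = sl[rng.choice(len(sl), 3, replace=False)]
        A = 2 * (p[1:] - p[0])                      # circle through 3 points
        b = (p[1:] ** 2 - p[0] ** 2).sum(axis=1)
        try:
            c = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue                                # collinear sample, retry
        r = np.linalg.norm(p[0] - c)
        if r > r_max:
            continue                                # too wide to be a trunk
        inliers = np.abs(np.linalg.norm(sl - c, axis=1) - r) < tol
        if inliers.sum() > best_n:
            best, best_n = (c, r), inliers.sum()
    return best                                     # (trunk centre XY, radius)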
S3, performing end-of-row positioning by identifying trunks, and determining the headland navigation start point;
As shown in figs. 3 (a) to 3 (c), during the robot's inter-row movement, when the lidar detects that fewer than three trunks remain on one side within the forward-looking range and the trunk count keeps decreasing as the robot moves, the farthest trunk in the forward-looking range is judged to be the end-of-row trunk; the midpoint of the line connecting the left and right end-of-row trunks is the headland navigation start point, and the robot is controlled to reach this start point and stop;
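By way of illustration, the end-of-row decision rule may be sketched as follows (trunk centres are assumed given in the robot frame with X forward; the function and its arguments are illustrative, not part of the invention):

import numpy as np

def headland_start(trunks_left, trunks_right, counts):
    # trunks_*: (N, 2) trunk centres in the robot frame (X forward, Y lateral);
    # counts: running history of single-side trunk counts in the forward view.
    ahead_l = trunks_left[trunks_left[:, 0] > 0]
    ahead_r = trunks_right[trunks_right[:, 0] > 0]
    if len(ahead_l) == 0 or len(ahead_r) == 0:
        return None
    counts.append(min(len(ahead_l), len(ahead_r)))
    # End of row: fewer than three trunks on one side, and still decreasing.
    if counts[-1] >= 3 or (len(counts) > 1 and counts[-1] >= counts[-2]):
        return None
    last_l = ahead_l[np.argmax(ahead_l[:, 0])]   # farthest trunk, left row
    last_r = ahead_r[np.argmax(ahead_r[:, 0])]   # farthest trunk, right row
    return (last_l + last_r) / 2.0               # headland navigation start point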
S4, extracting the orchard headland features;
As shown in fig. 4, after the robot reaches the headland navigation start point, the original three-dimensional point cloud data is projected onto a two-dimensional plane, and the distance between the trunk point clouds of the two end-of-row trunks is computed as the tree-row width H; the projected contour of the last fruit tree is extracted with the BoundaryEstimation algorithm, the projection is approximated as a circle, and the circle's diameter is computed from curvature to give the fruit tree canopy diameter D; meanwhile, the distance from the robot to the orchard headland boundary is measured, and the canopy radius is subtracted from it to obtain the headland length L;
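By way of illustration, the three headland features may be computed as follows (the invention extracts the canopy contour with the BoundaryEstimation algorithm and a curvature-based circle fit; approximating the circle by the projected points' centroid and maximum radius, as below, is a simplification):

import numpy as np

def headland_features(trunk_left_xy, trunk_right_xy, last_tree_cloud,
                      dist_to_boundary):
    H = np.linalg.norm(trunk_left_xy - trunk_right_xy)   # tree-row width
    xy = last_tree_cloud[:, :2]                          # project onto XOY plane
    centre = xy.mean(axis=0)
    D = 2 * np.linalg.norm(xy - centre, axis=1).max()    # canopy diameter
    L = dist_to_boundary - D / 2                         # headland length
    return H, D, L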
S5, matching an effective navigation path with the neural network;
Step S5 comprises the steps of:
a) In an early stage, a large amount of end-of-row fruit tree point cloud data is collected while the robot is manually driven through turns and the turning trajectories are recorded; as shown in fig. 5, a BP neural network is trained with the tree-row width, headland length and fruit tree canopy diameter as inputs and with turning success and the turning path as outputs, constructing a 'position information - turning state' network model;
b) The tree-row width, headland length and fruit tree canopy diameter acquired in step S4 are taken as inputs, and the optimal turning path is obtained by neural network matching; figs. 6 (a), 6 (b) and 6 (c) show the robot's optimal turning paths under different inputs (tree-row width, headland length and fruit tree canopy diameter), where the turning paths are U-shaped, bow-shaped and fishtail-shaped, respectively.
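By way of illustration, a small backpropagation-trained multilayer perceptron (here scikit-learn's MLPClassifier) may stand in for the BP neural network; the training files, hidden-layer size and label coding (0 = U-shaped, 1 = bow-shaped, 2 = fishtail) are hypothetical:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical recorded data: rows of [row width H, headland length L,
# canopy diameter D] and the label of the manually driven turn that succeeded.
X = np.load("headland_features.npy")          # shape (n_samples, 3), assumed file
y = np.load("turn_labels.npy")                # 0: U, 1: bow, 2: fishtail

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)                                 # trained by backpropagation

def match_turning_path(H, L, D):
    # Match the optimal turning path for the measured headland features.
    return int(net.predict([[H, L, D]])[0])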
S6, controlling the robot to turn, completing headland navigation.
After the effective path is matched, the robot is controlled to navigate: the robot's actual travel path is obtained in real time with the encoder, IMU and laser odometer, and the actual path is adjusted toward the planned path, completing headland navigation.
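By way of illustration, the closing control loop may be sketched as dead reckoning from encoder and IMU increments plus a pure-pursuit-style correction toward the matched path (the look-ahead distance and gain are assumptions, not values from the invention):

import numpy as np

def update_pose(pose, d_dist, d_yaw):
    # Dead reckoning: the encoder gives travelled distance, the IMU yaw change.
    x, y, th = pose
    th += d_yaw
    return (x + d_dist * np.cos(th), y + d_dist * np.sin(th), th)

def steering_command(pose, path, lookahead=0.5, gain=1.5):
    # Aim at the first planned-path point beyond the look-ahead distance and
    # return a yaw-rate command proportional to the heading error.
    x, y, th = pose
    d = np.linalg.norm(path - np.array([x, y]), axis=1)
    idx = np.argmax(d > lookahead) if (d > lookahead).any() else len(path) - 1
    tx, ty = path[idx]
    err = np.arctan2(ty - y, tx - x) - th
    err = (err + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi]
    return gain * err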
The above embodiments are intended only to illustrate the present invention, not to limit it; various changes and modifications may be made by those skilled in the relevant art without departing from the spirit and scope of the present invention, and all equivalent technical solutions therefore also fall within the scope of the present invention.
What is not described in detail in this specification is prior art known to those skilled in the art.

Claims (3)

1. A lightweight intelligent orchard headland navigation method, characterized by comprising the following steps:
S1, acquiring original three-dimensional point cloud data of an orchard with a three-dimensional lidar, and filtering the original three-dimensional point cloud, specifically:
Three-dimensional point cloud filtering of the orchard: filter the original three-dimensional point cloud, removing outlier points and ground points, and removing points farther from the lidar than three times the plant spacing, so that the forward-looking range on each side of the lidar contains at least three fruit trees;
S2, extracting fruit tree trunk information using the PCL point cloud library, specifically:
Extracting fruit tree trunk information: segment the filtered three-dimensional point cloud, extract single fruit trees with a K-means clustering algorithm, then extract the trunk point cloud from each single tree with a random sample consensus algorithm, completing the extraction of fruit tree trunk information;
S3, performing end-of-row positioning by identifying trunks, and determining the headland navigation start point, specifically:
when the lidar detects that fewer than three trunks remain on one side within the forward-looking range and the trunk count keeps decreasing as the robot moves, the farthest trunk in the forward-looking range is judged to be the end-of-row trunk, and the midpoint of the line connecting the left and right end-of-row trunks is the headland navigation start point;
S4, extracting the orchard headland features, specifically:
after the robot reaches the headland navigation start point, extracting the tree-row width, the headland length and the fruit tree canopy diameter from the point cloud data;
computing the distance between the trunk point clouds of the two end-of-row trunks as the tree-row width; projecting the original three-dimensional point cloud data onto the XOY plane, extracting the projected contour of the last fruit tree with the BoundaryEstimation algorithm, approximating the projection as a circle, and computing the circle's diameter from curvature as the fruit tree canopy diameter; measuring the distance from the robot to the orchard headland boundary, and subtracting the canopy radius from it to obtain the headland length;
S5, matching an effective navigation path using a neural network, specifically:
matching the effective navigation path with a BP neural network according to the tree-row width, the headland length and the fruit tree canopy diameter;
in an early stage, collecting a large amount of end-of-row fruit tree point cloud data while the robot is manually driven through turns and the turning trajectories are recorded; training a BP neural network with the tree-row width, headland length and fruit tree canopy diameter as inputs and with turning success and the turning path as outputs, constructing a 'position information - turning state' network model;
inputting the tree-row width, headland length and fruit tree canopy diameter acquired in step S4 into the 'position information - turning state' network model, and obtaining the optimal turning path by neural network matching;
S6, controlling the robot to turn, completing headland navigation, specifically:
after the effective navigation path is matched, controlling the robot to navigate: obtaining the robot's actual travel path in real time with the encoder and IMU, and adjusting the actual path toward the planned path, completing headland navigation.
2. The lightweight intelligent orchard headland navigation method of claim 1, wherein: in step S1, first a pass-through filter is applied to the point cloud along the X, Y and Z axes: points beyond three times the plant spacing in the X direction are removed according to the plant spacing Lp of the fruit trees, points beyond 1.5 times the row spacing in the Y direction are removed according to the row spacing L of the fruit trees, and ground points are removed according to the minimum Z coordinate z_min of the point cloud; then a StatisticalOutlierRemoval filter is applied: the neighbourhood of each point is analysed, the mean distance from each point to all its neighbours is computed, and points whose mean distance falls outside the standard range are removed, thereby filtering out outliers.
3. The lightweight intelligent orchard headland navigation method of claim 2, wherein: in step S2, the filtered point cloud data are examined with a K-means clustering algorithm: the points are grouped into n classes by inter-point distance, the two closest classes are merged into one, the inter-class distances are recomputed, and this is iterated until the point cloud data of each single fruit tree are extracted; the fruit tree trunk is then simplified to a cylindrical model, and the trunk point cloud is extracted from each single-tree point cloud with a random sample consensus algorithm.
CN202210036882.1A 2022-01-13 2022-01-13 Lightweight intelligent orchard headland navigation method Active CN114485667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210036882.1A CN114485667B (en) 2022-01-13 2022-01-13 Lightweight intelligent orchard headland navigation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210036882.1A CN114485667B (en) 2022-01-13 2022-01-13 Lightweight intelligent orchard headland navigation method

Publications (2)

Publication Number Publication Date
CN114485667A CN114485667A (en) 2022-05-13
CN114485667B (en) 2024-05-24

Family

ID=81512567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210036882.1A Active CN114485667B (en) 2022-01-13 2022-01-13 Light intelligent orchard ground navigation method

Country Status (1)

Country Link
CN (1) CN114485667B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294562B (en) * 2022-07-19 2023-05-09 广西大学 Intelligent sensing method for operation environment of plant protection robot
CN116048104B (en) * 2023-04-03 2023-06-30 华南农业大学 Orchard operation robot path planning method, system and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210112106A (en) * 2020-03-04 2021-09-14 한국전자통신연구원 Method and apparatus for autonomous driving of mobile robot in orchard environment

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368158A (en) * 2011-09-15 2012-03-07 西北农林科技大学 Navigation positioning method of orchard machine
CN106017477A (en) * 2016-07-07 2016-10-12 西北农林科技大学 Visual navigation system of orchard robot
CN110440800A (en) * 2018-05-02 2019-11-12 世博生态环保技术股份有限公司 A kind of orchard spray Algorithms of Robots Navigation System
KR102069666B1 (en) * 2018-11-14 2020-01-23 주식회사 모빌테크 Real time driving route setting method for autonomous driving vehicles based on point cloud map
CN109782771A (en) * 2019-02-26 2019-05-21 西安交通大学 A kind of orchard mobile robot and edge of a field forward method
CN209657155U (en) * 2019-05-19 2019-11-19 西北农林科技大学 A kind of novel electric orchard self-navigation Operation Van
CN110298914A (en) * 2019-05-29 2019-10-01 江苏大学 A kind of method of fruit tree canopy characteristic map in orchard establishing
CN110825078A (en) * 2019-10-10 2020-02-21 江苏大学 Ground turning path control system of autonomous navigation tracked vehicle
CN111352420A (en) * 2020-03-03 2020-06-30 厦门大学 High-precision positioning and target alignment control method for laser navigation AGV
WO2021184757A1 (en) * 2020-03-14 2021-09-23 苏州艾吉威机器人有限公司 Robot vision terminal positioning method and device, and computer-readable storage medium
CN112363503A (en) * 2020-11-06 2021-02-12 南京林业大学 Orchard vehicle automatic navigation control system based on laser radar
CN112819830A (en) * 2021-01-24 2021-05-18 南京林业大学 Individual tree crown segmentation method based on deep learning and airborne laser point cloud
CN112965481A (en) * 2021-02-01 2021-06-15 成都信息工程大学 Orchard operation robot unmanned driving method based on point cloud map
CN112991435A (en) * 2021-02-09 2021-06-18 中国农业大学 Orchard end-of-row and head-of-row identification method based on 3D LiDAR
CN113807309A (en) * 2021-09-28 2021-12-17 北京石油化工学院 Orchard machine walking route planning method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Autonomous inter-row orchard navigation method combining least squares and SVM; Liu Xingxing et al.; Transactions of the Chinese Society of Agricultural Engineering; Vol. 37, No. 9; pp. 157-164 *
3D LiDAR inter-row orchard navigation method; Liu Weihong et al.; Transactions of the Chinese Society of Agricultural Engineering; Vol. 37, No. 9; pp. 165-174 *

Also Published As

Publication number Publication date
CN114485667A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN114485667B (en) Lightweight intelligent orchard headland navigation method
Bai et al. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review
US20220117218A1 (en) Autonomous system for light treatment of a plant
US11785873B2 (en) Detecting multiple objects of interest in an agricultural environment
CN102914967B (en) Autonomous navigation and man-machine coordination picking operating system of picking robot
US20230404056A1 (en) Multiaction treatment of plants in an agricultural environment
Edan Design of an autonomous agricultural robot
CN112363503B (en) Orchard vehicle automatic navigation control system based on laser radar
CN110765916A (en) Farmland seedling ridge identification method and system based on semantics and example segmentation
Smitt et al. Pathobot: A robot for glasshouse crop phenotyping and intervention
Karkee et al. A method for three-dimensional reconstruction of apple trees for automated pruning
WO2023050783A1 (en) Weeding robot and method and apparatus for planning weeding path thereof, and medium
Rahul et al. Image processing based automatic plant disease detection and stem cutting robot
Li et al. A multi-arm robot system for efficient apple harvesting: Perception, task plan and control
CN116128672A (en) Model-data combined driving intelligent greenhouse fertilizer preparation method and system
Jin et al. Intelligent tomato picking robot system based on multimodal depth feature analysis method
Yang et al. Vision based fruit recognition and positioning technology for harvesting robots
CN109782771A (en) A kind of orchard mobile robot and edge of a field forward method
WO2023069841A1 (en) Autonomous detection and control of vegetation
CN118115881B (en) Garden plant diseases and insect pests detection method and device
CN116026321A (en) Information acquisition and navigation method of fully-autonomous fruit tree information acquisition robot
Scarfe Development of an autonomous kiwifruit harvester: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Industrial Automation at Massey University, Manawatu, New Zealand.
CN114485612B (en) Route generation method and device, unmanned operation vehicle, electronic equipment and storage medium
CN117565065B (en) Famous tea picking robot
Shamshiri et al. An overview of visual servoing for robotic manipulators in digital agriculture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant