CN114485667A - Light and intelligent orchard ground navigation method - Google Patents

Light and intelligent orchard ground navigation method

Info

Publication number
CN114485667A
CN114485667A (application CN202210036882.1A)
Authority
CN
China
Prior art keywords
point cloud
fruit tree
orchard
navigation
ground
Prior art date
Legal status
Granted
Application number
CN202210036882.1A
Other languages
Chinese (zh)
Other versions
CN114485667B (en)
Inventor
郑永军
李文伟
刘伟洪
江世界
杨圣慧
许家伟
苏道毕力格
Current Assignee
China Agricultural University
Original Assignee
China Agricultural University
Priority date
Filing date
Publication date
Application filed by China Agricultural University
Priority to CN202210036882.1A
Publication of CN114485667A
Application granted
Publication of CN114485667B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/0014: Image feed-back for automatic industrial control, e.g. robot with camera
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Abstract

The invention discloses a lightweight, intelligent orchard headland navigation method, belonging to the technical field of intelligent agriculture. The method comprises the following steps: S1, acquiring raw three-dimensional point cloud data of an orchard with a three-dimensional laser radar and filtering the raw point cloud; S2, extracting fruit tree trunk information using the PCL point cloud library; S3, performing row-end positioning by identifying trunks and determining the headland navigation starting point; S4, extracting the orchard headland features; S5, matching an effective navigation path with a neural network; and S6, controlling the robot to turn, completing headland navigation. The invention collects orchard point cloud data with a laser radar, which makes it better suited to hilly orchards with strong illumination and complex terrain, and matches an effective headland turning path with a neural network, achieving high matching accuracy and strong anti-interference capability.

Description

Light and intelligent orchard ground navigation method
Technical Field
The invention belongs to the field of automatic robot navigation, and in particular relates to a lightweight, intelligent orchard headland navigation method.
Background
China's high-quality fruit-producing orchards are concentrated mainly in hilly regions such as Shandong, Sichuan, and Guangdong. As technology has advanced, automated equipment has begun to replace traditional manual labor and is gradually being applied to production, management, crop protection, harvesting, and other links of hilly orchard operation, reducing labor intensity and improving production efficiency.
Inter-row navigation and headland navigation are both necessary for an orchard robot to operate autonomously end to end. Current research on orchard robot navigation, however, focuses mainly on the inter-row environment; headland turning has received little attention, and most headland navigation research targets open-field environments. The difficulties are twofold. First, canopy closure in orchards causes loss of GNSS positioning information, so real-time absolute positioning is unavailable. Second, headland conditions vary widely and are hard to identify, and automatic row changing is further constrained by terrain and planting patterns. As a result, such robots are generally switched to manual driving during turns. Solving the orchard headland navigation problem therefore has practical significance for autonomous orchard robot operation.
Orchard headland navigation comprises two parts: row-end positioning and headland turning. For row-end positioning, most domestic research adopts either GNSS or machine vision. However, machine vision navigation is easily affected by illumination intensity, which can cause large detection errors, so it has clear limitations; and because most orchard environments are closed, GNSS signals are easily lost, degrading navigation accuracy. Compared with visual and GNSS navigation, laser radar is less affected by the environment and offers more stable automatic navigation, making it better suited to row-end positioning in hilly orchards with strong illumination, complex terrain, and dense foliage.
Machine learning methods are widely used for their high accuracy and strong classification capability. For headland turning, if the robot's steering motion during manual driving is recorded and used as a data set to train a network model, an optimal path can be matched from the candidate turning paths according to the orchard headland features, determining the robot's headland turning strategy and ultimately realizing headland navigation.
Disclosure of Invention
The invention aims to provide a lightweight, intelligent orchard headland navigation method to solve the above problems.
To achieve this purpose, the invention provides the following technical scheme:
a lightweight, intelligent orchard headland navigation method comprising the following steps:
s1, acquiring original three-dimensional point cloud data of an orchard through a three-dimensional laser radar, and filtering the original three-dimensional point cloud;
s2, extracting information of a fruit tree trunk by using a PCL point cloud base;
s3, performing line end positioning by identifying the trunk, and determining a heading navigation starting point;
s4, extracting the characteristics of the orchard head;
s5, matching an effective navigation path by using a neural network;
and S6, controlling the robot to turn to complete the ground navigation.
The specific steps are as follows:
S1, orchard three-dimensional point cloud filtering:
filter the raw three-dimensional point cloud to remove outliers and ground points, and also remove points farther than three plant spacings from the laser radar, ensuring that the forward-looking range on each side of the radar contains at least three fruit trees;
S2, fruit tree trunk information extraction:
segment the filtered point cloud, extract individual fruit trees with a K-means clustering algorithm, and then extract the trunk point cloud from each tree with a random sample consensus algorithm, completing trunk information extraction;
S3, when the laser radar detects fewer than three trunks on one side of the forward-looking range and the trunk count keeps decreasing as the robot moves, the farthest trunk within the forward-looking range is judged to be the row-end trunk; the midpoint of the line connecting the left and right row-end trunks is the headland navigation starting point;
S4, after the robot reaches the headland navigation starting point, extract the tree row width, the headland length, and the fruit tree canopy diameter from the point cloud data;
S5, according to the tree row width, headland length, and canopy diameter, match an effective navigation path with a BP neural network;
and S6, after the effective navigation path is matched, control the robot to navigate: acquire the robot's actual driving path in real time with an encoder and an IMU, and adjust the actual path toward the planned path, completing headland navigation.
In step S1, a pass-through filter is first used to filter the point cloud along the X, Y, and Z axes: points beyond three times the plant spacing Lp in the X-axis direction are removed, points beyond 1.5 times the row spacing Ll in the Y-axis direction are removed, and the ground point cloud is removed according to the minimum value Zmin of the point-cloud Z coordinates. A StatisticalOutlierRemoval filter is then applied: the neighborhood of each point is analyzed, the average distance from each point to all of its neighbors is computed, and points whose average distance falls outside the standard range are removed, filtering out the outliers;
in step S2, the filtered point cloud data is inspected by adopting a K-mean clustering algorithm, the point clouds are grouped into n classes according to the distance between the point clouds, then the two classes with the minimum distance are combined into one class, the distance between the classes is recalculated, and iteration is carried out until single-plant fruit tree point cloud data is extracted; then simplifying the fruit tree trunk into a cylindrical model, and extracting trunk point cloud from the single-plant fruit tree point cloud by adopting a random sampling consistency algorithm (RANSAC);
in step S3, when the laser radar detects that there are fewer trunks on one side in the forward-looking range and the number of trunks is continuously reduced along with the movement of the robot, it is determined that the trunk with the farthest distance in the forward-looking range is the row end trunk, the midpoint of the connecting line of the row end trunks on the left side and the right side is the headland navigation starting point, the robot is controlled to reach the headland navigation starting point, and the robot stops walking;
in step S4, calculating the distance between the point cloud data of the end trunk of the two side rows, i.e. the width of the tree row; projecting the original three-dimensional point cloud data to an XOY plane, extracting the outline of the projection of the fruit tree at the end of the row by adopting a BoundryEstimation algorithm, approximately equating the projection to a circle, and calculating the diameter of the projection through curvature, namely the diameter of the fruit tree canopy; acquiring the distance from the robot to the boundary of the orchard head, and subtracting the radius of the crown layer of the fruit tree by using the distance to obtain the length of the orchard head;
step S5 includes the following steps:
collecting a large amount of row-end fruit tree point cloud data in the early stage, manually operating the robot to steer, recording a steering track, adopting a BP neural network, taking the width of a tree row, the length of the head of the tree and the diameter of a fruit tree canopy as input, and taking whether steering is successful or not and a steering path as output, and constructing a network model of 'position and information-steering state';
and (4) inputting the tree row width, the head length and the fruit tree canopy diameter acquired in the step (S4) into a 'position and information-steering state' network model, and obtaining an optimal steering path by utilizing neural network matching.
Compared with the prior art, the invention has the following beneficial effects:
the method collects orchard point cloud data with a laser radar, which makes it better suited to hilly orchards with strong illumination and complex terrain, and matches an effective headland turning path with a neural network, achieving high matching accuracy and strong anti-interference capability.
Drawings
The invention has the following drawings:
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2(a) shows raw orchard three-dimensional point cloud data according to the present invention;
FIG. 2(b) shows the three-dimensional point cloud data after the ground is filtered out;
FIG. 2(c) shows fruit tree trunk point cloud data;
FIG. 3(a) is a first schematic diagram of row-end positioning according to the present invention;
FIG. 3(b) is a second schematic diagram of row-end positioning;
FIG. 3(c) is a third schematic diagram of row-end positioning;
FIG. 4 is a schematic diagram of headland feature extraction according to the present invention;
FIG. 5 is a schematic diagram of the network model construction of the present invention;
FIG. 6(a) is a first schematic diagram of an optimal turning path according to the present invention;
FIG. 6(b) is a second schematic diagram of an optimal turning path;
FIG. 6(c) is a third schematic diagram of an optimal turning path.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, the lightweight, intelligent orchard headland navigation method of the invention comprises the following steps:
s1, collecting three-dimensional original point cloud data of an orchard;
acquiring three-dimensional original point cloud data of an orchard through a three-dimensional laser radar, converting the original point cloud data from a spherical coordinate system to a Cartesian coordinate system OXYZ, and setting the center of a robot as an origin O, wherein an X axis points to the direction of a vehicle head in a horizontal plane, a Z axis is vertical to the ground and faces upwards, and a Y axis is determined by a right-hand rule;
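By way of illustration only, this conversion is the standard spherical-to-Cartesian mapping; a minimal Python sketch follows, assuming the radar reports range, azimuth, and elevation per return (the function name and the NumPy implementation are illustrative, not taken from the patent):

```python
import numpy as np

def spherical_to_cartesian(r, azimuth, elevation):
    """Map LiDAR returns (range r in metres, azimuth/elevation in radians)
    into the robot-centred OXYZ frame: X toward the vehicle head in the
    horizontal plane, Z perpendicular to the ground pointing up, Y by the
    right-hand rule."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)  # (N, 3) point cloud
```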
s2, extracting information of a fruit tree trunk by using a PCL point cloud base;
s1, filtering three-dimensional point cloud of an orchard;
FIG. 2(a) shows the raw orchard three-dimensional point cloud data acquired by the laser radar. As shown in FIG. 2(b), the raw point cloud is filtered to remove outliers and ground points, and points farther than three plant spacings from the laser radar are also removed, ensuring that the forward-looking range on each side of the radar contains at least three fruit trees. The specific process is as follows:
first, a pass-through filter is used on the X, Y, and Z axes: points beyond three times the plant spacing Lp in the X-axis direction are removed, points beyond 1.5 times the row spacing Ll in the Y-axis direction are removed, and the ground point cloud is removed according to the minimum value Zmin of the point-cloud Z coordinates; then a StatisticalOutlierRemoval filter is applied: the neighborhood of each point is analyzed, the average distance from each point to all of its neighbors is computed, and points whose average distance falls outside the standard range are removed, filtering out the outliers;
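As a concrete illustration of this two-stage filtering, the sketch below reimplements it with NumPy and SciPy rather than the PCL filters named above; the ground margin, neighbour count k, and std_ratio are assumed tuning values:

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_orchard_cloud(points, Lp, Ll, k=20, std_ratio=1.0):
    """Pass-through filtering plus statistical outlier removal (step S2.1).
    points: (N, 3) array in the OXYZ frame; Lp/Ll: plant and row spacing."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Pass-through: keep points within 3 plant spacings ahead, 1.5 row
    # spacings to each side, and above the lowest (ground) points; the
    # 0.05 m ground margin is an assumed value.
    keep = (np.abs(x) <= 3 * Lp) & (np.abs(y) <= 1.5 * Ll) & (z > z.min() + 0.05)
    pts = points[keep]
    # Statistical outlier removal: mean distance of each point to its k
    # nearest neighbours, thresholded at mean + std_ratio * std.
    dists, _ = cKDTree(pts).query(pts, k=k + 1)  # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    return pts[mean_d < mean_d.mean() + std_ratio * mean_d.std()]
```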
s2, extracting point cloud of a fruit tree trunk;
carrying out segmentation processing on the filtered fruit tree point cloud, inspecting the filtered point cloud data through a K-mean clustering algorithm, forming the point clouds into n classes according to the distance between the point clouds, combining the two classes with the minimum distance into one class, recalculating the distance between the classes, and iterating until extracting the point cloud data of a single fruit tree; then simplifying the fruit tree trunk into a cylindrical model, extracting trunk point cloud from the single-plant fruit tree point cloud by adopting a random sampling consistency algorithm (RANSAC), wherein the extracted trunk point cloud is shown in a figure 2 (c);
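The following sketch illustrates the same two-stage idea, with scikit-learn's KMeans standing in for the clustering described above and a RANSAC circle fit on the XY projection of a trunk slice standing in for the cylinder model (a 2-D simplification; the cluster count, iteration count, and inlier tolerance are assumed):

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_trees(points, n_trees):
    """Split the filtered cloud into individual trees by clustering on XY."""
    labels = KMeans(n_clusters=n_trees, n_init=10).fit_predict(points[:, :2])
    return [points[labels == i] for i in range(n_trees)]

def ransac_circle(xy, n_iter=200, tol=0.01, seed=0):
    """RANSAC fit of a circle to 2-D trunk cross-section points."""
    rng = np.random.default_rng(seed)
    best_inliers, best = -1, (None, None)
    for _ in range(n_iter):
        p1, p2, p3 = xy[rng.choice(len(xy), 3, replace=False)]
        a, b = p2 - p1, p3 - p1
        d = 2.0 * (a[0] * b[1] - a[1] * b[0])
        if abs(d) < 1e-9:
            continue  # degenerate (collinear) sample
        # Circumcentre of the three sampled points, relative to p1.
        ux = (b[1] * (a @ a) - a[1] * (b @ b)) / d
        uy = (a[0] * (b @ b) - b[0] * (a @ a)) / d
        centre = p1 + np.array([ux, uy])
        r = np.linalg.norm(p1 - centre)
        inliers = np.abs(np.linalg.norm(xy - centre, axis=1) - r) < tol
        if inliers.sum() > best_inliers:
            best_inliers, best = inliers.sum(), (centre, r)
    return best  # (centre, radius) of the best-supported trunk circle
```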
s3, performing line end positioning by identifying the trunk, and determining a heading navigation starting point;
as shown in fig. 3(a) -3(c), in the inter-row movement process of the robot, when the laser radar detects that the number of the trunks on one side in the forward-looking range is less than three and the number of the trunks is continuously reduced along with the movement of the robot, the trunk with the farthest distance in the forward-looking range is determined to be the end-of-row trunk, the midpoint of the connecting line of the end-of-row trunks on the left side and the right side is the heading navigation starting point, and the robot is controlled to reach the heading navigation starting point and stop walking;
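In code, this row-end rule reduces to a count-and-midpoint test; the sketch below assumes trunk centres have already been split into left and right rows, and omits the temporal check that the count keeps decreasing:

```python
import numpy as np

def headland_start(left_trunks, right_trunks, min_visible=3):
    """Row-end rule of step S3. left_trunks/right_trunks: (N, 2) arrays of
    trunk centres (x, y) within the forward-looking range. Returns the
    headland navigation starting point, or None while still mid-row."""
    if len(left_trunks) >= min_visible and len(right_trunks) >= min_visible:
        return None  # enough trunks on both sides: keep inter-row navigation
    end_left = left_trunks[np.argmax(left_trunks[:, 0])]    # farthest along X
    end_right = right_trunks[np.argmax(right_trunks[:, 0])]
    return (end_left + end_right) / 2.0  # midpoint of the connecting line
```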
s4, extracting the characteristics of the orchard head;
as shown in fig. 4, after the robot reaches the start point of the head navigation, the original three-dimensional point cloud data is projected onto a two-dimensional plane, and the distance between the point cloud data of the end trunk of the two side rows is calculated, namely the tree row width H; extracting the outline of the projection of the fruit tree at the end of the row by adopting a BoundaryEstimation algorithm, approximately equating the projection to be a circle, and calculating the diameter of the projection through curvature, namely the diameter D of the canopy of the fruit tree; meanwhile, the distance from the robot to the boundary of the orchard head is acquired, the radius of the crown layer of the fruit tree is subtracted from the distance, and the length L of the orchard head is calculated;
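A sketch of the three features follows, with the maximum boundary-point radius used as a cheap stand-in for the curvature-based circle fit; the inputs (the two row-end trunk centres, the XY projection of a row-end tree, and the measured robot-to-boundary distance) are assumed to be available from the previous steps:

```python
import numpy as np

def headland_features(end_left, end_right, tree_xy, boundary_dist):
    """Extract (H, D, L) for step S4."""
    H = np.linalg.norm(end_left - end_right)         # tree row width
    centre = tree_xy.mean(axis=0)
    # Approximate the projected canopy contour as a circle; twice the largest
    # centre-to-boundary distance stands in for the curvature-derived diameter.
    D = 2.0 * np.linalg.norm(tree_xy - centre, axis=1).max()
    L = boundary_dist - D / 2.0                      # headland length
    return H, D, L
```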
s5, matching an effective navigation path by the neural network;
step S5 includes the following steps:
a) collecting a large amount of row end fruit tree point cloud data in the early stage, manually operating the robot to steer, recording a steering track, and constructing a position and information-steering state network model by adopting a BP neural network, as shown in figure 5, with the tree row width, the ground head length and the diameter of a fruit tree canopy as input, and with the success or failure of steering and the steering path as output;
b) taking the tree row width, the headland length and the fruit tree canopy diameter collected in the step S4 as inputs, obtaining an optimal turning path by using neural network matching, wherein under different inputs (tree row width, headland length and fruit tree canopy diameter), the optimal headland turning path of the robot is respectively shown in fig. 6(a), 6(b) and 6(c), and the turning paths are respectively in a U shape, an arc shape and a fishtail shape.
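A minimal stand-in for such a model, using scikit-learn's MLPClassifier as the BP network (the layer size, the input scaling, and the three-class path encoding are assumptions; the training pairs would come from the manually driven turns described above):

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Turning-path classes corresponding to FIGS. 6(a)-6(c).
PATHS = ["U-shaped", "arc-shaped", "fishtail-shaped"]

# "Position and information - steering state" model: (H, L, D) -> path class.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
# model.fit(X_train, y_train)     # X_train: rows of (H, L, D); y_train: 0/1/2
# best_path = PATHS[model.predict([[H, L, D]])[0]]
```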
S6, controlling the robot to turn, completing headland navigation:
after the effective path is matched, the robot is controlled to navigate: the encoder, IMU, and laser odometer acquire the robot's actual driving path in real time, and the actual path is adjusted toward the planned path, completing headland navigation.
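The patent does not specify the tracking law; as one plausible realization, the sketch below steers the robot toward the planned turning path with a pure-pursuit rule, taking the fused encoder/IMU/laser-odometry pose as input (the controller choice, look-ahead distance, and wheelbase are assumptions):

```python
import numpy as np

def pure_pursuit_steer(pose, path, lookahead=0.8, wheelbase=0.6):
    """Adjust the actual path toward the planned one (step S6 sketch).
    pose: (x, y, heading) from fused odometry; path: (N, 2) turning polyline."""
    x, y, heading = pose
    dists = np.linalg.norm(path - np.array([x, y]), axis=1)
    closest = int(np.argmin(dists))
    # Target: first point at least one look-ahead distance away, searched
    # forward from the closest path point (fall back to the path end).
    ahead = np.where(dists[closest:] >= lookahead)[0]
    target = path[closest + int(ahead[0])] if ahead.size else path[-1]
    # Pure-pursuit law: bearing to target in the robot frame -> steering angle.
    alpha = np.arctan2(target[1] - y, target[0] - x) - heading
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)
```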
The above embodiments are merely illustrative and not restrictive; those skilled in the relevant art can make various changes and modifications without departing from the spirit and scope of the invention, so all equivalent technical solutions also fall within the scope of the invention.
Matters not described in detail in this specification are within the common knowledge of those skilled in the art.

Claims (6)

1. A lightweight, intelligent orchard headland navigation method, characterized by comprising the following steps:
S1, acquiring raw three-dimensional point cloud data of an orchard with a three-dimensional laser radar, and filtering the raw point cloud;
S2, extracting fruit tree trunk information using the PCL point cloud library;
S3, performing row-end positioning by identifying trunks, and determining the headland navigation starting point;
S4, extracting the orchard headland features;
S5, matching an effective navigation path with a neural network;
and S6, controlling the robot to turn, completing headland navigation.
2. The lightweight, intelligent orchard headland navigation method according to claim 1, characterized by comprising the following specific steps:
S1, orchard three-dimensional point cloud filtering: filtering the raw three-dimensional point cloud to remove outliers and ground points, and also removing points farther than three plant spacings from the laser radar, ensuring that the forward-looking range on each side of the radar contains at least three fruit trees;
S2, fruit tree trunk information extraction: segmenting the filtered point cloud, extracting individual fruit trees with a K-means clustering algorithm, and then extracting the trunk point cloud from each tree with a random sample consensus algorithm, completing trunk information extraction;
S3, when the laser radar detects fewer than three trunks on one side of the forward-looking range and the trunk count keeps decreasing as the robot moves, judging the farthest trunk within the forward-looking range to be the row-end trunk, the midpoint of the line connecting the left and right row-end trunks being the headland navigation starting point;
S4, after the robot reaches the headland navigation starting point, extracting the tree row width, the headland length, and the fruit tree canopy diameter from the point cloud data;
S5, according to the tree row width, headland length, and canopy diameter, matching an effective navigation path with a BP neural network;
and S6, after the effective navigation path is matched, controlling the robot to navigate: acquiring the robot's actual driving path in real time with an encoder and an IMU, and adjusting the actual path toward the planned path, completing headland navigation.
3. The lightweight, intelligent orchard headland navigation method according to claim 2, characterized in that: in step S1, a pass-through filter is first used to filter the point cloud along the X, Y, and Z axes: points beyond three times the plant spacing Lp in the X-axis direction are removed, points beyond 1.5 times the row spacing Ll in the Y-axis direction are removed, and the ground point cloud is removed according to the minimum value Zmin of the point-cloud Z coordinates; then the neighborhood of each point is analyzed, the average distance from each point to all of its neighbors is computed, and points whose average distance falls outside the standard range are removed, filtering out the outliers.
4. The lightweight, intelligent orchard headland navigation method according to claim 2, characterized in that: in step S2, the filtered point cloud is processed with a K-means clustering algorithm, grouping the points into n classes by inter-point distance, merging the two closest classes into one, recomputing inter-class distances, and iterating until the point cloud of each individual fruit tree is extracted; the trunk is then simplified to a cylindrical model, and the trunk point cloud is extracted from each individual tree's point cloud with a random sample consensus algorithm.
5. The lightweight, intelligent orchard headland navigation method according to claim 2, characterized in that: in step S4, the distance between the point clouds of the two row-end trunks is computed as the tree row width; the raw three-dimensional point cloud is projected onto the XOY plane, the contour of the row-end tree's projection is extracted with a BoundaryEstimation algorithm, the projection is approximated as a circle, and its diameter is computed from curvature as the fruit tree canopy diameter; the distance from the robot to the orchard headland boundary is acquired, and the canopy radius is subtracted from it, giving the headland length.
6. The lightweight, intelligent orchard headland navigation method according to claim 2, wherein step S5 comprises the following steps:
in an early stage, collecting a large amount of row-end fruit tree point cloud data while manually driving the robot through turns and recording the turning tracks; with a BP neural network, taking the tree row width, headland length, and canopy diameter as inputs and the turning success flag and turning path as outputs, constructing a 'position and information - steering state' network model;
and inputting the tree row width, headland length, and canopy diameter acquired in step S4 into the 'position and information - steering state' network model, and obtaining the optimal turning path by neural network matching.
CN202210036882.1A | Priority: 2022-01-13 | Filed: 2022-01-13 | Light intelligent orchard ground navigation method | Active | Granted as CN114485667B

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210036882.1A (CN114485667B) | 2022-01-13 | 2022-01-13 | Light intelligent orchard ground navigation method

Publications (2)

Publication Number | Publication Date
CN114485667A | 2022-05-13
CN114485667B | 2024-05-24

Family

ID=81512567

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210036882.1A (Active; CN114485667B) | Light intelligent orchard ground navigation method | 2022-01-13 | 2022-01-13

Country Status (1)

Country | Link
CN (1) | CN114485667B

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115294562A (en) * 2022-07-19 2022-11-04 广西大学 Intelligent sensing method for operation environment of plant protection robot
CN116048104A (en) * 2023-04-03 2023-05-02 华南农业大学 Orchard operation robot path planning method, system and electronic equipment

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102368158A (en) * 2011-09-15 2012-03-07 西北农林科技大学 Navigation positioning method of orchard machine
CN106017477A (en) * 2016-07-07 2016-10-12 西北农林科技大学 Visual navigation system of orchard robot
CN110440800A (en) * 2018-05-02 2019-11-12 世博生态环保技术股份有限公司 A kind of orchard spray Algorithms of Robots Navigation System
KR102069666B1 (en) * 2018-11-14 2020-01-23 주식회사 모빌테크 Real time driving route setting method for autonomous driving vehicles based on point cloud map
CN109782771A (en) * 2019-02-26 2019-05-21 西安交通大学 A kind of orchard mobile robot and edge of a field forward method
CN209657155U (en) * 2019-05-19 2019-11-19 西北农林科技大学 A kind of novel electric orchard self-navigation Operation Van
CN110298914A (en) * 2019-05-29 2019-10-01 江苏大学 A kind of method of fruit tree canopy characteristic map in orchard establishing
CN110825078A (en) * 2019-10-10 2020-02-21 江苏大学 Ground turning path control system of autonomous navigation tracked vehicle
CN111352420A (en) * 2020-03-03 2020-06-30 厦门大学 High-precision positioning and target alignment control method for laser navigation AGV
KR20210112106A (en) * 2020-03-04 2021-09-14 한국전자통신연구원 Method and apparatus for autonomous driving of mobile robot in orchard environment
WO2021184757A1 (en) * 2020-03-14 2021-09-23 苏州艾吉威机器人有限公司 Robot vision terminal positioning method and device, and computer-readable storage medium
CN112363503A (en) * 2020-11-06 2021-02-12 南京林业大学 Orchard vehicle automatic navigation control system based on laser radar
CN112819830A (en) * 2021-01-24 2021-05-18 南京林业大学 Individual tree crown segmentation method based on deep learning and airborne laser point cloud
CN112965481A (en) * 2021-02-01 2021-06-15 成都信息工程大学 Orchard operation robot unmanned driving method based on point cloud map
CN112991435A (en) * 2021-02-09 2021-06-18 中国农业大学 Orchard end-of-row and head-of-row identification method based on 3D LiDAR
CN113807309A (en) * 2021-09-28 2021-12-17 北京石油化工学院 Orchard machine walking route planning method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Weihong et al., "3D LiDAR navigation method for orchard inter-row paths" (果园行间3D_LiDAR导航方法), Transactions of the Chinese Society of Agricultural Engineering, vol. 37, no. 9, pp. 165-174 *
LIU Xingxing et al., "Autonomous inter-row navigation method combining least squares and SVM for fruit plantations" (最小二乘法与SVM组合的林果行间自主导航方法), Transactions of the Chinese Society of Agricultural Engineering, vol. 37, no. 9, pp. 157-164 *

Also Published As

Publication Number | Publication Date
CN114485667B | 2024-05-24

Similar Documents

Publication | Publication Date | Title
Bai et al. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review
CN114485667B (en) Light intelligent orchard ground navigation method
US20220117218A1 (en) Autonomous system for light treatment of a plant
CN110765916B (en) Farmland seedling ridge identification method and system based on semantics and example segmentation
CN112363503B (en) Orchard vehicle automatic navigation control system based on laser radar
Smitt et al. Pathobot: A robot for glasshouse crop phenotyping and intervention
CN112715162A (en) System for intelligent string type fruit of picking
CN110421580A (en) A kind of field intelligent robot and its working method
WO2023050783A1 (en) Weeding robot and method and apparatus for planning weeding path thereof, and medium
Rahul et al. Image processing based automatic plant disease detection and stem cutting robot
CN113065562A (en) Crop ridge row extraction and leading route selection method based on semantic segmentation network
CN114260895A (en) Method and system for determining grabbing obstacle avoidance direction of mechanical arm of picking machine
CN116839570A (en) Crop interline operation navigation method based on sensor fusion target detection
Yang et al. Vision based fruit recognition and positioning technology for harvesting robots
CN110780669A (en) Forest robot navigation and information acquisition method
Feng et al. Design and test of harvesting robot for table-top cultivated strawberry
CN116576863A (en) Corn data acquisition robot crop inter-row navigation path identification method, computer equipment and medium
Peng et al. A combined visual navigation method for greenhouse spray robot
CN109782771A (en) A kind of orchard mobile robot and edge of a field forward method
Hutsol et al. Robotic technologies in horticulture: analysis and implementation prospects
CN115294562A (en) Intelligent sensing method for operation environment of plant protection robot
CN115407771A (en) Crop monitoring method, system and device based on machine vision
Xu et al. Geometric positioning and color recognition of greenhouse electric work robot based on visual processing
CN117053808B (en) Automatic navigation method for agricultural machinery in field crop planting environment
Wang et al. Research on UAV online visual tracking algorithm based on YOLOv5 and FlowNet2 for apple yield inspection

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant