CN109753075B - Agriculture and forestry park robot navigation method based on vision

Info

Publication number
CN109753075B
Authority
CN
China
Prior art keywords
robot
orchard
sign
row
navigation method
Prior art date
Legal status
Active
Application number
CN201910084147.6A
Other languages
Chinese (zh)
Other versions
CN109753075A (en)
Inventor
史云
李会宾
吴文斌
杨鹏
唐华俊
Current Assignee
Institute of Agricultural Resources and Regional Planning of CAAS
Original Assignee
Institute of Agricultural Resources and Regional Planning of CAAS
Priority date
Filing date
Publication date
Application filed by Institute of Agricultural Resources and Regional Planning of CAAS filed Critical Institute of Agricultural Resources and Regional Planning of CAAS
Priority to CN201910084147.6A
Publication of CN109753075A
Application granted
Publication of CN109753075B

Abstract

The invention provides a vision-based agriculture and forestry park robot navigation method, which comprises the following steps: S1, arranging an auxiliary sign at the entrance/exit of each orchard row in the agriculture and forestry park; S2, when the robot moves in front of the auxiliary sign, adjusting its distance and angle relative to the sign and then completing the turn. The method targets the situation in which a robot inside an orchard cannot rely on GPS positioning: by combining an orchard row detection method with positioning guide signs, the robot can travel accurately, turn, and find the next row without GPS, meeting the requirement of continuous work in the orchard. In the orchard environment, GPS positioning suffers frequent signal loss, the fixes are inaccurate, and the subsequent operating cost is high. The invention achieves correct navigation without GPS, thereby reducing the economic investment.

Description

Agriculture and forestry park robot navigation method based on vision
Technical Field
The invention relates to a navigation technology, in particular to a vision-based agriculture and forestry park robot navigation method.
Background
Multifunctional agricultural robots are now widely applied and increasingly replace manual labor in a wide range of field activities. Orchard production involves many kinds of tasks, such as flower thinning, fruit thinning, bagging, pruning, grass mulching, irrigation, fertilization, pesticide spraying, pest control and staged harvesting, which demand large amounts of manpower and material resources; at the same time, imprecise orchard management produces much wasted investment and ecological pollution, driving up the price of fruit. In view of these situations, developing an intelligent, precise robot suited to orchard operation is imperative. Only with a complete navigation capability can the robot move autonomously to every corner of the orchard and thus complete each operation link repeatedly and reliably.
In an orchard, the robot works autonomously in the passage between two rows of fruit trees: it operates on each tree along the passage, and when one row is finished it turns into the next passage and continues until the whole orchard is covered. During operation the robot is generally under the canopy; if it relies solely on GPS, the satellite signals are shielded by the crowns and degraded by multipath effects and radio-frequency interference, so the position error becomes large and positioning may fail entirely. Orchard rows are mostly only about 2 m apart, so GPS can hardly deliver accurate inter-row positioning and navigation; electromagnetic navigation can position and navigate well, but its cost is higher.
Chinese patent application CN102368158A proposes a navigation and positioning method for orchard machinery. The method first fuses laser scanning information with vision sensor information to construct the three-dimensional environment in front of the orchard machine and extract effective navigation information. Then, when the machine is about to reach the end of a fruit tree row, a Kalman filter fuses the measurements of the machine's internal sensors to obtain navigation information. Finally, when the machine reaches the headland, an optimal control method plans the turning radius, a GPS sensor measures the machine's position, and the theoretical path coordinates are compared with the actual position coordinates to obtain the navigation and positioning information for the machine.
Chinese patent application CN205843680U proposes a visual navigation method for an orchard robot. A CCD camera first acquires an image of the orchard path environment; a navigation datum line is then extracted from the image, its parameters are converted into heading angle information in the actual environment, and this information is transmitted to a single-chip microcomputer control module. Along the sides of the orchard rows, two-dimensional code labels are installed every 5 m to locate the robot's row and indicate the driving direction. The label information is sent to the control module, where a PID controller adjusts the duty cycle of the PWM waves driving the motor, so that the robot walks autonomously in the orchard environment.
Neither of the above two schemes adequately studies the practical problem of how the robot, when turning between rows, leaves the current working row and then accurately enters the next one. CN102368158A plans a turning path at the headland and then obtains the robot position by GPS measurement, but in a densely planted orchard the GPS signal may be lost and the positioning error may be large, making the robot's turns very unstable; the subsequent positioning cost of GPS is also high. CN205843680U uses two-dimensional codes to prompt steering information, but it does not specifically address how the robot accurately enters the next row; since turning between rows is a key step of robot operation, that scheme cannot fully realize autonomous driving of the robot in the orchard environment.
Disclosure of Invention
The invention provides a visual navigation and auxiliary sign guidance method, aimed at the problems that GPS cannot position accurately and inter-row turns cannot be performed precisely when a robot drives in a densely planted orchard. The invention enables the robot to complete its operation autonomously in the semi-structured orchard environment (clearly distributed fruit tree rows).
Specifically, the invention provides a vision-based agriculture and forestry park robot navigation method, which comprises the following steps: S1, arranging an auxiliary sign at the entrance/exit of each orchard row in the agriculture and forestry park; S2, when the robot moves in front of the auxiliary sign, adjusting its distance and angle relative to the sign and then completing the turn.
The visual navigation method adopted by the invention offers a large detection range, and the orchard environment has good semi-structured characteristics; with a suitable algorithm and external assistance, the pose of the robot relative to the orchard row can be calculated and a steering control signal generated in time. The visual method also has a low economic cost and meets the needs of agricultural development.
The beneficial effects of the invention include:
1. For the situation in which a robot inside an orchard cannot rely on GPS positioning, the method uses orchard row detection and positioning guide signs to let the robot travel accurately, turn, and find the next row without GPS, meeting the requirement of continuous work in the orchard. In the orchard environment, GPS positioning suffers frequent signal loss, the fixes are inaccurate, and the subsequent operating cost is high. The invention achieves correct navigation without GPS, thereby reducing the economic investment.
2. Whereas general navigation must rely on a map, the invention lets the robot drive accurately in the inter-row space without map guidance, simplifying the navigation algorithm in the orchard scene and improving the robot's navigation efficiency. In the vehicle's automatic driving, complex operations such as autonomous path planning and autonomous turning are avoided.
3. The navigation strategy of the invention removes the constraint that a map must exist for navigation: the robot can still drive reliably within a row without map guidance and perform the row-exit and row-entry operations accurately.
4. The invention uses a binocular camera, sign guidance and a precise turning strategy to realize accurate row changing by the robot, reducing hardware complexity and lowering the cost investment while improving the robot's working efficiency.
Drawings
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
Fig. 2 is a perspective view of the auxiliary sign of the present invention.
Fig. 3 is a front view of the auxiliary sign of the present invention.
Fig. 4 is a left side view of the auxiliary sign of the present invention.
Fig. 5 is a top view of the auxiliary sign of the present invention.
FIG. 6 is a flow chart of another embodiment of the method of the present invention.
Detailed Description
Embodiments of the present invention will now be described with reference to the drawings, wherein like parts are designated by like reference numerals. The embodiments described below, and their technical features, may be combined with each other provided there is no conflict. The invention is described below with reference to an orchard, but it applies equally to forest areas.
In one embodiment of the method of the present invention, as shown in FIG. 1, the method comprises: S1, arranging an auxiliary sign at the entrance/exit of each orchard row in the agriculture and forestry park; S2, when the robot moves in front of the auxiliary sign, adjusting its distance and angle relative to the sign and then completing the turn.
In one embodiment, the appearance of the auxiliary sign is specially designed, as shown in figs. 2-5. The middle of the sign carries a pattern with a salient structure so that the center of the sign is easy to determine; a square central figure is adopted. A sphere with distinct edge features is mounted directly in front of the center of the sign as a positioning aid, without occluding the boundary of the central figure. The advantage of this design is that the sphere and the central figure together serve as the robot's travel reference: the center of the sphere and the center point of the central figure determine a straight line, and the robot's heading is correct only when the robot is collinear with these two points.
Step S2 comprises: S2-1, scanning the auxiliary sign and calculating the corner points in the sign image, in particular the corner points of the central figure, where the center of the sign carries the central figure and the positioning aid is mounted directly in front of it.
Many corner detection algorithms are available, including the Moravec, Harris, SUSAN and FAST detectors. The detected corner points of the central figure are then used to determine its center position. Suppose N central-figure corner points are obtained in total, denoted (x_c1, y_c1), (x_c2, y_c2), (x_c3, y_c3), ..., (x_cN, y_cN).
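As a hedged illustration (not the patent's own implementation), the corner detection of S2-1 might be sketched in Python with OpenCV's Harris-based detector, one of the algorithms named above; all parameter values are assumptions to be tuned for the actual sign:

import cv2

def detect_center_corners(gray_sign_image):
    # Detect up to 20 strong corners with the Harris variant of
    # goodFeaturesToTrack; returns an (N, 2) array of (x, y) points.
    corners = cv2.goodFeaturesToTrack(
        gray_sign_image, maxCorners=20, qualityLevel=0.01,
        minDistance=10, useHarrisDetector=True, k=0.04)
    return corners.reshape(-1, 2) if corners is not None else []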
S2-2, calculating the center point of the central figure.
Let (x_sc, y_sc) be the center point of the central figure; it is the mean of the corner coordinates, as shown in formula (1):

x_sc = (x_c1 + x_c2 + ... + x_cN) / N, y_sc = (y_c1 + y_c2 + ... + y_cN) / N (1)
S2-3, determining the area of the central figure.
After all the corner points are calculated, connecting them in order forms a unique closed figure, namely the central figure region. Taking a rectangular region as an example, the rectangle has 4 corner points arranged clockwise, (x_c1, y_c1), (x_c2, y_c2), (x_c3, y_c3) and (x_c4, y_c4); the area of the central figure is then calculated by formula (2):

s = |x_c1 - x_c2| * |y_c1 - y_c4| (2)
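A minimal sketch of S2-2 and S2-3 under formulas (1) and (2); the clockwise corner ordering is taken from the text above, and the function names are illustrative:

import numpy as np

def center_point(corners):
    # Formula (1): the center point is the mean of the N corner coordinates.
    corners = np.asarray(corners, dtype=float)  # shape (N, 2)
    return corners.mean(axis=0)                 # (x_sc, y_sc)

def rect_area(corners):
    # Formula (2): area of the rectangle from its 4 clockwise corners.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = corners
    return abs(x1 - x2) * abs(y1 - y4)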
S2-4, calculating the center coordinates of the positioning aid (the sphere).
Taking the positioning aid to be a sphere, let (x_cc, y_cc) be the coordinates of its center in the image. The sphere has distinct edge features and projects to a circle in the image, so its center can be obtained by fitting a circle; available methods include circle detection based on the Hough transform and detection based on curve fitting, either of which yields (x_cc, y_cc).
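One possible realization of S2-4, assuming OpenCV's Hough-transform circle detector (one of the two fitting approaches mentioned); the search parameters are assumptions to be calibrated for the actual camera and sphere size:

import cv2

def locate_sphere_center(gray_image):
    # Fit a circle to the sphere's projection and return (x_cc, y_cc).
    circles = cv2.HoughCircles(
        gray_image, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100, param2=30, minRadius=5, maxRadius=100)
    if circles is None:
        return None
    x, y, _radius = circles[0][0]  # strongest detection first
    return (float(x), float(y))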
S2-5, adjusting the deviation of the robot to within a set threshold range according to the difference between the coordinates of the positioning aid and those of the central figure.
If the robot directly faces the auxiliary sign, the sphere-center coordinates and the central-figure coordinates differ only slightly. If the robot detects the sphere center to the left of the central-figure center, the robot is to the right of the sign and must adjust leftwards; otherwise it must adjust rightwards, until the distance between the two coordinates falls within the set threshold range, at which point the robot is judged to face the sign. Let Δd be the distance between the 2 coordinates and Δx their deviation in the x direction; the relationship between the 2 coordinates is calculated by formula (3):

Δd = sqrt((x_sc - x_cc)^2 + (y_sc - y_cc)^2), Δx = x_cc - x_sc (3)

Let D_0 be the critical distance threshold between the 2 coordinates; the left/right deviation of the robot is judged by formula (4):

if Δd > D_0 and Δx < 0, the robot is to the right of the sign and adjusts leftwards;
if Δd > D_0 and Δx > 0, the robot is to the left of the sign and adjusts rightwards;
if Δd <= D_0, the robot is judged to face the sign. (4)
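A hedged sketch of the S2-5 decision under formulas (3)-(4) as reconstructed above; the threshold value and the sign convention for Δx are assumptions:

import math

D0 = 5.0  # assumed critical distance threshold, in pixels

def heading_correction(x_sc, y_sc, x_cc, y_cc, d0=D0):
    # Return 'left', 'right' or 'aligned' per formulas (3) and (4).
    dd = math.hypot(x_sc - x_cc, y_sc - y_cc)  # formula (3): distance
    dx = x_cc - x_sc                           # formula (3): x deviation
    if dd <= d0:
        return 'aligned'  # the robot faces the sign
    # Sphere left of the central figure -> robot right of sign -> steer left.
    return 'left' if dx < 0 else 'right'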
S2-6, determining and adjusting the distance between the robot and the auxiliary sign.
When the robot directly faces the auxiliary sign, the distance between them must be judged in preparation for the turn into the next row. The distance can be measured with a laser radar, a binocular camera, or an area-contrast method. This embodiment adopts the area-contrast method: the distance is determined from the share of the whole sign image occupied by the central figure, the central-figure area being inversely proportional to the distance between robot and sign. From this property, the correspondence between central-figure area and distance can be measured and calibrated at different distances. When the robot has advanced to within a distance threshold of the sign, it stops. Let s_all be the area of the image, k the ratio of the central-figure area to the image area, and k_ref the critical threshold of that ratio; the calculation is shown in formula (5):

k = s / s_all, and the robot stops advancing when k >= k_ref (5)
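A minimal sketch of the S2-6 stop test from formula (5); the k_ref value below is an assumption that would come from the per-distance calibration the text describes:

K_REF = 0.02  # assumed calibrated threshold of the area ratio

def should_stop(center_area, image_area, k_ref=K_REF):
    # Formula (5): stop advancing once the area ratio reaches k_ref.
    k = center_area / image_area
    return k >= k_ref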
After the robot stops moving, it rotates by about 90 degrees towards the side of the next orchard row to be worked.
Through step S2, the robot completes the operation of driving out of the orchard row. After exiting, it needs to find a new orchard row and continue working.
In the method of the invention, an auxiliary sign also guides the robot to the entrance of a new row. An identical sign is placed at the side of the entrance row farthest from the robot. In the same manner as in step S2, the robot is guided to face the sign, and the distance between robot and sign is then adjusted so that the robot moves to exactly the center of the new row's entrance. The robot then turns 90 degrees towards the orchard row, aligning itself with the entrance of the new row for the row change. Having found the entrance, the robot drives straight into the new orchard row and begins working along it.
Fig. 6 shows a flow chart of another embodiment of the method of the present invention, which comprises the following steps. T1, acquiring images of the orchard where the robot is located, and obtaining the robot's drivable path through a classification-and-fitting algorithm applied to the images; the path computation may be performed externally and then stored in the robot, or performed on board. T2, the robot calculates the center line position of the orchard row from the drivable path and drives along the center line. T3, when the robot reaches the end of the orchard row (which may be an exit or an entrance), it drives out of the current row under the guidance of the auxiliary sign arranged there. T4, the robot finds the entrance position of the adjacent orchard row by means of the auxiliary sign (the entrance of any other row can likewise be found from the sign arranged at that row's entrance). T5, the robot enters the new orchard row and continues driving.
In one embodiment, step T1 includes:
T1-1, acquiring images inside the orchard row through a camera installed on the robot, and performing feature-space clustering on the acquired images. For example, the mean-shift algorithm can be used: 1. compute the feature space of the image with a nonparametric kernel density function; 2. cluster each pixel point through the modal density function.
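As a hedged stand-in for the kernel-density clustering just described (not necessarily the patent's exact procedure), OpenCV's mean-shift segmentation can play this role; the spatial and color radii are assumed values:

import cv2

def cluster_orchard_image(bgr_image):
    # Mean-shift filtering: each pixel converges to a modal color,
    # so neighboring pixels form the clusters used in T1-2.
    return cv2.pyrMeanShiftFiltering(bgr_image, sp=21, sr=30)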
And T1-2, classifying the clustering result by using a graph partitioning algorithm. The method comprises the following steps: 1. defining a region to be partitioned; 2. the image is divided into a plurality of large areas through a similarity calculation function, and the large areas can be uniformly divided into a road area and a non-road area.
And T1-3, performing filtering optimization extraction on the road area. The method comprises the following steps: 1. converting the images of the divided road area and non-road area into gray level images; 2. the image is binarized according to the division threshold, and for example, the gradation value of the road area may be set to 0 and the gradation value of the non-road area may be set to 255.
And T1-4, extracting the boundary line of the road area. The Hough transform algorithm is preferably used, and the Hough transform has the advantage that a more accurate result can be calculated even if boundary points in the image are not on the same straight line.
Two boundary lines enclosing the road region are thus acquired through step T1.
In one embodiment, step T2 includes:
T2-1, after acquiring the two boundary lines of the road region, calculating the distances between the robot and the two boundary lines.
The implementation process in this embodiment is as follows: calculate the distances d1 and d2 from any point on the center column of the image to the two boundary lines. Let the equation of the left boundary line be A_1x + B_1y + C_1 = 0 and that of the right boundary line be A_2x + B_2y + C_2 = 0, and let the selected point be (x_0, y_0); the distances are given by formula (6):

d_i = |A_i*x_0 + B_i*y_0 + C_i| / sqrt(A_i^2 + B_i^2), i = 1, 2 (6)
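Formula (6) in code form, a small sketch with illustrative names:

import math

def line_distance(a, b, c, x0, y0):
    # Formula (6): distance from (x0, y0) to the line a*x + b*y + c = 0.
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

# d1 = line_distance(A1, B1, C1, x0, y0)  # distance to the left boundary
# d2 = line_distance(A2, B2, C2, x0, y0)  # distance to the right boundary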
t2-2, controlling the vehicle to run along the center line of the road.
The vehicle is controlled to run along the center of the road on the principle that it must steer towards the boundary line with the larger distance value. The steering judgment is given by formula (7):

if d1 > d2, steer towards the left boundary; if d1 < d2, steer towards the right boundary; if d1 = d2, keep straight (7)
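A hedged reading of the T2-2 rule in formula (7); the dead-band tolerance is an added assumption so the controller does not oscillate around d1 = d2:

def steering(d1, d2, tol=2.0):
    # Formula (7): steer towards the boundary with the larger clearance.
    if abs(d1 - d2) <= tol:
        return 'straight'
    return 'left' if d1 > d2 else 'right'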
through step T2, the robot can autonomously travel between orchard rows until it reaches the exit of an orchard row.
The robot drives along the center of the current row between the orchard rows; when it reaches the end of the row, it must drive out of the current row and enter the next one to continue working. For this purpose, an auxiliary sign is set at the center of, and just beyond, the two boundary trees to guide the robot out, i.e. to lead it a certain distance in a specific direction so that space is conveniently reserved for the robot to turn around and enter the next row.
The above-described embodiments are merely preferred embodiments of the present invention, and general changes and substitutions by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention.

Claims (7)

1. A vision-based agriculture and forestry park robot navigation method is characterized by comprising the following steps:
s1, arranging an auxiliary sign at the entrance/exit of an orchard row in an agriculture and forestry park, wherein the center of the auxiliary sign carries a central figure, and a positioning aid is arranged in front of the auxiliary sign;
s2, when the robot moves in front of the auxiliary sign, adjusting the distance and angle from the auxiliary sign by the following method, and then completing the turn:
s2-1, scanning the auxiliary sign, and calculating the corner points in the image of the auxiliary sign and the corner points of the central figure;
s2-2, calculating the center point of the central figure;
s2-3, determining the area of the central figure;
s2-4, calculating the center coordinates of the positioning aid;
s2-5, adjusting the deviation of the robot to within a set threshold range according to the difference between the coordinates of the positioning aid and those of the central figure;
s2-6, determining the distance between the robot and the auxiliary sign from the share of the whole image occupied by the central figure.
2. The navigation method according to claim 1, wherein the auxiliary sign is rectangular, the central figure is square, and the positioning aid is a sphere.
3. The navigation method of claim 1, further comprising:
t1, acquiring images of the orchard where the robot is located, and obtaining a drivable path of the robot through a classification-and-fitting algorithm applied to the images;
t2, the robot calculating the center line position of the orchard row according to the drivable path, and driving along the center line;
t3, when the robot travels to the entrance/exit of the orchard row, turning according to S1-S2;
t4, the robot finding the entrance position of the adjacent orchard row through the auxiliary sign;
t5, the robot entering a new orchard row and continuing to drive.
4. The navigation method according to claim 3, wherein step T1 further comprises:
in T1-1, images in the orchard row are acquired by a camera mounted on the robot, and feature space clustering processing is performed on the acquired images in the orchard row;
t1-2, classifying the clustering result by using a graph partitioning algorithm;
t1-3, performing filtering optimization extraction on the road area;
and T1-4, extracting the boundary line of the road area.
5. The navigation method according to claim 4, wherein step T2 further comprises:
t2-1, after acquiring two boundary lines of the road area, calculating the distance between the robot and the two boundary lines;
t2-2, controlling the vehicle to run along the center line of the road according to the distance.
6. The navigation method of claim 5,
at T2-1, the distance between any point on the center line of the robot-captured image and the two boundary lines is calculated.
7. The navigation method of claim 4,
in T1-1, clustering is performed by using mean-shift algorithm, which comprises: 1) calculating a feature space of the image by using a nonparametric kernel density function; 2) clustering each pixel point through a modal density function;
t1-2 includes: 1) defining a region to be partitioned; 2) dividing the image into a plurality of large areas by a similarity calculation function, and dividing the large areas into a road area and a non-road area;
t1-3 includes: 1) converting the images of the divided road area and non-road area into gray level images; 2) and carrying out binarization on the image according to a segmentation threshold value.
CN201910084147.6A 2019-01-29 2019-01-29 Agriculture and forestry park robot navigation method based on vision Active CN109753075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910084147.6A CN109753075B (en) 2019-01-29 2019-01-29 Agriculture and forestry park robot navigation method based on vision


Publications (2)

Publication Number Publication Date
CN109753075A CN109753075A (en) 2019-05-14
CN109753075B true CN109753075B (en) 2022-02-08

Family

ID=66406470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910084147.6A Active CN109753075B (en) 2019-01-29 2019-01-29 Agriculture and forestry park robot navigation method based on vision

Country Status (1)

Country Link
CN (1) CN109753075B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262497A (en) * 2019-06-27 2019-09-20 北京物资学院 A kind of semi-structure environment robot navigation control method and device
CN110516563A (en) * 2019-08-06 2019-11-29 西安电子科技大学 Agriculture transplanter intelligence method for path navigation based on DSP
CN113807118B (en) * 2020-05-29 2024-03-08 苏州科瓴精密机械科技有限公司 Robot edge working method, system, robot and readable storage medium
CN116048104B (en) * 2023-04-03 2023-06-30 华南农业大学 Orchard operation robot path planning method, system and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106370185A (en) * 2016-08-31 2017-02-01 北京翰宁智能科技有限责任公司 Mobile robot positioning method and system based on ground datum identifiers
CN106681321A (en) * 2016-12-16 2017-05-17 盐城工学院 RFID-based online scheduling control system of automatic guided vehicle
CN107766859A (en) * 2017-10-31 2018-03-06 广东美的智能机器人有限公司 Method for positioning mobile robot, device and mobile robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101451849B (en) * 2008-12-26 2011-06-15 天津理工大学 Multifunction marking for vision navigation of mobile object and synthesis navigation method
CN101452292B * 2008-12-29 2011-05-04 天津理工大学 Fisheye lens omnidirectional vision aiming method based on sequence dual-color dot matrix type navigation mark
CN101660908B (en) * 2009-09-11 2011-02-16 天津理工大学 Visual locating and navigating method based on single signpost
US9625912B2 (en) * 2014-09-03 2017-04-18 Sharp Laboratories Of America, Inc. Methods and systems for mobile-agent navigation
CN104181926B (en) * 2014-09-17 2017-06-13 上海畔慧信息技术有限公司 The navigation control method of robot
CN105116885B * 2015-07-16 2017-12-05 江苏大学 A kind of automatic bait putting operation ship vision navigation method based on manual identification
CN106017477B (en) * 2016-07-07 2023-06-23 西北农林科技大学 Visual navigation system of orchard robot

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106370185A (en) * 2016-08-31 2017-02-01 北京翰宁智能科技有限责任公司 Mobile robot positioning method and system based on ground datum identifiers
CN106681321A (en) * 2016-12-16 2017-05-17 盐城工学院 RFID-based online scheduling control system of automatic guided vehicle
CN107766859A (en) * 2017-10-31 2018-03-06 广东美的智能机器人有限公司 Method for positioning mobile robot, device and mobile robot

Also Published As

Publication number Publication date
CN109753075A (en) 2019-05-14

Similar Documents

Publication Publication Date Title
CN109753075B (en) Agriculture and forestry park robot navigation method based on vision
CN110243372B (en) Intelligent agricultural machinery navigation system and method based on machine vision
Masuzawa et al. Development of a mobile robot for harvest support in greenhouse horticulture—Person following and mapping
CN112965481A (en) Orchard operation robot unmanned driving method based on point cloud map
CN112363503B (en) Orchard vehicle automatic navigation control system based on laser radar
CN109000649A (en) A kind of all directionally movable robot pose calibration method based on right angle bend feature
Zhang et al. 3D perception for accurate row following: Methodology and results
CN113778081B (en) Orchard path identification method and robot based on laser radar and vision
Nehme et al. Lidar-based structure tracking for agricultural robots: Application to autonomous navigation in vineyards
Li et al. 3D autonomous navigation line extraction for field roads based on binocular vision
Peng et al. Depth camera based row-end detection and headland manuvering in orchard navigation without GNSS
CN116839570B (en) Crop interline operation navigation method based on sensor fusion target detection
Morimoto et al. Vision-based navigation system for autonomous transportation vehicle
CN116576859A (en) Path navigation method, operation control method and related device
CN112416000A (en) Unmanned formula car environment sensing and navigation method and steering control method
Lan et al. Development of a robot with 3D perception for accurate row following in vineyard
Peng et al. A combined visual navigation method for greenhouse spray robot
Kurita et al. Localization method using camera and LiDAR and its application to autonomous mowing in orchards
Hanawa et al. Development of a stereo vision system to assist the operation of agricultural tractors
Li et al. Autonomous navigation for orchard mobile robots: A rough review
CN207908980U (en) A kind of closing orchard crawler tractor integrated navigation system
Villemazet et al. Multi-Camera GPS-Free Nonlinear Model Predictive Control Strategy to Traverse Orchards
Kunghun et al. A Rubber Tree Orchard Mapping Method via Image Processing
Inoue et al. Autonomous Navigation and Obstacle Avoidance in an Orchard Using Machine Vision Techniques for a Robotic Mower
Yu et al. A Camera/Ultrasonic Sensors Based Trunk Localization System of Semi-Structured Orchards

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant