CN115981334A - Vision-based vineyard fruit tree inter-row navigation line extraction method - Google Patents

Vision-based vineyard fruit tree inter-row navigation line extraction method

Info

Publication number
CN115981334A
Authority
CN
China
Prior art keywords
fruit tree
fruit
line
row
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310033198.2A
Other languages
Chinese (zh)
Inventor
Geng Changxing
Shen Renyuan
Gu Haiyang
Liu Wanfu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN202310033198.2A priority Critical patent/CN115981334A/en
Publication of CN115981334A publication Critical patent/CN115981334A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a vision-based method for extracting navigation lines between fruit tree rows in a vineyard, which comprises the following steps: acquiring a color image of the current fruit tree row with an RGB camera; obtaining bounding boxes of the fruit trees and support frames on the two sides of the row using the deep-learning-based YOLOP algorithm, and obtaining a semantic-segmentation mask map of the drivable region; and extracting feature points on the two sides of the row by image morphological processing, fitting a straight line to the feature points of each side with the random sample consensus (RANSAC) algorithm, and taking the angular bisector of the two lines as the navigation line. With this extraction method, a fruit transport robot can be deployed in a vineyard scene: by computing the deflection angle between its current driving direction and the navigation line, the robot adjusts its heading in real time, drives between the fruit tree rows without collision, and completes transport, inspection, and similar tasks between the rows.

Description

Vision-based vineyard fruit tree inter-row navigation line extraction method
Technical Field
The invention relates to the technical field of fruit transport robot guidance, in particular to a vision-based method for extracting navigation lines between fruit tree rows in a vineyard.
Background
When an autonomous mobile robot travels between crop rows, sensors such as cameras, lasers, and radar must acquire information about the crops or the road boundaries on the two sides, from which a navigation line is generated to guide the robot between the rows.
Extracting a navigation line between crop rows involves three main steps: first, extracting the feature points of the crops or boundaries on the two sides; second, fitting straight lines to the feature points of each side; and third, deriving the central navigation line from the two fitted lines (generally their angular bisector). In the first step, the feature points must be obtained by machine-vision means, and feature points from non-target rows that could disturb the fit must be filtered out. In the second step, the feature points of each side are fitted with algorithms such as least squares, the Hough transform, or random sample consensus (RANSAC) to obtain the fitted lines of the crops or boundaries on the two sides. In the third step, the central navigation line is generated from the two side lines extracted in the second step, generally by taking their angular bisector.
The paper "Navigation line extraction method based on median-point Hough transform corn row detection" proposes a navigation-line extraction algorithm based on crop row detection. First, the traditional 2G-R-B index is improved and combined with median filtering, the maximum between-class variance method, and morphological operations to segment the soil background from the corn seedling bands; second, feature points of the seedling bands are extracted by averaging, and the seedling rows on the two sides of the ridge are fitted with a median-point Hough transform; finally, taking the detected seedling rows as navigation reference lines, the navigation line is extracted with the tangent formula for the included angle.
However, in the vineyard scene shown in fig. 1, one side of the row is fruit tree branches and the other side is support rods. Because the branches vary widely in shape, the illumination changes strongly, and the foreground and background colors are similar, traditional machine-vision schemes based on threshold segmentation, gray-level change, or edge detection cannot reliably extract the feature points of the crops or boundaries on the two sides, so the subsequent navigation-line fitting cannot be completed. In addition, in a vineyard the ground at the base of the fruit trees is usually kept free of weeds by weeding operations, while the base of the support rods is usually covered by weeds, so different feature-point extraction strategies must be devised for the two sides. Existing research does not address navigation-line extraction in vineyard scenes, and existing extraction methods developed for other crops are not suitable for vineyards.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a vision-based method for extracting navigation lines between fruit tree rows in a vineyard, so that a fruit transport robot can autonomously acquire the navigation line between the rows and perform transport tasks.
To achieve this technical purpose and effect, the invention is realized by the following technical scheme:
A vision-based method for extracting navigation lines between fruit tree rows in a vineyard comprises the following steps:
acquiring a color image of the current fruit tree row with an RGB camera;
obtaining bounding boxes of the fruit trees and support frames on the two sides of the row using the deep-learning-based YOLOP algorithm, and obtaining a semantic-segmentation mask map of the drivable region;
and extracting feature points on the two sides of the row by image morphological processing, fitting a straight line to the feature points of each side with the random sample consensus (RANSAC) algorithm, and taking the angular bisector of the two lines as the navigation line.
Further, the specific process of the method is as follows:
step one), acquiring a frame of RGB image;
step two), obtaining the bounding boxes on the fruit tree side and the support frame side of the fruit tree row through the deep-learning-based YOLOP algorithm, and obtaining a mask map of the drivable area of the fruit tree row;
step three), extracting the drivable area of the current fruit tree row and removing interference from the drivable areas of other rows;
step four), after the processing of step three), extracting a feature point set A on the fruit tree side from the left side of the drivable-area mask map of the current fruit tree row;
step five), after the processing of step three), extracting a feature point set B on the support frame side from the right side of the drivable-area mask map of the current fruit tree row;
step six), performing random sample consensus line fitting on feature point set A and feature point set B respectively to obtain analytic expressions of the left and right boundary lines;
step seven), taking the angular bisector of the two boundary lines obtained in step six) as the navigation line, calculated as:
k = tan((arctan(k1) + arctan(k2)) / 2)
where k is the slope of the navigation line, k1 is the slope of the left boundary line, and k2 is the slope of the right boundary line;
step eight), calculating the deflection angle from the slope k of the current navigation line and sending it to the motion control module to control the motion of the fruit transport robot;
step nine), repeating the process from step one).
Further, in step three), dilation and erosion morphological operations are first performed on the drivable-area mask map to remove noise, and the connected region with the largest area is then selected as the drivable area of the current fruit tree row.
Further, in step four), a feature point set A on the fruit tree side is initialized; taking the vertical center line of the image as the dividing line, all bounding boxes on the left side of the image are traversed, and for each bounding box whose lower-right corner falls within the drivable area, that lower-right corner is added to set A as a feature point, forming the fruit-tree-side feature point set A.
Further, in step five), a feature point set B on the support frame side is initialized; feature points are sampled along the right edge of the drivable-area mask map at fixed pixel intervals in the y-axis direction and added to set B, forming the support-frame-side feature point set B.
The invention has the beneficial effects that:
1. Feature points on the two sides can be extracted stably and reliably between the fruit tree rows of a vineyard, and a navigation line can be generated.
2. With this extraction method, a fruit transport robot can be deployed in a vineyard scene: by computing the deflection angle between its current driving direction and the navigation line, the robot adjusts its heading in real time, drives between the fruit tree rows without collision, and completes transport, inspection, and similar tasks between the rows.
Drawings
FIG. 1 is a view of a vineyard scene;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is the drivable-area mask map of the fruit tree row output by the YOLOP algorithm;
FIG. 4 shows the drivable-area mask map of FIG. 3 after the dilation and erosion morphological operations and noise removal, where FIG. 4a is the mask map before drivable-area screening and FIG. 4b is the mask map after screening;
FIG. 5 is a schematic diagram of feature point extraction on the fruit tree side;
FIG. 6 is a schematic diagram of feature point extraction on the support frame side;
FIG. 7 is a schematic diagram of the left and right boundary lines of the drivable-area mask map of the current fruit tree row.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and the embodiments.
A vision-based method for extracting navigation lines between fruit tree rows in a vineyard comprises the following steps:
acquiring a color image of the current fruit tree row with an RGB camera;
obtaining bounding boxes of the fruit trees and support frames on the two sides of the row using the deep-learning-based YOLOP algorithm, which completes the object detection and semantic segmentation tasks simultaneously, and obtaining a semantic-segmentation mask map of the drivable region (a hedged inference sketch follows this overview);
and extracting feature points on the two sides of the row by image morphological processing, fitting a straight line to the feature points of each side with the random sample consensus (RANSAC) algorithm, and taking the angular bisector of the two lines as the navigation line.
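A minimal sketch of what this multitask inference step could look like, assuming the pretrained model and torch.hub entry point published in the hustvl/YOLOP repository (the loading call, the 640×640 input size, and the three-way output unpacking are assumptions drawn from that repository's documentation, not from the patent):

```python
# Hedged sketch: YOLOP-style multitask inference giving detections plus a
# drivable-area mask, as described in the step above.
import cv2
import numpy as np
import torch

# Assumption: torch.hub entry point as documented in the hustvl/YOLOP README.
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
model.eval()

def detect_row(image_bgr):
    """Return (raw detection output, binary drivable-area mask) for one frame."""
    img = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (640, 640))
    tensor = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        det_out, da_seg_out, _ = model(tensor)  # detection, drivable area, lane lines
    # Drivable-area head: 2-channel logits -> per-pixel argmax -> 0/255 mask.
    mask = (da_seg_out.squeeze(0).argmax(dim=0).byte() * 255).cpu().numpy()
    return det_out, mask
```

The bounding boxes of the fruit trees and support frames would come from post-processing `det_out` (confidence filtering and non-maximum suppression), omitted here for brevity.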
As shown in fig. 2, the specific flow of the method is as follows:
step one), acquiring a frame of RGB image;
step two), obtaining the bounding boxes on the fruit tree side and the support frame side of the fruit tree row through the deep-learning-based YOLOP algorithm, and obtaining a mask map of the drivable area of the fruit tree row, as shown in FIG. 3;
step three), extracting the drivable area of the current fruit tree row and removing interference from the drivable areas of other rows;
step four), after the processing of step three), extracting a feature point set A on the fruit tree side from the left side of the drivable-area mask map of the current fruit tree row;
step five), after the processing of step three), extracting a feature point set B on the support frame side from the right side of the drivable-area mask map of the current fruit tree row;
step six), performing random sample consensus line fitting on feature point set A and feature point set B respectively to obtain analytic expressions of the left and right boundary lines, as shown in FIG. 7 (a code sketch of steps six to eight follows this step list);
step seven), taking the angular bisector of the two boundary lines obtained in step six) as the navigation line, calculated as:
k = tan((arctan(k1) + arctan(k2)) / 2)
where k is the slope of the navigation line, k1 is the slope of the left boundary line, and k2 is the slope of the right boundary line;
step eight), calculating the deflection angle from the slope k of the current navigation line and sending it to the motion control module to control the motion of the fruit transport robot;
step nine), repeating the process from step one).
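A minimal sketch of steps six) to eight), referenced in the step list above. It is not the patent's implementation: the RANSAC iteration count, the inlier tolerance, and the assumption that the robot heading maps to the image's vertical axis are all illustrative.

```python
# Hedged sketch of steps six) to eight): RANSAC line fitting per side,
# angular-bisector navigation line, and deflection angle.
import math
import random
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=5.0):
    """Fit y = k*x + b to (x, y) feature points by random sample consensus."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 2:
        return 0.0, 0.0
    best_k, best_b, best_inliers = 0.0, 0.0, -1
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = pts[random.sample(range(len(pts)), 2)]
        if abs(x2 - x1) < 1e-6:
            continue  # skip vertical sample pairs in this parameterization
        k = (y2 - y1) / (x2 - x1)
        b = y1 - k * x1
        # Count points whose vertical distance to the candidate line is small.
        inliers = int(np.sum(np.abs(pts[:, 1] - (k * pts[:, 0] + b)) < inlier_tol))
        if inliers > best_inliers:
            best_k, best_b, best_inliers = k, b, inliers
    return best_k, best_b

def navigation_slope(k1, k2):
    """Step seven): angular bisector, k = tan((arctan(k1) + arctan(k2)) / 2)."""
    return math.tan((math.atan(k1) + math.atan(k2)) / 2.0)

def deflection_angle_deg(k):
    """Step eight): deflection between the robot heading and the navigation line,
    assuming the heading maps to the image's vertical axis (an assumption)."""
    return 90.0 - math.degrees(math.atan(k))
```

With the sets A and B from steps four) and five), `k1, _ = ransac_line(set_A)` and `k2, _ = ransac_line(set_B)` give the two boundary slopes that feed the bisector formula, and `deflection_angle_deg(navigation_slope(k1, k2))` is the value sent to the motion control module.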
In step three), dilation and erosion morphological operations are first performed on the drivable-area mask map to remove noise, and the connected region with the largest area is then screened out as the drivable area of the current fruit tree row; FIGS. 4a and 4b show the drivable-area mask map of the current fruit tree row before and after screening, respectively. A hedged sketch of this operation follows.
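Only the order of operations (dilation, then erosion, then largest-region screening) comes from the patent text; the kernel size below is an illustrative assumption.

```python
# Hedged sketch of step three): dilate, erode, then keep the connected
# region with the largest area as the current row's drivable area.
import cv2
import numpy as np

def current_row_area(mask):
    """mask: binary drivable-area mask (uint8, 0 or 255)."""
    kernel = np.ones((5, 5), np.uint8)    # assumed kernel size
    cleaned = cv2.dilate(mask, kernel)    # dilation first, as in the patent
    cleaned = cv2.erode(cleaned, kernel)  # then erosion (a morphological close)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    if n <= 1:
        return cleaned  # nothing but background remains
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip background row 0
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```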
In step four), a feature point set A on the fruit tree side is first initialized, as shown in fig. 5. For the fruit tree side, taking the vertical center line of the image as the dividing line, all bounding boxes on the left side of the image are traversed; for each bounding box whose lower-right corner falls within the drivable area, that lower-right corner is added to set A as a feature point, forming the fruit-tree-side feature point set A (sketched below).
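A minimal sketch of step four); the (x1, y1, x2, y2) pixel-corner box format is an assumption, not specified by the patent.

```python
# Hedged sketch of step four): fruit-tree-side feature points are the
# lower-right corners of left-half bounding boxes that land on drivable ground.
def fruit_tree_points(boxes, drivable):
    """boxes: iterable of (x1, y1, x2, y2); drivable: binary mask (255 = drivable)."""
    h, w = drivable.shape[:2]
    center_x = w // 2  # image center line as the dividing line
    set_A = []
    for x1, y1, x2, y2 in boxes:
        if x2 >= center_x:  # keep only boxes left of the center line
            continue
        xi, yi = min(int(x2), w - 1), min(int(y2), h - 1)
        if drivable[yi, xi] > 0:  # lower-right corner inside the drivable area
            set_A.append((x2, y2))
    return set_A
```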
In step five), a feature point set B on the support frame side is first initialized, as shown in fig. 6. Since the bottom of the support frames is often covered by weeds, the robot could stray into the grass and into danger, and its motion must be confined to the drivable area; feature points are therefore sampled along the right edge of the drivable-area mask map at fixed pixel intervals in the y-axis direction and added to set B, forming the support-frame-side feature point set B (sketched below).
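A corresponding sketch of step five); the 20-pixel sampling stride is an illustrative assumption.

```python
# Hedged sketch of step five): sample the rightmost drivable pixel at a
# fixed stride along the y axis to get support-frame-side feature points.
import numpy as np

def support_frame_points(drivable, stride=20):
    """drivable: binary mask (255 = drivable). Returns right-edge points (x, y)."""
    set_B = []
    for y in range(0, drivable.shape[0], stride):
        xs = np.flatnonzero(drivable[y] > 0)  # drivable columns in this scan row
        if xs.size:
            set_B.append((int(xs[-1]), y))    # rightmost drivable pixel
    return set_B
```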
Principle of the invention
The invention extracts the feature points on the left and right sides by combining deep learning (object detection and semantic segmentation) with traditional vision. A multitask deep learning network first outputs the bounding boxes of all fruit trees and support frames together with the mask map of the drivable ground area; the feature points on the two sides are then extracted accurately by traditional vision, with different extraction methods adopted for the different characteristics of the fruit tree side and the support frame side of the vineyard row; finally, a straight line is fitted to each side, and the angular bisector of the two lines is taken as the navigation line.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A vision-based method for extracting navigation lines between fruit tree rows in a vineyard, characterized by comprising the following steps:
acquiring a color image of the current fruit tree row with an RGB camera;
obtaining bounding boxes of the fruit trees and support frames on the two sides of the row using the deep-learning-based YOLOP algorithm, and obtaining a semantic-segmentation mask map of the drivable region;
and extracting feature points on the two sides of the row by image morphological processing, fitting a straight line to the feature points of each side with the random sample consensus (RANSAC) algorithm, and taking the angular bisector of the two lines as the navigation line.
2. The vision-based method for extracting navigation lines between fruit tree rows in a vineyard according to claim 1, characterized in that the specific process of the method is as follows:
step one), acquiring a frame of RGB image;
step two), obtaining the bounding boxes on the fruit tree side and the support frame side of the fruit tree row through the deep-learning-based YOLOP algorithm, and obtaining a mask map of the drivable area of the fruit tree row;
step three), extracting the drivable area of the current fruit tree row and removing interference from the drivable areas of other rows;
step four), after the processing of step three), extracting a feature point set A on the fruit tree side from the left side of the drivable-area mask map of the current fruit tree row;
step five), after the processing of step three), extracting a feature point set B on the support frame side from the right side of the drivable-area mask map of the current fruit tree row;
step six), performing random sample consensus line fitting on feature point set A and feature point set B respectively to obtain analytic expressions of the left and right boundary lines;
step seven), taking the angular bisector of the two boundary lines obtained in step six) as the navigation line, calculated as:
k = tan((arctan(k1) + arctan(k2)) / 2)
where k is the slope of the navigation line, k1 is the slope of the left boundary line, and k2 is the slope of the right boundary line;
step eight), calculating the deflection angle from the slope k of the current navigation line and sending it to the motion control module to control the motion of the fruit transport robot;
step nine), repeating the process from step one).
3. The vision-based method for extracting navigation lines between fruit tree rows in a vineyard according to claim 2, characterized in that in step three), dilation and erosion morphological operations are first performed on the drivable-area mask map to remove noise, and the connected region with the largest area is then selected as the drivable area of the current fruit tree row.
4. The vision-based method for extracting navigation lines between fruit tree rows in a vineyard according to claim 3, characterized in that in step four), a feature point set A on the fruit tree side is initialized; taking the vertical center line of the image as the dividing line, all bounding boxes on the left side of the image are traversed, and for each bounding box whose lower-right corner falls within the drivable area, that lower-right corner is added to set A as a feature point, forming the fruit-tree-side feature point set A.
5. The vision-based method for extracting navigation lines between fruit tree rows in a vineyard according to claim 4, characterized in that in step five), a feature point set B on the support frame side is initialized; feature points are sampled along the right edge of the drivable-area mask map at fixed pixel intervals in the y-axis direction and added to set B, forming the support-frame-side feature point set B.
CN202310033198.2A 2023-01-10 2023-01-10 Vision-based vineyard fruit tree inter-row navigation line extraction method Pending CN115981334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310033198.2A CN115981334A (en) Vision-based vineyard fruit tree inter-row navigation line extraction method

Publications (1)

Publication Number Publication Date
CN115981334A (en) 2023-04-18

Family

ID=85962927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310033198.2A Pending CN115981334A (en) Vision-based vineyard fruit tree inter-row navigation line extraction method

Country Status (1)

Country Link
CN (1) CN115981334A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination