CN117804430A - Positioning navigation system and method for greenhouse of leaf vegetable harvesting robot - Google Patents


Info

Publication number
CN117804430A
Authority
CN
China
Prior art keywords
image
robot
positioning
navigation
module
Prior art date
Legal status
Pending
Application number
CN202311855049.0A
Other languages
Chinese (zh)
Inventor
李传江
曹智军
庄天豪
戴绍军
张崇明
Current Assignee
Shanghai Normal University
Original Assignee
Shanghai Normal University
Priority date
Filing date
Publication date
Application filed by Shanghai Normal University filed Critical Shanghai Normal University
Priority to CN202311855049.0A priority Critical patent/CN117804430A/en
Publication of CN117804430A publication Critical patent/CN117804430A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/25Greenhouse technology, e.g. cooling systems therefor

Landscapes

  • Manipulator (AREA)

Abstract

The invention relates to a greenhouse positioning and navigation system and method for a leaf vegetable harvesting robot. Compared with the prior art, the invention has the advantages of high navigation precision, a small number of sensors, and strong real-time performance.

Description

Positioning navigation system and method for greenhouse of leaf vegetable harvesting robot
Technical Field
The invention relates to the field of agricultural automation equipment, in particular to a greenhouse positioning navigation system and method for a leaf vegetable harvesting robot.
Background
Agricultural production must address challenges such as population growth, labor shortages, and limited resources, and the advent of picking robots provides new opportunities to address these issues; such robots have therefore become critical for achieving automated and refined agricultural production.
One prior disclosure describes an agricultural robot positioning and navigation method and system based on visual detection, in which a vision-assisted positioning method applicable to fast-growing crops in a greenhouse environment is built from a three-dimensional lidar, an IMU, cameras, and other sensors mounted on the robot. However, that method depends heavily on how well the laser scan information matches the map; when the map contains large errors, positioning and navigation suffer accordingly.
Global Navigation Satellite Systems (GNSS) cannot provide accurate positioning information in indoor environments because their signals suffer shielding and multipath effects, and conventional visual navigation systems are affected by lighting, crop diversity, and similar factors, so they cannot provide accurate navigation routes. Robot positioning and navigation in indoor agricultural scenarios has therefore remained a challenging task.
Disclosure of Invention
The invention aims to overcome the defect of low navigation precision in the prior art and to provide a greenhouse positioning and navigation system and method for a leaf vegetable harvesting robot.
The aim of the invention can be achieved by the following technical scheme:
A greenhouse positioning and navigation system for a leaf vegetable harvesting robot comprises a robot, a plurality of ultra-wideband positioning base stations, a first positioning tag, a second positioning tag, a camera, and a processor. The robot is used for leaf vegetable harvesting; the ultra-wideband positioning base stations are distributed in the working area of the robot; the camera is mounted on top of the robot; the first and second positioning tags are both mounted on the robot, on the front and rear sides of the robot's direction of travel respectively; the processor is connected to the camera, the first positioning tag, and the second positioning tag, and adjusts the robot's direction of travel according to information from the camera and the two tags.
Further, the processor comprises a vision processing module, a positioning processing module and a controller, wherein the controller is respectively connected with the vision processing module and the positioning processing module, and the controller controls the advancing direction of the robot.
Further, the vision processing module comprises an image acquisition module, an image processing module and a navigation line calculating module, the image acquisition module receives the camera image, the image processing module processes the camera image to obtain a ridge edge position, the navigation line calculating module calculates the current travelling direction according to the ridge edge position, and the controller corrects the travelling route according to the current travelling direction.
Further, the positioning processing module comprises a data receiving module and a positioning calculating module, the data receiving module is connected with the first positioning tag and the second positioning tag, signals of all the ultra-wideband positioning base stations are received respectively, the positioning calculating module calculates positions of the first positioning tag and the second positioning tag according to the signals of the ultra-wideband positioning base stations, and a current orientation of the robot is obtained according to vectors pointing to the positions of the first positioning tag from the positions of the second positioning tag.
Further, the current orientation of the robot is adjusted to be parallel to the ridges before navigation begins.
According to a second aspect of the invention, a navigation method based on a positioning navigation system in a greenhouse of a leaf vegetable harvesting robot comprises the following steps:
s1: performing preliminary positioning of the robot;
s2: setting a main base station, establishing a global coordinate system, acquiring position information of ridges and robots by using a UWB positioning module, calculating and acquiring ridge vectors and robot body vectors, and adjusting the positions and directions of the robots by vector information;
s3: the camera is used for collecting the image information of the ridges, the processor is used for processing the image information of the ridges, and the advancing direction of the robot is corrected according to the processing result, so that the visual navigation of the robot among the ridges is realized.
Further, the process of processing the ridge image information by the processor comprises the following steps:
s31: extracting an interest region of the camera image to obtain an interest image;
s32: carrying out graying treatment for strengthening green components and reducing red components on the interest image, and carrying out normalization to obtain a gray image;
s33: performing binarization processing on the gray level image by a binarization processing method combining a fixed threshold method and an Otsu method, and performing color reversal on the processing result to obtain a binarized image;
s34: morphological processing is carried out on the binarized image, and a processed image is obtained;
s35: performing edge extraction on the processed image to obtain a ridge edge position in the image;
s36: and calculating the travelling route of the robot according to the ridge edge position.
Further, the calculation expression of the Otsu method is:
g = k0(μ0 − μ)² + k1(μ1 − μ)² = k0·k1·(μ0 − μ1)²
wherein k0 and k1 are the proportions of foreground and background pixels in the whole image, μ0 and μ1 are their respective average gray levels, μ is the average gray level of the whole image, g is the variance between the foreground and the background, N0 is the number of pixels with gray values smaller than T (gray levels 0 to l−1), N1 is the number of pixels with gray values larger than T (gray levels l to n−1), and p_i is the probability that a pixel with gray value i occurs.
Further, the morphological treatment comprises the following steps: and performing opening operation and closing operation on the binarized image, performing smoothing treatment on the image by using median filtering, and finally performing convex hull transformation operation to obtain the treated image.
Further, the region of interest is: the width is the same as the width of the original image, the length is 50% of the length of the original image, and the area is positioned in the center of the original image.
Compared with the prior art, the invention has the following beneficial effects:
1) The invention combines ultra-wideband (UWB) positioning with machine vision. UWB positioning is accurate and interference-resistant and can provide high-precision position information, while machine vision can distinguish the positions of vegetables and ridges and achieve precise path navigation, improving the positioning and navigation precision of the leaf vegetable harvesting robot in the greenhouse.
2) The invention optimizes the position information output by the UWB positioning module with an improved Kalman filter, improving the stability and accuracy of UWB positioning so that the acquired positions track the real motion state more closely and positioning performance improves. UWB positioning also yields vector information for the ridges and the robot, from which the robot's orientation can be judged, avoiding the installation of inertial navigation units, accelerometers, and similar equipment.
3) The invention uses the camera module to capture images of the target crop in real time, processes them by setting an ROI, graying, binary segmentation, morphological filtering, and edge detection, and finally obtains feature points through the Hough transform to plan the navigation line, so that the robot can adjust its position from real-time visual data.
4) The invention realizes the intelligent application of ultra-wideband positioning and visual guiding technology, can greatly reduce the labor intensity of workers and improves the agricultural production efficiency.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the invention.
FIG. 2 is a two-dimensional planar distribution diagram of a system according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a robot positioning process based on UWB according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a visual guidance procedure according to an embodiment of the invention.
Fig. 5 is a flow chart of inter-ridge image processing according to an embodiment of the present invention.
The figure indicates: 1. first ultra-wideband positioning base station; 2. second ultra-wideband positioning base station; 3. third ultra-wideband positioning base station; 4. robot; 5. first positioning tag; 6. second positioning tag.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
The invention relates to a greenhouse positioning navigation system and a greenhouse positioning navigation method for a leaf vegetable harvesting robot.
The method part of the invention comprises the following steps:
s1: performing preliminary positioning of the robot;
s2: setting a main base station, establishing a global coordinate system, acquiring position information of ridges and robots by using a UWB positioning module, calculating and acquiring ridge vectors and robot body vectors, and adjusting the positions and directions of the robots by vector information;
s3: the camera is used for collecting the image information of the ridges, the processor is used for processing the image information of the ridges, and the advancing direction of the robot is corrected according to the processing result, so that the visual navigation of the robot among the ridges is realized.
Example 1
The invention relates to a positioning navigation system and a positioning navigation method for a greenhouse of a leaf vegetable harvesting robot,
as shown in fig. 1, the method of the present invention comprises the following steps:
s1: and in the early hardware deployment stage, a fixed base station is installed in the greenhouse, the ridge position is determined, and a positioning tag and a camera module are installed at the top of the robot.
S2: in the middle UWB positioning stage, a global coordinate system of a main base station is established, position information of ridges and robots is obtained by using a UWB positioning module, ridge vectors and robot body vectors are calculated and obtained, and the positions and directions of the robots are adjusted through the vector information.
S3: and in the later visual guidance stage, image information is acquired in real time through a camera at the top of the robot, and the advancing direction of the robot is continuously corrected by utilizing a visual processing technology, so that the visual navigation of the robot among ridges is realized.
As shown in fig. 2, the system part of the present invention includes a robot 4, a first ultra-wideband positioning base station 1, a second ultra-wideband positioning base station 2, a third ultra-wideband positioning base station 3, a first positioning tag 5, a second positioning tag 6, a camera, and a processor. The robot is used for harvesting leaf vegetables, and the three ultra-wideband positioning base stations are distributed in its working area. The camera is mounted on top of the robot; the first positioning tag 5 is mounted on the robot 4 facing the direction of travel, and the second positioning tag 6 is mounted facing the opposite direction. The processor is connected to the camera, the first positioning tag 5, and the second positioning tag 6, and adjusts the robot's direction of travel according to their information.
The processor comprises a vision processing module, a positioning processing module and a controller, wherein the controller is respectively connected with the vision processing module and the positioning processing module, and controls the advancing direction of the robot.
The vision processing module comprises an image acquisition module, an image processing module and a navigation line calculating module, wherein the image acquisition module receives a camera image, the image processing module processes the camera image to obtain a ridge edge position, the navigation line calculating module calculates the current travelling direction according to the ridge edge position, and the controller corrects the travelling route according to the current travelling direction.
The positioning processing module comprises a data receiving module and a positioning calculation module, the data receiving module is connected with the first positioning label and the second positioning label, signals of all ultra-wideband positioning base stations are respectively received, the positioning calculation module calculates positions of the first positioning label and the second positioning label according to the signals of the ultra-wideband positioning base stations, and current orientation of the robot is obtained according to vectors pointing to the positions of the first positioning label from the positions of the second positioning label.
In step S2, UWB positioning is used to complete the positioning of the robot and adjust its direction at the ridge position, ensuring that the robot is parallel to the ridge. Step S2 is implemented as shown in fig. 3: first, the system obtains positioning information directly from the UWB positioning module, including the position of the ridge and the positions of the two tags on the robot, and then proceeds to the next step.
After the positioning information is acquired, the position information from the UWB positioning module is optimized with an improved Kalman filter, reducing the error between measured and true values and yielding more accurate positions. Compared with the conventional Kalman filter, the improved filter sets a threshold h and computes the difference e_n between the observed and predicted values; by comparing |e_n| with h, it decides how the Kalman gain is applied when updating the state and error covariance. The equations are:
State equation: x_n = F_n·x_{n−1} + v_n
Observation equation: z_n = C_n·x_n + w_n
State prediction: x_{n|n−1} = F_n·x_{n−1}
Error covariance prediction: P_{n|n−1} = F_n·P_{n−1}·F_n^T + Q
Difference between observed and predicted values: e_n = z_n − C_n·x_{n|n−1}
Kalman gain: K_n = P_{n|n−1}·C_n^T·(C_n·P_{n|n−1}·C_n^T + R)⁻¹
Covariance update: P_n = (I − K_n·C_n)·P_{n|n−1}
State update: x_n = x_{n|n−1} + K_n·e_n
wherein v_n is process noise with mean 0 and covariance matrix Q; w_n is observation noise with mean 0 and covariance matrix R; P is the error covariance; K is the Kalman gain.
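As a rough illustration (not from the patent), the thresholded update can be sketched in scalar form with NumPy. All parameter values are made up, and the patent does not specify exactly how the gain changes when |e_n| > h — here the measurement is simply rejected and the prediction kept:

```python
import numpy as np

def kf_step(x, P, z, F=1.0, C=1.0, Q=0.01, R=0.1, h=1.0):
    """One thresholded Kalman step (scalar case, illustrative values).

    If the innovation |e| exceeds the threshold h, the measurement is
    treated as an outlier: the gain is set to zero, so the state keeps
    the prediction.
    """
    # Prediction
    x_pred = F * x
    P_pred = F * P * F + Q
    # Innovation and gated gain
    e = z - C * x_pred
    S = C * P_pred * C + R
    K = P_pred * C / S if abs(e) <= h else 0.0
    # Update
    x_new = x_pred + K * e
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new

# Smooth a noisy, roughly constant UWB coordinate with one spurious jump
rng = np.random.default_rng(0)
zs = 5.0 + 0.1 * rng.standard_normal(50)
zs[25] = 20.0  # simulated multipath outlier
x, P = zs[0], 1.0
for z in zs[1:]:
    x, P = kf_step(x, P, z)
print(round(x, 2))
```

The gating keeps the multipath spike at index 25 from dragging the estimate away from the true position.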
Then, vector information of the ridge and the robot is calculated from the optimized positions. The ridge vector is obtained directly in the coordinate system from the determined ridge positions, and the robot body vector is obtained from the two tags (P1, P2): its direction points from P2 to P1, i.e. the direction of the robot, where P1 is the coordinate of the first positioning tag and P2 is the coordinate of the second positioning tag.
After the vector information is acquired, the angle between the robot's direction and the ridge is calculated via the vector product. When the robot is parallel to the ridge and aligned with it, the UWB positioning stage ends; when the robot's direction deviates from the ridge, the direction is adjusted and the positioning information is re-acquired.
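The parallelism check can be sketched with the signed angle obtained from the cross and dot products of the two vectors. The function name and tag coordinates below are made up for illustration:

```python
import math

def heading_error_deg(ridge_vec, robot_vec):
    """Signed angle (degrees) between the ridge direction and the robot
    body vector P2 -> P1, from cross and dot products."""
    (rx, ry), (bx, by) = ridge_vec, robot_vec
    cross = rx * by - ry * bx
    dot = rx * bx + ry * by
    return math.degrees(math.atan2(cross, dot))

# Ridge runs along +y; robot tilted slightly to the right of the ridge
ridge = (0.0, 1.0)
p2, p1 = (2.0, 0.0), (2.2, 1.0)          # illustrative tag coordinates
robot = (p1[0] - p2[0], p1[1] - p2[1])   # body vector points P2 -> P1
err = heading_error_deg(ridge, robot)
print(round(err, 1))
```

A controller would compare |err| against a small tolerance and rotate the robot until the error is within it.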
Through the steps, the robot can be ensured to be positioned at the ridge position, the orientation of the robot is ensured, and the task of the UWB positioning part is completed.
In step S3, after the above steps are finished, the agricultural robot moves between the ridges under visual guidance. The camera on top of the robot collects image information in the direction of travel at the current position; the collected image is processed to obtain a navigation line; the navigation angle is computed and compared against a threshold to decide whether the robot's direction needs correcting; finally the robot's motors are driven to achieve ridge-straddling control. The specific implementation of step S3, as shown in fig. 4, includes:
An image of the forward direction at the current position is captured by the camera mounted on the robot, and the image is format-converted as prescribed to obtain an RGB color image.
Then, the image processing is performed, and the specific steps are as shown in fig. 5:
s31: because the image shot by the camera has a larger visual field range, the gradient of crops and ridges in a smaller region is not high, and the more distant roads are narrower, the more serious the path gradient is, meanwhile, in order to reduce the data calculation amount and eliminate larger interference, a region of interest (ROI) needs to be extracted from the acquired image, the width of the region of interest (ROI) in the application is consistent with that of the original image, and the length of the region of interest is 50% of that of the original image, which is close to the camera.
S32: after the ROI area is set, the image needs to be subjected to gray-scale processing in order to separate the image from the background and extract key information therefrom. Because crops in farmland mostly show green and the grey of soil, the reddish brown colour characteristic difference is great, can strengthen the G component, reduces R and carries out the graying, in order to reduce the influence of noise simultaneously, adopts improvement super green characteristic factor and normalization to handle in this application and realizes the gray processing of image. Wherein, the calculation formulas of the hyper-green eigenvalue (ExG) and normalization are as follows:
ExG = 2G − R − B
f_0(x) = (f(x) − f_min(x)) / (f_max(x) − f_min(x))
wherein R, G, and B represent the gray values of the red, green, and blue components in RGB color space; f(x) and f_0(x) represent the gray values of the image before and after normalization; f_max(x) and f_min(x) represent the maximum and minimum gray values of the original image.
S33: The preceding steps yield an image with evenly distributed gray values. To clearly highlight foreground crop information and suppress the soil background and other information, the image must be binarized. A binarization method combining a fixed-threshold method with the Otsu method has the advantages of high computational efficiency, good stability, and automatic threshold segmentation, and is widely used. To better extract feature information in later steps, the result of the Otsu algorithm is binarized and color-inverted so that crops appear black and ridges appear white. The Otsu algorithm proceeds as follows:
Let the proportions of foreground and background pixels in the whole image be k0 and k1, with average gray levels μ0 and μ1 respectively; let the average gray level of the whole image be μ and the between-class variance be g. Let the number of pixels with gray values smaller than T be N0 (gray levels 0 to l−1) and the number with gray values larger than T be N1 (gray levels l to n−1), and let p_i be the probability that a pixel with gray value i occurs. Then:
g = k0(μ0 − μ)² + k1(μ1 − μ)² = k0·k1·(μ0 − μ1)²
The Otsu algorithm takes the threshold T that maximizes the between-class variance g between crop and background as the optimal threshold; at that point the feature difference between the target region and the image background is largest. The gray image is segmented by T to obtain a binarized image, and finally the binarization result is color-inverted.
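An exhaustive-search toy version of this Otsu step, followed by the color inversion, can be sketched in NumPy. This sketch implements only the plain Otsu search for T (not the patent's combined fixed-threshold variant), and the toy image values are made up:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the T maximising the between-class variance
    g = k0*(mu0-mu)^2 + k1*(mu1-mu)^2 = k0*k1*(mu0-mu1)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        k0 = p[:t].sum()          # proportion below T
        k1 = 1.0 - k0             # proportion at or above T
        if k0 == 0.0 or k1 == 0.0:
            continue
        mu0 = (np.arange(t) @ p[:t]) / k0
        mu1 = (np.arange(t, 256) @ p[t:]) / k1
        g = k0 * k1 * (mu0 - mu1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Bimodal toy image: dark soil (~30) and bright crops (~200)
img = np.array([[30, 32, 31, 200], [29, 201, 199, 202]], dtype=np.uint8)
t = otsu_threshold(img)
# Inverted binarization: crops (bright after ExG) black, ridges white
binary_inverted = np.where(img > t, 0, 255).astype(np.uint8)
print(t)
```

In practice `cv2.threshold(..., cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)` does the same search and inversion in one call.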
S34: in order to obtain the characteristic information of the crop row, so as to fit a final navigation path, further extracting and simplifying a binarized image is needed, interference possibly existing in a crop area is eliminated, and a smooth communication path is obtained, so that the obtained binarized image is subjected to morphological processing, namely, firstly, opening operation and closing operation, then, median filtering is carried out, and finally, convex hull transformation operation is used. The on operation is used for eliminating interference points and interference objects in the image; the closing operation can connect objects which are mutually separated, and small gaps in the object are filled;
the median filtering replaces the pixel value of the current pixel point with the median of each point value in the neighborhood, so that the surrounding pixel values are close to the true value, and the smoothing effect is achieved; the convex hull transformation is to use convex polygons to contain all white pixel points in the picture, so as to ensure that the outline of the ridge is unchanged.
S35: after the image processing operations such as the graying processing, the image threshold segmentation and the morphological filtering, the ridge contour information based on the binarized image is obtained, but still an accurate navigation path cannot be determined, so that edge detection is added to determine the edge of the ridge. There are a variety of edge detection operators in image processing, such as: the Canny operator, the Robert operator, the Sobel operator and the like, wherein the Canny operator can accurately detect edges in the image, has better resistance to noise, and calculates edges of ridges by adopting the edge detection based on the Canny operator in the method.
S36: after the edge information of the ridges is obtained, the characteristic extraction is needed to be carried out to determine the navigation path of the agricultural robot among the ridges, so that the normal running of the agricultural robot is ensured. In the feature extraction, hough transformation can gradually transform the whole feature into a detected local feature, and linear coordinate space is transformed (x, y) to parameter space (r, theta) by utilizing the dual feature of points and lines to realize straight line fitting. The specific implementation mode is as follows: by projecting the points into the hough space using a large matrix of (r, θ), when the parameters (r 0 ,θ 0 ) If the number of projections exceeds a threshold value set in advance, then the parameter (r 0 ,θ 0 ) The represented line is then considered to be a satisfactory straight line. Therefore, the Hough transformation is adopted to process the images, the ridge line information on two sides of the path is extracted to fit, and the central lines of the two obtained straight lines are the required navigation lines.
The navigation line is acquired through the above image processing; the deflection angle between the new navigation line and the previous one is calculated and compared against a threshold to decide whether the robot's direction needs correcting. When the deflection angle exceeds the threshold, the navigation direction is corrected; otherwise the current direction is kept. Finally a motor is driven to achieve ridge-straddling control of the robot.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (10)

1. A greenhouse positioning and navigation system for a leaf vegetable harvesting robot, characterized in that it comprises a robot, a plurality of ultra-wideband positioning base stations, a first positioning tag, a second positioning tag, a camera, and a processor; the robot is used for leaf vegetable harvesting; the plurality of ultra-wideband positioning base stations are distributed in the working area of the robot; the camera is mounted on top of the robot; the first positioning tag and the second positioning tag are both mounted on the robot, on the front and rear sides of the robot's direction of travel respectively; the processor is connected to the camera, the first positioning tag, and the second positioning tag, and adjusts the direction of travel of the robot according to information from the camera, the first positioning tag, and the second positioning tag.
2. The positioning navigation system in a greenhouse of a leaf vegetable harvesting robot of claim 1, wherein the processor comprises a vision processing module, a positioning processing module and a controller, the controller is respectively connected with the vision processing module and the positioning processing module, and the controller controls the advancing direction of the robot.
3. The positioning navigation system in a greenhouse of a leaf vegetable harvesting robot according to claim 2, wherein the vision processing module comprises an image acquisition module, an image processing module and a navigation line calculating module, the image acquisition module receives the camera image, the image processing module processes the camera image to obtain a ridge edge position, the navigation line calculating module calculates a current travelling direction according to the ridge edge position, and the controller corrects the travelling route according to the current travelling direction.
4. The positioning navigation system in the greenhouse of the leaf vegetable harvesting robot according to claim 2, wherein the positioning processing module comprises a data receiving module and a positioning calculation module, the data receiving module is connected with the first positioning tag and the second positioning tag and respectively receives signals of all ultra-wideband positioning base stations, the positioning calculation module respectively calculates positions of the first positioning tag and the second positioning tag according to the signals of the ultra-wideband positioning base stations, and obtains the current orientation of the robot according to vectors pointing to the positions of the first positioning tag from the positions of the second positioning tag.
5. The positioning and navigation system in a greenhouse of a leaf vegetable harvesting robot of claim 4, wherein the current orientation of the robot is adjusted to be parallel to the ridges before navigation begins.
6. A navigation method based on a positioning navigation system in a greenhouse of a leaf vegetable harvesting robot as claimed in any one of claims 1-5, comprising the steps of:
s1: performing preliminary positioning of the robot;
s2: setting a main base station, establishing a global coordinate system, acquiring position information of ridges and robots by using a UWB positioning module, calculating and acquiring ridge vectors and robot body vectors, and adjusting the positions and directions of the robots by vector information;
s3: the camera is used for collecting the image information of the ridges, the processor is used for processing the image information of the ridges, and the advancing direction of the robot is corrected according to the processing result, so that the visual navigation of the robot among the ridges is realized.
7. The navigation method of claim 6, wherein the process of processing the ridge image information by the processor comprises the steps of:
s31: extracting a region of interest from the camera image to obtain a region-of-interest image;
s32: converting the region-of-interest image to grayscale in a manner that strengthens the green component and suppresses the red component, and normalizing the result to obtain a gray image;
s33: binarizing the gray image with a method that combines a fixed threshold with the Otsu method, and inverting the colors of the result to obtain a binarized image;
s34: morphological processing is carried out on the binarized image, and a processed image is obtained;
s35: performing edge extraction on the processed image to obtain a ridge edge position in the image;
s36: and calculating the travelling route of the robot according to the ridge edge position.
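A minimal sketch of the graying and normalization of step S32, assuming a BGR uint8 input image. The excess-green formulation 2G − R − B is one common way to strengthen the green component and suppress the red component; the claim does not fix the exact weights, so these are an assumption:

```python
import numpy as np

def excess_green_gray(img_bgr):
    """Grayscale conversion strengthening green and suppressing red
    (2G - R - B), normalized to the 0-255 range as in step S32.
    img_bgr is an H x W x 3 uint8 array in B, G, R channel order."""
    b = img_bgr[..., 0].astype(float)
    g = img_bgr[..., 1].astype(float)
    r = img_bgr[..., 2].astype(float)
    exg = 2.0 * g - r - b          # vegetation appears bright, soil dark
    exg -= exg.min()               # shift so the minimum is 0
    rng = exg.max()
    if rng > 0:                    # avoid division by zero on flat images
        exg = exg / rng * 255.0
    return exg.astype(np.uint8)
```

After this step, green leaf pixels map toward 255 and reddish soil pixels toward 0, which is what makes the subsequent binarization of step S33 effective.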
8. The navigation method of claim 7, wherein the Otsu method has a computational expression of:
k₀ + k₁ = 1

g = k₀(μ₀ − μ)² + k₁(μ₁ − μ)² = k₀k₁(μ₀ − μ₁)²

wherein k₀ and k₁ are the proportions of foreground and background pixel points in the whole image, μ₀ and μ₁ are the respective average gray levels of the foreground and the background, μ is the average gray level of the whole image, g is the between-class variance of the foreground and the background, N₀ is the number of pixels with gray values smaller than the threshold T (corresponding to gray levels 0 to l−1), N₁ is the number of pixels with gray values greater than T (corresponding to gray levels l to n−1), and pᵢ is the probability of occurrence of a pixel with gray value i.
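The between-class variance above can be maximized by exhaustive search over all candidate thresholds; a small illustrative implementation (not part of the claims) of that search is:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold T that maximizes the between-class
    variance g = k0*k1*(mu0 - mu1)^2 over a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # p_i: probability of gray value i
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        k0 = p[:t].sum()                   # proportion of pixels below t
        k1 = 1.0 - k0                      # proportion of pixels at or above t
        if k0 == 0.0 or k1 == 0.0:
            continue                       # one class empty: variance undefined
        mu0 = (np.arange(t) * p[:t]).sum() / k0          # class-0 mean gray
        mu1 = (np.arange(t, 256) * p[t:]).sum() / k1     # class-1 mean gray
        g = k0 * k1 * (mu0 - mu1) ** 2     # between-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

On a strongly bimodal image the returned threshold falls between the two modes, separating foreground from background.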
9. The navigation method of claim 7, wherein the morphological processing is performed by: and performing opening operation and closing operation on the binarized image, performing smoothing treatment on the image by using median filtering, and finally performing convex hull transformation operation to obtain the treated image.
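The opening and closing operations of claim 9 can be sketched with a 3 × 3 structuring element; this pure-NumPy version is for illustration only (a real implementation would typically use an image-processing library, and the median filtering and convex-hull steps of the claim are not shown):

```python
import numpy as np

def _dilate(b):
    """3x3 binary dilation: max over the 8-neighbourhood of each pixel."""
    p = np.pad(b, 1)  # pad with 0 so dilation cannot grow past the border
    h, w = b.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)],
                  axis=0)

def _erode(b):
    """3x3 binary erosion: min over the 8-neighbourhood of each pixel."""
    p = np.pad(b, 1, constant_values=1)  # pad with 1 so borders don't shrink
    h, w = b.shape
    return np.min([p[i:i + h, j:j + w] for i in range(3) for j in range(3)],
                  axis=0)

def open_close(binary):
    """Opening (erode then dilate) removes small bright specks; closing
    (dilate then erode) fills small dark holes, as in claim 9."""
    opened = _dilate(_erode(binary))
    return _erode(_dilate(opened))
```

Applied to the binarized ridge image, this suppresses isolated noise pixels while preserving the large connected vegetation regions whose edges step S35 extracts.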
10. The navigation method of claim 7, wherein the region of interest is: the width is the same as the width of the original image, the length is 50% of the length of the original image, and the area is positioned in the center of the original image.
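The region of interest of claim 10 reduces to a simple crop; this sketch assumes "length" refers to the image height, which the claim does not state explicitly:

```python
import numpy as np

def extract_roi(img):
    """Central region of interest per claim 10: full original width,
    50% of the original height, centred vertically."""
    h = img.shape[0]
    return img[h // 4 : h // 4 + h // 2, :]
```

Restricting processing to this central band discards the sky/structure region at the top and the near-field distortion at the bottom before the graying of step S32.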
CN202311855049.0A 2023-12-29 2023-12-29 Positioning navigation system and method for greenhouse of leaf vegetable harvesting robot Pending CN117804430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311855049.0A CN117804430A (en) 2023-12-29 2023-12-29 Positioning navigation system and method for greenhouse of leaf vegetable harvesting robot

Publications (1)

Publication Number Publication Date
CN117804430A true CN117804430A (en) 2024-04-02

Family

ID=90421312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311855049.0A Pending CN117804430A (en) 2023-12-29 2023-12-29 Positioning navigation system and method for greenhouse of leaf vegetable harvesting robot

Country Status (1)

Country Link
CN (1) CN117804430A (en)

Similar Documents

Publication Publication Date Title
US7684916B2 (en) Method and system for vehicular guidance using a crop image
US7570783B2 (en) Method and system for vehicular guidance using a crop image
US7248968B2 (en) Obstacle detection using stereo vision
US20070001097A1 (en) Method and system for vehicular guidance using a crop image
CN107844750A (en) A kind of water surface panoramic picture target detection recognition methods
Jiang et al. A machine vision based crop rows detection for agricultural robots
US20070014434A1 (en) Method and system for vehicular guidance using a crop image
CN106599760B (en) Method for calculating running area of inspection robot of transformer substation
CN101916446A (en) Gray level target tracking algorithm based on marginal information and mean shift
CN110006444B (en) Anti-interference visual odometer construction method based on optimized Gaussian mixture model
CN103577833A (en) Abnormal intrusion detection method based on motion template
CN113450402B (en) Navigation center line extraction method for vegetable greenhouse inspection robot
CN116977902B (en) Target tracking method and system for on-board photoelectric stabilized platform of coastal defense
CN113781523A (en) Football detection tracking method and device, electronic equipment and storage medium
CN117804430A (en) Positioning navigation system and method for greenhouse of leaf vegetable harvesting robot
CN115451965B (en) Relative heading information detection method for transplanting system of transplanting machine based on binocular vision
CN117031424A (en) Water surface target detection tracking method based on navigation radar
Chen et al. Measurement of the distance from grain divider to harvesting boundary based on dynamic regions of interest.
CN115294562A (en) Intelligent sensing method for operation environment of plant protection robot
Zhang et al. An obstacle detection system based on monocular vision for apple orchard robot
Rasmussen et al. Integrating stereo structure for omnidirectional trail following
Hou Analysis of visual navigation extraction algorithm of farm robot based on dark primary colour.
CN114625114A (en) Ground spraying system traveling path planning method based on machine vision
CN117456368B (en) Fruit and vegetable identification picking method, system and device
CN116563348B (en) Infrared weak small target multi-mode tracking method and system based on dual-feature template

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination