CN111201896A - Picking robot based on visual navigation and control method - Google Patents

Picking robot based on visual navigation and control method

Info

Publication number
CN111201896A
Authority
CN
China
Prior art keywords
picking robot
image
module
steering
trunk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911384109.9A
Other languages
Chinese (zh)
Inventor
Li Wei
Zhang Wenqiang
Wang Pengbo
Geng Changxing
Ma Yue
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SUZHOU BOTIAN AUTOMATION TECHNOLOGY CO LTD
Original Assignee
SUZHOU BOTIAN AUTOMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SUZHOU BOTIAN AUTOMATION TECHNOLOGY CO LTD
Priority to CN201911384109.9A
Publication of CN111201896A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D46/00Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/30Robotic devices for individually picking crops
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Robotics (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a picking robot based on visual navigation and a control method, comprising: the visual navigation module, used for acquiring image information, which collects trunk information in the field in real time and feeds the acquired image data back to the industrial personal computer; the image processing module, which processes the received image data with the industrial personal computer; the calculation module, which calculates the image pose after image processing and calculates the steering time of the picking robot with the fuzzy controller according to the position deviation of the picking robot relative to the trunks in the field; and the execution module, which controls the steering of the picking robot according to the steering time to realize autonomous walking of the picking robot. The method has the advantages of high precision, a high recognition rate and strong robustness.

Description

Picking robot based on visual navigation and control method
Technical Field
The invention relates to the technical field of crop picking, in particular to a picking robot based on visual navigation and a control method.
Background
Existing crop picking is done mainly by hand. Medlar (wolfberry), for example, is a plant valued both as medicine and as food, with high medicinal and economic value. At present medlar is picked mainly by hand; because medlar branches carry many thorns, hands are easily scratched during manual picking and efficiency is low, which restricts the development of the medlar industry.
To overcome these problems, the prior art usually harvests with a mechanical device, and a mechanical device with autonomous navigation is of great significance for intelligent medlar picking. Existing research on autonomous navigation and positioning of farmland machinery mainly focuses on two modes, GPS navigation and machine vision navigation; visual navigation offers good flexibility, low cost, high precision and strong noise resistance, which favors popularization and application of the technology. Because a wolfberry harvesting robot works in an unstructured environment and is affected by uncertain factors such as natural illumination and biological diversity, detecting navigation information with machine vision technology is difficult. At present, visual navigation research on agricultural machinery is mainly aimed at row-sown field crops, whose obvious furrow features give the vision system a good basis for recognition. Visual navigation research and reports on relatively tall crops, such as orchards and forest stands, are comparatively few; in those environments, the random spatial arrangement of the tree trunks, their varied sizes and shapes, and the weak contrast between target and background all increase the difficulty of navigation.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the low navigation precision and low recognition speed of the prior art, and to provide a picking robot based on visual navigation, and a control method, with high navigation precision and high recognition speed.
In order to solve the above technical problem, the picking robot based on visual navigation of the present invention comprises: the visual navigation module is used for acquiring image information, and feeding back the acquired image data information to the industrial personal computer by acquiring trunk information in the field in real time; the image processing module is used for processing image data information and processing the received image data information by using the industrial personal computer; the calculation module is used for calculating the image pose after image processing and calculating the steering time of the picking robot by using the fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field; and the execution module controls the steering of the picking robot according to the steering time so as to realize the autonomous walking of the picking robot.
In one embodiment of the present invention, the image processing module includes: an image graying processing module, an image binarization and denoising module, a feature point extraction and navigation reference line generation module, and an image region-of-interest information extraction module. The image graying processing module is connected with the industrial personal computer; the image graying processing module is connected with the feature point extraction and navigation reference line generation module through the image binarization and denoising module; and the feature point extraction and navigation reference line generation module is connected with the image region-of-interest information extraction module.
In an embodiment of the present invention, the image binarization and denoising module segments the image with a maximum entropy threshold method: a morphological opening operation is performed with a structuring element of specified pixels, all contours in the image whose area exceeds a set threshold are marked as objects for the next contour extraction, and speckle noise smaller than the threshold is removed.
In an embodiment of the present invention, in the feature point extraction and navigation reference line generation module, a minimum positive (axis-aligned) circumscribed rectangle is drawn around each extracted contour whose area exceeds the set threshold, and the coordinates of the midpoint of the bottom edge of that rectangle are calculated; this midpoint is the intersection of the trunk and the ground, and a navigation reference line can be fitted through the feature points on the working side by the least square method.
In an embodiment of the invention, the image region-of-interest information extraction module tracks the single-side trunks in real time with a dynamic region of interest: the position of the region of interest is initialized when the program starts, so that its center line coincides with the center line of the image; several initial parameters are set, including the image midpoint abscissa, the leading-line slope and the leading line's x-axis intercept; and, when the first frame is processed, the trunk coordinates whose trunk-midpoint abscissa is smaller than the image midpoint abscissa are put into the same array to generate the leading-line equation.
In one embodiment of the invention, the fuzzy controller takes the double-input single-output form: the inputs are the lateral deviation and deflection angle of the picking robot, the output is the steering time of the picking robot, and the inputs and output are fuzzified. The range of the lateral deviation and the range of the deflection angle are set according to the position deviation of the picking robot relative to the trunks in the field; the lateral deviation and the deflection angle are each divided into a plurality of fuzzy subsets, and the output, the control of the steering direction and degree of the picking robot, is likewise divided into a plurality of fuzzy subsets.
In one embodiment of the invention, the picking robot comprises a harvesting end, the harvesting end comprising a first finger for fixing a trunk branch and a second finger for driving the branch to vibrate back and forth.
In one embodiment of the invention, the second finger is connected to the power unit by a crank and rocker arrangement.
In one embodiment of the invention, the harvesting end is fixed on a moving module; the moving module comprises a slider connected to the harvesting end, the slider is slidably disposed on a first rail, the first rail is slidably disposed on a second rail, and the extension direction of the first rail is perpendicular to the extension direction of the second rail.
In one embodiment of the invention, the picking robot comprises a first steering power device, a first steel wire rope, a first steering clutch, a second steering power device, a second steel wire rope and a second steering clutch, wherein the first steering power device is connected with a first shifting fork of the first steering clutch through the first steel wire rope, the second steering power device is connected with a second shifting fork of the second steering clutch through the second steel wire rope, and the first shifting fork and the second shifting fork are connected through a telescopic piece.
In one embodiment of the invention, the picking robot comprises a tracked chassis on which a camera frame is provided, the camera being mounted on the camera frame.
The invention also provides a picking robot control method based on visual navigation, which comprises the following steps: collecting trunk information in the field in real time, and feeding back the collected image data information to the industrial personal computer; carrying out image processing on the received image data information by using the industrial personal computer; calculating the image pose after image processing, and calculating the steering time of the picking robot by using a fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field; and controlling the steering of the picking robot according to the steering time to realize the autonomous walking of the picking robot.
compared with the prior art, the technical scheme of the invention has the following advantages:
the picking robot based on visual navigation and the control method thereof comprise the steps of collecting trunk information in the field in real time, and feeding back the collected image data information to an industrial personal computer; carrying out image processing on the received image data information by using the industrial personal computer; calculating the image pose after image processing, and calculating the steering time of the picking robot by using a fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field; and controlling the steering of the picking robot according to the steering time to realize the autonomous walking of the picking robot. The method has the characteristics of high precision, high recognition rate, strong robustness and the like; the visual navigation module is low in cost and strong in transportability; in addition, the chassis truck has high steering corresponding speed, and the control method is simple and practical; moreover, the structure is simple, the operation is convenient, the cost is low, and the service life is long.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the embodiments of the present disclosure taken in conjunction with the accompanying drawings, in which
FIG. 1 is a schematic diagram of the components of a picking robot based on visual navigation according to the present invention;
FIGS. 2(a)-2(d) are schematic diagrams at various stages of processing an image with the image processing module of the present invention;
FIG. 3 is a schematic diagram of region of interest extraction according to the present invention;
FIG. 4 is a perspective view of a picking robot of the present invention based on visual navigation;
FIG. 5 is a schematic view of the harvesting end of the present invention;
fig. 6 is a bottom view of the vision navigation based picking robot of the present invention.
The specification reference numbers indicate: 10-harvesting end, 11-first finger, 12-second finger, 13-crank rocker device, 14-power device, 20-moving module, 21-sliding block, 22-first rail, 23-second rail, 31A-first steering power device, 31B-second steering power device, 32A-first steel wire rope, 32B-second steel wire rope, 33A-first shifting fork, 33B-second shifting fork, 34-telescopic piece, 35-frame, 40-crawler chassis, 41-camera frame, 42-camera.
Detailed Description
Example one
As shown in fig. 1, the present embodiment provides a picking robot based on visual navigation, comprising: the visual navigation module is used for acquiring image information, and feeding back the acquired image data information to the industrial personal computer by acquiring trunk information in the field in real time; the image processing module is used for processing image data information and processing the received image data information by using the industrial personal computer; the calculation module is used for calculating the image pose after image processing and calculating the steering time of the picking robot by using the fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field; and the execution module controls the steering of the picking robot according to the steering time so as to realize the autonomous walking of the picking robot.
The picking robot based on visual navigation of this embodiment includes: the visual navigation module, used for acquiring image information, which collects trunk information in the field in real time and feeds the acquired image data back to the industrial personal computer, so that the trunk positions can be determined; the image processing module, which processes the received image data with the industrial personal computer, facilitating the subsequent calculation; the calculation module, which calculates the image pose after image processing and, according to the position deviation of the picking robot relative to the trunks in the field, calculates the steering time of the picking robot with the fuzzy controller, so that path tracking of the picking robot can be realized; and the execution module, which controls the steering of the picking robot according to the steering time, realizing autonomous walking of the picking robot.
The image processing module includes: an image graying processing module, an image binarization and denoising module, a feature point extraction and navigation reference line generation module and an image region-of-interest information extraction module. The image graying processing module is connected with the industrial personal computer and performs image graying on the received image data with the industrial personal computer, so that trunk features can be extracted effectively; the image graying processing module is connected with the feature point extraction and navigation reference line generation module through the image binarization and denoising module, and the feature point extraction and navigation reference line generation module is connected with the image region-of-interest information extraction module.
In the image graying processing module, trunk features can be effectively extracted using a weighted combination of R, G and B, the gray values of the RGB color channels of the original color image. [The exact graying formula appears only as an image in the original: Figure BDA0002343067770000061.] The result is shown in fig. 2(a).
The image binarization and denoising module segments the image with the maximum entropy threshold method, performs a morphological opening operation with a structuring element of specified pixels, marks all contours in the image whose area exceeds the set threshold as objects for the next contour extraction, and removes speckle noise smaller than the threshold. Specifically, as shown in fig. 2(b), the binarized image still contains a small amount of noise, so a morphological opening operation is first performed with a 5 pixel × 1 pixel structuring element; then all contours with an area larger than 30 are marked as objects for the next contour extraction, while the speckle noise below the threshold is removed. The processed result is shown in fig. 2(c): the speckle noise is gone, and each remaining region represents an independent medlar trunk.
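A minimal Python/OpenCV sketch of this binarization and denoising step follows. OpenCV has no built-in maximum entropy threshold, so Kapur's criterion is implemented directly; the vertical orientation of the 5 × 1 structuring element (matching upright trunks) is an assumption.

```python
import cv2
import numpy as np

def max_entropy_threshold(gray):
    """Kapur's maximum-entropy threshold: pick t maximizing the summed
    entropies of the background (<= t) and foreground (> t) histograms."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
        h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
            - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def binarize_and_denoise(gray, min_area=30):
    """Threshold, open with a 1-wide x 5-tall rectangular element, and keep
    only contours whose area exceeds min_area (30 in the text); everything
    smaller is treated as speckle noise."""
    _, bw = cv2.threshold(gray, max_entropy_threshold(gray), 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 5))
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```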
In the feature point extraction and navigation reference line generation module, a minimum positive (axis-aligned) circumscribed rectangle is drawn around each extracted contour whose area exceeds the set threshold, and the coordinates of the midpoint of its bottom edge are calculated; this midpoint is the intersection of the trunk and the ground, and the navigation reference line is fitted through the working-side feature points by the least square method. Specifically, the minimum positive circumscribed rectangle is drawn for each contour with area larger than 30 extracted above, the coordinate p of the midpoint of its bottom edge is calculated, this midpoint p being the intersection of the medlar trunk and the ground, and the extracted working-side feature points are fitted into the leading line (the navigation reference line) with the least square method; the result is shown in fig. 2(d).
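These two operations, bottom-edge midpoints of axis-aligned bounding rectangles and a least-squares line through them, can be sketched as follows (cv2.boundingRect returns the axis-aligned rectangle used here):

```python
import cv2
import numpy as np

def trunk_ground_points(contours):
    """Feature point per trunk: the midpoint of the bottom edge of the
    minimum positive (axis-aligned) circumscribed rectangle, taken as the
    trunk/ground intersection p."""
    pts = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        pts.append((x + w / 2.0, y + h))  # bottom-edge midpoint; image y grows downward
    return pts

def fit_navigation_line(points):
    """Least-squares fit of the navigation reference line y = k*x + b
    through the working-side feature points."""
    xs = np.array([p[0] for p in points])
    ys = np.array([p[1] for p in points])
    k, b = np.polyfit(xs, ys, 1)
    return float(k), float(b)
```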
In the image region-of-interest information extraction module, a dynamic region of interest tracks the single-side trunks in real time: the position of the region of interest is initialized when the program starts, so that its center line coincides with the center line of the image; several initial parameters are set; and, when the first frame is processed, the trunk coordinates whose trunk-midpoint abscissa is smaller than the image midpoint abscissa are put into the same array to generate the leading-line equation. Specifically, the position of the region of interest is initialized at the start of the program so that its center line coincides with the center line of the image. Three initial parameters are set: the image midpoint abscissa xmid = 320, the leading-line slope k = 0 and the leading-line x-axis intercept b = 0. When the first frame is processed, the trunk coordinates whose trunk-midpoint abscissa is less than xmid are put into the same array A, the leading-line equation y = kx + b is generated, and y1 = kx + b - b1, y2 = kx + b + b2 and xmid = 40 - (b/k) are set. The enclosed region bounded by the straight lines y = 0, x = 0, y1 = kx + b - b1 and y2 = kx + b + b2 is the region of interest, as shown in fig. 3. Experiments show that, while keeping the region of interest as small as possible, taking b1 = 40 and b2 = 20 retains the single-side trunk information well and allows the navigation path to be captured quickly. When no trunk information appears in the region of interest, the system automatically re-initializes the parameters and scans the whole image again.
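A sketch of the dynamic region of interest follows. The initial values (xmid = 320, k = 0, b = 0) and the widths b1 = 40, b2 = 20 are taken from the text; the xmid update expression is reconstructed from a garbled passage and should be treated as an assumption.

```python
import numpy as np

class DynamicROI:
    """Single-side trunk tracking with a dynamic region of interest."""
    def __init__(self, x_mid=320, k=0.0, b=0.0, b1=40, b2=20):
        self.x_mid, self.k, self.b, self.b1, self.b2 = x_mid, k, b, b1, b2

    def update(self, trunk_points):
        # Keep trunks whose bottom-midpoint abscissa lies left of x_mid.
        side = [(x, y) for x, y in trunk_points if x < self.x_mid]
        if len(side) < 2:            # no trunk information in the ROI:
            self.__init__()          # re-initialize and rescan the whole image
            return None
        xs, ys = zip(*side)
        self.k, self.b = np.polyfit(xs, ys, 1)
        if self.k:
            self.x_mid = 40 - self.b / self.k  # reconstructed update rule
        # ROI is bounded by y = 0, x = 0 and the two offset lines below.
        return (lambda x: self.k * x + self.b - self.b1,
                lambda x: self.k * x + self.b + self.b2)
```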
Fuzzy control is a widely applied algorithm; its advantage is that the fuzzy controller needs no accurate control model, as long as effective input and output variables and appropriate fuzzy control rules are defined. The fuzzy controller here takes the double-input single-output form: the input variables are the lateral deviation d and the deflection angle β of the picking robot, and the output variable is the steering time t of the picking robot. The input and output variables are first fuzzified: the range of the lateral deviation and the range of the deflection angle are set according to the position deviation of the picking robot relative to the trunks in the field, and each input is quantized into seven fuzzy subsets; the output variable, the control of the steering direction and degree of the picking robot, is likewise divided into a plurality of fuzzy subsets.
The fuzzy control rules take the form: if d is X and β is Y, then t is Z. When the system works, the industrial personal computer first acquires the picture information; after image processing, coordinate-system conversion and other steps, the lateral deviation d and the deflection angle β are passed to the fuzzy controller; the output variable t is computed by defuzzification with the center-of-gravity method; and a single chip microcomputer receives the output variable t sent by the industrial personal computer and determines the steering time of the picking robot through instructions, realizing path tracking.
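A minimal sketch of such a double-input single-output Mamdani controller is shown below. The seven subset labels, the even triangular partitions, the rule table and the numeric ranges for d, β and t are all illustrative assumptions (the patent does not reproduce its rule base); only the centroid (center-of-gravity) defuzzification follows the text.

```python
import numpy as np

LABELS = ["NB", "NM", "NS", "ZO", "PS", "PM", "PB"]  # assumed seven-subset partition

def tri(x, a, peak, c):
    """Triangular membership function with feet a, c and peak in between."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (peak - a) if x < peak else (c - x) / (c - peak)

def fuzzify(x, lo, hi):
    """Membership of x in seven evenly spaced triangular sets on [lo, hi]."""
    centers = np.linspace(lo, hi, 7)
    step = centers[1] - centers[0]
    return {lab: tri(x, c - step, c, c + step) for lab, c in zip(LABELS, centers)}

# Placeholder rule table: the output label is the "average" of the two input
# labels.  Purely illustrative; the actual rule base is not in the source.
RULES = {(d, a): LABELS[(LABELS.index(d) + LABELS.index(a)) // 2]
         for d in LABELS for a in LABELS}

def steering_time(d, beta, d_range=(-0.5, 0.5), b_range=(-30, 30), t_range=(-1, 1)):
    """if d is X and beta is Y then t is Z; centroid defuzzification."""
    mu_d = fuzzify(d, *d_range)
    mu_b = fuzzify(beta, *b_range)
    centers = dict(zip(LABELS, np.linspace(*t_range, 7)))
    num = den = 0.0
    for (ld, lb), lt in RULES.items():
        w = min(mu_d[ld], mu_b[lb])      # Mamdani AND of the two antecedents
        num += w * centers[lt]
        den += w
    return num / den if den else 0.0
```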
After the picking robot realizes path tracking, it can begin to pick the crops. Specifically, as shown in figs. 4 and 5, the picking robot includes a harvesting end 10, and the fruit trees are harvested through vibration of the harvesting end 10. The harvesting end 10 includes a first finger 11 for fixing a trunk branch and a second finger 12 for driving the branch to vibrate back and forth; because the trunk branch is fixed by the first finger 11 during picking, other main or secondary branches are not disturbed while the branch is vibrated by the second finger 12. The first fingers 11 are located below the second fingers 12. There are multiple first fingers 11 arranged side by side, with the trunk branch clamped between any two adjacent first fingers 11; there are likewise multiple second fingers 12 arranged side by side, with the branch clamped between any two adjacent second fingers 12 and driven to vibrate back and forth. The whole device is simple in structure, convenient to operate, low in cost and long in service life.
To drive the branch to vibrate back and forth, the second finger 12 is connected with a power device 14 through a crank rocker device 13. The power device 14 provides power, and the crank rocker device 13 converts rotary motion into a reciprocating swing that drives the second finger 12 to vibrate. In this embodiment the power device 14 is a motor. The motor drives the crank rocker device, so its frequency, and thus the number of strokes on the branch per unit time, can be changed conveniently, effectively protecting the branches from vibration damage and breakage during harvesting.
To control the position of the harvesting end 10 in the vertical direction and the depth to which the first finger 11 and the second finger 12 are inserted into the branches, the harvesting end 10 is fixed on a moving module 20. Specifically, the moving module 20 comprises a slider 21 connected with the harvesting end 10; the slider 21 is slidably arranged on a first rail 22, the first rail 22 is slidably arranged on a second rail 23, and the extension direction of the first rail 22 is perpendicular to that of the second rail 23, so the harvesting end 10 can be driven to move both horizontally and vertically.
As shown in fig. 6, to control the steering of the picking robot, the picking robot includes a first steering power device 31A, a first steel wire rope 32A, a first steering clutch, a second steering power device 31B, a second steel wire rope 32B and a second steering clutch. The first steering power device 31A is connected with the first shifting fork 33A of the first steering clutch through the first steel wire rope 32A; the first steering power device 31A drives the first steel wire rope 32A to move the first shifting fork 33A, moving the first steering clutch and thereby controlling the steering of the picking robot. The second steering power device 31B is connected with the second shifting fork 33B of the second steering clutch through the second steel wire rope 32B; the second steering power device 31B drives the second steel wire rope 32B to move the second shifting fork 33B, moving the second steering clutch and thereby controlling the steering of the picking robot. The first shifting fork 33A and the second shifting fork 33B are connected through a telescopic piece 34, which helps ensure that both forks reset.
In this embodiment, the first steering power device 31A and the second steering power device 31B pull their steel wire ropes to separate the right wheel and the left wheel, respectively, from the main motion shaft; both can therefore adopt cylinders. One end of each steel wire rope is connected with the cylinder push rod and the other end with the clutch; the telescopic piece 34 is a spring; the first and second steering clutches are both connected with the power source and serve to disengage and engage it.
How the steering of the picking robot is controlled is explained in detail below:
the picking robot cuts off the left wheel or the right wheel of the wheel to be separated from the movement main shaft by pulling the first steering clutch or the second steering clutch, so that the differential steering of the picking robot is realized. Specifically, when the cylinder contracts, the steel wire rope drives the steering clutch to move, so that the chain wheel loses the binding force, and the chain wheel stops rotating; when the cylinder extends out, the steering clutch resets under the action of the spring, so that the chain wheel is meshed with the moving main shaft, and the chain wheel continues to rotate.
The first steering power device 31A, the second steering power device 31B, and the first steering clutch and the second steering clutch are all fixed to a frame 35.
To carry the related devices to the designated harvesting position, the picking robot comprises a crawler-type chassis 40, which has three gears, high, medium and low speed, to suit the travel-speed requirements of different working conditions. A camera frame 41 is provided on the crawler-type chassis 40 and the camera 42 is mounted on the camera frame 41; the camera 42 collects real-time working-condition pictures and uploads them to the industrial personal computer.
The picking robot further comprises a control unit; the visual navigation module, image processing module, calculation module and execution module are all connected with the control unit, which is placed in a control cabinet. The trunks are thus identified by vision technology, the image information is processed, the offset and deflection-angle information is obtained after coordinate transformation, and finally the steering of the robot is controlled by the fuzzy control method.
The picking robot based on visual navigation in this embodiment has the following working process:
the method comprises the steps of utilizing the camera 42 to collect trunk information in the field in real time, enabling the trunk information to be whitewashed, feeding back collected image data information to the industrial personal computer, conducting image processing on the received image data information through the industrial personal computer, specifically conducting graying processing, binarization processing and denoising processing on the image data information in sequence, extracting a rectangular framework, utilizing a least square method to fit a navigation line, dynamically extracting an image region of interest, calculating a pose, transmitting a transverse deviation d and a deflection angle β of the picking robot to a fuzzy controller, adopting a gravity center method to defuzzify and calculate an output variable t, adopting a single chip microcomputer to receive information sent by the industrial personal computer, and judging and controlling the stretching time of the air cylinder through an instruction so as to achieve path tracking.
The crops picked in this embodiment include medlar, but are not limited to medlar; other crops can also be picked.
Example two
Based on the same inventive concept, this embodiment provides a picking robot control method based on visual navigation; its principle of solving the problem is similar to that of the picking robot based on visual navigation, so repeated description is omitted.
The control method of the picking robot based on the visual navigation in the embodiment comprises the following steps:
collecting information of a trunk in the field in real time, and feeding back the collected image data information to the industrial personal computer;
carrying out image processing on the received image data information by using the industrial personal computer;
calculating the image pose after image processing, and calculating the steering time of the picking robot by using a fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field;
and controlling the steering of the picking robot according to the steering time to realize the autonomous walking of the picking robot.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived from them remain within the protection scope of the invention.

Claims (12)

1. A picking robot based on visual navigation, comprising:
the visual navigation module is used for acquiring image information, and feeding back the acquired image data information to the industrial personal computer by acquiring trunk information in the field in real time;
the image processing module is used for processing image data information and processing the received image data information by using the industrial personal computer;
the calculation module is used for calculating the image pose after image processing and calculating the steering time of the picking robot by using the fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field;
and the execution module controls the steering of the picking robot according to the steering time so as to realize the autonomous walking of the picking robot.
2. The visual navigation-based picking robot of claim 1, wherein: the image processing module includes: an image graying processing module, an image binarization and denoising module, a feature point extraction and navigation reference line generation module and an image region-of-interest information extraction module, wherein the image graying processing module is connected with the industrial personal computer, the image graying processing module is connected with the feature point extraction and navigation reference line generation module through the image binarization and denoising module, and the feature point extraction and navigation reference line generation module is connected with the image region-of-interest information extraction module.
3. The visual navigation-based picking robot of claim 2, wherein: the image binarization and denoising module segments the image with a maximum entropy threshold method, performing a morphological opening operation with a structuring element of specified pixels, marking all contours in the image whose area exceeds a set threshold as objects for the next contour extraction, and removing speckle noise smaller than the threshold.
4. The visual navigation-based picking robot of claim 3, wherein: in the feature point extraction and navigation reference line generation module, a minimum positive circumscribed rectangle is drawn around each extracted contour whose area exceeds the set threshold, and the coordinates of the midpoint of the bottom edge of the rectangle are calculated; this midpoint is the intersection of the trunk and the ground, and the navigation reference line is fitted through the working-side feature points by the least square method.
5. The visual navigation-based picking robot of claim 4, wherein: in the image region-of-interest information extraction module, a dynamic region of interest tracks the single-side trunks in real time: the position of the region of interest is initialized when the program starts, so that its center line coincides with the center line of the image; several initial parameters are set, including the image midpoint abscissa, the leading-line slope and the leading line's x-axis intercept; and, when the first frame is processed, the trunk coordinates whose trunk-midpoint abscissa is smaller than the image midpoint abscissa are put into the same array to generate the leading-line equation.
6. The visual navigation-based picking robot of claim 1, wherein: the fuzzy controller adopts a double-input single-output form; the inputs are the lateral deviation and deflection angle of the picking robot, and the output is the steering time of the picking robot; the inputs and output are fuzzified; the range of the lateral deviation and the range of the deflection angle are set according to the position deviation of the picking robot relative to the trunks in the field; the lateral deviation and the deflection angle are each divided into a plurality of fuzzy subsets, and the output, the control of the steering direction and degree of the picking robot, is likewise divided into a plurality of fuzzy subsets.
7. The visual navigation-based picking robot of claim 1, wherein: the picking robot comprises a harvesting end, the harvesting end comprising a first finger for fixing a trunk branch and a second finger for driving the branch to vibrate back and forth.
8. The visual navigation-based picking robot of claim 7, wherein: the second finger is connected with the power device through a crank rocker device.
9. The visual navigation-based picking robot of claim 7, wherein: the harvesting end is fixed on a moving module; the moving module comprises a slider connected with the harvesting end; the slider is slidably disposed on a first rail; the first rail is slidably disposed on a second rail; and the extension direction of the first rail is perpendicular to the extension direction of the second rail.
10. The visual navigation-based picking robot of claim 1, wherein: the picking robot comprises a first steering power device, a first steel wire rope, a first steering clutch, a second steering power device, a second steel wire rope and a second steering clutch, wherein the first steering power device is connected with a first shifting fork of the first steering clutch through the first steel wire rope, the second steering power device is connected with a second shifting fork of the second steering clutch through the second steel wire rope, and the first shifting fork and the second shifting fork are connected through a telescopic piece.
11. The visual navigation-based picking robot of claim 1, wherein: the picking robot comprises a crawler-type chassis vehicle, a camera frame is arranged on the crawler-type chassis vehicle, and the camera is arranged on the camera frame.
12. A picking robot control method based on visual navigation is characterized by comprising the following steps:
collecting information of a trunk in the field in real time, and feeding back the collected image data information to the industrial personal computer;
carrying out image processing on the received image data information by using the industrial personal computer;
calculating the image pose after image processing, and calculating the steering time of the picking robot by using a fuzzy controller according to the position deviation of the picking robot relative to the trunk in the field;
and controlling the steering of the picking robot according to the steering time to realize the autonomous walking of the picking robot.
CN201911384109.9A 2019-12-28 2019-12-28 Picking robot based on visual navigation and control method Pending CN111201896A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911384109.9A CN111201896A (en) 2019-12-28 2019-12-28 Picking robot based on visual navigation and control method


Publications (1)

Publication Number Publication Date
CN111201896A (en) 2020-05-29

Family

ID=70780459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911384109.9A Pending CN111201896A (en) 2019-12-28 2019-12-28 Picking robot based on visual navigation and control method

Country Status (1)

Country Link
CN (1) CN111201896A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112005726A (en) * 2020-09-07 2020-12-01 郑州轻工业大学 Intelligent fruit and vegetable picking device and method
CN112556606A (en) * 2020-12-24 2021-03-26 宁夏农林科学院农业经济与信息技术研究所(宁夏农业科技图书馆) Self-propelled wolfberry fruit actual measurement method and device based on binocular vision
CN113093743A (en) * 2021-03-30 2021-07-09 西北农林科技大学 Navigation control method based on virtual radar model and deep neural network


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7854108B2 (en) * 2003-12-12 2010-12-21 Vision Robotics Corporation Agricultural robot system and method
CN102914967A (en) * 2012-09-21 2013-02-06 浙江工业大学 Autonomous navigation and man-machine coordination picking operating system of picking robot
JP2016073265A (en) * 2014-10-02 2016-05-12 大介 阿部 Automatic fruit harvest device
CN105783935A (en) * 2016-03-07 2016-07-20 河北科技大学 Visual navigation method for agricultural machine
CN106576603A (en) * 2016-12-05 2017-04-26 华南农业大学 Combined vibrating comb type fruit picking device
JP2018206015A (en) * 2017-06-02 2018-12-27 パナソニック株式会社 Fruit detector and fruit detecting method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Xu Chunguang, Xiao Dingguo, Hao Juan: "Principles of Structured-Light Measurement of Rotary Bodies", 31 January 2017, National Defense Industry Press *
Li Wenyang: "Research on Visual Navigation Path Generation Method for a Kiwifruit Picking Robot", Agricultural Science and Technology series *
Wang Maohua, Li Minzan: "Theory and Practice of Modern Precision Agriculture", 31 October 2012, China Agricultural University Press *
Heng Liqiang, Zhang Mengmei: "A Practical Course in Mechanical Design", 31 October 2017, Beihang University Press *


Similar Documents

Publication Publication Date Title
Tang et al. Recognition and localization methods for vision-based fruit picking robots: A review
Nguyen et al. Detection of red and bicoloured apples on tree with an RGB-D camera
Zhao et al. A review of key techniques of vision-based control for harvesting robot
Radcliffe et al. Machine vision for orchard navigation
CN111201896A (en) Picking robot based on visual navigation and control method
Åstrand et al. An agricultural mobile robot with vision-based perception for mechanical weed control
KR102017965B1 (en) Work vehicle
Dewi et al. Visual servoing design and control for agriculture robot; a review
Leu et al. Robotic green asparagus selective harvesting
Michaels et al. Vision-based high-speed manipulation for robotic ultra-precise weed control
Burks et al. Engineering and horticultural aspects of robotic fruit harvesting: Opportunities and constraints
Adhikari et al. 3D reconstruction of apple trees for mechanical pruning
CN111480457A (en) Automatic visual identification picking device for Chinese prickly ash and control method thereof
Miao et al. Efficient tomato harvesting robot based on image processing and deep learning
Yusuf et al. Blob analysis for fruit recognition and detection
Ji et al. Research on key technology of truss tomato harvesting robot in greenhouse
Almendral et al. Autonomous fruit harvester with machine vision
Mail et al. Agricultural harvesting robot concept design and system components: A review
Tarrío et al. A harvesting robot for small fruit in bunches based on 3-D stereoscopic vision
Liu et al. The Vision-Based Target Recognition, Localization, and Control for Harvesting Robots: A Review
CN105844264A (en) Oil peony fruit image identification method based on stress
Quan et al. Selecting candidate regions of clustered tomato fruits under complex greenhouse scenes using RGB-D data
Patnaik et al. Weed removal in cultivated field by autonomous robot using LabVIEW
Kitamura et al. Development of picking robot in greenhouse horticulture
Burks et al. Opportunity of robotics in precision horticulture.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200529