CA3111952A1 - Mushroom harvesting vision system and method of harvesting mushrooms using said system - Google Patents

Mushroom harvesting vision system and method of harvesting mushrooms using said system Download PDF

Info

Publication number
CA3111952A1
Authority
CA
Canada
Prior art keywords
image
mushroom
matrix
coordinates
child
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3111952A
Other languages
French (fr)
Inventor
Agnesh Marsonia
Arun Deep Singh Dhillon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Champag Inc
Original Assignee
Champag Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Champag Inc filed Critical Champag Inc
Priority to CA3111952A priority Critical patent/CA3111952A1/en
Publication of CA3111952A1 publication Critical patent/CA3111952A1/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A01 - AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01G - HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
    • A01G 18/00 - Cultivation of mushrooms
    • A01G 18/70 - Harvesting

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Mycology (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Mushroom harvesting vision system including: a camera; a processor for running an algorithm including: an input module for selecting an ellipse diameter range, a sensitivity value, and an accuracy value; an image input module for inputting at least one image, said at least one image depicting one or more mushrooms; a preprocessing module for preprocessing each image and converting each image into greyscale; and a detection module for detecting coordinates of at least one mushroom, the detection module comprising an image processor in which each image is processed to produce at least one image output matrix; wherein each row of each image output matrix comprises x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation; and a robot comprising a gripper arm, wherein the robot utilizes the output matrix to automatedly pick the at least one mushroom with the gripper arm.

Description

MUSHROOM HARVESTING VISION SYSTEM AND METHOD OF HARVESTING
MUSHROOMS USING SAID SYSTEM
FIELD OF THE INVENTION
[001] The present invention relates to a mushroom harvesting vision system, as well as a method of harvesting mushrooms using said system.
BACKGROUND OF THE INVENTION
[002] Vision systems that can detect mushrooms are known, such as that disclosed in Canadian Patent No. 3,073,861C and WO2020097727A1 (Inventor:
GOOD; Applicant: MYCIONICS INC.). Such vision systems can detect the properties of mushrooms (e.g., position, size, shapes, orientations, growth rates, volumes, mass, stem size, pivot point, maturity, and surrounding space), which can be used to determine strategies required for instructing a picking system for autonomous mushroom harvesting. Such vision systems comprise 3D scanners, each having a pair of camera apertures for capturing images and a laser slot for permitting a laser line to project from the vision system rail onto the mushroom below. The vision system rail can have its multiple 3D scanners aligned in one straight line to effectively form a combined line scanner within tightly constrained vertical spaces, while achieving sub-millimeter accuracy and high data throughput.
Such vision system rails can also generate color information that is overlaid on a 3D point cloud, allowing for real-time disease detection and identification of mushroom quality and type.
[003] The above documents also describe conventional automated harvesters comprising vision systems supported by a rail at one end of a frame, the vision system configured to scan a growing bed under the frame; and a picking system moveable within a working area defined by the frame.
[004] Other known vision systems include those disclosed in US Patent No.
9,730,394 and Canadian Patent application No. 2,943,302A1 (Inventor: VAN DE
VEGTE; Applicant: VINELAND RESEARCH AND INNOVATIONS CENTRE INC.), which disclose a method and system for controlling harvesting of mushrooms from a bed during a mushroom graze harvest operation accounting for both mushroom separation and stagger, thereby providing for automatic selection of which mushrooms in the bed are to be harvested in a given shift.
[005] Such documents disclose a system for harvesting mushrooms from a bed, the system comprising: (a) one or more mushroom harvesters configured to pick mushrooms from the bed; (b) one or more cameras for locating mushrooms in the bed and measuring cap diameters of the mushrooms; and (c) a control apparatus operatively linked to the one or more cameras and operatively associated with the one or more mushroom harvesters, wherein the control apparatus is configured to receive image data from the one or more cameras and from the image data to determine cap diameters of the mushrooms, locate centroid positions of mushrooms having a cap diameter greater than a pre-determined value, and for mushrooms for which the centroid position was located calculate centroid-to-centroid distances to each neighboring mushroom, compare the centroid-to-centroid distances for sets of two mushrooms to the sum of radii for the two mushrooms, count the number of interfering mushrooms to identify clumps of mushrooms to be thinned and determine which mushrooms to pick from the identified clumps of mushrooms, and wherein the control apparatus is configured to aid or operate the one or more mushroom harvesters to pick mushrooms having cap diameters equal to or greater than a pre-set value and pick the mushrooms determined to be picked from the identified clumps of mushrooms.
[006] US Patent No. 5,471,827 (Inventor: JANSSEN; Applicant: CCM BEHEER
B.V.) also discloses a device for the automatic selective harvesting of mushrooms grown on a growing bed that includes: at least one camera for observing the mushrooms on the growing bed; a carrier which is movable above the growing bed relative thereto, and is provided with apparatus for positioning one or more picking heads for picking mushrooms on the basis of information coming from each camera.
[007] US Patent No. 5,058,368 (Inventor: WHEELER) discloses an apparatus for harvesting delicate produce, particularly mushrooms. Said apparatus includes a picking head which is controlled to be positioned over an item of produce to be harvested by a camera which scans a tray of said items and a control unit operating on a camera output to determine the co-ordinates of those items found to be suitable for picking, for example from the size of those items.
SUMMARY OF THE INVENTION
[008] In accordance with the present invention, there is provided a mushroom harvesting vision system comprising: at least one camera; a processor for running a computer-operated algorithm, the computer-operated algorithm comprising: an input module for selecting an ellipse diameter range, a sensitivity value, and an accuracy value; an image input module for inputting at least one image into the computer-operated algorithm, said at least one image depicting one or more mushrooms and having been taken by the at least one camera; a preprocessing module for preprocessing each image and converting each image into greyscale; and a detection module for detecting coordinates of at least one mushroom, the detection module comprising an image processor in which each image is processed to produce at least one image output matrix; wherein each row of each image output matrix comprises x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation; and a robot comprising a gripper arm, wherein the robot utilizes the output matrix to automatedly pick the at least one mushroom with the gripper arm.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] Figure 1 is a schematic of the system according to an embodiment of the present invention.
[010] Figure 2 is a picture of a mushroom bed taken using a camera of an embodiment of a system of the present invention.
DETAILED DESCRIPTION OF THE ILLUSTRATIVE EMBODIMENTS
[011] Referring first to Figure 1, in a first aspect of the present invention, a mushroom harvesting vision system 10 is provided. The system 10 includes at least one camera 12 and a processor 14 for running a computer-operated algorithm 16. The computer-operated algorithm 16 includes an input module 18, wherein an ellipse diameter range, a sensitivity value, and an accuracy value are input parameters 19 that may be selected. The computer-operated algorithm 16 includes an image input module 20 for inputting at least one image into the computer-operated algorithm 16. The at least one image depicts one or more mushrooms and has been taken by the at least one camera 12. The computer-operated algorithm 16 includes a preprocessing module 22 for preprocessing each image and converting each image into greyscale; and a detection module 24 for detecting coordinates of at least one mushroom. The detection module 24 includes an image processor in which each image is processed to produce at least one image output matrix. Each row of each image output matrix comprises x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation. The system includes a robot 26 comprising a gripper arm 28. The robot 26 utilizes the output matrix to automatedly pick the at least one mushroom with the gripper arm 28.
[012] The mushroom harvesting vision system can be used for harvesting a variety of mushrooms, preferably Agaricus bisporus mushrooms.
[013] The at least one camera used by the system of the present invention can be any camera known for use in such mushroom harvesting vision systems. In preferred embodiments, each camera is an Intel D435i camera. In preferred embodiments, only a single camera is used. The camera is for taking pictures of the mushroom bed where harvesting will take place; an example of such a picture is shown at Figure 2. In embodiments, for example in the case of a rather large mushroom bed, multiple images are taken, each image representing an area of the mushroom bed to be harvested. In alternative preferred embodiments, a single image is taken, said image being of the entire mushroom bed to be harvested.
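By way of illustration only, the following sketch shows how a 1280*720 colour frame and a depth frame (used later for the z coordinate) might be captured from such a camera using the Intel RealSense Python bindings (pyrealsense2). The patent states that the preferred implementation is written in Matlab; Python and pyrealsense2 are assumptions made here purely for illustration.

    # Illustrative capture sketch; not the patent's Matlab implementation.
    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # 1280*720 matches the RGB image size mentioned for the image input module below.
    config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        color_bgr = np.asanyarray(frames.get_color_frame().get_data())   # 720 x 1280 x 3
        depth_raw = np.asanyarray(frames.get_depth_frame().get_data())   # 720 x 1280, raw depth units
    finally:
        pipeline.stop()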
[014] The processor used in the present invention can be any processor that is capable of running the computer-operated algorithm. In addition, the computer-operated algorithm can be written in any computer language. In preferred embodiments, the computer algorithm is written in Matlab.
[015] What follows is a description of the various modules of the computer-operated algorithm.
[016] Input module: With the input module, an ellipse diameter range, a sensitivity value, and an accuracy value are selected. In embodiments, these input parameters are automatically provided as an argument to the function (the algorithm can be run as a function) after being selected.
[017] The ellipse diameter range will determine the range of mushroom diameters (using the major axis length of each mushroom) that the mushroom harvesting vision system will include in the at least one output matrix for the robot to harvest. If a mushroom diameter is outside the ellipse diameter range, the mushroom will not be harvested. Naturally, as this value is a range, upper and lower limits can be selected, although it is also possible to select mushrooms of a specific size.
[018] The sensitivity value is another input parameter; its value depends on the type of room and its ambient lighting. The sensitivity can be adjusted inside the algorithm in order to compensate for lighting that is 'too bright' or 'too dull'. The sensitivity value can preferably range from 0.9 to 0.99, going from very dull lighting to bright lighting. However, the skilled person would understand that different ranges can be used.
[019] The accuracy value is another input parameter, referring to the accuracy threshold (also referred to as the edge threshold). The accuracy threshold can be controlled inside the algorithm in order to work to a specific algorithmic accuracy. This accuracy parameter is provided as an input argument so that detection is performed as per the accuracy requirement for harvesting mushrooms.
[020] The accuracy threshold is an important parameter for quality control, as some users may require mushrooms to be precisely within a certain size range. In that case, the accuracy parameter (function argument) needs to be set high.
[021] On the other hand, it is occasionally necessary (e.g., during cleaning) to harvest all size ranges of mushrooms. In such cases, the accuracy threshold may be set to a low value.
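By way of illustration, the three input parameters described above might be bundled and validated as arguments to a single detection function, as in the following sketch. The container name, field names, and default values are assumptions; only the sensitivity range (0.9 to 0.99) and the notion of lower and upper ellipse diameter limits come from the description.

    # Hypothetical parameter container; names and defaults are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class DetectionParams:
        min_diameter_px: float      # lower limit of the ellipse diameter range (major axis, pixels)
        max_diameter_px: float      # upper limit of the ellipse diameter range
        sensitivity: float = 0.95   # 0.9 (very dull lighting) to 0.99 (bright lighting)
        accuracy: float = 0.8       # accuracy/edge threshold; scale not specified, placeholder value

        def validate(self):
            assert 0 < self.min_diameter_px <= self.max_diameter_px
            assert 0.9 <= self.sensitivity <= 0.99
            assert 0 < self.accuracy <= 1

    params = DetectionParams(min_diameter_px=40, max_diameter_px=120)
    params.validate()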
[022] Image input module: Each image input into the algorithm will have been taken using the at least one camera. The image input module is configured so as to receive each image automatically after it has been taken. Each image input into the algorithm can be an RGB image of size 1280*720 pixels, although different sizes and color schemes can be used, depending on the camera used and the needs of the user. As with the ellipse diameter range, the sensitivity value, and the accuracy value, each image is provided as an argument to the function. The image input module and the input module can be the same module, or separate modules, depending on the needs of the user.
[023] Preprocessing module: With the preprocessing module, each image is preprocessed using Gaussian and Prewitt filters. Each image is then further converted into a greyscale image for further processing. It is to be understood that if the image in question is already in greyscale (e.g. the camera takes greyscale images), it is not necessary for the preprocessing module to convert the image into greyscale.
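A minimal sketch of such a preprocessing step is given below, assuming Python with OpenCV and NumPy rather than the preferred Matlab implementation; the kernel sizes and the ordering of the greyscale conversion relative to the filters are illustrative choices, not requirements of the system.

    import cv2
    import numpy as np

    def preprocess(image_bgr):
        # Convert to greyscale (skipped if the camera already delivers greyscale images).
        grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY) if image_bgr.ndim == 3 else image_bgr
        # Gaussian smoothing to suppress noise from the compost surface.
        blurred = cv2.GaussianBlur(grey, (5, 5), 1.0)
        # Prewitt edge response; OpenCV has no built-in Prewitt operator, so apply the kernels directly.
        kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
        gx = cv2.filter2D(blurred, cv2.CV_32F, kx)
        gy = cv2.filter2D(blurred, cv2.CV_32F, kx.T)
        edges = cv2.magnitude(gx, gy)
        return blurred, edges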
[024] Detection module: with the detection module, the coordinates of the mushrooms are automatedly detected based on each pre-processed image and the input parameters. In embodiments, the detection module is configured to process each image using an ellipse detection method, whereby the image is processed using a pixel density and a density characteristic of the image at all regions of the image based on each region's pixel coordinates, such that certain areas of the image are eliminated, and the detection of the coordinates of the at least one mushroom is a result of mathematical computation on all the regions.
[025] In embodiments, each image is treated as a parent matrix and each of a plurality of subsections on the image are treated as a child matrix, wherein the child matrices are used to process characteristics at each specific region and processing results for each child matrix are plotted back at the parent matrix.
[026] The coordinates and the processing results of each child matrix are interlinked using a graphing system from 0,0 up to 720,1280 (although these values may vary depending on the image used) with each adjacent child matrix, such that the coordinates from the child matrices do not overlap with adjacent child matrices.
[027] Furthermore, outside of the child matrix, the whole image is also processed as a parent matrix.
[028] As a result of such processing, the detection module produces the at least one image output matrix comprising x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation (defined in more detail below).
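One illustrative reading of the parent/child matrix processing is a tiling scheme in which ellipses are fitted within each child sub-matrix and the fitted centres are plotted back into parent-image coordinates by adding the tile offset. The sketch below uses OpenCV contour-based ellipse fitting as a stand-in for the patent's own (unspecified) ellipse detection method; the tile size and threshold are placeholders.

    import cv2
    import numpy as np

    def detect_ellipses(edges, min_d, max_d, tile=240):
        # Parent matrix = whole edge image; child matrices = non-overlapping tiles of it.
        h, w = edges.shape
        binary = (edges > edges.mean() + edges.std()).astype(np.uint8) * 255   # placeholder threshold
        found = []
        for y0 in range(0, h, tile):
            for x0 in range(0, w, tile):
                child = binary[y0:y0 + tile, x0:x0 + tile]
                contours, _ = cv2.findContours(child, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
                for c in contours:
                    if len(c) < 5:                      # cv2.fitEllipse needs at least 5 points
                        continue
                    (cx, cy), axes, angle = cv2.fitEllipse(c)
                    major = max(axes)                   # major axis length in pixels
                    if min_d <= major <= max_d:         # keep only ellipses inside the diameter range
                        # Plot the child-matrix result back onto parent-matrix coordinates.
                        found.append((cx + x0, cy + y0, major, angle))
        # Caps straddling a tile border are missed by this simplistic tiling and would need the
        # additional whole-image (parent matrix) pass described in the text.
        return found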
[029] Image output matrix: Each output from the algorithm is a matrix of size n*5 (although larger sizes than "5" may be used if the user wishes to have additional output parameters), where n represents the number of detected mushrooms that the robot will harvest. The first three columns represent the x,y,z coordinates of a specific mushroom; the fourth column represents the gripper orientation for said mushroom, and the fifth column represents the mushroom orientation of said mushroom. The skilled person would understand these five values could occupy a different order in the matrix depending on the preferences of the user, so long as said matrix can be properly used with the robot.
[030] As mentioned, each row in the image output matrix corresponds to a different detected mushroom that the robot will pick. Once the robot has harvested a mushroom from one row, it can then proceed to the next row. In embodiments, multiple image output matrices can be produced, each matrix comprising information on one or more mushrooms that the system will harvest. For example, the detection module can output an image output matrix for each image that is input into the image input module.
[031] In preferred embodiments, each output matrix is converted into CSV
(comma separated value) format in order to be used in conjunction with the robot, although any format can be used, as long as it can be used in conjunction with the robot.
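A sketch of assembling the n*5 image output matrix and exporting it as CSV follows. Here the z coordinate is assumed to be the depth value at the detected cap centre, and the gripper and mushroom orientations are taken as given inputs (their computation is sketched separately below); all variable names and placeholder values are illustrative.

    import numpy as np

    def build_output_matrix(detections, depth_mm, gripper_idx, cap_angles):
        # One row per mushroom to be picked: [x, y, z, gripper orientation, mushroom orientation].
        rows = []
        for (x, y, major, _), g, m in zip(detections, gripper_idx, cap_angles):
            z = float(depth_mm[int(round(y)), int(round(x))])   # assumed source of the z coordinate
            rows.append([x, y, z, g, m])
        return np.asarray(rows, dtype=float)

    # Purely illustrative placeholder values:
    detections = [(512.0, 300.0, 62.0, 15.0)]                   # (x, y, major axis, ellipse angle)
    depth_mm = np.full((720, 1280), 850.0)                      # dummy depth map
    matrix = build_output_matrix(detections, depth_mm, gripper_idx=[3], cap_angles=[92.0])
    np.savetxt("harvest_targets.csv", matrix, delimiter=",",
               header="x,y,z,gripper_orientation,mushroom_orientation", comments="")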
[032] X, y, z coordinates of the mushrooms: The output coordinates of the mushrooms (comprising x, y, and z coordinates) are based on the major axis length of the mushrooms.
[033] Gripper Orientation: The gripper orientation refers to the angle the gripper needs to be rotated in order to find the space between clusters, thus facilitating the picking process, such that the picking action does not harm either the mushroom to be picked or neighboring mushrooms. As mentioned, the gripper orientation is an output of the algorithmic processing. In preferred embodiments, the value of the gripper orientation ranges from 1 to 4, although other ranges may be used depending on the preferences of the user. What follows is a more detailed explanation of the gripper orientation using various scenarios, taking into consideration the following matrix image:
[Image: a 3*3 matrix of cells numbered 1 to 9]
[034] Scenario 1: In the above image, the figure represents a 3*3 matrix, with values ranging from 1 to 9. If '5' represents a mushroom that the system intends to pick, and the system knows that there are mushrooms at '1', '2', '4', '6' and '9', then the only places where the robot may be able to fit the gripper fingers would be at '3' and '7'. Thus, by doing the respective computational analysis, the algorithm will give the result as index '3'.
[035] Scenario 2: Once again, '5' represents a mushroom that the system intends to pick, and the system knows that there are mushrooms at '1', '2', '3', '7', '8' and '9'.
Now the only place where the robot may be able to fit the gripper fingers would be at '4' and '6'. Thus, by doing the respective computational analysis, the algorithm will give the result as index '4'.
[036] Scenario 3: Once again, '5' represents a mushroom that the system intends to pick, and the system knows that there are mushrooms at '1', '3', '4', '6', '7' and '9'.
Now the only place where the robot may be able to fit the gripper fingers would be at '2' and '8'. Thus, by doing the respective computational analysis, the algorithm will give the result as index '2'.
[037] The algorithm would perform a similar computation and give an output of '1', if there is space at '1' and '9'.
[038] In cases where there is no space, the algorithm will skip that mushroom and will instead analyse that mushroom's respective neighbours, such that the algorithm finds a way inwards and outwards to harvest, determining the best possible path and ranking for harvesting the mushrooms.
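The scenarios above can be summarised as a lookup over a 3*3 occupancy neighbourhood centred on the target mushroom: the gripper orientation index (1 to 4) identifies the free pair of opposite cells into which the gripper fingers can descend. A minimal sketch of that lookup follows; the convention of returning the smaller cell number of the free pair, and the order in which pairs are checked when more than one is free, are inferred from Scenarios 1 to 3 and are assumptions rather than claimed rules.

    # Cells of the 3*3 neighbourhood are numbered 1 to 9, with the target mushroom at cell 5.
    # The checking order below is an assumption; only the pairings are implied by the scenarios.
    OPPOSITE_PAIRS = [(1, 9), (2, 8), (3, 7), (4, 6)]

    def gripper_orientation(occupied):
        # occupied: set of cell numbers (1..9) that hold neighbouring mushrooms.
        for a, b in OPPOSITE_PAIRS:
            if a not in occupied and b not in occupied:
                return a            # gripper orientation index in the range 1 to 4
        return None                 # no free approach: skip this mushroom and revisit it later

    assert gripper_orientation({1, 2, 4, 6, 9}) == 3        # Scenario 1
    assert gripper_orientation({1, 2, 3, 7, 8, 9}) == 4     # Scenario 2
    assert gripper_orientation({1, 3, 4, 6, 7, 9}) == 2     # Scenario 3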
[039] The reason for solving such a problem becomes apparent from the image shown in Figure 2. In Figure 2, it can be seen that the mushrooms grow in clusters, and if a user were to simply keep a standard/fixed orientation of the gripper, said gripper would keep hitting and bruising the neighbouring mushrooms of the mushroom that is required to be picked.
[040] Mushroom Orientation: The mushroom orientation refers to the angle of each mushroom with respect to the ground it grows on (typically compost). This is useful in order to analyse the approach of the end effector (specifically the gripper) for harvesting mushrooms. In preferred embodiments, the angle output of the mushroom orientation varies from about 45 degrees to about 135 degrees, although different ranges of angles may be used, depending on the preferences of the user.
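The description does not specify how the mushroom orientation is computed. One plausible sketch, assumed here purely for illustration, estimates the cap tilt from the depth difference between the two ends of the fitted major axis, giving values near 90 degrees for an upright mushroom and values toward 45 or 135 degrees as the cap leans.

    import math

    def mushroom_orientation(depth_mm, cx, cy, major_px, angle_deg):
        # Sample depth at the two ends of the fitted major axis and convert the height
        # difference into a tilt angle (pixel-to-millimetre scaling ignored in this sketch).
        dx = 0.5 * major_px * math.cos(math.radians(angle_deg))
        dy = 0.5 * major_px * math.sin(math.radians(angle_deg))
        z1 = float(depth_mm[int(cy - dy), int(cx - dx)])
        z2 = float(depth_mm[int(cy + dy), int(cx + dx)])
        tilt = math.degrees(math.atan2(z2 - z1, major_px))
        return max(45.0, min(135.0, 90.0 + tilt))    # clamp to the 45 to 135 degree range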
[041] The robot used by the system of the present invention is configured to automatedly receive the image output matrix and to harvest mushrooms having regard to said image output matrix. By receiving the image output matrix, the robot can automatedly harvest mushrooms in such a manner as to harvest the mushrooms as desired by the user (e.g. having a certain ellipse diameter) in an efficient manner that minimizes (or preferably entirely avoids) hitting, damaging, or bruising any mushrooms (both the harvested mushrooms and any neighbouring mushrooms).
[042] As mentioned, the robot comprises a gripper that is capable of harvesting the mushrooms. Furthermore, the robot is capable of automatedly rotating and moving the gripper (which is preferably one and the same as the end effector) in order to harvest the mushrooms. The gripper can be any gripper used for such harvesting systems as known in the prior art.
[043] In embodiments, the robot may comprise more than one gripper. Moreover, in embodiments, the system of the present invention may comprise more than one robot, each robot having one or more grippers.
[044] The system of the present invention can be used in a mushroom harvester.
METHOD OF HARVESTING MUSHROOMS USING MUSHROOM HARVESTING
VISION SYSTEM
[045] In a second aspect of the present invention, a method of harvesting mushrooms using the mushroom harvesting vision system is provided, the method comprising:
• inputting an ellipse diameter range, a sensitivity value, and an accuracy value into an input module of a computer-operated algorithm, the computer-operated algorithm being run with a processor;
• taking at least one image using at least one camera of an area comprising one or more mushrooms;
• inputting each image into an image input module of the computer-operated algorithm;
• preprocessing each image using a preprocessing module of the computer-operated algorithm, thereby converting each image into greyscale;
• detecting the coordinates of at least one mushroom using a detection module of the computer-operated algorithm, said detection comprising processing each image using an image processor to produce at least one image output matrix, wherein each image output matrix comprises x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation; and
• harvesting said at least one mushroom using a robot comprising a gripper arm, wherein the robot utilizes the at least one output matrix to pick the at least one mushroom with the gripper arm.
[046] The method of the present invention is performed using the above-defined system. Accordingly, the ellipse diameter range, the sensitivity value, the accuracy value, the input module, the computer-operated algorithm, the at least one image, the at least one camera, the image input module, the preprocessing module, the detection module, the image processor, the at least one image output matrix, the x, y, z coordinates of the mushrooms, the gripper orientation, the mushroom orientation, the robot, and the gripper arm are as defined in the previous section.
[047] For clarity, in embodiments, the detecting step comprises processing the image using an ellipse detection method, whereby each image is processed using a pixel density and a density characteristic of the image at all regions of the image based on each region's pixel coordinates, such that certain areas of the image are eliminated, and the detection of the coordinates of the at least one mushroom is a result of mathematical computation on all the regions.
[048] Each image is treated as a parent matrix and each of a plurality of subsections on the image are treated as a child matrix, wherein the child matrices are used to process characteristics at each specific region and processing results for each child matrix are plotted back at the parent matrix.
[049] The coordinates and the processing results of each child matrix are interlinked using a graphing system from 0,0 up to 720,1280 (although these values may vary depending on the image used) with each adjacent child matrix, such that the coordinates from the child matrices do not overlap with adjacent child matrices.
Outside of the child matrix, the whole image is also processed as a parent matrix, thereby producing the at least one image output matrix.
[050] As mentioned in the system section, the method of harvesting mushrooms of the present invention is almost entirely automated: the only step that the user would have to perform themselves is inputting the ellipse diameter range, the sensitivity value, and the accuracy value (and even this step can be performed automatedly), while the remaining steps are performed automatedly once the user activates the system to harvest mushrooms.
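Tying the steps together, a hedged end-to-end sketch of one automated pass is given below. It reuses the illustrative helpers sketched in the system section above (none of which are the patent's Matlab implementation), and the occupied_neighbours helper is hypothetical and left undefined.

    import numpy as np

    def harvest_cycle(color_bgr, depth_mm, params):
        # One automated pass: preprocess, detect, build the output matrix, export it for the robot.
        params.validate()
        blurred, edges = preprocess(color_bgr)
        detections = detect_ellipses(edges, params.min_diameter_px, params.max_diameter_px)
        keep, gripper_idx, cap_angles = [], [], []
        for (x, y, major, angle) in detections:
            occupied = occupied_neighbours(x, y, detections)    # hypothetical helper, not defined here
            g = gripper_orientation(occupied)
            if g is None:
                continue                                        # no clear approach: skip for this pass
            keep.append((x, y, major, angle))
            gripper_idx.append(g)
            cap_angles.append(mushroom_orientation(depth_mm, x, y, major, angle))
        matrix = build_output_matrix(keep, depth_mm, gripper_idx, cap_angles)
        np.savetxt("harvest_targets.csv", matrix, delimiter=",",
                   header="x,y,z,gripper_orientation,mushroom_orientation", comments="")
        return matrix        # the robot consumes this matrix (or its CSV form) row by row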
[051] The scope of the claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
DEFINITIONS
[052] The use of the terms "a" and "an" and "the" and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context.
[053] The terms "comprising", "having", "including", and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to") unless otherwise noted.
[054] Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All subsets of values within the ranges are also incorporated into the specification as if they were individually recited herein.
[055] All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
[056] The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
[057] No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
[058] Herein, the term "about" has its ordinary meaning. In embodiments, it may mean plus or minus 10% or plus or minus 5% of the numerical value qualified.
[059] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
[060] The scope of the claims should not be limited by the preferred embodiments set forth in the examples but should be given the broadest interpretation consistent with the description as a whole.

Claims (5)

CLAIMS:
1. A mushroom harvesting vision system (10) comprising:
at least one camera (12);
a processor (14) for running a computer-operated algorithm (16), the computer-operated algorithm comprising:
an input module (18) for selecting an ellipse diameter range, a sensitivity value, and an accuracy value;
an image input module (20) for inputting at least one image into the computer-operated algorithm, said at least one image depicting one or more mushrooms and having been taken by the at least one camera (12);
a preprocessing module (22) for preprocessing each image and converting each image into greyscale; and a detection module (24) for detecting coordinates of at least one mushroom, the detection module comprising an image processor in which each image is processed to produce at least one image output matrix;
wherein each row of each image output matrix comprises x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation; and a robot (26) comprising a gripper arm (28), wherein the robot utilizes the output matrix to automatedly pick the at least one mushroom with the gripper arm.
2. The system of claim 1, wherein the detection module is configured to process each image using an ellipse detection method, whereby each image is processed using a pixel density and a density characteristic of the image at all regions of the image based on each region's pixel coordinates, such that certain areas of the image are eliminated, and the detection of coordinates of the at least one mushroom is a result of mathematical computation on all the regions, and wherein each image is treated as a parent matrix and each of a plurality of subsections on the image are treated as a child matrix, the child matrices being used to process characteristics at each specific region and processing results for each child matrix being plotted back at the parent matrix, the coordinates and the processing results of each child matrix being interlinked using a graphing system with each adjacent child matrix, such that the coordinates from the child matrices do not overlap with adjacent child matrices, and outside of the child matrices, the image as a whole also being processed as a parent matrix, thereby producing the at least one image output matrix.
3. Method of harvesting mushrooms using a mushroom harvesting vision system, the method comprising: inputting an ellipse diameter range, a sensitivity value, and an accuracy value into an input module of a computer-operated algorithm;
taking at least one image using at least one camera of an area comprising one or more mushrooms;
inputting each image into an image input module of the computer-operated algorithm;
preprocessing each image using a preprocessing module of the computer-operated algorithm, thereby converting each image into greyscale;
detecting the coordinates of at least one mushroom using a detection module of the computer-operated algorithm, said detection comprising processing each image using an image processor to produce at least one image output matrix;
wherein each row of each image output matrix comprises x, y, z coordinates of one of said at least one mushroom, a corresponding gripper orientation, and a corresponding mushroom orientation; and harvesting said at least one mushroom using a robot comprising a gripper arm, wherein the robot utilizes the at least one output matrix to pick the at least one mushroom with the gripper arm.
4. The method according to claim 3, wherein the detecting step comprises processing each image using an ellipse detection method, whereby each image is processed using a pixel density and a density characteristic of the image at all regions of the image based on each region's pixel coordinates, such that certain areas of the image are eliminated, and the detection of the coordinates of the at least one mushroom is a result of mathematical computation on all the regions, and wherein each image is treated as a parent matrix and each of a plurality of subsections on the image are treated as a child matrix, the child matrices being used to process characteristics at each specific region and processing results for each child matrix being plotted back at the parent matrix, the coordinates and the processing results of each child matrix being interlinked using a graphing system with each adjacent child matrix, such that the coordinates from the child matrices do not overlap with adjacent child matrices, and, outside of the child matrices, the image as a whole also being processed as a parent matrix, thereby producing the image output matrix.
5. A mushroom harvester comprising the system as defined in claim 1 or 2.
CA3111952A 2021-03-12 2021-03-12 Mushroom harvesting vision system and method of harvesting mushrooms using said system Pending CA3111952A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3111952A CA3111952A1 (en) 2021-03-12 2021-03-12 Mushroom harvesting vision system and method of harvesting mushrooms using said system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA3111952A CA3111952A1 (en) 2021-03-12 2021-03-12 Mushroom harvesting vision system and method of harvesting mushrooms using said system

Publications (1)

Publication Number Publication Date
CA3111952A1 true CA3111952A1 (en) 2022-09-12

Family

ID=83225931

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3111952A Pending CA3111952A1 (en) 2021-03-12 2021-03-12 Mushroom harvesting vision system and method of harvesting mushrooms using said system

Country Status (1)

Country Link
CA (1) CA3111952A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117426255A (en) * 2023-12-07 2024-01-23 南京农业大学 Automatic agaricus bisporus picking system and method based on vision and force sense feedback
CN117426255B (en) * 2023-12-07 2024-04-12 南京农业大学 Automatic agaricus bisporus picking system and method based on vision and force sense feedback

Similar Documents

Publication Publication Date Title
CN108566822B (en) Stalk position estimation device and stalk position estimation method
Kurtser et al. In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D camera
US20230049158A1 (en) Crop scouting information systems and resource management
CN110223349A (en) A kind of picking independent positioning method
US8363905B2 (en) Automated image analysis of an organic polarized object
CN111339921A (en) Insect disease detection unmanned aerial vehicle based on lightweight convolutional neural network and detection method
CN112990103A (en) String mining secondary positioning method based on machine vision
CN112541383B (en) Method and device for identifying weed area
JP7049814B2 (en) Harvest robot system
Velumani Wheat ear detection in plots by segmenting mobile laser scanner data
CN108133471A (en) Agriculture Mobile Robot guidance path extracting method and device based on artificial bee colony algorithm under the conditions of a kind of natural lighting
JP4961555B2 (en) Method for locating target part of plant, target part locating device by the method, and working robot using the device
CA3111952A1 (en) Mushroom harvesting vision system and method of harvesting mushrooms using said system
CN113065562A (en) Crop ridge row extraction and leading route selection method based on semantic segmentation network
WO2021176081A1 (en) Quantifying biotic damage on plants, by separating plant-images and subsequently operating a convolutional neural network
Jayasekara et al. Automated crop harvesting, growth monitoring and disease detection system for vertical farming greenhouse
CN111369497B (en) Walking type tree fruit continuous counting method and device
Feng et al. Fruit Location And Stem Detection Method For Strawbery Harvesting Robot
CN113145473A (en) Intelligent fruit sorting system and method
Bachche et al. Distinction of green sweet peppers by using various color space models and computation of 3 dimensional location coordinates of recognized green sweet peppers based on parallel stereovision system
CN114788455A (en) Target detection-based tomato cluster single-grain picking method and system
CN107656287B (en) A kind of Boundary Extraction device and method of the crudefiber crop row based on laser radar
CN110688886A (en) Grafting clip posture identification method based on machine vision
Blasco et al. Machine vision for precise control of weeds
Wang ABC: Adaptive, Biomimetic, Configurable Robots for Smart Farms-From Cereal Phenotyping to Soft Fruit Harvesting