WO2024043775A1 - Autonomous method and system for detecting, grabbing, picking and releasing of objects - Google Patents

Autonomous method and system for detecting, grabbing, picking and releasing of objects

Info

Publication number
WO2024043775A1
Authority
WO
WIPO (PCT)
Prior art keywords
autonomous
predetermined object
grabbing
autonomous system
ffb
Prior art date
Application number
PCT/MY2023/050009
Other languages
French (fr)
Inventor
Mohd Zulfahmi MOHD YUSOFF
Amirul Al Hafiz ABDUL HAMID
Muhamad Khuzaifah ISMAIL
Mohd Shiraz ARIS
Original Assignee
Sime Darby Plantation Intellectual Property Sdn Bhd
Priority date
Filing date
Publication date
Application filed by Sime Darby Plantation Intellectual Property Sdn Bhd filed Critical Sime Darby Plantation Intellectual Property Sdn Bhd
Publication of WO2024043775A1 publication Critical patent/WO2024043775A1/en

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D87/00 Loaders for hay or like field crops
    • A01D87/003 Loaders for hay or like field crops with gripping or clamping devices
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D90/00 Vehicles for carrying harvested crops with means for self-loading or unloading
    • A01D90/16 Vehicles for carrying harvested crops with means for self-loading or unloading self-propelled

Definitions

  • the present invention relates generally to an autonomous method and system for detecting, grabbing, picking and releasing of objects. More specifically, the present invention relates to an autonomous method and system for detecting, grabbing, picking and releasing of oil palm fresh fruit bunches.
  • Harvesting is an important process in the oil palm plantation to obtain fresh fruit bunches (FFB) with excellent oil content and quality; by venturing into mechanisation programmes, the aim is essentially to reduce dependence on foreign labour and to attract more locals to return to the field.
  • Harvesting operations in the oil palm industry require the largest number of workers, covering cutting of oil palm fronds and FFB, evacuating FFB to roadside platforms, loose fruit collection and cleaning (frond stacking / bunch stalk cutting).
  • the oil palm yield depends on various factors such as but not limited to age, seed quality, soil conditions, climate, plantation management and timely harvesting and processing of the FFB.
  • the ripeness of FFB harvested is critical in maximising the quality and quantity of palm oil extracted.
  • Harvested fruits must be processed within 24 hours to minimise the build-up of fatty acids which reduces the quality of crude palm oil (CPO) extracted.
  • CGS-CIMB Research pointed out that SDP’s workforce in Malaysia hovers between 75% and 80% of its total requirement. Meanwhile, FGV Holdings Bhd’s workforce stands at only 75% of requirement, which is a significant decline from 90% at the end of the third quarter last year, it added. According to a pre-MCO survey by the Malaysian Palm Oil Board, there was a shortage of 31,021 harvesters among its respondents, which represents 76% of the industry players. CGS-CIMB Research said it was estimated that the shortage of workers translated into a production loss of 3.4 million tonnes and 0.86 million tonnes of CPO and palm kernel, respectively. [Source: Labour shortage getting worse in palm plantations, The Star, 2 June 2021 ]
  • An oil palm tree begins to produce fruit 30 months after being planted in the field; however, yield is relatively low at this stage. As the palm continues to mature, its yield increases, reaching peak production in years 7 to 18. Yield starts to decrease gradually after 18 years.
  • the ripeness of FFB harvested is critical in maximising the quality and quantity of palm oil extracted. Harvested fruits must be processed within 24 hours to minimise the build-up of fatty acids which reduces the quality of CPO extracted. [Source: wilmar-international.com]
  • The wheelbarrow is a traditional method of FFB collection; it is essentially an easy machine to work with and does not pollute the environment.
  • However, a wheelbarrow requires a considerable amount of energy from the worker to operate at full capacity: the worker needs to lift, push and balance the device, lunging forward to set the load in motion and quickly correcting his posture to keep the device under control when the load is heavy; going uphill with a heavy load may simply be impossible, and accidents may occur if momentum is lost descending a slope or when hitting an obstacle.
  • A wheelbarrow can only carry a few FFB at a time.
  • A battery-powered wheelbarrow was eventually designed and fabricated to assist the harvesters; it is more productive than a conventional wheelbarrow.
  • A buffalo cart was used to replace the wheelbarrow: the buffalo takes over pulling the cart filled with FFB, so the worker uses less energy and is able to focus on the task at hand.
  • However, the buffalo risks contracting disease, the risk of theft is high as demand for buffalo beef is strong, and the buffalo is also regarded as an asset.
  • The buffalo pulling the cart was then replaced with an engine-powered mechanised version, known throughout the industry as the mechanical buffalo (MB Badang).
  • The mechanical buffalo functions like a dump truck; it is a small 7 to 10 horsepower tractor modified in such a way that it has a cart that can load the FFB.
  • Mechanical buffalo in general improves the efficiency and productivity of the workers and lowers harvesting cost in the oil palm plantation. Loading of the FFB can be done either manually or using the mechanical buffalo's hydraulic grab. It also comes with a multipurpose wheel-type transporter (Badang crawler) to transport FFB in difficult areas such as peat, narrow terraces, undulating terrain and soggy ground. All in all, the mechanical buffalo generally reduces worker fatigue and has shown potential to increase labour productivity by reducing workers' walking, carrying and loading time.
  • Cantas™ is a motorised cutter developed by the Malaysian Palm Oil Board for the efficient harvesting of oil palm FFB at less than 4.5 m in height.
  • Cantas™ is a hand-held cutter powered by a 1.3 horsepower petrol engine. Productivity of this machine is 560 to 750 FFB per day, compared to manual harvesting using a sickle or chisel which has a capacity of only 250 to 350 FFB per day.
  • Cantas™ is the first motorised cutter that is well accepted by the industry. [Source: Journal of Oil Palm Research Vol. 20, Dec 2008, p. 548-558]
  • An article on mechanisation in Kulim (M) Berhad states that Kulim had already embarked on the use of motorised cutters for harvesting operations, the MB Badang mechanical buffalo, mini tractors with scissor lift trailers and live buffalo for FFB infield collection, the 'Kulim Crane Free System', and crane netting for the hukka bin system for mainline loading and transport.
  • The harvesting mechanisation method adopted by Kulim (M) Berhad may not be the most efficient mechanisation system available, and only some systems are currently commercially workable in their environment. [Source: https://www.slideshare.net/MrPaucit/kulim-m-berhad-experience-palm-mech-2012-paper-by-mfam ]
  • Edaran Badang Sdn. Bhd., a member of the Kulim (M) Berhad Group of Companies, provides oil palm harvesting equipment such as but not limited to the MB Badang L100 Standard (10 horsepower, loading capacity of 500 kg and 120 to 150 hectares coverage per machine), MB Badang L100 Dumper (10 horsepower, loading capacity of 700 kg and 120 to 150 hectares coverage per machine), MB Badang L100 HyPivot (10 horsepower, loading capacity of 350 kg and 120 to 150 hectares per machine), Badang Crawler L70 with load capacity of 300 kg per trip for effective in-field FFB evacuation in peat areas, Beluga T980 (track utility tractor) and Rhyno W700 (wheel utility tractor).
  • In the conventional mini tractor-trailer system for FFB evacuation, separate groups of workers are employed, namely the cutter and carrier groups, whereby the cutter cuts the FFB and places them along the harvesting path.
  • The carrier group comprises three workers: the driver and two others who collect the cut FFB along the harvesting path and unload them at the roadside.
  • The mini tractor-trailer serves about 200 to 250 ha per day, and four to five cutters are required for cutting FFB and fronds.
  • A separate group is required for loose fruit collection. In general, work productivity increases by having a separate group for each task involved.
  • A mechanical loader was then introduced which eliminated the need for the two loaders required in the mini tractor-trailer system. This system however mainly caters for flat and undulating areas, and is not suitable for use on hilly terrain and terraced areas of the oil palm estates.
  • WO2021126531A1 describes a method for performing automated machine vision-based defect detection, which involves training a neural network to detect defects, receiving multiple historical datasets including multiple training images corresponding to the known defects, and obtaining a test image of an object.
  • Each training image is converted into a corresponding matrix representation, and each matrix representation is input into the neural network to adjust weighted parameters based on the known defects.
  • A test image of an object is obtained, and portions of the test image are extracted as multiple input patches for input into the neural network.
  • Each input patch is inputted into the neural network as a respective matrix representation to automatically generate a probability score for each input patch using the weighted parameters.
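  • Purely as an illustration of the patch-scoring scheme summarised above, the sketch below tiles a test image into patches and scores each with a trained network; the framework (PyTorch), patch size and stand-in model are assumptions, not details from WO2021126531A1.

```python
# Illustrative sketch of patch-based scoring: tile a test image into
# patches and score each with the trained weighted parameters.
import torch
import torch.nn as nn

def score_patches(image, model, patch=64):
    """Tile a (C, H, W) image into non-overlapping patches and return a
    probability score per patch."""
    c, h, w = image.shape
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[:, y:y + patch, x:x + patch].unsqueeze(0)
            with torch.no_grad():
                logit = model(p)                      # apply trained weights
            scores.append(((x, y), torch.sigmoid(logit).item()))
    return scores

# Stand-in for a trained defect network (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
print(score_patches(torch.rand(3, 128, 128), model))
```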
  • US20200242413A1 describes a method which involves capturing image data of the work field from a camera system, and binary thresholding the image data based on a presumed feature of the one or more predetermined features. In response to the binary thresholding, one or more groups of pixels are identified as candidates for the presumed feature. A filter, comprising an aspect corresponding to the presumed feature, is applied to the identified groups of pixels to create filtered data. Based on the filter application, it is determined whether the object has the presumed feature. This prior art does not describe the system and/or method of the present invention.
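  • The threshold-then-filter pipeline summarised above can be illustrated with a minimal OpenCV sketch; the threshold value and the area filter are illustrative assumptions, not values from US20200242413A1.

```python
# Minimal sketch: binary-threshold for a presumed feature, group candidate
# pixels, then filter the groups by an aspect of that feature (area).
import numpy as np
import cv2

def find_candidates(gray, thresh=128, min_area=500):
    # Binary-threshold the image for a presumed (bright) feature.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Group candidate pixels into connected components.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Keep only groups whose area matches the presumed feature.
    return [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

gray = np.zeros((100, 100), np.uint8)
gray[20:60, 20:60] = 200              # one bright 40 x 40 candidate region
print(find_candidates(gray))          # -> [1]
```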
  • WO2022099600A1 describes a computer-implemented method for performing object detection on images, which involves rebalancing the output sample distribution among classes at representation neural networks and classifier neural networks by using a class-balanced loss.
  • The method involves obtaining (402) image head class input data and image tail class input data, differentiated from the head class input data, respectively from two images each of an object to be classified.
  • the head and tail class input data are respectively inputted (404) into two separate parallel representation neural networks being trained to respectively generate head and tail features.
  • the head and tail features are inputted (408) into a classifier neural network to generate class-related data.
  • A class-balanced loss of one of the classes of the class-related data, factoring in an effective number of samples of the individual classes, is generated (410).
  • An output sample distribution among the classes at the representation neural networks, classifier neural networks, or both is rebalanced (412) by using the class-balanced loss.
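  • The "effective number of samples" weighting referred to above is commonly computed as E_n = (1 − β^n)/(1 − β), with per-class weights proportional to its inverse (Cui et al., 2019); the sketch below assumes that formulation, which the abstract of WO2022099600A1 does not spell out.

```python
# Class-balanced weights from the effective number of samples,
# E_n = (1 - beta**n) / (1 - beta); the exact formulation is assumed.
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.9999):
    """Per-class loss weights inversely proportional to the effective
    number of samples, normalised to sum to the number of classes."""
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()

# A head class (10,000 samples) gets far less weight than a tail class (10).
print(class_balanced_weights([10000, 500, 10]))
```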
  • a computing device can receive an input image.
  • the computing device can process the image, and generate a convolutional feature map.
  • the convolutional feature map can be processed through a Region Proposal Network (RPN) to generate proposals for candidate objects in the image.
  • the computing device can process the convolutional feature map with the proposals through a Fast Region-Based Convolutional Neural Network (FRCN) proposal classifier to determine a class of each object in the image and a confidence score associated therewith.
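  • A hedged sketch of such a two-stage pipeline (convolutional feature map, RPN proposals, region-based classification with confidence scores) using torchvision's reference Faster R-CNN follows; the prior art does not name a library, so this is illustrative only.

```python
# Two-stage detection sketch: backbone feature map -> RPN proposals ->
# per-proposal class labels and confidence scores.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)          # stand-in for an input image
with torch.no_grad():
    # Internally: the backbone builds the convolutional feature map, the
    # RPN generates candidate-object proposals, and the RoI heads classify
    # each proposal and attach a confidence score.
    out = model([image])[0]
for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if score > 0.5:
        print(int(label), float(score), box.tolist())
```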
  • US20170000027A1 describes a method for the robotic picking of stemmed crops by the stem, comprising the use of a robotic mechanism comprising an arm or arms mounted on a moveable base, having a gripper/cutter mechanism that contains an attached or embedded blade, wherein the mechanism is configured in such a way that it grabs and cuts the stem without grabbing the crop itself, whereby the crop is picked without being damaged.
  • US9475189B2 describes a harvesting system comprising: a frame configured to move between multiple positions along a row of trees to be harvested; multiple robots, which comprise movable robot arms and which are mounted on the frame facing a respective area to be harvested at a first position of the frame, wherein the robot arms are fitted with grippers configured to approach and grip the crop items, each robot is configured to harvest crop items by reaching and gripping them using the grippers from a respective angle of approach of the robot arm that is fixed for that robot, and the robot arms of at least a subset of the robots are parallel to one another such that the angle of approach is common to the robots in the subset; and one or more sensors, which are all fixed relative to the frame and do not move with the robot arms, and which are configured to acquire images of the area only from outside the trees.
  • US20060150602A1 describes a method of remotely guiding a harvesting device, comprising the steps of: taking images using a plurality of cameras at the harvester; transmitting and receiving information between the harvester and the remote operator over a network; receiving images at a remote user interface from imaging devices located at the harvester; an operator and computer at the remote user interface selecting objects to be harvested and transmitting the pixel location and timing information derived from the selected object to the harvester; the harvester using the information transmitted from the remote operator and, basing the targeted positioning information on the remote user information, positioning the collection device near the object to be harvested; the harvester controller detecting and repositioning the collector as needed for collection of the objects to be harvested; and the harvester controller commanding the collector to collect the object to be harvested.
  • This prior art does not describe the system and/or method of the present invention.
  • the present invention provides an autonomous method for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the method comprises the steps of (a) capturing a plurality of visual data from a surrounding area where at least one predetermined object is present using at least one imaging device, (b) transmitting the plurality of visual data by way of at least one signal to at least one computing device, (c) detecting the at least one predetermined object by the computing device which has been trained using a machine learning algorithm using a plurality of visual data, (d) transmitting at least one signal to at least one sensing directional means which moves at least one grabbing means towards the direction of the at least one predetermined object, (e) measuring the distance between the at least one grabbing means and the at least one predetermined object by at least one sensor, (f) transmitting at least one signal from the at least one sensor to the at least one computing device, (g) transmitting at least one signal from the at least one computing device to a plurality of motion sensing devices, (h) moving the at least one grabbing means towards the at least one predetermined object to grab and pick it up, and (i) moving the at least one predetermined object towards a designated area and releasing it into the designated area.
  • an autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object
  • the system comprises (a) at least one imaging device which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present, (b) at least one computing device which receives the plurality of visual data by way of at least one signal, wherein the computing device has been trained using a machine learning algorithm using a plurality of visual data and detects the at least one predetermined object, (c) at least one sensing directional means which receives at least one signal from the at least one computing device, (d) at least one grabbing means which is moved towards the direction of the at least one predetermined object by the at least one sensing directional means, (e) at least one sensor which measures the distance between the at least one grabbing means and the at least one predetermined object, wherein the at least one sensor transmits at least one signal to the at least one computing device, and (f) a plurality of motion sensing devices which receive at least one signal from the at least one computing device, wherein the plurality of motion sensing devices moves the at least one grabbing means towards the direction of the at least one predetermined object to enable it to grab and pick up the at least one predetermined object before moving it towards a designated area and releasing it into the designated area.
  • Figure 1 illustrates the system of the present invention for detecting, grabbing, picking and releasing the oil palm fresh fruit bunches (FFB);
  • Figure 2 illustrates the flow chart of the system of the present invention for detecting, grabbing, picking and releasing the oil palm FFB;
  • Figure 3 illustrates the operations or mechanism of the system of the present invention;
  • Figure 4 illustrates the mechanism of the system of the present invention.
  • the present invention relates generally to an autonomous method and system for detecting, grabbing, picking and releasing of objects. More specifically, the present invention relates to an autonomous method and system for detecting, grabbing, picking and releasing of oil palm fresh fruit bunches.
  • a first object of the present invention is to provide an autonomous method and system for detecting, grabbing, picking and releasing of oil palm FFB in a real-time mode using AI and a machine learning / deep learning algorithm.
  • a second object of the present invention is to provide an autonomous method and system with at least 90% accuracy (or more than 90%) for the detecting, grabbing, picking and releasing of oil palm FFB.
  • a third object of the present invention is to provide an autonomous method and system which is a breakthrough in the oil palm industry, moving towards artificial intelligence with the aim to reduce dependence on foreign labour and also to attract more locals to work in the oil palm industry.
  • a fourth object of the present invention is to provide an autonomous method and system which is able to increase productivity with respect to the collection and evacuation of the oil palm FFB from the estates in comparison to the conventional means of doing so.
  • a fifth object of the present invention is to provide an effective and efficient oil palm FFB collection and evacuation means for the oil palm estates, as the ripeness of the harvested oil palm FFB is crucial to the quality of the palm oil extracted and the harvested FFB must be processed within a certain number of hours to minimise the build-up of fatty acids which affects the quality of CPO extracted.
  • a sixth object of the present invention is to provide an autonomous method and system which is easy to operate by anyone - in particular unskilled workers, easy to maintain, safe to use, stable, able to operate smoothly and sustainable. This would reduce reliance on foreign workers and/or highly skilled workers for oil palm FFB harvesting purposes.
  • a seventh object of the present invention is to provide continuous improvement in the oil palm operations in the estate by way of artificial intelligence.
  • An eighth object of the present invention is to provide a system whereby the operator alone is able to manage the detecting, grabbing, picking and releasing of oil palm FFB, including driving the movable vehicle back to the collection bin to unload the FFB.
  • a ninth object of the present invention is to provide an autonomous method and system which can be used with any type of movable vehicle depending on the preference of the user.
  • a tenth object of the present invention is to provide an autonomous method and system which is suitable for use in any soil conditions, at both coastal and inland estates, and during any type of weather, specifically tropical and subtropical climate conditions.
  • the present invention provides an autonomous method for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the method comprises: a) capturing a plurality of visual data from a surrounding area where at least one predetermined object is present using at least one imaging device (1); b) transmitting the plurality of visual data by way of at least one signal (S1) to at least one computing device (2); c) detecting the at least one predetermined object by the computing device (2) which has been trained using a machine learning algorithm using a plurality of visual data (A); d) transmitting at least one signal (S2) to at least one sensing directional means (3) which moves at least one grabbing means (9) towards the direction of the at least one predetermined object; e) measuring the distance between the at least one grabbing means (9) and the at least one predetermined object by at least one sensor (4); f) transmitting at least one signal (S3) from the at least one sensor (4) to the at least one computing device (2); g) transmitting at least one signal (S4) from the at least one computing device (2) to a plurality of motion sensing devices (5, 6, 7, 8); h) moving the at least one grabbing means (9) towards the at least one predetermined object to grab and pick it up; and i) moving the at least one predetermined object towards a designated area and releasing it into the designated area.
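  • A minimal runnable sketch of the signal flow in steps (a) to (i) follows; every function name below is an illustrative stub, not taken from the patent.

```python
# Runnable sketch of the S1-S4 signal flow in steps (a)-(i). All hardware
# interfaces are stubbed with placeholders.
from dataclasses import dataclass

GRAB_RANGE_M = 0.15  # illustrative grab threshold in metres

@dataclass
class Detection:
    direction: str     # 'left' or 'right' of the movable vehicle
    distance_m: float

def capture_frame(side):                 # S1: imaging device -> computing device
    return f"frame from {side} camera"

def detect_ffb(frame):                   # trained deep-learning detector
    return Detection(direction="left", distance_m=1.2)

def command_pvg(target):                 # S2: computing device -> PVG spool
    print(f"PVG spool -> {target}")

def measure_distance(det):               # S3: depth sensor -> computing device
    det.distance_m = max(0.0, det.distance_m - 0.4)  # simulated approach
    return det.distance_m

def step_arm():                          # S4: computing device -> motion devices
    print("arm step; encoder readings change")

def pick_cycle(side="left"):
    det = detect_ffb(capture_frame(side))
    if det is None:
        return
    command_pvg(det.direction)           # move grabbing means towards the object
    while measure_distance(det) > GRAB_RANGE_M:
        step_arm()                       # encoders report changing readings
    print("grab, pick up, move to designated area, release")
    command_pvg("home")                  # arm returns to rest; spool to neutral

pick_cycle("left")
```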
  • the present invention also provides an autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the system comprises: a) at least one imaging device (1) which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present; b) at least one computing device (2) which receives the plurality of visual data by way of at least one signal (S1), wherein the computing device (2) has been trained using a machine learning algorithm using a plurality of visual data (A) and detects the at least one predetermined object; c) at least one sensing directional means (3) which receives at least one signal (S2) from the at least one computing device (2); d) at least one grabbing means (9) which is moved towards the direction of the at least one predetermined object by the at least one sensing directional means (3); e) at least one sensor (4) which measures the distance between the at least one grabbing means (9) and the at least one predetermined object, wherein the at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2); and f) a plurality of motion sensing devices (5, 6, 7, 8) which receive at least one signal (S4) from the at least one computing device (2) and move the at least one grabbing means (9) towards the direction of the at least one predetermined object to enable it to grab and pick up the at least one predetermined object before moving it towards a designated area and releasing it into the designated area.
  • the method and system may include the use of a movable vehicle.
  • the at least one imaging device (1) may be a camera system which may be positioned on a left side and/or right side of the movable vehicle.
  • the at least one camera system may capture the plurality of visual data of the predetermined objects in a real-time mode from the surrounding area of the movable vehicle and may display a plurality of visual data on a screen or display panel in the movable vehicle.
  • 'Surrounding area' means the ground of an oil palm estate on the left side and/or right side of the movable vehicle.
  • the at least one grabbing means (9) may be a grabber arm.
  • the grabber arm may be a rotatable arm and the rotatable arm may be rotatable 180°, either clockwise or anti-clockwise.
  • the plurality of the motion sensing devices (5, 6, 7, 8) may allow movement of the grabber arm.
  • the at least one controller (10) may be provided to control a hydraulic linear motor of the grabber arm to grab and pick up the at least one predetermined object.
  • the at least one controller (10) may be a microcontroller or a programmable logic controller (PLC).
  • the at least one computing device (2) may be a processing unit which comprises a graphics processing unit (GPU) and a central processing unit (CPU).
  • the GPU may detect the at least one predetermined object in a real-time mode.
  • the at least one sensing directional means (3) is at least one sensing directional valve.
  • the at least one sensing directional valve may be an electrically controlled load independent proportional valve group (PVG).
  • the PVG may receive at least one signal from the CPU to move a PVG spool.
  • the PVG spool may result in a movement of the grabber arm.
  • the grabber arm may move horizontally and/or vertically.
  • the CPU may run and control the algorithm of the plurality of the motion sensing devices (5, 6, 7, 8).
  • the plurality of the motion sensing devices (5, 6, 7, 8) may be operated by a machine learning algorithm such as a deep learning algorithm.
  • the plurality of the motion sensing devices may be at least one position encoder.
  • the at least one position encoder may be at least one linear encoder and at least one rotary encoder.
  • the at least one rotary encoder may be rotatable 180°, either clockwise or anti-clockwise.
  • the at least one rotary encoder may measure displacement or movement of a rotary motion of the grabber arm.
  • the at least one linear encoder may be provided to measure displacement or movement of a linear motion of the grabber arm.
  • the at least one sensor (4) may be a depth camera, such as a red, green, blue and depth (RGBD) camera.
  • the at least one predetermined object may be identified based on at least one predetermined feature or characteristic.
  • the at least one predetermined feature or characteristic may be appearance, shape, colour, size and/or in any combination thereof.
  • the at least one predetermined object may be a fruit such as oil palm fresh fruit bunches (FFB).
  • the autonomous method and system may be used in any soil conditions.
  • the autonomous method and system may be used at coastal and inland estates.
  • the autonomous method and system may be used during any type of weather conditions such as tropical and subtropical climate conditions.
  • the autonomous method and system achieves at least 90% accuracy in detecting, grabbing, picking and releasing of the at least one predetermined object.
  • the present invention provides an autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object, the system comprising:
  • At least one imaging device which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present;
  • At least one computing device (2) which receives the plurality of visual data by way of at least one signal (S1), wherein the computing device (2) has been trained using a machine learning algorithm using a plurality of visual data (A) and detects the at least one predetermined object;
  • At least one sensing directional means (3) which receives at least one signal (S2) from the at least one computing device (2);
  • At least one grabbing means (9) which is moved towards a direction of the at least one predetermined object by the at least one sensing directional means (3);
  • At least one sensor (4) which measures the distance between the at least one grabbing means (9) and the at least one predetermined object, wherein the at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2);
  • a plurality of motion sensing devices which receives at least one signal (S4) from the at least one computing device (2), wherein the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the at least one predetermined object to enable the at least one grabbing means (9) to grab and pick up the at least one predetermined object before moving the at least one predetermined object towards a designated area and releasing the at least one predetermined object into the designated area.
  • the operator will activate the system by touching “START” on the screen or display panel in the movable vehicle.
  • the screen or display panel will then show “Right Camera”, “Left Camera”, “Emergency”, “Calibration”, “Right” or “Left.”
  • the operator can select “Right Camera” or “Left Camera” to see views of the oil palm FFB on the ground by the at least one imaging device (1).
  • the operator should firstly determine which side of the movable vehicle the oil palm FFB is on and ensure the oil palm FFB is within reach of the grabber arm.
  • the grabber arm starts from a resting position (‘Home’) which is the closest position to the movable vehicle.
  • the operator provides an instruction to the central processing unit, selecting ‘Right’ for grabbing and picking up the oil palm FFB on the right side of the movable vehicle or ‘Left’ for grabbing and picking up the oil palm FFB on the left side of the movable vehicle.
  • the readings of all the motion sensing devices (5, 6, 7, 8) will continuously change when the at least one grabber arm is moving until the oil palm FFB are detected by the at least one sensor (4).
  • the motion sensing devices’ (5, 6, 7, 8) specified / desired readings are met, the at least one grabber will move grab and pick up the oil palm FFB and release / drop the oil palm FFB into a bin or container as attached with the movable vehicle.
  • the central processing unit transmits at least one signal to the PVG’s spool to return the grabber arm to its resting position (‘Home’), after which the PVG’s spool will return to neutral.
  • the at least one predetermined object for the purposes of this present invention means the oil palm FFB.
  • the present invention can also be used to detect, grab and pick up any other predetermined objects from the ground as long as the deep learning algorithm used is trained by a training dataset consisting of a plurality of visual data of the predetermined objects of interest.
  • the deep learning algorithm of this present invention is an object detection means and operates in a real-time mode.
  • the predetermined objects are identified based on predetermined features or characteristics.
  • the predetermined features or characteristics are appearance, shape, colour, size and/or in any combination thereof.
  • the predetermined features or characteristics refer to the characteristics of the oil palm FFB.
  • Visual data (A) for this present invention means images or views and videos of the oil palm FFB on the ground including its appearances (i.e. shape, colour, texture etc.).
  • Visual data (A) obtained from the oil palm estates are used to train the deep learning algorithm of the present invention in order for the system to be able to accurately detect FFB on the ground in any oil palm estate (with different estate conditions) and at any time point during actual operations.
  • the at least one imaging device (1) captures a plurality of visual data of the oil palm FFB from the surrounding area in the estates, and transmits at least one signal (S1) to the at least one computing device (2) and to the at least one sensing directional means (3).
  • the at least one imaging device (1) is at least one camera system positioned on a left side and/or right side of the movable vehicle, preferably at least one positioned on the left side and at least one on the right side of the movable vehicle. The number of camera systems can be determined based on the preference and needs of the user of the present invention.
  • the at least one camera system captures the plurality of visual data of the oil palm FFB in a real-time mode on the left side and/or right side of the movable vehicle and displays the plurality of visual data on a screen or display panel in the movable vehicle.
  • any type of screen or display panel can be used for the purposes of this present invention, for example a liquid crystal display (LCD) screen, preferably a mini version which is suitable for use in the movable vehicle in the oil palm estates.
  • the at least one computing device (2) comprises a graphics processing unit (GPU) and a central processing unit (CPU).
  • the GPU of this present invention detects the oil palm FFB in a real-time mode.
  • the GPU is a control system which runs a robot operating system (ROS), vision detection system for detecting the oil palm FFB on the ground and motion control system for the grabber arm.
  • an Nvidia Jetson AGX Xavier 32GB GPU development kit is used for the purposes of the present invention.
  • ROS is a set of software libraries and tools that help you build robot applications.
  • ROS is an open-source robotics middleware suite. [Source: https://www.ros.org/]
  • the inventors of the present invention have used ROS which utilises Python and C++ programming languages to implement the system of the present invention.
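  • A minimal rospy sketch of how such a vision node might be wired up is shown below; the topic names and message types are assumptions, as the specification confirms only that ROS with Python and C++ was used.

```python
#!/usr/bin/env python
# Minimal ROS (rospy) sketch of a vision node: subscribe to camera frames,
# run the FFB detector, publish detections. Topic names and the detector
# call are illustrative assumptions.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

def on_frame(msg):
    # A real node would convert msg with cv_bridge and run the trained
    # deep-learning detector on the frame.
    detections = "ffb: none"  # placeholder for the detector output
    pub.publish(detections)

rospy.init_node("ffb_vision_node")
pub = rospy.Publisher("/ffb_detections", String, queue_size=1)
rospy.Subscriber("/camera/color/image_raw", Image, on_frame)
rospy.spin()
```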
  • the CPU for this present invention is a minicomputer.
  • the minicomputer runs and controls the algorithm of the plurality of motion sensing devices (5, 6, 7, 8).
  • the minicomputer used for the purposes of the present invention is an Intel® NUC mini PC.
  • the at least one sensing directional means (3) is at least one sensing directional valve which is an electrically controlled load independent proportional valve group (PVG).
  • the PVG receives at least one signal from the CPU to move a PVG spool.
  • the moving PVG spool results in the movement of the at least one grabber arm.
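  • One plausible way for the CPU to derive a spool command is a simple proportional law over the arm-to-target offset, as sketched below; the patent does not specify the control law or the valve's electrical interface, so this is an assumption.

```python
# Hedged sketch of a CPU -> PVG spool command, assuming a simple
# proportional control law; gains and limits are illustrative.
def spool_command(target_angle, encoder_angle, gain=0.8, limit=1.0):
    """Return a normalised spool command in [-1, 1]; 0 is neutral."""
    error = target_angle - encoder_angle
    return max(-limit, min(limit, gain * error))

# Example: arm at 10 deg, detected FFB bearing 35 deg -> positive command
# (spool moves, arm rotates towards the FFB; command saturates at +1).
print(spool_command(35.0, 10.0))
```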
  • the at least one motion sensing device (5, 6, 7, 8) allows movement of the at least one grabber means (9).
  • the at least one grabber means (9) is at least one grabber arm.
  • the movement of the at least one grabber arm is up, down, left and/or right.
  • the at least one grabber arm is a rotatable arm, rotatable 180° either clockwise or anti-clockwise.
  • the at least one controller (10) is a microcontroller or a programmable logic controller (PLC).
  • the at least one controller (10) is used to control a hydraulic linear motion of the at least one grabber arm to grab and pick up the oil palm FFB.
  • the at least one controller (10) controls at least one hydraulic cylinder with at least one linear actuator to provide the linear motion of the at least one grabber arm, coupled with a PVG.
  • At least one signal (S2) is transmitted to at least one sensing directional means (3), the PVG, which moves the grabber arm towards the direction of the oil palm FFB.
  • the at least one sensor (4) (at least one depth camera) measures distance between the at least one grabbing means (9) which is a grabber arm and the oil palm FFB.
  • the at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2).
  • at least one signal (S4) is transmitted from the at least one computing device (2) to a plurality of motion sensing devices (5, 6, 7, 8).
  • the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the oil palm FFB.
  • the grabber arm is used to grab oil palm FFB, pick up oil palm FFB, move the oil palm FFB towards a designated compartment and release the oil palm FFB into the designated compartment or bin.
  • At least one grabber arm is sufficient for the purposes of the present invention; however, more than one could be used depending on the preference and needs of the user of the present invention. Any type of grabber arm can also be used depending on the preference of the user.
  • the at least one grabber arm for the purposes of the present invention comes with a claw arm to grab and pick up the oil palm FFB from the ground, is operated by two hydraulic cylinders and is attached to a grabber holder which connects the claw arm to the grabber arm.
  • the claw arm is operated by one hydraulic cylinder and is used to grasp the oil palm FFB and ungrasp or release them into a collection bin.
  • the claw arm shall have at least 2 fingers, preferably 3 fingers for stability in grabbing the oil palm FFB. The angle between each finger of the claw arm is evenly distributed (360° divided by the number of fingers): 180° for 2 fingers and 120° for 3 fingers.
  • Visual data (A) are firstly collected for the machine learning process in order for the system of the present invention to be able to detect the oil palm FFB on the ground via the vision detection system.
  • the system is then operated in actual operating conditions in oil palm estates to obtain substantial training dataset.
  • At least one depth camera is attached to the grabber arm (or also can be attached to the movable vehicle) to obtain data on parameters such as angle, distance, speed and others when the system is being operated in an estate.
  • the at least one imaging device (1) is switched on in order to capture the visual data during the entire operations in the estate in real-time mode. As much visual data (A) as possible is collected in order to obtain the highest accuracy possible for detecting, grabbing, picking up and releasing of oil palm FFB into the designated area.
  • the system of the present invention was operated in various different estates with different estate conditions, different times of a day and different weather conditions (i.e. sunny and shady lighting, soil and grassy area etc.) for data variation purposes in order to train the system to obtain accuracy of more than 90%, and preferably as close as possible to 100%.
  • the visual data (A) collected are then stored in a drive such as a solid-state drive in computers, then transferred to the GPU for processing and to a workstation for classification and training.
  • the central processing unit runs and controls the algorithm of the at least one motion sensing device (5, 6, 7, 8).
  • the plurality of motion sensing devices (5, 6, 7, 8) are operated by a machine learning algorithm, specifically a deep learning algorithm trained by a training dataset consisting of the plurality of visual data (A) of the oil palm FFB.
  • the plurality of motion sensing devices (5, 6, 7, 8) receives at least one signal (S4) from the at least one computing device (2), wherein the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the oil palm FFB to enable the at least one grabbing means (9) to grab and pick up the oil palm FFB before moving the oil palm FFB towards a designated area and releasing into the designated area.
  • the plurality of motion sensing devices (5, 6, 7, 8) comprises at least one position encoder, preferably at least one linear encoder and at least one rotary encoder.
  • the number of the motion sensing devices (5, 6, 7, 8) can be selected / determined based on preference of the user of the present invention. Any type of motion sensing devices (5, 6, 7, 8) can be used for the present invention, preferably one that is able to withstand any type of weather conditions specifically tropical and subtropical climate conditions.
  • the at least one rotary encoder is to measure displacement or movement of a rotary motion of the at least one grabber arm.
  • the at least one rotary encoder is able to move 180° clockwise or anticlockwise.
  • the at least one linear encoder is to measure displacement or movement of a linear motion of the at least one grabber arm.
  • absolute encoders determine the exact position of an object and are suitable for use where the machine or process moves at a slow rate. Incremental encoders use a simpler method of counting movement: they establish the position of the object by counting the number of pulses and then using that count to compute the position, so there is no unique digital signature that can be used to determine an absolute position. Incremental encoders measure relative movement against some point of reference, whereas absolute encoders measure the position directly using a unique signal code that directly reflects the position.
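  • The contrast between the two read-out styles can be sketched in a few lines; the counts-per-revolution value and reading functions below are illustrative assumptions, not parameters from the patent.

```python
# Illustrative contrast between absolute and incremental encoder read-outs.
COUNTS_PER_REV = 1024  # made-up resolution

def absolute_angle(raw_code):
    """Absolute encoder: the raw code itself is a unique position."""
    return 360.0 * raw_code / COUNTS_PER_REV

class IncrementalEncoder:
    """Incremental encoder: position is recovered by counting pulses
    relative to a reference, so a lost count means a lost position."""
    def __init__(self):
        self.count = 0
    def pulse(self, direction):          # +1 or -1 per pulse edge
        self.count += direction
    def angle(self):
        return 360.0 * self.count / COUNTS_PER_REV

enc = IncrementalEncoder()
for _ in range(256):
    enc.pulse(+1)
print(enc.angle(), absolute_angle(256))  # both report 90.0 degrees
```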
  • the combination of use between rotary and linear encoders (5, 6, 7, 8) can be determined and decided based on preference and the needs of the user of the present invention.
  • the inventors of the present invention used a linear encoder (5), a combination of linear and/or rotary encoders (6, 7) and a rotary encoder (8) to demonstrate the system of the present invention.
  • Both absolute and incremental rotary encoders are used for the present invention and positioned at all rotational moving parts of the at least one grabbing means (9).
  • Three rotary encoders are used by the inventors of the present invention but less or more than three can be used depending on the rotary degree of freedom as required.
  • Combination of incremental and absolute rotary encoders have been used by the inventors to operate the system of the present invention, however any combination of absolute and incremental rotary encoders can be used depending on preference of the user of the present invention.
  • the at least one linear encoder is either an absolute and/or incremental encoder and is at least one linear transducer positioned at the at least one grabber arm.
  • the number of linear encoders required can be determined by the preference of the user of the present invention whereby the maximum number required depends on the number of linear movements of the at least one grabber arm.
  • the at least one sensor (4) is at least one depth camera.
  • the at least one sensor (4) measures distance between the at least one grabber arm and the oil palm FFB and transmits at least one signal to the at least one computing device (2).
  • a depth camera is preferred for this present invention as it can sense and measure the depth of the oil palm FFB (by illuminating the object with infrared light or LED and analyse the reflected light) and corresponding pixel and texture information of the oil palm FFB.
  • the depth camera is able to create high definition visual data by identifying the oil palm FFB’s shape, appearance and texture.
  • because depth cameras use visual features to measure depth, the cameras work well in most lighting conditions, including outdoors.
  • the infrared projector within the depth cameras means that in low lighting conditions, the camera is still able to perceive depth details.
  • Another benefit is that the depth cameras do not interfere with each other in the same way that a coded light or time of flight camera would. Hence, this provides for a smooth process from the depth camera sending signals / commands to the PVG, which then causes the grabber arm to move in order to grab and pick up the oil palm FFB.
  • any type of depth camera can be used depending on the preference of the user of the present invention, as long as the depth and colour data obtained are sufficient for effective and accurate detection of the oil palm FFB on the ground of any oil palm plantation; preferably a high definition depth camera with Time of Flight (ToF) infra-red laser for depth estimation.
  • At least one depth camera is sufficient, however more than one can be used depending on the preference of the user of the present invention, for example the at least one depth camera can be positioned as such to obtain images at the front left, front right, rear left, rear right, front and back of the movable vehicle of the present invention.
  • the at least one depth camera is preferably a red, green, blue and depth (RGBD) camera.
  • the RGBD camera used for the purposes of the present invention is an Intel RealSense Depth Camera D435i (Jetson AGX Xavier Developer Kit).
  • An RGBD camera for the purposes of this present invention means a camera which is used to deliver coloured images of objects by capturing light in red, green and blue wavelengths (RGB), which are visible light with wavelengths in the range of 400 to 700 nm.
  • RGBD camera is a type of depth camera which provides both depth (D) and colour (RGB) data output in a real-time mode. RGBD cameras are able to do a pixel -to-pixel merging of RGB data and depth information to deliver both in a single frame.
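  • A minimal pyrealsense2 sketch of reading an aligned depth value at a detected pixel on a D435i follows; the stream settings are common defaults and the pixel coordinate stands in for the detector's output.

```python
# Read the depth (in metres) at one pixel of an Intel RealSense D435i,
# aligning depth to the colour frame so detector pixels map onto depth.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    aligned = rs.align(rs.stream.color).process(frames)
    depth = aligned.get_depth_frame()
    x, y = 320, 240                   # e.g. centre of a detected FFB box
    print("distance to FFB:", depth.get_distance(x, y), "m")
finally:
    pipeline.stop()
```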
  • the output of an incremental encoder is processed into information such as the position, speed and distance of the grabber arm. Incremental signals provide a series of high and low waves which indicate movement from one position to the next. The rotary encoders provide no specific position indication, only an indication that the position has changed. The encoders report the changes automatically and the readings change constantly until the oil palm FFB is detected.
  • After detecting the oil palm FFB, the at least one depth camera measures the distance between the grabber arm and the oil palm FFB and transmits the values to the computing device (2). Depth cameras project infrared light onto an area to improve the accuracy of the data and are able to use any light to measure depth data.
  • the software which consists of a deep learning algorithm used for this present invention is a You Only Look Once (YOLO) algorithm, which is an object detection system targeted at real-time processing.
  • the YOLO algorithm has been tested with the testing data for the detecting, grabbing, picking up and releasing of oil palm FFB, whereby the efficiency of the classification is at least 90% and expected to reach 100% with further machine learning or deep learning exercise.
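  • A minimal inference sketch using the ultralytics package as a stand-in is shown below; the patent names YOLO but not a version, framework or weights file, so the model path is hypothetical.

```python
# YOLO inference sketch; "ffb_detector.pt" is a hypothetical model trained
# on the FFB visual data (A) described above.
from ultralytics import YOLO
import cv2

model = YOLO("ffb_detector.pt")           # hypothetical trained weights
frame = cv2.imread("estate_ground.jpg")   # stand-in for a live camera frame
results = model(frame)[0]                 # single-image, real-time inference
for box in results.boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"FFB at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), "
          f"confidence {float(box.conf):.2f}")
```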
  • Any type of movable vehicle can be used for the present invention depending on the preference of the user of the present invention for both inland and coastal estates.
  • The mechanical buffalo is preferred by the inventors of the present invention as it is small and lightweight (<1 MT), constructed to be compact and robust with an efficient weight distribution ratio to ensure excellent manoeuvring capabilities in hilly terrain and terraced areas, hence oil palm FFB collection and evacuation can be carried out in a fast and efficient manner.
  • The system of the present invention with the mechanical buffalo as the movable vehicle, being small and lightweight, also reduces the tendency towards rutted and damaged paths at the estates, which are usually caused by heavier machines operating at the estates, and comes with an efficient weight distribution ratio and low centre of gravity, which is crucial for operations at hilly terrain and terraced areas of the oil palm estates, taking into consideration accident risks such as the device toppling over.
  • the system is suitable for use on any soil conditions.
  • the system is suitable for use at both coastal and inland estates.
  • the system is suitable for use during any types of weather conditions specifically tropical and subtropical climate conditions.
  • the system is capable of detecting, grabbing, picking up and releasing the oil palm FFB into a designated bin at an accuracy of at least 90%. The accuracy level is expected to increase further towards 100% with continuous machine learning or deep learning exercises.
  • the proposed system of the present invention is a breakthrough invention in the oil palm industry focusing on automating the detecting, grabbing, picking up and releasing of oil palm FFB in the oil palm estates.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an autonomous method and an autonomous system for detecting, grabbing, picking and releasing of oil palm fresh fruit bunches (FFB). The invention comprises using a camera to capture visual data from a surrounding area where the FFB is present, and transmitting the visual data by way of at least one signal (S1) to a processing unit (2) which comprises a graphics processing unit (GPU) and a central processing unit (CPU). The processing unit (2) has been trained using a machine learning algorithm using a plurality of visual data (A). Once the processing unit (2) confirms that the detected object is indeed FFB, it transmits a signal (S2) and a sensing directional valve moves a grabber arm towards the direction of the FFB. A sensor (4) is used to measure the distance between the grabber arm and the FFB. A signal (S3) will be transmitted from the sensor (4) to the processing unit (2), which will then transmit a signal (S4) to the motion sensing devices (5, 6, 7, 8). The grabber arm will then move towards the direction of the FFB before grabbing, picking up, moving and releasing the FFB into a bin.

Description

AUTONOMOUS METHOD AND SYSTEM FOR DETECTING, GRABBING, PICKING AND RELEASING OF OBJECTS
FIELD OF INVENTION
The present invention relates generally to an autonomous method and system for detecting, grabbing, picking and releasing of objects. More specifically, the present invention relates to an autonomous method and system for detecting, grabbing, picking and releasing of oil palm fresh fruit bunches.
BACKGROUND OF INVENTION
The oil palm industry is very dependent on labour and requires many workers for its operations. Complete reliance on foreign workers is not an ultimate solution but only a temporary measure. In the face of rising labour issues such as labour costs, mechanisation and automation programmes are being ramped up by industry players to tackle the associated labour issues. The mechanisation of the oil palm sector is not considered a luxury but an imperative, and is vital for industry players to pursue further as the oil palm sector could be helped by more efficient use of labour and reduced dependence on foreign workers. [Source: theedgemarkets.com]
Harvesting is an important process in the oil palm plantation to obtain fresh fruit bunches (FFB) with excellent oil content and quality; by venturing into mechanisation programmes, the aim is essentially to reduce dependence on foreign labour and to attract more locals to return to the field. Harvesting operations in the oil palm industry require the largest number of workers, covering cutting of oil palm fronds and FFB, evacuating FFB to roadside platforms, loose fruit collection and cleaning (frond stacking / bunch stalk cutting).
The oil palm yield depends on various factors such as but not limited to age, seed quality, soil conditions, climate, plantation management and timely harvesting and processing of the FFB. The ripeness of FFB harvested is critical in maximising the quality and quantity of palm oil extracted. Harvested fruits must be processed within 24 hours to minimise the build-up of fatty acids which reduces the quality of crude palm oil (CPO) extracted.
The labour shortage issue faced by the domestic plantation industry is expected to reduce productivity and harvesting even further during the peak cycle season at the end of the year 2020. The Malaysian Palm Oil Association (MPOA) states that local planters have already lost up to about 25% of potential yield throughout the series of lockdowns in the year 2020, without the services of some 37,000 foreign workers who had been sent home during the peak of the COVID-19 pandemic. [Source: Labour shortage heightens losses in palm oil yield, 14 September 2020] The labour shortage issue remains unresolved and is getting worse, resulting in production losses. To date, there has been a freeze on the intake of foreign workers since the first movement control order, which was imposed on March 18, 2020. CGS-CIMB Research pointed out that SDP's workforce in Malaysia hovers between 75% and 80% of its total requirement. Meanwhile, FGV Holdings Bhd's workforce stands at only 75% of requirement, a significant decline from 90% at the end of the third quarter last year, it added. According to a pre-MCO survey by the Malaysian Palm Oil Board, there was a shortage of 31,021 harvesters among its respondents, which represent 76% of the industry players. CGS-CIMB Research said it was estimated that the shortage of workers translated into production losses of 3.4 million tonnes of CPO and 0.86 million tonnes of palm kernel. [Source: Labour shortage getting worse in palm plantations, The Star, 2 June 2021]
An oil palm tree begins to produce fruit 30 months after being planted in the field; however, yield is relatively low at this stage. As the palm continues to mature, its yield increases, reaching peak production in years 7 to 18. Yield starts to decrease gradually after 18 years. The oil palm yield depends on various factors such as but not limited to age, seed quality, soil conditions, climate, plantation management and timely harvesting and processing of the FFB. The ripeness of FFB harvested is critical in maximising the quality and quantity of palm oil extracted. Harvested fruits must be processed within 24 hours to minimise the build-up of fatty acids which reduces the quality of CPO extracted. [Source: wilmar-international.com]
The chisel and sickle are traditional tools used for harvesting oil palm FFB. Much effort has been made to increase harvesting efficiency and productivity and to lower harvesting cost through the development of various tools such as motorised cutters, mechanical harvesters and others. The wheelbarrow is a traditional method of FFB collection; it is essentially an easy machine to work with and does not pollute the environment. However, a wheelbarrow requires a considerable amount of energy from the worker to operate at full capacity: the worker needs to lift, push and balance the device, lunging forward to set the load in motion and quickly correcting his posture to keep the device under control when the load is heavy; going uphill with a heavy load may simply be impossible, and accidents may occur if momentum is lost descending a slope or when hitting an obstacle. Also, a wheelbarrow can only carry a few FFB at a time. To assist on the mechanical part of the wheelbarrow, a battery-powered wheelbarrow was eventually designed and fabricated to assist the harvesters; it is more productive than a conventional wheelbarrow.
In order to address issues connected with the conventional wheelbarrow, a buffalo cart was used to replace the wheelbarrow: the buffalo takes over pulling the cart filled with FFB, so the worker uses less energy and is able to focus on the task at hand. Generally, although there are many benefits of using a buffalo cart, there are several disadvantages as well - for example, the buffalo risks contracting disease, the risk of theft is high as demand for buffalo beef is strong, and the buffalo is also regarded as an asset.
The buffalo pulling the cart was then replaced with an engine-powered mechanised version, known throughout the industry as the mechanical buffalo (MB Badang). The mechanical buffalo functions like a dump truck; it is a small 7 to 10 horsepower tractor modified in such a way that it has a cart that can load the FFB. Mechanical buffalo in general improves the efficiency and productivity of the workers and lowers harvesting cost in the oil palm plantation. Loading of the FFB can be done either manually or using the mechanical buffalo's hydraulic grab. It also comes with a multipurpose wheel-type transporter (Badang crawler) to transport FFB in difficult areas such as peat, narrow terraces, undulating terrain and soggy ground. All in all, the mechanical buffalo generally reduces worker fatigue and has shown potential to increase labour productivity by reducing workers' walking, carrying and loading time.
‘Cantas™ – a tool for the efficient harvesting of oil palm FFB’ describes a motorized cutter developed by the Malaysian Palm Oil Board for harvesting FFB at heights of less than 4.5 m. Cantas™ is a hand-held cutter powered by a 1.3 horsepower petrol engine. The productivity of this machine is 560 to 750 FFB per day, compared to manual harvesting using a sickle or chisel, which has a capacity of only 250 to 350 FFB per day. Cantas™ is the first motorised cutter to be well accepted by the industry. [Source: Journal of Oil Palm Research Vol. 20, Dec 2008, p. 548-558]
An article on mechanisation in Kulim (M) Berhad states that Kulim had already embarked on the use of motorized cutters for harvesting operations, the MB Badang mechanical buffalo, mini tractors with scissor-lift trailers and live buffalo for in-field FFB collection, the ‘Kulim Crane Free System’, and crane netting for the hukka bin system for mainline loading and transport. The harvesting mechanisation method adopted by Kulim (M) Berhad may not be the most efficient mechanisation system available, and only some components are currently commercially workable in their environment. [Source: https://www.slideshare.net/MrPaucit/kulim-m-berhad-experience-palm-mech-2012-paper-by-mfam]
Edaran Badang Sdn. Bhd., a member of the Kulim (M) Berhad Group of Companies, provides oil palm harvesting equipment such as, but not limited to, the MB Badang L100 Standard (10 horsepower, loading capacity of 500 kgs and 120 to 150 hectares coverage per machine), MB Badang L100 Dumper (10 horsepower, loading capacity of 700 kgs and 120 to 150 hectares coverage per machine), MB Badang L100 HyPivot (10 horsepower, loading capacity of 350 kgs and 120 to 150 hectares per machine), Badang Crawler L70 with a load capacity of 300 kgs per trip for effective in-field FFB evacuation in peat areas, Beluga T980 (track utility tractor) and Rhyno W700 (wheel utility tractor). In the conventional mini tractor-trailer system for FFB evacuation, separate groups of workers are employed, namely the cutter and carrier groups: the cutters cut the FFB and place them along the harvesting path, while the carrier group comprises three workers, a driver and two others who collect the cut FFB along the harvesting path and unload them at the roadside. The mini tractor-trailer serves about 200 to 250 ha per day, and four to five cutters are required for cutting FFB and fronds. A separate group is required to work on loose fruit collection. In general, work productivity increases by having a separate group for each task involved. A mechanical loader was later introduced, which eliminated the need for the two loaders required in the mini tractor-trailer system. This system, however, mainly caters for flat and undulating areas and is not suitable for use in hilly terrain and terraced areas of the oil palm estates.
WO2021126531A1 describes a method for performing automated machine vision-based defect detection, which involves training a neural network to detect defects, receiving multiple historical datasets including multiple training images corresponding to known defects, and obtaining a test image of an object. Each training image is converted into a corresponding matrix representation, and each corresponding matrix representation is inputted into the neural network to adjust weighted parameters based on the known defects. A test image of an object is obtained, and portions of the test image are extracted as multiple input patches for input into the neural network. Each input patch is inputted into the neural network as a respective matrix representation to automatically generate a probability score for each input patch using the weighted parameters. This prior art does not describe the system and/or method of the present invention.
US20200242413A1 describes a method which involves capturing image data of the work field from a camera system, and binary thresholding the image data based on a presumed feature of the one or more predetermined features. In response to the binary thresholding, one or more groups of pixels are identified as candidates for the presumed feature. A filter is applied to the identified one or more groups of pixels to create filtered data, the filter comprises an aspect corresponding to the presumed feature. Based on the filter application, it is determined if the object has the presumed feature. This prior art does not describe the system and/or method of the present invention.
WO2022099600A1 describes a computer-implemented method for performing object detection on images, which involves rebalancing the output sample distribution among classes at representation neural networks and classifier neural networks by using a class-balanced loss. The method involves obtaining (402) image head class input data and image tail class input data differentiated from the head class input data, respectively of two images each of an object to be classified. The head and tail class input data are respectively inputted (404) into two separate parallel representation neural networks being trained to respectively generate head and tail features. The head and tail features are inputted (408) into a classifier neural network to generate class-related data. A class-balanced loss of one of the classes of the class-related data, comprising factoring an effective number of samples of individual classes, is generated (410). An output sample distribution among the classes at the representation neural networks, classifier neural networks, or both is rebalanced (412) by using the class-balanced loss. This prior art does not describe the system and/or method of the present invention.
US9858496B2 describes systems, methods, and computer-readable media for providing fast and accurate object detection and classification in images. In some examples, a computing device can receive an input image. The computing device can process the image and generate a convolutional feature map. In some configurations, the convolutional feature map can be processed through a Region Proposal Network (RPN) to generate proposals for candidate objects in the image. In various examples, the computing device can process the convolutional feature map with the proposals through a Fast Region-Based Convolutional Neural Network (FRCN) proposal classifier to determine a class of each object in the image and a confidence score associated therewith. The computing device can then provide a requestor with an output including the object classification and/or confidence score. This prior art does not describe the system and/or method of the present invention.
US20170000027A1 describes a method for the robotic picking of stemmed crops by the stem, comprising the use of a robotic mechanism comprising an arm or arms mounted on a moveable base, having a gripper/cutter mechanism that contains an attached or embedded blade, wherein the mechanism is configured in a way that it grabs and cuts the stem without grabbing the crop itself, whereby the crop is picked without being damaged. This prior art does not describe the system and/or method of the present invention.
US9475189B2 describes a harvesting system comprising a frame which is configured to move between multiple positions along a row of trees to be harvested; multiple robots which comprise movable robot arms and which are mounted on the frame facing a respective area to be harvested at a first position of the frame, wherein the robot arms are fitted with grippers that are configured to approach and grip the crop items, and each of the robots is configured to harvest crop items by reaching and gripping the crop items using the grippers from a respective angle of approach of the robot arm that is fixed for that robot, and wherein the robot arms of at least a subset of the robots are parallel to one another such that the angle of approach is common to the robots in the subset; and one or more sensors which are all fixed relative to the frame, do not move with the robot arms, and are configured to acquire images of the area only from outside the trees. This prior art does not describe the system and/or method of the present invention.

US20060150602A1 describes a method of remotely guiding a harvesting device comprising the steps of: taking images using a plurality of cameras at the harvester; a network transmitting and receiving information between the harvester and the remote operator; a remote user interface receiving images from imaging devices located at the harvester; an operator and computer at the remote user interface selecting objects to be harvested and transmitting the pixel location and timing information derived from the selected object to the harvester; the harvester using the information transmitted from the remote operator and, basing the targeted positioning information on the remote user information, positioning the collection device near the object to be harvested; the harvester controller detecting and repositioning the collector as needed for collection of objects to be harvested; and the harvester controller commanding the collector to collect the object to be harvested. This prior art does not describe the system and/or method of the present invention.
SUMMARY OF INVENTION
Accordingly, the present invention provides an autonomous method for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the method comprises the steps of (a) capturing a plurality of visual data from a surrounding area where at least one predetermined object is present using at least one imaging device, (b) transmitting the plurality of visual data by way of at least one signal to at least one computing device, (c) detecting the at least one predetermined object by the computing device which has been trained using a machine learning algorithm using a plurality of visual data, (d) transmitting at least one signal to at least one sensing directional means which moves at least one grabbing means towards the direction of at least one predetermined object, (e) measuring the distance between the at least one grabbing means and the at least one predetermined object by at least one sensor, (f) transmitting at least one signal from the at least one sensor to the at least one computing device, (g) transmitting at least one signal from the at least one computing device to a plurality of motion sensing devices, (h) moving of the at least one grabbing means towards a direction of the at least one predetermined object by the plurality of motion sensing devices, (i) grabbing the at least one predetermined object by the at least one grabbing means, (j) picking up the at least one predetermined object by the at least one grabbing means, (k) moving the at least one predetermined object towards a designated compartment by the at least one grabbing means and (l) releasing the at least one predetermined object into the designated compartment by the at least one grabbing means.
Further provided is an autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the system comprises (a) at least one imaging device which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present, (b) at least one computing device which receives the plurality of visual data by way of at least one signal, wherein the computing device has been trained using a machine learning algorithm using a plurality of visual data and detects the at least one predetermined object, (c) at least one sensing directional means which receives at least one signal from the at least one computing device, (d) at least one grabbing means which is moved towards the direction of the at least one predetermined object by the at least one sensing directional means, (e) at least one sensor which measures the distance between the at least one grabbing means and the at least one predetermined object, wherein the at least one sensor transmits at least one signal to the at least one computing device, and (f) a plurality of motion sensing devices which receives at least one signal from the at least one computing device, wherein the plurality of motion sensing devices moves the at least one grabbing means towards a direction of the at least one predetermined object to enable the at least one grabbing means to grab and pick the at least one predetermined object up before moving the at least one predetermined object towards a designated area and releasing the at least one predetermined object into the designated area.

BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates the system of the present invention for detecting, grabbing, picking and releasing the oil palm fresh fruit bunches (FFB);
Figure 2 illustrates the flow chart of the system of the present invention for detecting, grabbing, picking and releasing the oil palm FFB;
Figure 3 illustrates the operations or mechanism of the system of the present invention; and
Figure 4 illustrates the mechanism of the system of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE PRESENT INVENTION
The present invention relates generally to an autonomous method and system for detecting, grabbing, picking and releasing of objects. More specifically, the present invention relates to an autonomous method and system for detecting, grabbing, picking and releasing of oil palm fresh fruit bunches.
It can be appreciated that the parameters of the present invention are not obvious to a person skilled in the art; they have been tested and determined by the inventors based on numerous trials, observations, discussions and combined expertise, and could not have been determined without much effort and analysis. All prior arts as listed and referred to above do not specifically describe the system of this present invention.
Objectives of the present invention are as follows:
A first object of the present invention is to provide an autonomous method and system for detecting, grabbing, picking and releasing of oil palm FFB in a real-time mode using AI and a machine learning / deep learning algorithm.
A second object of the present invention is to provide an autonomous method and system with at least 90% accuracy (or more than 90%) for the detecting, grabbing, picking and releasing of oil palm FFB.
A third object of the present invention is to provide an autonomous method and system which is a breakthrough invention in the oil palm industry, moving towards artificial intelligence with the aim of reducing dependence on foreign labour and attracting more locals to work in the oil palm industry.
A fourth object of the present invention is to provide an autonomous method and system which is able to increase productivity with respect to the collection and evacuation of the oil palm FFB from the estates in comparison to the conventional means of doing so.
A fifth object of the present invention is to provide an effective and efficient means of oil palm FFB collection and evacuation from the oil palm estates, as the ripeness of the oil palm FFB harvested is crucial to the quality of palm oil extracted, and the harvested oil palm FFB must be processed within certain hours to minimise the build-up of fatty acids which affects the quality of the CPO extracted.

A sixth object of the present invention is to provide an autonomous method and system which is easy to operate by anyone - in particular unskilled workers - easy to maintain, safe to use, stable, able to operate smoothly and sustainable. This would reduce reliance on foreign workers and/or highly skilled workers for oil palm FFB harvesting purposes.
A seventh object of the present invention is to provide continuous improvement in the oil palm operations in the estate by way of artificial intelligence.
An eighth object of the present invention is to provide a system whereby the operator of the present invention alone is able to manage the detecting, grabbing, picking and releasing of oil palm FFB on his own including driving back the movable vehicle to the collection bin to unload the FFB content.
A ninth object of the present invention is to provide an autonomous method and system which can be used with any types of movable vehicle depending on the preference of the user.
A tenth object of the present invention is to provide an autonomous method and system which is suitable for use on any soil conditions, both coastal and inland estates and during any types of weather conditions specifically tropical and subtropical climate conditions.
While the present invention is described herein by way of example using illustrative drawings and embodiments, it should be understood that the detailed description is not intended to limit the invention to the embodiments or drawings described, nor to the particular forms disclosed; on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention.
The present invention is described herein by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawings correspond to the features throughout the description. However, the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Numeric values, ranges and materials as provided in the detailed description are to be treated as examples only and are not intended to limit the scope of the claims of the present invention.
Terminology and phraseology used herein are solely for descriptive purposes and are not intended to be limiting in scope. Words such as “including”, “comprising”, “having”, “containing” or “involving” and other variations are intended to be broad and to cover the subject matter described, including equivalents and additional subject matter not recited, such as other components or steps.
The present invention provides an autonomous method for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the method comprises: a) capturing a plurality of visual data from a surrounding area where at least one predetermined object is present using at least one imaging device (1); b) transmitting the plurality of visual data by way of at least one signal (S1) to at least one computing device (2); c) detecting the at least one predetermined object by the computing device (2) which has been trained using a machine learning algorithm using a plurality of visual data (A); d) transmitting at least one signal (S2) to at least one sensing directional means (3) which moves at least one grabbing means (9) towards direction of at least one predetermined object; e) measuring distance between the at least one grabbing means (9) and the at least one predetermined object by at least one sensor (4); f) transmitting at least one signal (S3) from the at least one sensor (4) to the at least one computing device (2); g) transmitting at least one signal (S4) from the at least one computing device (2) to a plurality of motion sensing devices (5, 6, 7, 8); h) moving of the at least one grabbing means (9) towards a direction of the at least one predetermined object by the plurality of motion sensing devices (5, 6, 7, 8); i) grabbing the at least one predetermined object by the at least one grabbing means (9); j) picking up the at least one predetermined object by the at least one grabbing means (9); k) moving the at least one predetermined object towards a designated compartment by the at least one grabbing means (9); and l) releasing the at least one predetermined object into the designated compartment by the at least one grabbing means (9).
The present invention also provides an autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the system comprises: a) at least one imaging device (1) which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present; b) at least one computing device (2) which receives the plurality of visual data by way of at least one signal (S1) from the at least one imaging device (1), wherein the computing device (2) has been trained using a machine learning algorithm using a plurality of visual data (A) and detects the at least one predetermined object; c) at least one sensing directional means (3) which receives at least one signal (S2) from the at least one computing device (2); d) at least one grabbing means (9) which is moved towards the direction of at least one predetermined object by the at least one sensing directional means (3); e) at least one sensor (4) which measures the distance between the at least one grabbing means (9) and the at least one predetermined object, wherein the at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2); and f) a plurality of motion sensing devices (5, 6, 7, 8) which receives at least one signal (S4) from the at least one computing device (2), wherein the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the at least one predetermined object to enable the at least one grabbing means (9) to grab and pick the at least one predetermined object up before moving the at least one predetermined object towards a designated area and releasing the at least one predetermined object into the designated area.
The method and system may include the use of a movable vehicle.
The at least one imaging device (1) may be a camera system which may be positioned on a left side and/or right side of the movable vehicle. The at least one camera system may capture the plurality of visual data of the predetermined objects in a real-time mode from the surrounding area of the movable vehicle and may display the plurality of visual data on a screen or display panel in the movable vehicle. Surrounding area means the area on the left side and/or right side of the movable vehicle on the ground of an oil palm estate.
The at least one grabbing means (9) may be a grabber arm. The grabber arm may be a rotatable arm and the rotatable arm may be rotatable 180°, either clockwise or anti-clockwise.
The plurality of the motion sensing devices (5, 6, 7, 8) may allow movement of the grabber arm.
The at least one controller (10) may be provided to control a hydraulic linear motor of the grabber arm to grab and pick up the at least one predetermined object. The at least one controller (10) may be a microcontroller or a programmable logic controller (PLC).
The at least one computing device (2) may be a processing unit which comprises a graphics processing unit (GPU) and a central processing unit (CPU). The GPU may detect the at least one predetermined object in a real-time mode.
The at least one sensing directional means (3) is at least one sensing directional valve. The at least one sensing directional valve may be an electrically controlled load independent proportional valve group (PVG). The PVG may receive at least one signal from the CPU to move a PVG spool. The PVG spool may result in a movement of the grabber arm. The grabber arm may move horizontally and/or vertically.
The CPU may run and control the algorithm of the plurality of the motion sensing devices (5, 6, 7, 8). The plurality of the motion sensing devices (5, 6, 7, 8) may be operated by a machine learning algorithm such as a deep learning algorithm.
The plurality of the motion sensing devices (5, 6, 7, 8) may be at least one position encoder. The at least one position encoder may be at least one linear encoder and at least one rotary encoder. The at least one rotary encoder may be rotatable 180°, either clockwise or anti-clockwise. The at least one rotary encoder may measure displacement or movement of a rotary motion of the grabber arm. The at least one linear encoder may be provided to measure displacement or movement of a linear motion of the grabber arm.
The at least one sensor (4) may be a depth camera, such as a red, green, blue and depth (RGBD) camera.
The at least one predetermined object may be identified based on at least one predetermined feature or characteristic. The at least one predetermined feature or characteristic may be appearance, shape, colour, size and/or in any combination thereof.
The at least one predetermined object may be a fruit such as oil palm fresh fruit bunches (FFB).
The autonomous method and system may be used on any soil conditions.
The autonomous method and system may be used at coastal and inland estates.
The autonomous method and system may be used during any types of weather conditions such as tropical and subtropical climate conditions.
The autonomous method and system achieve at least 90% accuracy in detecting, grabbing, picking and releasing of the at least one predetermined object. The details of the present invention will now be described in relation to the accompanying Figures 1, 2, 3 and 4.
According to Figures 1 and 2, the present invention provides an autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object, the system comprising:
• at least one imaging device (1) which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present;
• at least one computing device (2) which receives the plurality of visual data by way of at least one signal (S1) from the at least one imaging device (1), wherein the computing device (2) has been trained using a machine learning algorithm using a plurality of visual data (A) and detects the at least one predetermined object;
• at least one sensing directional means (3) which receives at least one signal (S2) from the at least one computing device (2);
• at least one grabbing means (9) which is moved towards direction of at least one predetermined object by the at least one sensing directional means (3);
• at least one sensor (4) which measures the distance between the at least one grabbing means (9) and the at least one predetermined object, wherein the at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2); and
• a plurality of motion sensing devices (5, 6, 7, 8) which receives at least one signal (S4) from the at least one computing device (2), wherein the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the at least one predetermined object to enable the at least one grabbing means (9) to grab and pick up the at least one predetermined object before moving the at least one predetermined object towards a designated area and releasing the at least one predetermined object into the designated area.
According to Figures 3 and 4, once the system is powered on, the operator will activate the system by touching “START” on the screen or display panel in the movable vehicle. The screen or display panel will then show "Right Camera", "Left Camera", "Emergency", "Calibration", "Right" or "Left." The operator can select “Right Camera” or “Left Camera” to see views of the oil palm FFB on the ground by the at least one imaging device (1). The operator should firstly determine which side of the movable vehicle the oil palm FFB is on and estimate a location within reach between the grabber arm and the oil palm FFB. The grabber arm starts from a resting position (‘Home’) which is the closest position to the movable vehicle. The operator provides an instruction to the central processing unit, selecting ‘Right’ for grabbing and picking up the oil palm FFB on the right side of the movable vehicle or ‘Left’ for grabbing and picking up the oil palm FFB on the left side of the movable vehicle. Once the operator is ready to grab and pick up the oil palm FFB, the operator must first touch the “Calibration” button on the screen or display panel, which transmits a signal to the PVG to move the PVG spool. When the PVG spool is moving, the grabber arm will move.
The readings of all the motion sensing devices (5, 6, 7, 8) will continuously change while the at least one grabber arm is moving, until the oil palm FFB are detected by the at least one sensor (4). When the motion sensing devices’ (5, 6, 7, 8) specified / desired readings are met, the at least one grabber arm will move to grab and pick up the oil palm FFB and release / drop the oil palm FFB into a bin or container attached to the movable vehicle. When the grabbing, picking and releasing mechanism is completed, the central processing unit transmits at least one signal to the PVG’s spool to return the grabber arm to its resting position (‘Home’), after which the PVG’s spool will return to neutral.
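The end-to-end sequence described above can be summarised in pseudocode. The following minimal Python sketch is purely illustrative: every function name and the tolerance value are hypothetical placeholders rather than part of the disclosure, and the hardware calls are stubbed out.

```python
# A minimal, hypothetical sketch of the operating sequence described above.
# None of these names come from the disclosure; hardware calls are stubbed out.

import time

TOLERANCE_M = 0.05           # assumed positioning tolerance (metres)

def detect_ffb():            # stub: vision detection system (see YOLO below)
    return {"x": 1.2, "y": 0.4, "z": 0.8}

def distance_to(target):     # stub: depth-camera range to the detected FFB
    return 0.0

def command_pvg(action):     # stub: CPU signal that moves the PVG spool
    pass

def grab_pick_release():
    """One detect-grab-pick-release cycle, as in Figures 3 and 4."""
    target = None
    while target is None:                 # cameras stream until an FFB is found
        target = detect_ffb()
    while distance_to(target) > TOLERANCE_M:
        command_pvg("towards_target")     # spool moves -> grabber arm moves
        time.sleep(0.02)                  # encoder readings change continuously
    command_pvg("close_claw")             # grab and pick up the FFB
    command_pvg("to_bin")                 # move towards the designated bin
    command_pvg("open_claw")              # release / drop into the bin
    command_pvg("home")                   # return to 'Home'; spool to neutral

if __name__ == "__main__":
    grab_pick_release()
```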
In the preferred embodiment of the present invention, the at least one predetermined object means the oil palm FFB. However, the present invention can also be used to detect, grab and pick up any other predetermined objects from the ground, as long as the deep learning algorithm used is trained with a training dataset consisting of a plurality of visual data of the predetermined objects of interest. The deep learning algorithm of this present invention is an object detection means and operates in a real-time mode.
The predetermined objects are identified based on predetermined features or characteristics. The predetermined features or characteristics are appearance, shape, colour, size and/or in any combination thereof. For the purposes of the present invention, the predetermined features or characteristics refer to the characteristics of the oil palm FFB.
Visual data (A) for this present invention means images or views and videos of the oil palm FFB on the ground, including its appearance (i.e. shape, colour, texture, etc.). Visual data (A) obtained from the oil palm estates are used to train the deep learning algorithm of the present invention in order for the system to be able to accurately detect FFB on the ground in any oil palm estate (with different estate conditions) and at any time point during actual operations.
The at least one imaging device (1) captures a plurality of visual data of the oil palm FFB from the surrounding area in the estates, and transmits at least one signal (S1) to the at least one computing device (2) and to the at least one sensing directional means (3). The at least one imaging device (1) is at least one camera system positioned on a left side and/or right side of the movable vehicle, preferably at least one positioned on the left side and at least one on the right side of the movable vehicle. The number of camera systems can be determined based on the preference and needs of the user of the present invention.
The at least one camera system captures the plurality of visual data of the oil palm FFB in a real-time mode on the left side and/or right side of the movable vehicle and displays the plurality of visual data on a screen or display panel in the movable vehicle.
Any type of screen or display panel can be used for the purposes of this present invention, for example a liquid crystal display (LCD) screen, preferably a mini version which is suitable for use in the movable vehicle in the oil palm estates.
The at least one computing device (2) comprises a graphics processing unit (GPU) and a central processing unit (CPU). The GPU of this present invention detects the oil palm FFB in a real-time mode. The GPU is a control system which runs a robot operating system (ROS), a vision detection system for detecting the oil palm FFB on the ground and a motion control system for the grabber arm. An Nvidia Jetson AGX Xavier 32GB GPU development kit is used for the purposes of the present invention.
The ROS is a set of software libraries and tools that help you build robot applications. ROS is an open-source robotics middleware suite. [Source: https://www.ros.org/]
The inventors of the present invention have used ROS which utilises Python and C++ programming languages to implement the system of the present invention.
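By way of illustration only, a ROS 1 node in Python (rospy) wiring a camera topic to a detection topic might look like the sketch below; the topic names and the use of a plain string message are assumptions made for brevity, not details taken from the present disclosure.

```python
#!/usr/bin/env python
# Minimal ROS node sketch: camera frames in, detection results out.
# Topic names ("/camera/color/image_raw", "/ffb/detections") are assumptions.

import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

class FFBDetectorNode:
    def __init__(self):
        self.pub = rospy.Publisher("/ffb/detections", String, queue_size=10)
        rospy.Subscriber("/camera/color/image_raw", Image, self.on_image)

    def on_image(self, msg):
        # A real node would convert msg with cv_bridge and run the detector;
        # here we only publish a placeholder result for illustration.
        self.pub.publish(String(data="ffb_candidate"))

if __name__ == "__main__":
    rospy.init_node("ffb_detector")
    FFBDetectorNode()
    rospy.spin()
```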
The CPU for this present invention is a minicomputer. The minicomputer runs and controls the algorithm of the plurality of motion sensing devices (5, 6, 7, 8). The minicomputer used for the purposes of the present invention is an Intel® NUC Mini PC.
The at least one sensing directional means (3) is at least one sensing directional valve which is an electrically controlled load independent proportional valve group (PVG). The PVG receives at least one signal from the CPU to move a PVG spool. The moving PVG spool results in the movement of the at least one grabber arm. The at least one motion sensing device (5, 6, 7, 8) allows movement of the at least one grabbing means (9). The at least one grabbing means (9) is at least one grabber arm. The movement of the at least one grabber arm is up, down, left and/or right. The at least one grabber arm is a rotatable arm, rotatable 180° either clockwise or anti-clockwise. The at least one controller (10) is a microcontroller or a programmable logic controller (PLC). The at least one controller (10) is used to control a hydraulic linear motion of the at least one grabber arm to grab and pick up the oil palm FFB. The at least one controller (10) is at least one hydraulic cylinder with at least one linear actuator to provide the linear motion of the at least one grabber arm, coupled with a PVG.
At least one signal (S2) is transmitted to the at least one sensing directional means (3), the PVG, which moves the grabber arm towards the direction of the oil palm FFB. The at least one sensor (4) (at least one depth camera) measures the distance between the at least one grabbing means (9), which is a grabber arm, and the oil palm FFB. The at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2). Further, at least one signal (S4) is transmitted from the at least one computing device (2) to the plurality of motion sensing devices (5, 6, 7, 8). The plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the oil palm FFB. The grabber arm is used to grab the oil palm FFB, pick up the oil palm FFB, move the oil palm FFB towards a designated compartment and release the oil palm FFB into the designated compartment or bin.
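Electrically controlled PVG valve groups are commonly commanded over a CAN bus. The sketch below shows how such a spool command might be sent from the CPU with the python-can library; the arbitration ID, byte layout and scaling are entirely hypothetical, as the present disclosure does not specify the PVG's control protocol.

```python
# Hypothetical spool command over CAN using python-can.
# Arbitration ID, byte layout and scaling are illustrative assumptions only.

import can

PVG_CMD_ID = 0x18FF0001  # hypothetical 29-bit ID for the spool command

def send_spool_command(bus, channel, position):
    """Send a spool setpoint in the range -1.0 (full left) .. +1.0 (full right)."""
    raw = int((position + 1.0) * 0.5 * 0xFFFF)   # map -1..+1 to 0..65535
    data = [channel & 0xFF, raw & 0xFF, (raw >> 8) & 0xFF, 0, 0, 0, 0, 0]
    bus.send(can.Message(arbitration_id=PVG_CMD_ID,
                         data=data, is_extended_id=True))

if __name__ == "__main__":
    bus = can.interface.Bus(channel="can0", interface="socketcan")
    send_spool_command(bus, channel=0, position=0.25)  # small rightward move
```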
At least one grabber arm is sufficient for the purposes of the present invention; however, more than one could be used depending on the preference and needs of the user of the present invention. Any type of grabber arm can also be used depending on the preference of the user. The at least one grabber arm for the purposes of the present invention comes with a claw arm to grab and pick up the oil palm FFB from the ground; the grabber arm is operated by two hydraulic cylinders and is attached to a grabber holder which connects the claw arm to the grabber arm. The claw arm is operated by one hydraulic cylinder and is used to grasp the oil palm FFB and ungrasp or release them into a collection bin. The claw arm shall have at least 2 fingers, preferably 3 fingers for stability in grabbing the oil palm FFB. The angle between the fingers of the claw arm is evenly distributed at 180° (for 2 fingers) and 120° (for 3 fingers).
Visual data (A) are firstly collected for the machine learning process in order for the system of the present invention to be able to detect the oil palm FFB on the ground via the vision detection system. The system is then operated in actual operating conditions in oil palm estates to obtain a substantial training dataset. At least one depth camera is attached to the grabber arm (or can also be attached to the movable vehicle) to obtain data on parameters such as angle, distance, speed and others when the system is being operated in an estate. The at least one imaging device (1) is switched on in order to capture the visual data during the entire operation in the estate in real-time mode. As much visual data (A) as possible is collected in order to obtain the highest accuracy possible for detecting, grabbing, picking up and releasing of oil palm FFB into the designated area. The system of the present invention was operated in various estates with different estate conditions, at different times of day and in different weather conditions (i.e. sunny and shady lighting, soil and grassy areas, etc.) for data variation purposes in order to train the system to obtain an accuracy of more than 90%, and preferably as close as possible to 100%. The visual data (A) collected are then stored in a drive such as a solid-state drive, then transferred to the GPU for processing and to a workstation for classification and training.
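For context, one common way to organise such a training dataset and launch the training of an object detector is sketched below; the directory layout, file names and the choice of the ultralytics package are illustrative assumptions, since the disclosure does not name a specific training toolchain.

```python
# Hypothetical training setup for an FFB detector.
# Dataset layout, file names and the ultralytics package are assumptions.

from pathlib import Path
from ultralytics import YOLO

# Conventional YOLO-style layout: images plus one "ffb" class label per box.
DATASET_YAML = """
path: /data/ffb
train: images/train   # estate footage: varied lighting, soil, grass
val: images/val
names:
  0: ffb
"""

def main():
    Path("ffb.yaml").write_text(DATASET_YAML)
    model = YOLO("yolov8n.pt")            # small pretrained backbone
    model.train(data="ffb.yaml", epochs=100, imgsz=640)

if __name__ == "__main__":
    main()
```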
The central processing unit runs and controls the algorithm of the motion sensing devices (5, 6, 7, 8). The plurality of motion sensing devices (5, 6, 7, 8) are operated by a machine learning algorithm, specifically a deep learning algorithm trained by a training dataset consisting of the plurality of visual data (A) of the oil palm FFB. The plurality of motion sensing devices (5, 6, 7, 8) receives at least one signal (S4) from the at least one computing device (2), wherein the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the oil palm FFB to enable the at least one grabbing means (9) to grab and pick up the oil palm FFB before moving the oil palm FFB towards a designated area and releasing them into the designated area.
The plurality of motion sensing devices (5, 6, 7, 8) comprises at least one position encoder, preferably at least one linear encoder and at least one rotary encoder. The number of motion sensing devices (5, 6, 7, 8) can be selected / determined based on the preference of the user of the present invention. Any type of motion sensing device (5, 6, 7, 8) can be used for the present invention, preferably one that is able to withstand any type of weather conditions, specifically tropical and subtropical climate conditions.
The at least one rotary encoder measures the displacement or movement of a rotary motion of the at least one grabber arm. The at least one rotary encoder is able to move 180° clockwise or anti-clockwise. The at least one linear encoder measures the displacement or movement of a linear motion of the at least one grabber arm.
Generally, absolute encoders determine the exact position of an object and are suitable for use where the machine or process moves at a slow rate. Incremental encoders use a simpler method of counting movement: they establish the position of the object by counting the number of pulses and then using that count to compute the position; there is therefore no unique digital signature that can be used to determine an absolute position. Incremental encoders measure relative movement against some point of reference, whereas absolute encoders measure the position directly using a unique signal code that directly reflects the position (a minimal decoding sketch follows the encoder discussion below).
The combination of rotary and linear encoders (5, 6, 7, 8) used can be determined and decided based on the preference and needs of the user of the present invention. The inventors of the present invention used a linear encoder (5), a combination of linear and/or rotary encoders (6, 7) and a rotary encoder (8) to demonstrate the system of the present invention. Both absolute and incremental rotary encoders are used for the present invention and are positioned at all rotational moving parts of the at least one grabbing means (9). Three rotary encoders were used by the inventors of the present invention, but fewer or more than three can be used depending on the rotary degrees of freedom required. A combination of incremental and absolute rotary encoders has been used by the inventors to operate the system of the present invention; however, any combination of absolute and incremental rotary encoders can be used depending on the preference of the user of the present invention.
The at least one linear encoder is either an absolute and/or incremental encoder and is at least one linear transducer positioned at the at least one grabber arm. The number of linear encoders required can be determined by the preference of the user of the present invention, whereby the maximum number required depends on the number of linear movements of the at least one grabber arm.
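To make the incremental counting described above concrete, the following sketch decodes quadrature pulses from an incremental encoder into a signed count and converts that count into rotary and linear displacement of the grabber arm; the resolution constants are illustrative assumptions, not values from the disclosure.

```python
# Incremental (quadrature) encoder decoding and displacement conversion.
# COUNTS_PER_REV and MM_PER_COUNT are illustrative values, not from the patent.

COUNTS_PER_REV = 4096        # assumed rotary resolution (counts / revolution)
MM_PER_COUNT = 0.01          # assumed linear resolution (mm / count)

# Valid quadrature transitions (prev A, prev B, A, B) -> +1 / -1 count.
_TRANSITIONS = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
}

def decode(samples):
    """Accumulate a signed count from a stream of (A, B) channel samples."""
    count, prev = 0, samples[0]
    for a, b in samples[1:]:
        count += _TRANSITIONS.get((prev[0], prev[1], a, b), 0)
        prev = (a, b)
    return count

def rotary_degrees(count):
    return 360.0 * count / COUNTS_PER_REV   # rotary arm displacement

def linear_mm(count):
    return MM_PER_COUNT * count             # linear arm displacement

# Example: one full forward quadrature cycle equals 4 counts.
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # -> 4
```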
The at least one sensor (4) is at least one depth camera. The at least one sensor (4) measures the distance between the at least one grabber arm and the oil palm FFB and transmits at least one signal to the at least one computing device (2).
A depth camera is preferred for this present invention as it can sense and measure the depth of the oil palm FFB (by illuminating the object with infrared light or LEDs and analysing the reflected light) and the corresponding pixel and texture information of the oil palm FFB. The depth camera is able to create high-definition visual data by identifying the oil palm FFB’s shape, appearance and texture. [Source: https://lidarradar.com/info/differences-between-the-lidar-systems-and-depth-camera]
As depth cameras use visual features to measure depth, they work well in most lighting conditions, including outdoors. The infrared projector within the depth camera means that in low lighting conditions the camera is still able to perceive depth details. Another benefit is that depth cameras do not interfere with each other in the way that a coded light or time-of-flight camera would. Hence, this provides for a smooth process from the depth camera sending signals / commands to the PVG, which then causes the grabber arm to move in order to grab and pick up the oil palm FFB. Any type of depth camera can be used depending on the preference of the user of the present invention, as long as the depth and colour data obtained are sufficient for effective and accurate detection of the oil palm FFB on the ground of any oil palm plantation; a high-definition depth camera with a Time of Flight (ToF) infra-red laser for depth estimation is preferred. At least one depth camera is sufficient; however, more than one can be used depending on the preference of the user of the present invention, for example positioned so as to obtain images at the front left, front right, rear left, rear right, front and back of the movable vehicle of the present invention. The at least one depth camera is preferably a red, green, blue and depth (RGBD) camera. The RGBD camera used for the purposes of the present invention is an Intel RealSense Depth Camera D435i (Jetson AGX Xavier Developer Kit).
RGBD camera for the purposes of this present invention means a camera which delivers coloured images of objects by capturing light in red, green and blue wavelengths (RGB), which are visible light with wavelengths in the range of 400 to 700nm. An RGBD camera is a type of depth camera which provides both depth (D) and colour (RGB) data output in a real-time mode. RGBD cameras are able to do a pixel-to-pixel merging of RGB data and depth information to deliver both in a single frame.
[Source: https://www.e-consystems.com/blog/camera/technology/what-are-rgbd-cameras-why-rgbd-cameras-are-preferred-in-some-embedded-vision-applications/]
Once the operator has selected “Right” or “Left”, signals will be transmitted to the PVG, causing the PVG spool to move, which in turn causes the grabber arm to move up, down, left or right. While this is happening, the readings of the linear and rotary encoders will constantly change, and the depth camera will begin looking for the oil palm FFB by way of vision detection.
The output of an incremental encoder provides information which is processed into position, speed and distance for the grabber arm. Incremental signals provide a series of high and low waves which indicate movement from one position to the next. There is no specific position indication provided by the rotary encoders, only an indication that the position has changed. The encoders report the changes automatically, and the readings constantly change until the oil palm FFB is detected.
After detecting the oil palm FFB, the at least one depth camera measures the distance between the grabber arm and the oil palm FFB and transmits the values to the computing device (2). Depth cameras project infrared light onto an area to improve the accuracy of the data and are able to use any light to measure depth data.
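Since the preferred sensor is the Intel RealSense D435i, the distance-measurement step can be illustrated with the pyrealsense2 API as below; the stream settings and the queried pixel (here simply the image centre) are assumptions for illustration.

```python
# Distance measurement sketch with an Intel RealSense depth camera
# (pyrealsense2). Stream settings and the queried pixel are assumptions.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    # Query the depth at the centre of the detected FFB bounding box;
    # (320, 240) is just the image centre in this sketch.
    distance_m = depth_frame.get_distance(320, 240)
    print(f"Grabber-to-FFB range: {distance_m:.3f} m")
finally:
    pipeline.stop()
```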
The software used for this present invention consists of a deep learning algorithm, namely a You Only Look Once (YOLO) algorithm, which is an object detection system targeted at real-time processing. The YOLO algorithm has been tested with the testing data for the detecting, grabbing, picking up and releasing of oil palm FFB, whereby the efficiency of the classification is at least 90% and is expected to reach 100% with further machine learning or deep learning exercises.
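As a concrete illustration of the detection step, a YOLO-family model trained as sketched earlier can be run on each camera frame as follows. The weights file name, camera index and confidence threshold are assumptions, and the ultralytics package is used only as one convenient YOLO implementation; the disclosure does not name a specific YOLO version.

```python
# Hypothetical real-time FFB detection loop with a YOLO-family model.
# "ffb_detector.pt", the camera index and the 0.5 threshold are assumptions.

import cv2
from ultralytics import YOLO

model = YOLO("ffb_detector.pt")          # weights trained on FFB images
cap = cv2.VideoCapture(0)                # left or right camera feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.5)     # run detection on the frame
    for box in results[0].boxes:         # one box per detected FFB
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("FFB detection", frame)
    if cv2.waitKey(1) == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```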
Findings:

The same set of rows was prepared for oil palm FFB collection by the system of the present invention to demonstrate the accuracy and purpose of the present invention. A time motion study was conducted using the system of the present invention versus a machine operated manually by an operator. The study was done with the same number of oil palm FFB and by the same unskilled operator. The times indicated in Table 1 do not include the trip to the collection bin.
Table 1 (time motion study results; reproduced as an image in the original publication and not recoverable as text)
It can be seen from the table that the time taken by an unskilled worker to collect oil palm FFB per row is reduced by at least 5 minutes. This translates to a large amount of time saved and more area covered per 8-hour working day by any worker. Hence, the time taken to collect the oil palm FFB is greatly reduced with the use of the autonomous method and system of the present invention, and the inventors have surprisingly found that the efficiency of the classification is more than 90% and is expected to reach 100% with further machine learning exercises in the near future.
Any type of movable vehicle can be used for the present invention, depending on the preference of the user, for both inland and coastal estates. The mechanical buffalo is preferred by the inventors of the present invention as it is small and lightweight (<1 MT) and constructed to be compact and robust with an efficient weight distribution ratio to ensure excellent manoeuvring capabilities in hilly terrain and terraced areas; hence oil palm FFB collection and evacuation can be carried out in a fast and efficient manner. With the mechanical buffalo as the movable vehicle, the system of the present invention - being small and lightweight - also reduces the tendency towards rutted and damaged paths at the estates, which are usually caused by heavier machines, and comes with an efficient weight distribution ratio and a low centre of gravity, which is crucial for operations in hilly terrain and terraced areas of the oil palm estates, taking into consideration accident risks such as the device toppling over. The system is suitable for use on any soil conditions, at both coastal and inland estates, and during any type of weather conditions, specifically tropical and subtropical climate conditions.
It is noted that the system is capable of detecting, grabbing, picking up and releasing the oil palm FFB into a designated bin at an accuracy of at least 90%. The accuracy level is expected to increase further towards 100% with continuous machine learning or deep learning exercises.
Summary:
All prior arts as listed and referred to above do not specifically describe the autonomous method and system of the present invention. Apart from that, it is also not obvious, by just reading the prior art documents or information listed above, for experts in the field of interest to arrive at the system and method of the present invention, as the system and method have been determined by the inventors based on numerous trials / testing conducted, observations and discussions with combined expertise and experience in this field, which parameters and/or combination could not be determined without much effort, testing and/or analysis or by just reviewing prior art documents in this field of interest. Hence, there remains a need in the art to provide a system and method per the present invention.
To the best of the knowledge of the inventors of the present invention and based on the prior arts available, there is no known system per the present invention available or used in the industry currently. The proposed system of the present invention is a breakthrough invention in the oil palm industry, focusing on automating the detecting, grabbing, picking up and releasing of oil palm FFB in the oil palm estates.
The parameters for the present invention have been determined by the inventors based on numerous trials conducted, observations and discussions with combined expertise and experience in this field, which parameters and/or combination could not be determined without much effort, testing and/or analysis or by just reviewing prior art documents in this field. Hence, to the best of the knowledge of the inventors, the present invention is novel and inventive.
Various modifications to the embodiments described herein will be apparent to those skilled in the art from the description and the accompanying drawings. The description is not intended to be limited to the embodiments shown in the accompanying drawings but is to be accorded the broadest scope possible consistent with the novel and inventive features disclosed. Accordingly, the invention is intended to cover all such alternatives, modifications and variations that fall within the scope of the present invention and the appended claims.

Claims

1. An autonomous method for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the method comprises: a) capturing a plurality of visual data from a surrounding area where at least one predetermined object is present using at least one imaging device (1); b) transmitting the plurality of visual data by way of at least one signal (S1) to at least one computing device (2); c) detecting the at least one predetermined object by the computing device (2) which has been trained using a machine learning algorithm using a plurality of visual data (A); d) transmitting at least one signal (S2) to at least one sensing directional means (3) which moves at least one grabbing means (9) towards direction of at least one predetermined object; e) measuring distance between the at least one grabbing means (9) and the at least one predetermined object by at least one sensor (4); f) transmitting at least one signal (S3) from the at least one sensor (4) to the at least one computing device (2); g) transmitting at least one signal (S4) from the at least one computing device (2) to a plurality of motion sensing devices (5, 6, 7, 8); h) moving of the at least one grabbing means (9) towards a direction of the at least one predetermined object by the plurality of motion sensing devices (5, 6, 7, 8); i) grabbing the at least one predetermined object by the at least one grabbing means (9); j) picking up the at least one predetermined object by the at least one grabbing means (9); k) moving the at least one predetermined object towards a designated compartment by the at least one grabbing means (9); and l) releasing the at least one predetermined object into the designated compartment by the at least one grabbing means (9).
2. The autonomous method of Claim 1, wherein the autonomous method includes use of a movable vehicle.
3. The autonomous method of Claim 2, wherein the at least one imaging device (1) is a camera system located and positioned on a left side and/or right side of the movable vehicle.
4. The autonomous method of Claim 3, wherein the at least one camera system captures the plurality of visual data of the predetermined objects in a real-time mode and displays the plurality of visual data on a screen or display panel in the movable vehicle.
5. The autonomous method of Claim 1, wherein the at least one grabbing means (9) is a grabber arm.
6. The autonomous method of Claim 5, wherein the grabber arm is a rotatable arm and the rotatable arm is rotatable 180°, either clockwise or anti-clockwise.
7. The autonomous method of Claim 6, wherein the plurality of the motion sensing devices (5, 6, 7, 8) allows movement of the grabber arm.
8. The autonomous method of Claim 7, wherein at least one controller (10) is provided to control a hydraulic linear motor of the grabber arm to grab, pick and release the at least one predetermined object.
9. The autonomous method of Claim 8, wherein the at least one controller (10) is a microcontroller or a programmable logic controller (PLC).
10. The autonomous method of Claim 1, wherein the at least one computing device (2) is a processing unit which comprises a graphics processing unit (GPU) and a central processing unit (CPU).
11. The autonomous method of Claim 10, wherein the GPU detects the at least one predetermined object in a real-time mode.
12. The autonomous method of Claim 1, wherein the at least one sensing directional means (3) is at least one sensing directional valve.
13. The autonomous method of Claim 12, wherein the at least one sensing directional valve is an electrically controlled load independent proportional valve group (PVG).
14. The autonomous method of Claim 13, wherein the PVG receives at least one signal (S2) from the CPU to move a PVG spool.
15. The autonomous method of Claim 14, wherein the PVG spool results in a movement of the grabber arm.
16. The autonomous method of Claim 15, wherein the grabber arm moves horizontally and/or vertically.
17. The autonomous method of Claim 10, wherein the CPU runs and controls the algorithm of the plurality of the motion sensing devices (5, 6, 7, 8).
18. The autonomous method of Claim 17, wherein the plurality of the motion sensing devices (5, 6, 7, 8) is operated by a machine learning algorithm such as a deep learning algorithm.
19. The autonomous method of Claim 18, wherein the plurality of the motion sensing devices (5, 6, 7, 8) is at least one position encoder.
20. The autonomous method of Claim 19, wherein the at least one position encoder is at least one linear encoder and at least one rotary encoder.
21. The autonomous method of Claim 20, wherein the at least one rotary encoder is rotatable 180°, either clockwise or anti-clockwise.
22. The autonomous method of Claim 20, wherein the at least one rotary encoder measures displacement or movement of a rotary motion of the grabber arm.
23. The autonomous method of Claim 20, wherein the at least one linear encoder is to measure displacement or movement of a linear motion of the grabber arm.
24. The autonomous method of Claim 1, wherein the at least one sensor (4) is a depth camera, such as a red, green, blue and depth (RGBD) camera.
25. The autonomous method of Claim 1, wherein the at least one predetermined object is identified based on at least one predetermined feature or characteristic.
26. The autonomous method of Claim 25, wherein the at least one predetermined feature or characteristic is appearance, shape, colour, size and/or in any combination thereof.
27. The autonomous method of Claim 1, wherein the at least one predetermined object is a fruit such as oil palm fresh fruit bunches (FFB).
28. The autonomous method of Claim 1, wherein the autonomous method can be used on any soil conditions.
29. The autonomous method of Claim 1, wherein the autonomous method can be used at coastal and inland estates.
30. The autonomous method of Claim 1, wherein the autonomous method can be used during any types of weather conditions such as tropical and subtropical climate conditions.
31. The autonomous method of Claim 1, wherein the autonomous method achieves at least 90% accuracy in detecting, grabbing, picking and releasing of the at least one predetermined object.
32. An autonomous system for detecting, grabbing, picking and releasing of at least one predetermined object, wherein the system comprises: a) at least one imaging device (1) which captures a plurality of visual data from a surrounding area where the at least one predetermined object is present; b) at least one computing device (2) which receives the plurality of visual data by way of at least one signal (S1) from the at least one imaging device (1), wherein the computing device (2) has been trained using a machine learning algorithm using a plurality of visual data (A) and detects the at least one predetermined object; c) at least one sensing directional means (3) which receives at least one signal (S2) from the at least one computing device (2); d) at least one grabbing means (9) which is moved towards direction of at least one predetermined object by the at least one sensing directional means (3); e) at least one sensor (4) which measures the distance between the at least one grabbing means (9) and the at least one predetermined object, wherein the at least one sensor (4) transmits at least one signal (S3) to the at least one computing device (2); and f) a plurality of motion sensing devices (5, 6, 7, 8) which receives at least one signal (S4) from the at least one computing device (2), wherein the plurality of motion sensing devices (5, 6, 7, 8) moves the at least one grabbing means (9) towards a direction of the at least one predetermined object to enable the at least one grabbing means (9) to grab and pick up the at least one predetermined object before moving the at least one predetermined object towards a designated area and releasing the at least one predetermined object into the designated area.
33. The autonomous system of Claim 32, wherein the autonomous system is detachably connected to a movable vehicle.
34. The autonomous system of Claim 33, wherein the at least one imaging device (1) is a camera system positioned on a left side and/or right side of the movable vehicle.
35. The autonomous system of Claim 34, wherein the at least one camera system captures the plurality of visual data of the predetermined objects in a real-time mode and displays the plurality of visual data on a screen or display panel in the movable vehicle.
36. The autonomous system of Claim 32, wherein the at least one grabbing means (9) is a grabber arm.
37. The autonomous system of Claim 36, wherein the grabber arm (9) is a rotatable arm and the rotatable arm is rotatable 180°, either clockwise or anti-clockwise.
38. The autonomous system of Claim 37, wherein the plurality of the motion sensing devices (5, 6, 7, 8) allows movement of the grabber arm.
39. The autonomous system of Claim 38, wherein at least one controller (10) is provided to control a hydraulic linear motor of the grabber arm (9) to grab, pick and release the at least one predetermined object.
40. The autonomous system of Claim 39, wherein the at least one controller (10) is a microcontroller or a programmable logic controller (PLC).
41. The autonomous system of Claim 32, wherein the at least one computing device is a processing unit (2) which comprises a graphics processing unit (GPU) and a central processing unit (CPU).
42. The autonomous system of Claim 41, wherein the GPU detects the at least one predetermined object in a real-time mode.
43. The autonomous system of Claim 32, wherein the at least one sensing directional means (3) is at least one sensing directional valve.
44. The autonomous system of Claim 43, wherein the at least one sensing directional valve is an electrically controlled load-independent proportional valve group (PVG).
45. The autonomous system of Claim 44, wherein the PVG receives at least one signal (S2) from the CPU to move a PVG spool.
46. The autonomous system of Claim 45, wherein movement of the PVG spool results in a movement of the grabber arm.
47. The autonomous system of Claim 46, wherein the grabber arm moves horizontally and/or vertically.
48. The autonomous system of Claim 41, wherein the CPU runs and controls the algorithm of the plurality of the motion sensing devices (5, 6, 7, 8).
49. The autonomous system of Claim 48, wherein the plurality of the motion sensing devices (5, 6, 7, 8) is operated by a machine learning algorithm such as a deep learning algorithm.
50. The autonomous system of Claim 49, wherein the plurality of the motion sensing devices (5, 6, 7, 8) is at least one position encoder.
51. The autonomous system of Claim 50, wherein the at least one position encoder is at least one linear encoder and at least one rotary encoder.
52. The autonomous system of Claim 51, wherein the at least one rotary encoder is rotatable 180°, either clockwise or anti-clockwise.
53. The autonomous system of Claim 51, wherein the at least one rotary encoder measures displacement or movement of a rotary motion of the grabber arm.
54. The autonomous system of Claim 51, wherein the at least one linear encoder measures displacement or movement of a linear motion of the grabber arm.
55. The autonomous system of Claim 32, wherein the at least one sensor (4) is a depth camera, such as a red, green, blue and depth (RGBD) camera.
56. The autonomous system of Claim 32, wherein the at least one predetermined object is identified based on at least one predetermined feature or characteristic.
57. The autonomous system of Claim 56, wherein the at least one predetermined feature or characteristic is appearance, shape, colour, size, and/or any combination thereof.
58. The autonomous system of Claim 32, wherein the at least one predetermined object is a fruit such as oil palm fresh fruit bunches (FFB).
59. The autonomous system of Claim 32, wherein the autonomous system can be used under any soil conditions.
60. The autonomous system of Claim 32, wherein the autonomous system can be used at coastal and inland estates.
61. The autonomous system of Claim 32, wherein the autonomous system can be used under any type of weather condition, such as tropical and subtropical climate conditions.
62. The autonomous system of Claim 32, wherein the autonomous system achieves at least 90% accuracy in detecting, grabbing, picking and releasing of the at least one predetermined object.
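
[Editor's note] The claims above describe a single perception-actuation chain. The following minimal Python sketch illustrates the perception half of claim 32 (elements a, b and e): a detector finds the FFB in an RGB frame, and an aligned depth image from the RGBD sensor of claims 24 and 55 yields the grabber-to-object distance (signal S3). Everything here — detect_ffb, object_distance_m, the synthetic frame, the hard-coded box — is a hypothetical illustration under stated assumptions, not code from the patent; the stub detector stands in for a trained deep-learning model such as the YOLO-based FFB detector discussed in the cited Junos et al. paper.

```python
# Perception-side sketch: RGB frame (signal S1) -> detected FFB boxes ->
# grabber-to-object distance from an aligned depth image (signal S3).
import numpy as np

def detect_ffb(rgb: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stub detector returning (x0, y0, x1, y1) boxes for detected FFB.
    In the claimed system this would be a deep-learning model run on the GPU."""
    return [(120, 80, 220, 190)]  # hard-coded so the sketch runs end to end

def object_distance_m(depth: np.ndarray, box: tuple[int, int, int, int]) -> float:
    """Median depth inside the box as a robust estimate of object distance."""
    x0, y0, x1, y1 = box
    patch = depth[y0:y1, x0:x1]
    valid = patch[patch > 0]          # ignore pixels with no depth return
    return float(np.median(valid))

# Synthetic RGBD frame: 480x640 RGB image plus an aligned depth map in metres.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 4.0)      # background roughly 4 m away
depth[80:190, 120:220] = 1.6          # the FFB sits about 1.6 m from the sensor

for box in detect_ffb(rgb):
    d = object_distance_m(depth, box)
    print(f"FFB at {box}, distance {d:.2f} m")   # -> 1.60 m
```

Taking the median of valid depth pixels rather than a single centre pixel guards against depth dropouts and partial occlusion, which matters under the open-field conditions the claims contemplate.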
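[Editor's note] Likewise, the actuation half (claims 43 to 54) can be pictured as a closed loop: the CPU compares a target arm pose against rotary and linear encoder feedback and commands the PVG spool in proportion to the error. The gains, spool limit, 180° travel clamp and first-order arm model below are assumptions made purely so the sketch runs; they are not parameters disclosed in the patent.

```python
# Actuation-side sketch: encoder feedback (signal S4) closes a proportional
# loop whose output is a normalised PVG spool command (signal S2).
KP = 0.8            # proportional gain (assumed)
SPOOL_MAX = 1.0     # normalised spool stroke limit (assumed)
DT = 0.05           # control period in seconds (assumed)

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def control_step(target_deg, encoder_deg):
    """One control cycle: angular error -> normalised PVG spool command."""
    error = target_deg - encoder_deg
    return clamp(KP * error / 90.0, -SPOOL_MAX, SPOOL_MAX)

# Toy plant: arm angular rate proportional to spool opening (deg/s), standing
# in for the hydraulic grabber arm; `angle` plays the rotary encoder reading.
angle = 0.0
target = clamp(150.0, -180.0, 180.0)   # respect the 180° rotary travel of the claims
for _ in range(200):
    spool = control_step(target, angle)
    angle += 60.0 * spool * DT         # simulated arm response
print(f"arm settled at {angle:.1f} deg (target {target:.1f} deg)")
```

A real implementation would layer rate limits, deadband compensation and safety interlocks on top of this, but the proportional error-to-spool mapping is the essential idea behind driving a load-independent PVG from encoder feedback.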
PCT/MY2023/050009 2022-08-25 2023-02-17 Autonomous method and system for detecting, grabbing, picking and releasing of objects WO2024043775A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2022004645 2022-08-25
MYPI2022004645 2022-08-25

Publications (1)

Publication Number Publication Date
WO2024043775A1 true WO2024043775A1 (en) 2024-02-29

Family

ID=85985122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2023/050009 WO2024043775A1 (en) 2022-08-25 2023-02-17 Autonomous method and system for detecting, grabbing, picking and releasing of objects

Country Status (1)

Country Link
WO (1) WO2024043775A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1700000A (en) 1927-07-29 1929-01-22 Ernest A Tesch Combination door brace and latch
US20060150602A1 (en) 2005-01-07 2006-07-13 Stimmann Eric M Method and apparatus for remotely assisted harvester
US9475189B2 (en) 2015-02-22 2016-10-25 Ffmh-Tech Ltd. Multi-robot crop harvesting machine
US9858496B2 (en) 2016-01-20 2018-01-02 Microsoft Technology Licensing, Llc Object detection and classification in images
US20200242413A1 (en) 2018-03-28 2020-07-30 The Boeing Company Machine vision and robotic installation systems and methods
US20220183230A1 (en) * 2019-01-24 2022-06-16 Ceres Innovation, Llc Harvester with robotic gripping capabilities
JP2021088009A (en) * 2019-12-02 2021-06-10 株式会社クボタ Robot hand and agricultural robot
WO2022099600A1 (en) 2020-11-13 2022-05-19 Intel Corporation Method and system of image hashing object detection for image processing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Source: Labour shortage getting worse in palm plantations", THE, 2 June 2021 (2021-06-02)
EATON: "Electro-hydraulic Proportional Load Sensing DCV Solution (SLV20)", 1 November 2020 (2020-11-01), pages 1 - 21, XP093056350, Retrieved from the Internet <URL:https://www.eaton.com/content/dam/eaton/hydraulics/valves/valve-documents/eaton-slv20-valve-catalog-en-us.pdf> [retrieved on 20230621] *
MOHAMAD HANIFF JUNOS ET AL: "An optimized YOLO-based object detection model for crop harvesting system", IET IMAGE PROCESSING, IET, UK, vol. 15, no. 9, 18 March 2021 (2021-03-18), pages 2112 - 2125, XP006110905, ISSN: 1751-9659, DOI: 10.1049/IPR2.12181 *
SOURCE: JOURNAL OF OIL PALM RESEARCH, vol. 20, December 2008 (2008-12-01), pages 548 - 558

Similar Documents

Publication Publication Date Title
US8381501B2 (en) Agricultural robot system and method
AU2005314708B2 (en) Agricultural robot system and method
Bechar et al. Agricultural robots for field operations. Part 2: Operations and systems
Grift et al. A review of automation and robotics for the bio-industry
Bonadies et al. A survey of unmanned ground vehicles with applications to agricultural and environmental sensing
Zhang et al. The use of agricultural robots in orchard management
Defterli Review of robotic technology for strawberry production
AU2017377676B2 (en) Crop scanner
Stoelen et al. Low-cost robotics for horticulture: A case study on automated sugar pea harvesting
WO2024043775A1 (en) Autonomous method and system for detecting, grabbing, picking and releasing of objects
Arima et al. Traceability based on multi-operation robot; information from spraying, harvesting and grading operation robot
Tianjing et al. Developments in Automated Harvesting Equipment for the Apple in the Orchard
AU2021101375A4 (en) IoT Based Agri Robotic Hands for Plucking Under Ground Crops &amp; Automatic Spraying
Liu et al. History and present situations of robotic harvesting technology: a review
Burks et al. Orchard and vineyard production automation
Tiedemann et al. Challenges of Autonomous In-field Fruit Harvesting and Concept of a Robotic Solution.
Burks et al. Opportunity of robotics in precision horticulture.
Ismail et al. Research and development of oil palm harvester robot at Universiti Putra Malaysia
Garg et al. Revolutionizing Cotton Harvesting: Advancements and Implications
Kurhade et al. Review on "Automation in Fruit Harvesting"
Sahoo et al. Robotics application in agriculture
Mail et al. Agricultural Harvesting Robot Concept Design and System Components: A Review. AgriEngineering 2023, 5, 777–800
Ronzhin et al. Theoretical foundations to control technological and robotic operations with physical manipulations of agricultural products
Timofejevs et al. Computer Vision System for Autonomous Sea Buckthorn Harvesting Robot
EP4011198A1 (en) A forestry autonomous vehicle

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202317063260

Country of ref document: IN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23716698

Country of ref document: EP

Kind code of ref document: A1