WO2021225528A1 - System and method for artificial intelligence (AI)-based improvement of harvesting operations - Google Patents


Info

Publication number
WO2021225528A1
Authority
WO
WIPO (PCT)
Prior art keywords
features
data
harvesting
extracted
fruit
Application number
PCT/SG2021/050255
Other languages
English (en)
Inventor
Kamal Mannar
Manik BHANDARI
Si Jie LIM
Kock Zui LIM
Prashanth KULKARNI
Tau Herng Lim
Pankaj Kumar
Original Assignee
Vulcan Ai Pte. Ltd.
Application filed by Vulcan Ai Pte. Ltd.
Publication of WO2021225528A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24317 Piecewise classification, i.e. whereby each classification requires several discriminant rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes

Definitions

  • the present disclosure relates to a system and method for AI-based improvement of harvesting operations, in particular, to a system and method for AI-based improvement of harvesting operations using a combination of sensors and ensemble of custom algorithms and signal processing techniques.
  • An aspect of an embodiment of the disclosure relates to a system and method that combines and processes a variety of data from various sources, including the Internet of Things (IoT), edge devices such as smart bands worn by individual harvesters deployed in the field, and image or video content at the harvesting point.
  • the system processes the data from various sources and generates information including automated and accurate grading to identify quality of harvest, and diagnostic information to recommend changes to harvesting pattern and worker behaviour to improve harvest quality.
  • the system and method of the present disclosure provide artificial intelligence (AI)-based sensing and predictions either in real time or in batches, as required by the user.
  • the system of the present disclosure may be applied across a variety of industries including palm oil, rubber, and fruit plantations like banana, avocado, cotton, and coffee.
  • the system and method as disclosed utilizes a multitude of sensors and AI algorithms to extract relevant information from data sources and identify targeted actions to improve quality and yield. Examples of such targeted actions include prescribing the best harvesting schedule, identifying anomalies in real time to help correct harvester behaviour, and vision- based grading at the harvesting point to provide suggestions of downstream process interventions (e.g. changing the setpoints in the mill based on ripeness of fruit).
  • the system and method as disclosed also utilizes a combination of features for the classification and grading of fruit, including using generative adversarial networks (GAN) based image enhancement to accurately separate fruit bunches.
  • the system and method as disclosed utilizes other features that contribute to accurate grading of fruit and other related information, including socket features, age, planting material, weather, first level features, cut quality, genetic anomalies and pest infestation.
  • a computer-implemented method for improving harvesting of crops comprising: collecting data from one or more data sources; extracting one or more features from the data; determining a quality grading from the extracted one or more features; generating a harvesting data model based on one or more of the collected data, the extracted one or more features, and the determined quality grading; and using the harvesting data model to determine one or more operational decisions.
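The claimed method can be sketched as a simple pipeline. The function names and the toy grading rule below are illustrative assumptions only, not the disclosed implementation:

```python
# Illustrative sketch of the claimed pipeline; all names and the toy
# grading rule are assumptions, not the patented implementation.

def collect_data(sources):
    # Gather raw records from each data source (images, sensor logs, etc.).
    return [record for source in sources for record in source]

def extract_features(records):
    # Derive one feature dict per record; here just size and colour stubs.
    return [{"size": r.get("size", 0), "colour": r.get("colour", "green")}
            for r in records]

def grade_quality(features):
    # Toy rule: large, orange produce is graded "ripe".
    return ["ripe" if f["size"] > 5 and f["colour"] == "orange" else "unripe"
            for f in features]

def build_harvesting_model(records, features, grades):
    # The harvesting data model aggregates everything per observation.
    return [{"record": r, "features": f, "grade": g}
            for r, f, g in zip(records, features, grades)]

def decide(model):
    # Operational decision: harvest now if most observations are ripe.
    ripe = sum(1 for m in model if m["grade"] == "ripe")
    return "harvest" if ripe >= len(model) / 2 else "wait"

sources = [[{"size": 7, "colour": "orange"}, {"size": 3, "colour": "green"}]]
records = collect_data(sources)
features = extract_features(records)
grades = grade_quality(features)
model = build_harvesting_model(records, features, grades)
decision = decide(model)
```

Each stage maps to one claimed step: collecting, extracting, grading, model generation, and operational decision.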
  • the collected data is images and videos of harvested produce.
  • the step of determining a quality grading from the extracted one or more features comprises: identifying a number and extent of one or more objects of interest in the images and videos of harvested produce; generating a set of features for each of the one or more objects of interest; and predicting a quality of harvested produce.
  • the step of identifying a number and extent of one or more objects of interest comprises selecting a first algorithm based on at least one of: age of tree; planting material; and water stress.
  • the step of identifying a number and extent of one or more objects of interest comprises enhancing of the images and videos of harvested produce using generative adversarial network models.
  • the step of generating a set of features for each of the one or more objects of interest comprises extracting a portion of the images and videos of harvested produce corresponding to each of the one or more objects of interest.
  • the set of features for each of the one or more objects of interests are selected from the group consisting of: size; shape; socket detection; wavelet-based features; and colour-based features.
  • the quality of harvested produce is selected from the group consisting of: ripeness grade; pest infestation; disease infestation; harvest cut quality; and one or more predefined genetic anomalies.
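As a hypothetical illustration of grading from the per-object feature set named above (size, shape, socket detection, colour-based features), a rule-based grader could attach several quality labels to one object; every threshold here is invented:

```python
# Hypothetical rule-based grader; thresholds and field names are invented.

def grade_object(obj):
    labels = []
    # Ripeness from a colour-based feature: fraction of orange pixels.
    labels.append("ripe" if obj["orange_fraction"] > 0.6 else "unripe")
    # Many empty sockets can indicate fruit loss from the bunch.
    if obj["empty_sockets"] > 10:
        labels.append("loose-fruit")
    # Irregular shape may flag a predefined genetic anomaly.
    if obj["circularity"] < 0.4:
        labels.append("anomaly")
    return labels

bunch = {"orange_fraction": 0.7, "empty_sockets": 12, "circularity": 0.8}
labels = grade_object(bunch)
```

A production system would replace these rules with the trained models described in the disclosure; the sketch only shows how one object can receive multiple quality grades.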
  • the one or more data sources is a device comprising one or more sensors and the data is movement and location data.
  • the extracted one or more features are selected from the group consisting of: movement pattern; and location information.
  • the step of extracting one or more features further comprises processing the extracted one or more features to determine a likely action being carried out.
  • the step of processing the extracted one or more features is based on at least one of location data; and characteristics of a tree being harvested.
  • the method further comprises: comparing the extracted movement pattern with an ideal movement pattern stored on a database; and generating a first alert if the extracted movement pattern differs from the ideal movement pattern.
  • the method further comprises: comparing the location information of the determined likely action being carried out with an expected action at a location with the same location information; and generating a second alert if the determined likely action being carried out differs from the expected action.
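The two alert checks above can be sketched as follows; the distance metric, tolerance, and location keys are assumptions, not the disclosed method:

```python
# Sketch of the two alert checks: first alert on movement-pattern
# deviation, second alert on action-vs-expected mismatch. The mean
# absolute deviation metric and the tolerance value are assumed.

def movement_alert(observed, ideal, tolerance=1.0):
    # First alert: observed motion samples deviate from the ideal pattern.
    deviation = sum(abs(o - i) for o, i in zip(observed, ideal)) / len(ideal)
    return deviation > tolerance

def action_alert(detected_action, expected_by_location, location):
    # Second alert: the action inferred from sensor data does not match
    # the action expected at this location.
    return detected_action != expected_by_location.get(location)

ideal = [0.0, 1.0, 2.0, 1.0]
observed = [0.1, 3.5, 4.0, 1.2]
expected = {"tree-042": "cutting"}
```

In practice the ideal pattern would come from the database mentioned above, and the detected action from the likely-action classifier.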
  • the method further comprises using the harvesting data model to predict yield at a tree level.
  • the method further comprises comparing the predicted yield to an actual yield; and determining a reason if the predicted yield differs from the actual yield.
  • the method further comprises: quantifying an impact of worker actions based on partial dependence analysis; identifying key worker actions impacting yield; and prioritizing key worker actions.
  • the method further comprises: determining one or more thresholds for worker action based on partial dependence analysis; and deploying the one or more thresholds on the device.
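A minimal partial-dependence sweep for deriving such a threshold might look like this; the stand-in yield model and target value are invented for illustration:

```python
# Toy partial-dependence sweep: vary one worker-action feature over a
# grid while holding the rest fixed, then pick the smallest value whose
# predicted yield clears a target. The linear model is a stand-in.

def predict_yield(features):
    # Stand-in yield model: more frequent tree visits raise yield, with
    # diminishing returns past 4 visits per cycle.
    visits = min(features["visits"], 4)
    return 10 + 3 * visits

def partial_dependence(feature_grid, base_features):
    # One (value, predicted yield) point per grid value.
    curve = []
    for value in feature_grid:
        f = dict(base_features, visits=value)
        curve.append((value, predict_yield(f)))
    return curve

def threshold_for_target(curve, target):
    # Smallest action level that achieves the target yield.
    for value, yld in curve:
        if yld >= target:
            return value
    return None

curve = partial_dependence(range(0, 7), {"visits": 0})
threshold = threshold_for_target(curve, 19)
```

The resulting threshold is what would be deployed to the worker's device as a simple, checkable rule.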
  • a system for improving harvesting of crops comprising: at least one memory device and at least one processor, the at least one processor is operably coupled to the at least one memory device; the at least one memory device comprising instructions to be executed by the at least one processor, the instructions instruct the at least one processor: receives data from one or more data sources; extracts one or more features from the data; determines a quality grading from the extracted one or more features; generates a harvesting data model based on one or more of the collected data, the extracted one or more features, and the determined quality grading; and uses the harvesting data model to determine one or more operational decisions.
  • the system further comprises: a device comprising one or more sensors, the one or more sensors configured to collect movement and location data to be processed by the processor.
  • a computer-readable storage medium comprising instructions to be executed with at least one memory device and at least one processor, when executing the instructions, the at least one processor: receives data from one or more data sources; extracts one or more features from the data; determines a quality grading from the extracted one or more features; generates a harvesting data model based on one or more of the collected data, the extracted one or more features and the determined quality grading; and uses the harvesting data model to determine one or more operational decisions.
  • FIG. 1 is a schematic illustration of a system for improving harvesting of crops, in accordance with embodiments of the present disclosure
  • FIG. 2 is a flowchart of a process running on computer system for improving harvesting of crops, in accordance with embodiments of the present disclosure
  • FIGs. 3 to 19 illustrate examples of data sources and features extracted from each type of data source, in accordance with embodiments of the present disclosure
  • FIGs. 3 to 6 are flowcharts and examples of data extraction from aerial data sources, in accordance with embodiments of the present disclosure
  • FIG. 3 is a flowchart of a process of data extraction from aerial data sources, in accordance with embodiments of the present disclosure
  • Fig. 4 is an example of display of information on an image of an area of interest (AOI), in accordance with embodiments of the present disclosure
  • Fig. 5 is a flowchart showing a series of processing steps carried out to extract tree level features from images captured from aerial sources, in accordance with embodiments of the present disclosure
  • Fig. 6 is an example of display of extracted features on an image of an AOI, in accordance with embodiments of the present disclosure
  • FIGs. 7 to 15 are flowcharts and examples of data extraction from images and videos from data sources on site at a plantation, in accordance with embodiments of the present disclosure
  • FIG. 7 is a flowchart of a process of data extraction from data sources on site at a plantation, in accordance with embodiments of the present disclosure
  • Fig. 8A is a picture of an image captured by an edge device
  • Fig. 8B is a picture of an image showing the identification of regions of fruit bunches based on wavelet processing and morphological operations, in accordance with embodiments of the present disclosure
  • FIGs. 9A and 9B are examples of photographs showing fruit bunches output from operation, in accordance with embodiments of the present disclosure.
  • FIGs. 10A to 10E are photographs of examples of different colour-based features used for different pipelines based on age, planting material and weather features, in accordance with embodiments of the present disclosure
  • Figs. 11A to 11D are images of results from an AI trained to detect empty fruit sockets in an image of a palm oil fresh fruit bunch, in accordance with embodiments of the present disclosure
  • Fig. 12 is an image of an example of a grading output for an image of palm fruits, in accordance with embodiments of the present disclosure
  • FIG. 13 schematically illustrates a process of detecting genetic anomalies, in accordance with embodiments of the present disclosure
  • Fig. 14 is a photograph of a palm fruit with a rat bite mark identified, in accordance with embodiments of the present disclosure
  • Fig. 15 illustrates output from an AI segmentation model used to identify and classify a stem as having v-cut shape, in accordance with embodiments of the present disclosure
  • FIGs. 16 to 19 are flowcharts and examples of data extraction from smart devices, in accordance with embodiments of the present disclosure.
  • FIG. 16 is a flowchart of a process of data extraction from smart devices, in accordance with embodiments of the present disclosure
  • FIG. 17 illustrates an example of output after pre-processing of data extracted from smart devices, in accordance with embodiments of the present disclosure
  • Fig. 18 schematically illustrates an example of a process to train a model to recognise different AI-based harvesting features, in accordance with embodiments of the present disclosure
  • Fig. 19 schematically illustrates an example of a process by which computer system detects movement anomaly, in accordance with embodiments of the present disclosure
  • FIG. 20 is a schematic illustration of a harvesting data model incorporating all the features relevant for harvest and yield mapped on a tree level, in accordance with embodiments of the present disclosure
  • FIGs. 21 to 26B illustrate the processes by which a harvesting data model may be used to predict and optimise operational decisions, in accordance with embodiments of the present disclosure
  • FIG. 21 is a schematic illustration of a process for the determination of operational decision of harvester behaviour optimization, in accordance with embodiments of the present disclosure
  • Fig. 22 is a flowchart of a time-to-harvest (TTH) artificial intelligence (AI) model used for the determination of the operational decision of harvester schedule prediction, in accordance with embodiments of the present disclosure
  • Fig. 23 is a flowchart of a process that may be used to develop a new TTH model following existing best processes, or to retrain an existing TTH model, in accordance with embodiments of the present disclosure
  • Fig. 24 is a flowchart of a process through which a TTH model may determine optimized harvesting parameters, in accordance with embodiments of the present disclosure
  • Fig. 25 is a flowchart illustrating a process of determining anomalies, in accordance with embodiments of the present disclosure
  • Fig. 26A is an example of an image of a stalk with harvesting practice-based anomaly (long-stalk v-cut), in accordance with embodiments of the present disclosure
  • Fig. 26B is an example of an image of a rat bite at a fresh fruit bunch, in accordance with embodiments of the present disclosure
  • Fig. 27A is an example of an image of sour rot in grapes, in accordance with embodiments of the present disclosure.
  • Fig. 27B is an example of an image of black rot in grapes, in accordance with embodiments of the present disclosure.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently. Unless otherwise indicated, use of the conjunction “or” as used herein is to be understood as inclusive (any or all of the stated options).
  • “AI models” refers to mathematical algorithms that are “trained” using sample data, or “training data”, and human expert input to make predictions or replicate a decision an expert would make when provided that same information.
  • Fig. 1 is a schematic illustration of a system 100 for improving harvesting of crops, in accordance with embodiments of the present disclosure.
  • the processes of the present invention are incorporated in a system which is adapted to receive data from various sources to analyse received data and produce alerts and recommended courses of action.
  • System 100 may comprise a conventional general purpose or dedicated computer system 102 comprising a display 104, at least one processor or processing device 108 programmed to run artificial intelligence (AI) models, i.e. mathematical algorithms trained using data and human expert input to replicate a decision an expert would make when provided that same information, and perform analysis routines, an antenna 112 for wireless communication, an input device 116 for receiving input, at least one memory device 120 and a storage 124.
  • computer system 102 may be connected to one or more data sources 106.
  • Computer system 102 may be connected to one or more data sources through a wired connection or a wireless connection with a communication network.
  • Computer system 102 may communicate with the one or more data sources 106 through a mobile application or a client application executing on the one or more data sources.
  • the application may run on the one or more data sources 106 through usage of a “web browser” as a universal client, or be stored locally on the one or more data sources 106 as a mobile application (i.e. an “app”).
  • Computer system 102 may receive data from one or more data sources 106, process information received from the one or more data sources 106, and provide alerts and indications to a user through the one or more data sources 106.
  • the at least one memory device 120 or the storage 124 may store instructions to be executed by the at least one processor 108.
  • the at least one memory device 120 and storage 124 may also store information on AI models to be executed by the at least one processor 108. Additional information that may be stored by the at least one memory device 120 or storage 124 may include a database retrievable during process 200.
  • Fig. 2 is a flowchart of a process 200 running on computer system 102 for improving harvesting of crops, in accordance with embodiments of the present disclosure.
  • Process 200 starts with collecting or obtaining data from one or more data sources 106.
  • data and data sources include images and videos from aerial data sources 212 like satellites and unmanned aerial vehicles (UAVs), images and videos from data sources on site at a plantation 216 like edge based imaging systems, movement and location data from devices 220 comprising one or more sensors like smart-bands and smart-watches worn by individual harvesters, and weather and meteorological data from external sources 224 like weather agencies.
  • the data is transmitted to computer system 102.
  • the at least one memory device 120 of computer system 102 stores one or more AI models or “trained” mathematical algorithms that extract one or more features 208 from the data from data sources 106.
  • the features 208 extracted depend on the data source 106 from which the features 208 are extracted.
  • Tree level features 228 may be extracted from aerial data sources 212
  • harvest quality features 232 may be extracted from data sources on site at a plantation 216
  • worker movement features 236 may be extracted from devices 220 comprising one or more sensors
  • water stress features 240 may be extracted from external sources 224.
  • computer system 102 may determine a quality grading from the one or more features 208 extracted.
  • Computer system 102 then generates a harvesting data model 244 by integrating all the extracted features 208 at a tree level. All the features 208 extracted from each data source 106 are stored in the harvesting data model 244.
  • the harvesting data model 244 may then be utilized to determine operational decisions 248 through AI and optimization models. Examples of operational decisions 248 are harvester behaviour optimization 252, harvester schedule prediction 256, pest and disease prediction 260, and mill setpoint prediction 264.
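The integration of per-source features into the tree-level harvesting data model 244 can be sketched as a keyed merge; the tree ids and feature keys below are illustrative assumptions:

```python
# Sketch of integrating per-source feature tables into the tree-level
# harvesting data model; ids and feature keys are illustrative.

def integrate(tree_ids, *feature_tables):
    # Each feature table maps tree id -> dict of features from one source.
    model = {t: {} for t in tree_ids}
    for table in feature_tables:
        for tree_id, feats in table.items():
            model.setdefault(tree_id, {}).update(feats)
    return model

aerial = {"T1": {"ndvi": 0.71}, "T2": {"ndvi": 0.55}}   # tree level features
onsite = {"T1": {"ripeness": "ripe"}, "T2": {"ripeness": "unripe"}}
worker = {"T1": {"visits": 4}}                           # movement features

harvesting_model = integrate(["T1", "T2"], aerial, onsite, worker)
```

The downstream decision models (behaviour optimization, schedule prediction, and so on) would then read from this per-tree record.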
  • Figs. 3 to 19 illustrate examples of data sources 106 and features 208 extracted from each type of data source 106, in accordance with embodiments of the present disclosure.
  • Figs. 3 to 6 are flowcharts and examples of data extraction from aerial data sources 212, in accordance with embodiments of the present disclosure.
  • Image data from aerial data sources 212 like satellites, UAVs and drones may be used to identify and geotag individual trees to map tree level features 228.
  • Tree level features 228 are vegetative features on a tree level that are relevant to predict harvest yield from each tree. Examples of tree level features 228 include ripeness of fruit collected, worker visit frequency to a tree, and harvest actions taken by a worker at a tree.
  • Fig. 3 is a flowchart of a process of data extraction from aerial data sources 212, in accordance with embodiments of the present disclosure.
  • Process 300 of data extraction from aerial data sources 212 commences in operation 304, where images of an Area of Interest (AOI) are sourced and captured.
  • An AOI is defined by a user by generating geographical information system (GIS) information to identify boundaries.
  • An example of GIS information is shape, which is provided in the form of a file with a .shp extension.
  • the AOI may be a field, part of a plantation, or any area that is of interest to the user.
  • Images are collected by relevant satellites or UAVs based on the user defined AOI. The images collected are then cut to the shape of interest based on GIS operations.
  • individual trees and their location may be detected in operation 308.
  • Individual trees may be detected based on multi-spectral input using a combination of computer vision and deep learning approaches. This is carried out using signal processing, e.g. processes like wavelet processing and adaptive histogram equalization to improve the signal-to-noise ratio (S/N ratio), and AI models.
  • AI models are also trained to detect tree location by translating tree pixel locations to coordinates (e.g. latitude and longitude) to mark the location of each tree. Examples of AI models include Convolutional Neural Networks (CNN) and AI segmentation models. The AI models are trained by having a user manually annotate trees.
  • the AI models may then be tuned to increase the accuracy of detection by varying the architecture of the AI model (e.g. by changing the number of layers or type of filters), and hyper-parameters (e.g. learning rate, or number of epochs).
  • every tree is provided and marked with a unique identification number.
  • the unique identification number and coordinates of each tree allows for geo-fencing and the subsequent mapping of all activities to each specific tree for other data sources 106.
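A geo-fencing step of this kind might map a recorded activity position to the nearest registered tree within a radius; the haversine formula and the 10 m radius are assumptions for the sketch:

```python
# Hypothetical geofencing step: map an activity's lat/lon to the nearest
# registered tree within a fence radius, using the haversine distance.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_tree(lat, lon, trees, radius_m=10.0):
    # trees: {tree_id: (lat, lon)}; return the closest id inside the fence.
    best_id, best_d = None, radius_m
    for tree_id, (tlat, tlon) in trees.items():
        d = haversine_m(lat, lon, tlat, tlon)
        if d <= best_d:
            best_id, best_d = tree_id, d
    return best_id

trees = {"T1": (1.3000, 103.8000), "T2": (1.3001, 103.8001)}
```

Activities falling outside every fence return no tree id and would be handled as unmapped events.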
  • This information may be displayed on an image of the AOI.
  • Fig. 4 is an example of display of information on an image of an area of interest (AOI), in accordance with embodiments of the present disclosure.
  • tree canopy may also be identified in operation 308.
  • Tree canopy provides important information on the growth and health of trees. In addition, accurate identification of tree canopy may ensure that features predictive of growth and harvest yield are identified accurately for each tree and may assist in excluding portions of images that do not represent trees (e.g. shadows and ground vegetation). Once the trees and their locations are detected in operation 308, a series of processing steps may be carried out in operation 312 to extract tree level features 228.
  • Fig. 5 is a flowchart showing a series of processing steps carried out to extract tree level features 228 from images captured from aerial sources 212 in operation 312, in accordance with embodiments of the present disclosure.
  • Operation 312 commences with pre-processing in operation 316, where image anomalies and noise are detected and removed from images obtained in operations 304 and 308 to obtain pre-processed images.
  • An example of an image anomaly is shadows cast by neighbouring trees on the tree canopy. These regions with shadows have vegetation index values which are non-informative and are thus noise.
  • Areas representing anomalies and noise may be identified based on pixel values for Normalized Difference Vegetation Index (NDVI), and such areas are removed from subsequent calculations.
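The shadow-masking step can be sketched with per-pixel NDVI and a cutoff; the 0.2 threshold is an assumption, and real pipelines would tune it per sensor:

```python
# Sketch of the pre-processing step: compute NDVI per pixel and drop
# low-NDVI shadow/noise pixels before computing canopy statistics.
# The shadow threshold is an invented example value.

def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def masked_mean_ndvi(nir_band, red_band, shadow_threshold=0.2):
    values = [ndvi(n, r) for n, r in zip(nir_band, red_band)]
    kept = [v for v in values if v >= shadow_threshold]
    return sum(kept) / len(kept) if kept else None

nir = [0.8, 0.7, 0.15, 0.9]  # third pixel is a dark shadow region
red = [0.2, 0.3, 0.14, 0.1]
mean_ndvi = masked_mean_ndvi(nir, red)
```

Only the unmasked pixels then contribute to the canopy-level statistics extracted in the next operation.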
  • tree level features 228 may be extracted from the pre-processed images in operation 320.
  • the tree level features 228 extracted in operation 320 may be predictors of tree condition, fruit generation behaviour or fruit yield, and may be split into three categories: canopy level vegetation indices 324, AI Canopy Features 328, and Change Detection Features 332.
  • Fig. 6 is an example of display of extracted features on an image of an AOI, in accordance with embodiments of the present disclosure.
  • canopy level vegetation indices 324 are a variety of vegetation indices that are correlated with the health and growth (biomass and nutrient profile) of trees. Vegetation indices may be industry standard indices and may be calculated using only pixels within the area that are identified as tree canopy. Examples of industry standard indices include normalized difference vegetation index (NDVI), dark green colour index (DGCI), difference vegetation index (DVI) and normalized difference red edge (NDRE) index.
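The band arithmetic for three of the indices named above is standard; DGCI needs hue/saturation channels and is omitted from this sketch, and the reflectance values are made up:

```python
# Standard band arithmetic for NDVI, DVI and NDRE over canopy pixels.
# Inputs are surface reflectances; the sample values are invented.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def dvi(nir, red):
    return nir - red

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge)

indices = {
    "ndvi": ndvi(0.8, 0.2),
    "dvi": dvi(0.8, 0.2),
    "ndre": ndre(0.8, 0.4),
}
```

As stated above, these would be computed using only pixels identified as tree canopy.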
  • AI canopy features 328 may be extracted in operation 320.
  • the AI canopy features 328 may be crop-specific and related to the growth or health of trees.
  • An AI model may be customized to derive specific features indicative of tree growth and health.
  • One example of an AI canopy feature 328 is the presence of areas that indicate growth of trees.
  • a sub AI model may be run to segment the pre-processed images obtained in operation 316 to detect areas indicative of growth of the tree. For example, growth in palm trees may be detected based on the presence of spears. A palm spear grows through the centre of a palm tree and is the youngest and most critical part of a palm tree.
  • Damage to a spear or lack of spears in a palm tree may indicate severe stress to the tree that may be affecting tree growth and future yield during harvest.
  • Another example of an AI canopy feature 328 is leaf shape anomaly, which may indicate specific nutrient deficiencies or pest infestations.
  • an AI model may be built to detect a specific defect in leaves called a fish tail pattern which may indicate a deficiency in boron.
  • Yet another example of an AI canopy feature 328 is leaf feature variation between mature and young leaves in the same tree.
  • an AI model could generate and identify differences in colour or vegetation indices between older leaves on the outer portion of the canopy and younger leaves on the inner portion of the canopy. Differences in features between mature and young leaves may indicate specific anomalies such as nutrient deficiency which could be reflected in younger leaves earlier than older leaves, or a pest attack as certain pests only attack younger or older leaves.
  • change detection features 332 may be extracted in operation 320.
  • the canopy size, vegetation indices and AI derived features identified in operation 320 may be tracked over time for the same tree. For example, the features may be identified every three months based on new images obtained in operation 304 to identify anomalies.
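A change-detection feature of this kind can be sketched as a percent change between two acquisition dates; the 15% decline tolerance is an invented example:

```python
# Sketch of a change-detection feature: percent change in a per-tree
# metric between two dates, flagged when the drop exceeds a tolerance.
# The -15% tolerance is an invented example, not a disclosed value.

def percent_change(previous, current):
    return 100.0 * (current - previous) / previous

def change_flags(history, tolerance=-15.0):
    # history: {tree_id: (metric_3_months_ago, metric_now)}
    flags = {}
    for tree_id, (prev, cur) in history.items():
        change = percent_change(prev, cur)
        flags[tree_id] = "declining" if change <= tolerance else "stable"
    return flags

history = {"T1": (0.70, 0.71), "T2": (0.70, 0.50)}
flags = change_flags(history)
```

Trees flagged as declining would then be candidates for the anomaly investigations described in Table 1.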
  • Table 1 below illustrates some examples of change detection features 332 leveraging different features derived by an example of an AI model of the present disclosure.
  • Table 1 Change Detection Features Obtained from an Example of an AI model
  • all the features extracted in operations 304 to 320 including tree identification number, tree location (latitude and longitude), canopy size, canopy level vegetation indices 324, AI canopy features 328 and change detection features 332 may be stored and integrated into harvesting data model 244.
  • another data source 106 may be data sources on site at a plantation 216 and feature 208 extracted may be harvest quality features 232.
  • Figs. 7 to 15 are flowcharts and examples of data extraction from images and videos from data sources on site at a plantation 216, in accordance with embodiments of the present disclosure.
  • Fig. 7 is a flowchart of a process 700 of data extraction from data sources on site at a plantation 216, in accordance with embodiments of the present disclosure.
  • Image data from such data sources are used to identify harvest quality features 232, which determine the quality of fruits harvested.
  • An example of such a data source is an edge-based imaging system or edge device like a smartphone.
  • Process 700 of extracting data from images and videos from data sources on site at a plantation 216 commences at operation 704 where an application for image and video collection is started on an edge device.
  • the application processes the images and videos collected to extract harvest quality features 232.
  • the application may run on the edge device with or without wireless or internet connectivity.
  • the images and videos collected are mapped to a tree identification number based on geofencing around each tree location captured previously in operation 308.
  • data that may be relevant for the running of algorithms to obtain harvest quality features 232 may be extracted from a database utilizing the tree identification number. Examples of data that may be relevant include the age of the tree, planting material i.e. the variety, clone or seed type of the tree, and relevant weather features such as water stress, which are variables that impact how a fruit or fruit bunch may look and the type of harvest features that may be relevant for a specific tree.
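The geofence mapping described above can be sketched as a nearest-tree lookup within a fixed radius. The following is a minimal illustration, assuming tree locations are stored as a dictionary of identification numbers to (latitude, longitude) pairs and using an illustrative 5 m geofence radius (not a value specified in the disclosure):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def map_to_tree(photo_lat, photo_lon, trees, radius_m=5.0):
    """Return the identification number of the nearest tree whose geofence
    (a circle of radius_m around its surveyed location) contains the photo
    location, or None if the photo falls outside every geofence."""
    best_id, best_d = None, radius_m
    for tree_id, (lat, lon) in trees.items():
        d = haversine_m(photo_lat, photo_lon, lat, lon)
        if d <= best_d:
            best_id, best_d = tree_id, d
    return best_id
```

Once an image is mapped to a tree identification number, the tree's age, planting material and weather history can be fetched from the database to parameterise downstream processing.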
  • computer system 102 may guide the user through the application during image acquisition using a series of checks performed on the edge device. This is to ensure that the image quality (e.g. resolution, sharpness, and brightness) of any images or videos collected is adequate for accurate processing in subsequent operations. Such checks may be run while the user captures the images and videos.
  • One example of a check is the quality of the image.
  • a computer vision algorithm in the system may detect basic quality control issues such as blurring, distortion and low resolution. If the system identifies any images or videos as low quality, the application may prompt the user to take the image or video again to capture an image or video with higher quality.
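One common, lightweight way to implement the blur check mentioned above is the variance of the image Laplacian: sharp images have strong edges and a high variance, while blurred images score low. The sketch below uses a 4-neighbour discrete Laplacian; the threshold is an assumed tuning value, not one specified in the disclosure:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the discrete Laplacian of a greyscale image (2-D array).
    Low values suggest blur or lack of detail."""
    img = np.asarray(gray, dtype=float)
    # 4-neighbour discrete Laplacian via shifted copies; trim the border
    # rows/columns to discard wrap-around artefacts from np.roll
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)[1:-1, 1:-1]
    return float(lap.var())

def is_acceptable(gray, blur_threshold=100.0):
    """Simple pass/fail quality gate; the application could prompt the user
    to retake the photo when this returns False."""
    return laplacian_variance(gray) >= blur_threshold
```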
  • Another example of a check is the harvest detection area and fruit layout pattern, to ensure that any images or videos collected are taken at an appropriate distance from the fruit or fruit bunch to provide sufficient resolution for the system 100 to extract detailed harvest level features 232 in process 200.
  • Feature engineering and signal processing are preferred to ensure that the analysis is fast with light processing overhead, although an AI-based segmentation model may also be applied.
  • a pipeline of image processing algorithms may be leveraged to extract specific features that indicate potential fruit areas and detect areas with fruits. For example, wavelet-based filtering may be used to extract specific features indicating potential fruit areas and morphological operations may be used to detect areas with fruits.
  • the algorithms employed may be custom-tuned for fruits or fresh fruit bunches of particular interest to detect regions in the collected images and videos containing the fruits of interest.
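The wavelet-filtering and morphological steps above can be sketched as follows: a single-level Haar transform yields detail sub-band energy (a texture indicator for fruit areas), which is thresholded and then dilated to close small gaps. This is a minimal numpy-only illustration; a production pipeline would use tuned filters, multiple decomposition levels and proper structuring elements:

```python
import numpy as np

def haar_detail_energy(img):
    """Single-level 2-D Haar transform: per-2x2-block energy of the
    horizontal, vertical and diagonal detail sub-bands."""
    a = np.asarray(img, dtype=float)
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # even dimensions
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    lh = (tl + tr - bl - br) / 4.0   # detail across rows
    hl = (tl - tr + bl - br) / 4.0   # detail across columns
    hh = (tl - tr - bl + br) / 4.0   # diagonal detail
    return lh ** 2 + hl ** 2 + hh ** 2

def dilate(mask, it=1):
    """Binary dilation with a 3x3 cross structuring element via shifted
    copies (wrap-around at borders, acceptable for a sketch)."""
    m = np.asarray(mask, dtype=bool)
    for _ in range(it):
        m = (m | np.roll(m, 1, 0) | np.roll(m, -1, 0)
               | np.roll(m, 1, 1) | np.roll(m, -1, 1))
    return m

def candidate_fruit_regions(img, energy_thresh):
    """Threshold the detail energy, then close small gaps morphologically."""
    return dilate(haar_detail_energy(img) > energy_thresh)
```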
  • Fig. 8A is a picture of an image captured by an edge device.
  • Fig. 8B is a picture of an image showing the identification of regions of fruit bunches based on wavelet processing and morphological operations, in accordance with embodiments of the present disclosure.
  • Another example of a check is duplication or fraud detection, to detect any repeat or duplicate images that may be uploaded by a harvester as proof of harvest.
  • Duplication or fraud detection may be carried out by identifying duplicate images based on the metadata of an image (which includes the latitude and longitude of image taken) and any similarities in fruit layout pattern between images collected in the same device. If any duplicate images are detected, a warning may be provided to the user and the violation may be logged on system 100 with the duplicated image and details of the harvester using the edge device.
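Besides exact metadata matches, near-duplicate uploads can be caught with a perceptual hash of the fruit layout. The sketch below uses a simple "average hash" (block-mean downsampling, thresholded against the overall mean); the Hamming-distance threshold is an assumed tuning parameter, and the disclosure's actual similarity measure may differ:

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Perceptual 'average hash': downsample to hash_size x hash_size by
    block means, then threshold each cell against the overall mean.
    Near-identical images (e.g. a re-upload of the same fruit pile)
    produce near-identical bit vectors."""
    img = np.asarray(gray, dtype=float)
    h, w = img.shape
    ys = (np.arange(hash_size + 1) * h) // hash_size
    xs = (np.arange(hash_size + 1) * w) // hash_size
    small = np.array([[img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(hash_size)] for i in range(hash_size)])
    return (small > small.mean()).flatten()

def is_duplicate(hash_a, hash_b, max_hamming=5):
    """Flag two uploads as duplicates when their hashes differ in at most
    max_hamming bits."""
    return int(np.count_nonzero(hash_a != hash_b)) <= max_hamming
```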
  • a pipeline selector may identify relevant AI features, an appropriate sequence of AI features and model parameters and threshold based on tree and weather characteristics obtained in operation 712.
  • the pipeline selector decides which pipeline 724 the process 700 continues down. Examples of such characteristics are the age of the tree, type of planting material and the weather. This is important as the appearance of fruits and harvest attributes of fruits depend heavily on tree and weather characteristics.
  • the age of the tree is important as the age of the tree determines the size of the fruit and the colour of ripe fruit bunches. For example, younger palm trees may generate fruit that are more likely to be unripe when harvested, so the threshold for providing alerts and penalties for harvesters may be adjusted when harvesting fruit from younger trees.
  • Trees under water stress (derived based on rainfall history, soil type and age of tree) have a different behaviour with respect to fruit bunch size, ripeness characteristics and colour.
  • Table 2 below illustrates variations of pipeline classifications based on age, planting material, and weather.
  • Each pipeline may be custom trained to improve the accuracy of harvest quality detection. For example, there may be a first pipeline process 724a for prime trees (i.e. >8 years of age) of clone type 1 that are not experiencing water stress, a second pipeline process 724b for mature trees (i.e. 6 - 8 years of age) of clone type 2 experiencing high water stress, and a third pipeline process 724c for immature trees (i.e. <6 years of age) of clone type 1 that are not experiencing water stress.
  • each pipeline process 724 may commence with image enhancement in operation 728. This is in contrast with prior art solutions that only analyse the colour of fruits and fresh fruit bunches to grade fruit. Images and videos collected may have noise in the form of dirt, mud or leaves which may reduce the accuracy of downstream feature extraction. In addition, the images and videos collected may have varying quality.
  • the original resolution of an image may be enhanced with an AI system using generative adversarial networks (GAN).
  • visual representation of specific features of interest may be amplified using a cycle GAN-based approach to improve downstream processing accuracy.
  • A GAN-based approach may be selectively applied to certain pipelines.
  • empty sockets where fruits have fallen from a fruit bunch are an important feature used to determine the level of ripeness of the fruit. In younger immature trees, the fruits and the empty sockets are quite small and may be overlooked.
  • the AI system using GAN may amplify the visual representation of such empty sockets to enable easy detection of sockets.
  • the cycle GAN model may be trained based on annotated images where the sockets are visually identified or enhanced.
  • each pipeline process 724 may continue with fruit bunch detection and separation in operation 732 to detect objects of interest which are then processed individually in subsequent operations.
  • the objects of interest may be individual fruit, or may be individual fruit bunches.
  • Fruit bunches are typically harvested separately but may be pooled together for collection after harvest.
  • Each fruit bunch must be identified and the region in an image representing each fruit bunch must be separated to understand the harvest quality.
  • fruit bunch detection and separation also identify the number of fruit bunches harvested, which is indicative of harvest quantity.
  • Each fruit bunch is identified and the region of an image representing each fruit bunch is clipped for further processing.
  • Figs. 9A and 9B are examples of photographs showing fruit bunches output from operation 732, in accordance with embodiments of the present disclosure.
  • the region of an image representing each fruit bunch is termed a mask 736.
  • a variation of a convolutional neural network, for example an instance segmentation model, is used to identify both individual fresh fruit bunches and their mask 736.
  • hyper-parameters used by the image segmentation model in operation 732 of each pipeline process 724 differ. Examples of hyper-parameters that are varied include the number of anchor points and the number of proposal regions used in the segmentation model for detecting individual instances.
  • each pipeline process 724 may continue with first level feature extraction in operation 736 to extract a set of features for each fruit or fruit bunch, where basic features are extracted and captured as part of harvest quality features 232.
  • a basic feature that may be extracted in operation 736 is colour-based features.
  • Different representations of the original image acquired with AI guidance in operation 716 may be derived using image processing methods like manipulating hue, saturation, value (HSV), presenting the image in greyscale, and manipulating hue, saturation, lightness (HSL) to emphasize different components of the image.
  • Each pipeline process 724 may be custom tuned to specific colour representations that best differentiate features that may be of interest such as the stem of the fruit, empty sockets, or ripeness of the fruit.
  • filters on colour ranges may be employed and customized for each pipeline process 724.
  • Figs. 10A to 10E are photographs of examples of different colour-based features used for different pipelines based on age, planting material and weather features, in accordance with embodiments of the present disclosure.
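Colour-range filtering of the kind described above can be sketched as an HSV threshold mask. The hue range, saturation and value floors below are illustrative placeholders, not the disclosure's tuned per-pipeline values; a warm hue band is used here as a stand-in for picking out ripe fruitlets:

```python
import colorsys

def hsv_mask(rgb_image, h_range, s_min=0.3, v_min=0.2):
    """Return a boolean mask (nested lists) of pixels whose hue falls in
    h_range (degrees) with sufficient saturation and value. rgb_image is a
    list of rows of (R, G, B) tuples in 0-255."""
    lo, hi = h_range
    mask = []
    for row in rgb_image:
        mrow = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            deg = h * 360.0
            mrow.append(lo <= deg <= hi and s >= s_min and v >= v_min)
        mask.append(mrow)
    return mask
```

Each pipeline process 724 would substitute its own hue/saturation/value ranges, e.g. to isolate stems, empty sockets or ripe fruit.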
  • Another basic feature that may be extracted in operation 736 is wavelet-based features. Wavelet-based signal processing may be carried out to differentiate different parts of fruits or fruit bunches. For example, stems of fruits may look very distinct in certain frequency sub-bands as compared to other frequency sub-bands, which may help the system 100 in extracting these components of the fruit in images in process 200.
  • Yet another basic feature that may be extracted in operation 736 is empty sockets in fruit bunches which may indicate dropped fruits.
  • the level of ripeness of fruits or fruit bunches is determined by the presence of fruitlets that have fallen from the fruit bunch.
  • the level of ripeness of the palm oil fruit is determined by the number of dropped fruitlets (or the number of empty sockets left behind by the dropped fruitlets).
  • system 100 may extract this fruit-specific feature from images obtained in operation 716 in process 200.
  • the number of empty sockets may be determined with an AI model using an ensemble of (i) template matching results which looks for patterns related to empty sockets, and (ii) convolution neural network-based segmentation model that takes wavelet based features and colour channels (HSV and RGB) as inputs.
  • the AI model employed may extrapolate the total number of empty sockets based on bunch size and other factors such as age of the tree, planting material etc.
  • Figs. 11A to 11D are images of results from an AI model trained to detect empty fruit sockets in an image of a palm oil fresh fruit bunch, in accordance with embodiments of the present disclosure.
  • Fig. 11A illustrates an RGB colour space
  • Fig. 11B illustrates a transformed colour space
  • Fig. 11C illustrates actual empty sockets
  • Fig. 11D illustrates predicted empty sockets.
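The template-matching half of the empty-socket ensemble described above can be sketched with normalised cross-correlation: a socket-shaped template is slid over the image and positions where the correlation exceeds a threshold are reported. This is a minimal illustration (the CNN segmentation branch of the ensemble is not sketched); the threshold is an assumed parameter:

```python
import numpy as np

def match_template(image, template, threshold=0.9):
    """Slide `template` over `image` and return (row, col) positions where
    the normalised cross-correlation exceeds `threshold`."""
    img = np.asarray(image, float)
    tpl = np.asarray(template, float)
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    tnorm = np.sqrt((t ** 2).sum())
    hits = []
    for i in range(img.shape[0] - th + 1):
        for j in range(img.shape[1] - tw + 1):
            w = img[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tnorm
            # constant windows (denom == 0) cannot match a textured template
            if denom > 0 and (wc * t).sum() / denom >= threshold:
                hits.append((i, j))
    return hits
```

In the ensemble, such hits would be fused with the segmentation model's output and extrapolated to a total socket count using bunch size, tree age and planting material.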
  • Another basic feature that may be extracted in operation 736 is bunch size, which corresponds to the size characteristics of the fruit or fruit bunch. Yield from a fruit bunch (e.g. oil extraction rate for palm) may be based on the number of individual fruits in the bunch with respect to the total size of the bunch. This may be calculated by determining the fruit volume to total bunch volume ratio. The fruit volume may be calculated based on individual fruit sizes in an image, while the total size of a fruit bunch in terms of area and volume may be estimated from the image. Any estimations may be calibrated based on a standard size object placed at the collection point for reference.
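The calibration and ratio computation described above reduce to simple arithmetic once the reference object's pixel length is known. A minimal sketch, assuming each fruitlet is approximated as a sphere (an assumption of this illustration, not stated in the disclosure):

```python
import math

def pixels_to_cm(pixels, ref_pixels, ref_cm):
    """Convert a pixel measurement to centimetres using a standard-size
    reference object of known length placed at the collection point."""
    return pixels * (ref_cm / ref_pixels)

def fruit_volume_ratio(fruit_diameters_px, bunch_volume_cm3, ref_pixels, ref_cm):
    """Estimate total fruit volume (spherical approximation per fruitlet)
    and return its ratio to the estimated bunch volume, a rough proxy for
    oil yield potential."""
    total = 0.0
    for d_px in fruit_diameters_px:
        r = pixels_to_cm(d_px, ref_pixels, ref_cm) / 2.0
        total += (4.0 / 3.0) * math.pi * r ** 3
    return total / bunch_volume_cm3
```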
  • Yet another basic feature that may be extracted in operation 736 is the shape of the fruits and fruit bunches.
  • the shape of a fruit or a fruit bunch may be correlated to certain genetic anomalies and disease anomalies that reduce the yield of the tree significantly.
  • parthenocarpic fruit bunches with a low number of fruitlets may be linked to poor pollination, and diseases such as Ganoderma fungi may impact fruit bunch yield and quality.
  • Anomalies may be identified by capturing the shape of the individual fruit and comparing it to a standard shape and size for the given age and planting material of the tree.
  • each pipeline process 724 may continue with the detection of additional harvest quality features 232 using AI models in operation 740 to predict a quality of the harvested produce.
  • An example of a harvest quality feature 232 that may be detected using AI models in operation 740 is deformation or bruising. Areas with bruising are detected based on sample images with bruising and using a combination of HSV-based colour transformation and segmentation AI models. Deformation and bruising have a direct impact on the quality of the fruit. In addition to being visually unappealing, bruising can also cause internal damage to the fruit.
  • Bruising also influences empty sockets or loose fruit, thus the detection of bruising may assist in the accuracy of grade detection, which is another harvest quality feature 232 that may be detected using AI models in operation 740.
  • Fruits or fruit bunches may be graded in categories such as unripe to under-ripe, ripe, over-ripe and rotten, or may be graded on a scale-based measurement.
  • Typical simple image processing methods and features such as colour-based segmentation may not provide sufficiently accurate grading, as the colour of the fruit is significantly influenced by factors like age, genetic material, and seasonal variations. Therefore, the use of an AI grading model is preferred.
  • the AI grading model employed may be a convolutional neural network-based model that combines multiple features as input to the model to predict the grade of the fruit.
  • Examples of input into the model may include texture indicators (e.g. wavelet-based features), colour transformation (e.g. HSV), empty socket detection (basic feature described above), and bruising condition.
  • Empty sockets are indicative of ripeness, as ripe fruit bunches have loose fruits that fall off in the tree. However, as rough handling of a fruit bunch during harvest may loosen fruits resulting in empty sockets, the bruising indicator is also taken into account to separate out the effect of handling.
  • Fig. 12 is an image of an example of a grading output for an image of palm fruits, in accordance with embodiments of the present disclosure.
  • the grading of each fruit may be indicated on the image in various forms, including highlighting in different colours.
  • FIG. 13 schematically illustrates a process 1300 of detecting genetic anomalies, in accordance with embodiments of the present disclosure. As genetic anomalies are often correlated to the appearance and shape of the fruit, genetic anomalies may be detected visually.
  • Process 1300 commences in operation 1304 where images of fruits are obtained. The images may be images of fruits obtained in any operations carried out by system 100 in process 200. After images of fruits are obtained in operation 1304, the boundaries of each fruit are identified in operation 1308.
  • Operation 1308 may be performed through the use of an AI-based model such as a semantic segmentation model that extracts a mask or area of a fruit bunch from an image.
  • the semantic segmentation model may be trained based on annotated images with the boundaries of each fruit bunch identified for the model to learn.
  • shape metrics are calculated in operation 1312, which focuses on mathematically representing a shape of an object, e.g., extracting an identified boundary of a fruit bunch as a convex hull polygon.
  • After the shape metrics are calculated in operation 1312, two types of analyses are carried out: statistical analysis in operation 1316 and AI-based analysis in operation 1320.
  • In statistical analysis in operation 1316, the distribution of the shape metrics calculated in operation 1312 is analysed and acceptable thresholds are identified.
  • As images of fruit bunches may be captured from different distances, oriented differently, and may vary in size, Procrustes shape distance, a computation method that is invariant to orientation and scale, may be used to perform statistical shape analysis, although other methods may be employed.
  • Fresh fruit bunches with significant anomalies are identified by comparing the convex hull shape of a given fresh fruit bunch (FFB) with the average convex hull shape of FFBs from trees of the same age and clone, using Procrustes shape distance to identify whether the given FFB has significant deformation in shape compared to the average shape.
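The Procrustes comparison above can be sketched directly: both shapes are translated to the origin, scaled to unit norm, optimally rotated onto each other (the orthogonal Procrustes problem, solved by SVD), and the residual disparity is returned. This is an illustrative implementation assuming shapes are given as matched (n, 2) landmark arrays:

```python
import numpy as np

def procrustes_distance(shape_a, shape_b):
    """Procrustes disparity between two 2-D shapes given as (n, 2) arrays of
    corresponding landmarks. Invariant to position, scale and orientation
    (reflections are also allowed in this simple variant)."""
    a = np.asarray(shape_a, float)
    b = np.asarray(shape_b, float)
    a = a - a.mean(axis=0)                 # remove translation
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)              # remove scale
    b = b / np.linalg.norm(b)
    u, s, vt = np.linalg.svd(b.T @ a)      # orthogonal Procrustes problem
    r = u @ vt                             # best orthogonal map b -> a
    return float(np.sum((a - b @ r) ** 2)) # residual disparity
```

A given FFB's convex hull landmarks would be compared against the average hull for trees of the same age and clone; a disparity above a tuned threshold flags a deformed bunch.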
  • In AI-based analysis in operation 1320, the images obtained in operation 1304 and the shape metrics calculated in operation 1312 are analysed to classify the images into normal or deformed.
  • the analysis and classification may be carried out using a convolutional neural network-based model that has been trained to classify a given image of an FFB as normal or abnormal based on training images of both normal and abnormal FFB bunches.
  • FIG. 14 is a photograph of a palm fruit with a rat bite mark identified, in accordance with embodiments of the present disclosure.
  • Another example of pest and disease infestation is fruit rot due to fungus.
  • the anomalies are matched to a database of known damages and fruit anomalies.
  • Information on the database may include shapes of bite marks and areas affected by the bite marks (e.g., fruits near the stem or lower in bunch) which may assist in the identification of the type of pest and disease for remedial action.
  • a harvest quality feature 232 that may be detected using AI models in operation 740 is cut quality or harvest action, as harvesting of fruits from a tree requires a particular approach to cutting the fruit to minimize damage to the tree and maximise yield. For example, it is important to minimize the stalk length on a fruit bunch after harvest to reduce the weight of the bunch for transport and to reduce the absorption of oil by the stalk to improve subsequent oil extraction.
  • AI models of each pipeline process 724 may be trained to detect stalks in images of fruit bunches and classify the shape of the stalk to indicate whether the cut made to the stalk was made well. Examples of classification of the shape of the stalk include v-cut and long stalk.
  • FIG. 15 illustrates output from an AI segmentation model used to identify and classify a stem as having v-cut shape, in accordance with embodiments of the present disclosure.
  • the AI models used to detect v-cuts and long stalks may be customised for different age groups and planting material.
  • another data source 106 may be devices 220 comprising one or more sensors, i.e. smart devices 220, that are equipped on individual harvesters or workers and feature 208 extracted may be worker movement features 236.
  • Figs. 16 to 19 are flowcharts and examples of data extraction from smart devices 220, in accordance with embodiments of the present disclosure.
  • Worker movement features 236 may include movement and location data from smart devices 220 like smart-bands and smart-watches worn by individual harvesters.
  • the movement features 236 are analysed to ensure that a variety of required actions are performed properly, in an appropriate sequence and on-time to maximise quality and quantity of produce harvested. Examples of such actions are the inspecting of trees to identify if fruit is present and/or ready for harvesting, pruning of frond and/or leaves, and harvesting of fruit.
  • Computer system 102 extracts relevant movement data from smart devices 220, identifies worker movement and location of movement, determines a movement pattern to correlate actions to a tree with a particular tree identification number and detects anomalies in worker movement to determine if the action is performed in the right pattern in process 200.
  • Process 1600 of data extraction from smart devices 220 includes data acquisition in operation 1604, data pre-processing in operation 1608, deriving AI-based harvesting features in operation 1612, and detecting anomalies in operation 1616.
  • process 1600 commences with data acquisition in operation 1604.
  • Data may be acquired from smart devices 220 worn by individual workers or harvesters.
  • the smart devices 220 comprise sensors which may detect movement and actions of the individual wearing it, the smart device 220 capable of collecting sensor values from such sensors. Examples of information obtained from sensors include GPS location, accelerometer and gyroscope data across x, y, and z-axes. Due to limited processing capability, such information may be transmitted to an edge device through a companion application installed on the edge device.
  • process 1600 may continue with data pre-processing in operation 1608.
  • Data pre-processing includes several processes. One example is the standardisation of sensor values across different axes and the removal of noise from the data collected using low-pass filters and wavelet-based denoising of individual sensor values.
  • computer system 102 may align the time series for the sensor values using interpolation to fill any gaps at specific given time stamps so that the data from the sensors may be jointly processed.
  • Fig. 17 illustrates an example of output after pre-processing of data extracted from smart devices 220 in operation 1604, in accordance with embodiments of the present disclosure. Each sensor value (y-axis) is plotted against time (x-axis).
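The interpolation-based alignment described above can be sketched by resampling each sensor's irregular time series onto a common time grid. A minimal illustration using linear interpolation (the disclosure does not specify the interpolation scheme); the sensor names are hypothetical:

```python
import numpy as np

def align_to_grid(timestamps, values, grid):
    """Resample one sensor's irregularly sampled series onto a common time
    grid by linear interpolation, filling gaps between samples."""
    return np.interp(grid, np.asarray(timestamps, float),
                     np.asarray(values, float))

def align_sensors(streams, grid):
    """streams: dict of sensor name -> (timestamps, values). Returns a dict
    of sensor name -> values resampled onto `grid`, so that all sensor
    channels can be processed jointly as one multivariate series."""
    return {name: align_to_grid(t, v, grid) for name, (t, v) in streams.items()}
```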
  • process 1600 may continue with deriving AI-based harvesting features in operation 1612.
  • AI-based harvesting features are AI-based features that may be relevant from a harvesting or agricultural viewpoint. Table 3 below illustrates examples of AI-based harvesting features that may be derived in operation 1612 and the relevance of each feature.
  • computer system 102 may store a defined library of features to be derived. Such library may include patterns relating to the upkeep of the specific crop, as well as patterns relating to the harvesting of the specific crop.
  • Fig. 18 schematically illustrates an example of a process 1800 to train a model to recognise different AI-based harvesting features, in accordance with embodiments of the present disclosure.
  • Computer system 102 may use pre-built AI models that can automatically categorize an incoming data stream into a library of different actions performed by a worker or harvester. These AI models may be built on training data that captures actions performed by different workers to ensure that the AI model learns the natural variability in the movement patterns.
  • Process 1800 may commence with obtaining sensor values and data from smart devices 220 in operation 1804. There may be nine sensor readings from an individual worker or harvester.
  • Process 1800 may continue with operation 1808, where computer system 102 may leverage a convolutional long short-term memory network (LSTM) or artificial recurrent neural network (RNN) to classify time sequences.
  • a convolutional LSTM model combines the spatial aspects of data (to capture the relationship between different sensor inputs) and recurrent layer that captures the time series relationship of each variable to generate time segments of data obtained and probability of the segment belonging to each AI feature.
  • Another architecture for the neural network could be stacked convolutional neural network with varying dilation of the filter to capture information across time resolutions.
  • Each time segment of data may include a start time of the segment, a location (e.g. latitude and longitude) where the data was obtained, and probability vectors for certain AI features.
  • process 1800 may continue with utilizing a machine learning model in operation 1812 to process the data.
  • computer system 102 may geofence a segment identified in operation 1808 to a given tree identification number. The location may be based on GPS coordinates or any other appropriate method.
  • Information tagged to the tree identification number, including tree height, and canopy size, may be taken into account using a second AI model like random forest or Gradient Boosting Machine (GBM) model to refine and enhance the confidence level of probability vectors for certain AI features determined in operation 1808, to determine a likely action being carried out.
  • the information tagged to a tree identification number assists the computer system 102 in enhancing the predictors as certain conditions increase the likelihood of certain actions being carried out.
  • the action being carried out by movement of an arm is dependent on tree-level factors such as the height of the tree and the canopy.
  • AI features determined and stored in operation 1812 include the detected action (e.g. pruning), the tree identification number on which such action is detected, and the time stamp of the detection.
  • process 1600 may continue with operation 1616 where computer system 102 detects anomalies in the AI-based harvesting features determined in operation 1612 by comparing the extracted movement pattern with an ideal movement pattern stored on a database. Anomalies may be detected in the location, the sequence of movements, and the movement. A first anomaly that may be detected is location anomaly. Location anomaly refers to actions that may be performed on a wrong tree location, or inappropriate actions performed on a particular tree location. Such location anomalies may impact the harvest yield on a specific day, as well as future harvest potential. For example, when harvest is not performed as scheduled for a given tree, this may result in over-ripe or rotten fruit which would reduce overall yield. Table 4 below provides examples of location anomalies that may be detected in operation 1612.
  • sequence anomaly which may refer to an anomaly in the sequence of actions performed.
  • a sequence anomaly may be identified if multiple AI features are detected at a tree with the same tree identification number. Such actions may occur sequentially or within a defined window. For example, the time window may be 30 minutes. The sequence of these AI features is checked to confirm that they follow established agronomy practices. Any deviation from the sequence would be detected as an anomaly.
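The sequence check above can be sketched as grouping detected actions per tree within the time window and comparing their order against the prescribed sequence. The action names and their order below are illustrative placeholders, not the disclosure's actual feature library:

```python
# Illustrative agronomy-prescribed order of actions at a single tree
# (hypothetical; the real library would come from established practices).
EXPECTED_ORDER = ["inspect", "prune", "harvest"]

def sequence_anomalies(events, window_s=30 * 60):
    """events: iterable of (timestamp_s, tree_id, action). Groups actions
    per tree that fall within the time window and returns the tree ids
    whose observed action order deviates from EXPECTED_ORDER."""
    by_tree = {}
    for ts, tree_id, action in sorted(events):
        by_tree.setdefault(tree_id, []).append((ts, action))
    rank = {a: i for i, a in enumerate(EXPECTED_ORDER)}
    anomalies = []
    for tree_id, acts in by_tree.items():
        if acts[-1][0] - acts[0][0] > window_s:
            continue  # actions too far apart to count as one visit
        ranks = [rank[a] for _, a in acts if a in rank]
        if ranks != sorted(ranks):  # out-of-order actions -> anomaly
            anomalies.append(tree_id)
    return anomalies
```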
  • FIG. 19 schematically illustrates an example of a process 1900 by which computer system 102 detects movement anomaly, in accordance with embodiments of the present disclosure.
  • Anomalies in motion may result in an ineffective motion, leading to a sub-optimal effect, thus impacting yield or outcome.
  • If the manual application of fertilizer does not comply with the required or desired hand pattern or gesture, this may result in uneven fertilizer application, such that fertilizer may not reach the area where the roots of the tree are located.
  • Process 1900 commences in operation 1904 where computer system 102 extracts a segment of data corresponding to a specific action performed at a tree with a specific tree identification number.
  • An example of an action may be fertilizing.
  • the data generated and collected is a complex pattern over multiple dimensions, and preferably over nine dimensions, as the human body motion has a high degree of freedom and movement of the human body is in three-dimensional (3D) space involving several factors like location, velocity, and acceleration over time.
  • This complex pattern over multiple dimensions may be converted to lower dimensions to capture the movement data on nine axes in a one-dimensional vector with values over time.
  • a non-linear principal component analysis (PCA) based approach may be used when converting the data to a lower dimension to minimise reconstruction loss, i.e. the error incurred when reconstructing the original data from its lower-dimensional representation.
  • the non-linear PCA approach employed may be a non-linear PCA with a radial basis function (RBF) kernel, which may be used to generate lower dimension space values.
  • Other equivalent approaches such as manifold-based learning (e.g. t-SNE) may also be employed.
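The RBF-kernel non-linear PCA step above can be sketched in numpy: build the RBF kernel matrix over the motion segments, double-centre it, and project onto its leading eigenvectors. This is a textbook kernel-PCA illustration, not the disclosure's exact formulation; `gamma` is an assumed kernel width:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=1, gamma=1.0):
    """Project the rows of X (samples x features) onto the leading
    components of kernel PCA with an RBF kernel, returning the
    lower-dimensional coordinates of the training samples."""
    X = np.asarray(X, float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-gamma * sq)                               # RBF kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one            # double-centre K
    vals, vecs = np.linalg.eigh(Kc)                       # ascending order
    idx = np.argsort(vals)[::-1][:n_components]           # top components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas
```

The resulting one-dimensional trajectory per motion segment can then be compared against the feature library to score how anomalous a movement is.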
  • process 1900 may continue with operation 1908 where computer system 102 may compare the resulting vectors over time with similar features for the same tree with the same identification number, or for trees of similar age from a feature library, to determine if the particular movement is anomalous. If the computer system 102 determines in operation 1908 that a particular movement is anomalous, computer system 102 may trigger or register an alert. For example, if a detected motion for fertilization significantly deviates from an expected motion recorded in the feature library, an alert may be generated on or sent to the edge device held by the individual worker or harvester for immediate correction. Any anomaly or distance values may also be stored in the harvesting data model 244 as predictors of expected yield from trees related to the observed anomaly.
  • another data source 106 may be weather data 224 and feature 208 extracted may be water stress features 240.
  • Plant yield may be impacted by weather especially water stress which may be caused by either lack of water or excess water/flooding.
  • Water stress experienced by a tree may be determined by a combination of weather data (rainfall in mm), soil conditions (e.g. clay vs sandy soils) and environmental conditions such as Vapor Pressure Deficit (VPD) which is highly correlated to evapotranspiration (loss of water from the trees to the atmosphere).
  • a range of features may be created to capture lagged rainfall and water stress indicators.
  • Features may include a moving average of rainfall over a given time period (e.g. 15 days, 30 days, etc.), and intensity features such as the number of hours over a given time period (e.g. 15 days, 30 days, etc.) where more than a defined amount of rainfall (e.g. 100 mm) is observed within the hour.
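The lagged-rainfall features above reduce to simple window computations over the rainfall series. A minimal sketch (window lengths and the 100 mm threshold are the examples given; the treatment of the first few days is an assumption of this sketch):

```python
def moving_average(daily_mm, window):
    """Trailing moving average of a daily rainfall series (mm); the first
    window-1 entries average over the shorter available history."""
    out = []
    for i in range(len(daily_mm)):
        chunk = daily_mm[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def intense_hours(hourly_mm, threshold_mm, period_hours):
    """Number of hours within the trailing period where rainfall exceeded
    the threshold -- an intensity feature for flooding-related water
    stress."""
    recent = hourly_mm[-period_hours:]
    return sum(1 for mm in recent if mm > threshold_mm)
```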
  • Fig. 20 is a schematic illustration of a harvesting data model 244 incorporating all the features relevant for harvest and yield mapped on a tree level, in accordance with embodiments of the present disclosure.
  • the harvesting data model 244 created for each tree may include the following information:
  • Tree level features 228 extracted from aerial data sources 212 including growth, canopy, foliage level disease or anomalies, and growth or canopy change detection features;
  • Water stress features 240 extracted from weather data from external sources 224 provide history of weather information that is relevant for yield prediction including derived features that capture the water stress levels for a given tree based on soil type, terrain, age of tree etc.
  • the harvesting data model 244 may be used for prediction and optimisation of key plantation or agricultural operations and may be used to build operational decision 248 systems to support various operations. Examples of operational decisions 248 are harvester behaviour optimization 252, harvester schedule prediction 256, pest and disease prediction 260 and mill setpoint prediction 264.
  • FIGs. 21 to 26B illustrate the processes by which the harvesting data model 244 may be used to predict and optimise operational decisions 248, in accordance with embodiments of the present disclosure.
  • Fig. 21 is a schematic illustration of a process 2100 for the determination of operational decision 248 of harvester behaviour optimization 252, in accordance with embodiments of the present disclosure.
  • harvester behaviour optimization 252 may be performed to identify targeted corrective actions for individual workers which can then be used to prioritize training and deploy monitoring on smart devices or edge devices to provide targeted and real time alerts to the workers.
  • process 2100 commences with a harvesting data model 244.
  • a yield prediction model 2104 is used on the harvesting data model 244 to predict the yield at each tree level for a given time window (daily, weekly etc.), using historical data from the data model consisting of tree information (age, planting material, soil etc.), weather (water stress etc.), health and nutrient information, and harvesting quality from the tree, etc., but excluding worker movement features 236.
  • An example of the yield prediction model 2104 is a machine learning model using gradient boosted machines (GBM), although other models may be used.
  • the type of yield analysed may be a metric, such as the number of ripe fruits, or a rate, such as the oil extraction rate.
  • the delta, or difference, between the actual yield in a given time window (e.g. daily recorded yield) and the predicted yield is calculated to determine the difference between what was produced and what was expected. If the delta or difference is negative, it may indicate yield loss that may be related to harvester actions and not related to tree, environment, health, and weather factors. If the delta or difference is positive, it means that harvester behaviour may have led to positive changes in yield compared to what was seen in the population.
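The delta calculation described above can be sketched as follows; the yield figures are illustrative:

```python
def yield_delta(actual, predicted):
    """Negative delta: possible behaviour-related yield loss.
    Positive delta: behaviour may have improved yield relative
    to what was expected from non-behaviour factors."""
    return actual - predicted

# Illustrative daily (actual, predicted) yields in kg for one worker.
records = [(420.0, 450.0), (510.0, 480.0), (390.0, 440.0)]
deltas = [yield_delta(a, p) for a, p in records]
loss_days = [d for d in deltas if d < 0]  # candidates for attribution analysis
```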
  • a yield delta attribution model 2108 may then be used to leverage a combination of tree level features 228, certain harvest quality features 232 such as health anomalies, water stress features 240 with worker movement features 236 to predict a daily delta in yield for a given worker over a period of time, and preferably over a period of one year. Partial dependence analysis may be leveraged to quantify an impact of worker features and/or anomalies on the delta yield.
  • the yield delta attribution model 2108 may identify a worker anomaly of not harvesting trees away from the road frequently as a key cause of a drop in harvest yield and may quantify the impact to be up to 5% for the given worker. Another example may be that a worker not performing the required harvesting cut (e.g. a V-cut) may cause a 10% drop in harvest yield for a given worker.
  • Key worker actions impacting yield may be identified based on the results of the quantification of the impact of worker features on yield. Based on the identified key worker movement feature 236 and their calculated impact on harvest yield and quality, a prioritized list of actions may be generated for each worker to improve on. Corrective action and micro-supervision 2112 may be carried out on the list of actions generated by the yield delta attribution model 2108. The list of actions generated may be used to identify a training plan for a specific worker and to develop specific optimal thresholds for each worker movement features 236 and anomalies for the specific worker.
  • the training plan and developed thresholds may then be deployed on edge devices or smart devices to trigger real-time alerts when the specific worker performs a suboptimal action identified from the worker movement features 236, or deviates significantly from the developed threshold. For example, if a V-cut is not being performed on fruits from a mature tree, an alert is immediately activated on the edge device or smart device to remind the specific worker of the action.
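The per-worker threshold alerting described above could be sketched as follows. The 10% deviation tolerance and the cut-angle example are illustrative assumptions, not values from the disclosure:

```python
def check_worker_action(feature_value, worker_threshold, tolerance=0.10):
    """Return an alert message if the observed worker movement feature
    deviates from the worker's personal threshold by more than the
    given fractional tolerance; otherwise return None."""
    deviation = abs(feature_value - worker_threshold) / worker_threshold
    if deviation > tolerance:
        return "ALERT: deviation {:.0%} from optimal".format(deviation)
    return None

# Hypothetical example: optimal cut angle 45 degrees, observed 30 degrees.
alert = check_worker_action(30.0, 45.0)
ok = check_worker_action(44.0, 45.0)  # within tolerance, no alert
```

On an edge device, a check of this kind would run against each incoming movement feature and surface the alert to the worker in real time.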
  • Another operational decision 248 that may be determined is harvester schedule prediction 256. Timing of harvest influences the quality and quantity of harvest yield. Based on current practice, the timing of harvest is determined by the workers, and thus the quantity and quality of harvest yield is highly dependent on the field worker's level of experience. More experienced workers can draw on their years of experience to identify the optimal time to harvest fruit. As a result, experienced workers harvest better quality fruit that fetches a higher price. On the other hand, less experienced workers are more prone to harvesting lower quality fruit while they are gaining experience as harvesters. Therefore, the AI grading system previously employed in operation 740 may be used to optimise the time of harvesting, regardless of worker experience, to optimize quantity and quality of yield. Individual workers may use edge devices to take images of agricultural produce.
  • the AI grading system may determine the level of ripeness of the fruit identified on the images and inform the worker whether to harvest the fruit now or wait for a few more days before harvesting.
  • harvester schedule prediction 256 may be carried out to optimize harvesting operation by leveraging data features generated from the fused data in the harvesting data model 244 to build an AI model to predict the optimal time for each tree to be harvested. Such harvester schedule prediction may be subsequently used to drive better outcomes, such as optimized resource allocation in the field, improved yield quality, and less wastage.
  • An example of an AI model that may be used for harvester schedule prediction 256 is a time-to-harvest (TTH) AI model.
  • Fig. 22 is a flowchart of a TTH AI model 3200 used for the determination of the operational decision 248 of harvester schedule prediction 256, in accordance with embodiments of the present disclosure.
  • the TTH AI model has a fused feature database 3204 from which various data features 3208 may be derived. Examples of data features 3208 include historical yield and fruit quality, when the tree was last harvested, irrigation and fertilization schedules, tree properties such as species variety, age, and tree health, as well as weather events such as average rainfall since last harvest, total amount of rainfall since last harvest, and severity of weather anomalies such as El Nino conditions.
  • Historical data containing both the yield and data features may be used to train the AI model to predict the right time to harvest to meet a required yield.
  • the approaches to the AI model employed may be a random forest or a support vector machine that learns to predict yield at a given date based on historical data features corresponding to weather, tree properties, harvest history, etc.
  • the TTH AI model may score the data features 3208 to predict for each tree the number of days till the fruits are at optimal quality 2212, and then generate a sorted table or list of plants in order of predicted time to harvest 2216.
  • the sorted table or list may be based on maximum expected yield for harvesting a given tree or trees within an area for the given date.
  • the sorted table or list may then be sent to a harvest optimizer to generate an optimal harvesting plan 2220 or used for further scheduling or optimization.
  • An optimal harvesting plan may be generated by assigning available harvesters in order of expected yield from individual trees or trees within a region.
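The sorted time-to-harvest list and the greedy assignment of available harvesters described above could be sketched as follows; the tree identifiers, yields, and the tie-breaking rule (higher expected yield first) are illustrative assumptions:

```python
def sort_by_time_to_harvest(predictions):
    """predictions: (tree_id, predicted_days_to_optimal, expected_yield_kg).
    Trees ready soonest come first; ties go to higher expected yield."""
    return sorted(predictions, key=lambda t: (t[1], -t[2]))

def assign_harvesters(sorted_trees, harvesters):
    """Greedy assignment of available harvesters to the highest-priority trees."""
    return {w: tree[0] for w, tree in zip(harvesters, sorted_trees)}

preds = [("T07", 3, 180.0), ("T01", 0, 150.0), ("T04", 0, 210.0)]
ordered = sort_by_time_to_harvest(preds)
plan = assign_harvesters(ordered, ["W1", "W2"])
```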
  • the harvesting plan may be carried out and the actual harvesting activity and harvest quality data, and other updated features, may be continuously recorded by computer system 102 to provide feedback 2224 to the AI model 3200.
  • the feedback may be provided to the AI model as input through periodic or real-time updates 2228 from data sources 106, including ongoing harvesting and monitoring activity, which is then integrated back into the fused feature database 3204.
  • the performance of the deployed TTH AI model may be continuously monitored and calibrated 2232 as the performance may degrade over time due to factors such as changing data distributions and different operating conditions.
  • Monitoring and calibration 2232 may be carried out by comparing the predicted TTH and the corresponding quality of fruit that was harvested according to the predictions, and deriving a metric from it such as the MAPE (Mean Absolute Percentage Error) between the predicted yield for harvesting on a given date and the actual observed yield.
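A minimal sketch of the MAPE monitoring metric described above; the 15% retraining threshold is a hypothetical value, not one specified in the disclosure:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent, over paired values."""
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(terms) / len(terms)

observed = [200.0, 250.0, 100.0]   # actual yields, kg
forecast = [180.0, 275.0, 100.0]   # predicted yields, kg
error_pct = mape(observed, forecast)

RETRAIN_THRESHOLD_PCT = 15.0  # hypothetical alert threshold
needs_retraining = error_pct > RETRAIN_THRESHOLD_PCT
```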
  • a model performance report 2236 on the derived metric and other statistics may be generated based on the continuous monitoring and calibrating 2232 of the TTH AI model. If the TTH AI model is determined to be performing poorly, i.e. if the metric drops below a certain threshold, an alert may be sent to the relevant stakeholder to trigger model retraining, where the TTH AI model undergoes model training and validation 3240 before being deployed to replace the existing TTH AI model 3244. Model retraining may also be manually triggered by a user.
  • Fig. 23 is a flowchart of a process 2300 that may be used to develop a new TTH model following existing best processes or to retrain an existing TTH model, in accordance with embodiments of the present disclosure.
  • Process 2300 may commence with training data 2304.
  • Training data 2304 may be a historical dataset e.g. from prior years, which include both yield and data features that can be used to predict the yield (weather, tree properties, harvest history). Training data 2304 may be used to train the AI model to predict future yield given the data features.
  • the training data 2304 may be split 2308 to generate three sets of data: a training set 2312, a validation set 2316, and a holdout set 3320.
  • the training set 2312 may optionally undergo data augmentation 3324 to ensure that the data is balanced (e.g. proportional amount of data for different yields).
  • One method of data augmentation corresponding to sparse categories of yield could be SMOTE (Synthetic Minority Oversampling Technique).
  • the data that has undergone data augmentation 3324 may be integrated with the data in the validation set 2316 to carry out model training 3328, followed by hyper-parameter tuning 2332 that may identify the best values for mathematical parameters during training of the AI model.
  • data in holdout set 3320 may be used to carry out model testing 2336 to identify the accuracy of the AI model using metrics such as MAPE or classification error.
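The three-way split 2308 of training data 2304 could be sketched as follows; the 70/15/15 fractions and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(rows, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle, then split into training, validation and holdout sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = round(len(rows) * train_frac)
    n_val = round(len(rows) * val_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

train, val, holdout = split_dataset(range(100))
```

The holdout set is kept entirely out of training and tuning so that the final model testing 2336 measures accuracy on genuinely unseen data.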
  • Fig. 24 is a flowchart of a process 2400 through which the TTH model may determine optimized harvesting parameters, in accordance with embodiments of the present disclosure.
  • the fused data in the fused feature database 3204 may be used to optimize harvesting parameters 2404, including minimization of the harvesting time 2408, minimal harvester movement 2412, and selection of the appropriate harvester to harvest fruits of a specific grade 2416.
  • the harvesting activities may be optimised for different objectives, subject to various constraints. Examples of objectives include maximizing quantity of fruit, maximizing quality of fruits harvested, and maximizing monetary value based on the current market price of commodity.
  • the data features 2420 utilised for harvest optimisation include location of tree and time to harvest, available manpower, number of harvesters, duration of each shift, number of collection vehicles for transportation to the processing plant, fruit grade for harvesting, capacity of downstream processing units, harvester grade, harvester efficiency by type of tree (height of tree, size of fruit etc.), weather (impairment to harvesting efficiency with current weather), and plantation parameters like shape and size.
  • the data features 2420 and optimized harvesting parameters 2305 are taken into account by an AI algorithm to optimize the multivariate harvest problem 2424.
  • the model performance 2428 is then evaluated based on the achieved maximum yield across a given number of trees and workers. An optimized model maximum yield is compared to the theoretical maximum yield under ideal conditions where the number of workers is not a constraint for harvest (i.e. unlimited manpower).
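The comparison of optimized yield against the theoretical maximum could be expressed as a simple efficiency ratio; this is an illustrative sketch, not the disclosed evaluation procedure:

```python
def harvest_efficiency(optimized_yield_kg, theoretical_max_kg):
    """Fraction of the unconstrained theoretical maximum achieved by the
    optimized plan; 1.0 means no yield was lost to manpower limits."""
    if theoretical_max_kg <= 0:
        raise ValueError("theoretical maximum must be positive")
    return optimized_yield_kg / theoretical_max_kg

# Illustrative numbers: the optimized plan recovers 84% of the ideal yield.
efficiency = harvest_efficiency(8400.0, 10000.0)
```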
  • a printable report that provides information on tree locations to be harvested and sequence of visit to these trees may be generated for each individual harvester 2432.
  • a wearable device may be used during harvest to confirm that the trees were visited and harvested in the right manner 2436.
  • FIG. 25 is a flowchart illustrating a process 2500 of determining anomalies, in accordance with embodiments of the present disclosure.
  • anomalies are identified by detecting damage that may be due to pests and diseases (P&D) or harvesting practices.
  • the data features 2504 used for anomaly detection include images of fruits, harvesting time and fruit grade.
  • Anomalies are identified using AI-based anomaly detection 2508 to identify fruit- level anomalies 2512.
  • Fruit-level anomalies include features associated with pests and diseases and harvesting practices.
  • Examples include features resulting from processing of harvested fruits (e.g. a v-cut in the stem), fruit damage, rat bites, and Tirathaba infestation. These anomalies are detected in the pre-processed images previously obtained in operation 316, and the features in those pre-processed images may be flattened except for the features associated with pests and diseases and harvesting practices. The harvesting time and grade of harvested fruits may also be utilized, as they may scale up the impact of anomalies on the quality of harvested fruits.
  • Fig. 26A is an example of an image of a stalk with harvesting practice-based anomaly (long-stalk v-cut), while Fig. 26B is an example of an image of a rat bite at a fresh fruit bunch, in accordance with embodiments of the present disclosure.
  • a list of fruit-specific anomalies may be customized for each type of fruit.
  • Fig. 27A is an example of an image of sour rot in grapes
  • Fig. 27B is an example of an image of black rot in grapes, in accordance with embodiments of the present disclosure.
  • These anomalies specific to grapes may be included in a list of anomalies specifically customised for grapes.
  • Quality assessment of AI models employed 2516 may be carried out to determine performance of the model 2520. If the model performance 2520 is good, the data may be transferred for harvest quality assessment 2524. If the model performance 2520 is not good, further model training 2528 and model validation 2532 may be carried out until the anomaly detection model achieves a required accuracy.
  • Another operational decision 248 that may be determined is mill setpoint prediction 264.
  • Some types of fruit require added processing before they can be consumed by end users. For example, palm oil fruits need to be processed in a mill to extract palm oil and crude palm oil from the fruit. To extract the most from the palm oil fruit, the set points of the processing mill need to be optimized. The output quality and the optimal utilization of the resources at the processing plant can only be achieved through continuous calibration of the parameters at the processing plant. For some fruits, the optimization of set points may be quite complicated, given that there are multiple set points that need to be adjusted.
  • Process 200 running on computer system 102 uses an AI-based grading system to automate grading for the mill. This enables mill operations to understand the ripeness distribution for each batch of fruit. The information is then used by the mill operations to dynamically calibrate the set points of the processing mill so that they are optimized to extract the most out of the fruit. In the case of palm oil fruits, optimizing the set points of the processing mill allows the mill to improve the oil extraction rate (OER).
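A loose sketch of dynamically calibrating a processing-mill set point from the ripeness distribution produced by the AI grading system. The mapping from ripeness share to sterilizer duration, the 90-minute base, and the 20-minute adjustment are entirely hypothetical illustrations, not real mill parameters:

```python
def sterilizer_minutes(ripeness_distribution):
    """Map a batch's ripeness distribution to a sterilizer duration
    set point: the riper the batch, the shorter the cooking time.
    ripeness_distribution: fractions for 'unripe'/'ripe'/'overripe'."""
    base_minutes = 90.0
    ripe_share = (ripeness_distribution.get("ripe", 0.0)
                  + ripeness_distribution.get("overripe", 0.0))
    # Shave up to 20 minutes off for a fully ripe batch (illustrative only).
    return base_minutes - 20.0 * ripe_share

setpoint = sterilizer_minutes({"unripe": 0.2, "ripe": 0.7, "overripe": 0.1})
```

In practice there would be one such calibration per adjustable set point, each informed by the per-batch ripeness distribution.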

Abstract

The present invention relates to a computer-implemented method for improving crop harvesting, the method comprising collecting data from one or more data sources, extracting one or more features from the collected data, determining a quality grading from the extracted feature(s), generating a harvesting data model based on the collected data, the extracted feature(s) and/or the determined quality grading, and using the harvesting data model to determine one or more operational decisions. The method may be used on a system comprising at least one memory device and at least one processor, or a computer-readable storage medium comprising instructions to be executed by means of at least one memory device and at least one processor.
PCT/SG2021/050255 2020-05-08 2021-05-07 System and method for artificial intelligence (AI)-based improvement of harvesting operations WO2021225528A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10202004263X 2020-05-08
SG10202004263X 2020-05-08

Publications (1)

Publication Number Publication Date
WO2021225528A1 true WO2021225528A1 (fr) 2021-11-11

Family

ID=78468779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2021/050255 WO2021225528A1 (fr) 2020-05-08 2021-05-07 System and method for artificial intelligence (AI)-based improvement of harvesting operations

Country Status (1)

Country Link
WO (1) WO2021225528A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160202227A1 (en) * 2015-01-14 2016-07-14 Accenture Global Services Limited Precision agriculture system
US20170161560A1 (en) * 2014-11-24 2017-06-08 Prospera Technologies, Ltd. System and method for harvest yield prediction
US20190050948A1 (en) * 2017-08-08 2019-02-14 Indigo Ag, Inc. Machine learning in agricultural planting, growing, and harvesting contexts
WO2019073472A1 (fr) * 2017-10-13 2019-04-18 Atp Labs Ltd. Système et procédé de gestion et de fonctionnement d'une chaîne logistique de fabrication de produits d'origine agricole
US20190335674A1 (en) * 2016-10-24 2019-11-07 Board Of Trustees Of Michigan State University Methods for mapping temporal and spatial stability and sustainability of a cropping system
US20200250426A1 (en) * 2019-02-05 2020-08-06 Farmers Edge Inc. Harvest Confirmation System and Method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21799662; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21799662; Country of ref document: EP; Kind code of ref document: A1)