NL2028679B1 - A vision system for providing data related to the plant morphology of a plant using deep learning, as well as a corresponding method.
- Publication number
- NL2028679B1
- Authority
- NL
- Netherlands
- Prior art keywords
- plant
- image
- module
- vision system
- pixels
- Prior art date
Classifications
- G06T 7/60 — Analysis of geometric attributes (under G06T 7/00 Image analysis)
- G06T 2207/10004 — Still image; Photographic image (image acquisition modality)
- G06T 2207/10028 — Range image; Depth image; 3D point clouds (image acquisition modality)
- G06T 2207/20084 — Artificial neural networks [ANN] (special algorithmic details)
- G06T 2207/30108 — Industrial image inspection (subject of image; context of image processing)
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
A vision system for providing data related to the plant morphology of a plant, said vision system comprising a sensor module comprising at least one camera for capturing an image of a plant and comprising at least one sensor arranged for providing a corresponding depth image of said plant, a feature mapping module comprising a feature model, wherein the feature mapping module is arranged for providing annotated pixels in said image and/or said depth image obtained from said sensor module using deep learning in combination with said feature model, wherein at least some of said pixels in said image have an annotation corresponding to said data related to said plant morphology of said plant, a plant mapping module comprising a plant model, wherein said plant mapping module is arranged to receive said depth image or said image of said plant and to receive said annotated pixels in said image or said depth image of said plant and to determine at least one part of said plant using said received depth image or said image and said received annotated pixels by providing a point cloud of at least a subset of said annotated pixels, a world mapping module arranged for receiving said point cloud, and for determining said data related to the plant morphology of said plant based on said received point cloud.
Description
Title
A vision system for providing data related to the plant morphology of a plant using deep learning, as well as a corresponding method.
Technical field
The present disclosure is directed to a vision system and, more specifically, to a vision system and a method for providing data related to the plant morphology of a plant using deep learning.

Background
Vision systems for providing data related to the plant morphology of a plant are known. One such vision system is described in US patent 10520482.
US 10520482 describes systems and methods for monitoring agricultural products. In particular, it relates to monitoring fruit production, plant growth, and plant vitality. According to examples, a plant analysis system is configured to determine a spectral signature of a plant based on spectral data, and plant colour based on photographic data.
The spectral signature and plant colour are associated with assembled point cloud data. Morphological data of the plant is then generated based on the assembled point cloud data. A record of the plant is created that associates the plant with the spectral signature, plant colour, spectral data, assembled point cloud data, and morphological data, and is stored in a library.
It is noted that the field of agriculture is undergoing a marked transition. After the industrialization of many farming processes and a series of automation initiatives, the introduction of high-tech tools in the farming process continues. It is envisioned that this process will continue even further.

Summary
It is an object of the present disclosure to provide for a vision system for providing data related to the plant morphology of a plant, which vision system is an improvement compared to prior art vision systems.
It is a further object of the present disclosure to provide for a corresponding method.
In a first aspect of the present disclosure, there is provided a vision system for providing data related to the plant morphology of a plant, said vision system comprising:
- a sensor module comprising at least one camera for capturing an image of a plant and comprising at least one sensor arranged for providing a corresponding depth image of said plant;
- a feature mapping module comprising a feature model, wherein the feature mapping module is arranged for providing annotated pixels in said image and/or said depth image obtained from said sensor module using deep learning in combination with said feature model, wherein at least some of said pixels in said image have an annotation corresponding to said data related to said plant morphology of said plant;
- a plant mapping module comprising a plant model, wherein said plant mapping module is arranged to receive said depth image or said image of said plant and to receive said annotated pixels in said image or said depth image of said plant and to determine at least one part of said plant using said received depth image or said image and said received annotated pixels by providing a point cloud of at least a subset of said annotated pixels;
- a world mapping module arranged for receiving said point cloud, and for determining said data related to the plant morphology of said plant based on said received point cloud.
It was the insight of the inventor that it is beneficial to use deep learning in vision systems for a variety of reasons, among other things that the vision system becomes more accurate and more versatile.
In accordance with the present disclosure, deep learning is a type of machine learning, which is a subset of artificial intelligence.
Machine learning relates to processing devices that are able to perform tasks without being explicitly programmed. Their ability to perform some complex tasks, like gathering data from an image or video, may fall far short of what humans are capable of.
Deep learning models may introduce a sophisticated approach to machine learning and are set to tackle these challenges because they have been specifically modeled after the human brain. Complex, multi-layered deep neural networks are constructed to allow data to be passed between nodes in highly connected manners. The result is a non-linear transformation of the data that is increasingly abstract.
It may take quite a bit of data volume to feed and build a deep learning based system. However, the results of the system may be generated immediately, and there may be little, or no, need for human intervention once the deep learning models are in place.
A deep learning model may use different types of neural networks, for example convolutional neural networks and recurrent neural networks.
Convolutional neural networks may comprise algorithms that are created to work with images. They apply a weight-based filter across every element of an image, thereby aiding a processing device to understand and react to elements within the image.
Recurrent neural networks may introduce a memory aspect into the algorithms. The processing device may be able to store, and understand, previous data and the decisions concerning that data, and may use that understanding in reviewing the current data.
Although the present disclosure is particularly suited to deep learning algorithms based on convolutional neural networks, any type of model, such as a machine learning model, deep learning model, mathematical model, etc., may be utilized.
The present disclosure is thereby directed to using deep learning techniques and knowledge of a plant to obtain data related to the plant morphology of any plant. One of the benefits is that this removes the time-consuming manual task of measuring plant morphology. Another benefit is that this can be used to control robotics for tasks such as cutting leaves and harvesting crops. Another benefit is that this can be used for any plant and environment. Another benefit is that the performance improves over time, as the underlying models improve over time. Another benefit is that multiple devices may be able to share their data in order to improve over time. These benefits are explained further below.
The present disclosure is directed to providing data related to the plant morphology of a plant. The plant morphology is related to the physical form and external structure of a plant. It is thus related to the development, form, and structure of a plant, and, by implication, an attempt to interpret these on the basis of similarity of plants.
Plant morphology may thus refer to describing or measuring the external structure of a plant in terms of locations (cm), areas (cm²), volumes (cm³) or derived properties, such as weight (density × volume). For growers of plants such as cucumbers, tomatoes, strawberries and more it is common to measure the plant morphology manually. Typical plant morphology measurements include:
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) height, width, depth;
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) area [cm²];
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) volume [cm³];
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) growth speed [cm/day], [cm²/day], [cm³/day];
- Distances between plants or plant parts, such as inter-nodal distance [cm], inter-plant distance [cm] or distance growth speed [cm/day];
- Number of plants or plant parts, such as number of flowers or number of fruits, or the development speed thereof, such as the number of flowers per day;
- Plant or plant part condition, such as diseases or health status (e.g. tension in leaves);
- Counting fruits/vegetables by life cycle stage.
The vision system can also output a plant development forecast, such as:
- a prediction of the leaf area growth over a time period;
- a prediction of the plant height growth over a time period;
- a prediction of when to harvest fruits and how much.
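By way of illustration, one of the measurements listed above, leaf area, can be approximated from a segmentation mask and a depth image. The sketch below is a minimal example, not taken from the disclosure: it assumes a pinhole camera with focal lengths fx and fy in pixels, a depth image in metres, and a leaf surface roughly facing the camera (leaf tilt, which a plant model could correct for, is ignored).

```python
import numpy as np

def leaf_area_cm2(mask: np.ndarray, depth_m: np.ndarray,
                  fx: float, fy: float) -> float:
    """Approximate the real-world area covered by a segmentation mask.

    Under a pinhole model, a pixel at depth Z covers roughly
    (Z / fx) * (Z / fy) square metres of a surface facing the camera,
    so summing that footprint over all mask pixels estimates the area.
    """
    z = depth_m[mask]                      # depths of the masked pixels
    z = z[np.isfinite(z) & (z > 0)]        # drop invalid depth readings
    area_m2 = np.sum((z / fx) * (z / fy))  # per-pixel footprint, summed
    return float(area_m2 * 1e4)            # m^2 -> cm^2

# Example: a 100 x 200 pixel leaf at 0.5 m with fx = fy = 600 px
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 100:300] = True
depth = np.full((480, 640), 0.5)
print(f"{leaf_area_cm2(mask, depth, 600.0, 600.0):.1f} cm^2")  # ~138.9
```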
Plant morphology is considered to be important for growers, as it provides input for measuring the health condition of a field of plants.
In accordance with the present disclosure, the vision system comprises a sensor module comprising at least one camera for capturing an image of a plant and comprising at least one sensor arranged for providing a corresponding depth image of the plant.
The vision system may, for example, comprise an RGB-D camera for providing the image of the plant and for providing said corresponding depth image of said plant. That is, one single RGB-D camera can be used for obtaining the image as well as the depth image.
RGB-D cameras are a specific type of depth sensing device that works in association with an RGB camera. These types of cameras are able to augment a conventional image with depth information, i.e. data related to the distance to the sensor, on a per-pixel basis. It may thus be possible to have depth data for each of the pixels in the image.
Another option is to include two or more cameras. By using multiple cameras, the depth of each of the pixels in an image may be calculated.
Yet another option is to provide for a laser scanner device, for laser scanning the view of the camera, thereby obtaining depth data for each of the pixels of the image captured by the camera. The depth image may comprise depth information for all pixels in the image captured by the at least one camera, or may comprise depth information for a subset of pixels in the image captured by the at least one camera.
In the end, the sensor module should be able to provide an image of the plant as well as depth information corresponding to the provided image of the plant.
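A common way to combine the image and the depth information is to back-project each pixel with a valid depth into a 3D point in the camera frame. The following is a minimal sketch, again assuming a pinhole camera with intrinsics fx, fy, cx, cy; the disclosure does not prescribe a particular camera model.

```python
import numpy as np

def backproject(depth_m: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Turn a depth image into an (N, 3) point cloud in camera coordinates.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth,
    where (u, v) is the pixel column/row and Z the measured depth.
    """
    v, u = np.indices(depth_m.shape)   # row and column index grids
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # keep pixels with a valid depth
```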
In addition to the above, the sensor module may comprise a heat sensor for obtaining heat data related to the image of the plant. The sensor module may also comprise a hyper spectral camera.
The image obtained by the at least one camera and the other images, for example the depth image and/or the above described heat image, may be aligned in a next step of the process.
The vision system further comprises a feature mapping module, comprising a feature model, wherein the feature mapping module is arranged for providing annotated pixels in the image and/or the depth image obtained from the sensor module using deep learning in combination with the feature model. Here, at least some of the pixels in the image have an annotation corresponding to the data related to the plant morphology of the plant, i.e. the data that is to be provided by the vision system.
The feature mapping module may, for example, provide classification labels for classifying the pixels, or a subset of pixels, in any of the images provided by the sensor module. Another option is that the feature mapping module may denote bounding boxes for indicating a particular region of interest, for example a node of a plant, i.e. a point where a leaf branch is attached to e.g. the main stem. Yet another option is that the output is related to a segmentation process, wherein the plant is segmented for obtaining a realistic digital representation of the plant. This may include semantic segmentation as well as instance segmentation. Other options include (poly)line detection and keypoint detection. A further option is that the above described outputs are combined with each other.
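The different output types can be carried in one structure. The container below is purely illustrative; the field names and encodings are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class FeatureMapOutput:
    """Illustrative container for feature mapping module outputs."""
    # Per-pixel class labels, e.g. 0=background, 1=stem, 2=leaf, 3=node
    semantic_mask: Optional[np.ndarray] = None   # (H, W) integer array
    # Bounding boxes as (x_min, y_min, x_max, y_max, class_id, score)
    boxes: list[tuple[float, float, float, float, int, float]] = field(default_factory=list)
    # Keypoints as (u, v, class_id), e.g. detected branch nodes
    keypoints: list[tuple[float, float, int]] = field(default_factory=list)
    # Polylines as lists of (u, v) vertices, e.g. the traced main stem
    polylines: list[list[tuple[float, float]]] = field(default_factory=list)
```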
It is further noted that other vision models may be included, for example for detecting physical markers like QR codes and Aruco codes.
As mentioned above, the feature mapping module comprises a feature model, and uses deep learning models for providing its corresponding output. It is noted that the word “deep” typically relates to the number of layers through which the data is transformed.
The vision system further comprises a plant mapping module, comprising a plant model, for providing a point cloud of at least a subset of the annotated pixels.
The plant model may comprise features related to the expected size of a leaf, the colour of a leaf, the number of nodes, the spread of the nodes, and the like.
In the end, a point cloud may be provided of the region of interest, for example the plant, or the leaf of a plant, or the like. Pixels not belonging to the region of interest may be disregarded.
A point cloud is, for example, a set of positions denoted as (x,y) or (x,y,z). It may also be a list of positions associated with features, like (x,y,z,f1,f2,f3,...), wherein f1, f2 and f3 are particular features. That means that each position, or coordinate, may be associated with an annotation, i.e. a feature. An annotation may, for example, be a number, a piece of text, an image, a further data type, or a combination of data types.
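The (x, y, z, f1, f2, ...) representation maps naturally onto, for instance, a NumPy structured array, as in the sketch below. The field names and class encoding are illustrative assumptions.

```python
import numpy as np

# One record per point: a position plus annotation features
point_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("class_id", np.int16),      # e.g. 1=stem, 2=leaf, 3=node
    ("confidence", np.float32),  # model confidence for the annotation
])

cloud = np.zeros(3, dtype=point_dtype)
cloud[0] = (0.01, -0.20, 0.55, 2, 0.93)   # a leaf point at z = 0.55 m

# Isolate a region of interest, here all points annotated as leaf
leaf_points = cloud[cloud["class_id"] == 2]
```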
It is noted that the vision system in accordance with the present disclosure is able to determine the plant morphology of a plant or of multiple plants.
Further, it is noted that a user may be able to determine actions, i.e. pre-determine rules for triggering a particular action. For example, if the output of the vision system meets a particular rule, the user may be notified by e-mail, SMS, or the like.
For example, if the measured area of multiple leaves exceeds a particular threshold, a user may be notified. As an alternative, a robot may be notified, wherein the robot performs a particular action such as cutting a leaf.
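A minimal sketch of such a trigger rule follows; the threshold and the notification channel are placeholders (here `print` stands in for an e-mail/SMS gateway or a robot command queue).

```python
from typing import Callable

def make_leaf_area_trigger(threshold_cm2: float,
                           notify: Callable[[str], None]) -> Callable[[float], None]:
    """Return a rule that fires `notify` when a measured area exceeds a threshold."""
    def check(measured_area_cm2: float) -> None:
        if measured_area_cm2 > threshold_cm2:
            notify(f"leaf area {measured_area_cm2:.0f} cm^2 exceeds "
                   f"threshold {threshold_cm2:.0f} cm^2")
    return check

trigger = make_leaf_area_trigger(150.0, notify=print)
trigger(180.0)   # fires the notification
trigger(120.0)   # stays silent
```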
In an example, the data related to the plant morphology of said plant comprises any of:
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) height, width, depth;
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) area [cm²];
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) volume [cm³];
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) growth speed [cm/day], [cm²/day], [cm³/day];
- Distances between plants or plant parts, such as inter-nodal distance [cm], inter-plant distance [cm] or distance growth speed [cm/day];
- Number of plants or plant parts, such as number of flowers or number of fruits, or the development speed thereof, such as the number of flowers per day;
- Plant or plant part condition, such as diseases or health status (e.g. tension in leaves);
- Counting fruits/vegetables by life cycle stage.
The vision system can also output a plant development forecast, such as:
- a prediction of the leaf area growth over a time period;
- a prediction of the plant height growth over a time period;
- a prediction of when to harvest fruits and how much.
In a further example, the feature mapping module comprises any of:
- a classification module for annotating said image;
- an object detection module for detecting objects in said image;
- a semantic segmentation module for annotating pixels in said image;
- a keypoint detection module for detecting points in said image;
- a (poly)line detection module for classifying plant parts.
The classification module may, for example, make annotations on a per-pixel basis, on a per-region basis, or on a per-image basis. The classification may be extensive. For example, a particular classification may be assigned to a pixel, wherein the classification is retrieved from a long list of possible classifications.
The object detection module may be used for detecting objects in the image, for example a node, a leaf, the life stage/blooming stage of a fruit, or the like.
In another example, the feature mapping module comprises:
- receiving means for receiving said feature model.
The receiving means may, for example, be based on a public network interface, like the internet, or a USB interface, or the like. The feature model may be downloaded from the internet. Multiple feature models may exist, wherein a particular owner of the vision system may choose one feature model that is to be used within the vision system. If that particular feature model does not provide the desired results, the owner of the vision system may decide to retrieve a different feature model.
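As a sketch, downloading a feature model over a network interface could look like the following; the registry URL, file naming and ONNX format are assumptions for illustration only.

```python
import urllib.request
from pathlib import Path

# Hypothetical model registry; the URL and naming scheme are not part
# of the disclosure.
REGISTRY_URL = "https://models.example.com"

def fetch_feature_model(name: str, dest_dir: Path) -> Path:
    """Download a named feature model file and return its local path."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / f"{name}.onnx"
    urllib.request.urlretrieve(f"{REGISTRY_URL}/{name}.onnx", target)
    return target
```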
It is further noted that a network of vision systems may be provided, wherein the vision systems are interconnected to each other, either actively or passively. The feature models may be shared among the vision systems, or data related to the feature models may be shared among the vision systems, for increasing the accuracy of a particular vision system.
In an example, the plant mapping module comprises:
- receiving means for receiving said plant model.
This example is in line with the receiving means for receiving the feature model.
In a further example, the vision system comprises an identification module arranged for performing a plant identification step for identifying said plant in said image, and for retrieving a feature model and an associated plant model of said identified plant.
The advantage hereof is that the vision system does not need to be tailored to each and every available plant. The vision system may be arranged to identify the plant, and to retrieve the corresponding feature model and plant model accordingly.
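The identify-then-retrieve flow could be sketched as below; the species classifier and the registry contents are placeholders, not taken from the disclosure.

```python
# Hypothetical mapping from an identified species to its model pair
MODEL_REGISTRY = {
    "cucumber": ("cucumber_features_v3", "cucumber_plant_v3"),
    "tomato":   ("tomato_features_v5", "tomato_plant_v5"),
}

def identify_species(image) -> str:
    """Placeholder for a coarse species classifier (deep learning in practice)."""
    return "cucumber"

def models_for(image) -> tuple[str, str]:
    """Return the (feature model, plant model) pair for the plant in the image."""
    species = identify_species(image)
    if species not in MODEL_REGISTRY:
        raise KeyError(f"no model pair registered for {species!r}")
    return MODEL_REGISTRY[species]
```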
It is noted that multiple reasons may exist for updating a model, which include the insight that a generic model requires much more memory and computational power and that a specialized model provides for a higher accuracy.
In a further example, at least one output of said sensor module, said feature mapping module, said plant mapping module and said world mapping module is stored in a database called vision data, wherein said vision system further comprises:
- a client portal for annotating said vision data for improving any of said sensor module, said feature mapping module, said plant mapping module and said world mapping module.
The vision system may be a handheld system. A handheld system is a system that can be carried by a person, for example a system the size of a phone or a tablet.
In a second aspect, there is provided a method for providing data related to the plant morphology of a plant, using a vision system in accordance with any of the previous claims, wherein said method comprises the steps of:
- capturing, by said sensor module, an image of a plant and providing a corresponding depth image of said plant;
- providing, by said feature mapping module, annotated pixels in said image and/or said depth image obtained from said sensor module using deep learning in combination with said feature model, wherein at least some of said pixels in said image have an annotation corresponding to said data related to said plant morphology of said plant;
- receiving, by said plant mapping module comprising a plant model, said depth image or said image of said plant and receiving said annotated pixels in said image or depth image of said plant and determining at least one part of said plant using said received depth image or said image and said received annotated pixels by providing a point cloud of at least a subset of said annotated pixels;
- receiving, by said world mapping module, said point cloud and determining said data related to the plant morphology of said plant based on said received point cloud.
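The four steps compose into a linear pipeline. The sketch below treats each module as a callable; the interfaces are an assumption, since the disclosure does not prescribe them.

```python
def run_pipeline(sensor, feature_mapper, plant_mapper, world_mapper):
    """Chain the four method steps; each argument is a callable module."""
    image, depth = sensor()                         # step 1: capture image + depth
    annotations = feature_mapper(image, depth)      # step 2: annotated pixels
    point_cloud = plant_mapper(depth, annotations)  # step 3: per-part point cloud
    return world_mapper(point_cloud)                # step 4: plant morphology data
```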
The advantages as explained with respect to the first aspect of the present disclosure, being the vision system, are also applicable to the second aspect of the present disclosure, being the method for providing data related to the plant morphology of a plant.
In an example, the sensor module comprises an RGB-D camera for providing said image of said plant and for providing said corresponding depth image of said plant.
In a further example, the data related to the plant morphology of said plant comprises any of:
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) height, width, depth;
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) area [cm²];
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) volume [cm³];
- Plant or plant part (e.g. fruit, flower, stem, leaf, root) growth speed [cm/day], [cm²/day], [cm³/day];
- Distances between plants or plant parts, such as inter-nodal distance [cm], inter-plant distance [cm] or distance growth speed [cm/day];
- Number of plants or plant parts, such as number of flowers or number of fruits, or the development speed thereof, such as the number of flowers per day;
- Plant or plant part condition, such as diseases or health status (e.g. tension in leaves);
- Counting fruits/vegetables by life cycle stage.
The vision system can also output a plant development forecast, such as:
- a prediction of the leaf area growth over a time period;
- a prediction of the plant height growth over a time period;
- a prediction of when to harvest fruits and how much.
In another example, the step of providing said annotated pixels comprises any of:
- annotating said pixels using a classification module;
- annotating said pixels using an object detection module for detecting objects in said image;
- annotating said pixels using a semantic segmentation module;
- detecting points in said image using a keypoint detection module.
In an example, the method comprises the step of:
- receiving, by said receiving means, said feature model.
In a further example, the method comprises the step of:
- receiving, by receiving means, said plant model.
In yet another example, the step of receiving further comprises performing, by an identification module, a plant identification step for identifying said plant in said image, and retrieving an associated plant model and feature model of said identified plant.
In another example, at least one output of said sensor module, said feature mapping module, said plant mapping module and said world mapping module is stored in a database called vision data, wherein said method further comprises:
- annotating, by a client portal of said system, said vision data for improving any of said sensor module, said feature mapping module, said plant mapping module and said world mapping module.
In a third aspect, there is provided a computer readable medium having instructions stored thereon which, when executed by a vision system, cause said vision system to implement a method in accordance with any of the examples as provided above.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Brief description of the Drawings
Figure 1 discloses, schematically, a vision system in accordance with the present disclosure;
Figure 2 discloses, schematically, different kinds of applications for the vision system in accordance with the present disclosure;
Figure 3a discloses a further schematic view of a vision system in accordance with the present disclosure;
Figure 3b discloses another schematic view of a vision system in accordance with the present disclosure;
Figure 4 discloses, schematically, a plant;
Figure 5 discloses, schematically, the steps that are being performed by the vision system in accordance with the present disclosure.

Detailed description
Figure 1 discloses, schematically, a vision system 101 in accordance with the present disclosure.
The vision system 101 is arranged for providing data related to the plant morphology of a plant 104.
The vision system 101 comprises a sensor module comprising at least one camera 102 for capturing an image of the plant 104. Further, at least one sensor is provided (not shown) for providing a corresponding depth image of the plant 104. The sensor and the camera 102 may be integrated into a single RGB-D camera. The sensor module 102 is connected to a processing unit 103, wherein the processing unit 103 schematically discloses the feature mapping module, the plant mapping module and the world mapping module, as will be discussed later below.
The interface between the sensor module 102 and the processing unit 103 may be wired or wireless. The camera 102 may be any of an RGB camera, an RGB-D camera, a heat camera, a hyperspectral camera, or a combination of these.
The vision system is arranged to use the camera 102 to measure a particular property of the plant 104. The data provided by the camera 102 is processed by the processing unit 103 by using one or more deep learning models 103. As there are many different types of plants and environments, the models 103 may generally need to be updated and optimized for a specific plant and environment.
In one example, the models, for example the feature model and the plant model, are pre-configured on the vision system 101. In another implementation, the processing unit 103 first performs a plant identification step for identifying the specific plant 104. Once the plant 104 has been identified, it may retrieve the model, i.e. the feature model and plant model, from a model database 105. In another example, the user 107 selects a model from the model database 105 via a smart phone 108 or the like.
Plant morphology measurements may be stored in a database 106. This data might contain the raw camera data and/or analysed plant morphology results. As morphological differences may exist even within a specific plant family, and greenhouse/open air conditions of plants might change depending on e.g. growers, the model 103 may be optimized for local conditions, like lighting, shadows, etc.
For this purpose, the user 107 may have the possibility to verify the measurements of the vision system 101 and can make corrections by annotating the data using, for example, a portal. These corrections can be entered into the database 106 using an application interface 108. Once a user has entered corrections on the measurements into the data storage 106, a learning server 109 may be notified. The learning server 109 processes the corrections and generates a new, improved model 103 that is stored in the model database 105. The model 103 may be updated either by pulling or pushing a new model to the vision system 101.
Using the application interface 108, i.e. the portal, the user 107 may have access to multiple applications, including, but not limited to, data visualization, monitoring crop growth, device management, data correction/annotation, data analytics, disease detection/alert/prediction, yield forecasting, management of UAVs and more.
In some examples a system may consist of multiple vision systems 101 that are either owned by the same user 107, or owned by multiple different users at once. The different vision systems 101 may share the model database 105 and/or the learning server 109. A user 107 may thus share a model 103 from the model database 105 with other users 107.
It is noted that the vision system 101 may thus comprise means for comparing the models that are being used by different users with each other. A model that is used by a first user may outperform a model that is used by a second user, such that the second user may request the first user to share its model.
Further, an interface, like the one indicated with reference numeral 108, may be used by a user for adding visual markers to an image that may be tracked by the vision system 101. The interface may also be used for inputting a property that is to be learned by the vision system 101, for example counting particular parts, measuring a particular leaf area, or the like.
The user may also use the interface 108 for defining automatic triggers, i.e. when the vision system 101 detects a particular event, an automated action may be performed. The automated action may, for example, be that a user gets notified. The user may, additionally, provide a region of interest in an image where the event is likely to occur.
On top of the above, it is noted that the vision system may be equipped to detect physical visual markers, like QR codes, to identify unique plants or locations. The result may be a feature in the feature mapping module output. Another option is a physical visual marker that initiates an action in the vision system, such as downloading a new plant model, performing a particular measurement, or sending particular data like a measurement. Yet another option is an Aruco code to calibrate one or more cameras or to measure a distance visually.
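Reading such markers is routinely done with OpenCV, as in the hedged sketch below; OpenCV is one possible choice, not mandated by the disclosure, and the aruco API (from the opencv-contrib package) has changed between versions.

```python
import cv2

def read_markers(image_bgr):
    """Detect a QR payload and ArUco marker ids in a frame (OpenCV sketch)."""
    # QR code, e.g. encoding a unique plant or location id
    payload, points, _ = cv2.QRCodeDetector().detectAndDecode(image_bgr)

    # ArUco markers, e.g. for camera calibration or visual distance checks
    # (classic API; OpenCV >= 4.7 moved this to cv2.aruco.ArucoDetector)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image_bgr, dictionary)
    return payload, ids
```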
Even further, an interface may be provided to update the world mapping module, more specifically the specific measurements that the world mapping module is able to perform.
Figure 2 discloses, schematically, different kinds of applications for the vision system in accordance with the present disclosure.
The vision system 202 may be permanently, or semi-permanently, installed in a greenhouse 201, or at an open field. The advantage hereof is that the vision system 202 has predefined distances to certain objects, thereby improving possible calibration phases during the process.
The vision system 202 may also be implemented as a user handheld device 204 such that a user is able to carry, and move, the vision system 202 for making individual plant measurements. Visual markings 205 may be provided to aid the vision system, for example a plant ID code typed in by the user, RFID tags, or the like. In another example, a position sensor is used.
The vision system 202 may also be implemented in a robot 206 or a drone 209 that is able to scan the environment, for example the greenhouse or an open air field. Visual marks such as QR codes 205 may be used to link the measurement to an individual plant. A position sensor 211 can be used to link the measurement to a position in the greenhouse or open air field. Also, a visual mark such as a QR code 205 can be used to link the measurement to a position in the greenhouse or open air field. The vision system 202 may be implemented in a robot 208 or drone 209, or the like 215, that cuts leaves of a plant 206, harvests crops, sprays pesticides/nutrition or performs other tasks 209. The device 202 provides the robot or drone with information including, but not limited to, plant parts (leaves, nodes, vegetables, fruits, flowers), real-world coordinates of these parts, orientation of the parts, area, volume, weight and more.
The vision system 202 may, for example, be coupled with a sorting & packaging machine, a quality measuring machine, weight / scale machine or the like. The vision system 202 may provide instructions, or may provide information, to these kinds of machines such that these kinds of machines are able to take appropriate actions.
The vision system 202 may also comprise augmented reality glasses 214, which may be used for projecting additional info on top of the field of view of the user. The additional info may be provided by the vision system 202.
Figure 3a discloses a further schematic view of a vision system in accordance with the present disclosure.
The model may comprise a data pipeline 310, 309, 308, 307, 301 that processes the data from the sensor module 302, i.e. the RGB camera 303 in combination with a depth sensor 311, in several steps. Deep learning models may be used to process the data from the sensor module 302 to detect parts of plants in, e.g., an RGB image, such as stems, branch nodes, leaves, flowers, and more.
Multiple techniques may be used for this detection, such as bounding box (object) detection, segmentation, keypoint detection and (poly)line detection.
The detected plant parts in the image data from the sensor module 302 may be used by the plant mapping module 305 to calculate a 3D point cloud of each detected part. In one example, the world model fits a line to the detected stem segments. Branch nodes are detected by finding T-shape or Y-shape line combinations. To determine which part of the line corresponds to which part of the plant (e.g. main stem versus branch stem), a plant mapping module 305 may be used. In another example, a deep learning model is used to directly detect branch nodes and the locations of the branch.
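One standard way to fit such a line is a principal-axis (SVD) fit to the 3D points of a detected stem segment; the sketch below is an illustrative choice, as the disclosure does not specify the fitting method.

```python
import numpy as np

def fit_line(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Least-squares 3D line fit: returns (centroid, unit direction).

    The best-fit direction is the principal axis of the centred points,
    i.e. the first right singular vector of the (N, 3) matrix.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[0]

# Example: noisy points along a near-vertical stem
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 50)
stem = np.column_stack([0.01 * rng.standard_normal(50),        # x noise
                        t,                                      # growth axis
                        0.6 + 0.01 * rng.standard_normal(50)])  # depth
point, direction = fit_line(stem)
print(direction)   # close to +/-(0, 1, 0)
```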
The plant mapping module 305 may use the point cloud data to estimate various properties of the detected parts, such as position and orientation, volume, area, age (life stage), ripeness, healthiness, diseases and more.
As environments are usually crowded with many plants, plants might visually obstruct each other. An additional (deep learning) model may be part of the plant mapping module 305 that predicts/reconstructs obstructed parts, so that area, volume, weight and other estimations are corrected.
By updating the deep learning models and/or the plant model, new properties of the plant can be estimated. This makes the vision system extensible for learning new properties.
The model may also have a method to improve automatically under varying visual conditions (e.g. environmental conditions). This is called self-supervised learning. The grower needs to confirm the performance during normal (daytime) conditions. Next, the device needs to be installed in a fixed place. It may then monitor the same plant over a particular time period and track features over different frames over time. It can generate new data when it detects changing visual (e.g. environment) conditions, using the tracked annotations.
Figure 3b discloses a more detailed version of the vision system as shown in figure 3a.
That is, the feature mapping module 304 may comprise the object detection module 321 as well as the segmentation module 322, and may include the segmentation mapping 324 or the BBS 323.

Figure 4 discloses, schematically, a plant 401. The image shown in figure 4 is annotated in that bounding boxes 402 are shown, as well as nodes 404 and leaves 403 that have been detected.
A segmentation process may be performed as indicated with reference numeral 405.

Figure 5 discloses, schematically, steps that are being performed by the vision system in accordance with the present disclosure.
Reference numeral 511 indicates the plant of which images are to be captured by the camera 512. The camera 512 is arranged to capture a colour image, i.e. an RGB image, as indicated with reference numeral 501 and is arranged to generate a depth map as indicated with reference numeral 502. Reference numeral 508 shows the bounding boxes around a node in the colour image as well as in the depth map.
The vision system may detect an object in the colour image, as indicated with reference numeral 503. Reference numeral 504 shows the same detected object but in the depth image provided by the camera 512. Based on these two objects, isolated from the depth map and the colour image, an annotated point cloud may be generated by the vision system as indicated with reference numeral 505.
In the end, multiple measurements may be performed, such as measuring a particular position of a plant part, like a node 507, or measuring a particular direction of a plant part, like the stem or the branch 506.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Any reference signs in the claims should not be construed as limiting the scope thereof.
Claims (18)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| NL2028679A (NL2028679B1) | 2021-07-09 | 2021-07-09 | A vision system for providing data related to the plant morphology of a plant using deep learning, as well as a corresponding method. |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| NL2028679B1 | 2023-01-16 |
Family
ID=79171039
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US20100322477A1 | 2009-06-04 | 2010-12-23 | Peter Schmitt | Device and method for detecting a plant |
| US10520482B2 | 2012-06-01 | 2019-12-31 | Agerpoint, Inc. | Systems and methods for monitoring agricultural products |
| US20200007847A1 | 2016-01-15 | 2020-01-02 | Blue River Technology Inc. | Plant feature detection using captured images |