WO2013083146A1 - Method and device for estimating development parameters of plants - Google Patents

Method and device for estimating development parameters of plants

Info

Publication number
WO2013083146A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
segmentation map
segmentation
leaf
stereo
Prior art date
Application number
PCT/EP2011/006222
Other languages
French (fr)
Inventor
Florentin WÖRGÖTTER
Alexey ABRAMOV
Eren Erdal AKSOY
Babette DELLEN
Original Assignee
Georg-August-Universität Göttingen Stiftung Öffentlichen Rechts
Universitat Politècnica De Catalunya
Priority date
Filing date
Publication date
Application filed by Georg-August-Universität Göttingen Stiftung Öffentlichen Rechts, Universitat Politècnica De Catalunya filed Critical Georg-August-Universität Göttingen Stiftung Öffentlichen Rechts
Priority to PCT/EP2011/006222 priority Critical patent/WO2013083146A1/en
Publication of WO2013083146A1 publication Critical patent/WO2013083146A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture

Definitions

  • the invention relates to a method for estimating development parameters of plants.
  • the work leading to this invention has received funding from the European Community Seventh Framework Programme FP7/2007-2013 under grant agreement no. 247947.
  • Controlling development parameters of plants is particularly interesting in plantations. These parameters can for example include the growth rate of plants, which is important to maximize harvest while minimizing or at least optimizing the use of materials like water, nutrients, insecticides and other resources.
  • Other development parameters of plants of interest can be the shape and the pose of the leaves of the plant in order to monitor a lack of water, for example. Especially in large plantations this can only be done in an economically reasonable way by automation.
  • From EP 1 564 542 a system for analyzing changes in the state during a plant growing process is known.
  • This system comprises a plurality of cases for growing the plants to be observed.
  • the system further comprises conveying means for conveying the plurality of cases such that the position of the camera relative to the case is the same for each image.
  • the images are then stored on a computer and processed in a data and image processing part of the system.
  • Unfortunately the document does not teach how the data processing works and what exactly is to be done.
  • From DE 10 2008 060 141 A1 a device and a method for estimating the growth of parts of leaves of plants is known.
  • a circular disc is cut from a leaf of the plant of interest. This disc is then put into a solution. The disc swims on the surface of this solution. This swimming disc is then photographed at different times leading to a time series of images. From this time series the growth rate of the disc cut from the leaf can be estimated.
  • the image of the disc has to be segmented in order to estimate the size of the disc. In order to do so one automatically has to separate the disc from the background in the image. Since the disc swims in a clear solution the background of the image is formed by the bottom of the case. It can hence be designed in a way to generate a maximum contrast between the disc and the background, making it very easy for image software to distinguish the disc from the background. In addition, from the knowledge of the level of the solution one can calculate the distance between the disc and the camera; from this distance and the size of the disc on the image one can simply calculate the original size of the disc.
  • a method for estimating development parameters of plants according to the present invention includes the steps of
  • stereo image comprises a left frame and a right frame
  • a stereo image is taken, which comprises a left frame and a right frame. These images are taken from slightly different points of view. This can be done by any system acquiring colour, grey value or infrared images from plants. Any system acquiring stereo images can be used. Also systems acquiring time-of-flight or structured light images are usable.
  • the stereo image is processed by a segmentation and feature extraction algorithm to obtain the biometric description of plant leaves at every time step.
  • the output might for example include leaf sizes and/or leaf shape and/or pose descriptors.
  • a left segmentation map is computed from the left frame.
  • a right segmentation map is computed from the right frame. This is done by using a frame inherent feature such as colour. Both frames are segmented into regions having a similar value of this feature. Each of these regions defines a segment of the corresponding segmentation map and is provided with a unique label. Corresponding regions in the left frame and the right frame must carry the same label. This might be difficult or even impossible for all segments because the leaves are usually only weakly textured. Segmentation of both the left frame and the right frame is called co-segmentation. Any known method for performing such a co-segmentation can be used.
  • a disparity map is calculated. To do so both the left frame and the right frame are used. This can be done by any method known from the art. Due to the weak texturing of the leaves and the probably many occlusions, the results of most methods known from other applications will not deliver sufficiently good results. Methods using structured light are applicable. Nevertheless these methods have only limited applicability under daylight and/or outdoor conditions.
  • a point on a leaf is slightly shifted when comparing its position in the left frame with its position in the right frame.
  • This shift is called disparity and is stored in the disparity map.
  • the disparity is a measure for the distance of the point on the leaf from the camera.
  • the distance can be computed based on the disparity and camera parameters.
  • methods using for example time-of-flight sensors to estimate this distance can be used to obtain the disparity map. Also these sensors have a limited applicability in outdoor environments.
  • the left segmentation map, the right segmentation map and the disparity map are then combined to obtain the stereo segmentation map. From this the segment boundaries of the different segments can be extracted. Since segment boundaries do not necessarily coincide with true leaf boundaries, one cannot simply use segment boundaries to calculate the size of leaves.
  • a leaf model is fitted to the segment boundaries in order to find out the boundaries of leaves from the segmentation, from which a description of the leaf is obtained.
  • This model is a model of the shape and/or other parameters of a leaf of the plant one is interested in. The model is based on the knowledge about the plant and is preferably individually adapted to the particular kind of plants under consideration. If another sort of plants is to be monitored one preferably changes the leaf model in order to model the leaves of the respective plant in an optimal way.
  • the fitting can be done by any method that can fit shape descriptors from the model to the leaf segments, specifically a method that fits three-dimensional surfaces using the depth from the disparity map to the boundary of the leaf segments. Depth in this context means the distance of the leaf to the camera.
  • the method used can also fit one-dimensional curves in two- and three-dimensional space to the segment boundaries.
  • the size of the leaves can be easily calculated. If these steps are repeated for different, in particular equidistant time steps, a time series of the results, especially the size of leaves, can be calculated. This time series can then be used to calculate the growth rate. With a method according to the current invention it is possible to calculate the size LSi(t) of the leaves of a living plant automatically although the leaves might be very weakly textured and at least partially occluded by other leaves.
  • the method can also compute a colour description LCo(t) of the modelled leaves. This can be interesting in order to examine the colouring of the leaves.
  • the method can also compute shape and/or pose descriptors LSh(t). These descriptors can be used to quantify and/or measure relative orientation and the position of plant leaves. With such a method also a curling of the leaf, for example due to a lack or excess of water, can be included and estimated.
  • the development state of the plant is computed for every time step.
  • the growth rate can be computed from temporal changes of the leaf size LSi(t).
  • leaf mobility can be determined from the pose and shape changes of the leaves.
  • the disparity map is computed using a phase-based stereo algorithm.
  • the left frame and the right frame of the stereo image both consist of a plurality of pixels.
  • the signals are transformed using a transformation based on Gabor-functions, as for example a transformation using a Gabor filter bank consisting of different spatiotemporal frequency channels in order to separate the disparity information from a phase difference.
  • This has the advantage that differences in the intensity or the amplitude of the signals of different pixels no longer influence the result as long as the intensity at a certain pixel is larger than a given threshold. From the difference of the phase of the left and right Gabor filter responses a disparity value can be computed. By taking all Gabor filters into account, a disparity map is obtained.
  • the left segmentation map and the right segmentation map are computed using a Metropolis-based image segmentation algorithm.
  • the method finds the equilibrium states of a given Hamiltonian H = -∑_{⟨i,j⟩} J_ij σ_i σ_j, where J_ij is the interaction strength between the signals of two pixels i and j, which depends on the (feature) distance between the signals.
  • the variables σ_i and σ_j are the spin state variables assigned to the respective pixels, which can be in a number of discrete states.
  • the Hamiltonian and hence the interaction strength take the form of a Potts model. Equilibrium states of the model are characterized by areas of aligned spins. Connected areas of spins which are in the same spin state define a segment.
  • the calculation of the left segmentation map and the right segmentation map includes the steps of
  • the left frame and the right frame show the same plant photographed from only slightly shifted points of view. Hence the left segmentation map and the right segmentation map are very similar. Thus by choosing the left or right segmentation map as an initial state for the second Metropolis-based image segmentation algorithm the calculation time is reduced. In addition a predominantly consistent labelling of the left and right frame can be obtained.
  • the initial state for the Metropolis-based image segmentation algorithm used for partitioning the first one of the left and the right frame is the respective segmentation map at a previous time. Preferably the latest time is used. Of course it is also possible to use the left and/or the right segmentation map of a previous time to partition also the second one of the left and the right frame. Using this special initial state again reduces the calculation time drastically, as long as the changes in the plant under investigation are small from one stereo image to the next, or between the stereo image the previous segmentation map belongs to and the current one. In addition, a predominantly consistent labelling of consecutive frames can be achieved. Since plants are known to grow rather slowly, this can safely be assumed.
  • a repetition time of half an hour, one or two hours or just 20 minutes is possible.
  • other repetition times, meaning the time difference between two taken stereo images, are possible, too.
  • much shorter repetition times are possible, such as for example, five, two or one minute. Even repetition times of several seconds can be achieved.
  • the method according to the present invention can also be performed ordering the steps in a different order.
  • extracting the segment boundaries includes calculating a disparity value for the centre of mass of each segment in the stereo segmentation map and identifying and correcting false boundaries.
  • False boundaries are for example boundaries that are caused by occlusions of leaves. These boundaries do exist between two segments in the stereo segmentation map and refer to two neighbouring segments, but they do not correspond to an outer boundary of the leaf of the living plant that is at least partially occluded. This lower leaf extends underneath the upper leaf. Hence, the boundary only belongs to the upper leaf but does not belong to the lower leaf. Thus, this boundary, at least for the lower leaf, has to be removed. In order to avoid misinterpretation of the boundaries during fitting of the boundaries to the model, these boundaries can also be fully removed from the stereo segmentation map.
  • the disparity value of the centre of mass is calculated for each segment and the segments are then ordered according to this disparity of the centre of mass. Once this is done it is known which leaf could be occluded by which other leaves lying above it. Hence the boundaries between two such leaves can be removed.
  • an optic flow map from at least two left frames or at least two right frames of different times is calculated.
  • This optic flow map is then used to check time consistency of the growth rate. From this one could for example easily detect splitting or merging of two neighbouring segments.
  • a splitting would mean, that a segment, that is assumed to correspond to one leaf of the living plant corresponds to at least two leaves, instead. This can then be corrected, by for example refitting the new boundaries resulting from the new information to the model and recalculating the size of the leaves for stereo images taken at earlier times.
  • a merging of two segments into one segment can be a strong hint of an error in the segmentation and partitioning of the stereo segmentation map. This can be corrected using information obtained from optic flow calculations.
  • the optic flow map is calculated from at least five left frames or at least five right frames.
  • the fitting of the segment boundaries to the leaf model provides shape parameters of the leaf and fitting errors for each segment. From the shape parameters one can then easily calculate the true size of the leaves of the plant by simply feeding the shape parameters into the model.
  • the fitting errors are minimized by merging or dividing adjacent segments in the stereo segmentation map.
  • the fitting of the segment boundaries extracted from the stereo segmentation map can for example lead to a large fitting error if by mistake one segment, that has been identified to correspond to one leaf, belongs to more than one leaf instead.
  • This segment shows an outer shape that does not properly fit to the shape of leaves assumed in the model. In this case a large fitting error occurs.
  • One then can simply try and fit the segment boundaries of this segment by more than one, for example two, leaves. If the segment corresponds to two for example partially occluding leaves, the fitting error strongly decreases.
  • the boundaries of two adjacent segments can be fitted by one leaf if the fitting error that occurs when the two segments are fitted by two leaves is large.
  • the plant development parameters include the growth rate and/or the leaf shape and/or the pose of the leaves of the plant. Fitting the segment boundaries to the leaf model leads to a description of the leaves of the plant, including the orientation and the pose of the leaves. Hence, this orientation can be directly obtained from the fitting results. Once the orientation of the leaves is known one can easily extract the two-dimensional shape of each leaf of the plant. From this the size of the leaves and from this the growth and the growth rate of the plant can easily be calculated.
  • interesting plant development parameters might also be a colour histogram in order to monitor the colour of the leaves. From this one can obtain information about the health state of the plants. If there is for example a lack of water, the leaves of the plant will turn brown, which can be seen in a colour histogram.
  • a device for estimating plant development parameters of plants according to the present invention includes a stereo camera and an electronic device suitable for performing the method previously described.
  • Fig. 1 is a schematic view on a device according to one embodiment of the present invention.
  • Fig. 2 is a flow chart of the main process according to one embodiment of the present invention.
  • Fig. 3 is a flow chart of the algorithm shown in Fig. 2.
  • Fig. 4 is a flow chart of a Metropolis-based co-segmentation according to one embodiment of the present invention.
  • Fig. 5 is a flow chart of a Metropolis-based co-segmentation according to another embodiment of the present invention.
  • Fig. 6 is a flow chart of a correction algorithm for segmentation errors.
  • Fig. 7 is a flow chart of a correction algorithm for fitting errors.
  • Fig. 8 illustrates the results of the steps in an algorithm according to Fig. 3.
  • Fig. 1 shows a device for estimating plant development parameters according to an embodiment of the present invention. It contains a stereo camera with a left camera 2 and a right camera 4, which together take a stereo image of a plant 6 that is to be examined.
  • the stereo camera is connected to a computer 8, which is capable of performing the necessary steps for a method according to the present invention.
  • both the left camera 2 and the right camera 4 are provided with an individual connection 10 to the computer, which is only schematically sketched by a solid line.
  • Fig. 2 shows a flow chart of the method.
  • the stereo camera takes a stereo image.
  • This stereo image contains a left frame L(t) taken with the left camera 2 (step 11) and a right frame R(t) taken with the right camera 4 (step 12). Both frames are then processed by an algorithm 20 in order to obtain the plant development parameters one is interested in.
  • these are denoted to be the size of the leaves LSi(t), the colour histogram of the leaves LCo(t) and the shape and/or pose of the leaves LSh(t).
  • leaf descriptors are then used to calculate plant development descriptors, such as growth rate and plant mobility measures and other development parameters one is interested in, in step 30.
  • Fig. 3 shows a more detailed flow chart of the algorithm 20 in Fig. 2.
  • the left frame L(t) and the right frame R(t), both taken at time t are input into the algorithm 20.
  • a left segmentation map Sl(t) is computed from the left frame L(t)
  • a right segmentation map Sr(t) is computed from the right frame R(t).
  • a disparity map D(t) is computed in step 22. While the left segmentation map and the right segmentation map contain information about the segments of the respective frame only, the disparity map D(t) contains the disparity information, from which the distance of a segment from the stereo camera can be computed.
  • the disparity map D(t) and the left segmentation map Sl(t) and the right segmentation map Sr(t) are used to obtain a stereo segmentation map SS(t) in step 23, containing both the information about the segments in the left frame L(t) and the right frame R(t) and the information about the disparity and hence the distance of the segments from the stereo camera.
  • the segment boundaries SB(t) are extracted from the stereo segmentation map SS(t).
  • These segment boundaries SB(t) are then fitted to a leaf model 25, which is a model of the shape and/or other parameters of the leaves under consideration. If the sort of plant that is to be examined changes one probably also has to change the leaf model 25, in order to at least approximately describe the leaf shape correctly.
  • Both the input from the leaf model 25 and the segment boundaries SB(t) are used to obtain a leaf description in step 26.
  • the plant development parameters one is interested in can be calculated in method steps 271, 272 and 273, leading to the size of the leaves LSi(t), the colour histogram of the leaves LCo(t) or the shape and/or the pose of the leaves LSh(t), respectively. From a time series of these parameters other parameters can be obtained such as the growth and a growth rate and/or other development parameters of the plant.
  • Fig. 4 is a flow chart of the Metropolis-based image segmentation algorithm initialization.
  • the Metropolis-based image segmentation algorithm can be used to calculate the left segmentation map Sl(t) and/or the right segmentation map Sr(t). It is shown for the left segmentation map Sl(t) only but it should be understood that it works for the right segmentation map Sr(t) in an analogous way.
  • the left frame L(t) from the stereo image is fed into the Metropolis-based image segmentation algorithm in step 2112, leading to the left segmentation map Sl(t).
  • the initial spin configuration SCi(t) that is used as a starting point for the Metropolis-based image segmentation algorithm can be chosen from many different initial configurations. This choice is made in step 2111.
  • the initial spin configuration SCi(t) can be chosen to be the left segmentation map Sl(t-1) at previous time t-1. If the changes in the left segmentation map between time t and the previous time t-1 are small, then this choice strongly reduces the iterations needed in the Metropolis- based image segmentation algorithm and hence also reduces the calculation time. In addition, a consistent labelling is achieved.
  • Fig. 5 shows a flow chart of the initialization of the Metropolis-based image segmentation algorithm, in which information from the other frame is used.
  • the initial spin configuration SCi(t) can be chosen in step 2111 to be for example the right segmentation map Sr(t) at the same time t or can be estimated from the disparity map D(t) at the same time.
  • these choices of the initial spin configuration strongly reduce the number of Metropolis-iterations.
  • Fig. 6 shows a flow chart of another embodiment of the segmentation algorithm.
  • the left segmentation map Sl(t) is to be obtained.
  • the initial spin configuration is chosen as shown in Fig. 5.
  • the disparity map D(t) or the right segmentation map Sr(t) at the same time t as the left frame L(t) is used.
  • the resulting left segmentation map Sl(t) is then checked for consistency in step 2113 by comparing it to the optic flow OF and/or the left segmentation map Sl(t) at the previous time t-1.
  • step 2114 which are then fed into both, the spin initialization step 2111 and the Metropolis-based image segmentation algorithm, in step 2112 in form of a correction signal C(t). Afterwards the left frame L(t) is segmented again in step 2112 using the correction signal in order to get better results.
  • this method can also be used for the right segmentation map Sr(t). It is also possible to use information from the same segmentation map at earlier times, such as Sl(t-1) or the optic flow OF, first and to compare it with results obtained from processing the other frame, such as the right segmentation map Sr(t) or the disparity map D(t) at the same time t.
  • Fig. 7 shows a flow chart of the algorithm to create the leaf descriptions from the segment boundaries SB(t) obtained from the stereo segmentation map SS(t).
  • Both the segment boundaries SB(t) and the leaf model 25 are input to a fitting algorithm in step 261. From this a fitting error E_fit is provided for each created leaf description. Based on the fitting errors E_fit for different leaf descriptions, merges are proposed in step 262 and used to create modified segment boundaries SB*(t). These are then sent back to step 261 and are fitted to the leaf model 25 again. The resulting description again provides a fitting error E_fit and step 262 is performed again. This procedure is repeated several times and the leaf description with the lowest fitting error E_fit is selected in step 263.
  • Fig. 8 shows the results of each step of the algorithm as sketched in Fig. 3.
  • the same reference numbers are used as in Fig. 3.
  • Both frames are used to calculate the disparity map D(t) in step 22, from which depth information of the segments can be calculated using camera parameters. This depth is the distance of the respective segment from the camera.
  • segmentation maps Sl(t) and Sr(t) and the disparity map D(t) are then used to build a stereo segmentation map SS(t) in step 23, showing both the segment information from the segmentation maps and the disparity information from the disparity map D(t). From this the three-dimensional segment boundaries SB(t) can be calculated in step 24.
  • These are then fitted to the leaf model 25 in step 26, leading to a leaf description in step 27, from which the interesting development parameters of the plants can be extracted.

Abstract

The invention refers to a method for estimating development parameters of plants, including the steps of: i. taking a stereo image of the plants at time t, wherein the stereo image comprises a left frame L(t) and a right frame R(t); ii. computing a left segmentation map Sl(t) of the left frame L(t) and a right segmentation map Sr(t) of the right frame R(t) using a frame inherent feature; iii. computing a disparity map D(t) using the left frame L(t) and the right frame R(t); iv. combining the left segmentation map Sl(t), the right segmentation map Sr(t) and the disparity map to obtain a stereo segmentation map SS(t); v. extracting segment boundaries of each segment in the stereo segmentation map SS(t); vi. fitting the segment boundaries to a leaf model; vii. repeating the previous steps for different times; viii. calculating the development parameters from the results of the fitting at different times.

Description

Method and device for estimating development parameters of plants
The invention relates to a method for estimating development parameters of plants. The work leading to this invention has received funding from the European Community Seventh Framework Programme FP7/2007-2013 under grant agreement no. 247947.
Controlling development parameters of plants is particularly interesting in plantations. These parameters can for example include the growth rate of plants, which is important to maximize harvest while minimizing or at least optimizing the use of materials like water, nutrients, insecticides and other resources. Other development parameters of plants of interest can be the shape and the pose of the leaves of the plant in order to monitor a lack of water, for example. Especially in large plantations this can only be done in an economically reasonable way by automation. In the past several efforts were made to implement such an automatic growth control system. In order to do so a time series of photographic images is acquired and stored. From these images the plant development parameters have to be extracted. Some of the development parameters, such as the growth of the plant, change rather slowly. In order to monitor these parameters one nevertheless has to deal with much faster changes, such as for example the changing pose and orientation of the leaves of the plant throughout a day. This orientation changes for example with the changing direction of incoming sunlight.
From the different reactions of plants to an external stimulus such as light, different genetic variants of a sort of plant can be distinguished. Hence, with a method for estimating development parameters of plants, for example a movement of at least parts of the plants under the influence of light can be monitored automatically. Such a method can then be used as an automatic method of phenotypic analysis.
In modern plantations the growth of thousands of especially small plants like seedlings is to be monitored. Of course one cannot provide an individual camera for each plant. Hence, one camera has to be used to monitor the development parameters of a plurality of plants. If the images in the time series are to be compared by looking for differences between two images taken at different times, the position of the camera relative to the plant has to be exactly the same for all images. This means that the camera position has to be highly reproducible.
From EP 1 564 542 a system for analyzing changes in the state during a plant growing process is known. This system comprises a plurality of cases for growing the plants to be observed. The system further comprises conveying means for conveying the plurality of cases such that the position of the camera relative to the case is the same for each image. The images are then stored on a computer and processed in a data and image processing part of the system. Unfortunately the document does not teach how the data processing works and what exactly is to be done.
From DE 10 2008 060 141 A1 a device and a method for estimating the growth of parts of leaves of plants is known. In this method a circular disc is cut from a leaf of the plant of interest. This disc is then put into a solution. The disc swims on the surface of this solution. This swimming disc is then photographed at different times leading to a time series of images. From this time series the growth rate of the disc cut from the leaf can be estimated.
The image of the disc has to be segmented in order to estimate the size of the disc. In order to do so one automatically has to separate the disc from the background in the image. Since the disc swims in a clear solution the background of the image is formed by the bottom of the case. It can hence be designed in a way to generate a maximum contrast between the disc and the background, making it very easy for image software to distinguish the disc from the background. In addition, from the knowledge of the level of the solution one can calculate the distance between the disc and the camera; from this distance and the size of the disc on the image one can simply calculate the original size of the disc.
Unfortunately this method only works for parts of leaves of the plants one is interested in. The information obtained by this method is not directly related to the growth of the living plant. The plant might react to a given solution in a very different way than the disc which is examined. It would therefore be advantageous to automatically control and monitor the growth of living plants by directly looking at them.
Unfortunately this introduces a lot of additional problems and difficulties. Since a living plant usually has more than one leaf, one has to be able to separate one leaf from another. This is difficult because leaves of plants usually are very weakly textured. In addition different leaves might overlap, such that lower leaves are not completely visible. Once a leaf has been identified on an image one still cannot compute the original size of the leaf, since the distance between the leaf and the camera influences the size of the leaf on the image and because the leaf might not be entirely visible due to occlusions. Without knowing this distance the true size of the leaf cannot be calculated.
In contrast to the system which is known from DE 10 2008 060 141 A1, the orientation of the leaves of a living plant relative to the camera is not known. This orientation also has to be known in order to calculate the true size of the leaf.
It is therefore an object of the invention to provide a method and a device with which one is able to estimate development parameters of living plants.
A method for estimating development parameters of plants according to the present invention includes the steps of
o taking a stereo image of the plants at time t,
wherein the stereo image comprises a left frame and a right frame,
o computing a left segmentation map of the left frame and a right segmentation map of the right frame using a frame inherent feature,
o computing a disparity map using the left frame and the right frame,
o combining the left segmentation map, the right segmentation map and the disparity map to obtain a stereo segmentation map,
o extracting segment boundaries of each segment in the stereo segmentation map,
o fitting the segment boundaries to a leaf model,
o repeating the previous steps for different times,
o calculating development parameters of plants from the results of the fittings at different times.
For time t a stereo image is taken, which comprises a left frame and a right frame. These images are taken from slightly different points of view. This can be done by any system acquiring colour, grey value or infrared images from plants. Any system acquiring stereo images can be used. Also systems acquiring time-of-flight or structured light images are usable.
The stereo image is processed by a segmentation and feature extraction algorithm to obtain the biometric description of plant leaves at every time step. The output might for example include leaf sizes and/or leaf shape and/or pose descriptors. These steps are repeated for different, in particular equidistant times and a time series of the results is calculated. This time series can then be used to calculate the plant development description.
A left segmentation map is computed from the left frame. A right segmentation map is computed from the right frame. This is done by using a frame inherent feature such as colour. Both frames are segmented into regions having a similar value of this feature. Each of these regions defines a segment of the corresponding segmentation map and is provided with a unique label. Corresponding regions in the left frame and the right frame must carry the same label. This might be difficult or even impossible for all segments because the leaves are usually only weakly textured. Segmentation of both the left frame and the right frame is called co-segmentation. Any known method for performing such a co-segmentation can be used.
From the left and the right segmentation map no information can be obtained about overlapping or covering of leaves. Due to the weak texturing of the leaves or due to the covering of leaves it is possible, that the two segmentation maps show different numbers of segments. It is for example possible, that a leaf is visible only from one of the two slightly different points of view.
In a separate step a disparity map is calculated. To do so both the left frame and the right frame are used. This can be done by any method known from the art. Due to the weak texturing of the leaves and the typically many occlusions, however, most methods known from other applications will not deliver sufficiently good results. Methods using structured light are applicable. Nevertheless these methods have only limited applicability under daylight and/or outdoor conditions.
Due to the slightly different points of view of the left and the right frame a point on a leaf is slightly shifted when comparing its position in the left frame with its position in the right frame. This shift is called disparity and is stored in the disparity map. The disparity is a measure for the distance of the point on the leaf from the camera. The distance can be computed based on the disparity and camera parameters. Hence, also methods using for example time-of-flight sensors to estimate this distance can be used to obtain the disparity map. Also these sensors have a limited applicability in outdoor environments.
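Purely as an illustration of this conversion (the patent does not prescribe a formula), for a rectified stereo pair the depth follows the standard pinhole relation depth = focal length × baseline / disparity. The following minimal Python sketch assumes rectified images and a known calibration; the function name and all numbers are examples of this sketch, not part of the disclosure:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-3):
    """Convert a disparity map (in pixels) to a depth map (in metres).

    Assumes a rectified stereo pair with focal length `focal_px` (pixels)
    and camera baseline `baseline_m` (metres). Pixels with (near-)zero
    disparity are marked invalid (NaN) instead of producing infinite depth.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > min_disp
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example with assumed numbers: a leaf point seen with 12 px disparity by a
# camera with f = 1200 px and a 6 cm baseline lies roughly 6 m away.
print(disparity_to_depth(np.array([[12.0]]), focal_px=1200.0, baseline_m=0.06))
```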
In principle it does not matter whether the left and the right segmentation maps are calculated before the disparity or vice versa. Depending on the methods used a specific order of the calculations might make sense, for example when results from one calculation can be reused in another one.
The left segmentation map, the right segmentation map and the disparity map are then combined to obtain the stereo segmentation map. From this the segment boundaries of the different segments can be extracted. Since segment boundaries do not necessarily coincide with true leaf boundaries, one cannot simply use segment boundaries to calculate the size of leaves. A leaf model is fitted to the segment boundaries in order to find out the boundaries of leaves from the segmentation, from which a description of the leaf is obtained. This model is a model of the shape and/or other parameters of a leaf of the plant one is interested in. The model is based on the knowledge about the plant and is preferably individually adapted to the particular kind of plants under consideration. If another sort of plants is to be monitored one preferably changes the leaf model in order to model the leaves of the respective plant in an optimal way.
The fitting can be done by any method that can fit shape descriptors from the model to the leaf segments, specifically a method that fits three-dimensional surfaces using the depth from the disparity map to the boundary of the leaf segments. Depth in this context means the distance of the leaf to the camera. The method used can also fit one-dimensional curves in two- and three-dimensional space to the segment boundaries.
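One very simple instance of such a surface fit, given only as a hedged sketch and not as the leaf model of the invention, is a least-squares plane fitted to the 3D points of a segment: the plane normal yields a first estimate of the leaf orientation (pose) and the residual can act as a crude fitting error. The function fit_plane and the example data are assumptions of this sketch:

```python
import numpy as np

def fit_plane(points_xyz):
    """Least-squares plane through a set of 3D points (N x 3 array).

    Returns (centroid, unit normal, rms residual). The normal is the
    singular vector belonging to the smallest singular value of the
    mean-centred point cloud; the residual is the out-of-plane scatter.
    """
    pts = np.asarray(points_xyz, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    rms = s[-1] / np.sqrt(len(pts))      # RMS distance to the plane
    return centroid, normal, rms

# Hypothetical leaf segment: 3D points lying exactly on a tilted plane
# z = 0.3x + 0.1y + 0.5, so the fitted normal is parallel (up to sign)
# to (0.3, 0.1, -1) and the residual is essentially zero.
rng = np.random.default_rng(0)
u, v = rng.uniform(-0.05, 0.05, (2, 200))
pts = np.stack([u, v, 0.3 * u + 0.1 * v + 0.5], axis=1)
c, n, err = fit_plane(pts)
print("normal:", np.round(n, 3), "rms residual:", round(err, 6))
```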
From the computed leaf shapes obtained during the fitting the size of the leaves can be easily calculated. If these steps are repeated for different, in particular equidistant time steps, a time series of the results, especially the size of leaves, can be calculated. This time series can then be used to calculate the growth rate. With a method according to the current invention it is possible to calculate the size LSi(t) of the leaves of a living plant automatically although the leaves might be very weakly textured and at least partially occluded by other leaves.
The method can also compute a colour description LCo(t) of the modelled leaves. This can be interesting in order to examine the colouring of the leaves. The method can also compute shape and/or pose descriptors LSh(t). These descriptors can be used to quantify and/or measure the relative orientation and the position of plant leaves. With such a method also a curling of the leaf, for example due to a lack or excess of water, can be detected and estimated.
From the signals LSi(t), LCo(t) and LSh(t) corresponding to the size of the leaf, the colour of the leaf and/or the shape and/or the pose of the leaf respectively, the development state of the plant is computed for every time step. Of course, it is also possible to obtain the development state from only some or even only one of the mentioned signals. Specifically the growth rate can be computed from temporal changes of the leaf size LSi(t). One can also compute conformational changes of the plant, for example a change in the relative poses of the leaves. Also leaf mobility can be determined from the pose and shape changes of the leaves.
In a preferred embodiment of the method the disparity map is computed using a phase-based stereo algorithm. The left frame and the right frame of the stereo image both consist of a plurality of pixels. The signals are transformed using a transformation based on Gabor-functions, as for example a transformation using a Gabor filter bank consisting of different spatiotemporal frequency channels in order to separate the disparity information from a phase difference. This has the advantage that differences in the intensity or the amplitude of the signals of different pixels no longer influence the result as long as the intensity at a certain pixel is larger than a given threshold. From the difference of the phase of the left and right Gabor filter responses a disparity value can be computed. By taking all Gabor filters into account, a disparity map is obtained.
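The phase-difference idea can be made concrete with a single frequency channel. The sketch below is a toy illustration under simplifying assumptions (one Gabor channel, one scanline, texture frequency close to the channel frequency); it is not the stereo algorithm of the invention, and all parameter values are arbitrary:

```python
import numpy as np

def gabor_phase_disparity(left_row, right_row, wavelength=16.0, sigma=8.0,
                          amp_threshold=1e-3):
    """Single-channel sketch of phase-based disparity along one scanline.

    Both rows are filtered with one complex Gabor filter; the disparity is
    taken as (phase_left - phase_right) / omega0. A real implementation
    would combine several frequency channels, use the local frequency
    instead of the channel centre frequency, and handle phase wrapping
    (only shifts below half the filter wavelength are recovered here).
    """
    omega0 = 2.0 * np.pi / wavelength
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    gabor = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * omega0 * x)

    resp_l = np.convolve(left_row, gabor, mode="same")
    resp_r = np.convolve(right_row, gabor, mode="same")

    dphi = np.angle(resp_l * np.conj(resp_r))        # wrapped to (-pi, pi]
    disparity = dphi / omega0

    # Amplitude threshold: keep only pixels with a reliable filter response.
    valid = (np.abs(resp_l) > amp_threshold) & (np.abs(resp_r) > amp_threshold)
    disparity[~valid] = np.nan
    return disparity

# Synthetic check: the right scanline equals the left one shifted by 2 px,
# with a texture whose wavelength matches the filter channel.
xs = np.arange(256)
left = np.sin(2.0 * np.pi * xs / 16.0)
right = np.sin(2.0 * np.pi * (xs - 2) / 16.0)
print(round(float(np.nanmedian(gabor_phase_disparity(left, right))), 2))  # ~2.0
```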
Preferably the left segmentation map and the right segmentation map are computed using a Metropolis-based image segmentation algorithm. The method finds the equilibrium states of a given Hamiltonian
H = -∑_{⟨i,j⟩} J_ij σ_i σ_j, where J_ij is the interaction strength between the signals of two pixels i and j, which depends on the (feature) distance between the signals. The variables σ_i and σ_j are the spin state variables assigned to the respective pixels, which can be in a number of discrete states. In a particular embodiment of the method, the Hamiltonian and hence the interaction strength take the form of a Potts model. Equilibrium states of the model are characterized by areas of aligned spins. Connected areas of spins which are in the same spin state define a segment.
Using this method has the advantage that by a clever choice of the initial state, which is input in the Metropolis-based image segmentation algorithm, the time needed for the calculation can be drastically reduced.
Hence, in a preferred embodiment the calculation of the left segmentation map and the right segmentation map includes the steps of
• partitioning one of the left frame and the right frame into segments using the Metropolis-based image segmentation algorithm,
• providing labels to the segments, forming the respective segmentation map,
• using the labels from the left frame to define the initial state for the Metropolis-based image segmentation algorithm used to partition the right frame or vice versa.
It is known that the left frame and the right frame show the same plant photographed from only slightly shifted points of view. Hence the left segmentation map and the right segmentation map are very similar. Thus by choosing the left or right segmentation map as an initial state for the second Metropolis-based image segmentation algorithm the calculation time is reduced. In addition a predominantly consistent labelling of the left and right frame can be obtained.
In a preferred embodiment the initial state for the Metropolis-based image segmentation algorithm used for partitioning the first one of the left and the right frame is the respective segmentation map at a previous time. Preferably the latest time is used. Of course it is also possible to use the left and/or the right segmentation map of a previous time to partition also the second one of the left and the right frame. Using this special initial state again reduces the calculation time drastically, as long as the changes in the plant under investigation are small from one stereo image to the next, or between the stereo image the previous segmentation map belongs to and the current one. In addition, a predominantly consistent labelling of consecutive frames can be achieved. Since plants are known to grow rather slowly, this can safely be assumed. Hence, a repetition time of half an hour, one or two hours or just 20 minutes is possible. Of course, other repetition times, meaning the time difference between two taken stereo images, are possible, too. In particular when one is interested in a reaction of the plants under consideration to an external stimulus, which might take place within several minutes or one or two hours, much shorter repetition times are possible, such as for example five, two or one minute. Even repetition times of several seconds can be achieved.
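Building on the hypothetical metropolis_potts_segmentation sketch given above (and assuming that function is in scope), warm-starting across frames and time steps could look as follows; the random arrays merely stand in for real frames:

```python
import numpy as np

# Usage sketch: warm-start each segmentation with an earlier result so that
# labels stay largely consistent between frames and over time. Assumes the
# metropolis_potts_segmentation function from the previous sketch is defined.
rng = np.random.default_rng(1)
left_frames = [rng.random((24, 24)) for _ in range(3)]    # stand-in images
right_frames = [f.copy() for f in left_frames]            # stand-in images

previous_left = None
for left, right in zip(left_frames, right_frames):
    seg_left = metropolis_potts_segmentation(left, initial_spins=previous_left)
    seg_right = metropolis_potts_segmentation(right, initial_spins=seg_left)
    previous_left = seg_left        # initial state for the next time step
```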
The method according to the present invention can also be performed with the steps in a different order. One can in particular take stereo images at several times without having fully performed the segmentation and calculation steps on the foregoing stereo image. This is particularly advantageous if the reaction and/or development of the plant under consideration takes place very fast. One can then first take the necessary stereo images and do the segmentation and calculation later. Such a method is still within the scope of the present invention.
Having obtained a stereo segmentation map, it is necessary to separate one segment corresponding to one leaf of the living plant from a neighbouring segment corresponding to another leaf. In order to simplify this and to make it more reliable, it is advantageous that extracting the segment boundaries includes calculating a disparity value for the centre of mass of each segment in the stereo segmentation map and identifying and correcting false boundaries.
False boundaries are for example boundaries that are caused by occlusions of leaves. These boundaries do exist between two segments in the stereo segmentation map and refer to two neighbouring segments, but they do not correspond to an outer boundary of the leaf of the living plant that is at least partially occluded. This lower leaf extends underneath the upper leaf. Hence, the boundary only belongs to the upper leaf but does not belong to the lower leaf. Thus, this boundary, at least for the lower leaf, has to be removed. In order to avoid misinterpretation of the boundaries during fitting of the boundaries to the model, these boundaries can also be fully removed from the stereo segmentation map.
This can be achieved using any method to remove these boundaries that is known from the art. Preferably the disparity value of the centre of mass is calculated for each segment and the segments are then ordered according to this disparity of the centre of mass. Once this is done it is known which leaf could be occluded by which other leaves lying above it. Hence the boundaries between two such leaves can be removed.
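A possible, purely illustrative way to obtain such a depth ordering from the centre-of-mass disparities is sketched below; the function name, the example data and the handling of edge cases are assumptions of this sketch:

```python
import numpy as np

def order_segments_by_depth(seg_map, disparity_map):
    """Order segment labels from nearest to farthest using the disparity at
    each segment's centre of mass (larger disparity = closer to the camera).

    The ordering tells which segment could occlude which; boundaries that a
    closer segment imposes on a farther neighbour can then be treated as
    false (occlusion) boundaries rather than as true leaf outlines.
    Note: for strongly non-convex segments the centre of mass may fall
    outside the segment; a robust implementation could instead use e.g.
    the median disparity over the segment.
    """
    order = []
    for label in np.unique(seg_map):
        ys, xs = np.nonzero(seg_map == label)
        cy, cx = int(round(ys.mean())), int(round(xs.mean()))
        order.append((float(disparity_map[cy, cx]), int(label)))
    order.sort(reverse=True)                 # nearest first
    return [label for _, label in order]

# Hypothetical stereo segmentation map with two leaves: segment 2 lies
# closer to the camera (larger disparity) than segment 1.
seg = np.ones((8, 8), dtype=int)
seg[:, 4:] = 2
disp = np.full((8, 8), 5.0)
disp[:, 4:] = 9.0
print(order_segments_by_depth(seg, disp))    # [2, 1]
```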
In a preferred embodiment of the present invention an optic flow map from at least two left frames or at least two right frames of different times is calculated. This optic flow map is then used to check the time consistency of the growth rate. From this one could for example easily detect splitting or merging of two neighbouring segments. A splitting would mean that a segment that is assumed to correspond to one leaf of the living plant in fact corresponds to at least two leaves. This can then be corrected, for example by refitting the new boundaries resulting from the new information to the model and recalculating the size of the leaves for stereo images taken at earlier times.
A merging of two segments into one segment can be a strong hint of an error in the segmentation and partitioning of the stereo segmentation map. This can be corrected using information obtained from optic flow calculations.
The more previous frames are used to calculate the optic flow map, the more accurate and reliable are the results. It is therefore preferable that the optic flow map is calculated from at least five left frames or at least five right frames.
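A hedged sketch of such a consistency check is given below. It uses OpenCV's Farneback dense optic flow merely as one possible backend; the patent does not prescribe a particular optic-flow method, and the warping is a simplification:

```python
import numpy as np
import cv2  # OpenCV, used here only as one possible dense optic-flow backend

def flow_consistency(prev_gray, curr_gray, prev_seg, curr_seg):
    """Warp the previous segmentation along a dense optic-flow field and
    report, for every current segment, which previous labels it overlaps.

    Several dominant previous labels under one current segment hint at a
    merge; one previous label spread over several current segments hints
    at a split. (Sketch only: the flow is sampled at the current pixel,
    which is an approximation, and any dense flow method could be used.)
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x = np.clip(xs - flow[..., 0], 0, w - 1)   # where the pixel came from
    map_y = np.clip(ys - flow[..., 1], 0, h - 1)
    warped_prev_seg = prev_seg[map_y.round().astype(int),
                               map_x.round().astype(int)]

    overlaps = {}
    for label in np.unique(curr_seg):
        prev_labels, counts = np.unique(warped_prev_seg[curr_seg == label],
                                        return_counts=True)
        overlaps[int(label)] = dict(zip(prev_labels.tolist(), counts.tolist()))
    return overlaps

# Tiny synthetic example: one bright 'leaf' moving 3 px to the right.
prev = np.zeros((64, 64), np.uint8); prev[20:40, 10:30] = 255
curr = np.zeros((64, 64), np.uint8); curr[20:40, 13:33] = 255
print(flow_consistency(prev, curr, (prev > 0).astype(int), (curr > 0).astype(int)))
```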
Preferably the fitting of the segment boundaries to the leaf model provides shape parameters of the leaf and fitting errors for each segment. From the shape parameters one can then easily calculate the true size of the leaves of the plant by simply feeding the shape parameters into the model.
In a preferred embodiment the fitting errors are minimized by merging or dividing adjacent segments in the stereo segmentation map. This again leads to better and more reliable results of the method. The fitting of the segment boundaries extracted from the stereo segmentation map can for example lead to a large fitting error if by mistake one segment that has been identified to correspond to one leaf belongs to more than one leaf instead. This segment then shows an outer shape that does not properly fit the shape of leaves assumed in the model. In this case a large fitting error occurs. One can then simply try to fit the segment boundaries of this segment by more than one, for example two, leaves. If the segment corresponds to two, for example partially occluding, leaves, the fitting error strongly decreases. Analogously the boundaries of two adjacent segments can be fitted by one leaf if the fitting error that occurs when the two segments are fitted by two leaves is large.
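The merge criterion can be illustrated with a deliberately simple stand-in leaf model: in the sketch below a circle fitted to the closed outline of a segment plays the role of the leaf model, and its RMS radial residual plays the role of the fitting error. Neither the circle model nor the function names are part of the invention; they are assumptions of this example:

```python
import numpy as np

def region_boundary(mask):
    """Pixels of a binary region whose 4-neighbourhood touches the outside,
    i.e. the closed outline of the region, as an (N, 2) array of (y, x)."""
    pad = np.pad(mask, 1)
    inside = pad[1:-1, 1:-1]
    all_neighbours = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                      pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(inside & ~all_neighbours)

def circle_fit_error(points):
    """Toy stand-in for a leaf-model fit: algebraic (Kasa) circle fit to the
    outline points, returning the RMS radial residual as 'fitting error'.
    A real leaf model would use species-specific shape descriptors instead."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 1], pts[:, 0]
    design = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    (cx, cy, c), *_ = np.linalg.lstsq(design, x**2 + y**2, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return float(np.sqrt(np.mean((np.hypot(x - cx, y - cy) - radius) ** 2)))

def merge_if_better(seg_map, label_a, label_b, fit=circle_fit_error):
    """Fit the (toy) leaf model to the outlines of two adjacent segments and
    to the outline of their union; merge only if that lowers the error."""
    a, b = seg_map == label_a, seg_map == label_b
    err_a, err_b = fit(region_boundary(a)), fit(region_boundary(b))
    err_merged = fit(region_boundary(a | b))
    if err_merged < max(err_a, err_b):
        return np.where(b, label_a, seg_map), err_merged
    return seg_map, max(err_a, err_b)

# A round 'leaf' wrongly split into a left and a right half: the merged
# outline is much closer to a circle than either half outline, so the
# two segments are merged into one.
yy, xx = np.mgrid[0:40, 0:40]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2
seg = np.zeros((40, 40), dtype=int)
seg[disc & (xx < 20)] = 1
seg[disc & (xx >= 20)] = 2
merged, err = merge_if_better(seg, 1, 2)
print(np.unique(merged).tolist(), round(err, 2))
```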
Even if a disparity value of a centre of mass of the segments has been computed, it might remain undiscovered that two segments correspond to one leaf, or vice versa, due to for example very similar distances of the two leaves from the camera. On the other hand a leaf could be wrinkled or curled such that different points on this one leaf have rather different distances from the camera, leading to two segments in the stereo segmentation map, each identified to belong to a part of this leaf.
In a preferred embodiment of the method the plant development parameters include the growth rate and/or the leaf shape and/or the pose of the leaves of the plant. Fitting the segment boundaries to the leaf model leads to a description of the leaves of the plant, including the orientation and the pose of the leaves. Hence, this orientation can be directly obtained from the fitting results. Once the orientation of the leaves is known one can easily extract the two-dimensional shape of each leaf of the plant. From this the size of the leaves and from this the growth and the growth rate of the plant can easily be calculated. Interesting plant development parameters might also be a colour histogram in order to monitor the colour of the leaves. From this one can obtain information about the health state of the plants. If there is for example a lack of water, the leaves of the plant will turn brown, which can be seen in a colour histogram.
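Once a time series of leaf sizes LSi(t) is available, a growth rate can be derived from it. Purely as an example (the patent does not fix a particular growth measure), a relative growth rate per observation interval could be computed like this:

```python
import numpy as np

def relative_growth_rate(times_h, leaf_areas):
    """Relative growth rate between consecutive observations,
    RGR = (ln A(t2) - ln A(t1)) / (t2 - t1), one common way to express
    leaf growth from a time series of estimated leaf sizes LSi(t).
    (Illustrative only; units follow whatever the inputs use.)"""
    t = np.asarray(times_h, dtype=float)
    a = np.asarray(leaf_areas, dtype=float)
    return np.diff(np.log(a)) / np.diff(t)

# Hypothetical leaf areas (cm^2) estimated every 12 hours:
print(relative_growth_rate([0, 12, 24, 36], [10.0, 10.6, 11.3, 12.1]))
# -> roughly 0.005 per hour for each interval
```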
A device for estimating plant development parameters of plants according to the present invention includes a stereo camera and an electronic device suitable for performing the method previously described.
Brief description of the drawings.
Fig. 1 is a schematic view on a device according to one embodiment of the present invention.
Fig. 2 is a flow chart of the main process according to one embodiment of the present invention.
Fig. 3 is a flow chart of the algorithm shown in Fig. 2.
Fig. 4 is a flow chart of a Metropolis-based co-segmentation according to one embodiment of the present invention.
Fig. 5 is a flow chart of a Metropolis-based co-segmentation according to another embodiment of the present invention.
Fig. 6 is a flow chart of a correction algorithm for segmentation errors.
Fig. 7 is a flow chart of a correction algorithm for fitting errors.
Fig. 8 illustrates the results of the steps in an algorithm according to Fig. 3.
Description of preferred embodiments
Fig. 1 shows a device for estimating plant development parameters according to an embodiment of the present invention. It contains a stereo camera with a left camera 2 and a right camera 4, which together take a stereo image of a plant 6 that is to be examined. The stereo camera is connected to a computer 8, which is capable of performing the necessary steps for a method according to the present invention. In Fig. 1 both the left camera 2 and the right camera 4 are provided with an individual connection 10 to the computer, which is only schematically sketched by a solid line.
Fig. 2 shows a flow chart of the method. At time t the stereo camera takes a stereo image. This stereo image contains a left frame L(t) taken with the left camera 2 (step 11) and a right frame R(t) taken with the right camera 4 (step 12). Both frames are then processed by an algorithm 20 in order to obtain the plant development parameters one is interested in. In Fig. 2 these are denoted to be the size of the leaves LSi(t), the colour histogram of the leaves LCo(t) and the shape and/or pose of the leaves LSh(t).
These so-called leaf descriptors are then used to calculate plant development descriptors, such as growth rate and plant mobility measures and other development parameters one is interested in, in step 30. Of course it is possible to use other parameters or more or fewer parameters one is interested in.
At a later time t+1, which simply denotes the next time to perform the steps of the method, a new stereo image is taken. From these stereo images a time series of the interesting plant development parameters can be calculated.
Fig. 3 shows a more detailed flow chart of the algorithm 20 in Fig. 2. The left frame L(t) and the right frame R(t), both taken at time t are input into the algorithm 20. In step 211 a left segmentation map Sl(t) is computed from the left frame L(t) and in step 212 a right segmentation map Sr(t) is computed from the right frame R(t). From both the left frame L(t) and the right frame R(t) a disparity map D(t) is computed in step 22. While the left segmentation map and the right segmentation map contain information about the segments of the respective frame only, the disparity map D(t) contains the disparity information, from which the distance of a segment from the stereo camera can be computed.
The disparity map D(t) and the left segmentation map Sl(t) and the right segmentation map Sr(t) are used to obtain a stereo segmentation map SS(t) in step 23, containing both the information about the segments in the left frame L(t) and the right frame R(t) and the information about the disparity and hence the distance of the segments from the stereo camera. In step 24 the segment boundaries SB(t) are extracted from the stereo segmentation map SS(t). These segment boundaries SB(t) are then fitted to a leaf model 25, which is a model of the shape and/or other parameters of the leaves under consideration. If the sort of plant that is to be examined changes one probably also has to change the leaf model 25, in order to at least approximately describe the leaf shape correctly. Both the input from the leaf model 25 and the segment boundaries SB(t) are used to obtain a leaf description in step 26.
Once the leaf description has been obtained the plant development parameters one is interested in can be calculated in method steps 271, 272 and 273, leading to the size of the leaves LSi(t), the colour histogram of the leaves LCo(t) or the shape and/or the pose of the leaves LSh(t), respectively. From a time series of these parameters other parameters can be obtained, such as the growth and a growth rate and/or other development parameters of the plant.
Fig. 4 is a flow chart of the initialization of the Metropolis-based image segmentation algorithm. The Metropolis-based image segmentation algorithm can be used to calculate the left segmentation map Sl(t) and/or the right segmentation map Sr(t). The initialization is shown for the left segmentation map Sl(t) only, but it should be understood that it works for the right segmentation map Sr(t) in an analogous way.
The left frame L(t) from the stereo image is fed into the Metropolis-based image segmentation algorithm in step 2112, leading to the left segmentation map Sl(t). The initial spin configuration SCi(t) that is used as a starting point for the Metropolis-based image segmentation algorithm can be chosen from many different initial configurations. This choice is made in step 2111. The initial spin configuration SCi(t) can be chosen to be the left segmentation map Sl(t-1) at the previous time t-1. If the changes in the left segmentation map between time t and the previous time t-1 are small, this choice strongly reduces the number of iterations needed in the Metropolis-based image segmentation algorithm and hence also the calculation time. In addition, a consistent labelling is achieved.
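The effect of the warm start can be illustrated by the greatly simplified Potts-model relaxation below. The energy function, the grey-value-based couplings and all parameter values are assumptions of this sketch and do not reproduce the full algorithm; the point is only that the relaxation starts from spins_init, for example the previous map Sl(t-1), rather than from a random configuration.

```python
import numpy as np

def metropolis_segmentation(image_gray, spins_init, n_iters=5, beta=2.0, sigma=10.0, rng=None):
    """Greatly simplified sketch of step 2112: relax a Potts-like spin map."""
    rng = np.random.default_rng() if rng is None else rng
    spins = spins_init.copy()                       # warm start, e.g. Sl(t-1)
    h, w = image_gray.shape
    labels = np.unique(spins)
    for _ in range(n_iters):
        ys = rng.integers(1, h - 1, size=h * w)
        xs = rng.integers(1, w - 1, size=h * w)
        for y, x in zip(ys, xs):
            current, proposal = spins[y, x], rng.choice(labels)

            def energy(s):
                # aligned neighbours with similar grey values lower the energy
                e = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    J = np.exp(-abs(float(image_gray[y, x]) -
                                    float(image_gray[y + dy, x + dx])) / sigma)
                    e -= J if s == spins[y + dy, x + dx] else 0.0
                return e

            dE = energy(proposal) - energy(current)
            # Metropolis acceptance rule
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[y, x] = proposal
    return spins
```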
It is also possible to use more than just the previous left segmentation map Sl(t-1). If the optic flow OF can be calculated from several previous left segmentation maps, it can be extrapolated to predict what the equilibrium state of the Hamiltonian at time t might be. This, too, can strongly reduce the number of iterations in the Metropolis-based image segmentation algorithm. Both possible choices are indicated in Fig. 4.
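The underlying idea can be illustrated by the sketch below, which uses a dense optic flow between the previous and the current grey-value frame to warp Sl(t-1) forward as a guess for SCi(t). This is only an illustrative stand-in for the extrapolation described above; the function name and the flow parameters are assumptions.

```python
import cv2
import numpy as np

def predict_spins_from_flow(prev_gray, curr_gray, prev_seg):
    """Warp the previous segmentation map forward to obtain an initial spin map SCi(t)."""
    # flow from the current to the previous frame, so that every pixel at time t
    # can look up the label of the pixel it came from at time t-1
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_seg.astype(np.float32), map_x, map_y, cv2.INTER_NEAREST)
    return warped.astype(prev_seg.dtype)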
Fig. 5 shows a flow chart of the initialization of the Metropolis-based image segmentation algorithm in which information from the other frame is used. If, as shown, the left segmentation map Sl(t) is to be computed in step 2112, the initial spin configuration SCi(t) can be chosen in step 2111 to be, for example, the right segmentation map Sr(t) at the same time t, or it can be estimated from the disparity map D(t) at the same time. As for the two possibilities indicated in Fig. 4, these choices of the initial spin configuration also strongly reduce the number of Metropolis iterations.
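A minimal sketch of this cross-frame initialization is given below: segment labels of the right frame are carried over to the left frame using the disparity map, assuming a rectified stereo pair and a disparity map referenced to the left frame. The handling of invalid disparities and the function name are assumptions of the sketch.

```python
import numpy as np

def init_left_spins_from_right(seg_right, disparity_left, default_label=0):
    """Carry labels from Sr(t) over to the left frame using D(t) as initial spins."""
    h, w = seg_right.shape
    spins = np.full((h, w), default_label, dtype=seg_right.dtype)
    xs = np.arange(w)
    for y in range(h):
        # for a rectified pair, x_right = x_left - disparity
        x_right = np.round(xs - disparity_left[y]).astype(int)
        valid = (x_right >= 0) & (x_right < w) & (disparity_left[y] > 0)
        spins[y, valid] = seg_right[y, x_right[valid]]
    return spins
```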
Fig. 6 shows a flow chart of another embodiment of the segmentation algorithm. From the left frame L(t), which is input to the Metropolis-based image segmentation algorithm in step 2112, the left segmentation map Sl(t) is to be obtained. In step 2111 the initial spin configuration is chosen as shown in Fig. 5, i.e. the disparity map D(t) or the right segmentation map Sr(t) at the same time t as the left frame L(t) is used. The resulting left segmentation map Sl(t) is then checked for consistency in step 2113 by comparing it to the optic flow OF and/or the left segmentation map Sl(t-1) at the previous time t-1. In this way merging and splitting of segments can be detected, leading to an error signal E(t). From this the necessary modifications are derived in step 2114 and fed, in the form of a correction signal C(t), into both the spin initialization step 2111 and the Metropolis-based image segmentation step 2112. Afterwards the left frame L(t) is segmented again in step 2112 using the correction signal, in order to obtain better results. Of course, this method can also be used for the right segmentation map Sr(t). It is also possible to first use information from the same segmentation map at earlier times, such as Sl(t-1) or the optic flow OF, and then to compare the result with information obtained from processing the other frame, such as the right segmentation map Sr(t) or the disparity map D(t) at the same time t.
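The consistency check of step 2113 can be illustrated by the sketch below, which flags suspected merges and splits by comparing segment overlaps between the newly computed map and a reference map (for example Sl(t-1) warped by the optic flow). The overlap threshold and the return format are assumptions of the sketch, standing in for the error signal E(t).

```python
import numpy as np

def consistency_errors(seg_curr, seg_ref, min_overlap=0.2):
    """Flag labels that cover several reference segments (merge) or share one (split)."""
    merges, splits = [], []
    ref_labels = np.unique(seg_ref)
    curr_labels = np.unique(seg_curr)
    for label in curr_labels:
        mask = seg_curr == label
        area = mask.sum()
        covered = [int(r) for r in ref_labels
                   if (mask & (seg_ref == r)).sum() > min_overlap * area]
        if len(covered) > 1:
            merges.append((int(label), covered))      # one new segment spans several old ones
    for r in ref_labels:
        mask = seg_ref == r
        area = mask.sum()
        covering = [int(c) for c in curr_labels
                    if (mask & (seg_curr == c)).sum() > min_overlap * area]
        if len(covering) > 1:
            splits.append((int(r), covering))          # one old segment split into several
    return merges, splits
```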
Fig. 7 shows a flow chart of the algorithm to create the leaf descriptions from the segment boundaries SB(t) obtained from the stereo segmentation map SS(t). Both the segment boundaries SB(t) and the leaf model 25 are input to a fitting algorithm in step 261. From this a fitting error Efit is provided for each created leaf description. Based on the fitting errors Efit for different leaf descriptions, merges are proposed in step 262 and used to create modified segment boundaries SB*(t). These are then sent back to step 261 and fitted to the leaf model 25 again. The resulting description again provides a fitting error Efit, and step 262 is performed again. This procedure is repeated several times, and the leaf descriptions with the lowest fitting error Efit are selected in step 263.
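Under simplifying assumptions the fit-and-merge loop of steps 261 to 263 could look as follows. An ellipse serves as a stand-in for the leaf model 25, OpenCV 4 is assumed for the contour interface, and the fitting error Efit is approximated by one minus the overlap between the segment and the fitted ellipse; all function names are hypothetical.

```python
import cv2
import numpy as np

def ellipse_fit_error(mask):
    """Fit an ellipse (stand-in for the leaf model 25) to a segment mask and
    return 1 - IoU between the mask and the rasterized ellipse as the error Efit."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 1.0
    boundary = max(contours, key=len)
    if len(boundary) < 5:                       # fitEllipse needs at least 5 points
        return 1.0
    ellipse = cv2.fitEllipse(boundary)
    rendered = np.zeros(mask.shape, dtype=np.uint8)
    cv2.ellipse(rendered, ellipse, 1, -1)       # filled ellipse with value 1
    inter = np.logical_and(mask, rendered).sum()
    union = np.logical_or(mask, rendered).sum()
    return 1.0 - inter / union

def propose_merge(seg, label_a, label_b):
    """Step 262 in miniature: merge two adjacent labels if this lowers Efit."""
    mask_a, mask_b = seg == label_a, seg == label_b
    if ellipse_fit_error(mask_a | mask_b) < min(ellipse_fit_error(mask_a),
                                                ellipse_fit_error(mask_b)):
        seg = seg.copy()
        seg[mask_b] = label_a                   # accept the merge
    return seg
```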
Fig. 8 shows the results of each step of the algorithm as sketched in Fig. 3. For the sake of simplicity the same reference numbers are used as in Fig. 3. From the left frame L(t) 11 and the right frame R(t) 12, which show slightly different views of the scene, the respective segmentation maps Sl(t) and Sr(t) are calculated. The two segmentation maps can contain a different number of identified segments 211, 212. Both frames are used to calculate the disparity map D(t) in step 22, from which depth information of the segments can be calculated using the camera parameters. This depth is the distance of the respective segment from the camera.
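For completeness, the standard relation used to turn a disparity value into a distance for a rectified stereo rig is sketched below; the focal length and baseline are assumed to be known from the calibration of cameras 2 and 4.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance from the camera for a rectified rig: Z = f * B / d,
    with the focal length f in pixels, the baseline B in metres and
    the disparity d in pixels."""
    return focal_px * baseline_m / disparity_px
```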
The segmentation maps Sl(t) and Sr(t) and the disparity map D(t) are then used to build a stereo segmentation map SS(t) in step 23, showing both the segment information from the segmentation maps and the disparity information from the disparity map D(t). From this the three-dimensional segment boundaries SB(t) can be calculated in step 24.
These are then fitted to the leaf model 25 in step 26, leading to a leaf description in step 27, from which the development parameters of interest of the plants can be extracted.

Claims

1) A method for estimating development parameters of plants, including the steps of
   i. taking a stereo image of the plants at time t, wherein the stereo image comprises a left frame L(t) and a right frame R(t),
   ii. computing a left segmentation map Sl(t) of the left frame L(t) and a right segmentation map Sr(t) of the right frame R(t) using a frame inherent feature,
   iii. computing a disparity map D(t) using the left frame L(t) and the right frame R(t),
   iv. combining the left segmentation map Sl(t), the right segmentation map Sr(t) and the disparity map to obtain a stereo segmentation map SS(t),
   v. extracting segment boundaries of each segment in the stereo segmentation map SS(t),
   vi. fitting the segment boundaries to a leaf model,
   vii. repeating the previous steps for different times,
   viii. calculating the development parameters from the results of the fitting at different times.

2) The method according to claim 1, characterized in that the disparity map is computed using a phase-based stereo algorithm.

3) The method of claim 1 or 2, characterized in that the left segmentation map Sl(t) and the right segmentation map Sr(t) are computed using a Metropolis-based image segmentation algorithm.

4) The method of claim 3, characterized in that computing the left segmentation map Sl(t) and the right segmentation map Sr(t) includes
   1. partitioning one of the left frame and the right frame into segments using the Metropolis-based image segmentation algorithm;
   2. providing labels to the segments, forming the respective segmentation map;
   3. using the labels of the left frame to define the initial state for the Metropolis-based image segmentation algorithm used to partition the right frame or vice versa.

5) The method of claim 3 or 4, characterized in that a left segmentation map Sl(t') or a right segmentation map Sr(t') of a former time t' are used as initial state for the Metropolis-based image segmentation algorithm used to compute the left segmentation map Sl(t) or the right segmentation map Sr(t) of time t.

6) The method according to one of the preceding claims, characterized in that extracting the segment boundaries includes calculating a disparity value of the centre of mass of each segment in the stereo segmentation map and identifying and correcting false boundaries.

7) The method according to one of the preceding claims, characterized in that it further includes the steps:
   i. calculating an optic flow map from at least two left frames or at least two right frames of different times,
   ii. using the optic flow map to check a time consistency of the plant development parameters.

8) The method of claim 7, characterized in that the optic flow map is calculated from at least five left frames or at least five right frames.

9) The method of one of the preceding claims, characterized in that fitting the segment boundaries to the leaf model provides shape parameters of the leaf shapes LSh(t) and fitting errors for each segment.

10) The method of claim 9, characterized in that the fitting errors are minimized by merging or dividing adjacent segments in the stereo segmentation map.

11) The method of one of the preceding claims, characterized in that the development parameters include the leaf size and/or the leaf shape and/or the leaf pose.

12) A device for estimating development parameters of plants, including a stereo camera and an electronic device suitable for performing the method according to one of the claims 1 to 9.
PCT/EP2011/006222 2011-12-09 2011-12-09 Method and device for estimating development parameters of plants WO2013083146A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2011/006222 WO2013083146A1 (en) 2011-12-09 2011-12-09 Method and device for estimating development parameters of plants

Publications (1)

Publication Number Publication Date
WO2013083146A1 true WO2013083146A1 (en) 2013-06-13

Family

ID=45349151

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1564542A1 (en) 2004-02-17 2005-08-17 Hitachi, Ltd. Plant growth analyzing system and method
DE102008060141A1 (en) 2008-12-03 2010-06-10 Forschungszentrum Jülich GmbH Method for measuring the growth of leaf discs and a device suitable for this purpose

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALENYA G ET AL: "3D modelling of leaves from color and ToF data for robotized plant measuring", ROBOTICS AND AUTOMATION (ICRA), 2011 IEEE INTERNATIONAL CONFERENCE ON, IEEE, 9 May 2011 (2011-05-09), pages 3408 - 3414, XP032033915, ISBN: 978-1-61284-386-5, DOI: 10.1109/ICRA.2011.5980092 *
ALEXEY ABRAMOV ET AL: "3D Semantic Representation of Actions from efficient stereo-image-sequence segmentation on GPUs", 5TH INTERNATIONAL SYMPOSIUM 3D DATA PROCESSING, VISUALIZATION AND TRANSMISSION - 3DPVT'10, 20 May 2010 (2010-05-20), Espace Saint-Martin, Paris, France, pages 1 - 8, XP055032607, Retrieved from the Internet <URL:http://www.dpi.physik.uni-goettingen.de/~eaksoye/papers/3DPVT_2010.pdf> [retrieved on 20120712] *
BABETTE DELLEN ET AL: "Segmenting color images into surface patches by exploiting sparse depth data", APPLICATIONS OF COMPUTER VISION (WACV), 2011 IEEE WORKSHOP ON, IEEE, 5 January 2011 (2011-01-05), pages 591 - 598, XP031913628, ISBN: 978-1-4244-9496-5, DOI: 10.1109/WACV.2011.5711558 *
BISKUP ET AL: "A stereo imaging system for measuring structural parameters of plant canopies", PLANT CELL AND ENVIRONMENT, WILEY-BLACKWELL PUBLISHING LTD, GB, vol. 30, no. 10, 1 January 2007 (2007-01-01), pages 1299 - 1308, XP007912123, ISSN: 0140-7791, DOI: 10.1111/J.1365-3040.2007.01702.X *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109647719A (en) * 2017-10-11 2019-04-19 北京京东尚科信息技术有限公司 Method and apparatus for sorting cargo
CN109647719B (en) * 2017-10-11 2020-07-31 北京京东振世信息技术有限公司 Method and device for sorting goods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11796627; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 11796627; Country of ref document: EP; Kind code of ref document: A1)