CN116580310A - Crop growth condition monitoring method based on image recognition - Google Patents


Info

Publication number
CN116580310A
Authority
CN
China
Prior art keywords
point cloud
point
plant
data
area
Prior art date
Legal status
Granted
Application number
CN202310861829.XA
Other languages
Chinese (zh)
Other versions
CN116580310B (en)
Inventor
李相国
许洁纯
刘林
刘洋
Current Assignee
Areson Technology Corp
Original Assignee
Areson Technology Corp
Priority date
Filing date
Publication date
Application filed by Areson Technology Corp filed Critical Areson Technology Corp
Priority to CN202310861829.XA
Publication of CN116580310A
Application granted
Publication of CN116580310B
Legal status: Active
Anticipated expiration


Classifications

    • G06V 20/188 — Scenes; scene-specific elements; terrestrial scenes; vegetation
    • G06V 10/267 — Image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/30 — Image preprocessing; noise filtering
    • G06V 10/42 — Extraction of image or video features; global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • Y02A 40/10 — Adaptation technologies in agriculture, forestry, livestock or agroalimentary production; in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a crop growth condition monitoring method based on image recognition, which comprises the steps of obtaining a multi-view picture of a plant and a reference object thereof, and obtaining the actual occupied area of the plant by carrying out image recognition on the picture; calculating the total leaf area of the plant according to the average leaf area and the total leaf number of the plant; and obtaining the leaf area index of the plant according to the actual occupied area and the total leaf area. The application realizes accurate monitoring of crop growth by improving the extraction precision of leaf area indexes.

Description

Crop growth condition monitoring method based on image recognition
Technical Field
The application relates to the technical field of crop growth condition monitoring, in particular to a crop growth condition monitoring method based on image recognition.
Background
Crop growth information reflects the growth condition and trend of crops and is an important component of agricultural condition information. The vegetation index, leaf area index and the like are common monitoring indexes that reflect crop growth vigor. Leaf area index (LAI) refers to the sum of plant leaf areas per unit of ground surface area, projected in the vertical direction. LAI is an important parameter describing the growth status of plants and is of great significance for environmental research and crop production management.
In the prior art, the method for monitoring the crop growth by calculating the LAI mainly comprises the following steps:
ground observation: the LAI of a single plant or in a small range is measured by adopting a ground observation method, such as spraying fluorescein, covering net, direct shearing method and the like. This method can directly obtain LAI of plants, but is limited to small samples, and is difficult to spread to the whole area.
Unmanned aerial vehicle remote sensing: large-range LAI information is obtained by unmanned aerial vehicle remote sensing technologies such as multispectral imaging and laser radar. The unmanned aerial vehicle can acquire high-resolution, high-frequency data, which is beneficial to monitoring the crop growth process in real time.
Satellite remote sensing: wide-area LAI data are acquired by satellite remote sensing technology, using satellites such as MODIS and Landsat. Satellite remote sensing can provide global LAI information, but with lower resolution and longer revisit intervals.
Machine learning based LAI estimation: the LAI value is estimated from remote sensing data by machine learning methods such as support vector machines and random forests. This method can quickly acquire LAI information with high precision, but requires a pre-trained model.
However, leaf area index calculation in the prior art still has low precision and cannot simultaneously suit small plants with few leaves and large plants with many leaves.
Disclosure of Invention
The application provides a crop growth condition monitoring method based on image recognition, which is used for accurately monitoring the crop growth condition by improving the extraction precision of leaf area indexes.
In order to solve the technical problems, the embodiment of the application provides a crop growth monitoring method based on image recognition, which comprises the following steps:
acquiring a multi-view picture of a plant and a reference object thereof, and obtaining the actual occupied area of the plant by carrying out image recognition on the picture;
calculating the total leaf area of the plant according to the average leaf area and the total leaf number of the plant;
obtaining a leaf area index of the plant according to the actual occupied area and the total leaf area;
monitoring the growth vigor of the plant according to the leaf area index;
the actual occupied area of the plant is obtained by carrying out image recognition on the picture, specifically:
performing size conversion on the picture, and then performing graying treatment to obtain a gray scale picture;
performing binarization processing on the gray level image to obtain a binarized image;
removing a small area region in the binarized image by adopting a method for removing a small connected region, and identifying and reserving a first region and a second region, wherein the first region is a plant region, and the second region is a reference object region;
identifying the number of white pixels in the first area and the second area, marking the number as a first white pixel number, removing the second area, and identifying the number of white pixels in the first area, marking the number as a second white pixel number;
obtaining the actual occupied area of the plant according to the first white pixel number, the second white pixel number and the actual area of the reference object;
the actual occupied area of the plant is obtained according to the first white pixel number, the second white pixel number and the actual area of the reference object, specifically:
S = N2 / (N1 − N2) × S0
wherein N1 represents the first white pixel number, N2 represents the second white pixel number, and S0 represents the actual area of the reference object.
As one preferable scheme, the actual occupation area of the plant is obtained by performing image recognition on the picture, specifically:
performing size conversion on the picture, and then performing graying treatment to obtain a gray scale picture;
performing binarization processing on the gray level image to obtain a binarized image;
removing a small area region in the binarized image by adopting a method for removing a small connected region, and identifying and reserving a first region and a second region, wherein the first region is a plant region, and the second region is a reference object region;
identifying the number of white pixels in the first area and the second area, marking the number as a first white pixel number, removing the second area, and identifying the number of white pixels in the first area, marking the number as a second white pixel number;
and obtaining the actual occupied area of the plant according to the first white pixel number, the second white pixel number and the actual area of the reference object.
As one preferable mode, the total leaf number is obtained according to the following steps:
according to the multi-view picture, a multi-view plant image sequence is obtained from the multi-view picture, and then three-dimensional point cloud data of the plant are extracted from the multi-view plant image sequence;
after removing background noise points in the three-dimensional point cloud data, dividing the three-dimensional point cloud data, removing useless points and reserving plant point clouds;
dividing the plant point cloud to obtain a first part point cloud and a second part point cloud, removing the second part point cloud, and reserving the first part point cloud; the first part point cloud is the leaf point cloud, and the second part point cloud comprises the other parts of the plant;
and carrying out single-leaf separation treatment on the leaf point cloud, and counting the number of single leaves to obtain the total leaf number.
As one preferable solution, the removing the background noise point in the three-dimensional point cloud data specifically includes:
calculating a first average distance and a first standard deviation between all data points in the three-dimensional point cloud data, and obtaining a preset interval according to the first average distance and the first standard deviation, wherein the preset interval is used for judging whether the data points are outliers;
and calculating, for each data point, the average distance between the current data point and its adjacent data points, recording it as a second average distance, and deleting the data points whose second average distance is not within the preset interval as outliers.
As one preferable scheme, the dividing the three-dimensional point cloud data, removing useless points and retaining plant point clouds, specifically includes:
selecting a data pair: randomly selecting two data points from the three-dimensional point cloud data, and calculating the straight line where the two data points are located;
statistics of data point number: calculating an error value of each data point in the three-dimensional point cloud data relative to the straight line, counting the number of the data points with the error value smaller than a preset threshold value, and recording the number of the data points obtained by the calculation;
repeatedly executing the selecting of data pairs and the counting of data points until the maximum iteration number is reached;
and selecting the maximum value in the data point number obtained in multiple iterations, marking a straight line corresponding to the maximum value as a first straight line, and deleting three-dimensional point cloud data below the horizontal plane where the first straight line is positioned.
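The iterative line-fitting described above is essentially a RANSAC-style search. A minimal 2-D sketch follows; the function name, iteration count, and error threshold are illustrative assumptions, not part of the patent:

```python
import random

import numpy as np

def fit_ground_line(points_2d, n_iters=100, err_thresh=0.05, seed=0):
    """RANSAC-style search for the ground line: repeatedly pick two
    points, count how many points lie within err_thresh of the line
    through them, and keep the line with the most inliers."""
    rng = random.Random(seed)
    pts = np.asarray(points_2d, dtype=float)
    best_line, best_count = None, -1
    for _ in range(n_iters):
        i, j = rng.sample(range(len(pts)), 2)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-12:
            continue  # skip degenerate vertical pairs in this sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        errs = np.abs(pts[:, 1] - (a * pts[:, 0] + b))  # vertical residuals
        count = int((errs < err_thresh).sum())
        if count > best_count:
            best_line, best_count = (a, b), count
    return best_line, best_count
```

Points below the horizontal plane of the winning line would then be deleted, leaving the plant point cloud.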
As one preferable scheme, the dividing the plant point cloud to obtain a first part point cloud and a second part point cloud specifically includes:
extracting a point cloud skeleton of the plant point cloud to obtain a first skeleton point set;
initializing a point cloud set: dividing the first skeleton point set into a plurality of first point clouds according to a raindrop path by adopting the RAIN algorithm, and combining the plurality of first point clouds into a first point cloud set, wherein the raindrop path is the path generated by a raindrop, randomly generated by the RAIN algorithm, falling on the plant point cloud;
traversing the collection: selecting one of said first point clouds from said first point clouds;
updating the point cloud set: performing skeleton extraction on the first point cloud set to obtain a second skeleton point set, dividing the second skeleton point set into a plurality of second point cloud sets according to a raindrop path by adopting the RAIN algorithm, and updating the first point cloud set with the second point cloud sets;
judging the end condition: repeatedly executing the process of updating the point cloud set until the preset iteration times are reached, judging whether the data points in the first point cloud set are on the same straight line within a preset error range, if so, dividing the first point cloud set in the plant point cloud into a second part of point cloud, and dividing the rest part of the plant point cloud into the first part of point cloud; if not, judging whether the first point cloud set is traversed, and if not, repeatedly executing the processes of traversing the set, updating the point cloud set and judging the ending condition;
and filtering the second part of point cloud in the plant point cloud.
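The end condition above — whether the data points of a point cloud set lie on the same straight line within an error range — can be sketched with principal component analysis: for collinear points, essentially all variance concentrates in the first principal direction. The function name and tolerance are illustrative assumptions:

```python
import numpy as np

def is_collinear(points, tol=1e-3):
    """3-D points lie on one straight line (within tolerance) iff, after
    centring, the second-largest eigenvalue of the covariance matrix is
    negligible compared with the largest one."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending order
    return bool(eigvals[1] <= tol * max(eigvals[2], 1e-12))
```

Point cloud sets passing this test correspond to stem-like (non-leaf) structures in the division above.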
As one preferable scheme, the single-leaf separation treatment is performed on the leaf point cloud, specifically:
the spatial characteristics of each point in the leaf point cloud comprise the spatial smoothness of the point and the normal vector of the plane fitted to its neighborhood, and a first neighbor point set matrix of each point in the leaf point cloud is calculated through an iterative principal component analysis method;
removing points with Euclidean distance larger than a first preset threshold value in the first neighbor point set matrix to obtain a second neighbor point set matrix, wherein the Euclidean distance is the distance from the points to a fitting plane;
calculating a covariance matrix of the second neighbor point set matrix, and updating the fitting plane according to the covariance matrix;
acquiring seed points in the leaf point cloud, and establishing small patch areas with the same characteristics according to the seed points to obtain segmented small patch clusters;
and taking the small patches in the small patch clusters as units, carrying out region extension according to the adjacency relation of the patches and the coplanarity between the patches, splicing the plurality of patches after region extension into a spatial structure, and dividing the spatial structure into a single leaf when the number of points covered by the spatial structure exceeds a preset second threshold value.
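The iterative plane fitting in the steps above (PCA fit of a point's neighborhood, distance-based pruning of the neighbor set, refit) can be sketched as follows; the distance threshold, iteration cap, and function name are illustrative assumptions:

```python
import numpy as np

def fit_local_plane(neighbors, dist_thresh=0.1, max_iters=10):
    """Iterative PCA plane fit: the plane normal is the eigenvector of
    the neighbourhood covariance matrix with the smallest eigenvalue;
    neighbours farther than dist_thresh from the plane are dropped and
    the plane is refit, until the neighbour set stops changing."""
    pts = np.asarray(neighbors, dtype=float)
    for _ in range(max_iters):
        centroid = pts.mean(axis=0)
        cov = np.cov((pts - centroid).T)
        _, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normal = eigvecs[:, 0]            # smallest-eigenvalue direction
        dists = np.abs((pts - centroid) @ normal)  # point-to-plane distances
        keep = dists <= dist_thresh
        if keep.all():
            break                         # neighbour set unchanged: converged
        pts = pts[keep]
    return normal, centroid, pts
```

The surviving neighbor set and its normal then supply the smoothness and coplanarity cues used for patch growing.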
As one preferable scheme, the leaf area index of the plant is obtained according to the actual occupied area and the total leaf area, specifically:
LAI = St / S
wherein LAI represents the leaf area index, St represents the total leaf area, and S represents the actual occupied area.
As one preferable mode, the preset interval is [D − S, D + S], where D is the first average distance and S is the first standard deviation.
Compared with the prior art, the embodiment of the application has the beneficial effects that at least one of the following points is adopted:
the application obtains the actual occupied area of the plant by carrying out image processing on the multi-view picture, calculates the total leaf area of the plant according to the average leaf area and the total leaf number of the plant, and finally obtains the leaf area index of the plant according to the actual occupied area and the total leaf area. The method introduces the total leaf number when calculating the leaf area index, and the acquisition of the total page number is a more accurate parameter, so that the leaf area index calculated by the method has high precision, and the method can be applied to small plants with less leaf numbers and large plants with more leaf numbers. And further, the precision of extracting the leaf area index can be improved, so that the accurate monitoring of the crop growth situation can be realized.
Drawings
FIG. 1 is a flow chart of a crop growth monitoring method based on image recognition in one embodiment of the application;
FIG. 2 is a multi-view image of a plant and its references based on an image recognition method for crop growth monitoring in one embodiment of the application;
FIG. 3 is a flow chart of a method of segmenting three-dimensional point cloud data in one embodiment of the application;
FIG. 4 is a flow diagram of a method of performing a single leaf separation process on the leaf point cloud in one embodiment of the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application, and the purpose of these embodiments is to provide a more thorough and complete disclosure of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", "a third", etc. may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the description of the present application, it should be noted that all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application, as the particular meaning of the terms described above in the present application will be understood to those of ordinary skill in the art in the detailed description of the application.
An embodiment of the present application provides a crop growth monitoring method based on image recognition, please refer to fig. 1, fig. 1 shows a flow chart of a crop growth monitoring method based on image recognition in one embodiment of the present application, which includes the following steps:
step S1: and obtaining a multi-view image of the plant and a reference object thereof, and obtaining the actual occupied area of the plant by carrying out image recognition on the image.
As an embodiment, as shown in fig. 2, pictures of the plant and its reference object are collected or photographed by a camera at different angles. In order to eliminate the influence of the photographing angle and photographing distance on the actual floor area of the citrus plant, a reference object of known, specific area is placed beside the citrus tree during photographing, so as to calculate the actual floor area of the citrus tree.
As one embodiment, the obtaining the actual floor area of the plant by performing image processing on the picture includes the following steps:
step S11: and performing size conversion on the picture, and then performing graying treatment to obtain a gray scale picture.
Step S12: and carrying out binarization processing on the gray level image to obtain a binarized image.
Step S13: and removing a small area region in the binarized image by adopting a method for removing a small connected region, and identifying and reserving a first region and a second region, wherein the first region is a plant region, and the second region is a reference object region.
Step S14: and identifying the white pixel number of the first area and the second area, namely the first white pixel number, and identifying the white pixel number of the first area after removing the second area, namely the second white pixel number.
Step S15: and obtaining the actual occupied area of the plant according to the first white pixel number, the second white pixel number and the actual area of the reference object. Specifically, the actual footprint of the plant is calculated according to the following formula:
S = N2 / (N1 − N2) × S0
wherein N1 represents the first white pixel number, N2 represents the second white pixel number, and S0 represents the actual area of the reference object; the actual area of the reference object is measured in advance.
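The pixel-count conversion of step S15 can be sketched as follows, assuming N1 is the white-pixel count with the reference object present, N2 the count after the reference region is removed, and S0 the known reference area; the function name is an illustrative assumption:

```python
def plant_footprint(n_white_total, n_white_plant, ref_area):
    """Convert white-pixel counts into an actual ground area.

    n_white_total: white pixels with plant AND reference present (N1)
    n_white_plant: white pixels after the reference region is removed (N2)
    ref_area:      known real-world area of the reference object (S0)

    The reference occupies N1 - N2 pixels, so one pixel corresponds to
    ref_area / (N1 - N2) of real-world area.
    """
    ref_pixels = n_white_total - n_white_plant
    if ref_pixels <= 0:
        raise ValueError("reference region not found in the binarized image")
    return n_white_plant * ref_area / ref_pixels
```

For example, 1500 total white pixels, 1000 plant-only pixels, and a 0.25 m² reference give a 0.5 m² footprint.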
Step S2: the total leaf area of the plant is calculated from the average leaf area and the total leaf number of the plant.
As one example, 24 leaves were randomly collected from the plant as samples, and the average area of the 24 leaves was determined.
As one example, the total blade number is obtained according to the following steps:
step S21: according to the multi-view picture, a multi-view plant image sequence is obtained from the multi-view picture, and then three-dimensional point cloud data of the plant are extracted from the multi-view plant image sequence; and a narrower base line and more mutually overlapped areas are arranged between two adjacent pictures of the multi-view picture. Specifically, after three-dimensional point cloud data of the plant are extracted from the multi-view plant image sequence, coordinates of each data point are transformed to three-dimensional coordinates in the real world, an x-axis of the three-dimensional coordinates is located on a horizontal plane, a plane formed by the x-axis and a y-axis is perpendicular to the horizontal plane, and then a z-axis can be determined according to a right-hand coordinate system method. Preferably, the three-dimensional point cloud data is extracted by using an OenMVG and OpenMVS combined method, or the three-dimensional point cloud data is extracted by using a three-dimensional laser radar method.
Step S22: after removing background noise points in the three-dimensional point cloud data, dividing the three-dimensional point cloud data, removing useless points and reserving plant point clouds;
step S23: dividing the plant point cloud to obtain a first part point cloud and a second part point cloud, removing the second part point cloud, and reserving the first part point cloud; the first part of point cloud is a blade point cloud, and the second part of point cloud is other part of point cloud of the plant;
step S24: extracting leaf point clouds from the plant point clouds, then carrying out single-leaf separation treatment on the leaf point clouds, and counting the number of single leaves to obtain the total leaf number.
As one embodiment, the removing the background noise point in the three-dimensional point cloud data specifically includes:
step S221: calculating a first average distance D and a first standard deviation S between all data points in the three-dimensional point cloud data, and obtaining a preset interval according to the first average distance D and the first standard deviation S, wherein the preset interval is used for judging whether the data points are outliers;
step S222: and calculating the average distance between each data point and the adjacent point of the data point as a second average distance, and deleting the data points with the second average distance not in the preset interval as outliers. The adjacent points are data points with the distance from the current data point being smaller than a third preset threshold value, and when the average distance between the data point and the adjacent points is calculated, the average distance between the current data point and all the adjacent points is required to be calculated.
As one embodiment, the deleting of the data points whose second average distance is not within the preset interval as outliers specifically includes:
and judging whether the second average distance is within the interval [D − S, D + S]; if not, the current data point is judged to be an outlier and deleted.
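Steps S221–S222 can be sketched as a brute-force statistical outlier filter. Note the simplifying assumption: this sketch takes the k nearest points as the "adjacent points", whereas the patent defines adjacency by a third distance threshold; k and the function name are illustrative:

```python
import numpy as np

def remove_outliers(points, k=8):
    """Drop points whose mean distance to their k nearest neighbours
    falls outside [D - S, D + S], where D and S are the mean and
    standard deviation of all pairwise distances in the cloud."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # first average distance D and first standard deviation S
    pairwise = dists[np.triu_indices(len(points), 1)]
    d_mean, d_std = pairwise.mean(), pairwise.std()
    keep = []
    for i in range(len(points)):
        # second average distance: mean over the k nearest neighbours
        # (index 0 of the sorted row is the point's zero self-distance)
        second_avg = np.sort(dists[i])[1:k + 1].mean()
        if d_mean - d_std <= second_avg <= d_mean + d_std:
            keep.append(i)
    return points[keep]
```

For large clouds a KD-tree neighbour query would replace the O(n²) distance matrix, but the acceptance interval is the same.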
Because the three-dimensional point cloud data still include data other than the plant (such as flowerpots, soil, references, etc.), the useless points need to be further eliminated.
As an embodiment, as shown in fig. 3, the three-dimensional point cloud data is segmented, useless points are removed, and the plant point cloud is reserved, specifically:
Step S223: selecting a data pair; randomly selecting two data points from the three-dimensional point cloud data, and calculating the straight line where the two data points are located. The expression of the straight line is y = a_i·x + b_i, where i represents the iteration number.
Step S224: counting the number of data points; calculating an error value of each data point in the three-dimensional point cloud data relative to the straight line, counting the number of data points whose error value is smaller than a preset threshold value, and recording the number N of data points obtained by this calculation. Specifically, the x coordinate of the current data point is substituted into the straight line to obtain the corresponding y coordinate on the straight line, and the error value is the difference between this y coordinate and the actual y coordinate of the data point.
Step S225: repeatedly executing the selected data pairs and the statistics data point number until the maximum iteration number is reached;
step S226: and selecting the maximum value in the number N of the data points obtained in multiple iterations, marking a straight line corresponding to the maximum value as a first straight line, and deleting the three-dimensional point cloud data below the horizontal plane where the first straight line is positioned. And the three-dimensional point cloud data above the horizontal plane where the first straight line is positioned is plant point cloud.
As one embodiment, the dividing of the plant point cloud to obtain a first part point cloud and a second part point cloud includes the following steps:
Step S231: extracting a point cloud skeleton of the plant point cloud to obtain a first skeleton point set. Point cloud skeleton extraction is prior art and is not described in detail herein.
Step S232: initializing the point cloud set; dividing the first skeleton point set into a plurality of first point clouds according to a raindrop path by adopting the RAIN algorithm, and combining the plurality of first point clouds into a first point cloud set C_m, m = 1, 2, ..., M, where M is the number of first point clouds; the raindrop path is the path generated by a raindrop, randomly generated by the RAIN algorithm, falling on the plant point cloud. The raindrops fall at arbitrary positions of the plant point cloud, and the first skeleton point set can be divided into a plurality of first point clouds according to the raindrop paths.
Initialize m = 1 and the iteration number k = 1.
Step S233: traversing the set; selecting one first point cloud from the first point cloud set. The traversal starts from the first point cloud C_1 and selects each first point cloud in turn.
Step S234: updating the point cloud set; performing skeleton extraction on the selected first point cloud to obtain a second skeleton point set, dividing the second skeleton point set into a plurality of second point clouds according to a raindrop path by adopting the RAIN algorithm, and updating the first point cloud set with the second point clouds.
Step S235: judging the ending condition, namely repeatedly executing the process of updating the point cloud set until the preset iteration times are reached (namely, when k is smaller than the preset iteration times, k=k+1 and returning to step S234), judging whether the data points in the first point cloud set are on the same straight line within a preset error range, if so, dividing the first point cloud set in the plant point cloud into a second part point cloud, and dividing the rest part into the first part point cloud; if not, judging whether the first point cloud set is traversed, and if not, repeatedly executing the processes of traversing the set, updating the point cloud set and judging the ending condition.
If not, the process of traversing the set, updating the point cloud set and judging the ending condition is repeatedly executed, specifically, when M is smaller than M, m=m+1 and the process returns to step S233.
Step S236: and filtering out the second part point cloud from the plant point cloud. The second part point cloud obtained in steps S232 to S235 is the point cloud of the parts other than the leaves, such as the stem point cloud, and the first part point cloud is the leaf point cloud, so the second part point cloud needs to be filtered out.
As an embodiment, as shown in fig. 4, the single-leaf separation processing is performed on the leaf point cloud, which specifically includes:
step S241: each point in the blade point cloudComprises spatial smoothness in which the points are located and a planar normal vector of a neighborhood fit>Calculating a first neighbor point set matrix of each point in the blade point cloud by an iterative principal component analysis method>. Specifically, said->Is +.>The first neighbor point set matrixIs->The K neighbor domain point set matrix of (2) can be calculated to obtain a +/in each iteration principal component analysis method>Neighborhood +.>Distance of dots->,/>Representation->Midpoint (at the middle point)>Distance to the fitting plane. When the first neighbor point set matrix +.>Ending the iteration when the size of (2) remains unchanged.
Step S242: and removing points with Euclidean distance larger than a first preset threshold value in the first neighbor point set matrix to obtain a second neighbor point set matrix, wherein the Euclidean distance is the distance from the points to the fitting plane. Specifically, it will>/>Is->Removed from the first neighbor set matrix, wherein +.>And a threshold value is preset for the first.
Step S243: and calculating a covariance matrix of the second neighbor point set matrix, and updating the fitting plane according to the covariance matrix.
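Steps S241 to S243 amount to an iterative PCA plane fit with outlier trimming. The following is a minimal sketch under stated assumptions; the function name, the `eps` threshold and the iteration cap are illustrative, not taken from the patent:

```python
import numpy as np

def fit_plane_iterative(points, eps=0.05, max_iter=20):
    # Iterative PCA plane fit (sketch of steps S241-S243): the normal is the
    # covariance eigenvector with the smallest eigenvalue; points farther
    # than eps from the plane are dropped and the plane is refitted, until
    # the neighbour set stops shrinking.
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        centroid = pts.mean(axis=0)
        cov = np.cov((pts - centroid).T)            # 3x3 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
        normal = eigvecs[:, 0]                      # smallest-variance axis
        dist = np.abs((pts - centroid) @ normal)    # point-to-plane distances
        keep = dist <= eps
        if keep.all():                              # set size unchanged: stop
            break
        pts = pts[keep]
    return centroid, normal, pts

# A noisy plane plus one off-plane point: the outlier is trimmed away.
rng = np.random.RandomState(1)
plane = np.c_[rng.rand(50, 2), 0.001 * rng.randn(50)]
cloud = np.vstack([plane, [[0.5, 0.5, 1.0]]])
centroid, normal, inliers = fit_plane_iterative(cloud)
```

The stopping rule mirrors the patent's condition that the iteration ends when the size of the neighbor point set matrix remains unchanged.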
Step S244: seed points in the leaf point cloud are obtained, and small patch areas with the same characteristics are established according to the seed points to obtain segmented small patch clusters. The seed point is the point of highest smoothness in the first neighbor point set matrix M_i corresponding to the center point of the leaf point cloud. Then, according to the seed points, region growing is carried out at the point scale to establish small patch areas with the same characteristics.
Step S245: taking the small patches in the small patch clusters as units, region extension (or growth) is carried out according to the adjacency relation of the patches and the coplanarity between the patches; the patches after region extension are spliced into a spatial structure, and the spatial structure is segmented out as a single leaf when the number of points it covers exceeds a preset second threshold.
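The patch-level region extension of step S245 can be sketched as a graph traversal in which adjacent, nearly coplanar patches are merged. Here `patches` (patch id to unit normal), `adjacency` and the angular tolerance are hypothetical stand-ins for the patent's adjacency and coplanarity criteria:

```python
import numpy as np

def grow_regions(patches, adjacency, angle_tol_deg=10.0):
    # Patch-level region growing (illustrative): starting from each
    # unvisited patch, merge neighbours whose unit normals deviate by less
    # than angle_tol_deg, so nearly coplanar adjacent patches form one
    # region -- a candidate single leaf.
    cos_tol = np.cos(np.radians(angle_tol_deg))
    visited, regions = set(), []
    for seed in patches:
        if seed in visited:
            continue
        region, stack = [], [seed]
        visited.add(seed)
        while stack:
            p = stack.pop()
            region.append(p)
            for q in adjacency.get(p, ()):  # extend along patch adjacency
                if q not in visited and float(patches[p] @ patches[q]) >= cos_tol:
                    visited.add(q)
                    stack.append(q)
        regions.append(sorted(region))
    return regions

# Patches 0 and 1 are adjacent and coplanar; patch 2 faces another way.
patches = {0: np.array([0.0, 0.0, 1.0]),
           1: np.array([0.0, 0.0, 1.0]),
           2: np.array([1.0, 0.0, 0.0])}
adjacency = {0: {1, 2}, 1: {0}, 2: {0}}
regions = grow_regions(patches, adjacency)
```

In the patent, each resulting region would then be accepted as a single leaf only when the points it covers exceed the preset second threshold.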
Step S3: obtaining the leaf area index of the plant according to the actual occupied area and the total leaf area. Specifically, the leaf area index of the plant is calculated according to the following formula:

LAI = S_leaf / S_ground

wherein LAI represents the leaf area index, S_leaf represents the total leaf area, and S_ground represents the actual occupied area.
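The leaf-area-index calculation of step S3 is a direct ratio of total leaf area to occupied ground area; a trivial sketch, with illustrative numbers and units:

```python
def leaf_area_index(total_leaf_area, footprint_area):
    # Step S3: leaf area index = total one-sided leaf area divided by the
    # ground area the plant actually occupies (same units for both).
    if footprint_area <= 0:
        raise ValueError("footprint area must be positive")
    return total_leaf_area / footprint_area

# Example: mean single-leaf area 35 cm^2, 12 leaves, footprint 150 cm^2.
lai = leaf_area_index(35.0 * 12, 150.0)
```

The total leaf area here is the product of the average leaf area and the total leaf number, as in step S2 of the method.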
Step S4: and monitoring the growth vigor of the plant according to the leaf area index.
The application obtains the actual occupied area of the plant by performing image processing on multi-view pictures, calculates the total leaf area of the plant from the average leaf area and the total leaf number, and finally obtains the leaf area index of the plant from the actual occupied area and the total leaf area. The method introduces the total leaf number into the calculation of the leaf area index, and since the total leaf number can be acquired accurately, the leaf area index calculated by the method has high precision; the method is applicable both to small plants with few leaves and to large plants with many leaves. By monitoring plant growth through the leaf area index, the application can better guide production processes such as foliar fertilizer spraying, crop water and fertilizer supply, and disease control.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail but are not thereby to be construed as limiting the scope of the application. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the application, and these fall within the protection scope of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (8)

1. The crop growth condition monitoring method based on image recognition is characterized by comprising the following steps of:
acquiring a multi-view picture of a plant and a reference object thereof, and obtaining the actual occupied area of the plant by carrying out image recognition on the picture;
calculating the total leaf area of the plant according to the average leaf area and the total leaf number of the plant;
obtaining a leaf area index of the plant according to the actual occupied area and the total leaf area;
monitoring the growth vigor of the plant according to the leaf area index;
the actual occupied area of the plant is obtained by carrying out image recognition on the picture, specifically:
performing size conversion on the picture, and then performing graying treatment to obtain a gray scale picture;
performing binarization processing on the gray level image to obtain a binarized image;
removing a small area region in the binarized image by adopting a method for removing a small connected region, and identifying and reserving a first region and a second region, wherein the first region is a plant region, and the second region is a reference object region;
identifying the number of white pixels in the first area and the second area, marking the number as a first white pixel number, removing the second area, and identifying the number of white pixels in the first area, marking the number as a second white pixel number;
obtaining the actual occupied area of the plant according to the first white pixel number, the second white pixel number and the actual area of the reference object;
the actual occupation area of the plant is obtained according to the first white pixel number, the second white pixel number and the actual area of the reference object, specifically:
wherein the actual occupied area S of the plant is calculated as S = n2 / (n1 - n2) x S0, where n1 represents the first white pixel number, n2 represents the second white pixel number, and S0 represents the actual area of the reference object.
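The pixel-to-area conversion of claim 1 can be illustrated as follows, assuming (per the claim's definitions) that the first white pixel number counts plant plus reference pixels and the second counts plant pixels only, so their difference is the reference object's pixel count:

```python
def plant_footprint_area(first_white, second_white, ref_area):
    # Reconstructed pixel-to-area conversion: first_white counts white
    # pixels of plant + reference regions, second_white counts plant-only
    # pixels, so (first_white - second_white) is the reference's pixel
    # count and the known reference area sets the physical scale.
    ref_pixels = first_white - second_white
    if ref_pixels <= 0:
        raise ValueError("reference region contributed no pixels")
    return second_white / ref_pixels * ref_area

# 12000 white pixels with the reference present, 9000 after removing it,
# reference object of 25 cm^2: the plant occupies 9000/3000 * 25 cm^2.
area = plant_footprint_area(12000, 9000, 25.0)
```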
2. The image recognition-based crop growth monitoring method according to claim 1, wherein the total leaf number is obtained according to the steps of:
according to the multi-view picture, a multi-view plant image sequence is obtained from the multi-view picture, and then three-dimensional point cloud data of the plant are extracted from the multi-view plant image sequence;
after removing background noise points in the three-dimensional point cloud data, dividing the three-dimensional point cloud data, removing useless points and reserving plant point clouds;
dividing the plant point cloud to obtain a first part point cloud and a second part point cloud, removing the second part point cloud, and reserving the first part point cloud; the first part of point cloud is a blade point cloud, and the second part of point cloud is other part of point cloud of the plant;
and carrying out single-leaf separation treatment on the leaf point cloud, and counting the number of single leaves to obtain the total leaf number.
3. The crop growth monitoring method based on image recognition according to claim 2, wherein the removing of the background noise point in the three-dimensional point cloud data is specifically:
calculating a first average distance and a first standard deviation between all data points in the three-dimensional point cloud data, and obtaining a preset interval according to the first average distance and the first standard deviation, wherein the preset interval is used for judging whether the data points are outliers;
calculating, for each data point, the average distance between the current data point and its adjacent data points, recorded as a second average distance, and deleting as outliers the data points whose second average distance falls outside the preset interval.
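Claims 3 and 8 together describe a statistical outlier removal. A sketch with numpy, assuming outliers are the points whose mean neighbor distance falls outside the interval [mu - sigma, mu + sigma] of claim 8; the neighbor count `k` is an illustrative parameter:

```python
import numpy as np

def remove_outliers(points, k=4):
    # Statistical outlier removal sketch: mean distance of each point to
    # its k nearest neighbours; points whose mean distance lies outside
    # [mu - sigma, mu + sigma] (over all such means) are deleted.
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)              # ignore self-distances
    knn = np.sort(dist, axis=1)[:, :k]          # k nearest neighbours
    mean_d = knn.mean(axis=1)                   # the "second average distance"
    mu, sigma = mean_d.mean(), mean_d.std()
    keep = np.abs(mean_d - mu) <= sigma         # inside the preset interval
    return pts[keep]

# A tight cluster plus one far-away point: the stray point is removed.
cluster = np.random.RandomState(0).rand(30, 3)
cloud = np.vstack([cluster, [[10.0, 10.0, 10.0]]])
clean = remove_outliers(cloud)
```

The O(n^2) pairwise-distance matrix keeps the sketch short; a k-d tree would be used on real point clouds.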
4. The method for monitoring crop growth conditions based on image recognition according to claim 3, wherein the steps of dividing the three-dimensional point cloud data, removing useless points and retaining plant point clouds are as follows:
selecting a data pair: randomly selecting two data points from the three-dimensional point cloud data, and calculating a straight line where the two data points are located;
statistics of data point number: calculating an error value of each data point in the three-dimensional point cloud data relative to the straight line, counting the number of the data points with the error value smaller than a preset threshold value, and recording the number of the data points obtained by the calculation;
repeatedly executing the selected data pairs and the statistics data point number until the maximum iteration number is reached;
and selecting the maximum value in the data point number obtained in multiple iterations, marking a straight line corresponding to the maximum value as a first straight line, and deleting three-dimensional point cloud data below the horizontal plane where the first straight line is positioned.
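Claim 4 describes a RANSAC-style search for the best-supported line followed by removal of points below it. A minimal 2-D sketch (x = horizontal position, y = height), with `thresh` standing in for the preset threshold; cutting at the winning line's mean height is a simplification of "deleting data below the horizontal plane where the first straight line is positioned":

```python
import numpy as np

def remove_below_ground_line(points_xy, n_iter=200, thresh=0.05, seed=0):
    # RANSAC-style sketch of claim 4: repeatedly pick two points, count
    # points within thresh of the line through them, keep the line with
    # the most inliers, then drop points below its level.
    pts = np.asarray(points_xy, dtype=float)
    rng = np.random.RandomState(seed)
    best_count, best_level = -1, None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        dx, dy = pts[j] - pts[i]
        norm = np.hypot(dx, dy)
        if norm == 0.0:
            continue
        # Perpendicular distance of every point to the line through i, j.
        err = np.abs(dx * (pts[:, 1] - pts[i, 1])
                     - dy * (pts[:, 0] - pts[i, 0])) / norm
        count = int((err < thresh).sum())
        if count > best_count:
            best_count = count
            best_level = float(pts[[i, j], 1].mean())
    return pts[pts[:, 1] >= best_level - thresh]

# Ground points along y = 0, a plant above, two stray points below ground.
ground = np.c_[np.linspace(0.0, 1.0, 40), np.zeros(40)]
plant = np.c_[np.linspace(0.4, 0.6, 10), np.linspace(0.2, 1.0, 10)]
noise = np.array([[0.2, -0.3], [0.8, -0.5]])
kept = remove_below_ground_line(np.vstack([ground, plant, noise]))
```

The dense, collinear ground points dominate the inlier count, so the winning line sits at ground level and the below-ground strays are discarded.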
5. The method for monitoring crop growth conditions based on image recognition according to claim 4, wherein the dividing the plant point cloud to obtain a first part point cloud and a second part point cloud comprises the following steps:
extracting a point cloud skeleton of the plant point cloud to obtain a first skeleton point set;
initializing a point cloud set: dividing the first skeleton point set into a plurality of first point clouds according to a raindrop path by adopting a raindrop algorithm, and combining the plurality of first point clouds into a first point cloud set, wherein the raindrop path is a path generated by raindrops randomly generated by the raindrop algorithm falling on the plant point cloud;
traversing the collection: selecting one of said first point clouds from said first point clouds;
updating the point cloud set: performing skeleton extraction on the first point cloud set to obtain a second skeleton point set, dividing the second skeleton point set into a plurality of second point cloud sets according to a raindrop path by adopting the raindrop algorithm, and updating the first point cloud set with the second point cloud sets;
judging the end condition: repeatedly executing the process of updating the point cloud set until the preset iteration times are reached, judging whether the data points in the first point cloud set are on the same straight line within a preset error range, if so, dividing the first point cloud set in the plant point cloud into a second part of point cloud, and dividing the rest part of the plant point cloud into the first part of point cloud; if not, judging whether the first point cloud set is traversed, and if not, repeatedly executing the processes of traversing the set, updating the point cloud set and judging the ending condition;
and filtering the second part of point cloud in the plant point cloud.
6. The crop growth condition monitoring method based on image recognition according to claim 5, wherein the single-leaf separation processing is performed on the leaf point cloud, specifically:
the spatial characteristics of each point in the blade point cloud comprise the spatial smoothness of the point and a plane normal vector of neighborhood fitting, and a first neighbor point set matrix of each point in the blade point cloud is calculated through an iterative principal component analysis method;
removing points with Euclidean distance larger than a first preset threshold value in the first neighbor point set matrix to obtain a second neighbor point set matrix, wherein the Euclidean distance is the distance from the points to a fitting plane;
calculating a covariance matrix of the second neighbor point set matrix, and updating the fitting plane according to the covariance matrix;
acquiring seed points in the blade point cloud, and establishing a small patch area with the same characteristics according to the seed points to obtain segmented small patch clusters;
and taking the small patches in the small patch clusters as units, carrying out region extension according to the adjacent relation of the patches and the coplanarity between the patches, splicing the plurality of patches after the region extension into a space structure, and dividing the space structure into single blades when the point covered by the space structure exceeds a preset second threshold value.
7. The method for monitoring crop growth conditions based on image recognition according to claim 6, wherein the leaf area index of the plant is obtained according to the actual occupation area and the total leaf area, specifically:
LAI = S_leaf / S_ground, wherein LAI represents the leaf area index, S_leaf represents the total leaf area, and S_ground represents the actual occupied area.
8. The image recognition-based crop growth monitoring method of any one of claims 1 to 7, wherein the preset interval is [ first average distance-first standard deviation, first average distance+first standard deviation ].
CN202310861829.XA 2023-07-14 2023-07-14 Crop growth condition monitoring method based on image recognition Active CN116580310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310861829.XA CN116580310B (en) 2023-07-14 2023-07-14 Crop growth condition monitoring method based on image recognition


Publications (2)

Publication Number Publication Date
CN116580310A true CN116580310A (en) 2023-08-11
CN116580310B CN116580310B (en) 2023-10-20

Family

ID=87540053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310861829.XA Active CN116580310B (en) 2023-07-14 2023-07-14 Crop growth condition monitoring method based on image recognition

Country Status (1)

Country Link
CN (1) CN116580310B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117678455A (en) * 2024-02-01 2024-03-12 山东华亮重工机械有限公司 Automatic planting equipment for container vegetables

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101979960A (en) * 2010-09-29 2011-02-23 南京信息工程大学 Laser and image-based leaf area measurement device
CN111666946A (en) * 2020-05-26 2020-09-15 东华大学 Plant point cloud single-blade segmentation method based on point cloud over-segmentation and surface patch growth
CN113538560A (en) * 2021-07-09 2021-10-22 电子科技大学 Leaf area index extraction method based on three-dimensional reconstruction
WO2023052055A1 (en) * 2021-09-30 2023-04-06 Robert Bosch Gmbh Vegetation monitoring device, vegetation monitoring system and vegetation monitoring method for monitoring vegetation health in a garden
CN115994939A (en) * 2022-10-26 2023-04-21 南京林业大学 Tree leaf area estimation method based on ground laser point cloud


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈玉青等: "基于Android 手机平台的冬小麦叶面积指数快速测量系统", 《农业机械学报》, vol. 48, pages 123 - 128 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117678455A (en) * 2024-02-01 2024-03-12 山东华亮重工机械有限公司 Automatic planting equipment for container vegetables
CN117678455B (en) * 2024-02-01 2024-04-23 山东华亮重工机械有限公司 Automatic planting equipment for container vegetables

Also Published As

Publication number Publication date
CN116580310B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN104766058B (en) A kind of method and apparatus for obtaining lane line
CN106529469B (en) Unmanned aerial vehicle-mounted LiDAR point cloud filtering method based on self-adaptive gradient
DE112018000605T5 (en) Information processing apparatus, data management apparatus, data management system, method and program
CN112418188A (en) Crop growth whole-course digital assessment method based on unmanned aerial vehicle vision
Zhou et al. An integrated skeleton extraction and pruning method for spatial recognition of maize seedlings in MGV and UAV remote images
CN116580310B (en) Crop growth condition monitoring method based on image recognition
CN112304902B (en) Real-time monitoring method and device for crop weather
CN111666855A (en) Unmanned aerial vehicle-based animal three-dimensional parameter extraction method and system and electronic equipment
CN110969654A (en) Corn high-throughput phenotype measurement method and device based on harvester and harvester
WO2022213218A1 (en) System and method for vegetation detection from aerial photogrammetric multispectral data
CN114119574A (en) Picking point detection model construction method and picking point positioning method based on machine vision
CN114548277B (en) Method and system for ground point fitting and crop height extraction based on point cloud data
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
Meyer et al. CherryPicker: Semantic skeletonization and topological reconstruction of cherry trees
CN112215714B (en) Unmanned aerial vehicle-based rice spike detection method and device
CN116843738A (en) Tree dumping risk assessment system and method based on TOF depth camera
CN113487636B (en) Laser radar-based automatic extraction method for plant height and row spacing of wide-ridge crops
Plowright Extracting trees in an urban environment using airborne LiDAR
CN117649409B (en) Automatic limiting system, method, device and medium for sliding table based on machine vision
CN115170981B (en) Automatic evergreen forest identification method based on cloud platform fusion of multi-source satellite images
CN111178264A (en) Estimation algorithm for tower footing attitude of iron tower in aerial image of unmanned aerial vehicle
CN117456364B (en) Grassland biomass estimation method and system based on SfM and grassland height factors
CN114283167B (en) Vision-based cleaning area detection method
CN114419439B (en) Wheat seedling monitoring method based on unmanned aerial vehicle remote sensing and deep learning
CN116309857A (en) Plant leaf inclination angle measurement method based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant