CN117036926A - Weed identification method integrating deep learning and image processing - Google Patents

Weed identification method integrating deep learning and image processing

Info

Publication number
CN117036926A
CN117036926A (application CN202310247564.4A)
Authority
CN
China
Prior art keywords
grid
image
weed
images
crops
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310247564.4A
Other languages
Chinese (zh)
Inventor
金小俊
陈勇
于佳琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Provincial Laboratory Of Weifang Modern Agriculture
Nanjing Forestry University
Original Assignee
Shandong Provincial Laboratory Of Weifang Modern Agriculture
Nanjing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Provincial Laboratory Of Weifang Modern Agriculture, Nanjing Forestry University filed Critical Shandong Provincial Laboratory Of Weifang Modern Agriculture
Priority to CN202310247564.4A
Publication of CN117036926A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a weed identification method that integrates deep learning and image processing, comprising the following steps: collecting a field image; dividing the field image into a plurality of grid images; identifying the grid images with a classification neural network model and marking each grid image as crop or background; then, using colour factors from image processing, performing image segmentation, area filtering and connected-region marking on the green pixels in both classes of grid image, counting the connected regions, and identifying the weed distribution in each grid image from the number of connected regions. The neural network model only needs to attend to whether a crop is present in the image, and the crop varieties are single or limited, so the recognition result is unaffected even by weed species that never appeared in the training set. The method therefore reduces the complexity of weed identification and the cost of building the training-set images, and improves the robustness and generalization of model recognition.

Description

Weed identification method integrating deep learning and image processing
Technical Field
The invention relates to the technical field of weed identification, in particular to a weed identification method integrating deep learning and image processing.
Background
Weeds compete with crops for water, nutrients and light, and harbour diseases and insect pests, reducing grain yield. More than 140 weed species are common in the field, in great variety. Weed-control methods include manual, chemical, biological and mechanical weeding. Manual weeding is labour-intensive and inefficient. Chemical weeding acts quickly, but overuse easily causes environmental pollution. Biological weeding is environmentally friendly but unsuited to sudden weed outbreaks. Mechanical weeding is widely regarded as a green, pollution-free approach, but if weeds cannot be identified accurately, crops are often damaged by mistake. At present, field weeding is still mainly manual. The shortage of rural labour and rising labour costs both raise planting costs and limit the development of the agricultural industry. Developing efficient intelligent weeding equipment is therefore imperative, and intelligent weeding requires accurate weed identification.
On large-scale bases, vegetable crops (such as beet, cabbage, tomato and pepper) are usually planted mechanically; the row and plant spacings of the seedlings are more regular, and larger, than in hand-planted vegetable fields, which also suits mechanical intelligent weeding. Meanwhile, the existing, traditional manual weeding method clearly cannot keep pace with large-scale planting.
With the development of machine learning, and deep learning in particular, convolutional neural networks have come into wide use. The deep learning models in common use fall into object detection networks and image classification networks. An object detection network can identify weeds and mark each with a bounding box, but the boxes vary in size while the unit working range of a weeding actuator is usually fixed, so the boxes cannot be used directly for precise weeding. Moreover, building an object detection training set is costly: many images of many weed species must be collected and each annotated with bounding boxes. An image classification network generally achieves higher recognition accuracy than an object detection network, but it can only tell whether an image contains weeds, not where the weeds are.
In view of the above problems, the present invention proposes a weed recognition method that fuses an image classification network (classification neural network model) and image processing.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a weed identification method integrating deep learning and image processing.
The invention is realized by the following technical scheme:
the invention provides a weed identification method integrating deep learning and image processing, which comprises the following steps:
s1, collecting field images;
s2, uniformly dividing the field image into a plurality of grid images;
s3, identifying the grid images through the trained classified neural network model, marking each grid image as a crop or a background, wherein crops exist in the grid images marked as crops, and crops do not exist in the grid images marked as the background;
s4, taking a grid image marked as a background and a grid image marked as a crop, respectively carrying out image segmentation, area filtering and connected region marking on green pixels (plants, crops/weeds) in the two grid images by using color factors in image processing, counting the number of connected regions of the two grid images, and identifying the distribution condition of weeds in the grid images by the number of connected regions. Because the positions of the grid images in the field image are known information, the area of the weeds in the field image can be determined after all the grid images containing the weeds are identified by using the classified neural network model.
Further, in step S4, for a grid image marked as background: if the number of connected regions is 0, the grid image is a soil area with neither crop nor weed; if the number of connected regions is N, with N > 0, the grid image contains N weeds.
Further, in step S4, for a grid image marked as crop: if the number of connected regions is 1, the grid image contains only the crop; if the number of connected regions is N, denoted R_1, …, R_N, with N > 1, the following processing is performed: in the grid image, only the pixels of connected region R_1 are kept and the pixels of all other connected regions are hidden (the transparency is set to 1, or the pixel value is set directly to 0 or 255); the grid image is then input to the classification neural network model for recognition (the connected-region positions are mapped back to the original grid image for pixel processing, and it is this pixel-processed original grid image, not the binary image produced by image processing, that is input to the model). If the result is crop, the pixels of R_1 are crop pixels; otherwise they are weed pixels. The same operation is repeated keeping only R_2, and so on up to R_N, each time inputting the grid image to the classification neural network model, so that the categories of R_2 to R_N are determined; by checking the connected regions one by one, all crops and weeds in the grid image are identified.
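The region-by-region check can be sketched as below, assuming regions are given as sets of pixel coordinates and `classify` stands in for the trained classification network; both of those representations, and the toy classifier in the example, are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of the per-region re-classification for a grid marked "crop":
# keep one connected region, hide all others, and re-classify the grid.
def identify_regions(grid, regions, classify, hide_value=0):
    """regions: list of pixel-coordinate sets R_1..R_N; returns one label per region."""
    labels = []
    for keep in regions:
        masked = [row[:] for row in grid]          # copy the grid image
        for other in regions:
            if other is keep:
                continue
            for (y, x) in other:                   # hide every other region
                masked[y][x] = hide_value
        # "crop" -> the kept region is crop pixels, otherwise weed pixels
        labels.append("crop" if classify(masked) == "crop" else "weed")
    return labels

# Toy stand-in classifier: calls the grid "crop" if any visible pixel == 2.
toy = [[2, 0, 1], [0, 0, 1]]
regions = [{(0, 0)}, {(0, 2), (1, 2)}]
result = identify_regions(
    toy, regions, lambda g: "crop" if any(2 in r for r in g) else "background"
)
# result == ["crop", "weed"]: the first region survives re-classification as
# crop, the second is labelled weed once the crop pixels are hidden.
```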
Further, in the step S2, in the divided grid images, the grid image including the crop is a positive sample, and the grid image not including the crop is a negative sample; the positive sample label is a crop, and the negative sample label is a background.
Further, a positive-sample grid image labelled as crop is an image containing a crop; that is, a grid image is labelled crop as long as it contains a crop. Two scenes are possible: only crops in the grid image, or both crops and weeds in the image.
Further, a negative-sample grid image labelled as background is an image containing no crop. Two scenes are possible: only weeds in the grid image, or neither crop nor weed (i.e. only soil or other background) in the image.
Further, the size of the grid image is consistent with the size of the neural network model training set image.
Further, the size of the grid image is set according to the working range of the weeding actuator, so that the working range of the weeding actuator covers each grid area.
Further, in step S4, an area threshold is set during area filtering to remove crop or weed pixels whose area in the grid image is too small, avoiding misjudgement by the classification neural network model during recognition. When the grid images are divided, it can easily happen that only a tiny part of a grid image contains a weed or crop; such a tiny target is very easily misrecognized by the neural network model, so the scene is optimized with an area threshold: crop or weed pixels below the specified area threshold are ignored and filtered out. What is filtered may be image noise or a small fragment of weed or crop; this does not affect the actual weed identification, because the bulk of a fragmented plant appears in another grid image and is recognized normally there.
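A minimal sketch of this area-filtering step, assuming connected regions are represented as sets of pixel coordinates (an illustrative assumption):

```python
# Sketch of area filtering: drop connected regions whose pixel count falls
# below a threshold, so noise and tiny plant fragments at grid borders do
# not reach the classifier.
def area_filter(regions, min_area):
    """Keep only connected regions with at least `min_area` pixels."""
    return [r for r in regions if len(r) >= min_area]

regions = [{(0, 0)}, {(1, 1), (1, 2), (2, 1), (2, 2)}]
kept = area_filter(regions, min_area=3)  # the single-pixel fragment is removed
```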
The invention has the beneficial effects that:
the identification targets of the neural network model are divided into positive samples (crops) and negative samples (non-crops and backgrounds), and the non-crop images are weed or soil background images. The neural network model only needs to search whether crops exist in the image or not during recognition, and the images are considered to be background images when no crops exist. Because of the variety of weeds, the neural network model only needs to pay attention to whether crops exist in the images or not, and the variety of the crops is single or limited, so that the recognition result is not influenced even if the variety of the weeds which do not appear in the training set is encountered. Therefore, the method can effectively reduce the complexity of weed identification and the cost of training set image construction, and can improve the robustness and generalization capability of model identification.
Drawings
FIG. 1 is a schematic diagram of a field image divided into grid images according to an embodiment of the present invention;
FIG. 2 is a diagram showing weed recognition in grid image No. 2 in the embodiment of the present invention;
FIG. 3 is a diagram showing weed recognition in grid image No. 6 in the embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
The embodiment of the invention provides a weed identification method integrating deep learning and image processing, which specifically comprises the following steps:
step one, training a classified neural network model
The original images containing crops or weeds are uniformly divided into grid images. The resulting grid images are classified manually: those containing crops are positive samples, and the rest are negative samples. The positive and negative samples are used to train the classification neural network model.
The positive-sample grid images contain 2 kinds of scenes:
1) only crops in the image;
2) both crops and weeds in the image.
The negative-sample grid images contain 2 kinds of scenes:
1) only weeds in the image;
2) neither weeds nor crops in the image (i.e. only soil or other background).
When the classification neural network model is trained, the positive-sample label is crop and the negative-sample label is background. The model is trained until the network converges or the maximum number of iterations is reached, and the best network model is saved.
Step two, collecting field images
The field image of the area to be treated is acquired by a camera or by the vision system of the intelligent weeding equipment.
Step three, uniformly dividing the field image into a plurality of grid images
The field image is uniformly divided into a plurality of grid images whose size matches the training-set image size of the classification neural network model. Referring to fig. 1, the field image is divided into 12 grid images.
As shown in fig. 1, the divided grid images fall into the following scenes:
Scene 1: the grid image contains neither crop nor weed, i.e. only soil (grid image No. 8);
Scene 2: the grid image contains only a single crop target (grid images No. 1, 5, 7, 9, 11);
Scene 3: the grid image contains a plurality of crop targets (grid image No. 2);
Scene 4: the grid image contains only a single weed target (grid image No. 12);
Scene 5: the grid image contains a plurality of weed targets (grid image No. 4);
Scene 6: the grid image contains both weed and crop targets (grid images No. 3, 6, 10).
Step four, identifying grid images
The 12 grid images are identified with the trained classification neural network model, and each grid image is marked as crop or background. A grid image marked as crop is one that contains a crop; that is, a grid image is considered crop as long as it contains a crop. A grid image marked as background is one that contains no crop.
In this embodiment the 12 grid images are input to the classification neural network model one by one. Scenes 2, 3 and 6, which contain crops, i.e. grid images No. 1, 2, 3, 5, 6, 7, 9, 10 and 11, are positive samples and are recognized as crop; scenes 1, 4 and 5, which contain no crops, i.e. grid images No. 4, 8 and 12, are negative samples and are recognized as background.
Step five, weed identification
(I) For the grid images marked as background (grid images No. 4, 8 and 12), each grid image is processed with a colour factor from image processing (e.g. the excess-green factor ExG = 2G − R − B); after segmentation with an automatic thresholding method (e.g. the OTSU algorithm) a binary image is obtained, noise pixels and the pixels of very small targets are removed by area filtering, and finally the connected regions are marked and counted.
If the number of connected regions is 0, the grid image is soil (grid No. 8); if the number of connected regions is greater than 0, the grid image is a weed grid (grids No. 4 and 12), and the number of connected regions equals the number of weeds (grid No. 12 contains 1 weed, grid No. 4 contains 2 weeds).
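The chain just described (excess-green segmentation, thresholding, connected-region counting) can be sketched in a few lines. This is a hedged illustration, not the patent's code: a fixed threshold stands in for OTSU, 4-connectivity is assumed, and all names and values are hypothetical.

```python
# Sketch of step five (I): ExG = 2G - R - B segmentation of a grid image,
# then counting connected green regions by flood fill (4-connectivity).
def count_green_regions(pixels, thresh=20):
    """pixels: 2-D list of (R, G, B) tuples; returns the number of green regions."""
    h, w = len(pixels), len(pixels[0])
    green = [[(2 * g - r - b) > thresh for (r, g, b) in row] for row in pixels]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if green[y][x] and not seen[y][x]:
                count += 1                       # new connected region found
                stack = [(y, x)]
                seen[y][x] = True
                while stack:                     # flood fill its pixels
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and green[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

# Two separated green blobs on a soil-coloured background of a background
# grid -> 2 connected regions, i.e. 2 weeds (as for grid No. 4).
soil, leaf = (120, 100, 80), (40, 160, 40)
grid = [[leaf, soil, soil], [soil, soil, leaf], [soil, soil, leaf]]
n = count_green_regions(grid)   # n == 2
```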
(II) For the grid images marked as crop (grid images No. 1, 2, 3, 5, 6, 7, 9, 10 and 11), the same processing is applied: segmentation with a colour factor (e.g. the excess-green factor ExG = 2G − R − B) and an automatic thresholding method (e.g. the OTSU algorithm) to obtain a binary image, area filtering to remove noise pixels and very small targets, then marking and counting the connected regions.
If the number of connected regions is 1, the grid image is a crop image (containing exactly 1 crop, as in grids No. 1, 5, 7, 9 and 11); if the number of connected regions is greater than 1, the following processing is performed:
grid No. 2:
referring to fig. 2, the number of connected areas is 2, firstly, the 1 st connected area is mapped to the original grid image, the pixel transparency of the area is set to be 1 or the pixel value is set to be 0 or 255, then the grid image is input into the neural network model for recognition, the recognition result is crop, and the 2 nd connected area is indicated as crop. And similarly, mapping the 2 nd communication area to the original grid image, setting the pixel transparency of the area to be 1 or setting the pixel value to be 0 or 255, and inputting the grid image into a neural network model for identification, wherein the identification result is also crop, and the 1 st communication area is also indicated as crop. Thus, it can be deduced that the 2 connected areas are crops in the No. 2 grid image.
Grid No. 3:
The number of connected regions is 2. First the 1st connected region is mapped onto the original grid image, its pixels are hidden (transparency set to 1, or pixel value set to 0 or 255), and the grid image is input to the neural network model; the result is background, showing that the 2nd connected region is a weed. Likewise, the 2nd connected region is mapped onto the original grid image and hidden, and the grid image is input to the model; the result is crop, showing that the 1st connected region is a crop. It follows that the 2 connected regions of grid image No. 3 are one crop and one weed.
Grid No. 6:
referring to fig. 3, the number of connected areas is 4, the 1 st connected area is reserved first, the rest connected areas are mapped to the original mesh image, the transparency is set to 1 or the pixel value is set to 0 or 255, then the mesh image is input into the neural network model for recognition, the recognition result is the background, and the 1 st connected area is indicated as weed. Similarly, the 2 nd connected region is reserved, the rest connected regions are mapped to the original grid image, the transparency is set to be 1 or the pixel value is set to be 0 or 255, then the grid image is input into the neural network model for recognition, the recognition result is a crop, and the 2 nd connected region is indicated as the crop. All the communicating areas are treated sequentially according to the method, so that the categories of all the communicating areas can be identified (2 communicating areas are weeds and 2 communicating areas are crops).
Grid No. 10:
The number of connected regions is 3. First the 1st connected region is kept, the remaining regions are mapped onto the original grid image and hidden (transparency set to 1, or pixel value set to 0 or 255), and the grid image is input to the neural network model; the result is crop, showing that the 1st connected region is a crop. Likewise, the 2nd connected region is kept, the rest are hidden, and the grid image is input to the model; the result is background, showing that the 2nd connected region is a weed. The 3rd connected region is processed in the same way, identifying the category of every connected region (2 are crops and 1 is a weed).
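The decision rules of step five, parts (I) and (II), can be summarized in one small function: the classifier label for a grid plus its connected-region count determines the outcome. The function and the return strings are illustrative assumptions.

```python
# Summary sketch of the step-five decision rules: map (classifier label,
# number of connected regions) to the interpretation of a grid image.
def interpret_grid(label, n_regions):
    if label == "background":
        # 0 regions -> bare soil; N > 0 regions -> N weeds
        return "soil" if n_regions == 0 else f"{n_regions} weed(s)"
    # label == "crop": one region is the crop itself; several regions
    # trigger the per-region re-classification of the crop grid.
    return "single crop" if n_regions == 1 else "mixed: re-check each region"

cases = [("background", 0), ("background", 2), ("crop", 1), ("crop", 4)]
results = [interpret_grid(label, n) for label, n in cases]
# e.g. grid No. 8 -> "soil", grid No. 4 -> "2 weed(s)",
#      grid No. 5 -> "single crop", grid No. 6 -> "mixed: re-check each region"
```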
Thus, with the disclosed method all grid images containing weeds can be identified; and because the position of each grid image within the field image is known, the weed areas in the field image are determined once the weed-containing grid images have been identified with the classification neural network model.
The foregoing is merely a preferred embodiment of the present invention, and the scope of the present invention is not limited to the above embodiments, but all technical solutions falling under the concept of the present invention fall within the scope of the present invention, and it should be noted that, for those skilled in the art, several modifications and adaptations without departing from the principles of the present invention should and are intended to be regarded as the scope of the present invention.

Claims (8)

1. The weed identification method integrating deep learning and image processing is characterized by comprising the following steps of:
s1, collecting field images;
s2, uniformly dividing the field image into a plurality of grid images;
s3, identifying the grid images through the trained classified neural network model, marking each grid image as a crop or a background, wherein crops exist in the grid images marked as crops, and crops do not exist in the grid images marked as the background;
s4, taking a grid image marked as a background and a grid image marked as a crop, respectively carrying out image segmentation, area filtering and connected region marking on green pixels in the two grid images by using color factors in image processing, counting the number of connected regions of the two grid images, and identifying the weed distribution situation in the grid images through the number of the connected regions.
2. The weed recognition method combining deep learning and image processing according to claim 1, wherein,
in S4, for the grid image marked as background,
if the number of connected regions is 0, the grid image is a soil area with neither crop nor weed;
if the number of connected regions is N, with N > 0, the grid image contains N weeds.
3. The weed recognition method combining deep learning and image processing according to claim 1, wherein,
in S4, for the grid image marked as crop,
if the number of connected regions is 1, the grid image contains only the crop;
if the number of connected regions is N, denoted R_1, …, R_N, with N > 1, the following processing is performed:
in the grid image, only the pixels of connected region R_1 are kept and the pixels of all other connected regions are hidden; the grid image is input to the classification neural network model for recognition; if the result is crop, the pixels of R_1 are crop pixels, otherwise weed pixels; the same operation is applied to R_2, and so on up to R_N, so that by checking the connected regions one by one, all crops and weeds in the grid image are identified.
4. The weed recognition method combining deep learning and image processing according to claim 1, wherein,
in the step S2, in the divided grid images, the grid images containing the crops are positive samples, and the grid images not containing the crops are negative samples; the positive sample label is a crop, and the negative sample label is a background.
5. The weed recognition method combining deep learning and image processing according to claim 4, wherein,
the positive-sample grid images include 2 scenes: only crops in the grid image, or both crops and weeds in the grid image;
the negative-sample grid images include 2 scenes: only weeds in the grid image, or neither crop nor weed in the grid image.
6. The weed recognition method combining deep learning and image processing according to claim 1, wherein,
the size of the grid image is consistent with the size of the neural network model training set image.
7. The weed recognition method combining deep learning and image processing according to claim 1, wherein,
the size of the grid image is consistent with the operation range size of the weeding actuator.
8. The weed recognition method combining deep learning and image processing according to claim 1, wherein,
in the step S4, when area filtering is performed, an area threshold is set to filter out crop or weed pixels smaller than the area threshold in the grid image.
CN202310247564.4A 2023-03-15 2023-03-15 Weed identification method integrating deep learning and image processing Pending CN117036926A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310247564.4A CN117036926A (en) 2023-03-15 2023-03-15 Weed identification method integrating deep learning and image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310247564.4A CN117036926A (en) 2023-03-15 2023-03-15 Weed identification method integrating deep learning and image processing

Publications (1)

Publication Number Publication Date
CN117036926A true CN117036926A (en) 2023-11-10

Family

ID=88634189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310247564.4A Pending CN117036926A (en) 2023-03-15 2023-03-15 Weed identification method integrating deep learning and image processing

Country Status (1)

Country Link
CN (1) CN117036926A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557915A (en) * 2024-01-09 2024-02-13 中化现代农业有限公司 Crop variety identification method, device, electronic equipment and storage medium
CN117557915B (en) * 2024-01-09 2024-04-19 中化现代农业有限公司 Crop variety identification method, device, electronic equipment and storage medium
CN117853930A (en) * 2024-01-25 2024-04-09 湖南省第二测绘院 Machine learning-based field inspection method and system

Similar Documents

Publication Publication Date Title
CN108009542B (en) Weed image segmentation method in rape field environment
CN108647652B (en) Cotton development period automatic identification method based on image classification and target detection
Tian et al. Machine vision identification of tomato seedlings for automated weed control
CN100416590C (en) Method for automatically identifying field weeds in crop seeding-stage using site and grain characteristic
CN117036926A (en) Weed identification method integrating deep learning and image processing
CN114818909B (en) Weed detection method and device based on crop growth characteristics
CN111914914A (en) Method, device, equipment and storage medium for identifying plant diseases and insect pests
Alejandrino et al. Visual classification of lettuce growth stage based on morphological attributes using unsupervised machine learning models
CN111753646A (en) Agricultural pest detection and classification method fusing population season growth and elimination information
CN103530643A (en) Pesticide positioned spraying method and system on basis of crop interline automatic identification technology
CN114239756B (en) Insect pest detection method and system
Selvi et al. Weed detection in agricultural fields using deep learning process
CN111727457A (en) Cotton crop row detection method and device based on computer vision and storage medium
CN113469112A (en) Crop growth condition image identification method and system
CN113011221A (en) Crop distribution information acquisition method and device and measurement system
CN108629289A (en) The recognition methods in farmland and system, applied to the unmanned plane of agricultural
He et al. Visual detection of rice rows based on Bayesian decision theory and robust regression least squares method
CN111523457B (en) Weed identification method and weed treatment equipment
CN117876823B (en) Tea garden image detection method and model training method and system thereof
CN111967441A (en) Crop disease analysis method based on deep learning
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
CN113377062B (en) Multifunctional early warning system with disease and pest damage and drought monitoring functions
CN116453003B (en) Method and system for intelligently identifying rice growth vigor based on unmanned aerial vehicle monitoring
CN117933558A (en) Campus intelligent agricultural planting management method
CN117456523A (en) Crop type identification method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination