CN117392627A - Corn row line extraction and plant missing position detection method

Corn row line extraction and plant missing position detection method

Info

Publication number: CN117392627A
Application number: CN202311312815.9A
Authority: CN (China)
Prior art keywords: corn, image, row, line, crop
Filing date: 2023-10-11
Publication date: 2024-01-12
Other languages: Chinese (zh)
Inventors: 王庆杰, 张馨悦, 王超, 康可新, 卢彩云, 何进, 李洪文, 贾麟
Current Assignee: Heilongjiang Beike Ruisi Modern Agricultural Technology Co ltd; China Agricultural University (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Heilongjiang Beike Ruisi Modern Agricultural Technology Co ltd; China Agricultural University
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed. The priority date is likewise an assumption and is not a legal conclusion.)
Application filed by Heilongjiang Beike Ruisi Modern Agricultural Technology Co ltd and China Agricultural University
Priority to CN202311312815.9A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention belongs to the technical field of intelligent agriculture and specifically relates to a method for corn row line extraction and missing-plant position detection. The method comprises the following steps: S1: collecting images; S2: image labeling and augmentation; S3: building and training the network; S4: corn crop row segmentation; S5: fitting the boundary lines and translating them outward by 30 pixels; S6: scanning lines from top to bottom and selecting points; S7: adaptive perspective transformation; S8: fitting minimum bounding rectangles and extracting row lines. The disclosed method can effectively cope with complex backgrounds and missing plants, separate clear and continuous corn crop rows, accurately detect missing-plant positions in the field, and provide reliable basic data for subsequent agricultural operations and analysis.

Description

Corn row line extraction and plant missing position detection method
Technical Field
The invention belongs to the technical field of intelligent agriculture and specifically relates to a method for corn row line extraction and missing-plant position detection.
Background
In the agricultural field, corn is an important grain crop with a wide planting area. In corn planting, row line extraction and detection of missing-plant positions are key tasks. Row line extraction supports inter-row operations such as fertilization and irrigation, while detection of missing-plant positions assists crop growth monitoring and guides fertilizer and irrigation management, enabling targeted fertilization and irrigation that save agrochemicals and improve corn yield. However, accurate row line extraction and missing-plant detection have long been challenging because of the complexity of the field environment and the distortion of corn row lines in images captured by a monocular camera.
At present, deep learning has made remarkable progress in computer vision and is widely applied to row line extraction and object detection tasks. Unet is a commonly used deep learning network structure that can effectively extract image features and generate pixel-level segmentation results. However, using the Unet network for corn row line and missing-plant position extraction raises several problems. First, the traditional Unet structure is not accurate enough under complex background interference; complex and changeable field environments mainly involve weeds, variable illumination, and missing plants within crop rows. Second, the traditional Unet structure cannot handle the row line deformation caused by monocular camera imaging in the field. In addition, the Unet network has difficulty handling row line extraction and missing-plant position detection at the same time.
Disclosure of Invention
The invention discloses a corn row line extraction and plant missing position detection method that improves the traditional Unet network structure. First, by introducing more residual connections and attention mechanisms, the feature extraction and context information transfer capabilities of the network are enhanced, so complex backgrounds and missing plants are handled better and the segmentation accuracy of corn crop rows is improved. Second, adaptive perspective transformation is introduced to solve the problem that parallel corn row lines converge to a vanishing point at the top of the image because of monocular camera imaging.
A corn row line extraction and plant missing position detection method is characterized by comprising the following steps:
S1: collecting images;
S2: image labeling and augmentation;
S3: building and training the network;
S4: corn crop row segmentation;
S5: fitting the boundary lines and translating them outward by 30 pixels;
S6: scanning lines from top to bottom and selecting points;
S7: adaptive perspective transformation;
S8: fitting minimum bounding rectangles and extracting row lines.
Preferably, S1 is specifically: a CMOS machine vision camera collects a video stream of the corn crop rows in front of the machine; the camera is mounted 1.5 m above the ground at a tilt angle of 30°, covering corn seedlings within roughly 8 m in front of the vehicle. The acquired image size is 1920 pixels, the video frame rate is 12 frames/s, and the video format is AVI.
Preferably, S2 is specifically: corn field images are obtained frame by frame, and 1008 corn crop row pictures taken in complex environments are selected in total; the four clear middle crop rows are labeled, and brightness changes, noise addition, and rotation are applied to the images, finally establishing an image segmentation dataset of 3024 corn crop row images; the ratio of training set to test set is set to 9:1.
Preferably, S3 is specifically: on the basis of Unet, a ResNet-50 convolutional neural network performs downsampling; the input image is cropped to 512 × 512, and features are extracted by the ResNet-50 network to obtain 5 effective feature layers, with multi-layer convolution, pooling, and residual networks implementing the encoding of the corn crop row image. In the enhanced feature layer part, the reduced feature map is restored to the input image size by upsampling, and the channels of the upsampled and downsampled feature layers are concatenated to complete feature fusion. The upsampled output is a per-pixel classification at the same resolution as the input image, and in the last convolution the channels are adjusted to the number of classes, i.e., corn crop rows and background.
Preferably, S4 is specifically: because in the semantically segmented image the target crop rows are white and the background is black, a point where the pixel value changes is a possible edge point; the first and last pixel-value change points are selected by a constructed condition, yielding the pixels of the outer edge.
Preferably, S5 is specifically: the edge line is fitted by least squares, which obtains the optimal solution by minimizing the sum of squared residuals, i.e., the line is fitted by minimizing the sum of squared distances from the data points to the fitted line. A straight line y = kx + b is fitted to the extracted n edge point coordinates (x1, y1), (x2, y2), …, (xn, yn), where
k = (nΣxiyi - ΣxiΣyi) / (nΣxi² - (Σxi)²), b = (Σyi - kΣxi) / n;
the normal vector is then calculated from k and b, and the line is translated by 30 pixels along the normal vector to obtain a new line.
Preferably, S6 is specifically: because crop rows converge toward the far end of the image when shot by the camera and merge into one point, the "vanishing point", points near the top and near the bottom are ignored during point selection; the rows at 1/100 and 80/100 of the image height are taken, and adaptive perspective transformation is then performed from the four known points A, B, C and D thus obtained.
Preferably, S7 is specifically: a perspective transformation matrix is calculated from the positions of the four points A, B, C and D, and perspective transformation is applied to the original image to obtain the transformed image.
Preferably, S8 is specifically: rectangle fitting is performed on the contour to obtain the center point (x, y), width, height, and rotation angle of the minimum rectangle, so the rectangle's length and width determine the corn crop row width; the fitted rotated rectangle is drawn from its 4 vertex coordinates, giving the minimum fitted rectangle of the crop row contour; on the basis of this rectangular fitting frame, the upper and lower coordinate points of the center line are obtained from the four vertex coordinates, yielding the crop row line.
The technical scheme of the invention has the following beneficial effects:
By improving the Unet network structure, the feature extraction and context information transfer capabilities of the network are enhanced, and the accuracy of corn row line extraction is improved. The method can effectively cope with complex backgrounds and missing plants, separate clear and continuous corn crop rows, accurately detect missing-plant positions in the field, and provide reliable basic data for subsequent agricultural operations and analysis. Adaptive perspective transformation is applied to the image to eliminate the vanishing-point problem caused by the monocular camera. By accurately extracting corn row lines and detecting missing-plant positions, key navigation and operation information can be provided for automatic inter-row fertilization and spraying. Agricultural machinery can perform row-aligned operations based on the row line and missing-plant position data, improving the automation level and operating efficiency of the machinery and reducing the need for manual intervention.
Drawings
FIG. 1 is a schematic flow chart of the corn row line extraction and plant missing position detection method;
FIG. 2 is a labeled image of corn crop rows according to the present invention;
FIG. 3 is a diagram of the improved Unet network architecture of the present invention;
FIG. 4 is a diagram of the residual block structure in the improved Unet network of the present invention;
FIG. 5 shows the crop row boundary line fitting and the outward translation of the boundary line by 30 pixels according to the present invention;
FIG. 6 shows the result of top-to-bottom line scanning and point selection according to the present invention;
FIG. 7 shows the adaptive perspective transformation of the present invention;
FIG. 8 shows the result of minimum rectangular frame fitting and row line extraction of the present invention.
Detailed Description
The invention provides a corn row line extraction and plant missing position detection method that can effectively segment corn crop rows from corn field images and apply adaptive perspective transformation to eliminate image distortion, thereby accurately fitting corn row lines and detecting missing-plant positions. As shown in FIG. 1, the method comprises 8 steps:
step S1: a CMOS machine vision camera is adopted to shoot corn field image videos, and in order to ensure the definition and resolution of images, the vertical distance between the instrument and the ground and the inclination angle of 30 degrees are kept as much as possible.
Step S2: corn field images are acquired from the video and screened, and the sample images are then manually labeled and augmented to construct a training set and a test set. The screening criteria are: the image contains at least four complete corn crop rows and is clear and free of blur. The manual labeling method is: the middle four corn crop rows in each image are annotated with the open source software labelme to form a binarized mask image used as the label, as shown in FIG. 2. The data augmentation method is: each image undergoes operations such as random rotation, cropping, scaling, flipping, and color transformation to increase the diversity and robustness of the data. The training and test sets are constructed by randomly dividing all images and their corresponding labels in a 9:1 ratio.
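By way of a non-limiting illustration of this augmentation step (the patent publishes no source code; the function name, parameter ranges, and noise level below are assumptions), paired image/mask augmentation could be sketched in Python as:

```python
# Illustrative sketch: random rotation, flip, brightness change and Gaussian
# noise applied to an image and its binary mask so labels stay aligned.
import cv2
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    h, w = image.shape[:2]
    # Random rotation about the image center; the mask gets the same warp.
    angle = rng.uniform(-15, 15)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))
    mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
    # Random horizontal flip.
    if rng.random() < 0.5:
        image, mask = cv2.flip(image, 1), cv2.flip(mask, 1)
    # Brightness/contrast change (alpha = gain, beta = offset).
    image = cv2.convertScaleAbs(image, alpha=rng.uniform(0.8, 1.2),
                                beta=rng.uniform(-20, 20))
    # Additive Gaussian noise, clipped back to the valid 8-bit range.
    noise = rng.normal(0, 8, image.shape)
    image = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return image, mask
```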
Step S3: an improved Unet semantic segmentation network model is built and trained with the training set. The improved Unet model is based on the original Unet model, with modules such as attention mechanisms and residual connections added to improve the expressive capacity and learning efficiency of the network, as shown in FIG. 3.
Specifically, the improved Unet network model includes the following components:
An encoder section: the residual network ResNet-50 serves as the encoding part of the Unet. ResNet-50 consists of two basic residual blocks, the Conv Block and the Identity Block, as shown in FIG. 4; the Conv Block changes the network dimensions while the Identity Block does not. The residual structure predicts by learning deeper feature maps and gives good results on the binary segmentation problem.
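As a non-limiting sketch of these two residual blocks (identifiers and channel choices are assumptions, not the patent's code), a ResNet-50 bottleneck that acts as a Conv Block when the dimensions change and as an Identity Block otherwise might be written in PyTorch as:

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Conv Block when stride != 1 or in_ch != out_ch (projection shortcut
    changes the network dimensions); Identity Block otherwise."""
    def __init__(self, in_ch: int, mid_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Projection shortcut only when spatial size or channel count changes.
        self.shortcut = (nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
            nn.BatchNorm2d(out_ch))
            if stride != 1 or in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + self.shortcut(x))
```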
Attention mechanism: to improve the feature expression capability of the network model, attention mechanisms are introduced before the first layer and after the last convolution of the encoder part; a CBAM module, which combines channel and spatial attention, is adopted to reduce the classification error rate of the network. The attention mechanism acts through weighting: CBAM makes the model attend more to corn crop row pixels in both the channel and spatial dimensions and assigns them larger weight coefficients in the network.
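A minimal CBAM sketch consistent with this description follows (the reduction ratio and 7×7 spatial kernel are common defaults assumed here, not values stated in the patent):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global average- and max-pooled maps.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                      # channel weighting
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # spatial weighting
```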
A decoder section: the reduced feature map is restored to the input image size by deconvolution and unpooling, and the channels of the upsampled and downsampled feature layers are concatenated to complete feature fusion. The upsampled output is a per-pixel classification at the same resolution as the input image, and in the last convolution the channels are adjusted to the number of classes, i.e., corn crop rows and background.
The improved Unet network model is trained with the training set as follows: the images in the training set and their corresponding labels are input into the network model, a cross-entropy loss function is used as the optimization target, the Adam optimizer is used as the optimization algorithm, suitable hyperparameters such as learning rate, batch size, and number of iterations are set, and the parameters of the network model are updated continuously until the loss function converges or a preset stopping condition is reached.
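A minimal training-loop sketch under these assumptions (the patent names only the cross-entropy loss and the Adam optimizer; the learning rate, epoch count, and all identifiers are illustrative):

```python
import torch
import torch.nn as nn

def train(model, loader, epochs: int = 100, lr: float = 1e-4, device: str = "cuda"):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()              # optimization target
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        running = 0.0
        for images, labels in loader:              # labels: (N, H, W) class ids
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)  # logits: (N, 2, H, W)
            loss.backward()
            optimizer.step()
            running += loss.item()
        print(f"epoch {epoch}: loss {running / len(loader):.4f}")
```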
Step S4: the model parameters obtained by training are used as the prediction model to perform corn crop row image segmentation. Specifically: the images in the test set are input into the prediction model to obtain output images, which are compared against the corresponding label images to calculate evaluation metrics such as segmentation precision, recall, and F1 score and thereby assess model performance.
Step S5: the outer boundary lines are fitted and shifted outward by 30 pixels. The outer edge pixels are obtained as follows: the corn row image obtained after segmentation is a two-class image in which the target crop rows are white and the background is black, so a point where the pixel value changes is a possible edge point, and the first and last pixel-value change points are selected by a constructed condition, yielding the outer edge pixels. For the pixels on each outer edge, a straight-line equation is fitted by least squares as the boundary line; if the fitted line deviates substantially from the original edge, the fitting parameters are adjusted. For each boundary line equation, the normal vector is calculated from its slope and intercept, and the line is translated 30 pixels along the normal direction, giving the new line shown in FIG. 5. The purpose is to expand the field of view of the corn field image and eliminate the influence of edge noise.
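As a non-limiting sketch of this step (all identifiers are assumptions), note that translating the line y = kx + b by a perpendicular distance d amounts to changing its intercept by d·sqrt(1 + k²), because the distance between the parallel lines y = kx + b1 and y = kx + b2 is |b1 - b2| / sqrt(1 + k²):

```python
# Scan each image row of the binary mask for the first white pixel, fit
# y = kx + b by least squares, then shift the line 30 px along its normal.
import numpy as np

def fit_and_shift(mask: np.ndarray, dist: float = 30.0):
    xs, ys = [], []
    for row in range(mask.shape[0]):
        cols = np.flatnonzero(mask[row] == 255)    # white = crop row pixels
        if cols.size:                              # first change point per row
            xs.append(cols[0]); ys.append(row)
    k, b = np.polyfit(xs, ys, 1)                   # least-squares line fit
    b_out = b - dist * np.hypot(k, 1.0)            # sign picks the outward side
    return k, b, b_out
```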
Step S6: scanning lines from top to bottom: every ten pixels from the top to the bottom of the image, one row of pixels is taken, and the coordinates of the white pixels (pixel value 255) are recorded. Points are then selected for adaptive perspective transformation. Specifically: because crop rows converge toward the far end of the image when shot by the camera and merge into one point, the "vanishing point", points near the top and near the bottom are ignored during point selection; the rows at 1/100 and 80/100 of the image height are taken, giving four known points A, B, C and D suitable for perspective transformation, as shown in FIG. 6.
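A minimal sketch of this point selection, assuming for illustration that the outermost white pixels on the two chosen scan rows serve as the four points:

```python
import numpy as np

def pick_points(mask: np.ndarray):
    h = mask.shape[0]
    points = {}
    for row in (h // 100, h * 80 // 100):          # near-top and near-bottom rows
        cols = np.flatnonzero(mask[row] == 255)
        if cols.size:
            points[row] = (cols[0], cols[-1])      # leftmost / rightmost white pixel
    if len(points) != 2:
        raise ValueError("a scan row contains no crop pixels")
    (top, (xa, xb)), (bot, (xc, xd)) = sorted(points.items())
    # A, B on the upper scan row; C, D on the lower scan row.
    return (xa, top), (xb, top), (xc, bot), (xd, bot)
```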
Step S7: a perspective transformation matrix is calculated from the positions of the four points, and perspective transformation is applied to the original image to obtain the transformed image. The purpose is to eliminate the perspective distortion of the image so that the corn row lines appear as parallel straight lines in the transformed image. The perspective-transformed corn crop row image is shown in FIG. 7.
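With OpenCV this step reduces to a homography between the four source points and a destination rectangle; the destination corners below are an assumed choice, not specified by the patent:

```python
import cv2
import numpy as np

def rectify(image: np.ndarray, A, B, C, D):
    h, w = image.shape[:2]
    src = np.float32([A, B, C, D])       # top-left, top-right, bottom-left, bottom-right
    dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
    M = cv2.getPerspectiveTransform(src, dst)      # 3x3 homography
    return cv2.warpPerspective(image, M, (w, h))
```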
Step S8: minimum bounding rectangles are fitted to obtain the corn crop row widths, and the corn row lines are extracted from the coordinate information. Specifically: for each transformed image, a connected component analysis algorithm finds all connected components in the image, and those with an area larger than a threshold (1000 pixels) are screened out as candidate corn crop row regions. Next, for each candidate region, a minimum-area rectangle is fitted as the bounding box of the corn crop row, and its width and height are taken as the width and length of the crop row. Then, from the position and orientation of the bounding box, the coordinate information of the corn crop row in the image is determined and the corn row line is extracted. Regions with an area smaller than the threshold are defined as missing-plant regions. The result is shown in FIG. 8.
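A non-limiting sketch of this step using OpenCV contours and cv2.minAreaRect (the 1000-pixel threshold follows the description; all identifiers are assumptions):

```python
import cv2
import numpy as np

def extract_rows(mask: np.ndarray, min_area: int = 1000):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rows, gaps = [], []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            gaps.append(cv2.boundingRect(cnt))     # candidate missing-plant region
            continue
        (cx, cy), (rw, rh), angle = cv2.minAreaRect(cnt)
        box = cv2.boxPoints(((cx, cy), (rw, rh), angle)).astype(int)
        # Center line: midpoints of two opposite sides of the rotated rectangle.
        top_mid = (box[0] + box[1]) // 2
        bot_mid = (box[2] + box[3]) // 2
        rows.append((tuple(top_mid), tuple(bot_mid), min(rw, rh)))  # row width
    return rows, gaps
```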
The method can effectively extract corn row lines from corn field images, detect the positions of missing plants, and provide useful information and reference for corn field management.
The above description is only a preferred embodiment of the present invention and should not limit the scope of the claims; those skilled in the art will understand that all or part of the above embodiments may be implemented, and equivalent modifications thereof fall within the scope of the present invention.

Claims (9)

1. A corn row line extraction and plant missing position detection method is characterized by comprising the following steps:
S1: collecting images;
S2: image labeling and augmentation;
S3: building and training the network;
S4: corn crop row segmentation;
S5: fitting the boundary lines and translating them outward by 30 pixels;
S6: scanning lines from top to bottom and selecting points;
S7: adaptive perspective transformation;
S8: fitting minimum bounding rectangles and extracting row lines.
2. The corn row line extraction and plant missing position detection method according to claim 1, wherein S1 is specifically: a CMOS machine vision camera collects a video stream of the corn crop rows in front of the machine; the camera is mounted 1.5 m above the ground at a tilt angle of 30°, covering corn seedlings within roughly 8 m in front of the vehicle; the acquired image size is 1920 pixels, the video frame rate is 12 frames/s, and the video format is AVI.
3. The corn row line extraction and plant missing position detection method according to claim 1, wherein S2 is specifically: corn field images are obtained frame by frame, and 1008 corn crop row pictures taken in complex environments are selected in total; the four clear middle crop rows are labeled, and brightness changes, noise addition, and rotation are applied to the images, finally establishing an image segmentation dataset of 3024 corn crop row images; the ratio of training set to test set is set to 9:1.
4. The corn row line extraction and plant missing position detection method according to claim 1, wherein S3 is specifically: on the basis of Unet, a ResNet-50 convolutional neural network performs downsampling; the input image is cropped to 512 × 512, and features are extracted by the ResNet-50 network to obtain 5 effective feature layers, with multi-layer convolution, pooling, and residual networks implementing the encoding of the corn crop row image; in the enhanced feature layer part, the reduced feature map is restored to the input image size by upsampling, and the channels of the upsampled and downsampled feature layers are concatenated to complete feature fusion; the upsampled output is a per-pixel classification at the same resolution as the input image, and in the last convolution the channels are adjusted to the number of classes, i.e., corn crop rows and background.
5. The corn row line extraction and plant missing position detection method according to claim 1, wherein S4 is specifically: because in the semantically segmented image the target crop rows are white and the background is black, a point where the pixel value changes is a possible edge point; the first and last pixel-value change points are selected by a constructed condition, yielding the pixels of the outer edge.
6. The corn row line extraction and plant missing position detection method according to claim 1, wherein S5 is specifically: the edge line is fitted by least squares, which obtains the optimal solution by minimizing the sum of squared residuals, i.e., the line is fitted by minimizing the sum of squared distances from the data points to the fitted line; a straight line y = kx + b is fitted to the extracted n edge point coordinates (x1, y1), (x2, y2), …, (xn, yn), where
k = (nΣxiyi - ΣxiΣyi) / (nΣxi² - (Σxi)²), b = (Σyi - kΣxi) / n;
the normal vector is then calculated from k and b, and the line is translated by 30 pixels along the normal vector to obtain a new line.
7. The corn row line extraction and plant missing position detection method according to claim 1, wherein S6 is specifically: because crop rows converge toward the far end of the image when shot by the camera and merge into one point, the "vanishing point", points near the top and near the bottom are ignored during point selection; the rows at 1/100 and 80/100 of the image height are taken, and adaptive perspective transformation is then performed from the four known points A, B, C and D thus obtained.
8. The corn row line extraction and plant missing position detection method according to claim 1, wherein S7 is specifically: a perspective transformation matrix is calculated from the positions of the four points A, B, C and D, and perspective transformation is applied to the original image to obtain the transformed image.
9. The corn row line extraction and plant missing position detection method according to claim 1, wherein S8 is specifically: rectangle fitting is performed on the contour to obtain the center point (x, y), width, height, and rotation angle of the minimum rectangle, so the rectangle's length and width determine the corn crop row width; the fitted rotated rectangle is drawn from its 4 vertex coordinates, giving the minimum fitted rectangle of the crop row contour; on the basis of this rectangular fitting frame, the upper and lower coordinate points of the center line are obtained from the four vertex coordinates, yielding the crop row line.
Filed as CN202311312815.9A on 2023-10-11 (priority date 2023-10-11): Corn row line extraction and plant missing position detection method. Status: Pending. Publication: CN117392627A (en).

Priority Applications (1)

CN202311312815.9A (priority date 2023-10-11, filing date 2023-10-11): Corn row line extraction and plant missing position detection method

Applications Claiming Priority (1)

CN202311312815.9A (priority date 2023-10-11, filing date 2023-10-11): Corn row line extraction and plant missing position detection method

Publications (1)

CN117392627A, published 2024-01-12

Family

ID: 89467622

Family Applications (1)

CN202311312815.9A (Pending): Corn row line extraction and plant missing position detection method

Country Status (1)

CN: CN117392627A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635719A (en) * 2024-01-26 2024-03-01 浙江托普云农科技股份有限公司 Weeding robot positioning method, system and device based on multi-sensor fusion
CN117635719B (en) * 2024-01-26 2024-04-16 浙江托普云农科技股份有限公司 Weeding robot positioning method, system and device based on multi-sensor fusion
CN117854029A (en) * 2024-03-09 2024-04-09 安徽农业大学 Intelligent agricultural crop root row prediction method based on machine vision

Similar Documents

Publication Publication Date Title
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN117392627A (en) Corn row line extraction and plant missing position detection method
CN112052783A (en) High-resolution image weak supervision building extraction method combining pixel semantic association and boundary attention
CN112766155A (en) Deep learning-based mariculture area extraction method
CN110378873B (en) In-situ lossless counting method for rice ear plants and grains based on deep learning
Xiao et al. Deep learning-based spatiotemporal fusion of unmanned aerial vehicle and satellite reflectance images for crop monitoring
CN113065562B (en) Crop ridge row extraction and dominant route selection method based on semantic segmentation network
CN109034184A (en) A kind of grading ring detection recognition method based on deep learning
CN114120141B (en) All-weather remote sensing monitoring automatic analysis method and system thereof
CN113280820B (en) Orchard visual navigation path extraction method and system based on neural network
de Silva et al. Towards agricultural autonomy: crop row detection under varying field conditions using deep learning
EP3971767A1 (en) Method for constructing farmland image-based convolutional neural network model, and system thereof
CN114841961A (en) Wheat scab detection method based on image enhancement and improvement of YOLOv5
CN116977960A (en) Rice seedling row detection method based on example segmentation
CN114724031A (en) Corn insect pest area detection method combining context sensing and multi-scale mixed attention
CN114494786A (en) Fine-grained image classification method based on multilayer coordination convolutional neural network
CN111832508B (en) DIE _ GA-based low-illumination target detection method
CN116935296A (en) Orchard environment scene detection method and terminal based on multitask deep learning
Zhang et al. Automated detection of Crop-Row lines and measurement of maize width for boom spraying
Islam et al. QuanCro: a novel framework for quantification of corn crops’ consistency under natural field conditions
CN116524174A (en) Marine organism detection method and structure of multiscale attention-fused Faster RCNN
CN115330747A (en) DPS-Net deep learning-based rice plant counting, positioning and size estimation method
Buttar Satellite Imagery Analysis for Crop Type Segmentation Using U-Net Architecture
Wei et al. Accurate crop row recognition of maize at the seedling stage using lightweight network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination