CN113807131A - Method, device, agricultural machine and medium for identifying farmland soil surface - Google Patents
- Publication number
- CN113807131A (application number CN202010537234.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- soil surface
- farmland
- label
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/045: Computing arrangements based on biological models; neural networks; combinations of networks
- G06N3/084: Neural-network learning methods; backpropagation, e.g. using gradient descent
- G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06Q50/02: Agriculture; Fishing; Forestry; Mining
Abstract
The disclosed embodiments provide a method for identifying the soil surface of a farmland, comprising the following steps: acquiring an image of the farmland; obtaining a label of the farmland soil surface from the image; pairing the images with their labels to form a paired data set, and dividing the data set into a test data set and several groups of training data sets; and training and optimizing the parameters of a semantic-segmentation convolutional neural network with the groups of training data sets and the test data set, respectively, to finally obtain a deep learning model for identifying the farmland soil surface. In this technical scheme, the farmland images and the soil-surface labels associated with them are used as a paired data set to train and test the semantic-segmentation convolutional neural network and generate a deep learning model, which then identifies the farmland soil surface from farmland images. In this way, farmland soil-surface labels can be generated quickly, and the recognition model obtained by training can identify the soil surface even in more complex scenes (such as non-mechanized planting).
Description
Technical Field
The present disclosure relates to a method, an apparatus, an agricultural machine, and a storage medium for identifying the soil surface of a farmland.
Background
Farmland soil-surface identification can be used for the navigation of agricultural machines (e.g., farm robots) that rely on visual navigation. At present, farmland soil-surface identification methods segment the crops in a farmland by image analysis (e.g., SVM or clustering). These approaches depend heavily on manually labeled data, the labeling workload is large, and the robustness of the identification is poor.
Disclosure of Invention
An object of the disclosed embodiments is to provide a method, an apparatus, an agricultural machine, and a storage medium capable of quickly and accurately identifying a soil surface of a farmland.
To achieve the above object, according to a first aspect of the present disclosure, there is provided a method for determining a label of a farmland soil surface, comprising:
binarizing the image of the farmland to extract the region where vegetation is located in the image and generate a binarized map of the image;
determining the main direction of the planting rows in the image;
accumulating the number of vegetation pixels along the main direction, taking the main direction and the direction perpendicular to it as the coordinate system, to obtain an accumulation curve;
determining the peak positions and peak widths of the accumulation curve;
generating rectangular planting-row strips row by row from the peak positions and widths to obtain a binary map of the planting-row area, and obtaining a non-crop-area mask map from the binary map of the planting-row area; and
extracting the soil-surface region from the non-crop-area mask map and the image to obtain the label of the soil surface.
In an embodiment of the present disclosure, binarizing the image of the field comprises binarizing the image using at least one of the following methods:
a color-space discrimination method;
a color-index discrimination method;
a vegetation-index discrimination method.
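As a concrete illustration of the vegetation-index route, the sketch below binarizes an RGB field image with the Excess Green (ExG) index. The index choice and the threshold value are illustrative assumptions, not values fixed by the disclosure:

```python
import numpy as np

def binarize_vegetation(rgb, threshold=20):
    """Binarize a field image with the Excess Green (ExG) colour index.

    rgb: HxWx3 uint8 array in RGB channel order.
    Returns a uint8 mask where 1 marks vegetation pixels.
    ExG = 2G - R - B is one common vegetation index; the threshold
    here is an illustrative assumption.
    """
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    exg = 2 * g - r - b
    return (exg > threshold).astype(np.uint8)
```

A bright-green pixel scores high on ExG and is kept; a brown soil pixel scores near zero and is dropped.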
In an embodiment of the disclosure, determining the main direction of the planting rows in the image comprises:
converting the binarized map into the polar-coordinate Hough space;
obtaining a data list returned by a straight-line Hough transform, the data list comprising a peak value, an angle, and a distance for each detected line;
performing statistical analysis on the data list to obtain the main direction of the planting rows.
In an embodiment of the present disclosure, performing statistical analysis on the data list to obtain the main direction of the planting rows comprises using at least one of the following methods:
selecting the angle that occurs most often in the data list as the main direction of the planting rows;
selecting, when the number of returned results is below a set threshold, the top-ranked angle in the data list as the main direction of the planting rows.
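The two statistical strategies above can be sketched as follows. The tuple layout of the Hough data list (peak, angle, distance, sorted by peak in descending order) and the `min_count` threshold are illustrative assumptions:

```python
from collections import Counter

def main_direction(hough_rows, min_count=2):
    """Pick the planting-row main direction from straight-line Hough results.

    hough_rows: list of (peak, angle, distance) tuples, assumed sorted
    by peak value in descending order.
    Strategy: take the most frequent angle; if no angle repeats at least
    `min_count` times (few returned results), fall back to the top-ranked
    angle. `min_count` stands in for the "set threshold" of the text.
    """
    angles = [round(angle, 1) for _, angle, _ in hough_rows]
    best_angle, count = Counter(angles).most_common(1)[0]
    if count >= min_count:
        return best_angle
    return angles[0]  # top-ranked (strongest peak) angle as fallback
```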
In an embodiment of the present disclosure, the method further comprises: smoothing the accumulation curve before determining its peak positions and peak widths.
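A minimal smoothing step, assuming a simple moving average; the disclosure does not fix the smoothing method or the window size, so both are illustrative here:

```python
import numpy as np

def smooth_curve(curve, window=5):
    """Moving-average smoothing of the accumulation curve before peak
    detection. The window size is an illustrative choice; mode="same"
    keeps the curve length unchanged (values near the ends are damped
    by the zero padding of the convolution)."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="same")
```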
In an embodiment of the disclosure, accumulating the number of vegetation pixels along the main direction to obtain the accumulation curve comprises:
rotating the binarized map until the main direction is vertical;
expanding the rotated binarized map into a square whose side length equals the length of the diagonal of the binarized map;
filling the area of the square outside the binarized map with zero values;
counting the number of non-zero pixels column by column along the main direction to generate the accumulation curve.
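Assuming the rotation itself has already been performed (e.g., with `scipy.ndimage.rotate`), the expansion, zero-filling, and column-wise counting steps can be sketched as:

```python
import numpy as np

def accumulation_curve(binary_rotated):
    """Column-wise vegetation-pixel counts for a binary map whose planting
    rows have already been rotated to the vertical.

    The map is padded into a square whose side equals its diagonal, so
    the content of any rotation angle stays in frame; the area outside
    the original map is zero-filled, and non-zero pixels are then counted
    column by column, following the steps above.
    """
    h, w = binary_rotated.shape
    side = int(np.ceil(np.hypot(h, w)))          # diagonal length
    square = np.zeros((side, side), dtype=binary_rotated.dtype)
    top, left = (side - h) // 2, (side - w) // 2  # centre the map
    square[top:top + h, left:left + w] = binary_rotated
    return (square != 0).sum(axis=0)              # per-column counts
```

Peaks of the returned curve mark the planting-row columns.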
In an embodiment of the present disclosure, obtaining the non-crop-area mask map from the binary map of the planting-row area comprises:
inverting the pixels of the binary map of the planting-row area;
rotating the binary map of the planting-row area back so that the main direction is restored to its orientation before rotation;
restoring the binary map of the planting-row area to its size before expansion to obtain the non-crop-area mask map.
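A sketch of the inversion and size-restoration steps; the rotation reversal is omitted here (with SciPy it would be a second `ndimage.rotate` with the negated angle), and the centred-padding layout matches the expansion sketch above:

```python
import numpy as np

def non_crop_mask(row_area_square, orig_shape):
    """Invert the planting-row-area binary map (values in {0, 1}) and
    undo the square expansion, returning the non-crop-area mask at the
    original image size. Assumes the map was centred when padded."""
    inverted = 1 - row_area_square                # crop rows -> 0, rest -> 1
    h, w = orig_shape
    side = row_area_square.shape[0]
    top, left = (side - h) // 2, (side - w) // 2
    return inverted[top:top + h, left:left + w]   # crop back to original size
```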
In an embodiment of the present disclosure, extracting the soil-surface region from the non-crop-area mask map and the image to obtain the label of the soil surface comprises:
taking the non-crop-area mask map as the label when the non-crop area is entirely soil surface;
extracting the soil-surface region when the non-crop area is not entirely soil surface, and taking the part of the non-crop-area mask map corresponding to the extracted soil-surface region as the label of the soil surface.
In an embodiment of the present disclosure, the method further comprises:
determining, before accumulating the number of vegetation pixels along the main direction, whether a constraint condition is required;
performing the step of accumulating the number of vegetation pixels along the main direction when a constraint condition is required;
obtaining the non-crop-area mask map directly from the binarized map when no constraint condition is required.
In an embodiment of the present disclosure, the constraint condition comprises a narrowest soil strip.
According to a second aspect of the present disclosure, there is provided a method for identifying a soil surface of a farmland, comprising:
acquiring an image of a farmland;
obtaining the label of the farmland soil surface in the image from the image;
pairing the images with the labels associated with them to form a paired data set, and dividing the data set into a test data set and several groups of training data sets;
iteratively training a semantic-segmentation convolutional neural network with the groups of training data sets to obtain a deep learning model, the data of each iteration being obtained by randomly drawing several images from the groups of training data sets, so that the parameters of the deep learning model are updated over multiple iterations; and
inputting the test data set into the deep learning model and optimizing the parameters of the deep learning model, the optimized deep learning model being used for identifying the farmland soil surface.
In an embodiment of the present disclosure, the method further comprises:
the convolutional neural network is pre-trained using a known data set before it is trained using multiple sets of training data sets.
In an embodiment of the present disclosure, obtaining the label of the farmland soil surface in the image according to the image includes:
the label is obtained using the method described above for determining a label for a soil surface of a field.
According to a third aspect of the present disclosure, there is provided a method for planning a work path for an unmanned vehicle, comprising:
determining a farmland soil surface by using the method for identifying the farmland soil surface;
obtaining the inter-ridge positions of the farmland from the determined farmland soil surface; and
determining the working path of the unmanned vehicle from the inter-ridge positions.
In an embodiment of the present disclosure, determining the working path of the unmanned vehicle according to the inter-ridge position includes:
determining the spacing between soil-surface rows from the inter-ridge positions;
adjusting the wheel track of the unmanned vehicle to an integral multiple of that spacing.
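A hedged sketch of the track adjustment: the disclosure only states that the wheel track is set to an integral multiple of the row spacing, so the minimum-track argument and the function name here are illustrative assumptions:

```python
import math

def wheel_track(row_spacing_m, min_track_m):
    """Smallest integral multiple of the soil-surface row spacing that is
    at least the vehicle's minimum feasible track width (in metres).
    The minimum-track constraint is an illustrative assumption; the
    integral-multiple rule is the one stated in the text."""
    n = max(1, math.ceil(min_track_m / row_spacing_m))
    return n * row_spacing_m
```

With 0.6 m rows and a 1.0 m minimum track, the wheels straddle two rows (1.2 m), so they run on soil strips rather than on the crops.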
According to a fourth aspect of the present disclosure, there is provided a method for predicting crop yield, comprising:
determining a farmland soil surface by using the method for identifying the farmland soil surface;
obtaining ridge spacing according to the determined farmland soil surface;
comparing the ridge spacing with a preset ridge spacing;
and predicting the yield of the crops according to the comparison result.
According to a fifth aspect of the present disclosure, there is provided an apparatus comprising:
a processor; and
a memory configured to store instructions configured to, when executed by the processor, enable the processor to perform at least one of:
the method for determining the label of the farmland soil surface;
the method for identifying the soil surface of the farmland;
the method for planning the operation path of the unmanned vehicle;
the method for predicting crop yield described above.
According to a sixth aspect of the present disclosure, there is provided an agricultural machine comprising the above-described apparatus.
In an embodiment of the present disclosure, the agricultural machine includes at least one of:
unmanned aerial vehicle, unmanned vehicle, transplanter, seeder.
According to a seventh aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon instructions that, when executed by a processor, are capable of causing the processor to perform at least one of:
the method for determining the label of the farmland soil surface;
the method for identifying the soil surface of the farmland;
the method for planning the operation path of the unmanned vehicle;
the method for predicting crop yield described above.
Through the above technical scheme, the image of the farmland and the soil-surface label associated with it are used as a paired data set to train and test the semantic-segmentation convolutional neural network and generate a deep learning model, which then identifies the farmland soil surface from the image of the farmland. In this way, farmland soil-surface labels can be generated quickly, and the recognition model obtained by training can identify the soil surface even in more complex scenes (such as non-mechanized planting).
Additional features and advantages of embodiments of the present disclosure will be described in detail in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the detailed description serve to explain the embodiments but not to limit the embodiments. In the drawings:
FIG. 1A is a flow chart schematically illustrating an example of a method for identifying a soil surface of an agricultural field according to an embodiment of the present disclosure;
FIG. 1B schematically illustrates an example network structure of a deep learning network used in a method for identifying farmland soil surfaces according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a display of a label of a soil surface according to an embodiment of the present disclosure;
FIG. 3 is a flow chart that schematically illustrates an example of a method for determining a farmland soil surface label, in accordance with an embodiment of the present disclosure;
FIG. 4A schematically illustrates an acquired image of a field according to an embodiment of the present disclosure;
FIG. 4B schematically shows a schematic diagram of an expanded square and a rotated binarized map according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a smoothed accumulation curve according to an embodiment of the disclosure;
FIG. 6A schematically illustrates a binary map of a planting-row area according to an embodiment of the present disclosure;
FIG. 6B schematically illustrates the binary map obtained after inverting the binary map of the planting-row area according to an embodiment of the present disclosure;
FIG. 6C schematically shows the region of the original binarized map (i.e., the binarized map before rotation and expansion) within the map shown in FIG. 6B;
FIG. 6D schematically illustrates the non-crop-area mask map obtained after reversing the rotation and the expansion, according to an embodiment of the present disclosure;
FIG. 7 is a flow chart that schematically illustrates an example of a method for determining a soil surface label for an agricultural field, in accordance with another embodiment of the present disclosure;
FIG. 8 illustrates a binarized map of a farmland soil surface when no constraint condition is required, according to an embodiment of the present disclosure;
fig. 9 is a flow chart schematically illustrating an example of a method for planning a work path for an unmanned vehicle according to another embodiment of the present disclosure;
fig. 10 is a flow chart schematically illustrating an example of a method for predicting crop yield according to another embodiment of the present disclosure; and
fig. 11 is a block diagram schematically illustrating an example of an apparatus for implementing at least one of the methods according to the embodiments of the present disclosure, according to an embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) appear in the embodiments of the present disclosure, they are only used to explain the relative positional relationships, movements, and the like of the components in a specific posture (as shown in the drawings); if that posture changes, the directional indications change accordingly.
In addition, if descriptions such as "first" and "second" appear in the embodiments of the present disclosure, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature qualified as "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the various embodiments may also be combined with one another, provided a person skilled in the art can realize the combination; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and outside the protection scope of the present disclosure.
Fig. 1A is a flowchart schematically illustrating an example of a method for identifying a soil surface of an agricultural field according to an embodiment of the present disclosure. As shown in fig. 1A, a method for identifying a soil surface of a farmland is provided, which may include the following steps.
In step S11, an image of the field is acquired. In the disclosed embodiments, the image may be obtained in at least one of several ways. For example, in one example, the field may be photographed from overhead by a camera mounted on a drone. In another example, the field may be photographed at a downward angle by a camera located at a fixed elevated point at or near the field area (such as a pole or an observation tower). In yet another example, a camera may be mounted on a balloon (e.g., a hot-air balloon) to photograph the field from above.
In step S12, a label of the farmland soil surface in the acquired image is obtained from the image. The purpose of the label is to mark the classification of regions in the image, such as soil surface and background. In an embodiment of the present disclosure, the image may be labeled manually to obtain the label of the soil surface. For example, an image semantic-segmentation labeling tool (e.g., Labelme) can be used to mark the soil-surface region in the farmland image with a polygonal box. In another embodiment of the present disclosure, the label may be generated by a suitable algorithm, which will be described in detail below. The label of the soil surface can be a gray-scale map, for example, as shown in fig. 2.
In step S13, the images and their associated labels are paired to form a paired data set, and the data set is divided into a test data set and several groups of training data sets. Specifically, images of multiple farmlands may be acquired, the label associated with each image obtained from it, and the images paired with their associated labels to form a paired data set. The data set is then divided into groups of training data sets and a test data set. In one example, the training data may be larger in volume than the test data; for example, the training data set may account for 90% of the data set and the test data set for 10%.
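The pairing and splitting step might look like the following sketch. Only the roughly 90/10 split comes from the example above; the number of training groups and the round-robin grouping are illustrative assumptions:

```python
import random

def split_dataset(images, labels, test_frac=0.1, n_groups=5, seed=0):
    """Pair each field image with its soil-surface label, hold out a test
    set (10% here, matching the example ratio), and split the remainder
    into several training groups. `n_groups`, the round-robin grouping,
    and the fixed seed are illustrative choices, not from the patent."""
    pairs = list(zip(images, labels))
    random.Random(seed).shuffle(pairs)            # deterministic shuffle
    n_test = max(1, int(len(pairs) * test_frac))
    test_set, train = pairs[:n_test], pairs[n_test:]
    groups = [train[i::n_groups] for i in range(n_groups)]
    return test_set, groups
```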
In step S14, the semantic-segmentation convolutional neural network is iteratively trained using the groups of training data sets to obtain a deep learning model. The data for each iteration is obtained by randomly drawing several images from the groups of training data sets, so that the parameters of the deep learning model are updated over multiple iterations.
Specifically, when training the semantic-segmentation convolutional neural network, at each iteration several images may be randomly drawn from a group of training data sets to form a batch for updating the parameters of the convolutional neural network, and a loss function is computed (examples of the loss function may include, but are not limited to, BCEWithLogitsLoss, Focal Loss, or cross-entropy). The loss function estimates the degree of inconsistency between the model's predicted values and the true values; it is a non-negative real-valued function, and the smaller the loss, the better the robustness of the model. The loss is computed from the predicted and true values, and the parameters of the convolutional neural network are adjusted based on it. The parameters of the deep learning model are thus updated through multiple iterations.
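For reference, the BCEWithLogitsLoss criterion named above combines a sigmoid with binary cross-entropy in a numerically stable form. This NumPy rendering is an illustration only; in practice a framework implementation (e.g., PyTorch's) would be used:

```python
import numpy as np

def bce_with_logits(logits, targets):
    """Numerically stable binary cross-entropy on raw logits.

    logits, targets: same-shape float arrays, targets in {0, 1}.
    Uses the max(x, 0) - x*z + log(1 + exp(-|x|)) form, which avoids
    overflow for large-magnitude logits.
    """
    return np.mean(np.maximum(logits, 0) - logits * targets
                   + np.log1p(np.exp(-np.abs(logits))))
```

A zero logit (probability 0.5) against a positive target gives exactly ln 2, and the stable form stays finite even for logits in the thousands.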
In embodiments of the present disclosure, gradient descent with backpropagation may be employed to train the convolutional neural network. In one example, a Learning Rate Range Test may be used to find a suitable learning-rate interval, after which a Cyclical Learning Rate schedule may be employed to train the convolutional neural network.
In embodiments of the present disclosure, the acquired farmland image may be pre-processed. For example, in one example, the field image may be enhanced before being used as a detection image. In another example, each field image may be cropped into multiple sub-images of K x N pixels (where K and N are natural numbers, and K may equal N or differ from it). The sub-image size K x N may be chosen according to the required processing rate and the graphics card available for image processing. In one example, the sub-images may be unified into 512 x 512 three-channel RGB images.
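The cropping step can be sketched as follows. Dropping the remainders at the right and bottom edges is one of several reasonable conventions (padding or overlapping tiles are alternatives) and is an assumption here:

```python
import numpy as np

def crop_tiles(image, tile=512):
    """Crop a field image into non-overlapping tile x tile sub-images
    (512 x 512 three-channel RGB in the example above). Edge remainders
    smaller than a full tile are dropped for simplicity."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]
```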
In step S15, the test data set is input to the deep learning model, and the deep learning model is subjected to parameter optimization.
After the training data set is used to train the semantic segmentation convolutional neural network to obtain the deep learning model, the test data set can be input to the deep learning model to perform parameter optimization on the deep learning model. The optimized deep learning model can be used for identifying the farmland soil surface.
Specifically, after the convolutional neural network has been trained on the training data set to obtain a deep learning model, the test data set is used to verify the model. Intersection over Union (IoU) may be used in this verification. IoU accuracy is an evaluation criterion for semantic-segmentation accuracy, defined as the ratio of the intersection to the union of the set of ground-truth values and the set of predicted values. The test data set is input into the deep learning model, the model outputs a predicted image for each input, and the IoU is computed from the predicted image and the ground-truth label associated with the corresponding farmland image. Methods for computing the IoU are well known to those skilled in the art and are not described in detail here. An objective function is determined from the computed IoU: for example, an IoU loss may be calculated and incorporated into the objective function of the deep learning model, and the parameters of the model adjusted (or optimized) based on the objective function until the IoU accuracy reaches a desired value. For example, iterations may continue until the IoU accuracy no longer improves, e.g., until the improvement goes to zero or falls below a threshold.
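The IoU criterion itself reduces to a short computation over binary masks; the convention of scoring two empty masks as 1.0 is an assumption (the definition leaves that case open):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union between a predicted and a ground-truth
    binary mask, the test-set accuracy criterion described above."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union
```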
In an alternative embodiment of the present disclosure, the semantically segmented convolutional neural network may be pre-trained prior to training the convolutional neural network with the input training data set. For example, a convolutional neural network may be pre-trained with a known data set. Known datasets may include, but are not limited to, for example ImageNet.
In embodiments of the present disclosure, the semantic-segmentation convolutional neural network may include, but is not limited to, a Fully Convolutional Network (FCN), UNet, or LinkNet.
Fig. 1B schematically illustrates an example network structure of a deep learning network used in a method for identifying the soil surface of a farmland according to an embodiment of the present disclosure. The example network structure shown in fig. 1B may be used to implement the method for identifying the farmland soil surface of the embodiment described with reference to fig. 1A. The network structure may include an encoder module 110 and a decoder module 120. The encoder module 110 may be used to gradually reduce the spatial resolution of the feature maps and obtain higher-level semantic information. The decoder module 120 may be used to gradually recover the spatial information.
Samples (e.g., images) in the training data set or the test data set are input to the deep learning network, the encoder module 110 may extract feature information of each image in the data set, and the decoder module 120 may up-sample the feature information and output a predicted image. The predicted image is the result of semantic segmentation, the pixel values of which correspond to a particular object type (i.e., a preset type). The output predicted image can be a gray-scale image of the soil surface region, and the gray-scale image is a semantic segmentation result. In one example, the size of the gray scale map may be the same as the size of the acquired image of the farmland, and in the gray scale map, the value of any pixel may represent the semantic category (i.e. the preset type) to which the deep learning model predicts the object at the position. For example, 0 represents background and 1 represents earth surface.
The encoder module 110 may include a deep convolutional neural network (DCNN) and atrous spatial pyramid pooling (ASPP). The DCNN may be used to extract features of the farmland image, which may include color, shape, texture features, and the like. ASPP may be used to enlarge the receptive field of the feature map, i.e., the region of the image seen by the convolution kernel. In this embodiment, examples of backbone networks that may be selected for the DCNN include, but are not limited to, ResNet, VGG, SENet, Xception, and MobileNet. As shown in fig. 1B, in the encoder, the DCNN uses multi-scale atrous convolution (Atrous Conv), including 3x3 atrous convolution layers whose dilation rates may be 6, 12, and 18, respectively (i.e., the 3x3 Conv rate 6, 3x3 Conv rate 12, and 3x3 Conv rate 18 shown in the figure), which can perceive semantic information over a wider range of the input farmland image to facilitate accurate segmentation. The input image (for example, a farmland image) is fed to the encoder module 110, processed in parallel by a pointwise convolution (the 1x1 Conv shown in the figure), the 3x3 Conv rate 6, 3x3 Conv rate 12, and 3x3 Conv rate 18 branches, and by pooling (for example, the Image Pooling shown in fig. 1B), after which the results pass through a 1x1 Conv to output the feature information (feature map). The pooling operation in the encoder module 110 may employ max pooling (MaxPooling).
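A minimal sketch of the atrous (dilated) convolution mentioned above, written with plain NumPy rather than a deep learning framework. The function name and the single-channel, valid-padding setup are illustrative assumptions; the point is only how a dilation rate enlarges the receptive field without adding parameters.

```python
import numpy as np

def atrous_conv2d(image, kernel, rate):
    """2-D atrous (dilated) convolution, 'valid' padding, single channel.

    A 3x3 kernel with dilation rate r covers a (2*r+1) x (2*r+1) window,
    enlarging the receptive field without extra parameters.
    """
    k = kernel.shape[0]
    span = (k - 1) * rate + 1          # effective kernel extent
    h, w = image.shape
    out = np.zeros((h - span + 1, w - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the window with stride = rate (the "holes")
            patch = image[i:i + span:rate, j:j + span:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(49, dtype=float).reshape(7, 7)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0                   # identity kernel picks the centre pixel

# rate=2: a 3x3 kernel spans a 5x5 window, so a 7x7 image yields a 3x3 output
out = atrous_conv2d(image, identity, rate=2)
print(out.shape)   # (3, 3)
```

With rate 6, 12, or 18, the same 3x3 kernel would span 13, 25, or 37 pixels, which is how the ASPP branches of fig. 1B see the farmland image at several scales.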
At the decoder module 120, the feature information output by the encoder module 110 may first be upsampled, for example as shown in fig. 1B by bilinear upsampling (e.g., with an upsampling factor of 4), and then combined by concatenation (the Concat shown in the figure) with low-level features (Low-Level Features) of the same spatial resolution from the backbone network of the encoder module 110. In one example, prior to combining, the low-level features may be passed through a 1x1 convolution (1x1 Conv) to reduce the number of channels. After combining, several convolution operations (e.g., 3x3 convolutions) may be applied, followed by bilinear upsampling (e.g., with an upsampling factor of 4), and the predicted image is finally output.
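The decoder data flow described above can be sketched with plain array operations. This is only a shape-flow illustration under stated assumptions: nearest-neighbour upsampling via `np.kron` stands in for the bilinear upsampling of fig. 1B, a channel mean stands in for the post-concatenation 3x3 convolutions, and the tensor sizes are invented.

```python
import numpy as np

# Illustrative sizes only: a low-resolution encoder output and a
# higher-resolution low-level feature from the backbone.
feature = np.random.rand(32, 32)            # encoder (ASPP) output
low_level = np.random.rand(128, 128)        # backbone low-level feature

# First 4x upsampling (nearest-neighbour stand-in for bilinear).
up4 = np.kron(feature, np.ones((4, 4)))     # -> 128 x 128

# "Concat" along a channel axis, then a stand-in for the 3x3 convolutions.
combined = np.stack([up4, low_level])       # 2 x 128 x 128
fused = combined.mean(axis=0)               # 128 x 128

# Second 4x upsampling produces the full-resolution predicted image.
predicted = np.kron(fused, np.ones((4, 4)))
print(predicted.shape)                      # (512, 512)
```

The two factor-4 upsampling stages match the description above: the decoder recovers a prediction 16 times larger in each dimension than the encoder output.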
Although fig. 1B illustrates an example network structure of a deep learning network applicable to the method for identifying a farmland soil surface according to an embodiment of the present disclosure, those skilled in the art will understand that other types of deep learning networks capable of achieving the same or similar functions, effects, or performance as the embodiment illustrated in fig. 1B are also possible. The scope of the present disclosure is therefore not limited to the specific network structure described in this embodiment.
According to the method for identifying a farmland soil surface provided by the embodiments of the present disclosure, the semantic segmentation convolutional neural network may be trained and tested using images of a farmland and the soil surface labels associated with those images as paired data sets, a deep learning model may be generated, and the farmland soil surface may be identified from an image of the farmland using the deep learning model. In this way, farmland soil surface labels can be generated quickly and a recognition model can be obtained through training, providing the capability to recognize the farmland soil surface in more complex scenes (such as non-mechanized planting).
After the soil surface is identified, soil surface moisture content analysis and the like can be carried out based on the identification result, and by reusing the identified regions as a region segmentation of multispectral data, multispectral analysis can be performed on the soil surface. In addition, the obtained soil surface distribution can be used for field robot navigation and the like.
As described above, the labels can be generated by a suitable algorithm. In an embodiment of the present disclosure, a method for determining a label of a farmland soil surface is provided. The method may be applied in step S12 described above. Fig. 3 is a flow chart schematically illustrating an example of a method for determining a farmland soil surface label according to an embodiment of the present disclosure. As shown in fig. 3, the method may include the following steps.
In step S31, the image of the farmland is binarized to extract the region of the image where the vegetation is located and generate a binarized map of the image.
Specifically, the purpose of extracting the region where vegetation (e.g., crops) is located is to segment out where the vegetation is in the image. A region in the binarized map is either a vegetation region or a non-vegetation region. The binarization method may include, but is not limited to, color space methods (e.g., RGB to HSV), color index methods, vegetation index methods, and other distinguishing methods. In one example, taking the excess green index (ExG) as the color index, the following formula applies:
ExG=2*Green–Red–Blue
In this formula, ExG denotes the excess green index, Green denotes the pixel value of the green channel, Red denotes the pixel value of the red channel, and Blue denotes the pixel value of the blue channel. The formula may be applied to the image. In practical applications, mathematical processing such as normalization can be performed on the above formula. After the excess green index is calculated, the vegetation area can be separated by a threshold: pixels whose excess green index is greater than the threshold are set to a non-zero value, and pixels whose excess green index is less than the threshold are set to zero, thereby generating the binarized map. The threshold may be set manually or obtained by Otsu's method. For example, the vegetation region may consist of non-zero pixels and the non-vegetation region of zero pixels.
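The excess green binarization described above can be sketched in a few lines of NumPy. The function name and the fixed threshold are illustrative assumptions; as noted, Otsu's method could supply the threshold automatically.

```python
import numpy as np

def exg_binarize(rgb, threshold=20):
    """Binarize an RGB farmland image with the excess green index.

    ExG = 2*Green - Red - Blue; pixels above `threshold` are treated as
    vegetation (non-zero) and the rest as non-vegetation (zero).
    """
    rgb = rgb.astype(np.int32)               # avoid uint8 overflow
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    return (exg > threshold).astype(np.uint8)

# Tiny synthetic image: one green (vegetation) pixel, one gray (soil-like) pixel.
img = np.array([[[30, 200, 40], [120, 115, 110]]], dtype=np.uint8)
mask = exg_binarize(img)
print(mask)   # [[1 0]]
```

The cast to a signed integer type matters: with uint8 arithmetic, `2*Green - Red - Blue` would wrap around instead of going negative on soil pixels.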
Although the binarization process is described above with the excess green index as an example, it will be understood by those skilled in the art that other binarization methods are also possible.
In step S32, the main direction of the planted row in the image is determined.
For example, the main direction of the planting rows may be determined by the straight-line Hough transform (Hough line detection). More specifically, in an embodiment of the present disclosure, the binarized map is converted to the polar-coordinate Hough space, and a data list is returned by the straight-line Hough transform (e.g., using a library function for the straight-line Hough transform), where the data list includes peak values, angles, and distances. Here, the angle may refer to the polar angle, that is, the angle between the x-axis (of the rectangular coordinate system) and the line connecting a pixel point in the original image (i.e., the binarized map) to the origin, and the distance may refer to the distance between that pixel point and the origin. The returned peak value can be understood as a (Hough space) accumulated value: the more pixel points lie on a hypothesized straight line in the original rectangular coordinate system, the higher the confidence that the hypothesized straight line is a true straight line. The Hough transform hypothesizes all possible straight lines in the image at a certain precision and then counts the number of pixels on each hypothesized straight line, thereby obtaining the accumulated value for each hypothesized line. For example, suppose a group of n parallel straight lines exists in the original image; the angle is the same for every line in the group, so that angle appears n times, once per line. The data list is then statistically analyzed to obtain the main direction of the planting rows. Examples of statistical analysis methods may include, but are not limited to: selecting the angle that occurs most frequently in the data list as the main direction of the planting rows; or, when few results are returned (for example, fewer than a set threshold), selecting the first-ranked angle in the data list as the main direction of the planting rows.
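A self-contained sketch of the vote-and-accumulate idea just described, using plain NumPy instead of a library Hough routine (in practice a function such as `skimage.transform.hough_line` would be used; that choice, the function name below, and the 1-degree angular precision are all illustrative assumptions).

```python
import numpy as np

def dominant_row_angle(binary, n_angles=180):
    """Estimate the main planting-row direction from a binary map.

    Minimal straight-line Hough transform: every non-zero pixel votes,
    for each candidate angle, into a (distance, angle) accumulator; the
    angle whose best hypothesized line collects the most votes is taken
    as the main direction (returned in degrees).
    """
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary.shape)))
    acc = np.zeros((2 * diag + 1, n_angles), dtype=np.int64)
    for t, theta in enumerate(thetas):
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (rho + diag, np.full_like(rho, t)), 1)
    best = np.argmax(acc.max(axis=0))   # angle column with the strongest line
    return np.degrees(thetas[best])

# Two vertical planting rows: columns 2 and 6 are vegetation.
binary = np.zeros((40, 10), dtype=np.uint8)
binary[:, 2] = 1
binary[:, 6] = 1
angle = dominant_row_angle(binary)
print(angle)   # ~0.0: the normal angle of vertical lines
```

Because both rows are parallel, their votes land in different distance bins but in the same angle column, which is exactly why the most frequent angle identifies the main direction.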
In step S33, taking the main direction and the direction perpendicular to it as a coordinate system, the number of vegetation pixels along the main direction is accumulated, moving along the perpendicular direction, to obtain an accumulation curve.
In the embodiments of the present disclosure, to facilitate image processing, the binarized map may be rotated so that the main direction is vertical, and the binarized map may be expanded into a square. Specifically, the binarized map may be rotated according to the main direction angle so that the main direction becomes vertical. The length of the diagonal of the binarized map is calculated from its height and width, and the binarized map is expanded into a square whose side length equals the diagonal length. The area of the square outside the binarized map can be filled with zero values. Fig. 4A schematically shows an acquired farmland image, and fig. 4B schematically shows the expanded square and the rotated binarized map.
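The expand-then-rotate step can be sketched as follows. The centring of the original map inside the square and the use of `scipy.ndimage.rotate` with nearest-neighbour interpolation are illustrative assumptions; the key property is that a square canvas with diagonal-length sides never clips the map, whatever the rotation angle.

```python
import numpy as np
from scipy.ndimage import rotate

# A small binarized map with one vegetation column.
binary = np.zeros((6, 4), dtype=np.uint8)
binary[:, 1] = 1

# Expand to a square whose side equals the diagonal length, zero-filled.
diag = int(np.ceil(np.hypot(*binary.shape)))        # diagonal of the map
square = np.zeros((diag, diag), dtype=np.uint8)
top = (diag - binary.shape[0]) // 2
left = (diag - binary.shape[1]) // 2
square[top:top + binary.shape[0], left:left + binary.shape[1]] = binary

# Rotate so the main direction (assumed here to be 30 degrees off vertical)
# becomes vertical; reshape=False keeps the square canvas size.
rotated = rotate(square, angle=30, reshape=False, order=0)
print(rotated.shape)   # (8, 8)
```

The later de-rotation and de-expansion of step S35 simply invert these two operations: rotate by the opposite angle, then crop the centred region back out of the square.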
The main direction and the direction perpendicular to it may be taken as a coordinate system, for example an x-y coordinate system in which the x-axis is the direction perpendicular to the main direction and the y-axis is the main direction. The number of non-zero pixels in the y-axis direction is counted column by column along the x-axis to generate the accumulation curve.
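Once the rows are vertical, the accumulation curve is a single column-wise sum. A minimal sketch with an invented toy map:

```python
import numpy as np

# Binary map whose planting rows are already vertical (main direction = y axis).
binary = np.zeros((8, 7), dtype=np.uint8)
binary[:, 1] = 1          # first planting row
binary[:, 4] = 1          # second planting row
binary[3, 4] = 0          # a gap in the second row

# Count non-zero pixels in each column: one curve value per x position.
accumulation = binary.sum(axis=0)
print(accumulation)       # [0 8 0 0 7 0 0]
```

Columns that cross a planting row produce high values, and the inter-row soil gaps produce near-zero values, which is what makes the peaks of this curve usable for locating the rows.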
In step S34, the peak apex and width of the accumulation curve are determined.
In an alternative embodiment of the present disclosure, the accumulation curve may be smoothed before its peak vertices and widths are determined. Suitable smoothing methods may include, but are not limited to, denoising with a moving average, denoising by fitting a univariate spline (UnivariateSpline), and denoising with a Savitzky-Golay filter. Denoising may also include, but is not limited to, correcting the negative values produced by the Savitzky-Golay filter, and the like. Fig. 5 schematically shows the smoothed accumulation curve (curve II); in fig. 5, curve I is the original curve, and the dots are the peaks found by the algorithm. It can be seen that without smoothing many spurious peaks appear, which is detrimental to the row search.
In the embodiment of the present disclosure, the peak vertices and widths of the smoothed curve may be calculated using scipy or similar tools. The main purpose of the parameter constraints is to reduce errors. For example, a minimum requirement may be set on the peak prominence parameter, and peaks below the minimum may be discarded; the leftmost peak in fig. 5, for instance, may be discarded because it does not reach the minimum (x marks the selected peak points). Similarly, in addition to the peak prominence, a parameter constraint may be placed on the peak width.
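A sketch of smoothing plus constrained peak finding. The use of `scipy.signal.savgol_filter` and `scipy.signal.find_peaks`, the synthetic curve, and the particular prominence/width thresholds are all illustrative assumptions consistent with the scipy-based approach mentioned above.

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

# Synthetic accumulation curve: two true planting-row peaks plus jitter.
x = np.arange(60)
curve = (40 * np.exp(-((x - 15) ** 2) / 18)
         + 40 * np.exp(-((x - 45) ** 2) / 18)
         + 2 * np.sin(7 * x))

# Savitzky-Golay smoothing, then clip the negative values it can introduce.
smoothed = savgol_filter(curve, window_length=11, polyorder=3)
smoothed = np.maximum(smoothed, 0)

# Prominence and width constraints discard spurious low peaks.
peaks, props = find_peaks(smoothed, prominence=10, width=2)
print(peaks)          # approximately [15, 45]
```

Without the `prominence` constraint, the jitter would contribute many small local maxima, mirroring the spurious peaks of curve I in fig. 5; `props['widths']` supplies the peak widths used to build the rectangular strips in the next step.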
In step S35, planting-row rectangular strips are generated row by row from the peak vertices and widths to obtain a planting row area binary image, and a non-crop area mask image is obtained from the planting row area binary image.
Each row in the planting row area binary image may correspond to one planting-row rectangular strip: the center line of the rectangle is the column where the peak vertex lies, the width of the rectangle is the peak width, and the height of the rectangle is the width of the original image. The planting-row rectangular strips represent the planting row areas in an ideal state; the areas inside the rectangular strips may be non-zero values, and the areas between the strips may be zero values. The peak width can be obtained by setting a parameter, and peak widths corresponding to different peak heights can be obtained. The pixels of the planting row area binary image are then inverted, i.e., non-zero values become zero and zero values become non-zero, to obtain the non-crop area mask image.
If the binarized map was rotated as described above in step S33, the planting row area binary image is de-rotated and de-expanded to obtain a non-crop area mask image corresponding to the original image (binarized map). De-rotation and de-expansion are the inverse of the rotation and expansion in the above embodiments: the main direction is rotated back to the original direction, and the planting row area binary image is restored to its size before expansion. Fig. 6A schematically shows a planting row area binary image. As shown in fig. 6A, the pixels of the white rectangular strips may be non-zero values and the pixels of the black regions between the strips may be zero values. Fig. 6B schematically shows the binary image obtained after inverting the planting row area binary image. Fig. 6C schematically shows the region of the original binarized map (i.e., the binarized map before rotation and expansion) within the image shown in fig. 6B. Fig. 6D schematically shows the non-crop area mask image obtained after de-rotation and de-expansion.
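The strip generation and inversion of step S35 can be sketched as follows. The helper name and its arguments are illustrative assumptions; each peak becomes a full-height rectangular strip, and inverting the result yields the non-crop mask.

```python
import numpy as np

def non_crop_mask(height, width, peak_centers, peak_widths):
    """Build the planting row area binary image from peak centres and
    widths, then invert it to obtain the non-crop (candidate soil) mask.
    """
    rows = np.zeros((height, width), dtype=np.uint8)
    for c, w in zip(peak_centers, peak_widths):
        left = max(0, int(round(c - w / 2)))
        right = min(width, int(round(c + w / 2)) + 1)
        rows[:, left:right] = 1        # rectangular strip around the peak
    return 1 - rows                    # invert: non-zero <-> zero

mask = non_crop_mask(height=4, width=10, peak_centers=[2, 7], peak_widths=[2, 2])
print(mask[0])    # [1 0 0 0 1 1 0 0 0 1]
```

The zero runs in the mask correspond to the white strips of fig. 6A, and the remaining non-zero pixels are the inter-row regions that become soil surface candidates in step S36.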
In step S36, a soil surface region is extracted from the non-crop region mask map and the image to obtain a label (label map) of the soil surface.
In the disclosed embodiments, if the non-crop area is entirely soil surface, the non-crop area mask may be used directly as the label. If the non-crop area is not entirely soil surface, the soil surface area can be extracted from it. Specifically, a soil condition screening, for example a soil color screening, may be performed on the non-crop area mask; the soil in this scene may be assumed to be yellow or gray. Of course, those skilled in the art will appreciate that other extraction methods are applicable. The part of the non-crop area mask map corresponding to the extracted soil surface area can then be used as the label of the soil surface.
Fig. 7 is a flow chart schematically illustrating an example of a method for determining a farmland soil surface label according to another embodiment of the present disclosure. The embodiment shown in fig. 7 is largely the same as the embodiment shown in fig. 3, except that the method may further include step S71 before step S32: determining whether a strict mode is required, and if so, performing step S32. In particular, "strict" may refer to whether a constraint condition is required, such as a narrowest soil strip. When constraints are required, the width of the soil gap between rows is easy to obtain, which facilitates navigation of agricultural machines (e.g., field robots). If no constraints are required, any area that is not vegetation (crops) may be regarded as soil. Fig. 8 shows a binarized map of a farmland soil surface without constraint conditions.
The method of the embodiment shown in fig. 7 may further include step S72: if the strict mode is not required, that is, no constraint condition is needed, the non-crop area mask map can be obtained directly from the binarized map. For example, the binarized map may be inverted (e.g., if the binary values are 0 and 1, each 0 becomes 1 and each 1 becomes 0) to obtain the non-crop area mask map.
According to the method for determining the label of the farmland soil surface of the embodiments of the present disclosure, visual processing techniques and algorithms are used to process farmland images automatically and extract the soil surface area quickly, yielding an image-label set for convolutional neural network training. The resulting neural network can cope with more complex conditions, including planting rows that are not perfectly straight, and the like.
Farmland soil surface identification can be applied to a variety of scenarios. In one embodiment of the present disclosure, one application of farmland soil surface identification may be planning a work path of an unmanned vehicle. Fig. 9 is a flow chart schematically illustrating an example of a method for planning a work path for an unmanned vehicle according to another embodiment of the present disclosure. As shown in fig. 9, in an embodiment of the present disclosure, a method for planning a work path for an unmanned vehicle may include the following steps.
In step S91, a farmland soil surface is determined. For example, the method for identifying a soil surface of an agricultural field in the above-described embodiment may be used to determine the soil surface of the agricultural field.
In step S92, the inter-ridge position of the farmland is obtained from the determined farmland soil surface. Specifically, after the farmland soil surface is determined, the inter-ridge positions of the farmland can be obtained (identified) from the farmland soil surface.
In step S93, the work route of the unmanned vehicle is determined from the inter-ridge position. Specifically, after the inter-ridge locations are determined, the work path of the unmanned vehicle may be planned along the inter-ridge locations such that the unmanned vehicle works along the inter-ridge locations.
In the disclosed embodiments, the track width of the unmanned vehicle is adjustable. In planning the working path, it is desirable that the wheels of the unmanned vehicle always travel on the soil surface, i.e., straddle one or more ridges without pressing on them during travel. Therefore, in the method of the embodiments of the present disclosure, the spacing between soil surface rows may be determined from the inter-ridge positions, and the track width of the unmanned vehicle may be adjusted according to this spacing. For example, the track width may be adjusted to an integer multiple of the spacing, which allows the wheels of the unmanned vehicle to follow the soil surface paths as much as possible.
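The integer-multiple adjustment can be sketched as a small helper. The function name, units, and the idea of searching within the vehicle's adjustable range are illustrative assumptions, not part of the disclosure.

```python
import math

def adjust_track_width(row_spacing_m, min_track_m, max_track_m):
    """Pick a track width that is an integer multiple of the soil-row
    spacing, within the vehicle's adjustable [min, max] range.

    The smallest feasible multiple keeps both wheels over the soil
    surface between ridges; None means no multiple fits the range.
    """
    k = math.ceil(min_track_m / row_spacing_m)   # smallest multiple >= min
    track = k * row_spacing_m
    return track if track <= max_track_m else None

print(adjust_track_width(0.6, 1.0, 2.0))   # 1.2: the wheels straddle two ridges
```

With a 0.6 m row spacing and a track adjustable between 1.0 m and 2.0 m, the helper selects 1.2 m, so each wheel path lands on a soil row while the vehicle straddles the ridges in between.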
In embodiments of the present disclosure, another application for farmland soil surface identification may be predicting crop yield. Fig. 10 is a flowchart schematically illustrating an example of a method for predicting crop yield according to another embodiment of the present disclosure. As shown in fig. 10, in an embodiment of the present disclosure, a method for predicting crop yield may include the following steps.
In step S101, a farmland soil surface is determined. For example, the method for identifying a soil surface of an agricultural field in the above-described embodiment may be used to determine the soil surface of the agricultural field.
In step S102, a ridge pitch is obtained from the determined farmland soil surface. Specifically, after the farmland soil surface is determined, the ridge pitch of the farmland, that is, the distance between two adjacent ridges, can be obtained (identified) from the farmland soil surface.
In step S103, the ridge pitch is compared with a preset ridge pitch.
In step S104, the yield of the crop is predicted according to the comparison result.
Specifically, each crop has a particular ridge spacing corresponding to its highest yield (e.g., the ideal highest yield), which may be taken as the preset ridge spacing. A ridge spacing that is too large or too small affects the yield of the plants. Thus, in the disclosed embodiments, crop yield may be predicted by comparing the ridge spacing derived from the farmland soil surface (the actual ridge spacing) with the preset ridge spacing. In one example, yield prediction may involve a preliminary prediction, i.e., predicting from the comparison whether the yield can reach the highest yield. In another example, the relationship between the deviation of the ridge spacing from the preset ridge spacing and a yield impact value (the difference from the highest yield) may be predetermined for a particular crop, for example as a look-up table. The look-up table may be determined from a limited number of experiments or empirically. After the actual ridge spacing is determined, the corresponding yield impact value is found in the look-up table according to the deviation between the actual and preset ridge spacings, and the crop yield is predicted from the highest yield and the yield impact value.
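A minimal sketch of the look-up-table prediction described above. The table entries, units, and function name are invented for illustration; in practice the table would come from field trials for the specific crop, as the text notes.

```python
def predict_yield(actual_spacing_cm, preset_spacing_cm, max_yield_kg):
    """Predict yield from the deviation between actual and preset ridge
    spacing, via an illustrative (invented) deviation -> penalty table.
    """
    penalty_table = [      # (max |deviation| in cm, yield penalty in kg)
        (2, 0),            # near-ideal spacing: full yield
        (5, 50),
        (10, 150),
    ]
    deviation = abs(actual_spacing_cm - preset_spacing_cm)
    for max_dev, penalty in penalty_table:
        if deviation <= max_dev:
            return max_yield_kg - penalty
    return max_yield_kg - 300          # beyond the table: largest penalty

print(predict_yield(34, 30, 1000))   # deviation 4 cm -> 950
```

The first table row also covers the "preliminary prediction" case: a deviation within 2 cm predicts that the highest yield is reachable.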
Fig. 11 is a block diagram schematically illustrating the structure of an example of an apparatus for identifying a soil surface of an agricultural field according to an embodiment of the present disclosure. As shown in fig. 11, in an embodiment of the present disclosure, the apparatus may include a processor 1110 and a memory 1120. Memory 1120 may store instructions that, when executed by processor 1110, may cause processor 1110 to perform the methods for identifying an agricultural soil surface or determining a label for an agricultural soil surface described in previous embodiments.
Examples of processor 1110 may include, but are not limited to, a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of Integrated Circuit (IC), a state machine, and the like. The processor may perform signal encoding, data processing, power control, input/output processing.
Examples of memory 1120 may include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that may be used to store information that may be accessed by a processor.
In an embodiment of the disclosure, there is also provided an agricultural machine that may include the apparatus for identifying a soil surface of an agricultural field described in accordance with the above embodiment. Examples of agricultural machines may include, but are not limited to, unmanned aerial vehicles, unmanned vehicles, rice planters, seed planters, and the like.
In an embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon instructions that, when executed by a processor, can cause the processor to perform a method for identifying an agricultural soil surface or a method for determining a label of an agricultural soil surface according to the methods described in the previous embodiments.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in practicing the disclosure.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
The above description is only an embodiment of the present disclosure, and is not intended to limit the present disclosure. Various modifications and variations of this disclosure will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the scope of the claims of the present disclosure.
Claims (19)
1. A method for determining a label for a soil surface of an agricultural field, comprising:
carrying out binarization on an image of a farmland to extract an area where vegetation is located in the image and generate a binarization image of the image;
determining a main direction of planting rows in the image;
accumulating the number of vegetation pixels in the main direction by taking the main direction and a direction perpendicular to the main direction as a coordinate system to obtain an accumulation curve;
determining the peak vertex and the width of the accumulation curve;
generating planting row rectangular strips line by line according to the peak and the width to obtain a planting row area binary image, and obtaining a non-crop area mask image according to the planting row area binary image; and
and extracting a soil surface region according to the non-crop region mask image and the image to obtain the label of the soil surface.
2. The method of claim 1, wherein the binarizing the image of the field comprises binarizing the image using at least one of:
a color space distinguishing method;
a color index distinguishing method;
a vegetation index distinguishing method.
3. The method of claim 1, wherein determining the primary direction of the planted row in the image comprises:
converting the binary image into a polar coordinate system Hough space;
returning a data list through linear Hough transform, wherein the data list comprises a peak value, an angle and a distance;
and carrying out statistical analysis on the data list to obtain the main direction of the planting rows.
4. The method of claim 3, wherein statistically analyzing the data list to derive a primary direction of planting rows comprises using at least one of:
selecting the angle with the largest occurrence frequency in the data list as the main direction of a planting row;
and selecting the first-ranked angle in the data list as the main direction of the planting row under the condition that the returned result is lower than a set threshold value.
5. The method of claim 1, further comprising: before determining the peak vertex and the width of the accumulation curve, smoothing the accumulation curve.
6. The method of claim 1, wherein accumulating the number of vegetation pixels in the principal direction to obtain an accumulation curve comprises:
rotating the binary image until the main direction is in a vertical direction;
expanding the rotated binary image into a square, wherein the length of the side of the square is equal to the length of the diagonal line of the binary image;
filling the area outside the binarized map in the square with zero values;
counting the number of non-zero pixels of the main direction column by column to generate an accumulation curve.
7. The method of claim 6, wherein obtaining the non-crop area mask map from the planting row area binary image comprises:
performing inversion operation on pixels of the binary image of the planting row area;
reversely rotating the planting row area binary image to restore the main direction to the direction before rotation;
and restoring the two-value image of the planting row area to the size before expansion to obtain the mask image of the non-crop area.
8. The method of claim 1, wherein the extracting the soil surface region from the non-crop region mask map and the image to obtain the label of the soil surface comprises:
when the non-crop area is a soil surface, taking the mask image of the non-crop area as the mark;
when the non-crop area is not completely the soil surface, extracting the soil surface area, and using the part of the non-crop area mask map corresponding to the extracted soil surface area as the label of the soil surface.
9. The method of claim 1, further comprising:
before accumulating the number of vegetation pixels along the main direction, judging whether a constraint condition is required;
if a constraint condition is required, performing the step of accumulating the number of vegetation pixels along the main direction; and
if no constraint condition is required, obtaining the non-crop area mask map directly from the binary image.
10. The method of claim 9, wherein the constraint condition comprises a minimum soil strip width.
11. A method for identifying a farmland soil surface, comprising:
acquiring an image of a farmland;
obtaining a label of the farmland soil surface in the image from the image;
pairing the image with the label associated with it to form a paired data set, and dividing the data set into a test data set and multiple groups of training data sets;
iteratively training a semantic segmentation convolutional neural network with the multiple groups of training data sets to obtain a deep learning model, wherein the data for each iteration are several pieces of image data randomly drawn from the multiple groups of training data sets, so that the parameters of the deep learning model are updated over multiple iterations; and
inputting the test data set into the deep learning model and performing parameter optimization on the deep learning model, wherein the optimized deep learning model is used for identifying the soil surface of the farmland.
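The data-handling portion of claim 11 (pairing, test/train split, random per-iteration sampling) can be sketched without the network itself. Group count, fractions, and the seed are illustrative assumptions; the actual semantic-segmentation CNN training is out of scope here:

```python
import random

def make_datasets(images, labels, test_fraction=0.2, num_groups=4, seed=0):
    """Pair images with their soil-surface labels, hold out a test
    set, and split the remainder into several training groups."""
    pairs = list(zip(images, labels))
    rng = random.Random(seed)
    rng.shuffle(pairs)
    n_test = max(1, int(len(pairs) * test_fraction))
    test_set, train = pairs[:n_test], pairs[n_test:]
    groups = [train[i::num_groups] for i in range(num_groups)]
    return test_set, groups

def sample_batch(groups, batch_size, rng):
    """Each training iteration draws a random batch of image/label
    pairs across all training groups, as claim 11 describes."""
    pool = [pair for group in groups for pair in group]
    return rng.sample(pool, min(batch_size, len(pool)))
```

Each batch returned by `sample_batch` would then drive one parameter update of the deep learning model.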
12. The method of claim 11, further comprising:
pre-training the semantic segmentation convolutional neural network with a known data set before training it with the multiple groups of training data sets.
13. The method of claim 11, wherein obtaining the label of the farmland soil surface in the image from the image comprises:
obtaining the label using the method for determining a label of a farmland soil surface according to any one of claims 1 to 10.
14. A method for planning a work path for an unmanned vehicle, comprising:
determining a farmland soil surface using the method for identifying a farmland soil surface according to any one of claims 11 to 13;
obtaining inter-ridge positions of the farmland from the determined farmland soil surface; and
determining the work path of the unmanned vehicle from the inter-ridge positions.
15. The method of claim 14, wherein determining the work path of the unmanned vehicle from the inter-ridge positions comprises:
determining the distance between soil surface rows from the inter-ridge positions; and
adjusting the wheel track of the unmanned vehicle to an integer multiple of the distance.
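The track adjustment of claim 15 is a small piece of arithmetic: snap the wheel track to a multiple of the row distance so both wheels run between ridges. Rounding to the *nearest* multiple is an assumption here; the claim only requires an integer multiple:

```python
def adjust_wheel_track(current_track, row_distance):
    """Return the integer multiple of the soil-surface row distance
    nearest to the current wheel track (at least one row distance)."""
    if row_distance <= 0:
        raise ValueError("row distance must be positive")
    multiple = max(1, round(current_track / row_distance))
    return multiple * row_distance
```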
16. A method for predicting crop yield, comprising:
determining a farmland soil surface using the method for identifying a farmland soil surface according to any one of claims 11 to 13;
obtaining the ridge spacing from the determined farmland soil surface;
comparing the ridge spacing with a preset ridge spacing; and
predicting the crop yield from the comparison result.
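One plausible reading of claim 16 is sketched below. The linear scaling and the tolerance band are purely illustrative assumptions; the claim only states that yield is predicted from the spacing comparison, without fixing the model:

```python
def predict_yield(measured_spacing, preset_spacing, preset_yield, tolerance=0.1):
    """Compare measured ridge spacing with the preset spacing and
    scale an expected per-area yield accordingly: wider-than-planned
    spacing means fewer rows per unit area, hence lower yield."""
    ratio = measured_spacing / preset_spacing
    if abs(ratio - 1.0) <= tolerance:
        return preset_yield          # spacing as planned
    return preset_yield / ratio      # scale inversely with spacing
```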
17. An apparatus, comprising:
a processor; and
a memory storing instructions which, when executed by the processor, enable the processor to perform at least one of:
the method for determining a label of a farmland soil surface according to any one of claims 1 to 10;
the method for identifying a farmland soil surface according to any one of claims 11 to 13;
the method for planning a work path for an unmanned vehicle according to claim 14 or 15; and
the method for predicting crop yield according to claim 16.
18. An agricultural machine, comprising the apparatus of claim 17.
19. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, enable the processor to perform at least one of:
the method for determining a label of a farmland soil surface according to any one of claims 1 to 10;
the method for identifying a farmland soil surface according to any one of claims 11 to 13;
the method for planning a work path for an unmanned vehicle according to claim 14 or 15; and
the method for predicting crop yield according to claim 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010537234.5A CN113807131A (en) | 2020-06-12 | 2020-06-12 | Method, device, agricultural machine and medium for identifying farmland soil surface |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113807131A true CN113807131A (en) | 2021-12-17 |
Family
ID=78944079
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113807131A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04336383A (en) * | 1991-05-13 | 1992-11-24 | Kubota Corp | Crops array detection device for operation machine |
US20140254861A1 (en) * | 2013-03-08 | 2014-09-11 | Raven Industries, Inc. | Row guidance parameterization with hough transform |
CN108010033A (en) * | 2016-11-02 | 2018-05-08 | 哈尔滨派腾农业科技有限公司 | A kind of farmland scene image collection and processing method |
WO2019176844A1 (en) * | 2018-03-15 | 2019-09-19 | ヤンマー株式会社 | Work vehicle and crop row recognition program |
US20200021716A1 (en) * | 2018-07-11 | 2020-01-16 | Raven Industries, Inc. | Adaptive color transformation to aid computer vision |
Non-Patent Citations (1)
Title |
---|
曹倩 (Cao Qian) et al.: "Multi-target straight line detection method for dry fields based on machine vision", Transactions of the Chinese Society of Agricultural Engineering (《农业工程学报》), vol. 26, 31 October 2010 (2010-10-31), pages 187-191 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114485612A (en) * | 2021-12-29 | 2022-05-13 | 广州极飞科技股份有限公司 | Route generation method and device, unmanned working vehicle, electronic device and storage medium |
CN114485612B (en) * | 2021-12-29 | 2024-04-26 | 广州极飞科技股份有限公司 | Route generation method and device, unmanned operation vehicle, electronic equipment and storage medium |
CN114549960A (en) * | 2022-02-28 | 2022-05-27 | 长光禹辰信息技术与装备(青岛)有限公司 | Ridge-ridge-crop broken-ridge point identification method and related device |
CN115292334A (en) * | 2022-10-10 | 2022-11-04 | 江西电信信息产业有限公司 | Intelligent planting method and system based on vision, electronic equipment and storage medium |
CN115641504A (en) * | 2022-10-26 | 2023-01-24 | 南京农业大学 | Automatic remote sensing extraction method for field boundary based on crop phenological characteristics and decision tree model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113807131A (en) | Method, device, agricultural machine and medium for identifying farmland soil surface | |
Zhang et al. | Automated robust crop-row detection in maize fields based on position clustering algorithm and shortest path method | |
García-Santillán et al. | Curved and straight crop row detection by accumulation of green pixels from images in maize fields | |
Kaiser et al. | Learning aerial image segmentation from online maps | |
Romeo et al. | Crop row detection in maize fields inspired on the human visual perception | |
CN109165549B (en) | Road identification obtaining method based on three-dimensional point cloud data, terminal equipment and device | |
CN103400151B (en) | The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method | |
CN109726627B (en) | Neural network model training and universal ground wire detection method | |
US7653218B1 (en) | Semi-automatic extraction of linear features from image data | |
Ünsalan et al. | A system to detect houses and residential street networks in multispectral satellite images | |
Kanagaraj et al. | Deep learning using computer vision in self driving cars for lane and traffic sign detection | |
Hu et al. | Road network extraction and intersection detection from aerial images by tracking road footprints | |
CN104766058A (en) | Method and device for obtaining lane line | |
Benarchid et al. | Building extraction using object-based classification and shadow information in very high resolution multispectral images, a case study: Tetuan, Morocco | |
US8538071B2 (en) | System and method for target separation of closely spaced targets in automatic target recognition | |
CN113850129A (en) | Target detection method for rotary equal-variation space local attention remote sensing image | |
CN103456022A (en) | High-resolution remote sensing image feature matching method | |
US20220044072A1 (en) | Systems and methods for aligning vectors to an image | |
Zhao et al. | Rapid extraction and updating of road network from airborne LiDAR data | |
CN111414826A (en) | Method, device and storage medium for identifying landmark arrow | |
Li et al. | Road extraction algorithm based on intrinsic image and vanishing point for unstructured road image | |
CN106558051A (en) | A kind of improved method for detecting road from single image | |
CN104899857A (en) | Camera calibration method and apparatus using a color-coded structure | |
CN116704446B (en) | Real-time detection method and system for foreign matters on airport runway pavement | |
Sun et al. | Knowledge-based automated road network extraction system using multispectral images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||