CN114723818A - Seedling line identification method and device based on deep learning and agricultural machine - Google Patents

Seedling line identification method and device based on deep learning and agricultural machine

Info

Publication number
CN114723818A
Authority
CN
China
Prior art keywords
seedling
picture
line
seedling line
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210333683.7A
Other languages
Chinese (zh)
Inventor
常志中
蒋相哲
王香珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Huida Technology Development Co ltd
Original Assignee
Heilongjiang Huida Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heilongjiang Huida Technology Development Co ltd
Priority to CN202210333683.7A
Publication of CN114723818A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 - Validation; Performance evaluation; Active pattern learning techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]

Abstract

The embodiment of the application provides a seedling line identification method and device based on deep learning and an agricultural machine. The method comprises the following steps: acquiring a first picture and generating a second picture according to the first picture, wherein the first picture is a picture taken by a camera of the agricultural machine; processing the second picture by using a pre-trained neural network model to generate seedling lines; calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling lines to obtain a first seedling line and a second seedling line, wherein the first seedling line and the second seedling line are the two most central seedling lines in the first picture; and generating a leading line according to the first seedling line and the second seedling line, wherein the abscissa of the leading line is the mean value of the abscissas of the first seedling line and the second seedling line. The scheme provided by the application can balance the speed and the accuracy of seedling line identification.

Description

Seedling line identification method and device based on deep learning and agricultural machine
Technical Field
The embodiment of the application relates to the field of agriculture, in particular to a seedling line identification method and device based on deep learning and an agricultural machine.
Background
With the development of science and technology, agricultural machinery is becoming increasingly intelligent. Automatic navigation of agricultural machinery is a key technology of precision agriculture, and when agricultural machinery operates in the field, the identification of seedling lines is of great importance.
In the prior art, seedling line identification methods are complex and time-consuming, which reduces the speed of seedling line identification. Therefore, how to provide a seedling line identification method that balances both speed and accuracy is a technical problem to be urgently solved.
Disclosure of Invention
The application provides a seedling line identification method and device based on deep learning and an agricultural machine, which can balance the speed and accuracy of seedling line identification.
In a first aspect, a method for seedling line identification based on deep learning is provided, which includes: acquiring a first picture, and generating a second picture according to the first picture, wherein the first picture is a picture shot by a camera of an agricultural machine; processing the second picture by using a pre-trained neural network model to generate a seedling line; calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling lines to obtain a first seedling line and a second seedling line, wherein the first seedling line and the second seedling line are respectively two most central seedling lines in the first picture; and generating a leading line according to the first seedling line and the second seedling line, wherein the abscissa of the leading line is the mean value of the abscissa of the first seedling line and the abscissa of the second seedling line.
The scheme provided by the application is based on a pre-trained neural network model; it can correctly and rapidly identify seedling lines while the agricultural machine is driving and output a leading line according to the identified seedling lines, so that the agricultural machine can drive along the output leading line and damage to crops can be avoided.
With reference to the first aspect, in some possible implementations, the generating a leading line according to the first seedling line and the second seedling line includes: when the number of crop strips between the two wheels of the agricultural machine is odd, generating the leading line located in the middle of a crop strip, wherein the first seedling line and the second seedling line are two seedling lines belonging to the same crop strip; when the number of crop strips between the two wheels of the agricultural machine is even, generating the leading line located in the middle of the ridge between two adjacent crop strips, wherein the first seedling line and the second seedling line are the two adjacent seedling lines of the adjacent crop strips that face the ridge. The position of the generated leading line is set according to the number of crop strips between the two wheels of the agricultural machine, so that a wrong leading line caused by an offset of the camera of the agricultural machine or of the agricultural machine itself can be avoided.
With reference to the first aspect, in some possible implementations, the generating a second picture according to the first picture includes: reducing the first picture to the second picture according to a first proportion; the calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling line comprises: after the seedling line is generated, restoring the coordinates of the seedling line in the second picture to the coordinates of the seedling line in the first picture according to a second proportion, wherein the first proportion corresponds to the second proportion. In this way, the reduced picture is close to the original picture in proportion, and the deviation of the seedling line identification result can be reduced by restoring according to the second proportion corresponding to the first proportion; meanwhile, using the reduced second picture for calculation reduces the amount of computation in the seedling line identification process and saves computing resources.
With reference to the first aspect, in some possible implementations, before training the neural network model, the method further includes: acquiring a third picture; adding a label to the third picture, and converting the third picture added with the label into a fourth picture, wherein the fourth picture is a gray scale image of the third picture added with the label, and the label corresponds to the spatial position of the seedling line in the third picture. In this way, the coordinate values of the seedling line in the fourth picture can be used as a reference of the coordinate values of the seedling line output by the neural network model in the training process.
With reference to the first aspect, in some possible implementations, before the processing the second picture using a pre-trained neural network model to generate a seedling line, the method further includes: training the neural network model to obtain the pre-trained neural network model; wherein the training the neural network model comprises: acquiring a third picture, and reducing the third picture to a fifth picture according to a first proportion; extracting the features of the fifth picture by using a backbone network to obtain a first feature map; converting the first feature map into coordinates of the seedling line; and modifying the neural network model using a loss function.
With reference to the first aspect, in some possible implementations, the backbone network includes: the first input layer is configured to process the fifth picture by adopting a convolution module to obtain a second feature map; a first intermediate layer configured to process the second feature map with a depth separable convolution module to obtain a third feature map; the first output layer is configured to process the third feature map by adopting a convolution module to obtain a first feature map; a second intermediate layer configured to process the fourth feature map with a depth separable convolution module to obtain a fifth feature map; a second output layer configured to process the fifth feature map with a convolution module to obtain a sixth feature map; a third intermediate layer configured to process the seventh feature map with a depth separable convolution module to obtain an eighth feature map; a third output layer configured to process the eighth feature map with a convolution module to obtain a ninth feature map. The backbone network structure can accelerate the running speed of the neural network model, can be deployed on embedded equipment and can be compatible with the embedded equipment.
With reference to the first aspect, in some possible implementations, the first input layer, the first intermediate layer, and the first output layer are configured to infer the seedling line; the second intermediate layer, the second output layer, the third intermediate layer and the third output layer are configured to participate in auxiliary training for seedling line identification.
With reference to the first aspect, in some possible implementations, the backbone network includes 5 downsampling operations. In this way, the running speed of the neural network model is further increased.
With reference to the first aspect, in some possible implementations, the first middle layer includes 3 depth-separable convolution modules, and the second middle layer and the third middle layer respectively include 2 depth-separable convolution modules.
With reference to the first aspect, in some possible implementations, the depth separable convolution module includes: a feature augmentation layer configured to augment the number of feature maps by a factor of 6 using a 1 x 1 convolution kernel; a depth-wise convolutional layer configured to perform convolution on a single channel; a point-wise convolutional layer configured to traverse all channels with a 1 x 1 convolution kernel; and a direct connection layer configured to connect the input and the output using a residual structure. Through the depth separable convolution module, the amount of computation can be reduced and the operation speed of the neural network model can be improved while more details are obtained.
With reference to the first aspect, in some possible implementations, the converting the first feature map into the coordinates of the seedling line includes: using a 1 x 1 convolution kernel to perform dimensionality reduction on the first feature map to obtain a tenth feature map; extracting three groups of feature points from the tenth feature map by using the first route, the second route and the third route respectively; splicing the three groups of feature points with n fully-connected layers to obtain n intermediate layers; and connecting the n intermediate layers with the n output layers in one-to-one correspondence to obtain an output matrix of n x h x (m+1), wherein n corresponds to the number of the seedling lines, h corresponds to the ordinate of the seedling lines, and m corresponds to the abscissa of the seedling lines. Reducing the dimensionality of the first feature map to obtain the tenth feature map improves the inference speed of the neural network model without losing noticeable precision, and extracting features using three different routes yields richer features.
In combination with the first aspect, in some possible implementations, n is 6.
With reference to the first aspect, in some possible implementations, the extracting, by using the first route, the second route, and the third route, three groups of feature points from the tenth feature map respectively includes: extracting features from the ordinate direction first and then from the abscissa direction, based on the first route; extracting features from the abscissa direction first and then from the ordinate direction, based on the second route; and extracting features using full convolution, based on the third route.
With reference to the first aspect, in some possible implementations, the loss function satisfies:
L_total = L_cls + α·L_str + β·L_seg

L_str = L_sim + L_shp

(The expressions for L_cls, L_sim and L_shp are reproduced as images in the original publication.)

wherein L_cls is the classification loss function, L_total is the loss function, L_str is the structural loss function, L_seg is the segmentation loss function, L_sim is the adjacency loss function, L_shp is the shape loss function, α is the scaling parameter of L_str, β is the scaling parameter of L_seg, P_i,j corresponds to the predicted value of the seedling line coordinates, T_i,j corresponds to the actual value of the seedling line coordinates, L_CE is a cross-entropy loss function, and Loc_i,j is the coordinate of a seedling point in the seedling line. The classification loss function L_cls represents the classification loss of the abscissa of the seedling point whose ordinate is j on the i-th seedling line; the adjacency loss function L_sim can be used to constrain the positions of adjacent seedling points on the same seedling line and keep the seedling point coordinates continuous; the shape loss function L_shp can be used to constrain the shape of the seedling line; and the segmentation loss function L_seg can be used for auxiliary training of the neural network model.
In a second aspect, there is provided an apparatus for seedling line identification, comprising means for performing the method of the first aspect or any possible implementation manner of the first aspect.
In a third aspect, there is provided an agricultural machine comprising: the camera is used for shooting an image of a working land in the driving process of the agricultural machine; a processor configured to control the camera and perform the method of the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium comprising a computer program which, when run on a computer device, causes a processing unit in the computer device to perform the instructions of the first aspect or any possible implementation of the first aspect.
In a fifth aspect, a computer program product is provided, which comprises computer program instructions for causing a computer to perform the method of the first aspect or the implementation manners of the first aspect.
A sixth aspect provides a computer program which, when run on a computer, causes the computer to perform the method of the first aspect or any possible implementation of the first aspect.
Drawings
FIG. 1 is an exemplary diagram of an application scenario according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method of seedling line identification in an embodiment of the present application;
FIG. 3 is a schematic view of an identified seedling line according to an embodiment of the present application;
FIG. 4 is a schematic illustration of an output leading line according to an embodiment of the present application;
FIG. 5 is a schematic illustration of an output leading line according to an embodiment of the present application;
FIG. 6 is a schematic illustration of a leading line offset according to an embodiment of the present application;
FIG. 7 is a schematic illustration of an output leading line according to an embodiment of the present application;
FIG. 8 is a schematic illustration of a tagged picture according to an embodiment of the present application;
FIG. 9 is a diagram illustrating training a neural network model according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a depth separable convolution module according to an embodiment of the present application;
FIG. 11 is a schematic representation of the transformation of a feature map into seedling line coordinates according to an embodiment of the present application;
FIG. 12 is a flowchart of the conversion of a first feature map into seedling line coordinates according to an embodiment of the present application;
fig. 13 is a schematic diagram of a route for extracting feature points according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiments of the application can be applied to field operations: the positions of the seedling lines of a working land are identified and a leading line is output, so that the agricultural machine drives along the leading line.
Fig. 1 is an exemplary diagram of an application scenario of the technical solution provided in the embodiment of the present application. As shown in fig. 1, the application scenario may include at least one farm machine 110 and a camera 120. During the operation and driving process of the agricultural machine 110, the camera 120 takes pictures to collect images of the operation land, and the positions of the seedling lines of the operation land are identified according to the collected images of the operation land, wherein the operation land may include one or more crop belts, and one crop belt may correspond to two seedling lines.
In the driving process of the agricultural machine 110, a plurality of pictures are generally captured by using the camera 120 on the agricultural machine, and then the position of the seedling line in the pictures is identified by using image identification technologies such as a neural network. The conventional seedling line identification method has low operation speed, needs to occupy more computing resources, and is difficult to consider both the accuracy and the operation speed.
The seedling line recognition algorithm in this application is based on a pre-trained neural network model: seedling lines can be correctly recognized from the obtained pictures while the agricultural machine is driving, and a leading line is output according to the seedling lines, so that the algorithm balances both the accuracy and the speed of seedling line recognition.
The method for identifying the seedling line of the present application will be described in detail below with reference to fig. 2.
Fig. 2 is a schematic flowchart of a method for identifying seedling lines based on deep learning according to an embodiment of the present application. As shown in fig. 2, the method 200 includes:
s210, a first picture is obtained, and a second picture is generated according to the first picture, where the first picture is a picture taken by the camera 120 of the agricultural machinery 110.
The first picture may be a picture of the working area taken by the camera 120 of the agricultural machine 110, and the second picture is obtained by processing the first picture; for example, the second picture is a picture obtained by reducing the first picture according to a certain proportion. The first picture and the second picture may both include one or more crop strips. The second picture may be obtained by processing the first picture with an image processing tool, such as OpenCV.
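As an illustration only, the reduction from the first picture to the second picture could be done with an image processing tool such as OpenCV; the sizes below are the example sizes used later in this description, and the function and variable names are assumptions made for the sketch.

```python
import cv2

def make_second_picture(first_picture_path):
    # Read the camera frame (first picture) and shrink it to the example
    # network input size (second picture); cv2.resize takes (width, height).
    first_picture = cv2.imread(first_picture_path)           # e.g. 720 x 1280 x 3
    second_picture = cv2.resize(first_picture, (400, 288))   # 288 x 400 x 3
    return first_picture, second_picture
```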
And S220, processing the second picture by using a pre-trained neural network model to generate a seedling line.
The second picture is input into the pre-trained neural network model to obtain the target pixels on the second picture, and the seedling lines along which the agricultural machine 110 is driving are correctly identified and generated according to the target pixels. The generated seedling lines may include the following information: the number of seedling lines and the coordinates (ordinate and abscissa) of the seedling points in each seedling line.
Optionally, the second picture may also be input into a pre-trained neural network model in the form of data, for example, the data is obtained after the first picture is processed by the OpenCV tool, and the data includes information in the compressed first picture.
And S230, calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling lines to obtain a first seedling line and a second seedling line, wherein the first seedling line and the second seedling line are two most central seedling lines in the first picture respectively.
FIG. 3 is a schematic view of a seedling line identified in an embodiment of the present application. As shown in fig. 3, 6 seedling lines are arranged along the abscissa direction, i.e. the x direction, and are numbered 1, 2, 3, 4, 5 and 6 respectively. The first seedling line and the second seedling line are the two seedling lines at the center of the 6 seedling lines shown in fig. 3, i.e. the seedling lines numbered 3 and 4, respectively. The number of crop strips included in the first picture may vary, and the number of seedling lines output by the pre-trained neural network model during operation of the agricultural machine 110 may also vary accordingly, but the first seedling line and the second seedling line are always the two seedling lines at the center of the identified and output seedling lines.
S240, generating a leading line according to the first seedling line and the second seedling line, wherein the abscissa of the leading line is the mean value of the abscissa of the first seedling line and the abscissa of the second seedling line.
As shown in fig. 3, the first seedling line and the second seedling line may be seedling lines numbered 3 and 4 in the drawing, respectively, the leading line is located between the first seedling line and the second seedling line, and the agricultural machinery 110 may travel according to the generated leading line.
Optionally, in an embodiment of the present application, generating a leading line according to the first seedling line and the second seedling line includes: when the number of crop strips between the two wheels of the agricultural machine is odd, generating the leading line located in the middle of a crop strip, wherein the first seedling line and the second seedling line are two seedling lines belonging to the same crop strip;
when the number of crop strips between the two wheels of the agricultural machine is even, generating the leading line located in the middle of the ridge between two adjacent crop strips, wherein the first seedling line and the second seedling line are the two adjacent seedling lines of the adjacent crop strips that face the ridge.
FIG. 4 is a schematic diagram of an output leading line in an embodiment of the present application. As shown in fig. 4, one crop strip, that is, two seedling lines, is included between the two wheels of the agricultural machine 110, and the two seedling lines may be the first seedling line and the second seedling line, respectively. In this case, the generated leading line is located in the middle of the crop strip.
FIG. 5 is a schematic diagram of an output leading line in an embodiment of the present application. As shown in fig. 5, three crop strips, that is, 6 seedling lines, are included between the two wheels of the agricultural machine 110, and in this case the first seedling line and the second seedling line may be the seedling lines numbered 3 and 4, respectively. The generated leading line is located in the middle of the crop strip.
FIG. 6 is a schematic illustration of a leading line offset in an embodiment of the present application. As shown in fig. 6, when the camera 120 is shifted, the identified seedling lines may be shifted to the left. That is, the recognition result changes from fig. 5 to fig. 6: the seedling line numbered 1 in the original fig. 5 becomes the seedling line numbered 2' in fig. 6, and a seedling line that was not recognized in the original fig. 5 is recognized in fig. 6. In this case, the identified first seedling line and second seedling line do not belong to the same crop strip, and the generated leading line is located in the middle of the ridge between two adjacent crop strips, differing from the correct leading line by the distance of one ridge. The agricultural machine 110 may deviate when it drives along the wrong leading line, causing damage to the crop strips.
By setting the number of crop strips between the two wheels of the agricultural machine to be odd, the generated leading line is located in the middle of a crop strip, so that a leading line offset caused by a camera offset or the like can be avoided, and deviation of the agricultural machine is avoided.
FIG. 7 is a schematic illustration of an output leading line in an embodiment of the present application. As shown in fig. 7, two crop strips, i.e. four seedling lines, are included between the two wheels of the agricultural machine 110; in this case, the first seedling line and the second seedling line are the seedling lines marked 2 and 3 in fig. 7, respectively, and they belong to different crop strips and are both close to the ridge. In this case, the generated leading line is located on the ridge between the two adjacent crop strips.
Optionally, in an embodiment of the present application, generating the second picture according to the first picture includes: reducing the first picture to a second picture according to a first proportion; calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling lines includes: after the seedling line is generated, restoring the coordinates of the seedling line on the second picture to the coordinates of the seedling line on the first picture according to a second proportion, where the first proportion corresponds to the second proportion.
The first picture may be 720 x 1280 (in pixels) and may be reduced to a second picture of 288 x 400 according to a first proportion; the reduced second picture has a proportion similar to that of the first picture, and the amount of computation in subsequent calculations may be about 1/9 of the original. After the coordinates of the seedling points of the seedling lines are generated, the coordinates of the seedling points on each seedling line can be restored to coordinates in the first picture according to the second proportion, so that the coordinates of the seedling points in the first picture are obtained and are closer to the true values. Then, a leading line is generated and output according to the coordinate values, in the first picture, of the seedling points of the first seedling line and the second seedling line.
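As an illustration only, restoring the seedling point coordinates from the second picture to the first picture according to the second proportion could be sketched as follows; the scale factors are derived from the example sizes above, and the names are assumptions made for the sketch.

```python
# Second proportion derived from the example sizes 720 x 1280 and 288 x 400.
SCALE_Y = 720 / 288    # ordinate direction
SCALE_X = 1280 / 400   # abscissa direction

def restore_coordinates(points_in_second_picture):
    # points are (x, y) seedling points predicted on the 288 x 400 picture;
    # the result is expressed in first-picture (720 x 1280) coordinates.
    return [(x * SCALE_X, y * SCALE_Y) for (x, y) in points_in_second_picture]
```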
Optionally, in an embodiment of the present application, before training the neural network model, the method further includes: acquiring a third picture; adding a label to the third picture, and converting the third picture added with the label into a fourth picture, wherein the fourth picture is a gray scale image of the third picture added with the label, and the label corresponds to the spatial position of the seedling line in the third picture.
The third picture may be an original picture used for training the neural network model; for example, the size of the third picture is 720 x 1280 x 3. The third picture is labeled using a labeling tool, and the labeled third picture is converted into a fourth picture, which is a grayscale picture with a size of 720 x 1280 x 1.
Fig. 8 is a schematic diagram of a tagged picture according to an embodiment of the present application. As shown in fig. 8, the label values of the seedling lines are 1, 2,3,4,5 and 6 in sequence from left to right along the x direction, and the black background value in the figure is 0. That is, in the fourth picture, i.e., in fig. 8, when there is a seedling line, a label value is given to the seedling line, and if there is no seedling line, the value here is 0.
The neural network model in the embodiment of the present application represents seedling line identification as a classification problem of n x h x m, where n represents the number of seedling lines, h represents the ordinate of the seedling point on the seedling line, i.e. the position in the y direction, and m represents the abscissa of the seedling point on the seedling line, i.e. the position in the x direction. For example, in the seedling line identification process, 36 points may be selected at equal intervals in the ordinate direction from the picture with the size of 720 x 1280, and 100 selectable coordinate values may be used in the abscissa direction; when there is no seedling point, the abscissa value is 0. Based on this, the neural network model can identify at most 6 seedling lines, and the output result is a matrix of 6 × 36 × 101. The classification result of the model can be obtained according to the following formula (1):
P_i,j = f_i,j(X), i ∈ [1, n], j ∈ [1, h]    (1)

wherein each (i, j) in the output of formula (1) corresponds to a 1 × 101 vector, and the abscissa corresponding to the largest of the 1 × 101 values is the final classification result.
The spatial relationship of each seedling line is judged according to the value of i, so that the two most central seedling lines among the identified seedling lines can be found; the abscissas of the two most central seedling lines are averaged to obtain the coordinate values of 36 points, and a leading line can thus be generated.
Optionally, in this embodiment of the present application, more points may also be selected in the ordinate direction, and the specific setting may be set according to an actual requirement, which is not limited in this embodiment of the present application.
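As an illustration only, decoding the 6 × 36 × 101 output matrix into seedling points and a leading line could be sketched as follows; the convention that column 0 means "no seedling point", the scaling of class indices to pixel abscissas, and the assumption that all 6 seedling lines are detected are simplifications made for the sketch.

```python
import numpy as np

def decode_and_make_leading_line(output_matrix, img_width=1280):
    # output_matrix has shape (n, h, m + 1), here (6, 36, 101).
    n, h, _ = output_matrix.shape
    cls = output_matrix.argmax(axis=2)                  # n x h classification result
    # abscissa of each seedling point; class 0 is treated as "no seedling point"
    xs = np.where(cls > 0, cls / 100.0 * img_width, np.nan)
    # order the seedling lines from left to right by their mean abscissa
    order = np.argsort(np.nanmean(xs, axis=1))
    first_line, second_line = order[n // 2 - 1], order[n // 2]  # two most central lines
    # leading line: per-row mean of the two central lines' abscissas (36 values)
    return (xs[first_line] + xs[second_line]) / 2.0
```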
Optionally, in an embodiment of the present application, before processing the second picture using the pre-trained neural network model to generate the seedling line, the method for seedling line identification further includes: the neural network model is trained to obtain a pre-trained neural network model.
Fig. 9 is a schematic diagram of training a neural network model according to an embodiment of the present application, and as shown in fig. 9, the following steps may be included.
And S310, acquiring a third picture, and reducing the third picture to a fifth picture according to the first proportion.
The third picture, of size 720 x 1280, may be reduced according to the first proportion to a fifth picture of size 288 x 400; the proportion of the fifth picture is close to that of the third picture, and this may reduce the amount of computation to about 1/9 of that for the third picture.
The third picture may be a pre-stored picture containing the work parcel, and may be used in a training process of the neural network model.
And S320, extracting the features of the fifth picture by using the backbone network to obtain a first feature map.
The backbone network is used for extracting features, specifically for extracting the features of the fifth picture.
Optionally, the backbone network may include: a first input layer input1, a first intermediate layer stage1, a first output layer out1, a second intermediate layer stage2, a second output layer out2, a third intermediate layer stage3, and a third output layer out3.
The first input layer input1 is configured to process the fifth picture using a convolution module ConV-Block, resulting in a second feature map. For example, the size of the fifth picture is 288 × 400 × 3, the input of input1 is 288 × 400 × 3, and after convolution, a second feature map is obtained, the size of which is 144 × 200 × 16.
The first intermediate layer stage1 is configured to process the second feature map with a depth separable convolution module Deepwise-Block, resulting in a third feature map. For example, the size of the second feature map is 144 x 200 x 16 and the size of the third feature map is 36 x 100 x 8, wherein the first intermediate layer stage1 may include three depth separable convolution modules Deepwise-Block.
The first output layer out1 is configured to process the third feature map with a convolution module ConV-Block, resulting in the first feature map. For example, the size of the third feature map is 36 × 100 × 8, and the size of the first feature map is 36 × 52 × 44.
Optionally, the first input layer input1, the first intermediate layer stage1 and the first output layer out1 are used for seedling line inference, that is, the second intermediate layer stage2, the second output layer out2, the third intermediate layer stage3 and the third output layer out3 are not used for seedling line inference, but the second intermediate layer stage2, the second output layer out2, the third intermediate layer stage3 and the third output layer out3 participate in auxiliary training of the seedling line identification process.
The second intermediate layer stage2 is configured to process the fourth feature map using a depth separable convolution module Deepwise-Block, resulting in a fifth feature map. For example, the size of the fourth feature map is 36 x 100 x 12 and the size of the fifth feature map is 18 x 50 x 36, wherein the second intermediate layer stage2 may include two depth separable convolution modules Deepwise-Block.
The second output layer out2 is configured to process the fifth feature map with the convolution module ConV-Block resulting in a sixth feature map. For example, the size of the fifth feature map is 18 × 50 × 36, and the size of the sixth feature map is 18 × 26 × 89.
The third intermediate layer stage3 is configured to process the seventh feature map with a depth separable convolution module Deepwise-Block, resulting in an eighth feature map. For example, the size of the seventh feature map is 9 × 25 × 56 and the size of the eighth feature map is 9 × 25 × 112, wherein the third intermediate layer stage3 may include two depth separable convolution modules Deepwise-Block.
The third output layer out3 is configured to process the eighth feature map with a convolution module ConV-Block, resulting in a ninth feature map. For example, the size of the eighth feature map is 9 × 25 × 112, and the size of the ninth feature map is 9 × 13 × 448.
Optionally, the backbone network structure includes 5 downsampling operations; by using 5 downsampling operations, the operation speed of the neural network model is further increased.
The output of the first output layer out1 is used for seedling line reasoning, and the outputs of the second output layer out2 and the third output layer out3 are used for auxiliary training, so that the accuracy of the model can be improved, and the calculation speed of the model can be increased. Table 1 shows specific parameters for feature extraction using the backbone network structure.
TABLE 1 parameters of backbone networks
(Table 1 is reproduced as an image in the original publication.)
In Table 1, Input represents the size of the input feature map, Operator represents the name of the convolution module used, t represents the number of times the corresponding convolution module is run, c is the number of channels, n is the size of the convolution kernel, and s is the initial stride, where the stride is 1 from the second run onward.
FIG. 10 is a schematic diagram of a depth separable convolution module according to an embodiment of the present application. As shown in FIG. 10, the depth separable convolution module, which may also be referred to as a Deepwise-Block, includes a feature extension layer, a depth-wise convolution layer, a point-wise convolution layer, and a direct connection layer.
The feature expansion layer is configured to expand the number of feature maps by 6 times the original number using a 1 x 1 convolution kernel to obtain more detail.
The depth-wise convolutional layer, which may also be referred to as a Deepwise convolutional layer, is configured to perform convolution on a single channel, and may reduce the amount of computation to 1/c times of the normal convolution operation while extracting features, where c is the number of channels.
The point-by-point convolutional layer may also be referred to as a Pointwise convolutional layer, and the point-by-point convolutional layer is configured to traverse on all channels through 1 × 1 convolution kernels, so that the global information of the feature map can be related to make up for the loss caused by convolution in a single channel.
The direct connection layer can also be called as a ShortCut layer, and is configured to connect input and output by using a residual structure so as to reach a deeper network structure, so that the neural network model obtains stronger characterization capability.
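As an illustration only, a Deepwise-Block of the kind described above could be sketched with TensorFlow/Keras as follows; the 3 x 3 depth-wise kernel size, the activations, and the condition under which the ShortCut connection is applied are assumptions made for the sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def deepwise_block(x, out_channels, stride=1, expansion=6):
    in_channels = x.shape[-1]
    # feature expansion layer: 1 x 1 convolution, 6 times the channels
    y = layers.Conv2D(in_channels * expansion, 1, padding="same", activation="relu")(x)
    # depth-wise convolutional layer: convolution on each single channel
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same", activation="relu")(y)
    # point-wise convolutional layer: 1 x 1 convolution traversing all channels
    y = layers.Conv2D(out_channels, 1, padding="same")(y)
    # direct connection (ShortCut) layer: residual structure connecting input and output
    if stride == 1 and in_channels == out_channels:
        y = layers.Add()([x, y])
    return y

# Example: one block applied to a 144 x 200 x 16 feature map (size taken from the text).
block_input = tf.keras.Input(shape=(144, 200, 16))
block_output = deepwise_block(block_input, out_channels=16)
```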
And S330, converting the first feature map into the coordinates of the seedling line.
Fig. 11 is a schematic diagram of converting a feature map into seedling line coordinates according to an embodiment of the present application, and fig. 12 is a flowchart of converting a first feature map into seedling line coordinates according to an embodiment of the present application. Alternatively, in an embodiment of the present application, as shown in fig. 12, converting the first feature map into coordinates of the seedling line includes the following steps.
And S610, using a 1-by-1 convolution kernel to reduce the dimension of the first feature map to obtain a tenth feature map.
After the first output layer out1, the first feature map has a size of, for example, 36 × 52 × 44. The first feature map can be subjected to dimensionality reduction to obtain a tenth feature map with a size of 36 x 52 x 8. Reducing the number of feature maps from 44 to 8 improves the inference speed of the model without losing noticeable precision.
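As an illustration only, the dimensionality reduction in S610 could be sketched as follows; the channel count of 8 follows the example above.

```python
import tensorflow as tf
from tensorflow.keras import layers

# 1 x 1 convolution reducing the first feature map (36 x 52 x 44 in the
# example) to the tenth feature map (36 x 52 x 8).
first_feature_map = tf.keras.Input(shape=(36, 52, 44))
tenth_feature_map = layers.Conv2D(8, 1)(first_feature_map)
```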
S620, extracting three groups of feature points from the tenth feature map by using the first route, the second route and the third route respectively.
Fig. 13 is a schematic diagram of a route for extracting feature points according to an embodiment of the present application. As shown in fig. 13, the extraction of the feature points may be performed through three different routes.
In the first route Conv1, features are extracted from the ordinate direction first and then from the abscissa direction. In the first route Conv1, the convolution kernels may be expressed as: kernel_size=(36,1), strides=(36,1), filters=32, followed by kernel_size=(1,52), strides=(1,52), filters=32. From the feature map of size 36 × 52 × 8, 32 feature points can be obtained through the first route Conv1.
In the second route Conv2, features are extracted from the abscissa direction first and then from the ordinate direction. In the second route Conv2, the convolution kernels may be expressed as: kernel_size=(1,52), strides=(1,52), filters=32, followed by kernel_size=(36,1), strides=(36,1), filters=32. Through the second route Conv2, 32 feature points can be obtained from the feature map of size 36 × 52 × 8.
In the third route Conv3, features are extracted using a full convolution. In the third route Conv3, the convolution kernel may be expressed as: kernel_size=(36,52), strides=(36,52), filters=32. Through the third route Conv3, 32 feature points can be obtained from the feature map of size 36 × 52 × 8.
The feature points obtained from the three routes are concatenated, finally yielding 96 feature points. This way of extracting feature points obtains richer features and achieves a higher model running speed.
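As an illustration only, the three routes could be sketched with TensorFlow/Keras as follows, using the kernel sizes and strides listed above; padding and activation choices are assumptions made for the sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

tenth_feature_map = tf.keras.Input(shape=(36, 52, 8))

# Route Conv1: ordinate direction first, then abscissa direction.
r1 = layers.Conv2D(32, (36, 1), strides=(36, 1))(tenth_feature_map)
r1 = layers.Conv2D(32, (1, 52), strides=(1, 52))(r1)
# Route Conv2: abscissa direction first, then ordinate direction.
r2 = layers.Conv2D(32, (1, 52), strides=(1, 52))(tenth_feature_map)
r2 = layers.Conv2D(32, (36, 1), strides=(36, 1))(r2)
# Route Conv3: full convolution over the whole 36 x 52 feature map.
r3 = layers.Conv2D(32, (36, 52), strides=(36, 52))(tenth_feature_map)

# Each route yields 1 x 1 x 32 feature points; concatenating gives 96.
feature_points = layers.Concatenate()([layers.Flatten()(r1),
                                       layers.Flatten()(r2),
                                       layers.Flatten()(r3)])
```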
And S630, splicing the three groups of feature points with the n fully-connected layers to obtain n intermediate layers.
Optionally, n is 6, and n corresponds to the number of seedling lines. Optionally, the 96 feature points are respectively spliced with 6 fully-connected layers, and each fully-connected layer comprises 12 points, so that a 96 × 12 × 6 matrix can be obtained. Each intermediate layer is responsible for inferring and predicting one seedling line.
And S640, connecting the n intermediate layers with the n output layers in one-to-one correspondence to obtain an output matrix of n x h x (m + 1).
Alternatively, n is 6, h is 36, m is 100, 6 intermediate layers are connected to 6 output layers, each output layer comprises 36 × 101 points, and the 6 output layers are connected together, so that a matrix of 6 × 36 × 101 can be obtained.
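As an illustration only, the splicing in S630 and the output structure in S640 could be sketched as follows; the activation choices and the absence of a final softmax are assumptions made for the sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def seedling_line_head(feature_points, n=6, h=36, m=100):
    # For each of the n seedling lines: one fully-connected intermediate layer
    # of 12 points and one output layer of h x (m + 1) points.
    branches = []
    for _ in range(n):
        mid = layers.Dense(12, activation="relu")(feature_points)   # intermediate layer
        out = layers.Dense(h * (m + 1))(mid)                        # output layer (36 x 101 points)
        branches.append(layers.Reshape((1, h, m + 1))(out))
    return layers.Concatenate(axis=1)(branches)                     # n x h x (m + 1) output matrix

# Example: build the head on top of the 96 concatenated feature points.
head_input = tf.keras.Input(shape=(96,))
output_matrix = seedling_line_head(head_input)                      # shape (None, 6, 36, 101)
```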
And S340, correcting the neural network model by using the loss function L_total.
Optionally, in an embodiment of the present application, the loss function L_total satisfies:

L_total = L_cls + α·L_str + β·L_seg

L_str = L_sim + L_shp

(The expressions for L_cls, L_sim and L_shp are reproduced as images in the original publication.)

wherein L_cls is the classification loss function, L_str is the structural loss function, L_seg is the segmentation loss function, L_sim is the adjacency loss function, L_shp is the shape loss function, α is the scaling parameter of L_str, β is the scaling parameter of L_seg, P_i,j corresponds to the predicted value of the seedling line coordinates, T_i,j corresponds to the actual value of the seedling line coordinates, L_CE is a cross-entropy loss function, and Loc_i,j is the coordinate of a seedling point in the seedling line.
The classification loss function L_cls represents the classification loss of the abscissa of the j-th coordinate point of the i-th seedling line. P_i,j corresponds to the predicted value of the seedling line coordinate; the coordinate of the predicted seedling point is compared with the coordinate of the actual seedling point, and if the two are consistent, T_i,j is 1, otherwise T_i,j is 0. The coordinates of the actual seedling point can be obtained from the labeled fourth picture; that is, the fourth picture can be used as the basis for judging whether the seedling point prediction is correct.
The adjacency loss function L_sim can constrain the positions of adjacent seedling points so that the seedling lines remain relatively continuous and scattered seedling points are avoided. P_i,j and P_i,j+1 correspond respectively to two seedling points on the same seedling line that are adjacent in the ordinate direction.
The shape loss function L_shp can constrain the shape of the seedling line, so that most of the seedling line is guaranteed to be straight and bends in the seedling line are avoided or reduced.
The segmentation loss function L_seg is used for auxiliary training of the model and can realize the aggregation of global and local information.
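The exact expressions for L_cls, L_sim and L_shp are reproduced as images in the original publication. A reconstruction consistent with the symbol definitions above, offered here only as an assumption based on the usual cross-entropy and L1-difference formulation for this kind of row-wise classification, would be:

```latex
L_{cls} = \sum_{i=1}^{n}\sum_{j=1}^{h} L_{CE}\left(P_{i,j},\, T_{i,j}\right)

L_{sim} = \sum_{i=1}^{n}\sum_{j=1}^{h-1} \left\lVert P_{i,j} - P_{i,j+1} \right\rVert_{1}

L_{shp} = \sum_{i=1}^{n}\sum_{j=1}^{h-2} \left\lVert \left( Loc_{i,j} - Loc_{i,j+1} \right) - \left( Loc_{i,j+1} - Loc_{i,j+2} \right) \right\rVert_{1}
```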
After the neural network model is trained by the above method, the pre-trained neural network model is obtained, and seedling lines are identified during operation. The above process of converting the first feature map into the coordinates of the seedling line may be collectively referred to as post-processing; the model used in the above post-processing is a Dense layer, and in other embodiments, a Conv layer, a Deepwise Conv layer, or the like may also be used.
In order to obtain an optimal neural network model, different models, namely a Dense layer, a Conv layer and a Deepwise Conv layer, are respectively adopted for testing in the post-processing process of the neural network model. The test combines the backbone network with different post-processing models and tests on the embedded device, and the test results are shown in table 2. The embedded device may be hd400b, LPC 540.
TABLE 2 test results of the backbone network and different post-processing models in the embedded device
(Table 2 is reproduced as an image in the original publication.)
As can be seen from Table 2, as the network depth increases, that is, as stage1, stage2 and stage3 are successively applied, the Top1 accuracy of the model gradually increases; with the Conv layer at stage3, the Top1 accuracy of the model is 78.3%, but the model consumes more computing resources, and the running speed of the model at stage3 is also significantly lower than at stage1.
Comprehensively considering the overall running speed and accuracy of the model, stage1-Dense is adopted as the final neural network model. Stage1-Dense achieves an inference time of 7.5 ms on the PC, 67.8 ms on the 701-device CPU and 72.8 ms on the 701-device GPU, which is only about 1-2 ms slower than the backbone network structure alone, and its Top1 accuracy reaches 75.5%. Compared with Stage3-Conv, which has the highest accuracy, Stage1-Dense loses only 2.8 percentage points of accuracy, while its running speeds on the CPU and the GPU are improved by about 1 time and 3 times respectively.
The neural network model of the embodiment of the application can run on embedded devices, such as low-computing-power platforms, and can run in real time, whereas ordinary models are difficult to deploy on embedded devices.
The embodiment of the application also provides a device for identifying the seedling line, which comprises a module for executing the technical scheme or part of the technical scheme.
The embodiment of the application also provides an agricultural machine, and the agricultural machine at least comprises: the camera is used for shooting an image of a working land in the driving process of the agricultural machine; and the processor is used for controlling the camera and executing the technical scheme or part of the technical scheme of the application.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program.
It should be understood that the processor of the embodiments of the present application may be an integrated circuit image processing system having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The Processor may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in ram, flash, rom, prom, or eprom, registers, etc. as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor executes instructions in the memory and combines hardware thereof to perform the steps of the above-described method. To avoid repetition, it is not described in detail here.
It should also be understood that the foregoing descriptions of the embodiments of the present application focus on highlighting differences between the various embodiments, and that the same or similar elements that are not mentioned may be referred to one another and, for brevity, are not repeated herein.
It should be understood that, in the embodiment of the present application, the term "and/or" is only one kind of association relation describing an associated object, and means that three kinds of relations may exist. For example, a and/or B, may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially or partially contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A seedling line identification method based on deep learning is characterized by comprising the following steps:
acquiring a first picture, and generating a second picture according to the first picture, wherein the first picture is a picture shot by a camera of an agricultural machine;
processing the second picture by using a pre-trained neural network model to generate a seedling line;
calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling lines to obtain a first seedling line and a second seedling line, wherein the first seedling line and the second seedling line are the two seedling lines located at the center among the seedling lines in the first picture;
and generating a navigation line according to the first seedling line and the second seedling line, wherein the abscissa of the navigation line is the mean value of the abscissa of the first seedling line and the abscissa of the second seedling line.
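Purely as an illustration of the last step recited in claim 1 (and not part of the claims), the navigation-line computation can be sketched in Python as follows; the representation of a seedling line as a list of abscissas sampled per image row, and the helper name navigation_line, are assumptions rather than details taken from the application:

    # Sketch of the last step of claim 1 (illustrative only; the data layout is assumed).
    # A seedling line is assumed to be given as a list of abscissas, one per sampled image row.
    def navigation_line(first_line, second_line):
        """Abscissa of the navigation line at every row is the mean of the abscissas
        of the first and second (center) seedling lines."""
        return [(x1 + x2) / 2.0 for x1, x2 in zip(first_line, second_line)]

    # Example: two center seedling lines sampled at three rows of the first picture.
    left = [310.0, 322.0, 335.0]
    right = [402.0, 398.0, 391.0]
    print(navigation_line(left, right))  # [356.0, 360.0, 363.0]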
2. The method of claim 1, wherein the generating a navigation line from the first and second seedling lines comprises:
when the number of crop strips between the two wheels of the agricultural machine is odd, generating the navigation line located in the middle of a crop strip, wherein the first seedling line and the second seedling line are respectively the two seedling lines belonging to that same crop strip;
when the number of crop strips between the two wheels of the agricultural machine is even, generating the navigation line located in the middle between two adjacent ridges of crop strips, wherein the first seedling line and the second seedling line are respectively the two adjacent seedling lines of those adjacent ridges.
3. The method according to claim 1 or 2, wherein the generating a second picture from the first picture comprises: reducing the first picture to the second picture according to a first proportion;
the calculating the spatial relationship of each seedling line in the first picture according to the coordinates of the seedling line comprises: after the seedling line is generated, restoring the coordinates of the seedling line in the second picture to the coordinates of the seedling line in the first picture according to a second proportion, wherein the first proportion corresponds to the second proportion.
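As an informal sketch of claim 3 (the use of OpenCV and the exact relationship between the two proportions are assumptions; the application does not name a library):

    import cv2  # assumption: OpenCV is used for resizing

    def shrink(first_picture, first_proportion):
        # Reduce the first picture to the second picture according to the first proportion.
        h, w = first_picture.shape[:2]
        return cv2.resize(first_picture, (int(w * first_proportion), int(h * first_proportion)))

    def restore_coordinates(points, second_proportion):
        # Map seedling-line coordinates found in the second picture back into the first
        # picture; here the second proportion is assumed to be the reciprocal of the first.
        return [(x * second_proportion, y * second_proportion) for (x, y) in points]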
4. The method of any one of claims 1 to 3, wherein prior to training the neural network model, the method further comprises:
acquiring a third picture;
adding a label to the third picture, and converting the labelled third picture into a fourth picture, wherein the fourth picture is a grayscale image of the labelled third picture, and the label corresponds to the spatial position of the seedling line in the third picture.
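A minimal sketch of the labelling step of claim 4, assuming each label is a polyline of seedling points and that the grayscale value encodes the seedling-line index; both choices are assumptions for illustration only:

    import numpy as np
    import cv2  # assumption: OpenCV is used to rasterise the labels

    def to_fourth_picture(third_picture, seedling_polylines):
        # Convert the labelled third picture into a grayscale label map (the fourth picture):
        # background stays 0, and the i-th labelled seedling line is drawn with gray value i+1.
        h, w = third_picture.shape[:2]
        mask = np.zeros((h, w), dtype=np.uint8)
        for i, line in enumerate(seedling_polylines):
            pts = np.asarray(line, dtype=np.int32).reshape(-1, 1, 2)
            cv2.polylines(mask, [pts], isClosed=False, color=int(i + 1), thickness=5)
        return mask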
5. The method of any one of claims 1 to 4, wherein prior to said processing said second picture using a pre-trained neural network model to generate a seedling line, the method further comprises: training the neural network model to obtain the pre-trained neural network model;
wherein the training the neural network model comprises:
acquiring a third picture, and reducing the third picture to a fifth picture according to a first proportion;
extracting the characteristics of the fifth picture by using a backbone network to obtain a first characteristic diagram;
converting the first characteristic diagram into the coordinates of the seedling line;
the neural network model is modified using a loss function.
6. The method of claim 5, wherein the backbone network comprises:
the first input layer is configured to process the fifth picture by adopting a convolution module to obtain a second feature map;
a first intermediate layer configured to process the second feature map with a depth separable convolution module to obtain a third feature map;
a first output layer configured to process the third feature map with a convolution module to obtain the first feature map;
a second intermediate layer configured to process the fourth feature map with a depth separable convolution module to obtain a fifth feature map;
a second output layer configured to process the fifth feature map with a convolution module to obtain a sixth feature map;
a third intermediate layer configured to process the seventh feature map with a depth separable convolution module to obtain an eighth feature map;
a third output layer configured to process the eighth feature map with a convolution module to obtain a ninth feature map.
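The wiring of claim 6 might look roughly as follows in PyTorch; channel widths, strides, the placeholder depthwise-separable block and the way the branches are chained are all assumptions, since the claims do not fix them:

    import torch
    import torch.nn as nn

    def ds_block(c):
        # Placeholder depthwise-separable block; a fuller version with 6x channel
        # expansion and a residual connection is sketched after claim 10.
        return nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1, groups=c), nn.BatchNorm2d(c), nn.ReLU6(inplace=True),
            nn.Conv2d(c, c, 1), nn.BatchNorm2d(c), nn.ReLU6(inplace=True))

    class Backbone(nn.Module):
        """Rough sketch of the claim 6 layout (illustrative only)."""
        def __init__(self, c=32):
            super().__init__()
            self.first_input = nn.Sequential(
                nn.Conv2d(3, c, 3, stride=2, padding=1), nn.BatchNorm2d(c), nn.ReLU6(inplace=True))
            self.first_mid = nn.Sequential(*[ds_block(c) for _ in range(3)])   # claim 9: 3 modules
            self.first_output = nn.Conv2d(c, c, 3, stride=2, padding=1)
            self.second_mid = nn.Sequential(*[ds_block(c) for _ in range(2)])  # claim 9: 2 modules
            self.second_output = nn.Conv2d(c, c, 3, stride=2, padding=1)
            self.third_mid = nn.Sequential(*[ds_block(c) for _ in range(2)])   # claim 9: 2 modules
            self.third_output = nn.Conv2d(c, c, 3, stride=2, padding=1)

        def forward(self, fifth_picture):
            second = self.first_input(fifth_picture)
            third = self.first_mid(second)
            first = self.first_output(third)    # used to infer the seedling line (claim 7)
            fifth = self.second_mid(first)      # claim 6 calls this input the fourth feature map; reusing the first feature map is an assumption
            sixth = self.second_output(fifth)
            eighth = self.third_mid(sixth)      # claim 6 calls this input the seventh feature map; reusing the sixth feature map is an assumption
            ninth = self.third_output(eighth)   # auxiliary-training branches (claim 7)
            return first, sixth, ninth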
7. The method of claim 6, wherein the first input layer, the first intermediate layer, and the first output layer are configured to infer the seedling line; the second intermediate layer, the second output layer, the third intermediate layer and the third output layer are configured to participate in auxiliary training for the seedling line.
8. The method of claim 6 or 7, wherein the backbone network comprises 5 downsampling operations.
9. The method of any of claims 6 to 8, wherein the first intermediate tier comprises 3 depth-separable convolution modules, and the second and third intermediate tiers each comprise 2 depth-separable convolution modules.
10. The method of any of claims 6 to 9, wherein the depth separable convolution module comprises:
a feature augmentation layer configured to augment the number of feature maps by a factor of 6 using a 1 x 1 convolution kernel;
depth-wise convolutional layers configured to be convolved on a single channel;
a point-wise convolutional layer configured to traverse over all channels by a 1 x 1 convolutional kernel;
a direct connection layer configured to connect an input and an output using a residual structure.
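Claim 10 describes an inverted-residual style block; a hedged PyTorch sketch follows (normalisation and activation choices are assumptions):

    import torch
    import torch.nn as nn

    class DepthSeparableConv(nn.Module):
        """Sketch of the claim 10 module (illustrative only)."""
        def __init__(self, channels):
            super().__init__()
            expanded = channels * 6  # feature augmentation layer: 1 x 1 kernel, 6x the feature maps
            self.expand = nn.Sequential(
                nn.Conv2d(channels, expanded, 1, bias=False),
                nn.BatchNorm2d(expanded), nn.ReLU6(inplace=True))
            # depth-wise convolution layer: one 3 x 3 kernel per channel (groups == channels)
            self.depthwise = nn.Sequential(
                nn.Conv2d(expanded, expanded, 3, padding=1, groups=expanded, bias=False),
                nn.BatchNorm2d(expanded), nn.ReLU6(inplace=True))
            # point-wise convolution layer: 1 x 1 kernel traversing all channels
            self.pointwise = nn.Sequential(
                nn.Conv2d(expanded, channels, 1, bias=False), nn.BatchNorm2d(channels))

        def forward(self, x):
            # direct connection layer: residual structure connecting input and output
            return x + self.pointwise(self.depthwise(self.expand(x)))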
11. The method of any one of claims 5 to 10, wherein said converting the first feature map into coordinates of the seedling line comprises:
reducing the dimension of the first feature map by using a 1 x 1 convolution kernel to obtain a tenth feature map;
extracting three groups of feature points from the tenth feature map by using a first route, a second route and a third route, respectively;
splicing the three groups of feature points with n fully connected layers to obtain n middle layers;
and connecting the n middle layers with n output layers in one-to-one correspondence to obtain an output matrix of n x h x (m+1), wherein n corresponds to the number of seedling lines, h corresponds to the ordinate of the seedling lines, and m corresponds to the abscissa of the seedling lines.
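An illustrative head with the claim 11 output shape, collapsing the three routes into a single flatten for brevity; n, h, m, the reduced channel count and the feature-map size are assumed values:

    import torch
    import torch.nn as nn

    class SeedlingLineHead(nn.Module):
        """Sketch of claim 11: first feature map -> n x h x (m+1) output matrix."""
        def __init__(self, in_ch=32, n=6, h=18, m=100, reduced=8, feat_hw=(9, 25)):
            super().__init__()
            self.reduce = nn.Conv2d(in_ch, reduced, 1)   # 1 x 1 dimension reduction -> tenth feature map
            flat = reduced * feat_hw[0] * feat_hw[1]
            self.middles = nn.ModuleList([nn.Linear(flat, 256) for _ in range(n)])         # n middle layers
            self.outputs = nn.ModuleList([nn.Linear(256, h * (m + 1)) for _ in range(n)])  # n output layers
            self.n, self.h, self.m = n, h, m

        def forward(self, first_feature_map):
            x = torch.flatten(self.reduce(first_feature_map), 1)
            rows = [out(mid(x)) for mid, out in zip(self.middles, self.outputs)]
            # one slice per seedling line: h ordinates, each classified over m+1 abscissa bins
            return torch.stack(rows, dim=1).view(-1, self.n, self.h, self.m + 1)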
12. The method of claim 11, wherein n = 6.
13. The method according to claim 11 or 12, wherein the extracting three groups of feature points from the tenth feature map using the first route, the second route and the third route, respectively, comprises:
based on the first route, extracting features first in the ordinate direction and then in the abscissa direction;
based on the second route, extracting features first in the abscissa direction and then in the ordinate direction;
and based on the third route, extracting features using full convolution.
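The three routes of claim 13 can be imitated with asymmetric convolutions, as in the sketch below; the kernel sizes and the reading of "full convolution" as an ordinary square convolution are assumptions:

    import torch
    import torch.nn as nn

    class ThreeRoutes(nn.Module):
        """Sketch of the claim 13 feature-extraction routes (illustrative only)."""
        def __init__(self, c=8, k=9):
            super().__init__()
            # first route: ordinate (vertical) direction first, then abscissa (horizontal)
            self.route1 = nn.Sequential(
                nn.Conv2d(c, c, (k, 1), padding=(k // 2, 0)),
                nn.Conv2d(c, c, (1, k), padding=(0, k // 2)))
            # second route: abscissa direction first, then ordinate
            self.route2 = nn.Sequential(
                nn.Conv2d(c, c, (1, k), padding=(0, k // 2)),
                nn.Conv2d(c, c, (k, 1), padding=(k // 2, 0)))
            # third route: plain (full) convolution over both directions at once
            self.route3 = nn.Conv2d(c, c, 3, padding=1)

        def forward(self, tenth_feature_map):
            return (self.route1(tenth_feature_map),
                    self.route2(tenth_feature_map),
                    self.route3(tenth_feature_map))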
14. The method according to any one of claims 5 to 13, wherein the loss function satisfies:
L_total = L_cls + α·L_str + β·L_seg,
wherein L_cls is given by the formula of Figure FDA0003573821900000031;
L_str = L_sim + L_shp,
wherein L_sim and L_shp are given by the formulas of Figure FDA0003573821900000041 and Figure FDA0003573821900000042, respectively;
and wherein L_total is the loss function, L_cls is the classification loss function, L_str is the structural loss function, L_seg is the segmentation loss function, L_sim is the adjacent loss function, L_shp is the shape loss function, α is the scaling parameter of L_str, β is the scaling parameter of L_seg, P_i,j corresponds to the predicted value of the coordinates of the seedling line, T_i,j corresponds to the actual value of the coordinates of the seedling line, L_CE is the cross-entropy loss function, and Loc_i,j is the coordinate of a seedling point in the seedling line.
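Because the formulas for L_cls, L_sim and L_shp are only reproduced as images in the publication, the sketch below uses common stand-ins with the same overall structure (a cross-entropy classification term plus adjacent-row and shape smoothness terms plus an optional segmentation term); it should not be read as the claimed loss:

    import torch
    import torch.nn.functional as F

    def total_loss(pred, target, alpha=1.0, beta=1.0, seg_pred=None, seg_target=None):
        # pred: (B, n, h, m+1) output matrix; target: (B, n, h) abscissa class indices (long).
        _, _, _, m1 = pred.shape
        # classification term: cross-entropy over the m+1 abscissa classes
        l_cls = F.cross_entropy(pred.reshape(-1, m1), target.reshape(-1))
        # structural terms computed on the expected abscissa of each row
        probs = pred.softmax(dim=-1)
        xs = torch.arange(m1, dtype=probs.dtype, device=probs.device)
        loc = (probs * xs).sum(dim=-1)                          # (B, n, h) expected abscissas
        l_sim = (loc[:, :, 1:] - loc[:, :, :-1]).abs().mean()   # adjacent (similarity) term
        l_shp = (loc[:, :, 2:] - 2 * loc[:, :, 1:-1] + loc[:, :, :-2]).abs().mean()  # shape term
        l_str = l_sim + l_shp
        # optional segmentation term from the auxiliary branches
        l_seg = F.cross_entropy(seg_pred, seg_target) if seg_pred is not None else pred.new_zeros(())
        return l_cls + alpha * l_str + beta * l_seg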
15. An apparatus for seedling line identification, comprising: means for performing the method of any one of claims 1 to 14.
16. An agricultural machine, comprising:
a camera for shooting an image of the working land while the agricultural machine is travelling;
a processor for controlling the camera and performing the method of any one of claims 1 to 14.
17. A computer-readable storage medium, comprising a computer program which, when run on a computer device, causes a processing unit in the computer device to perform the method of any one of claims 1 to 14.
CN202210333683.7A 2022-03-30 2022-03-30 Seedling line identification method and device based on deep learning and agricultural machine Pending CN114723818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210333683.7A CN114723818A (en) 2022-03-30 2022-03-30 Seedling line identification method and device based on deep learning and agricultural machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210333683.7A CN114723818A (en) 2022-03-30 2022-03-30 Seedling line identification method and device based on deep learning and agricultural machine

Publications (1)

Publication Number Publication Date
CN114723818A true CN114723818A (en) 2022-07-08

Family

ID=82240839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210333683.7A Pending CN114723818A (en) 2022-03-30 2022-03-30 Seedling line identification method and device based on deep learning and agricultural machine

Country Status (1)

Country Link
CN (1) CN114723818A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116892944A (en) * 2023-09-11 2023-10-17 黑龙江惠达科技股份有限公司 Agricultural machinery navigation line generation method and device, and navigation method and device
CN116892944B (en) * 2023-09-11 2023-12-08 黑龙江惠达科技股份有限公司 Agricultural machinery navigation line generation method and device, and navigation method and device

Similar Documents

Publication Publication Date Title
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
CN109272509B (en) Target detection method, device and equipment for continuous images and storage medium
CN111008561B (en) Method, terminal and computer storage medium for determining quantity of livestock
US20190065817A1 (en) Method and system for detection and classification of cells using convolutional neural networks
US20170364757A1 (en) Image processing system to detect objects of interest
CN111814902A (en) Target detection model training method, target identification method, device and medium
CN110246160B (en) Video target detection method, device, equipment and medium
CN111192377B (en) Image processing method and device
CN112132093A (en) High-resolution remote sensing image target detection method and device and computer equipment
CN110619316A (en) Human body key point detection method and device and electronic equipment
CN111079739A (en) Multi-scale attention feature detection method
CN112183295A (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN112580662A (en) Method and system for recognizing fish body direction based on image features
CN113095441A (en) Pig herd bundling detection method, device, equipment and readable storage medium
CN113223027A (en) Immature persimmon segmentation method and system based on PolarMask
CN114723818A (en) Seedling line identification method and device based on deep learning and agricultural machine
US20220207679A1 (en) Method and apparatus for stitching images
CN115578590A (en) Image identification method and device based on convolutional neural network model and terminal equipment
CN111242066A (en) Large-size image target detection method and device and computer readable storage medium
CN114898434A (en) Method, device and equipment for training mask recognition model and storage medium
CN112396594A (en) Change detection model acquisition method and device, change detection method, computer device and readable storage medium
CN112132780A (en) Reinforcing steel bar quantity detection method and system based on deep neural network
CN112116567A (en) No-reference image quality evaluation method and device and storage medium
CN112101148B (en) Moving object detection method and device, storage medium and terminal equipment
CN109978863B (en) Target detection method based on X-ray image and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 150029 Building 1, Kechuang headquarters, Shenzhen (Harbin) Industrial Park, No. 288, Zhigu street, Songbei District, Harbin, Heilongjiang Province
Applicant after: Heilongjiang Huida Technology Co.,Ltd.
Address before: 150029 Building 1, Kechuang headquarters, Shenzhen (Harbin) Industrial Park, No. 288, Zhigu street, Songbei District, Harbin, Heilongjiang Province
Applicant before: HEILONGJIANG HUIDA TECHNOLOGY DEVELOPMENT Co.,Ltd.