CN111950391A - Fruit tree bud recognition method and device - Google Patents

Fruit tree bud recognition method and device

Info

Publication number
CN111950391A
Authority
CN
China
Prior art keywords
bud
fruit tree
category
image data
buds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010718954.1A
Other languages
Chinese (zh)
Inventor
夏雪
柴秀娟
孙坦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Information Institute of CAAS
Original Assignee
Agricultural Information Institute of CAAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Information Institute of CAAS
Priority to CN202010718954.1A
Publication of CN111950391A
Legal status: Pending

Classifications

    • G06V 20/00 Scenes; scene-specific elements
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for identifying fruit tree buds. The method comprises the following steps: acquiring image data of fruit tree buds; labeling bud coordinates and bud categories in the image data; constructing a bud target detection model from the image data and the bud coordinates; constructing a bud category recognition model from the cropped image data and the bud categories; and recognizing fruit tree bud images with the bud target detection model and the bud category recognition model. The invention solves the technical problem that the prior art can identify only clearly distinct agricultural product varieties or single plant organs, and cannot automatically classify and identify the flower buds and leaf buds on fruit trees.

Description

Fruit tree bud recognition method and device
Technical Field
The invention relates to the fields of computer vision, digital image processing and machine learning, and in particular to a method and a device for identifying fruit tree buds.
Background
The ratio of flower buds to leaf buds on an apple tree is an important basis for determining the tree's appropriate fruit load, and it directly affects yield and quality. Accurately identifying the flower buds and leaf buds of apple trees is therefore extremely important for guiding pruning, reasonably adjusting tree load, overcoming alternate bearing (the "big and small year" phenomenon), and ensuring high yield and quality. However, because apple flower buds and leaf buds look very similar, many fruit growers find them hard to distinguish when pruning, yet branches bearing flower buds and branches bearing leaf buds are pruned very differently. If the two are confused, mistaken pruning can occur, which affects not only the current year's fruit yield and quality but also the quantity and quality of the tree's subsequent flower bud differentiation. Correctly identifying flower buds and leaf buds is therefore essential.
Machine vision is an important means of identifying agricultural objects efficiently and at low cost, and it enables accurate identification of the targets to be handled in agricultural production. Most existing research on agricultural product buds uses traditional image processing to segment the buds of products such as garlic, ginger and sugarcane from images; with the advent of deep learning, recognition accuracy has improved greatly. Some studies combine machine vision with deep learning to classify fruit and vegetable organs, recognize flower patterns and varieties, detect sugarcane buds, recognize garlic bulbil orientation, identify fruit and vegetable pest types, and so on. This research, however, mainly performs simple detection of crop or product buds, or identification of clearly distinct products; it does not address the detection of small targets in agricultural images or the fine-grained recognition of similar crop organs. Current technology can therefore identify only clearly distinct agricultural product varieties or single plant organs, and cannot automatically classify and recognize the flower buds and leaf buds on fruit trees.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a method and a device for identifying fruit tree buds, to solve at least the technical problem that existing approaches can identify only clearly distinct agricultural product varieties or single plant organs and cannot automatically classify and recognize flower buds and leaf buds on fruit trees.
According to an aspect of the embodiments of the present invention, there is provided a method for identifying fruit tree buds, including: acquiring image data of fruit tree buds; labeling bud coordinates and bud categories in the image data; constructing a bud target detection model from the image data and the bud coordinates; constructing a bud category recognition model from the cropped image data and the bud categories; and recognizing the fruit tree bud image with the bud target detection model and the bud category recognition model.
Optionally, labeling the bud coordinates and the bud categories in the image data includes: framing the fruit tree buds in the image data; labeling the coordinates of the fruit tree buds to obtain the bud coordinates; and labeling the categories of the fruit tree buds to obtain the bud categories.
Optionally, constructing the bud target detection model from the image data and the bud coordinates includes: feeding the image data as input and the bud coordinates as labels to a deep convolutional neural network; and generating the bud target detection model with the deep convolutional neural network.
Optionally, constructing the bud category recognition model from the cropped image data and the bud categories includes: feeding the cropped image data as input and the bud categories as labels to a deep convolutional neural network; and generating the bud category recognition model with the deep convolutional neural network.
Optionally, the cropped image data refers to the image inside the fruit tree bud labeling frame, cropped from the image data according to the bud coordinates.
Optionally, recognizing the fruit tree bud image with the bud target detection model and the bud category recognition model comprises: acquiring the fruit tree bud image; inputting the fruit tree bud image into the bud target detection model to obtain the fruit tree bud coordinates; and inputting the fruit tree bud image into the bud category recognition model to obtain the fruit tree bud category.
Optionally, after inputting the fruit tree bud image into the bud category recognition model to obtain the fruit tree bud category, the method further includes: outputting the fruit tree bud coordinates and the fruit tree bud category.
According to another aspect of the embodiments of the present invention, there is also provided a device for identifying fruit tree buds, including: an acquisition module for acquiring image data of fruit tree buds; a labeling module for labeling the bud coordinates and the bud categories in the image data; a coordinate module for constructing a bud target detection model from the image data and the bud coordinates; a category module for constructing a bud category recognition model from the cropped image data and the bud categories; and a recognition module for recognizing fruit tree bud images with the bud target detection model and the bud category recognition model.
Optionally, the labeling module includes: a selection unit for framing the fruit tree buds in the image data; a coordinate unit for labeling the coordinates of the fruit tree buds to obtain the bud coordinates; and a classification unit for labeling the categories of the fruit tree buds to obtain the bud categories.
Optionally, the coordinate module includes: an input unit for feeding the image data as input and the bud coordinates as labels to a deep convolutional neural network; and a generation unit for generating the bud target detection model with the deep convolutional neural network.
Optionally, in the category module: the input unit is also used for feeding the cropped image data as input and the bud categories as labels to the deep convolutional neural network; and the generation unit is also used for generating the bud category recognition model with the deep convolutional neural network.
Optionally, the cropped image data refers to the image inside the fruit tree bud labeling frame, cropped from the image data according to the bud coordinates.
Optionally, the recognition module includes: an acquisition unit for acquiring the fruit tree bud image; a coordinate generation unit for inputting the fruit tree bud image into the bud target detection model to obtain the fruit tree bud coordinates; and a category generation unit for inputting the fruit tree bud image into the bud category recognition model to obtain the fruit tree bud category.
Optionally, the recognition module further includes: an output unit for outputting the fruit tree bud coordinates and the fruit tree bud category.
According to another aspect of the embodiments of the present invention, there is also provided a computer program product including instructions which, when run on a computer, cause the computer to perform a fruit tree bud recognition method.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program, when running, controls a device in which the non-volatile storage medium is located to execute a fruit tree bud recognition method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, which, when running, perform a fruit tree bud recognition method.
In the embodiments of the invention, image data of fruit tree buds are acquired; bud coordinates and bud categories are labeled in the image data; a bud target detection model is constructed from the image data and the bud coordinates; a bud category recognition model is constructed from the cropped image data and the bud categories; and fruit tree bud images are recognized with the two models. This achieves automatic recognition of flower buds and leaf buds using machine vision. On the one hand, it can help fruit growers quickly distinguish apple bud categories and take the correct horticultural measures, such as bud thinning or pruning; on the other hand, it can provide technical support for intelligent, precise tree management by orchard robots. It thereby solves the technical problem that only clearly distinct agricultural product varieties or single plant organs can currently be identified, while flower buds and leaf buds on fruit trees cannot be automatically classified and recognized.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a fruit tree bud recognition method according to an embodiment of the invention;
fig. 2 is a block diagram of a fruit tree bud recognition device according to an embodiment of the present invention;
FIG. 3 is an example image of flower buds and leaf buds of an apple tree;
FIG. 4 is an example of labeling apple tree buds in an image;
FIG. 5 is a schematic structural diagram of the deep convolutional neural network for fruit tree bud target detection according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of the deep convolutional neural network for fruit tree bud category recognition according to an embodiment of the present invention;
fig. 7 is the structure of the trilinear attention module in the trilinear attention sampling network.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a fruit tree bud recognition method. The steps illustrated in the flowchart of the drawings may be executed in a computer system, such as a set of computer executable instructions, and although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be executed in an order different from that shown.
Example one
Fig. 1 is a flowchart of a method for identifying fruit tree buds according to an embodiment of the invention, and as shown in fig. 1, the method comprises the following steps:
and S102, acquiring image data of the fruit tree bud body.
Specifically, in the embodiment of the present invention, the bud parts of the fruit tree need to be identified. In the natural environment of an orchard, an imaging device is therefore used to capture a large number of RGB images of fruit tree buds, which contain the flower buds and leaf buds of the trees. These images are transmitted by the imaging device to the processor of the embodiment through a remote communication protocol or a wired electrical connection; the processor then stores the RGB bud image data in a storage device for the subsequent fruit tree bud recognition steps.
It should be noted that the imaging device may be a single-lens reflex camera, a smartphone, a web camera or another imaging device. Such a device is paired over a data link and connected to the input port of the processor, so that bud image data captured in real time can be transmitted to the processor.
As shown in fig. 3, when apple tree buds are photographed with a single-lens reflex camera, for example, the result is a high-resolution picture of the apple tree containing leaf buds and flower buds. This picture is transmitted to the processor as the RGB bud image data for the subsequent identification of the leaf buds and flower buds.
And step S104, marking the bud coordinate and the bud category in the image data.
Specifically, after the RGB image data of the fruit tree buds are obtained, the bud parts in each image need to be labeled; the position of each bud can be framed with an image labeling tool, as shown in fig. 4. The frame extends from the tip of the bud to the junction of the bud base and the branch. For each bud, the category (0 for flower bud, 1 for leaf bud) and the coordinate information of the labeling frame (i.e., the x and y pixel coordinates of its upper-left and lower-right corners) are recorded.
Optionally, labeling the bud coordinates and the bud categories in the image data includes: framing the fruit tree buds in the image data; labeling the coordinates of the fruit tree buds to obtain the bud coordinates; and labeling the categories of the fruit tree buds to obtain the bud categories.
Labeling the bud coordinates and bud categories on the image data supports training of the bud target detection model and the bud category recognition model. During labeling, the fruit tree buds are framed in the image data, their coordinates are recorded to obtain the bud coordinates, and their categories are recorded to obtain the bud categories. For example, the single-lens reflex camera captures a picture of an apple tree containing buds and transmits it to the processor. The processor prompts the user to frame the buds with an image labeling tool, computes the coordinates of each selected region relative to the whole image, and stores them. The user likewise judges whether each bud is a leaf bud or a flower bud and marks it accordingly, and the processor stores this category information. The bud coordinates can be recorded as (x, y) pairs, and flower buds and leaf buds can be represented by 0 and 1, so that the bud category is identified together with the bud coordinates.
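The per-bud annotation just described can be sketched as a simple record. The function and field names below are illustrative assumptions, not part of the patent:

```python
# One labeled bud: class 0 = flower bud, 1 = leaf bud; the labeling frame is
# stored as the (x, y) pixel coordinates of its upper-left and lower-right corners.
def make_annotation(image_id, category, x1, y1, x2, y2):
    assert category in (0, 1), "0 = flower bud, 1 = leaf bud"
    assert x1 < x2 and y1 < y2, "upper-left corner must precede lower-right"
    return {"image": image_id, "category": category, "box": (x1, y1, x2, y2)}

def crop_box(annotation):
    """Return (left, upper, right, lower) for cropping the bud patch."""
    return annotation["box"]

ann = make_annotation("apple_0001.jpg", 0, 120, 45, 152, 98)
```

An image library such as Pillow could then cut the training patch for the category model with `image.crop(crop_box(ann))`.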
And S106, constructing a bud target detection model according to the image data and the bud coordinates.
Optionally, the constructing a bud target detection model according to the image data and the bud coordinates includes: inputting the image data and the bud coordinate as an input label of a deep convolutional neural network; and generating the bud target detection model through the deep convolutional neural network.
Specifically, based on the constructed fruit tree bud image data set, the bud images and the coordinate labels of their labeling frames are used as input to train the bud target detection model. Because buds in fruit tree images are usually small, a general-purpose detection model rarely detects them well, so a deep learning method for small object detection is introduced into the model. As an implementation example, the present invention uses the Small Object Detection Multi-Task Generative Adversarial Network (SOD-MTGAN); the network structure is shown in FIG. 5.
The model is built on a generative adversarial network (GAN) and mainly comprises a baseline detector network, a generator network and a discriminator network. The GAN learns the generator and the discriminator simultaneously through an adversarial process: the generator is trained to produce samples that can fool the discriminator, while the discriminator is trained to distinguish real images from the pseudo-images produced by the generator. Training alternately optimizes the generator and the discriminator. The network is described in detail as follows:
(1) The baseline detector network is used to crop positive examples (i.e., objects) and negative examples (i.e., background) from the input image, to train the generator and discriminator networks, and to generate regions of interest (RoIs) for testing. The detector can be any detection network, such as Faster R-CNN or Mask R-CNN. In the training stage, the positive/negative samples obtained by the detector train the generator and the discriminator; in the testing stage, the RoI candidate regions obtained by the detector are used by the SOD-MTGAN to further screen for targets.
(2) The generator network processes the input low-resolution image, upsampling a small target image to a larger scale and outputting a super-resolved image. It adopts a deep CNN architecture comprising two upsampling convolutional layers, three ordinary convolutional layers and five residual modules. The generator first upsamples the low-resolution patches output by the baseline detector, which contain the target object and background candidate RoIs, into 4x super-resolution images through the upsampling convolution layers, and then applies further convolutions to produce the corresponding sharp images.
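The generator's upsampling is learned during training; as a stand-in that only illustrates the 4x scale change on a candidate RoI patch, a nearest-neighbor upsampling can be sketched in NumPy (the real network uses learned convolutional layers, not this):

```python
import numpy as np

def upsample_4x_nearest(patch: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 4x upsampling of an H x W x C patch.
    Stand-in for the generator's two learned 2x upsampling layers."""
    return patch.repeat(4, axis=0).repeat(4, axis=1)

roi = np.zeros((16, 16, 3), dtype=np.uint8)  # a small candidate RoI patch
sr = upsample_4x_nearest(roi)                # 64 x 64 x 3 enlarged patch
```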
(3) The discriminator network is a multi-task network. On one hand, it distinguishes the super-resolution images produced by the generator from real high-resolution images, guiding the generator to reconstruct higher-quality high-resolution RoI images during training; on the other hand, it performs object category prediction and target position regression on the high-resolution target images reconstructed by the generator. Different backbone networks can be used in the discriminator, such as AlexNet, VGGNet or ResNet. Three parallel fully connected (FC) layers are placed after the last average pooling layer of the backbone: the first FC layer uses a sigmoid loss to distinguish real high-resolution images from generated super-resolution images, the second uses a softmax loss for object category prediction, and the third uses a smooth L1 loss for bounding box regression. The classification and regression losses during training are back-propagated into the generator network, directing it to produce higher-quality images that are easier to classify and localize.
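The smooth L1 loss named for the regression head has a standard elementwise form; a minimal NumPy sketch (the particular regression offsets below are made up for illustration):

```python
import numpy as np

def smooth_l1(diff: np.ndarray) -> np.ndarray:
    """Smooth L1: 0.5*d^2 for |d| < 1, |d| - 0.5 otherwise."""
    d = np.abs(diff)
    return np.where(d < 1.0, 0.5 * d * d, d - 0.5)

# Bounding-box regression loss for one RoI: mean over (x1, y1, x2, y2) offsets.
pred = np.array([0.2, -0.5, 1.5, 0.0])
target = np.zeros(4)
reg_loss = smooth_l1(pred - target).mean()
```

Compared with a plain L2 loss, the linear branch for large offsets keeps gradients bounded, which is why detection heads favor it.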
And S108, constructing a bud category recognition model from the cropped image data and the bud categories.
Optionally, constructing the bud category recognition model from the cropped image data and the bud categories includes: feeding the cropped image data as input and the bud categories as labels to a deep convolutional neural network; and generating the bud category recognition model with the deep convolutional neural network.
Optionally, the cropped image data refers to the image inside the fruit tree bud labeling frame, cropped from the image data according to the bud coordinates.
Specifically, the images inside the labeling frames are cropped from the whole image according to the labeled bud coordinates, producing a large number of cropped bud images; these bud images and their labeled category labels are used as input to train the apple tree bud category recognition model. Because apple flower buds and leaf buds look highly similar, a generic recognition model rarely achieves a satisfactory result, so the bud category recognition model is built with a fine-grained recognition technique. As an implementation example, the present invention uses the Trilinear Attention Sampling Network (TASN), a weakly supervised fine-grained recognition network; its structure is shown in fig. 6. Feature learning in TASN is performed by two networks, a master network (master-net) that learns global features and a part network (part-net) that learns local features; finally, following the teacher-student paradigm, the detailed features learned by the part network are transferred to the master network by distillation. The network is described in detail as follows:
(1) The input image first undergoes a series of convolutions to extract a convolution feature map, which is then passed through the trilinear attention module to convert the feature maps into attention maps.
The trilinear attention module derives attention from the spatial relationships among the channels of the feature map; its structure is shown in fig. 7. Assume the feature map has shape c × h × w, where c denotes the channels, h the height and w the width. Let X denote the reshaped convolution feature map of dimension c × hw. The spatial relationship between channels is given by XXᵀ; inserting this relationship back into the feature map, i.e., multiplying XXᵀ with X, yields the trilinear feature map, which can be written as M(X) = N(N(X)Xᵀ)X, where each channel of M(X) represents an attention map. The first N(·) is a spatial normalization that keeps every channel of the feature map on the same scale; the second is a relation normalization applied to each relation vector (N(X)Xᵀ)ᵢ. Finally, the resulting c × hw attention map is reshaped back to c × h × w.
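Following the formula above, and assuming softmax is used for both normalizations (the patent text does not fix the exact normalizer), the attention-map computation can be sketched in NumPy:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def trilinear_attention(feat):
    """M(X) = N(N(X) X^T) X for a feature map of shape (c, h, w).
    Softmax for both normalizations is an assumption for illustration."""
    c, h, w = feat.shape
    X = feat.reshape(c, h * w)
    Xn = softmax(X, axis=1)           # spatial normalization of each channel
    rel = softmax(Xn @ X.T, axis=1)   # relation normalization of each row
    M = rel @ X                       # insert relations back into the map
    return M.reshape(c, h, w)

maps = trilinear_attention(np.random.rand(8, 14, 14))  # 8 attention maps
```

Each output channel is a convex combination of the input channels, weighted by how strongly their spatial responses overlap.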
(2) The original image and the attention map are used as input and are respectively sent to the main network and the component network, and a sampling map of a reserved structure and a sampling map of reserved details are generated. The master network captures the global structure of the image, removing areas of non-contributing detail, allowing better use of high resolution areas. The component network focuses on details of a certain portion of the image, enlarging the portion with a high attention (attention) weight, thereby preserving more fine-grained details.
The master network first applies average pooling to the attention maps, then samples, and finally retains the global structure of the image while omitting unimportant regions. This can be expressed as
I_S = S(I, A(M))
where M is the attention map, S(·) is a non-uniform sampling function, and A(·) is average pooling across channels.
The part network applies a random selection over all the attention maps, drawing a different map at each training iteration, so that every attention map gets sampled and the fine-grained features of the local parts are fully preserved. This can be expressed as

I_S = S(I, R(M))

where R(·) denotes randomly selecting one channel from the input.
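The two branches differ only in how the stack of attention maps is collapsed into a sampling guide. A minimal NumPy sketch of A(M) and R(M) (the non-uniform sampling function S itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_map(M):
    # master branch A(M): average-pool the attention maps across channels
    return M.mean(axis=0)

def detail_map(M):
    # part branch R(M): randomly pick one attention channel per iteration
    return M[rng.integers(M.shape[0])]

M = rng.random((8, 4, 4))     # a stack of 8 attention maps
s = structure_map(M)          # structure-preserving guide, shape (4, 4)
d = detail_map(M)             # detail-preserving guide, shape (4, 4)
```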
The generated structure-preserving map and detail-preserving map are fed into two convolutional neural networks with identical structure and shared parameters, yielding fully connected outputs z_s and z_d. A softmax classifier then converts z_s and z_d into classification probabilities, and through a soft-target cross entropy the part features learned in the part network are distilled into the master network as a reference that assists object recognition. Finally, the network outputs the category of the bud in the image, i.e. whether it is a flower bud or a leaf bud.
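The distillation step can be sketched as follows. The temperature T and the exact loss weighting are assumptions, since the text does not specify them; the part-branch logits act as the teacher and the master-branch logits as the student:

```python
import numpy as np

def softened(z, T):
    # temperature-softened softmax over a logit vector
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def soft_target_ce(z_s, z_d, T=2.0):
    # soft-target cross entropy: teacher probabilities from z_d supervise
    # the student probabilities from z_s
    p_t = softened(z_d, T)
    p_s = softened(z_s, T)
    return -(p_t * np.log(p_s + 1e-12)).sum()

loss = soft_target_ce(np.array([1.0, 0.5]), np.array([2.0, 0.1]))
```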
S110: identify fruit tree bud images according to the bud target detection model and the bud category identification model.
Optionally, identifying the fruit tree bud image according to the bud target detection model and the bud category identification model comprises: acquiring the fruit tree bud image; inputting the fruit tree bud image into the bud target detection model to obtain the fruit tree bud coordinates; and inputting the fruit tree bud image into the bud category identification model to obtain the fruit tree bud category.
Optionally, after inputting the fruit tree bud image into the bud category identification model to obtain the fruit tree bud category, the method further includes: outputting the fruit tree bud coordinates and the fruit tree bud category.
Specifically, once the bud target detection model and the bud category identification model have been obtained, any fruit tree bud image can be processed, the two models respectively serving to locate the bud targets and to identify the bud categories. Any given RGB fruit tree bud image is taken as input; the trained bud target detection model detects the region of the image where each bud lies, the image of that region is then fed into the bud category identification model, and the category of the bud, i.e. flower bud or leaf bud, is predicted. Finally, the positions and categories of the buds in the image are output.
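The two-stage inference just described can be sketched as plain Python; `detector` and `classifier` are hypothetical placeholders standing in for the trained models:

```python
def recognize_buds(image, detector, classifier):
    # two-stage inference: detect bud regions, then classify each cropped region
    results = []
    for (x1, y1, x2, y2) in detector(image):
        crop = [row[x1:x2] for row in image[y1:y2]]   # cut the bud region out
        label = classifier(crop)                       # 0 = flower bud, 1 = leaf bud
        results.append({"box": (x1, y1, x2, y2),
                        "category": "flower bud" if label == 0 else "leaf bud"})
    return results
```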
These steps solve the technical problem that prior-art methods can identify only broad classes of agricultural products or a single plant organ and cannot automatically classify and identify the flower buds and leaf buds on fruit trees.
Example two
Fig. 2 is a block diagram of a fruit tree bud recognition device according to an embodiment of the present invention; as shown in fig. 2, the device includes:
the obtaining module 20 is configured to obtain image data of fruit tree buds.
Specifically, the embodiment of the present invention needs to identify the bud parts of fruit trees. In the natural orchard environment, an imaging device is therefore used to capture a large number of RGB images of fruit tree buds, containing both the flower buds and the leaf buds of the trees. The imaging device transmits these images to the processor of the embodiment through a remote communication protocol or a wired electrical connection; after the processor identifies the RGB bud images, the image data are stored together in a storage device for the subsequent bud identification steps.
It should be noted that the imaging device may be a single-lens reflex camera, a smartphone, a web camera, or another imaging device. Such a device is paired over a data link, connected to the input port of the processor, and can transmit the fruit tree bud images it captures to the processor in real time.
As shown in fig. 3, for example, when apple tree buds are photographed with a single-lens reflex camera, the captured image is a high-resolution picture of the apple tree containing both leaf buds and flower buds. The picture is transmitted to the processor as the RGB fruit tree bud image data for the subsequent identification of the leaf buds and flower buds.
The labeling module 22 is used for labeling the bud coordinates and bud categories in the image data.
Specifically, after the RGB image data of the fruit tree buds are obtained, the bud parts in each image need to be labeled; the position of each bud can be framed with an image annotation tool, as shown in fig. 4. The frame extends from the tip of the bud to the junction of the bud base and the branch, and for each bud the category (0 for flower bud, 1 for leaf bud) and the coordinates of the annotation box (the x and y pixel coordinates of its top-left and bottom-right corners) are recorded.
Optionally, the labeling module includes: the selecting unit is used for framing out fruit tree buds in the image data; the coordinate unit is used for marking the coordinates of the fruit tree buds to obtain the coordinates of the buds; and the classification unit is used for labeling the classification of the fruit tree buds to obtain the bud classification.
Labeling bud coordinates and bud categories on the image data supports training of the bud target detection model and the category identification model. Accordingly, when labeling, the fruit tree buds are framed in the image data, their coordinates are recorded to obtain the bud coordinates, and their categories are recorded to obtain the bud categories. For example, the single-lens reflex camera captures a picture of an apple tree containing buds and transmits it to the processor; the processor prompts the user to frame each bud with an image annotation tool, computes the coordinates of the framed position relative to the whole image, and stores them. Likewise, the user distinguishes the leaf buds from the flower buds and marks each accordingly, and the processor stores this category information. The bud coordinates can be recorded as (x, y) pairs, and flower buds and leaf buds can be represented by 0 and 1 respectively, so that the bud category is identified together with the bud coordinates.
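The annotation record described above can be sketched as a small data structure. The field names are illustrative; only the corner coordinates and the 0/1 category coding come from the text:

```python
from dataclasses import dataclass

@dataclass
class BudAnnotation:
    x1: int          # top-left corner of the annotation box, pixels
    y1: int
    x2: int          # bottom-right corner of the annotation box, pixels
    y2: int
    category: int    # 0 = flower bud, 1 = leaf bud (the coding used in the text)

    def label(self) -> str:
        return "flower bud" if self.category == 0 else "leaf bud"

ann = BudAnnotation(120, 45, 180, 130, 0)
```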
The coordinate module 24 is used for constructing a bud target detection model according to the image data and the bud coordinates.
Optionally, the coordinate module includes: the input unit is used for inputting the image data and the bud coordinate as an input label of the deep convolutional neural network; and the generating unit is used for generating the bud target detection model through the deep convolutional neural network.
Specifically, based on the constructed fruit tree bud image data set, the bud images and the coordinate labels of their annotation boxes are used as input to train a deep convolutional neural network for fruit tree bud target detection. Because the buds in the images are usually small, common detection models struggle to achieve good results, so a deep learning method for small object detection is introduced into the model. The present invention adopts the Small Object Detection Multi-Task Generative Adversarial Network (SOD-MTGAN) as an implementation example; the network structure is shown in fig. 5.
The model is built on a generative adversarial network (GAN) and mainly comprises a baseline detector network, a generator network, and a discriminator network. The GAN learns the generator and the discriminator simultaneously through an adversarial process: on the one hand, the generator is trained to produce samples that can fool the discriminator, while the discriminator is trained to distinguish real images from the pseudo-images the generator produces. Training alternately optimizes the generator and the discriminator. The network is described in detail as follows:
(1) The baseline detector network crops positive examples (i.e. objects) and negative examples (i.e. background) from the input image to train the generator and discriminator networks, and generates regions of interest (RoIs) for testing. The detector can be any detection network, such as Faster R-CNN or Mask R-CNN. In the training stage, the positive/negative samples obtained by the detector train the generator and the discriminator; in the testing stage, the RoI candidate regions obtained by the detector are used by the SOD-MTGAN to further screen the targets.
(2) The generator network processes the input low-resolution image, upsampling small target images to a larger scale and outputting super-resolved images. It adopts a deep CNN architecture comprising two upsampling convolutional layers, three ordinary convolutional layers, and five residual modules. The generator first upsamples the low-resolution crops output by the baseline detector, containing target-object and background candidate RoIs, into 4× super-resolution images through the convolutional layers, and then applies a convolution operation to produce correspondingly sharp images.
(3) The discriminator network is a multi-task network. On the one hand, it distinguishes the super-resolution images produced by the generator from real high-resolution images, guiding the generator during training to reconstruct higher-quality high-resolution RoI images; on the other hand, it performs object category prediction and target position regression on the high-resolution target images the generator reconstructs. Different backbone networks can be used in the discriminator, such as AlexNet, VGGNet, or ResNet. Three parallel fully connected (FC) layers follow the backbone's last average pooling layer: the first FC layer uses a sigmoid loss to distinguish real high-resolution images from generated super-resolution images, the second uses a softmax loss for object category prediction, and the third uses a smooth L1 loss for bounding-box regression. The classification and regression losses during training are further back-propagated to the generator network, driving the generator to produce higher-quality images that are easier to classify and localize.
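The three parallel FC heads of the discriminator can be sketched in NumPy with untrained placeholder weights. This shows only the forward shapes of the adversarial, classification, and box-regression outputs, not the training losses:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_heads(feat, n_classes=3):
    # three parallel FC heads on the pooled backbone feature:
    # adversarial real/fake score, class probabilities, bounding-box offsets.
    # Weights are random placeholders, not trained parameters.
    d = feat.shape[0]
    w_adv = rng.standard_normal(d)
    w_cls = rng.standard_normal((n_classes, d))
    w_box = rng.standard_normal((4, d))
    real_fake = 1.0 / (1.0 + np.exp(-(w_adv @ feat)))   # sigmoid head
    logits = w_cls @ feat
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                # softmax head
    box = w_box @ feat                                  # regression head (smooth-L1 at training time)
    return real_fake, probs, box

rf, p, b = discriminator_heads(rng.standard_normal(16))
```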
The category module 26 is used for constructing a bud category identification model according to the cropped image data and the bud categories.
Optionally, the category module includes: the input unit is also used for inputting the clipped image data and the bud category as input labels of the deep convolutional neural network; and the generating unit is also used for generating the bud category identification model through the deep convolutional neural network.
Optionally, the cropped image data refers to the image within the fruit tree bud coordinate annotation box of the image data, cropped according to the bud coordinates.
Specifically, the images inside the annotation boxes are cropped from the full images according to the labeled bud coordinates, yielding a large number of cropped bud images; these images and their corresponding category labels are used as input to train the apple tree bud category identification model. Because apple flower buds and leaf buds look highly similar, generic recognition models struggle to achieve satisfactory results, so the bud category identification model is built with a fine-grained recognition technique. The present invention adopts the Trilinear Attention Sampling Network (TASN), a weakly supervised fine-grained recognition method, as an implementation example; the network structure is shown in fig. 6. Feature learning in TASN is performed by two networks, a master network (master-net) and a part network (part-net): the master network learns global features, the part network learns local features, and finally, in a teacher-student manner, the detailed features learned in the part network are transferred to the master network by distillation. The network is described in detail as follows:
(1) For an input image, a series of convolutions extracts a convolutional feature map, which a trilinear attention module then converts into attention maps.
The trilinear attention module is computed from the spatial relationships among the channels of the convolutional feature map; the module is illustrated in fig. 7. Assume a feature map has shape c × h × w, where c denotes the number of channels, h the height, and w the width. Let X denote the feature map reshaped to dimension c × hw. The spatial relationship between channels is then given by XXᵀ; inserting this relationship back into the feature map, i.e. multiplying the normalized relation matrix with X, yields the trilinear attention maps, which can be expressed as M(X) = N(N(X)Xᵀ)X, where each channel of M(X) represents one attention map. The first N(·) is a spatial normalization that keeps each channel of the feature map on the same scale; the second is a relation normalization applied to each relation vector (N(X)Xᵀ)ᵢ. Finally, the resulting c × hw attention maps are reshaped back to c × h × w.
(2) The original image and the attention maps are taken as input and fed to the master network and the part network respectively, generating a structure-preserving sampling map and a detail-preserving sampling map. The master network captures the global structure of the image, removing regions that contribute no detail so that high-resolution regions can be used more effectively. The part network focuses on the details of particular portions of the image, enlarging the portions with high attention weights and thereby preserving more fine-grained detail.
The master network first average-pools the attention maps across channels and then performs non-uniform sampling, so that the global structure of the image is retained while unimportant regions are omitted. This can be expressed as

I_S = S(I, A(M))

where I is the input image, M denotes the attention maps, S(·) is the non-uniform sampling function, and A(·) is average pooling across channels.
The part network applies a random selection over all the attention maps, drawing a different map at each training iteration, so that every attention map gets sampled and the fine-grained features of the local parts are fully preserved. This can be expressed as

I_S = S(I, R(M))

where R(·) denotes randomly selecting one channel from the input.
The generated structure-preserving map and detail-preserving map are fed into two convolutional neural networks with identical structure and shared parameters, yielding fully connected outputs z_s and z_d. A softmax classifier then converts z_s and z_d into classification probabilities, and through a soft-target cross entropy the part features learned in the part network are distilled into the master network as a reference that assists object recognition. Finally, the network outputs the category of the bud in the image, i.e. whether it is a flower bud or a leaf bud.
The recognition module 28 is used for recognizing fruit tree bud images according to the bud target detection model and the bud category identification model.
Optionally, the identification module includes: the acquisition unit, used for acquiring the fruit tree bud image; the coordinate generating unit, used for inputting the fruit tree bud image into the bud target detection model to obtain the fruit tree bud coordinates; and the category generation unit, used for inputting the fruit tree bud image into the bud category identification model to obtain the fruit tree bud category.
Optionally, the identification module further includes: and the output unit is used for outputting the coordinates of the fruit tree buds and the categories of the fruit tree buds.
Specifically, once the bud target detection model and the bud category identification model have been obtained, any fruit tree bud image can be processed, the two models respectively serving to locate the bud targets and to identify the bud categories. Any given RGB fruit tree bud image is taken as input; the trained bud target detection model detects the region of the image where each bud lies, the image of that region is then fed into the bud category identification model, and the category of the bud, i.e. flower bud or leaf bud, is predicted. Finally, the positions and categories of the buds in the image are output.
According to another aspect of the embodiments of the present invention, there is also provided a computer program product including instructions which, when run on a computer, cause the computer to perform a fruit tree bud identification method.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program, when running, controls a device in which the non-volatile storage medium is located to execute a fruit tree bud identification method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device including a processor and a memory; the memory stores computer readable instructions, and the processor is used for executing the computer readable instructions, wherein the computer readable instructions, when run, perform a fruit tree bud identification method.
These steps solve the technical problem that prior-art methods can identify only broad classes of agricultural products or a single plant organ and cannot automatically classify and identify the flower buds and leaf buds on fruit trees.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (17)

1. A method for identifying fruit tree buds is characterized by comprising the following steps:
acquiring image data of fruit tree buds;
marking the bud coordinate and the bud category in the image data;
constructing a bud target detection model according to the image data and the bud coordinates;
constructing a bud category identification model according to the cut image data and the bud category;
and identifying the fruit tree bud image according to the bud target detection model and the bud category identification model.
2. The method according to claim 1, wherein labeling the bud coordinates and bud categories in the image data comprises:
framing out fruit tree buds in the image data;
marking the coordinates of the fruit tree buds to obtain the coordinates of the buds;
and marking the category of the fruit tree bud to obtain the bud category.
3. A method according to claim 1 wherein constructing a bud target detection model based on the image data and the bud coordinates comprises:
inputting the image data and the bud coordinate as an input label of a deep convolutional neural network;
and generating the bud target detection model through the deep convolutional neural network.
4. The method according to claim 1, wherein constructing the bud category identification model according to the cropped image data and the bud category comprises:
inputting the clipped image data and the bud category as input labels of a deep convolutional neural network;
and generating the bud category identification model through the deep convolutional neural network.
5. The method according to claim 1, wherein the cropped image data refers to the image within the fruit tree bud coordinate annotation box of the image data, cropped according to the bud coordinates.
6. The method as claimed in claim 1, wherein said identifying the fruit tree bud image according to the bud target detection model and the bud category identification model comprises:
acquiring the fruit tree bud image;
inputting the fruit tree bud image into the bud target detection model to obtain fruit tree bud coordinates;
and inputting the fruit tree bud image into the bud category identification model to obtain the fruit tree bud category.
7. The method as claimed in claim 6, wherein after inputting the fruit bud image into the bud category identification model to obtain the fruit bud category, the method further comprises:
and outputting the coordinates of the fruit tree buds and the categories of the fruit tree buds.
8. A fruit tree bud recognition device, characterized by comprising:
the acquisition module is used for acquiring image data of fruit tree buds;
the marking module is used for marking the bud coordinate and the bud category in the image data;
the coordinate module is used for constructing a bud target detection model according to the image data and the bud coordinates;
the category module is used for constructing a bud category identification model according to the cut image data and the bud category;
and the recognition module is used for recognizing the fruit tree bud images according to the bud target detection model and the bud category recognition model.
9. The apparatus of claim 8, wherein the labeling module comprises:
the selecting unit is used for framing out fruit tree buds in the image data;
the coordinate unit is used for marking the coordinates of the fruit tree buds to obtain the coordinates of the buds;
and the classification unit is used for labeling the classification of the fruit tree buds to obtain the bud classification.
10. The apparatus of claim 8, wherein the coordinate module comprises:
the input unit is used for inputting the image data and the bud coordinate as an input label of the deep convolutional neural network;
and the generating unit is used for generating the bud target detection model through the deep convolutional neural network.
11. The apparatus of claim 8, wherein the category module comprises:
the input unit is also used for inputting the clipped image data and the bud category as input labels of the deep convolutional neural network;
and the generating unit is also used for generating the bud category identification model through the deep convolutional neural network.
12. The apparatus according to claim 8, wherein the cropped image data refers to the image within the fruit tree bud coordinate annotation box of the image data, cropped according to the bud coordinates.
13. The apparatus of claim 8, wherein the identification module comprises:
the acquisition unit is used for acquiring the fruit tree bud image;
the coordinate generating unit is used for inputting the fruit tree bud image into the bud target detection model to obtain fruit tree bud coordinates;
and the category generation unit is used for inputting the fruit tree bud image into the bud category identification model to obtain the fruit tree bud category.
14. The apparatus of claim 13, wherein the identification module further comprises:
and the output unit is used for outputting the coordinates of the fruit tree buds and the categories of the fruit tree buds.
15. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 7.
16. A non-volatile storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 7.
17. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 7.
CN202010718954.1A 2020-07-23 2020-07-23 Fruit tree bud recognition method and device Pending CN111950391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010718954.1A CN111950391A (en) 2020-07-23 2020-07-23 Fruit tree bud recognition method and device


Publications (1)

Publication Number Publication Date
CN111950391A true CN111950391A (en) 2020-11-17

Family

ID=73340911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010718954.1A Pending CN111950391A (en) 2020-07-23 2020-07-23 Fruit tree bud recognition method and device

Country Status (1)

Country Link
CN (1) CN111950391A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418082A (en) * 2020-11-22 2021-02-26 同济大学 Plant leaf identification system and method based on metric learning and depth feature learning
CN112903677A (en) * 2021-01-21 2021-06-04 河北省农林科学院昌黎果树研究所 Method for rapidly detecting grape flower buds
CN115187570A (en) * 2022-07-27 2022-10-14 北京拙河科技有限公司 Singular traversal retrieval method and device based on DNN deep neural network
CN115631796A (en) * 2022-10-13 2023-01-20 济宁市农业科学研究院 Garlic biological fingerprint spectrum construction and identification method, terminal equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013021A (en) * 2010-08-19 2011-04-13 汪建 Tea tender shoot segmentation and identification method based on color and region growth
US20140314280A1 (en) * 2013-04-18 2014-10-23 Electronics And Telecommunications Research Institute System for predicting production of fruit tree and driving method thereof
CN109964675A (en) * 2017-12-27 2019-07-05 天津蓝多可科技有限公司 Vine beta pruning robot device
CN110378420A (en) * 2019-07-19 2019-10-25 Oppo广东移动通信有限公司 A kind of image detecting method, device and computer readable storage medium
CN110414559A (en) * 2019-06-26 2019-11-05 武汉大学 The construction method and commodity recognition method of intelligence retail cabinet commodity target detection Unified frame
CN111339839A (en) * 2020-02-10 2020-06-26 广州众聚智能科技有限公司 Intensive target detection and metering method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HELIANG ZHENG 等: "Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition" *
YANCHENG BAI 等: "SOD-MTGAN: Small Object Detection via Multi-Task Generative Adversarial Network" *
FENG Guomin: "Identification of flower buds and leaf buds of apple trees" *
WU Xuemei et al.: "Research on a recognition method for tender tea leaves based on image color information" *
SUN Xiaoxiao et al.: "Deep-learning-based detection algorithm for tea buds under complex backgrounds" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418082A (en) * 2020-11-22 2021-02-26 同济大学 Plant leaf identification system and method based on metric learning and depth feature learning
CN112903677A (en) * 2021-01-21 2021-06-04 河北省农林科学院昌黎果树研究所 Method for rapidly detecting grape flower buds
CN115187570A (en) * 2022-07-27 2022-10-14 北京拙河科技有限公司 Singular traversal retrieval method and device based on DNN deep neural network
CN115631796A (en) * 2022-10-13 2023-01-20 济宁市农业科学研究院 Garlic biological fingerprint spectrum construction and identification method, terminal equipment and storage medium
CN115631796B (en) * 2022-10-13 2024-04-09 济宁市农业科学研究院 Garlic biological fingerprint spectrum construction and identification method, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
Koirala et al. Deep learning–Method overview and review of use for fruit detection and yield estimation
Bargoti et al. Image segmentation for fruit detection and yield estimation in apple orchards
Parvathi et al. Detection of maturity stages of coconuts in complex background using Faster R-CNN model
Lu et al. Canopy-attention-YOLOv4-based immature/mature apple fruit detection on dense-foliage tree architectures for early crop load estimation
CN111950391A (en) Fruit tree bud recognition method and device
Zheng et al. A mango picking vision algorithm on instance segmentation and key point detection from RGB images in an open orchard
Puttemans et al. Automated visual fruit detection for harvest estimation and robotic harvesting
Zhang et al. Computer vision‐based tree trunk and branch identification and shaking points detection in Dense‐Foliage canopy for automated harvesting of apples
Oppenheim et al. Detecting tomato flowers in greenhouses using computer vision
Fu et al. Fast detection of banana bunches and stalks in the natural environment based on deep learning
CN113252584B (en) Crop growth detection method and system based on 5G transmission
Adhikari et al. 3D reconstruction of apple trees for mechanical pruning
Ge et al. Three dimensional apple tree organs classification and yield estimation algorithm based on multi-features fusion and support vector machine
Zhang et al. An improved YOLO network for unopened cotton boll detection in the field
Olenskyj et al. End-to-end deep learning for directly estimating grape yield from ground-based imagery
Keresztes et al. Real-time fruit detection using deep neural networks
Liu et al. SE-Mask R-CNN: An improved Mask R-CNN for apple detection and segmentation
Suresh Kumar et al. Selective fruit harvesting: Research, trends and developments towards fruit detection and localization–A review
Rong et al. Picking point recognition for ripe tomatoes using semantic segmentation and morphological processing
Rahim et al. Data augmentation method for strawberry flower detection in non-structured environment using convolutional object detection networks
AHM et al. A deep convolutional neural network based image processing framework for monitoring the growth of soybean crops
Smitt et al. Explicitly incorporating spatial information to recurrent networks for agriculture
CN112541383A (en) Method and device for identifying weed area
Jiang et al. Thin wire segmentation and reconstruction based on a novel image overlap-partitioning and stitching algorithm in apple fruiting wall architecture for robotic picking
CN115995017A (en) Fruit identification and positioning method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination