CN112084977B - Image and time characteristic fused apple phenological period automatic identification method - Google Patents


Info

Publication number
CN112084977B
CN112084977B (application CN202010962901.4A)
Authority
CN
China
Prior art keywords
image
phenological period
fully
network
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010962901.4A
Other languages
Chinese (zh)
Other versions
CN112084977A (en)
Inventor
邓红霞
樊泽泽
李海芳
王志伟
李燕
许增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Climate Central
Taiyuan University of Technology
Original Assignee
Shandong Climate Central
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Climate Central and Taiyuan University of Technology
Priority to CN202010962901.4A priority Critical patent/CN112084977B/en
Publication of CN112084977A publication Critical patent/CN112084977A/en
Application granted granted Critical
Publication of CN112084977B publication Critical patent/CN112084977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of computer vision and specifically relates to an automatic apple phenological period identification method fusing image and time features, comprising the following steps: abstract the one-dimensional time feature vector with a fully connected network, extract object features from the image with a convolutional neural network, fuse the one-dimensional time feature with the abstracted image features by serial concatenation, classify the fused feature vector with a fully connected layer, and identify the phenological period to which the image belongs. The invention first provides a scientific acquisition and processing method for apple tree images; it expands the data by random image cropping and information deletion to address the small data volume, and fuses time features with the apple tree image features extracted by the convolutional network to obtain a more representative feature map for phenological period category probability calculation, improving the accuracy of apple tree phenological period judgment. The method is used for identifying the phenological period of apples.

Description

Image and time characteristic fused apple phenological period automatic identification method
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an automatic apple phenological period identification method fusing image and time characteristics.
Background
Agriculture is a fundamental industry in China. China's apple output is enormous, and apples are a major cash crop in many regions. Promoting scientific, standardized apple cultivation through technological innovation is vital for raising orchard productivity while reducing planting costs and increasing the economic benefits of apple growing.
The phenological period refers to the natural, annually recurring phenomena of plants driven by biotic and abiotic factors. Apple trees have a long growing season and pass through several phenological periods, including bud swelling, bud opening, leaf expansion, flowering, shoot growth, fruiting and leaf fall, and the maintenance measures required differ between periods. Accurate monitoring of each apple tree phenological period therefore has a great influence on apple growth and development: it makes it convenient to irrigate, spray pesticide, control pests and fertilize the orchard at the right times, supports scientific orchard management, and promotes high yield and quality. Some scholars have studied the relationship between environmental factors such as temperature, precipitation and illumination intensity in a given ecological area and crop phenology, discussing how these factors advance or delay the phenological period; such work struggles to break through the limits of its geographic environment and can predict only phenological trends rather than exact results. Other scholars have used remote sensing technology to study the relationship between crop biomass characteristics and crop phenology.
Ground-based monitoring research on the apple phenological period is scarce, so the phenological period of fruit trees in existing apple orchards is estimated from historical data or judged manually. When the orchard is remote or its climate is complicated, these methods carry large uncertainty, and manual observation is inconvenient, time-consuming, labor-intensive and expensive. With increasing chip computing power, the application of deep learning is expanding because it can automatically acquire, process and identify large amounts of information. Applying deep learning to phenological monitoring and constructing a neural network that automatically judges the apple tree phenological period from ground-level visible-light images is therefore of great significance.
Disclosure of Invention
Aiming at the high uncertainty, wasted time and labor, and high cost of existing methods for judging the phenological period of fruit trees in apple orchards, the invention provides an efficient, accurate and low-cost automatic apple phenological period identification method that fuses image and time features.
In order to solve the technical problems, the invention adopts the technical scheme that:
an automatic apple phenological period identification method fusing image and time characteristics comprises the following steps:
s1, preparing a data set for training the network and resizing the images in the data set to 416 × 416 × 3;
s2, extracting features from the images of S1, further abstracting the image features with a first fully connected network, and reshaping them into a one-dimensional image feature p;
s3, analyzing image attributes, extracting image shooting time as an initial time feature vector, and processing the vector by using a second full-connection network to form a time feature t;
s4, serially concatenating the image feature p and the time feature t into a feature vector X, inputting X into a fully connected layer, and converting it by calculation into image category probabilities;
and S5, taking the category with the maximum probability value in the calculation result as the final apple tree phenological period image category.
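The shape bookkeeping of steps S2–S5 can be sketched in plain Python. This is a toy illustration, not the patented network: the feature values and weights are random stand-ins, and for brevity a single 3-class output layer replaces the 256-neuron fully connected layer plus softmax described later in the specification.

```python
import math
import random

random.seed(0)

def softmax(h):
    # subtract the max for numerical stability
    m = max(h)
    e = [math.exp(x - m) for x in h]
    s = sum(e)
    return [x / s for x in e]

def linear(x, weights, bias):
    # weights: one row of coefficients per output neuron
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

p = [random.gauss(0, 1) for _ in range(512)]  # image feature from the CNN (S2)
t = [random.gauss(0, 1) for _ in range(64)]   # time feature from the FC net (S3)

X = p + t                                     # serial concatenation (S4)
assert len(X) == 512 + 64

# toy classifier head: weights ~ N(0, 0.1), biases 0, echoing the patent's init
W = [[random.gauss(0, 0.1) for _ in range(len(X))] for _ in range(3)]
b = [0.0, 0.0, 0.0]
probs = softmax(linear(X, W, b))

classes = ["flowering", "leaf-expansion", "fruiting"]
phase = classes[probs.index(max(probs))]      # S5: arg-max category
```

The concatenated vector X has 512 + 64 = 576 components, which matches the fusion of the 512 × 1 image feature with the 64 × 1 time feature.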
The data set acquisition method in S1 includes: collecting apple tree images in the flowering, leaf-expansion and fruiting periods both by automatic camera observation and by manual collection; stretching the gray levels of the collected images so that pixel values are distributed more uniformly across the gray-level intervals; and amplifying the data volume of the processed apple tree images by random image cropping and GridMask information deletion.
The image feature extraction method in S2 includes: extracting image features by alternately using 6 convolution layers and 5 pooling layers, where each convolution operation is followed by batch normalization and a ReLU activation layer, and the pooling layers use max pooling.
In S2, the first fully connected network consists of fully connected layers with 1024 and 512 neurons that flatten the image features into a one-dimensional feature vector; its initial weights are randomly generated from a normal distribution with standard deviation 0.1, its bias terms are initialized to 0, and the processed one-dimensional image feature p has size 512 × 1.
The second fully connected network in S3 is a fully connected neural network with 32 and 64 neurons respectively; its initial weights are randomly generated from a normal distribution with standard deviation 0.1, its bias terms are initialized to 0, and the processed one-dimensional time feature t has size 64 × 1.
The image feature p and the time feature t in S4 are connected in series, and a new feature vector X formed after connection is obtained by the following formula:
X = [p_1, p_2, …, p_512, t_1, t_2, …, t_64]^T
the phenological period category probability calculation method of the apple tree image in S4 is as follows:
P_i = e^{h_i} / Σ_j e^{h_j}

where h_i is the final output and P_i is the class probability.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention serially fuses the input apple tree image feature p, extracted by alternately using 6 convolution layers and 5 max-pooling layers, with the one-dimensional time feature t abstracted by a fully connected network into a new feature vector X, and uses a fully connected network and a softmax function to calculate the probability that the feature map belongs to the flowering, leaf-expansion or fruiting period;
2. the invention first provides a scientific acquisition and processing method for apple tree images; it expands the data by random image cropping and information deletion to address the small data volume, and fuses time features with the apple tree image features extracted by the convolutional network to obtain a more representative feature map for phenological period category probability calculation, improving the accuracy of apple tree phenological period judgment.
Drawings
FIG. 1 is a diagram of a network architecture model of the present invention;
FIG. 2 is a structural parameter diagram of the convolutional neural network of the present invention;
FIG. 3 is a graph comparing the loss values of a network model using different gradient descent functions as a function of the training process according to the present invention;
FIG. 4 is a graph of the comparison of accuracy of phenological phase classification using different network models according to the present invention;
FIG. 5 is a comparison graph of training results of the regularization method of the present invention with different parameters.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Compared with other image classification tasks, phenological period identification from apple tree images in the natural environment involves many unrelated, cluttered backgrounds, unstable shooting conditions, heavy mutual overlap and branch-and-leaf occlusion of targets, and many small or incomplete targets, which makes the task both specific and difficult. In the invention, the time feature from the picture attributes is fused with the image features extracted by a convolutional neural network to obtain a new, more effective phenological period feature carrying as much phenological information as possible; the network model of the invention is shown in FIG. 1.
Step one, data set preparation
1.1 data acquisition
Apple tree images in the flowering, leaf-expansion and fruiting periods are collected both by automatic camera observation and by manual collection. For manual collection, the tree age and the shooting location are recorded. Observation plants are selected for robust growth, branch shape and similar criteria, and branches growing close to the ground are avoided as far as possible during shooting. Photographs are taken once a day. For data acquisition, 3 trees outside an agrometeorological test station are chosen at random and photographed: first the whole tree, then 3 branches on each tree, and finally one flower or fruit on each branch; no flower thinning is performed during the observation;
1.2 data preprocessing
Stretching the gray levels of the apple tree images in the flowering period, the leaf expanding period and the fruiting period, which are acquired by the automatic observation acquisition and manual acquisition modes of the camera, so that pixel points are distributed more uniformly in each gray level interval;
1.3 data amplification
The data volume of the preprocessed apple tree images is amplified by random image cropping and GridMask information deletion. Random cropping cuts regions containing flower, leaf and apple targets from the image and enlarges them to 416 × 416 × 3. GridMask deletion uses masks of size 416 × 416 × 3 whose deleted-region sizes and spacings are defined by the parameter triples (40, 100, 60), (30, 100, 80) and (20, 100, 80); multiplying a mask with the original image deletes the pixels of the black regions. After completing data amplification with the above methods and removing images without targets, the final data set contains 7000 flowering period images, 7000 leaf-expansion period images and 20000 fruiting period images.
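GridMask-style information deletion can be illustrated with a minimal mask generator. This is a simplified sketch: the two parameters below (tile size and deleted-square size) are hypothetical stand-ins for the patent's (size, interval) parameter triples, whose exact semantics the text does not spell out.

```python
def grid_mask(height, width, cell, erase):
    """Binary mask: in every cell x cell tile, the top-left erase x erase
    pixels are set to 0 (deleted); all other pixels stay 1."""
    return [[0 if (y % cell) < erase and (x % cell) < erase else 1
             for x in range(width)]
            for y in range(height)]

def apply_mask(image, mask):
    # element-wise multiplication zeroes (deletes) the masked pixels
    return [[px * m for px, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

mask = grid_mask(8, 8, cell=4, erase=2)       # toy 8x8 image for illustration
image = [[255] * 8 for _ in range(8)]
out = apply_mask(image, mask)                 # deleted squares become 0
```

In practice the same mask would be applied per channel of the 416 × 416 × 3 crop; randomizing the grid offset, as GridMask-style augmentation typically does, is omitted here for clarity.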
Step two, image feature extraction
Given two characteristics of the problem, namely that fruit trees in different phenological periods differ markedly in appearance and that each phenological period of the apple tree falls in a roughly fixed time window of the year, this embodiment selects image features and time features;
2.1 image features
The image is the most intuitive and most important feature for distinguishing the phenological period category of an apple tree image, so this embodiment constructs a convolutional neural network to extract image features.
The convolution kernel size is uniformly set to 3 × 3 with sliding stride 1 and edge padding of 1 pixel; the pooling filter size is 2 × 2 with sliding stride 2;
6 CBR components and 5 pooling operations are used alternately, where C denotes the convolution operation that extracts the image feature map; the input data and the feature map before and after the convolution operation are related as follows:
x_j^l = Σ_i x_i^{l-1} * k_ij^l + b_j^l

where x_i^{l-1} is an input feature map, k_ij^l is a convolution kernel, b_j^l is a bias term, and * denotes the convolution operation.
the size of the feature map obtained after the convolution operation is calculated as follows:
N = (I − F + 2P) / S + 1

where N is the size of the output feature map obtained after the convolution operation, I is the size of the input matrix, F is the size of the convolution kernel, P is the number of padding pixels, and S is the sliding stride.
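The output-size formula can be sanity-checked against this network's layer parameters with a small helper (names are illustrative):

```python
def conv_out(i, f, p, s):
    # N = (I - F + 2P) / S + 1
    return (i - f + 2 * p) // s + 1

def pool_out(i, f, s):
    # pooling uses no padding in this network
    return (i - f) // s + 1

# a 3x3 convolution with padding 1 and stride 1 preserves the spatial size
assert conv_out(416, 3, 1, 1) == 416
# 2x2 max pooling with stride 2 halves it
assert pool_out(416, 2, 2) == 208
```

These two identities explain why each conv layer keeps the feature-map size and each pooling layer halves it in the layer-by-layer walkthrough that follows.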
B represents batch normalization operation, prevents gradient dispersion from occurring in the training process of the convolutional network and accelerates the convergence speed of the neural network;
r denotes an activation layer using the ReLU function, which adds nonlinear factors to the network model, improves the model's ability to fit linearly inseparable data, strengthens the expression and learning of the apple tree data, and alleviates the vanishing-gradient problem caused by saturated regions; the ReLU function is:
f(x) = max(0, x)
the specific steps of the alternately used convolutional layer and pooling layer include:
The first layer of convolution and pooling: the 416 × 416 × 3 original apple tree image is input to a convolution layer with 64 convolution kernels of size 3 × 3, stride 1 and edge padding of 1 pixel; after the convolution operation the input becomes 416 × 416 × 64, followed by batch normalization and a ReLU activation to add nonlinearity. The resulting feature map is input to a max-pooling layer with a 2 × 2 filter and stride 2, giving a 208 × 208 × 64 feature map;
The second layer of convolution and pooling: like the first layer, the 208 × 208 × 64 feature map is convolved with 64 kernels of size 3 × 3, keeping the size 208 × 208 × 64; after the CBR component and max pooling the feature map becomes 104 × 104 × 64;
The third layer of convolution and pooling: like the second layer, the input 104 × 104 × 64 feature map is convolved with 128 kernels of size 3 × 3, becoming 104 × 104 × 128; after the CBR component and max pooling it becomes 52 × 52 × 128;
The fourth layer of convolution and pooling: like the third layer, the input 52 × 52 × 128 feature map is convolved with 128 kernels of size 3 × 3, staying 52 × 52 × 128; after the CBR component and max pooling it becomes 26 × 26 × 128;
The fifth layer of convolution and pooling: like the fourth layer, the input 26 × 26 × 128 feature map is convolved with 256 kernels of size 3 × 3, becoming 26 × 26 × 256; after the CBR component and max pooling it becomes 13 × 13 × 256;
The sixth layer of convolution: like the fifth layer, the input 13 × 13 × 256 feature map is convolved with 512 kernels of size 3 × 3; after the CBR component the feature map is 13 × 13 × 512;
The seventh layer, a fully connected layer: the 13 × 13 × 512 feature map is input to a fully connected layer with 1024 neurons, giving a feature vector of size 1024 × 1;
The eighth layer, a fully connected layer: the 1024 × 1 feature vector is input to a fully connected layer with 512 neurons, giving a feature vector of size 512 × 1.
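The shape progression of the 6-conv/5-pool stack above can be traced programmatically; this is pure bookkeeping, with the channel list mirroring the layer descriptions:

```python
def trace_shapes(h, w):
    out_channels = [64, 64, 128, 128, 256, 512]  # conv layers 1-6
    shapes = []
    for i, c in enumerate(out_channels):
        # 3x3 conv, stride 1, padding 1: spatial size unchanged
        if i < 5:               # 2x2/stride-2 max pooling follows the first five convs
            h, w = h // 2, w // 2
        shapes.append((h, w, c))
    return shapes

shapes = trace_shapes(416, 416)
assert shapes[-1] == (13, 13, 512)
flat = 13 * 13 * 512            # flattened input to the 1024-neuron FC layer
```

Note that the final 13 × 13 × 512 map carries 86528 values before the two fully connected layers compress it to the 512 × 1 image feature p.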
2.2 temporal characteristics
The time characteristic t is extracted by adopting a full-connection network, wherein the construction requirement and the steps of the full-connection network are as follows:
The image shooting time is acquired and converted into a one-dimensional time feature vector: if the shooting time is year Y, month M, day D, hour h and minute m, it is converted into the one-dimensional vector [Y, M, D, h, m];
Because the one-dimensional time feature vector is small, it is input to a fully connected neural network with 32 and 64 neurons respectively; the network's initial weights are randomly generated from a normal distribution with standard deviation 0.1, the bias terms are initialized to 0, and the processed one-dimensional time feature vector has size 64 × 1;
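Extracting the shooting time and pushing it through the small fully connected stack can be sketched as follows; the ISO timestamp format and the random toy weights here are assumptions for illustration, not the patent's exact pipeline:

```python
import random
from datetime import datetime

random.seed(0)

def time_vector(timestamp):
    """Build the [Y, M, D, h, m] vector from the image shooting time."""
    d = datetime.fromisoformat(timestamp)
    return [d.year, d.month, d.day, d.hour, d.minute]

def fc_layer(x, n_out):
    # weights ~ N(0, 0.1), biases 0, echoing the patent's initialization
    w = [[random.gauss(0, 0.1) for _ in x] for _ in range(n_out)]
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

v = time_vector("2020-05-14 09:30")      # e.g. an image taken in flowering season
assert v == [2020, 5, 14, 9, 30]
t = fc_layer(fc_layer(v, 32), 64)        # 32-neuron layer, then 64-neuron layer
assert len(t) == 64                      # the 64 x 1 time feature t
```

In a real pipeline the shooting time would come from the image's EXIF attributes, and the raw year/month values would normally be normalized before entering the network.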
2.3 Multi-feature fusion and phenological period classification
The way in which the image feature p and the temporal feature t are concatenated into a new feature vector X is as follows:
X = [p_1, p_2, …, p_512, t_1, t_2, …, t_64]^T
inputting the feature vector X fused by the method into a full connection layer, wherein the full connection layer is provided with 256 neurons, the initialization weight value of the network is randomly generated by normal distribution with the control standard deviation of 0.1, and the bias term is initialized to 0. The image class probability is obtained by the following calculation:
P_i = e^{h_i} / Σ_j e^{h_j}

where h_i is the final output. A multi-class cross-entropy loss function is set to identify the class probabilities of the flowering, leaf-expansion and fruiting periods, as follows:

L = −Σ_i y_i log(P_i)

where y_i is the true label value of the image.
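A quick numeric check of the cross-entropy loss, using toy probabilities rather than real model outputs:

```python
import math

def cross_entropy(probs, y_true):
    # y_true is a one-hot label over (flowering, leaf-expansion, fruiting)
    return -sum(y * math.log(p) for y, p in zip(y_true, probs))

probs = [0.7, 0.2, 0.1]                       # a softmax output P_i
loss_right = cross_entropy(probs, [1, 0, 0])  # confident and correct: low loss
loss_wrong = cross_entropy(probs, [0, 0, 1])  # confident and wrong: high loss
assert loss_right < loss_wrong
```

The loss is −log of the probability assigned to the true class, so training pushes the softmax output for the correct phenological period toward 1.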
The cross-entropy loss value is continuously reduced with mini-batch gradient descent, optimizing the parameters in the model;
the complexity of the model is reduced by adding a parameter penalty term to the loss function by using an L2 regularization method, the generalization capability of the model is improved, and the overfitting problem is solved as follows:
J(θ:X:y)=J(θ:X:y)+αΩ(θ)
wherein, alpha belongs to [0, ∞ ]) is used for balancing the relationship between the parameter penalty term omega (theta) and the loss function J (theta: X: y), and the larger the value of alpha is, the larger the regularization strength is. The L2 regularization formula is as follows:
Figure BDA0002681158530000091
where W is the parameter vector.
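The effect of the balance parameter α can be shown in a few lines with a toy weight vector:

```python
def l2_penalty(weights, alpha):
    # alpha * Omega(theta), with Omega = (1/2) * ||W||_2^2
    return alpha * 0.5 * sum(w * w for w in weights)

def regularized_loss(base_loss, weights, alpha):
    return base_loss + l2_penalty(weights, alpha)

W = [0.3, -0.4, 0.5]
# larger alpha => larger penalty => stronger pressure toward small weights
assert regularized_loss(1.0, W, 0.5) > regularized_loss(1.0, W, 0.1)
```

Because the penalty grows with the squared weight magnitudes, minimizing the regularized loss shrinks the weights, which is what limits model complexity.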
Step three, experimental process and result analysis
1. Network architecture representation
FIG. 1 is the neural network model diagram of the automatic apple phenological period identification method fusing image and time features. It has two main parts: an image feature p is extracted with a convolutional network, whose specific structure is shown in FIG. 2, and a time feature t is extracted with a fully connected network; the image feature p and the time feature t are then fused serially for classification.
2. Gradient descent function comparison experiment
To improve the convergence speed and the final identification accuracy of the model, gradient descent is used to search for the direction of fastest decrease for iterative solution and to update the weights in the network. FIG. 3 compares the loss values of the network trained with three gradient descent algorithms: batch gradient descent (BGD), stochastic gradient descent (SGD) and mini-batch gradient descent (MBGD). Comparing network convergence speed and final loss value, the model trained with the MBGD method is chosen; the loss of the final model is 0.073.
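The mini-batch update itself is simple; below is a toy MBGD run on a one-parameter regression problem. Everything here (data, learning rate, batch size) is illustrative and unrelated to the patent's experiments:

```python
import random

random.seed(1)

# fit y = w * x to data generated with w = 2, using mini-batch gradient descent
xs = [i / 100 for i in range(100)]
data = [(x, 2 * x) for x in xs]

w, lr, batch_size = 0.0, 0.5, 16
for epoch in range(300):
    random.shuffle(data)                      # fresh mini-batches each epoch
    for i in range(0, len(data), batch_size):
        minibatch = data[i:i + batch_size]
        # gradient of the mean squared error over this mini-batch only
        grad = sum(2 * (w * x - y) * x for x, y in minibatch) / len(minibatch)
        w -= lr * grad

assert abs(w - 2.0) < 1e-3                    # converged to the true parameter
```

MBGD sits between BGD (one update per full pass, smooth but slow) and SGD (one update per sample, fast but noisy), which is why it is the usual compromise for training networks like this one.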
3. Regularization method selection contrast experiment
A regularization method adds a parameter penalty term to the loss function to reduce model complexity, improve generalization and phenological period identification performance on brand-new apple tree images, and reduce overfitting of the network model. Comparison experiments were run over the penalty function Ω(θ) and the balance parameter α; the results are shown in FIG. 5. The network trained with the L2 regularization method and α set to 0.5 performs best, with a test error of 0.052.
4. Comparison experiment for judging accuracy of phenological period of apple trees
To verify the effectiveness of the fruit tree image phenological period classification method fusing time features, phenological period classification experiments were run on the apple tree image data set with VGG11, VGG16, and network models based on image features only, time features only, and fused image and time features; FIG. 4 compares the accuracy of these methods. The final training and test accuracy of the model that fuses image and time features is higher than that of models using only image features or only time features, showing that fusing the two features is effective. In addition, each model trained on the amplified data set achieves higher accuracy, showing that the data amplification method is effective. The final identification accuracy is 94.4%.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (6)

1. An automatic apple phenological period identification method fusing image and time features, characterized by comprising the following steps:
s1, preparing a data set for training the network and resizing the images in the data set to 416 × 416 × 3;
s2, extracting features from the images of S1, further abstracting the image features with a first fully connected network and reshaping them into a one-dimensional image feature p, where the image feature extraction in S2 includes: extracting image features by alternately using 6 convolution layers and 5 pooling layers, with each convolution operation followed by a batch normalization layer and a ReLU activation layer, and the pooling layers using max pooling;
s3, analyzing image attributes, extracting image shooting time as an initial time feature vector, and processing the vector by using a second full-connection network to form a time feature t;
s4, serially concatenating the image feature p and the time feature t into a feature vector X, inputting X into a fully connected layer, and converting it by calculation into image category probabilities;
and S5, taking the category with the maximum probability value in the calculation result as the final apple tree phenological period image category.
2. The method for automatically identifying the apple phenological period by fusing image and time features as claimed in claim 1, wherein the data set acquisition method in S1 includes: collecting apple tree images in the flowering, leaf-expansion and fruiting periods both by automatic camera observation and by manual collection; stretching the gray levels of the collected images so that pixel values are distributed more uniformly across the gray-level intervals; and amplifying the data volume of the processed apple tree images by random image cropping and GridMask information deletion.
3. The method for automatically identifying the apple phenological period by fusing image and time features as claimed in claim 1, wherein in S2 the first fully connected network consists of fully connected layers with 1024 and 512 neurons that flatten the image features into a one-dimensional feature vector; its initial weights are randomly generated from a normal distribution with standard deviation 0.1, its bias terms are initialized to 0, and the processed one-dimensional image feature p has size 512 × 1.
4. The method for automatically identifying the apple phenological period by fusing image and time features as claimed in claim 1, wherein the second fully-connected network in S3 is a fully-connected neural network whose layers contain 32 and 64 neurons respectively; the initial weights of the second fully-connected network are drawn at random from a normal distribution with a standard deviation of 0.1, its bias terms are initialized to 0, and the processed one-dimensional time feature t has a size of 64 × 1.
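Claims 3 and 4 share the same initialization scheme (normal weights with standard deviation 0.1, zero biases), which can be sketched for both branches at once. The flattened image-feature input size (2048), the one-dimensional time input, and the ReLU activation are assumptions not stated in the claims.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_branch(sizes):
    """Build fully-connected layers with the claimed initialization:
    weights drawn from a normal distribution with standard deviation 0.1,
    bias terms initialized to 0."""
    layers = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, 0.1, (n_out, n_in))
        b = np.zeros(n_out)
        layers.append((W, b))
    return layers

def forward(layers, x):
    for W, b in layers:
        # ReLU is an assumption; the claims do not name the activation.
        x = np.maximum(W @ x + b, 0.0)
    return x

# First branch (claim 3): flattened image features -> 1024 -> 512.
image_branch = fc_branch([2048, 1024, 512])   # 2048 is a hypothetical flattened size
# Second branch (claim 4): time input -> 32 -> 64.
time_branch = fc_branch([1, 32, 64])          # scalar time input is an assumption

p = forward(image_branch, rng.standard_normal(2048))  # 512 x 1 image feature
t = forward(time_branch, np.array([0.5]))             # 64 x 1 time feature
```

The output sizes (512 × 1 for p, 64 × 1 for t) match the claimed feature dimensions that are later concatenated in S4.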
5. The method for automatically identifying the apple phenological period by fusing image and time features as claimed in claim 1, wherein the image feature p and the time feature t in S4 are concatenated, and the new feature vector X formed after concatenation is given by the following formula:
X = [p; t] = (p_1, …, p_512, t_1, …, t_64)^T, a (512 + 64) × 1 = 576 × 1 vector
6. The method for automatically identifying the apple phenological period by fusing image and time features as claimed in claim 1, wherein the phenological period category probability of the apple tree image in S4 is calculated as follows:
P_i = exp(h_i) / Σ_j exp(h_j)
where h_i is the final output and P_i is the class probability.
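The probability formula of claim 6 is the standard softmax function. A minimal sketch, using hypothetical network outputs h_i (the values are illustrative only):

```python
import math

def softmax(h):
    """Claim 6: P_i = exp(h_i) / sum_j exp(h_j)."""
    m = max(h)                              # subtract the max for numerical stability
    e = [math.exp(v - m) for v in h]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical final outputs h_i for three phenophase classes.
h = [2.0, 1.0, 0.1]
P = softmax(h)
```

Since exp is monotonic, the class with the largest h_i also has the largest P_i, which is exactly the category selected in step S5.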
CN202010962901.4A 2020-09-14 2020-09-14 Image and time characteristic fused apple phenological period automatic identification method Active CN112084977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010962901.4A CN112084977B (en) 2020-09-14 2020-09-14 Image and time characteristic fused apple phenological period automatic identification method


Publications (2)

Publication Number Publication Date
CN112084977A CN112084977A (en) 2020-12-15
CN112084977B (en) 2021-06-18

Family

ID=73736696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010962901.4A Active CN112084977B (en) 2020-09-14 2020-09-14 Image and time characteristic fused apple phenological period automatic identification method

Country Status (1)

Country Link
CN (1) CN112084977B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111938A (en) * 2021-04-09 2021-07-13 中国工程物理研究院电子工程研究所 Terrain classification method based on digital elevation model data
CN113111937A (en) * 2021-04-09 2021-07-13 中国工程物理研究院电子工程研究所 Image matching method based on deep learning
CN113344035A (en) * 2021-05-17 2021-09-03 捷佳润科技集团股份有限公司 Banana phenological period monitoring module and planting system
CN114529097B (en) * 2022-02-26 2023-01-24 黑龙江八一农垦大学 Multi-scale crop phenological period remote sensing dimensionality reduction prediction method
CN117197595A (en) * 2023-11-08 2023-12-08 四川省农业科学院农业信息与农村经济研究所 Fruit tree growth period identification method, device and management platform based on edge calculation

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109345555A (en) * 2018-10-15 2019-02-15 中科卫星应用德清研究院 Rice identification method based on multi-temporal multi-source remote sensing data

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN104915674B (en) * 2014-10-24 2018-12-14 北京师范大学 Method for identifying autumn grain crops by constructing high spatio-temporal resolution data from Landsat8 and MODIS
CN106651844B (en) * 2016-12-16 2021-04-02 山东锋士信息技术有限公司 Apple growth period identification method based on image analysis
US11361039B2 (en) * 2018-08-13 2022-06-14 International Business Machines Corporation Autodidactic phenological data collection and verification
CN109492665A (en) * 2018-09-28 2019-03-19 江苏省无线电科学研究所有限公司 Detection method, device and the electronic equipment of growth period duration of rice

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN109345555A (en) * 2018-10-15 2019-02-15 中科卫星应用德清研究院 Rice identification method based on multi-temporal multi-source remote sensing data

Non-Patent Citations (1)

Title
A phenology-based spectral and temporal feature selection method for crop mapping from satellite time series; Qiong Hu et al.; International Journal of Applied Earth Observation and Geoinformation; 2019-05-01; vol. 80; pp. 218-229 *

Also Published As

Publication number Publication date
CN112084977A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN112084977B (en) Image and time characteristic fused apple phenological period automatic identification method
CN111709489B (en) Citrus identification method based on improved YOLOv4
He et al. Fruit yield prediction and estimation in orchards: A state-of-the-art comprehensive review for both direct and indirect methods
Yalcin Plant phenology recognition using deep learning: Deep-Pheno
CN109344883A (en) Fruit tree diseases and pests recognition methods under a kind of complex background based on empty convolution
Selvi et al. Weed detection in agricultural fields using deep learning process
Díaz et al. Grapevine buds detection and localization in 3D space based on structure from motion and 2D image classification
CN114818909A (en) Weed detection method and device based on crop growth characteristics
Pratama et al. Deep learning-based object detection for crop monitoring in soybean fields
CN114821321A (en) Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network
Pereira et al. Pixel-based leaf segmentation from natural vineyard images using color model and threshold techniques
CN116543316B (en) Method for identifying turf in paddy field by utilizing multi-time-phase high-resolution satellite image
CN111832448A (en) Disease identification method and system for grape orchard
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
Bhuyar et al. Crop classification with multi-temporal satellite image data
Yalcin Phenology recognition using deep learning
CN114898359A (en) Litchi pest and disease detection method based on improved EfficientDet
Seelwal et al. Automatic detection of rice diseases using deep convolutional neural networks with sgd and adam
Miao et al. Crop weed identification system based on convolutional neural network
AHM et al. A deep convolutional neural network based image processing framework for monitoring the growth of soybean crops
CN111523503A (en) Apple target detection method based on improved SSD algorithm
CN110555343B (en) Method and system for extracting three elements of forest, shrub and grass in typical resource elements
Dahiya et al. An effective detection of litchi disease using deep learning
CN114154694A (en) Method for predicting plant growth state in cotton topping period based on multispectral remote sensing of unmanned aerial vehicle
Li et al. A longan yield estimation approach based on uav images and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant