CN108682006B - Non-contact type canned compost maturity judging method - Google Patents


Info

Publication number
CN108682006B
CN108682006B (application CN201810379431.1A)
Authority
CN
China
Prior art keywords
compost
data
image
network
output
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810379431.1A
Other languages
Chinese (zh)
Other versions
CN108682006A (en
Inventor
薛卫
胡雪娇
徐阳春
韦中
梅新兰
陈行健
Current Assignee
Nanjing Agricultural University
Original Assignee
Nanjing Agricultural University
Priority date
Filing date
Publication date
Application filed by Nanjing Agricultural University
Priority to CN201810379431.1A
Publication of CN108682006A
Application granted
Publication of CN108682006B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/10024 Color image (image acquisition modality)
    • G06T 2207/20032 Median filtering
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30188 Vegetation; Agriculture


Abstract

The invention discloses a non-contact maturity judging method for tank-type (canned) compost, comprising the following steps: S1, extract the image data at time t; S2, preprocess it; S3, construct a convolutional neural network (CNN) on the data from S2 to extract compost image features as a 255-dimensional feature vector; S4, combine the color histogram data of the thermal image from S1 with the feature vector output by the CNN in S3 into a real-time compost feature and normalize it; S5, predict with a long short-term memory (LSTM) network; S6, output the judgment result. This method of detecting the composting state in real time from temperature and appearance gives accurate results and is easy to operate.

Description

Non-contact type canned compost maturity judging method
Technical Field
The invention relates to a method for judging compost maturity in real time: features are extracted with deep neural networks from natural images of the compost pile and from thermal images shot by a thermal imager, and the result is then classified. It belongs to the field of agricultural informatics.
Background
Whether fertilizer is fully decomposed (matured) is very important to crop growth. Using undecomposed fertilizer not only fails to benefit the crops; it also attracts flies that lay eggs and damage the root systems, and the mass propagation of microorganisms deprives the soil of oxygen. Fully decomposed fertilizer causes no adverse environmental effects, is convenient to transport, improves soil fertility, and promotes plant growth.
Maturity is an index reflecting the degree of stabilization of the composting process; existing maturity indexes are physical, chemical, and biological. Generally, the temperature change among the physical indexes serves as an important index for evaluating compost maturity. During composting, heat generated by microorganisms decomposing organic matter raises the internal temperature, which can reach 60-70 °C or even 80 °C in the high-temperature stage before gradually falling. After stirring, the temperature rises again and then falls again. After several rounds of stirring and repeated rise and fall, the easily decomposable organic matter in the compost gradually disappears, and the temperature no longer rises even after further stirring. The traditional contact methods of judging maturity by temperature are manual measurement and burying a temperature sensor in the compost. In addition, the surface shape characteristics of decomposed and undecomposed fertilizer differ.
Disclosure of Invention
To address the problems in the background art, the natural-image features and thermal-image features of the compost surface are used as the feature description of the compost, and a deep learning method learns these image features to predict maturity.
The technical scheme is as follows: a non-contact maturity judging method for canned compost comprises the following steps:
S1, extracting the image data at time t, comprising 255 × 3-dimensional thermal-image color histogram data and RGB image data of the compost surface;
S2, preprocessing: performing median filtering on the RGB image data of the compost surface;
S3, constructing a convolutional neural network (CNN) on the data obtained in S2 to extract compost image features as a 255-dimensional feature vector;
S4, combining the 255 × 3-dimensional thermal-image color histogram data from S1 and the 255-dimensional feature vector output by the CNN in S3 into a 255 × 4-dimensional real-time compost feature, and normalizing it;
S5, predicting with the long short-term memory network LSTM, taking the data obtained in S4 as input while the input gate, forgetting gate, and output gate also take the data of the previous time as input; the LSTM prediction model is obtained by continuously training the network to update its parameter values;
S6, outputting the judgment result.
Specifically, in S1, the thermal-image color histogram data are expressed as:

Q_R(k) = N_R(k) / N
Q_G(k) = N_G(k) / N
Q_B(k) = N_B(k) / N

where N_c(k) is the number of pixels whose value on component c equals chromaticity k, N is the total number of pixels, and Q_R, Q_G, Q_B are the chrominance probability values on the R, G, B components, respectively. When shooting, the thermal imager is placed at the top of the compost tank facing the compost surface at a distance of 15-100 cm.
Specifically, in S1, the RGB image data of the compost surface at time t are extracted as:

P_t = (p_{ij}), i, j = 1, ..., n

where P_t is the RGB color image matrix of the natural compost image. When shooting, an ordinary digital camera at the top of the tank faces the compost surface at a distance of 15-100 cm, with supplementary LED lighting inside the tank; 90 pixels are taken from the middle area of the shot image, so n = 90.
Specifically, in S2, the median filtering process is: the convolution results of the filtering window at position (i, j) on the three RGB channels are summed, and the activation function is applied to obtain the value of the filtered image L(i, j).
Specifically, in S3, CNN feature extraction consists of two steps:
first, training: N labeled decomposed and undecomposed sample images are fed into the CNN, which is trained by a gradient descent algorithm to learn image features, and the network parameters are extracted;
second, the trained CNN model parameters are used for feature extraction during monitoring.
The CNN comprises 3 convolutional layers, 3 pooling layers, 2 fully connected layers, and 1 classification layer; the 255-dimensional vector of the second fully connected layer is the final feature of the image.
Specifically, in the CNN, a three-channel RGB image data matrix is input, and the output size of a convolutional layer is:

W_2 = (W_1 - F + 2P) / S + 1 (formula 1)

where W_1 is the input matrix width, W_2 the matrix width after convolution, F the convolution kernel size, P the zero-padding flag (1 if zero padding is used, 0 otherwise), and S the stride.
The output size of a pooling layer is:

W_2 = (W_1 - F) / S + 1 (formula 2)

where W_1 is the input matrix width, W_2 the matrix width after pooling, F the filter size, and S the stride.
Specifically, in S4, the normalization function is:

X_1 = (X_0 - μ) / σ (formula 3)

where X_0 and X_1 are the real-time compost feature vectors before and after normalization, μ is the mean of all sample data, and σ is the standard deviation of all sample data.
Specifically, in S5, the long short-term memory network LSTM comprises three layers: an input layer, a hidden layer, and an output layer, with input dimension 255 × 4 = 1020, 500 hidden-layer neurons, and a single output. The network parameters are trained on first use: the compost-state feature vector x_t at time t is input; meanwhile the forgetting-gate data is multiplied by the forgetting coefficient f_t, the input data by the input-gate coefficient i_t, and the output-gate data by the output coefficient o_t; the data are integrated through the tanh activation function to obtain the network output. The network then back-propagates against the labeled values, computes the network error, and reduces it by continuously training the network to update parameter values, yielding the LSTM prediction model.
Specifically, the back-propagation algorithm is as follows:
W_f, b_f, W_i, b_i, W_o, b_o are network parameters with random initial values; the network is trained continuously, and the gradient of each parameter is computed by back propagation to update its value.
In the LSTM, the gradient of the hidden state h^(t), δ_h^(t) = ∂L/∂h^(t), and the gradient of the cell state C^(t), δ_C^(t) = ∂L/∂C^(t), are propagated backward step by step, and differentiating the loss function yields the gradient formulas.
δ_h^(t) is determined by the output gradient error of the layer at time t, namely:

δ_h^(t) = ∂L/∂h^(t)

The backward gradient error δ_C^(t) consists of two parts: the gradient error coming from δ_C^(t+1), and the gradient error passed back from h^(t) of the same layer:

δ_C^(t) = δ_C^(t+1) ⊙ f^(t+1) + δ_h^(t) ⊙ o^(t) ⊙ (1 - tanh²(C^(t)))

From δ_h^(t) and δ_C^(t), the gradients of the parameters are readily obtained.
Specifically, the back-propagation procedure is:
1) Initialize the forgetting-gate parameters W_f, b_f, the output-gate parameters W_o, b_o, the input-gate parameters W_i, b_i, and the index output parameters V, c.
2) Preprocess the data: convert the picture data into a processable tensor and normalize it.
3) for iter = 1 to number of training iteration steps
4)   for start = 1 to training-set data length
5)     compute the predicted value ŷ^(t) at time t by the forward-propagation algorithm
6)     compute the loss function L
7)     compute the partial derivatives of all hidden-layer nodes from those of the output-layer nodes by the chain rule
8)     update the values of W_f, b_f, W_i, b_i, W_o, b_o step by step through the optimization function
     end for
   end for
End.
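The loop above maps directly onto a modern autodiff framework, where steps 5)-8) become forward pass, loss, backward pass, and optimizer step. The following is a minimal PyTorch sketch, not the patent's implementation: the data tensors are random stand-ins, and the hidden size is shrunk from 500 to 32 to keep the example light.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in data: 8 sequences of 10 time steps with
# 1020-dim compost features, labelled decomposed (1) / not (0).
X = torch.randn(8, 10, 1020)
y = torch.randint(0, 2, (8,))

lstm = nn.LSTM(input_size=1020, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)  # stands in for the V, c index output parameters
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

losses = []
for step in range(5):              # 3) training iteration steps
    out, _ = lstm(X)               # 5) forward propagation
    logits = head(out[:, -1, :])   #    predict from the last time step
    loss = loss_fn(logits, y)      # 6) loss function L
    opt.zero_grad()
    loss.backward()                # 7) chain-rule partial derivatives
    opt.step()                     # 8) update the gate parameters
    losses.append(loss.item())
```

Here `loss.backward()` performs exactly the backward propagation of δ_h^(t) and δ_C^(t) that the patent derives by hand.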
Specifically, in S6, LSTM forward propagation outputs the prediction of fertilizer maturity: the compost-state feature vector x_t at time t is input to the network, which outputs the predicted value ŷ^(t).
Advantages of the invention
The research object is tank-type composting, which is closed, so the two traditional contact temperature-measurement modes are difficult to operate. The thermal imager is a non-contact temperature measurement method that shoots a thermal image of an object's surface in real time, with different colors representing different temperatures. Although measuring only the surface temperature loses some information compared with manual measurement or a sensor buried in the compost, a large number of measurement points can still reflect the compost temperature and its changes, and the measurement is accurate and easy to install. For tank composting, chemical and biological methods of judging maturity are difficult to operate, and a single evaluation index gives large judgment errors. The non-contact method of detecting the composting state in real time from temperature and appearance gives accurate results and is easy to operate.
Drawings
FIG. 1 is a process for judging maturity in accordance with the present invention.
Fig. 2 is a CNN filtering schematic.
Fig. 3 is a diagram of a CNN network architecture.
FIG. 4 is a view showing an internal structure of the LSTM.
FIG. 5 is a natural image of the surface of compost in the example.
Detailed Description
The invention is further illustrated by the following example, without limiting its scope:
Taking a composting plant as an example, the raw materials are animal manure and vegetable tailings. A camera and a thermal imager installed at the top of a fermentation tank monitored one production cycle, with a data acquisition interval of 2 hours, collecting 1500 thermal and compost-surface images in total: 1200 training samples and 300 test samples.
With reference to fig. 1, a non-contact type canned compost maturity judging method comprises the following steps:
S1, extracting the image data at time t, comprising 255 × 3-dimensional thermal-image color histogram data and RGB image data of the compost surface.
A camera with a network communication function is installed to shoot natural compost images, and a thermal imager to shoot thermal images. A program collects the thermal image name, the natural image name, the time, and other data and stores them in a database for scheduling; the images are stored in JPG format.
S1-1, extracting the color histogram data of the thermal image at time t:
The thermal imager is placed at the top of the compost tank facing the compost surface at a distance of 15-100 cm and shoots the thermal image of the compost surface at time t. A color histogram describes the overall profile of the thermal image on the three RGB components, giving the proportions of the different chromaticities in the image:

Q_R(k) = N_R(k) / N
Q_G(k) = N_G(k) / N
Q_B(k) = N_B(k) / N

The value range of each RGB component is [0, 255], so each component has 255-dimensional data; Q_R, Q_G, Q_B are the chrominance probability values on the R, G, B components, respectively.
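The per-channel chrominance probabilities described above can be sketched with NumPy as follows. The function name and the use of the full 256-level range are my own choices (the patent consistently speaks of 255-dimensional data); this is an illustrative sketch, not the patent's code.

```python
import numpy as np

def thermal_histograms(img):
    """Per-channel chrominance probability histograms Q_R, Q_G, Q_B.

    img: H x W x 3 uint8 RGB array of the thermal image.
    Returns a (3, 256) array; each row sums to 1 and row c gives the
    fraction of pixels at each chromaticity level on channel c.
    """
    hists = []
    for c in range(3):
        counts = np.bincount(img[..., c].ravel(), minlength=256)
        hists.append(counts / counts.sum())
    return np.stack(hists)
```

Concatenating the three rows gives the 3 × 256-level histogram feature that S4 later combines with the CNN feature vector.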
S1-2, extracting the RGB image data of the compost surface at time t:

P_t = (p_{ij}), i, j = 1, ..., n

P_t is the RGB color image matrix of the natural compost image. When shooting, an ordinary digital camera at the top of the tank faces the compost surface at a distance of 15-100 cm, with supplementary LED lighting inside the tank; 90 pixels are taken from the middle area of the shot image, so n = 90.
S2, preprocessing: performing median filtering on the RGB image data of the compost surface.
A trained convolution filtering window performs the median filtering of the compost-surface RGB images. As shown in fig. 2, taking a 3 × 3 window as an example, the value of the filtered image L(i, j) is obtained by summing the convolution results of the filtering windows at (i, j) on the three RGB channels and applying the activation function. The resulting image is fed into a fully connected layer to obtain a 255-dimensional feature vector.
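For comparison with the learned-window variant described above, a plain per-channel median filter can be sketched in pure NumPy. This is a generic sketch of standard median filtering, not the patent's trained convolution window; the function names are my own.

```python
import numpy as np

def median_filter_gray(channel, k=3):
    """k x k median filter on one 2-D channel, with edge padding."""
    pad = k // 2
    padded = np.pad(channel, pad, mode="edge")
    H, W = channel.shape
    out = np.empty_like(channel)
    for i in range(H):
        for j in range(W):
            # Replace each pixel with the median of its k x k neighborhood.
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def median_filter_rgb(img, k=3):
    """Apply the median filter to each of the three RGB channels."""
    return np.stack([median_filter_gray(img[..., c], k) for c in range(3)],
                    axis=-1)
```

Median filtering suppresses impulse (salt-and-pepper) noise, which is why it is a natural preprocessing step before feeding the surface images to the CNN.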
S3, constructing a convolutional neural network (CNN) on the data obtained in S2 to extract compost image features as a 255-dimensional feature vector:
A convolutional neural network mimics the way the human brain processes visual information from low-order to high-order features, using several successive convolutional layers to extract progressively more complex features. The front convolutional layers detect low-order features; as layers are added, the back of the network combines them into more complex representations of the image. The first step is training: N labeled decomposed and undecomposed sample images are fed into the CNN, which is trained by gradient descent to learn image features, and the network parameters are extracted. The trained CNN model parameters are then used for feature extraction during monitoring.
The convolutional neural network comprises several layers of neural networks, each composed of several planes with many independent neurons: an input module, convolutional layers, pooling layers, fully connected layers, and an output module. The CNN used here for compost image feature extraction comprises 3 convolutional layers, 3 pooling layers, 2 fully connected layers, and 1 classification layer; the 255-dimensional vector of the second fully connected layer is the final feature of the image. The network structure is shown in fig. 3.
The input is the three-channel RGB image data matrix. The first convolutional layer has 5 × 5 kernels and outputs 32 feature maps of size 88 × 88; the first pooling layer has a 2 × 2 filter and outputs 32 feature maps of size 44 × 44. The second convolutional layer has 3 × 3 kernels and outputs 64 feature maps of size 44 × 44; the second pooling layer has a 2 × 2 filter and outputs 64 feature maps of size 22 × 22. The third convolutional layer has 3 × 3 kernels and outputs 128 feature maps of size 22 × 22; the third pooling layer has a 2 × 2 filter and outputs 128 feature maps of size 11 × 11. The last two fully connected layers have 15488 and 255 dimensions, respectively.
The output size of a convolutional layer is:

W_2 = (W_1 - F + 2P) / S + 1 (formula 1)

where W_1 is the input matrix width, W_2 the matrix width after convolution, F the convolution kernel size, P the zero-padding flag (1 if zero padding is used, 0 otherwise), and S the stride.
The output size of a pooling layer is:

W_2 = (W_1 - F) / S + 1 (formula 2)

where W_1 is the input matrix width, W_2 the matrix width after pooling, F the filter size, and S the stride.
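Formulas 1 and 2 can be checked against the layer sizes listed above. Assuming a 90 × 90 input (the description takes 90 pixels from the middle area, n = 90), stride 1 convolutions with padding flag P = 1, and stride 2 pooling, the chain reproduces every stated size, including the 15488-dimensional first fully connected layer:

```python
def conv_out(w1, f, p, s):
    """Formula 1: conv output width; p is 1 when zero padding is used, else 0."""
    return (w1 - f + 2 * p) // s + 1

def pool_out(w1, f, s):
    """Formula 2: pooling output width."""
    return (w1 - f) // s + 1

# Trace the network described above from a 90 x 90 input:
w = conv_out(90, 5, 1, 1)   # first conv, 5x5 kernel  -> 88
w = pool_out(w, 2, 2)       # first pool, 2x2         -> 44
w = conv_out(w, 3, 1, 1)    # second conv, 3x3 kernel -> 44
w = pool_out(w, 2, 2)       # second pool, 2x2        -> 22
w = conv_out(w, 3, 1, 1)    # third conv, 3x3 kernel  -> 22
w = pool_out(w, 2, 2)       # third pool, 2x2         -> 11
print(128 * w * w)          # 128 feature maps of 11 x 11 -> 15488
```

That the trace lands exactly on 15488 confirms the stated architecture is internally consistent.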
S4, combining the 255 × 3-dimensional thermal-image color histogram data from S1 and the 255-dimensional feature vector output by the CNN in S3 into a 255 × 4-dimensional real-time compost feature, and normalizing it:
At this point, the collected thermal-image features are three 255-dimensional vectors of color-distribution probabilities (carrying the temperature data) on the R, G, B components, and the natural-image feature is one 255-dimensional vector; together they form the real-time compost feature. Considering the difference in magnitude between dimensions, the data undergo standard-deviation normalization before being fed to the model for training. The processed data fit a normal distribution with mean 0 and standard deviation 1, and the normalization function is:

X_1 = (X_0 - μ) / σ (formula 3)

where X_0 and X_1 are the real-time compost feature vectors before and after normalization, μ is the mean of all sample data, and σ is the standard deviation of all sample data.
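Formula 3 is standard z-score normalization, a one-liner with NumPy. A minimal sketch, computing μ and σ per feature dimension over all samples as the text describes:

```python
import numpy as np

def zscore(features):
    """Standard-deviation normalization (formula 3): X1 = (X0 - mu) / sigma.

    features: (num_samples, num_dims) array; mu and sigma are computed
    per dimension over all samples.
    """
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    return (features - mu) / sigma
```

After this step each of the 1020 feature dimensions has mean 0 and standard deviation 1, so no single dimension dominates the LSTM input by magnitude alone.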
S5, predicting with the long short-term memory network LSTM, taking the data obtained in S4 as input while the input gate, forgetting gate, and output gate also take the data of the previous time as input; the LSTM prediction model is obtained by continuously training the network to update its parameter values:
Deep learning models have developed rapidly in recent years. The long short-term memory network (LSTM) builds the concepts of time sequence and forgetting into its structure, which makes it highly adaptable for analyzing time series and data with long time intervals. The LSTM adjusts the weight of its self-loop through an input gate, a forgetting gate, and an output gate, so the integration scale can change dynamically at different times even with fixed model parameters, avoiding vanishing or exploding gradients. At a given time, the LSTM receives the compost-state input vector, while the input gate, forgetting gate, and output gate take the important data of the previous time as input and save the current important data. The data update the node states of the hidden layer through an activation function, and the output layer makes the prediction. The hidden state of each layer is passed backward in time, so it stores the historical composting information, and the relation between the history and the current information can be mined.
1) Working principle of the forgetting gate
When the network information at time t-1 is passed into the network at time t, its degree of forgetting is determined first: the memory state before time t is multiplied by an attenuation coefficient between 0 and 1, and the memory learned at time t is then added to the memory unit of the network at time t+1. The attenuation coefficient is computed by combining the network output h_{t-1} at time t-1 with the network input x_t of this step, applying a linear transformation, and mapping the result into (0, 1) through the sigmoid activation function; it is denoted f_t and computed as:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
2) Working principle of the input gate
First, the content learned at the current time, the candidate memory C̃_t, and its corresponding coefficient i_t are computed. C̃_t is obtained by a linear transformation followed by the tanh activation function:

C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)

The coefficient i_t is computed in the same way as the forgetting-gate coefficient:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)

Finally, the memory C_{t-1} at time t-1 multiplied by the forgetting coefficient f_t, plus the memory C̃_t learned at time t multiplied by its coefficient i_t, gives the memory state C_t at time t:

C_t = f_t * C_{t-1} + i_t * C̃_t
3) Working principle of the output gate
First, the output-gate coefficient o_t is computed in the same way as the forgetting coefficient; this coefficient determines the output, from which the output value h_t is obtained. o_t and the network output h_t are computed as:

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)

The prediction output at the current sequence index is then updated, where V is the coefficient vector and c the offset:

ŷ^(t) = σ(V h^(t) + c)
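The gate equations above can be collected into a single NumPy step function. This is a generic sketch of the standard LSTM cell the text describes, with hypothetical parameter names; each weight W maps the concatenated [h_{t-1}, x_t] vector:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step following the gate equations above.

    params maps "W_f", "b_f", "W_i", "b_i", "W_c", "b_c", "W_o", "b_o"
    to arrays; each W has shape (n_hidden, n_hidden + n_input).
    """
    z = np.concatenate([h_prev, x_t])                      # [h_{t-1}, x_t]
    f_t = sigmoid(params["W_f"] @ z + params["b_f"])       # forgetting gate
    i_t = sigmoid(params["W_i"] @ z + params["b_i"])       # input gate
    c_tilde = np.tanh(params["W_c"] @ z + params["b_c"])   # candidate memory
    c_t = f_t * c_prev + i_t * c_tilde                     # new memory state
    o_t = sigmoid(params["W_o"] @ z + params["b_o"])       # output gate
    h_t = o_t * np.tanh(c_t)                               # network output
    return h_t, c_t
```

Because o_t lies in (0, 1) and tanh in (-1, 1), each component of h_t is bounded in magnitude below 1, which is part of what keeps the recurrence stable.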
the first use requires training LSTM network parameters.
With reference to fig. 4, the long-short term memory neural network LSTM used in the compost maturity prediction method of the present invention includes three layers: input, hidden and output layers, input dimension 255 × 4: 1020 dimension, hidden layer neuron 500, time step 10.
LSTM network parameters are set as follows:
LSTM(input_size=1020,hidden_size=500,num_layers=2,batch_first=True)
Linear(hidden_size=500,n_class=2)
where input_size is the input data dimension; hidden_size is the output dimension; num_layers is the number of stacked LSTM layers (default 1); batch_first: since nn.LSTM() accepts input shaped (sequence length, batch, input dimension), batch_first=True changes the expected input to (batch, sequence length, input dimension); n_class is the number of classes.
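The two configuration lines above can be wrapped into a complete PyTorch module. This is a sketch consistent with the stated parameters (1020-dim input, 500 hidden units, 2 stacked layers, 2 classes), not the patent's actual code; the class name and the choice to classify from the last time step are my own:

```python
import torch
import torch.nn as nn

class MaturityLSTM(nn.Module):
    """LSTM classifier per the configuration above: 1020-dim compost
    feature input, 500 hidden units, binary decomposed / undecomposed output."""
    def __init__(self, input_size=1020, hidden_size=500, n_class=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_class)

    def forward(self, x):                  # x: (batch, seq_len, 1020)
        out, _ = self.lstm(x)              # out: (batch, seq_len, hidden)
        return self.fc(out[:, -1, :])      # classify from the last time step

model = MaturityLSTM()
y = model(torch.zeros(1, 10, 1020))        # time step 10, as in the text
```

A forward pass on a (1, 10, 1020) input yields a (1, 2) logit tensor, one score per maturity class.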
S6, outputting the judgment result: in the detection state, LSTM forward propagation outputs the prediction of fertilizer maturity; the composting information x_t at time t is input to the network, which outputs the predicted value ŷ^(t).
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (9)

1. A non-contact maturity judging method for canned compost, characterized by comprising the following steps:
S1, extracting the image data at time t, comprising 255 × 3-dimensional thermal-image color histogram data and RGB image data of the compost surface;
S2, preprocessing: performing median filtering on the RGB image data of the compost surface;
S3, constructing a convolutional neural network (CNN) on the data obtained in S2 to extract compost image features as a 255-dimensional feature vector; in S3, CNN feature extraction consists of two steps:
first, training: N labeled decomposed and undecomposed sample images are fed into the CNN, which is trained by a gradient descent algorithm to learn image features, and the network parameters are extracted;
second, the trained CNN model parameters are used for feature extraction during monitoring; the CNN comprises 3 convolutional layers, 3 pooling layers, 2 fully connected layers, and 1 classification layer, and the 255-dimensional vector of the second fully connected layer is the final feature of the image;
in the CNN, a three-channel RGB image data matrix is input, and the output size of a convolutional layer is:

W_2 = (W_1 - F + 2P) / S + 1 (formula 1)

where W_1 is the input matrix width, W_2 the matrix width after convolution, F the convolution kernel size, P the zero-padding flag (1 if zero padding is used, 0 otherwise), and S the stride;
the output size of a pooling layer is:

W_2 = (W_1 - F) / S + 1 (formula 2)

where W_1 is the input matrix width, W_2 the matrix width after pooling, F the filter size, and S the stride;
s4, combining the 255 x 3 dimensional thermography color histogram data from S1 with the 255-dimensional feature vector output by the image feature extraction convolutional neural network in S3 to form a 255 x 4 dimensional real-time compost feature vector, and normalizing it;
s5, performing prediction based on the long short-term memory network LSTM, taking the data obtained in S4 as the input quantity supplied to the input gate, the forget gate and the output gate; the LSTM prediction model is obtained by iteratively training the network to update the parameter values;
and S6, outputting a judgment result.
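As an illustration of formulas 1 and 2 in the claim above, the layer output-size arithmetic can be sketched in Python; the 32-pixel input width, 3x3 kernel and stride values below are hypothetical examples, not taken from the patent:

```python
# Layer output-size arithmetic from formulas 1 and 2:
#   convolution: W2 = (W1 - F + 2P)/S + 1, with P = 1 iff zero padding is used
#   pooling:     W2 = (W1 - F)/S + 1

def conv_out(w1, f, s, zero_pad=True):
    """Output width of a convolution layer (formula 1)."""
    p = 1 if zero_pad else 0
    return (w1 - f + 2 * p) // s + 1

def pool_out(w1, f, s):
    """Output width of a pooling layer (formula 2)."""
    return (w1 - f) // s + 1

# Hypothetical example: 32-pixel-wide input, 3x3 kernel, stride 1, zero padding
print(conv_out(32, 3, 1))   # -> 32 (width preserved)
print(pool_out(32, 2, 2))   # -> 16 (width halved by 2x2, stride-2 pooling)
```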
2. The method according to claim 1, wherein in S1, the thermographic color histogram data is represented as:
QR(k) = nR(k)/n (k = 0, 1, ..., 254)
QG(k) = nG(k)/n (k = 0, 1, ..., 254)
QB(k) = nB(k)/n (k = 0, 1, ..., 254)
in the formulas, QR, QG and QB are the probability values on the R, G and B components, respectively, nR(k), nG(k) and nB(k) are the numbers of pixels whose R, G or B value equals k, and n is the total number of pixels; when shooting, the thermal imager is placed at the top of the compost tank facing the compost surface, at a distance of 15-100 cm from it.
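The per-channel probability histogram of claim 2 can be sketched as follows; only one channel is shown, the 255-bin range follows the claim's 255-dimensional histogram, and the pixel values are toy data:

```python
# Per-channel color histogram: Q_c(k) is the fraction of pixels whose
# channel-c intensity equals k (k = 0..254, giving 255 bins as in the claim).

def channel_histogram(values, bins=255):
    n = len(values)
    hist = [0.0] * bins
    for v in values:
        hist[v] += 1.0 / n   # each pixel contributes probability 1/n
    return hist

r_channel = [0, 0, 10, 254]           # hypothetical R-channel intensities
q_r = channel_histogram(r_channel)
print(q_r[0])                         # -> 0.5 (two of four pixels have value 0)
```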
3. The method of claim 1, wherein in S1, the RGB image data of the compost surface at time t is extracted by the following formula:
Pt = [p1, p2, ..., pn]
where Pt is the RGB color image matrix of the natural compost image and pi is the RGB value of the i-th sampled pixel; when shooting, an ordinary digital camera at the top of the tank faces the compost surface, supplementary light is provided inside the tank by an LED lamp, the camera is 15-100 cm from the compost surface, 90 pixels are taken from the middle area of the captured image, and n = 90.
4. The method according to claim 1, wherein in S2 the median filtering process is: the convolution results at position (i, j) on the three channels of the RGB image are summed through the filtering window, and the activation function value is then taken to obtain the value L(i, j) of the filtered image.
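A literal sketch of the claim-4 filtering step: at each position (i, j) the window results on the three RGB channels are summed, then an activation is applied to give L(i, j). The 3x3 mean window and the tanh activation are assumptions, since the claim does not fix the kernel or the activation function:

```python
import math

def filter_pixel(img, i, j, k=1):
    """Sum the (2k+1)x(2k+1) window over all three channels, then activate.
    Mean window and tanh activation are illustrative assumptions."""
    total = 0.0
    count = 0
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            r, g, b = img[i + di][j + dj]
            total += r + g + b
            count += 3
    return math.tanh(total / count / 255.0)   # scale to [0, 1] before tanh

# 3x3 toy image, all mid-grey pixels; filter the centre position (1, 1)
img = [[(128, 128, 128)] * 3 for _ in range(3)]
val = filter_pixel(img, 1, 1)
```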
5. The method according to claim 1, wherein in S4, the normalization function is as follows:
X1 = (X0 - μ)/σ (formula 3)
wherein X0 and X1 are the real-time feature vectors before and after normalization, respectively, μ is the mean of all sample data, and σ is the standard deviation of all sample data.
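A minimal sketch of the claim-5 z-score normalization, using the population mean and standard deviation over all sample data:

```python
# Z-score normalization X1 = (X0 - mu) / sigma (formula 3), where mu and
# sigma are the mean and population standard deviation of all sample data.

import math

def zscore(data):
    mu = sum(data) / len(data)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
    return [(x - mu) / sigma for x in data]

z = zscore([1.0, 2.0, 3.0])
print(z)   # symmetric values centred on 0
```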
6. The method of claim 1, wherein in S5 the long short-term memory network LSTM comprises three layers: the input dimension is 255 x 4 = 1020, the hidden layer has 500 neurons, and there is a single output; the network parameters are trained when the LSTM is first used; the composting state feature vector xt at time t is fed into the network, where the forget gate multiplies its data by a forgetting coefficient ft, the input gate multiplies the input data by an input attenuation coefficient it, and the output gate multiplies its data by an output coefficient ot; the data is integrated through the tanh activation function to obtain the network output; the network then back-propagates against the labelled value, computes the network error, and reduces the error by iteratively training the network to update the parameter values, yielding the LSTM prediction model.
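A scalar, single-unit sketch of the gate arithmetic described in claim 6: the forgetting coefficient ft scales the old cell state, the input attenuation coefficient it scales the candidate, the output coefficient ot scales the tanh-integrated output. All weight values, and the recurrent-weight naming (uf, ui, uo, uc), are illustrative assumptions rather than the patent's parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    f_t = sigmoid(p["wf"] * x_t + p["uf"] * h_prev + p["bf"])  # forgetting coefficient
    i_t = sigmoid(p["wi"] * x_t + p["ui"] * h_prev + p["bi"])  # input attenuation coefficient
    o_t = sigmoid(p["wo"] * x_t + p["uo"] * h_prev + p["bo"])  # output coefficient
    c_cand = math.tanh(p["wc"] * x_t + p["uc"] * h_prev + p["bc"])
    c_t = f_t * c_prev + i_t * c_cand      # updated cell state
    h_t = o_t * math.tanh(c_t)             # tanh-integrated network output
    return h_t, c_t

# Illustrative scalar weights; real parameters come from training (claim 8)
params = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                           "wo", "uo", "bo", "wc", "uc", "bc")}
h, c = lstm_step(1.0, 0.0, 0.0, params)
```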
7. The method of claim 6, wherein the back propagation algorithm is:
Wf, bf, Wi, bi, Wo and bo are network parameters whose initial values are random; the network is trained iteratively, and the gradient values of the parameters are calculated by the back propagation algorithm to update the parameter values;
in the LSTM, the gradient of the hidden state h(t), δh(t) = ∂L/∂h(t), and the gradient of the cell state C(t), δC(t) = ∂L/∂C(t), are propagated backward step by step, and differentiating the loss function yields the gradient formulas:
at the final time step τ, the gradient is determined by the output gradient error of that layer, namely:
δh(τ) = V^T (ŷ(τ) - y(τ))
δC(τ) = δh(τ) ⊙ o(τ) ⊙ (1 - tanh²(C(τ)))
the reverse gradient error δC(t) of an earlier time step consists of two parts, the gradient error of δC(t+1) and the gradient error passed back from h(t), with the gradient formula:
δC(t) = δC(t+1) ⊙ f(t+1) + δh(t) ⊙ o(t) ⊙ (1 - tanh²(C(t)))
based on δh(t) and δC(t), the gradients of the parameters are easily obtained.
8. The method of claim 6, wherein the back propagation algorithm process is:
1) initialize the forget gate parameters Wf and bf, the output gate parameters Wo and bo, the input gate parameters Wi and bi, and the output parameters V and c
2) preprocess the data: convert the picture data into a processable Tensor and normalize it
3) for iter = 1 to number of training iteration steps
4) for start = 1 to training set data length
5) calculate the predicted value ŷ(t) at time t using the forward propagation algorithm
6) calculate the loss function L
7) calculate the partial derivative values of all hidden layer nodes from the partial derivative values of the output layer nodes by the chain rule
8) update the parameter values of Wf, bf, Wi, bi, Wo and bo step by step through the optimization function
end of inner loop
end of outer loop
end.
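The control flow of claim 8 can be sketched as the nested loop below; a stand-in linear model y = w*x + b with squared-error loss replaces the LSTM forward pass and chain-rule update, since the claim fixes only the loop structure:

```python
# Nested training loop mirroring claim 8's steps 3)-8), with a stand-in
# one-parameter-pair model instead of the LSTM.

def train(samples, epochs=200, lr=0.05):
    w, b = 0.0, 0.0                       # step 1: initialize parameters
    for _ in range(epochs):               # step 3: training iteration steps
        for x, y in samples:              # step 4: loop over the training set
            y_hat = w * x + b             # step 5: forward propagation
            err = y_hat - y               # step 6: dL/dy_hat for L = 0.5*err**2
            w -= lr * err * x             # steps 7-8: chain-rule gradients and
            b -= lr * err                 #   step-by-step parameter update
    return w, b

# Hypothetical data generated by y = 2x + 1; the loop recovers w and b.
w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```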
9. The method of claim 1, wherein in S6, LSTM forward propagation is used to output the predicted compost maturity: the composting state feature vector xt at time t is input into the network, and the predicted value ŷ(t) is output.
CN201810379431.1A 2018-04-25 2018-04-25 Non-contact type canned compost maturity judging method Expired - Fee Related CN108682006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810379431.1A CN108682006B (en) 2018-04-25 2018-04-25 Non-contact type canned compost maturity judging method


Publications (2)

Publication Number Publication Date
CN108682006A CN108682006A (en) 2018-10-19
CN108682006B true CN108682006B (en) 2021-07-20

Family

ID=63801750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810379431.1A Expired - Fee Related CN108682006B (en) 2018-04-25 2018-04-25 Non-contact type canned compost maturity judging method

Country Status (1)

Country Link
CN (1) CN108682006B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059759A (en) * 2019-04-25 2019-07-26 南京农业大学 Compost maturity prediction technique based on weighting LBP- color moment
CN110377691A (en) * 2019-07-23 2019-10-25 上海应用技术大学 Method, apparatus, equipment and the storage medium of text classification
CN111028893B (en) * 2019-10-28 2023-09-26 山东天岳先进科技股份有限公司 Crystal growth prediction method and device
CN112633292A (en) * 2020-09-01 2021-04-09 广东电网有限责任公司 Method for measuring temperature of oxide layer on metal surface
CN112378527B (en) * 2020-11-27 2022-06-21 深圳市同为数码科技股份有限公司 Method and device for improving non-contact temperature measurement precision
CN113139342B (en) * 2021-04-23 2022-12-27 上海交通大学 Aerobic compost monitoring system and result prediction method
CN117976081A (en) * 2024-04-02 2024-05-03 北京市农林科学院 Composting formula method, system, equipment and medium based on model predictive optimization

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106442381A (en) * 2016-07-06 2017-02-22 中国农业大学 Characterization method for biogas residue aerobic composting fermentation maturity
CN107463919A (en) * 2017-08-18 2017-12-12 深圳市唯特视科技有限公司 A kind of method that human facial expression recognition is carried out based on depth 3D convolutional neural networks
CN107590799A (en) * 2017-08-25 2018-01-16 山东师范大学 The recognition methods of banana maturity period and device based on depth convolutional neural networks
CN107862326A (en) * 2017-10-30 2018-03-30 昆明理工大学 A kind of transparent apple recognition methods based on full convolutional neural networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Experimental and evaluation research on compost maturity of municipal solid waste; Ning Shangxiao; China Master's Theses Full-text Database, Agricultural Science and Technology; 2012-12-15; pp. 4, 17, 39-40 *

Also Published As

Publication number Publication date
CN108682006A (en) 2018-10-19

Similar Documents

Publication Publication Date Title
CN108682006B (en) Non-contact type canned compost maturity judging method
Abdalla et al. Nutrient status diagnosis of infield oilseed rape via deep learning-enabled dynamic model
CN109325495B (en) Crop image segmentation system and method based on deep neural network modeling
CN110287944A (en) The crop pests monitoring method of multi-spectrum remote sensing image based on deep learning
Concepcion et al. Estimation of photosynthetic growth signature at the canopy scale using new genetic algorithm-modified visible band triangular greenness index
Azimi et al. Intelligent monitoring of stress induced by water deficiency in plants using deep learning
CN113011397A (en) Multi-factor cyanobacterial bloom prediction method based on remote sensing image 4D-FractalNet
Concepcion et al. Tomato septoria leaf spot necrotic and chlorotic regions computational assessment using artificial bee colony-optimized leaf disease index
CN113705937B (en) Farmland yield estimation method combining machine vision and crop model
CN117036088A (en) Data acquisition and analysis method for identifying growth situation of greening plants by AI
Wang et al. Digital image processing technology under backpropagation neural network and K-Means Clustering algorithm on nitrogen utilization rate of Chinese cabbages
US20230073541A1 (en) System and method for performing machine vision recognition of dynamic objects
CN115115830A (en) Improved Transformer-based livestock image instance segmentation method
CN109063660A (en) A kind of crop recognition methods based on multispectral satellite image
Fukano et al. GIS-based analysis for UAV-supported field experiments reveals soybean traits associated with rotational benefit
Bai et al. Estimation of soybean yield parameters under lodging conditions using RGB information from unmanned aerial vehicles
CN114898405A (en) Portable broiler chicken abnormity monitoring system based on edge calculation
Islam et al. HortNet417v1—A deep-learning architecture for the automatic detection of pot-cultivated peach plant water stress
Alajas et al. Indirect prediction of aquaponic water nitrate concentration using hybrid genetic algorithm and recurrent neural network
Chen et al. Integrating a crop growth model and radiative transfer model to improve estimation of crop traits based on deep learning
Hati et al. AI-driven pheno-parenting: a deep learning based plant phenotyping trait analysis model on a novel soilless farming dataset
Bhadra et al. End-to-end 3D CNN for plot-scale soybean yield prediction using multitemporal UAV-based RGB images
Khoshrou et al. Deep learning prediction of chlorophyll content in tomato leaves
Ouf A review on the relevant applications of machine learning in agriculture
Araneta et al. Controlled Environment for Spinach Cultured Plant with Health Analysis using Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210720