CN110598841A - Flower disease analysis method based on multi-input convolutional neural network - Google Patents

Flower disease analysis method based on multi-input convolutional neural network

Info

Publication number
CN110598841A
CN110598841A CN201810626283.9A
Authority
CN
China
Prior art keywords
flower
input
neural network
disease
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810626283.9A
Other languages
Chinese (zh)
Inventor
许蕾
林君宇
李奕萱
罗雯波
郑聪尉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201810626283.9A priority Critical patent/CN110598841A/en
Publication of CN110598841A publication Critical patent/CN110598841A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention provides a method for diagnosing flower diseases for smartphone users, based on a multi-input convolutional neural network (Multi-Input Convolutional Neural Network). Several multi-input convolutional neural network models are trained; exploiting the portability and ease of operation of the mobile phone, the user photographs the same flower from different angles, or records a video, as the input of the network. A flower model predicts the flower species, a disease classification model predicts the disease type, and disease details and a treatment scheme are returned from a database according to the results. The method is simple to operate, adds no time overhead, and can effectively improve identification accuracy.

Description

Flower disease analysis method based on multi-input convolutional neural network
Technical Field
The invention belongs to the technical field of computers, in particular to the field of computer vision. It provides a method for detecting flower diseases with a multi-input convolutional neural network, intended to assist flower cultivation.
Background
The flower industry plays an important role in beautifying the environment, adjusting the industrial structure, raising economic levels, and promoting the construction of an ecological civilization. China's flower industry is developing rapidly. According to statistics, the national flower planting area in 2011 was 1.02 million hectares and total flower sales were 106.8 billion yuan; total flower sales in 2016 reached 138.9 billion yuan. According to the national flower industry development plan (2011-.
The expanding scale of the flower industry keeps increasing the number of people who grow and enjoy flowers, but because of high labor costs or a lack of professional knowledge, many flower sellers and consumers are unaware of, or even ignore, the growth state of their flowers, which leads to the death of the plants and causes losses. If an economical and fast way to analyze the state of flowers could be provided, in particular to diagnose flower diseases, growers could discover problems and take corresponding measures as early as possible, ensuring that the flowers grow well.
In the mobile era, smartphones and the mobile internet are highly popular, and the mobile application market already offers several programs (such as Flower Identification, Flower Partner, Microsoft Flower Identification, Green Finger, and the like) that provide free photo-based flower identification: the user takes and uploads a flower picture, and the program identifies the species of the flower in the picture. Behind such photo-recognition software lies the Convolutional Neural Network (CNN), a multilayer perception model inspired by the human nervous system, characterized by parameter sharing, sparse connectivity, and translation invariance. A convolutional neural network contains three main parts: the convolutional layer (Convolutional Layer), which extracts features from the previous layer's input through several convolution kernels (Filters) and outputs them to the next layer; the pooling layer (Pooling Layer), which down-samples (Down-sampling) the input feature map, preserving the main features while reducing their dimensionality; and the fully connected layer (Fully Connected Layer), which is equivalent to a hidden layer in a multilayer perceptron and maps the learned features to the label space, performing classification.
The training of a convolutional neural network is supervised learning: labeled samples are input, and the weights of each convolutional layer are updated. The prediction process uses the learned parameters to calculate the probability that an input sample belongs to each class. Learning comprises two steps, forward propagation and backward propagation: in forward propagation (Forward Propagation), the input data passes through the convolutional, pooling, and fully connected layers to the output layer, where the sample's prediction is made; in backward propagation (Backward Propagation), the prediction error, computed from the forward-propagation output and the sample's true label, is propagated backwards, and the parameters are updated layer by layer.
Identifying flower species and disease symptoms from images is a challenging task. Flower classification is fine-grained, and incomplete petals and leaves, plant variation, and differences in sunlight are important factors affecting classification accuracy. Flower disease classification must additionally handle different diseases with highly similar symptoms, symptom differences caused by different degrees of infection or disease stages, and symptom differences caused by different pathogenic sources. Products such as the shape-and-color flower identification app provide forum functions where users discuss flower-care problems, but flower disease identification with instant feedback from pictures is still a new field.
Existing research on diagnosing plant diseases with computer vision is mostly aimed at crops; flowers are rarely taken as the research object. Common methods include the Support Vector Machine (SVM) and the convolutional neural network. The invention provides a new method, based on a multi-input convolutional neural network (Multi-Input Convolutional Neural Network), for diagnosing flower diseases for smartphone users, to improve detection precision and efficiency.
Disclosure of Invention
The problem the invention aims to solve is: to realize a smartphone program that feeds back a disease diagnosis report in real time from the pictures of diseased flowers input by the user.
The technical scheme of the invention is as follows: several convolutional neural network models are trained; the user inputs flower pictures and pictures of the diseased parts; a flower model predicts the flower species; the corresponding disease classification model is selected to predict the disease type; and disease details and a treatment scheme are returned from the database according to the results. All the convolutional neural network models are multi-input: exploiting the portability and ease of operation of the mobile phone, the user can photograph the same flower from different angles or record a video as the input of the network, which improves detection precision and efficiency.
The invention specifically comprises the following steps:
1. Construct the databases, comprising a normal flower picture database, an abnormal flower picture database, and a flower maintenance knowledge base.
2. Design and train the multi-input convolutional neural network models, comprising a flower classification model and several disease classification models.
3. Guide the user to photograph the flower from multiple angles, as pictures or as a video; the original pictures or video are preprocessed to generate input samples.
4. Predict on the input samples with the convolutional neural network models, and return a diagnosis report and treatment scheme from the database according to the results.
In step 1, a series of databases are constructed: a normal flower picture database, an abnormal flower picture database and a flower maintenance knowledge database.
The normal flower picture database stores a large amount of picture data of unaffected flowers, and the abnormal flower picture database stores a large amount of picture data of diseased flowers in the form <picture, label>. These two databases provide the training and test sets for training the convolutional neural network models. The flower maintenance knowledge database stores, for the various diseases that infect different flowers, information such as the corresponding symptoms, causes and sources of disease, and care schemes. This database provides the information from which diagnosis reports are generated.
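A minimal sketch of the three databases is below, using SQLite; every table name, column name, and sample row is an illustrative assumption of mine, not taken from the patent.

```python
import sqlite3

# Hypothetical schema for the three databases described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE normal_flower (picture BLOB, label TEXT);    -- unaffected samples
CREATE TABLE abnormal_flower (picture BLOB, label TEXT);  -- <picture, label> pairs
CREATE TABLE care_knowledge (
    flower TEXT, disease TEXT,
    symptoms TEXT, cause TEXT, treatment TEXT             -- diagnosis-report fields
);
""")
conn.execute(
    "INSERT INTO care_knowledge VALUES (?, ?, ?, ?, ?)",
    ("rose", "black spot", "dark spots on leaves",
     "fungal infection", "remove infected leaves; apply fungicide"),
)

def diagnosis_report(flower, disease):
    """Fetch the knowledge-base fields a diagnosis report is built from."""
    row = conn.execute(
        "SELECT symptoms, cause, treatment FROM care_knowledge"
        " WHERE flower = ? AND disease = ?", (flower, disease)).fetchone()
    return dict(zip(("symptoms", "cause", "treatment"), row)) if row else None

report = diagnosis_report("rose", "black spot")
```

The `<picture, label>` pairs feed model training, while `care_knowledge` is only queried at report time.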
In step 2, a plurality of different multi-input convolutional neural network models are designed and trained, and the models can be divided into a flower classification model and a disease classification model according to purposes.
Flower classification model: input, pictures of a flower; output, the flower species.
Disease classification model: input, pictures of the affected parts; output, the disease type.
In step 3, the user shoots several multi-angle pictures, or a video, of the flower; these serve as the original samples, which are then preprocessed to generate the input samples.
Common photo-recognition flower software supports only the recognition of a single picture, which does not fully exploit the characteristics of the mobile phone and limits recognition accuracy. The invention lets the user recognize with several pictures or a video. To acquire the input data accurately, the shooting interface displays prompts and a guide box to lead the user to move with the subject and keep the lens aimed at the target. After picture shooting finishes, the same preprocessing is applied to all the pictures, which are then stacked as one input sample. After video shooting finishes, a certain number of frames are extracted and converted into pictures, which are then processed into an input sample as in the multi-picture case.
In step 4, the models predict on the input samples. Different flowers may be infected by different diseases, and the same disease may show completely different symptoms on different flowers, so prediction proceeds in two stages: flower class prediction and disease class prediction.
In the flower class prediction stage, the normal samples are input into the flower classification model, the model outputs the most probable flower species, and the corresponding disease classification model is loaded. In the disease class prediction stage, the diseased-part samples are input into the loaded disease classification model, and the model outputs the most probable disease. Once the disease type is known, the corresponding symptoms, causes and sources of disease, care scheme, and related information are looked up in the flower maintenance knowledge database, and a diagnosis report is generated and presented to the user.
All the convolutional neural network classification models are multi-input: each input sample is first split, the pictures are fed into different entries of the first convolutional layer, features are extracted independently, and the results are combined into a feature map.
By adopting the technical scheme, the invention has the following advantages:
1. Simple operation: the invention automates the analysis of flower species and diseases; the user only needs to shoot pictures or a video following the prompts, without providing any further prior knowledge.
2. No added time overhead: if all the pictures obtained by splitting an input sample are identical, feeding them into different entries of the first convolutional layer is equivalent to a single input, so the multi-input convolutional neural network adds no computation time.
3. Improved accuracy: compared with a single-input model, the multi-input model used by the invention covers more real-world information and reduces accidental misclassification; when samples are sufficient or easy to acquire, it generally improves identification accuracy.
Drawings
FIG. 1 is a schematic diagram of multi-graph input recognition
FIG. 2 is a schematic diagram of video input recognition
FIG. 3 is a diagram of a multi-input architecture for a convolutional neural network
FIG. 4 is a flow chart of a convolutional neural network model
Detailed Description
The method first trains several convolutional neural network models. The user inputs flower pictures and pictures of the diseased parts; one model predicts the flower species, another predicts the disease type, and disease details and a treatment scheme are queried from the database according to the prediction results. All the convolutional neural network models take multiple inputs. Exploiting the portability and ease of operation of the smartphone, the user can shoot pictures or a video of the same flower from different angles as the input of the network, improving detection precision and efficiency.
The recognition schematic diagram is shown in fig. 1 and 2, and the multi-input structure and flow of the convolutional neural network model are shown in fig. 3 and 4, and specifically include the following four steps.
The first step: construct the databases, comprising:
1. Normal flower picture database: stores samples of unaffected flowers.
2. Abnormal flower picture database: stores samples of diseased flowers.
3. Flower maintenance knowledge base: stores each flower's common name, scientific name, biological classification, morphological characteristics, growth habits, planting method, disease causes, disease control methods, and related information.
The samples in the picture databases are all stored in the <picture, label> format. The normal and abnormal flower picture databases provide the training and test sets for training the convolutional neural network models. The flower maintenance knowledge base provides the information from which flower diagnosis reports are generated.
The second step: design and train the multi-input convolutional neural network models, which divide by purpose into:
1. Flower classification model: input, pictures of a flower; output, the flower species.
2. Disease classification model: input, pictures of the affected parts; output, the disease type.
In designing a model, we build the convolutional neural network from specific components. A typical convolutional neural network has the structure:
input data → [ [ convolutional layer → activation function ] → pooling layer ] → [ fully connected layer ] → loss function
where the square brackets indicate one or more occurrences.
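The bracket notation above can be read as a pattern over layer types, roughly ((conv act)+ pool)+ fc+ loss. A hedged, purely illustrative formalization:

```python
import re

# Hypothetical check of a layer sequence against the typical structure
# sketched above; the layer names "conv/act/pool/fc/loss" are my shorthand.
def valid_cnn(layers):
    """True iff the sequence matches ((conv act)+ pool)+ fc+ loss."""
    return re.fullmatch(r"((conv act )+pool )+(fc )+loss",
                        " ".join(layers)) is not None

ok = valid_cnn(["conv", "act", "pool", "conv", "act", "pool", "fc", "loss"])
bad = valid_cnn(["pool", "fc", "loss"])  # a pooling layer needs a conv block first
```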
Definition 1: components of convolutional neural networks
(1) Convolutional Layer: takes the feature map (Feature map) of the previous layer as input, extracts features by convolving over the map with several convolution kernels (Filters), and outputs the feature map of the next layer. As the depth increases, the features extracted by the convolutional layers change from low-level to high-level.
(2) Pooling Layer: takes a feature map as input, down-samples it with a kernel of a specific size, and outputs the feature map after dimensionality reduction. The pooling layer retains the main features, maintains feature invariance, reduces the dimensionality of the features, and helps prevent overfitting (Overfitting).
(3) Fully Connected Layer: takes the previous layer's feature map or one-dimensional features as input and, through fully connected neurons, outputs the next layer's one-dimensional features. The fully connected layer is equivalent to a hidden layer in a multilayer perceptron; it maps the learned features to the label space, performing classification.
(4) Activation function: a non-linear function such as the Sigmoid or ReLU function. Because a composition of linear mappings is still a linear mapping and cannot form complex functions, the activation function introduces non-linearity and increases the expressive power of the network.
(5) Loss function: the objective function of the optimization, such as the L2 loss and cross entropy (Cross entropy). It measures the error between the predicted and true labels, reflecting how well the model fits the data.
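A minimal sketch of the cross-entropy loss named above, for one sample with a one-hot true label (the epsilon guard against log(0) is my addition):

```python
import math

def cross_entropy(pred, true):
    """Cross entropy -sum(t * log(p)) between prediction and one-hot label."""
    eps = 1e-12  # avoid log(0)
    return -sum(t * math.log(p + eps) for p, t in zip(pred, true))

confident = cross_entropy([0.9, 0.05, 0.05], [1, 0, 0])
uncertain = cross_entropy([0.4, 0.3, 0.3], [1, 0, 0])  # worse fit, larger loss
```

A poorer fit to the data yields a larger loss, which is what the optimizer minimizes.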
Convolutional neural network learning is supervised learning (Supervised learning). During training, labeled samples are input iteratively and the parameters of each convolutional layer are updated; during prediction, samples with unknown labels are input, and the most probable label for the image is computed from the learned parameters.
Definition 2: training of convolutional neural networks
(1) Forward Propagation: the input data passes through the convolutional, pooling, and fully connected layers to the output layer, which outputs a prediction label and its error.
i. Convolutional layer formula: $x_j^l = f\big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\big)$, where $x_j^l$ is the $j$-th feature map of convolutional layer $l$, $k_{ij}^l$ is the corresponding convolution kernel, $b_j^l$ is the corresponding bias coefficient, $M_j$ is a subset of the input feature maps, and $f$ is the activation function.
ii. Pooling layer formula: $x_j^l = f\big(\beta_j^l \, \mathrm{down}(x_j^{l-1}) + b_j^l\big)$, where $\mathrm{down}$ is the down-sampling function used, and $\beta_j^l$ and $b_j^l$ are its weight coefficient and bias coefficient, respectively.
iii. Fully connected layer formula: $x^l = f(W^l x^{l-1} + b^l)$, where $W^l$ and $b^l$ are the layer's weight matrix and bias, and $f$ is the activation function.
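The three layer formulas above can be traced on a toy 1-D example; this is a hedged sketch (1-D instead of 2-D, ReLU as the activation f, a single output neuron), not the patent's implementation:

```python
def relu(x):
    return max(0.0, x)

def conv1d(xs, kernel, bias):
    """Convolutional layer: f(sum of input * kernel + bias), valid positions only."""
    k = len(kernel)
    return [relu(sum(xs[i + t] * kernel[t] for t in range(k)) + bias)
            for i in range(len(xs) - k + 1)]

def maxpool(xs, size=2):
    """down(.): non-overlapping max pooling reduces the feature length."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def dense(xs, weights, bias):
    """Fully connected mapping f(W x + b) for one output neuron."""
    return relu(sum(w * x for w, x in zip(weights, xs)) + bias)

feat = conv1d([1.0, 2.0, 3.0, 4.0, 5.0], kernel=[-1.0, 1.0], bias=0.0)
pooled = maxpool(feat)
out = dense(pooled, weights=[0.5, 0.5], bias=0.1)
```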
(2) Backward Propagation: using Gradient Descent and Error backpropagation, the prediction error, computed from the forward-propagation output and the sample's true label, is propagated backwards and the parameters are updated layer by layer so as to minimize the error.
i. Error formula: $E = \frac{1}{2}\sum_j (y_j - \hat{y}_j)^2$, where $y$ is the sample's true label and $\hat{y}$ is the predicted label.
ii. Parameter update formula: $W^l \leftarrow W^l - \eta \, \frac{\partial E}{\partial W^l}$, where $\eta$ is the step size and $\frac{\partial E}{\partial W^l}$ is the derivative of the error with respect to the parameters of layer $l$.
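A one-parameter illustration of this update rule, minimizing E = 0.5*(y - w*x)^2 by gradient descent; the numbers are arbitrary:

```python
def grad_step(w, x, y, eta):
    """One update w <- w - eta * dE/dw for E = 0.5 * (y - w*x)**2."""
    y_hat = w * x
    dE_dw = (y_hat - y) * x  # chain rule: dE/dy_hat * dy_hat/dw
    return w - eta * dE_dw

w = 0.0
for _ in range(100):
    w = grad_step(w, x=2.0, y=6.0, eta=0.1)
# w converges toward 3.0, where the prediction w*x matches y exactly
```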
(3) Random inactivation (Dropout): a commonly used means of suppressing overfitting. In the training stage, each neuron is inactivated with a certain probability and does not participate in training, so that excessive dependence of the network on certain neurons is reduced, and generalization is enhanced. The test phase does not inactivate neurons.
(4) L2 regularization (L2 regularization): suppresses overfitting by shrinking the weights of some neurons in the network. The formula is $E = E_0 + \frac{\lambda}{2n}\sum_w w^2$: the regularization term to the right of the plus sign sums the squares of all parameters $w$ and divides by the sample size $n$; $\lambda$ is the regularization coefficient.
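The regularization term written out directly, with arbitrary example values:

```python
def l2_penalty(weights, lam, n):
    """(lambda / 2n) * sum of squared weights, added to the base loss."""
    return lam / (2 * n) * sum(w * w for w in weights)

penalty = l2_penalty([1.0, -2.0, 3.0], lam=0.1, n=10)  # 0.1/20 * (1 + 4 + 9)
```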
(5) Batch normalization (Batch normalization): standardizes each layer's inputs, which accelerates convergence and makes the choice of hyper-parameters more stable over a wider range. It comprises two steps: standardization and scale-shift.
i. Standardization formula: $\hat{x} = \frac{x - \mu}{\sigma}$, where $x$ is the input, $\mu$ is the mean of the input, and $\sigma$ is the standard deviation of the input. The output has mean 0 and standard deviation 1.
ii. Scale-shift formula: $y = \gamma \hat{x} + \beta$, where $\gamma$ is the scale and $\beta$ is the shift. These two parameters are learned during training; they adjust the distribution of the data, reduce the interdependence between convolutional layers, and allow more independent learning.
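Both steps applied to one toy batch; the small epsilon guarding against division by zero is my addition, as in common implementations:

```python
import math

def batch_norm(xs, gamma, beta, eps=1e-8):
    """Standardize to mean 0 / std 1, then scale-shift y = gamma * xhat + beta."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    xhat = [(x - mu) / math.sqrt(var + eps) for x in xs]  # standardization
    return [gamma * x + beta for x in xhat]               # scale-shift

out = batch_norm([1.0, 2.0, 3.0, 4.0], gamma=2.0, beta=5.0)
```

After the scale-shift, the batch mean equals beta and the spread is controlled by gamma.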
The invention designs a multi-input convolutional neural network model; the multi-input design lives in the input layer and the first convolutional layer. Each input sample is split, the parts are fed into different entries of the first convolutional layer, features are extracted independently, and the results are stacked into a feature map.
Definition 3: multi-input convolutional neural network
(1) Input sample: several pictures of the same flower from different angles are stacked and regarded as one input sample, i.e. the $i$-th input sample is $X_i = \{x_i^1, x_i^2, \ldots, x_i^m\}$, where $x_i^t$ is the $t$-th picture composing $X_i$.
Incomplete petals and leaves, plant variation, differences in sunlight, and the like are all important factors affecting classification accuracy; several pictures cover more information about the flower and thus reduce accidental misclassification.
(2) Splitting the first convolutional layer: the convolution kernels of the first convolutional layer are divided evenly into $m$ groups, and each group is convolved with one of the pictures composing the input sample, i.e. $o_i^g = K^g * x_i^g$, where $K^g$ is the $g$-th kernel group, $x_i^g$ is the $g$-th picture of sample $X_i$, the operator $*$ denotes convolution, and $o_i^g$ is the convolution output. Stacking the $m$ group outputs yields the feature map of sample $X_i$ after the first convolutional layer.
If the user stands in place without moving and takes several pictures of the flower, i.e. all the pictures in the input sample are identical ($x_i^1 = x_i^2 = \cdots = x_i^m$), then splitting the input sample and feeding the parts into the different entries of the first convolutional layer is equivalent to a single-input convolutional neural network. The multi-input form therefore introduces no extra parameters and no extra computation time, and when samples are sufficient or easy to acquire it generally improves identification accuracy.
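The equivalence claimed here can be checked numerically; a toy 1-D sketch (no activation, linear convolution only, all values arbitrary):

```python
def conv(xs, kernel):
    """Plain valid 1-D convolution (no bias, no activation)."""
    k = len(kernel)
    return [sum(xs[i + t] * kernel[t] for t in range(k))
            for i in range(len(xs) - k + 1)]

def multi_input_first_layer(pictures, kernel_groups):
    """Kernel group g convolves picture g; the group outputs are stacked."""
    return [conv(pic, kern)
            for pic, group in zip(pictures, kernel_groups)
            for kern in group]

pic = [1.0, 2.0, 3.0, 4.0]
kernels = [[1.0, -1.0], [0.5, 0.5], [2.0, 0.0], [0.0, 1.0]]
# Two identical pictures, four kernels split into two groups of two:
multi = multi_input_first_layer([pic, pic], [kernels[:2], kernels[2:]])
single = [conv(pic, k) for k in kernels]  # ordinary single-input first layer
```

With identical pictures the stacked group outputs reproduce the single-input feature maps exactly, so no extra computation is introduced.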
The third step: the user shoots pictures or a video of the flower, and input samples are generated by preprocessing.
Common photo-recognition flower software recognizes only one picture at a time; the portability and ease of operation of the smartphone are not fully exploited, and recognition accuracy is limited.
Our software implements a multi-input convolutional neural network and allows recognition with several pictures or one video. The user holds the phone, moves around the flower, and takes a picture after each short movement (Figure 1), obtaining several pictures of the flower from different angles; or the user moves around the flower while recording, keeping the flower inside the viewfinder (Figure 2), obtaining a video of the flower. To acquire the input data accurately, the shooting interface provides a viewfinder frame and corresponding guidance to lead the user through shooting.
After shooting finishes, if the input is multiple pictures, image preprocessing (Preprocessing) is applied, such as resizing to a uniform size, zero mean (Zero mean), and normalization, and finally the pictures are stacked (Stacking) as one input sample. If the input is a video, it is treated as a series of single-frame images: a certain number of frames are extracted at fixed intervals and converted into pictures, and an input sample is then generated as in the multi-picture case.
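A hedged sketch of this preprocessing path, with pictures flattened to plain lists for brevity (zero-mean and unit-std per picture, n evenly spaced frames from a decoded video):

```python
def preprocess(picture):
    """Zero-mean then unit-std normalization of one (flattened) picture."""
    mu = sum(picture) / len(picture)
    centered = [p - mu for p in picture]                     # zero mean
    std = (sum(c * c for c in centered) / len(centered)) ** 0.5
    return [c / std for c in centered] if std else centered  # normalization

def video_to_sample(frames, n=4):
    """Extract n frames at fixed intervals, preprocess each, stack as one sample."""
    step = max(1, len(frames) // n)
    return [preprocess(f) for f in frames[::step][:n]]

# 20 tiny two-pixel "frames" standing in for decoded video frames:
sample = video_to_sample([[float(i), float(i + 2)] for i in range(20)], n=4)
```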
The fourth step: the convolutional neural network model predicts the input samples and returns a diagnosis report and a treatment plan from the database according to the results.
Identification of flower diseases faces the following problems: (1) the same disease shows different symptoms when it infects different flowers; (2) for the same disease infecting different plants, the degree of infection and the disease stage differ; (3) for the same disease, pathogenic sources differ and the flowers' resistance differs. For these reasons, a disease classification model is trained for each flower, and prediction is divided into two stages: flower class prediction and disease class prediction.
Flower class prediction: the normal sample is input, the flower classification model predicts and outputs the flower species, and the disease classification model corresponding to that flower is loaded.
Disease class prediction: the diseased sample is input, and the disease classification model predicts and outputs the disease type. The symptoms, causes and sources of disease, treatment scheme, and related information for the predicted infection are looked up in the flower maintenance knowledge base, and a diagnosis report is generated and presented to the user.
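The two stages can be sketched end to end; the stub models and knowledge-base entries below are invented placeholders of mine, standing in for the trained networks and the real database:

```python
# Stage-1 stub: stands in for the flower classification model.
def flower_model(sample):
    return "rose"

# One disease classification model per flower species (stage-2 stubs).
disease_models = {
    "rose": lambda sample: "black spot",
}

knowledge_base = {
    ("rose", "black spot"): {
        "symptoms": "dark spots on leaves",
        "cause": "fungal infection",
        "treatment": "remove infected leaves; apply fungicide",
    },
}

def diagnose(normal_sample, diseased_sample):
    """Stage 1 picks the flower; stage 2 runs that flower's disease model;
    the knowledge base then supplies the diagnosis report."""
    flower = flower_model(normal_sample)
    disease = disease_models[flower](diseased_sample)
    report = dict(knowledge_base[(flower, disease)])
    report["flower"], report["disease"] = flower, disease
    return report

report = diagnose(normal_sample=None, diseased_sample=None)
```

The key design point is the dispatch: the stage-1 result selects which stage-2 model runs, so each disease model only ever sees one flower species.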
The convolutional neural network is implemented with the TensorFlow framework. TensorFlow is an open-source software library developed by the Google team; it uses data flow graphs (Data flow graphs) for numerical computation and includes neural network modules.

Claims (6)

1. A flower disease analysis method based on a multi-input convolutional neural network, characterized in that: a data set containing pictures of various flower diseases and a maintenance knowledge database are created; multi-input convolutional neural network models are designed to train classifiers; the created flower disease knowledge database is queried; and a diagnosis report and treatment scheme are returned.
2. The flower disease analysis method based on the multi-input convolutional neural network as claimed in claim 1, which comprises the steps of:
1) constructing databases in specific fields, including a normal flower picture database, an abnormal flower picture database and a flower maintenance knowledge database;
2) designing and training a multi-input convolutional neural network model, wherein the multi-input convolutional neural network model comprises a flower classification model and a plurality of disease classification models;
3) search the flower maintenance knowledge database according to the results of the flower species classifier and the flower disease classifier, and obtain the corresponding diagnosis report and treatment scheme.
3. The method for analyzing the flower diseases based on the multi-input convolutional neural network as claimed in claim 2, wherein in the step 1), a series of databases including a normal flower picture database, an abnormal flower picture database and a flower maintenance knowledge database are created, and the method comprises the following steps: expanding the normal flower picture database on the basis of the existing normal flower picture database, and creating a normal flower picture database; storing pictures in a form of < pictures, labels > aiming at diseases of each flower, and establishing an abnormal flower picture database; and collecting basic information and specific disease maintenance information of various flowers, and constructing a flower maintenance knowledge database.
4. The flower disease analysis method based on the multi-input convolutional neural network as claimed in claim 2, wherein in the step 2), multi-graph recognition is introduced to increase the accuracy of flower type and disease recognition, and the specific method is as follows: designing and training a multi-input convolutional neural network model, and training the normal flower picture data and the abnormal flower picture data created in the step 1) to obtain a flower type classifier, wherein each flower corresponds to a flower disease classifier; in the practical application process, a user is guided to shoot pictures of a flower at multiple angles or upload a section of small video, and the pictures are preprocessed and then used as the input of a classifier.
5. The multi-input convolutional neural network model of the flower disease analysis method based on a multi-input convolutional neural network as claimed in claim 4, wherein the processing in the input layer and the first convolutional layer is specifically expressed as follows: in the input layer, several pictures of the same flower from different angles are stacked and regarded as one input sample, i.e. the $i$-th input sample is $X_i = \{x_i^1, \ldots, x_i^m\}$, where $x_i^t$ is the $t$-th picture composing $X_i$; the several pictures cover more information about the flower, reducing accidental misclassification; the convolution kernels of the first convolutional layer are divided evenly into $m$ groups, and each kernel group is convolved with one of the pictures composing the input sample, i.e. $o_i^g = K^g * x_i^g$, where $K^g$ is the $g$-th kernel group, the operator $*$ denotes convolution, and $o_i^g$ is the convolution output; stacking the $m$ group outputs yields the feature map of the sample after the first convolutional layer; if the user stands in place without moving and takes several identical pictures of the flower, splitting the input sample and feeding the parts into the different entries of the first convolutional layer is equivalent to a single-input convolutional neural network, so the multi-input form introduces no extra parameters and no extra computation time, and when samples are sufficient or easy to acquire it generally improves identification accuracy.
6. The flower disease analysis method based on the multi-input convolutional neural network as claimed in claim 2, wherein in step 3), the flower-type and disease-type information obtained in step 2) is used to search the created flower maintenance knowledge database, and the disease diagnosis report and treatment scheme for the corresponding flower type are retrieved and returned to the user.
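The step-3 lookup in claim 6 amounts to keying a knowledge base on the (flower type, disease type) pair from step 2. A minimal sketch, with illustrative entries and a hypothetical `CARE_KB` name (the patent does not specify the database schema):

```python
# Hedged sketch of the claim-6 knowledge-base lookup. The entries and
# fallback behavior are illustrative assumptions, not the patent's data.

CARE_KB = {
    ("rose", "black spot"): {
        "diagnosis": "Fungal infection causing dark spots on leaves.",
        "treatment": "Remove affected leaves; apply fungicide weekly.",
    },
}

def lookup(flower, disease):
    """Return the diagnosis report and treatment scheme for the given
    flower type and disease type, with a fallback for unknown pairs."""
    entry = CARE_KB.get((flower, disease))
    if entry is None:
        return {"diagnosis": "Unknown", "treatment": "Consult an expert."}
    return entry

report = lookup("rose", "black spot")
print(report["treatment"])
```

In the described system the returned report would be rendered and sent back to the user who uploaded the pictures.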
CN201810626283.9A 2018-06-13 2018-06-13 Flower disease analysis method based on multi-input convolutional neural network Pending CN110598841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810626283.9A CN110598841A (en) 2018-06-13 2018-06-13 Flower disease analysis method based on multi-input convolutional neural network


Publications (1)

Publication Number Publication Date
CN110598841A true CN110598841A (en) 2019-12-20

Family

ID=68849196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810626283.9A Pending CN110598841A (en) 2018-06-13 2018-06-13 Flower disease analysis method based on multi-input convolutional neural network

Country Status (1)

Country Link
CN (1) CN110598841A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218748A (en) * 2013-03-15 2013-07-24 北京市农林科学院 Diagnostic method of vegetable diseases and portable system
CN107527065A (en) * 2017-07-25 2017-12-29 北京联合大学 A kind of flower variety identification model method for building up based on convolutional neural networks
US20180095872A1 (en) * 2016-10-04 2018-04-05 Pure Storage, Inc. Distributed integrated high-speed solid-state non-volatile random-access memory


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111800455A (en) * 2020-05-13 2020-10-20 杭州电子科技大学富阳电子信息研究院有限公司 Method for sharing convolutional neural network based on different host data sources in local area network
CN112966541A (en) * 2020-09-23 2021-06-15 北京豆牛网络科技有限公司 Automatic fruit and vegetable goods inspection method and system, electronic equipment and computer readable medium
CN112966541B (en) * 2020-09-23 2023-12-05 北京豆牛网络科技有限公司 Fruit and vegetable automatic checking method, system, electronic equipment and computer readable medium
CN112528941A (en) * 2020-12-23 2021-03-19 泰州市朗嘉馨网络科技有限公司 Automatic parameter setting system based on neural network
CN112528941B (en) * 2020-12-23 2021-11-19 芜湖神图驭器智能科技有限公司 Automatic parameter setting system based on neural network

Similar Documents

Publication Publication Date Title
Boulent et al. Convolutional neural networks for the automatic identification of plant diseases
Sujatha et al. Performance of deep learning vs machine learning in plant leaf disease detection
Kotwal et al. Agricultural plant diseases identification: From traditional approach to deep learning
CN110517311A (en) Pest and disease monitoring method based on leaf spot lesion area
CN114565826B (en) Agricultural pest and disease identification and diagnosis method, system and device
CN110598841A (en) Flower disease analysis method based on multi-input convolutional neural network
Itakura et al. Automatic pear and apple detection by videos using deep learning and a Kalman filter
Li et al. High-throughput phenotyping analysis of maize at the seedling stage using end-to-end segmentation network
Prashanthi et al. Plant disease detection using Convolutional neural networks
Albahar A survey on deep learning and its impact on agriculture: Challenges and opportunities
Nandhini et al. Automatic detection of leaf disease using CNN algorithm
Long et al. Classification of wheat diseases using deep learning networks with field and glasshouse images
Liu et al. Flooding-based MobileNet to identify cucumber diseases from leaf images in natural scenes
Sehree et al. Olive trees cases classification based on deep convolutional neural network from unmanned aerial vehicle imagery
Monigari et al. Plant leaf disease prediction
Rani et al. Pathogen-based Classification of Plant Diseases: A Deep Transfer Learning Approach for Intelligent Support Systems
Miao et al. Crop weed identification system based on convolutional neural network
Lu et al. Citrus green fruit detection via improved feature network extraction
Baranwal et al. Detecting diseases in plant leaves: An optimised deep-learning convolutional neural network approach
Vidya Sree et al. A one-stop service provider for farmers using machine learning
Patel et al. Deep Learning-Based Plant Organ Segmentation and Phenotyping of Sorghum Plants Using LiDAR Point Cloud
Deshpande et al. Detection of Plant Leaf Disease by Generative Adversarial and Deep Convolutional Neural Network
Saifi et al. A Review on Plant Leaf Disease Detection using Deep Learning
Gunarathna et al. Identification of an efficient deep leaning architecture for tomato disease classification using leaf images
Li et al. Early drought plant stress detection with bi-directional long-term memory networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191220