CN114067314B - Neural network-based peanut mildew identification method and system - Google Patents
Neural network-based peanut mildew identification method and system
- Publication number: CN114067314B (application CN202210046058.4A)
- Authority: CN (China)
- Prior art keywords: pixel, mildew, classification, peanut, pixel values
- Legal status: Active
Classifications
- G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Pattern recognition: matching criteria, e.g. proximity measures
- G06F18/24 — Pattern recognition: classification techniques
- G06N3/04 — Neural networks: architecture, e.g. interconnection topology
- G06N3/08 — Neural networks: learning methods
Abstract
The invention relates to the technical field of artificial intelligence, and in particular to a neural-network-based peanut mildew identification method and system. The method comprises the following steps: acquire an initial image of a peanut; match the pixel values in the initial image, by value, with the pixel values in a pixel property classification set, and obtain the confidences of the matched pixel values in the classification set together with the Gaussian distribution model corresponding to each confidence. Substitute each pixel value of the initial image into its corresponding Gaussian distribution models to obtain probability values; take the confidence corresponding to each Gaussian model as a weight and form the weighted sum of the probability values to obtain an initial score for each pixel value; compare each pixel value's initial score as a mildewed pixel with its initial score as a normal pixel to obtain a score difference; and determine that the peanut is mildewed when the score difference over all pixels of the initial image is greater than zero. Computation is thereby reduced while mildewed peanuts are still identified accurately.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a neural network-based peanut mildew identification method and system.
Background
Peanuts are often stacked together in large quantities for storage. When stored improperly they mildew easily and are readily contaminated by aflatoxin, which causes great harm to the human body and is classified as a Group 1 carcinogen by the World Health Organization. To keep aflatoxin out of the food chain, stored peanuts must be identified and monitored for mildew and deteriorated peanuts screened out in advance, ensuring the safety of processed food products.
At present a machine-vision approach is mostly adopted: an image of the peanuts is collected and a classification network classifies them, so that deteriorated peanuts can be screened out.
In practice, the inventors found that the above prior art has the following disadvantage: if a neural network must be run on every collected image, the computation required is too large for industrial production.
Disclosure of Invention
To solve the above technical problem, the invention aims to provide a neural-network-based peanut mildew identification method and system. The adopted technical scheme is as follows:
a peanut mildew identification method based on a neural network comprises the following steps: acquiring initial images of peanuts, wherein each initial image comprises one peanut; matching the pixel values in the initial image with the pixel values in the pixel property classification set according to the size of the pixel values, and obtaining the confidence degrees of the pixel values in the initial image in the pixel property classification set and a Gaussian distribution model corresponding to each confidence degree; wherein the pixel property classification set is based on whether corresponding pixel values in the historical image of the peanuts belong to mildew pixels, and the pixel values are divided into a mildew set consisting of mildew pixel values and a normal set consisting of normal pixel values without mildew; each pixel value in each pixel property classification set corresponds to a confidence coefficient, and different pixel values under each confidence coefficient correspond to a Gaussian distribution model; and respectively substituting each pixel value in the initial image into the corresponding Gaussian distribution model to obtain a corresponding probability value, taking the confidence coefficient corresponding to each Gaussian model as a weight to perform weighted summation on the probability value to obtain an initial score of each pixel value in the initial image, comparing the initial score of each pixel value as a mildewed pixel with the initial score of a normal pixel to obtain a score difference, and determining that the peanuts are mildewed when the score difference of all the pixels in the initial image is greater than zero.
Further, the step of obtaining the confidences comprises: classify historical peanut images with a classification network to obtain a mildew set of mildew pixel values and a normal set of normal pixel values, the classification network outputting a confidence for each pixel value.
Further, the step of classifying the historical peanut images with the classification network comprises a step of optimizing the classification result through the attended salient region: apply a global average pooling operation to the feature map extracted by the classification network to obtain a saliency map, and threshold the saliency map to obtain a binary map; take the pixel-value difference between the saliency map and the binary map as an attention loss, and jointly train the classification network with the attention loss and the cross-entropy loss between the network output and the label as a first joint loss function.
Further, after the first joint loss function converges, the classification network performs the following step: feed an initial image of a peanut to be identified into the trained classification network; when the absolute difference between the probability of no mildew and the probability of mildew is greater than a preset reliability threshold, multiply the saliency map by the input initial peanut image to obtain the attention region corresponding to the classification result, and store the pixel values in the attention region, with their corresponding confidences, into the corresponding pixel property classification set.
Further, after the first joint loss function converges, the classification network also performs a step of optimizing the pixel property classification set according to how the classification result changes after an attention pixel is added to the saliency map.
Further, the step of adding an attention pixel to the saliency map comprises: randomly add one pixel of value 1 to the saliency map each time, thereby obtaining the perturbed saliency map.
Further, after the first joint loss function converges, the classification network performs the following steps. Train again with a second joint loss function, which comprises: a constraint that the difference between the probabilities of the second-training classification result and the first-training classification result is greater than or equal to zero; a pixel loss between the saliency map output in the second training and the perturbed saliency map; and a constraint that one perturbed pixel, of value 1, is added each time. After the second joint loss function converges, multiply the finally obtained saliency map by the initial peanut image input to the classification network to obtain the mildew region and the confidence corresponding to each pixel value in it, and store the pixel values of the mildew region, with their confidences, into the mildew set.
Further, the step of obtaining an initial image of a peanut comprises: acquire an original image of peanuts containing a plurality of kernels; segment the original image with a watershed algorithm to obtain a binary image, and multiply the binary image by the original image to obtain an initial image of each peanut.
In another aspect, another embodiment of the present invention provides a neural-network-based peanut mildew identification system, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the above methods.
The invention has the following beneficial effects:
according to the method, the pixel values in the initial image of the peanuts are matched with the pixel values in the pixel property classification set according to the pixel values, the confidence degrees of the pixel values in the initial image in the pixel property classification set and the Gaussian distribution model corresponding to each confidence degree are obtained, the pixel values in the initial image are substituted into the corresponding Gaussian distribution models to obtain the corresponding probability values, and the scores of the peanut mildew are obtained according to the probability values and the corresponding confidence degrees. It should be noted that the pixel property classification set, the confidence level and the corresponding gaussian distribution model in the embodiment of the present invention are all results obtained in the network training stage; in the actual recognition process, the pixel property classification set, the confidence coefficient and the corresponding Gaussian distribution model are used as known data obtained by historical processing to participate in the actual recognition process, so that the calculation amount is reduced on the basis of ensuring the accuracy of the recognition result.
Drawings
To illustrate the embodiments of the invention, or the technical solutions of the prior art, more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a peanut mildew identification method based on a neural network according to an embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the invention to achieve its intended purpose, and their effects, the structure, features and effects of the neural-network-based peanut mildew identification method and system are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different mentions of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment, and particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the neural-network-based peanut mildew identification method and system with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a neural network-based peanut mildew identification method according to an embodiment of the present invention is shown, where the neural network-based peanut mildew identification method includes:
and S001, acquiring initial images of peanuts, wherein each initial image comprises one peanut.
Specifically, a camera is first deployed to collect an original image of the peanuts, in which several peanut kernels appear. To obtain an image of each individual peanut, a watershed segmentation algorithm produces the mask region of each peanut in the original image; each mask region is a binary image in which the pixel value of the corresponding peanut region is 1 and all other pixel values are 0. Multiplying a mask region by the original image of the peanuts yields the initial image of that peanut.
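The mask-and-multiply step above can be sketched as follows. This is a minimal Python illustration, not the patented implementation: a simple 4-connected component labelling stands in for the watershed algorithm (equivalent only when kernels do not touch), and the function names are hypothetical.

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labelling; a simple stand-in for the
    watershed step, equivalent only when kernels do not touch."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        count += 1
        labels[start] = count
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = count
                    queue.append((nr, nc))
    return labels, count

def initial_images(original, mask):
    """Step S001 sketch: split a multi-kernel image into one initial image
    per peanut by multiplying each peanut's binary mask region with the
    original image (the peanut region keeps its values, the rest becomes 0)."""
    labels, n = label_components(mask)
    return [original * (labels == k) for k in range(1, n + 1)]
```

Each returned array has the same shape as the original image but is zero everywhere except one peanut, matching the description of an initial image.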
Step S002: match the pixel values in the initial image, by value, with the pixel values in the pixel property classification set, and obtain the confidences of the pixel values of the initial image in the classification set and the Gaussian distribution model corresponding to each confidence. The pixel property classification set divides the pixel values of historical peanut images, according to whether they belong to mildew pixels, into a mildew set of mildew pixel values and a normal set of non-mildewed normal pixel values. Each pixel value in a classification set corresponds to a confidence, and the different pixel values under each confidence correspond to one Gaussian distribution model.
Matching means comparing each pixel value of the initial image with the pixel values in the pixel property classification set; when a pixel value in the classification set equals a pixel value in the initial image, the confidence stored for that value, and the Gaussian distribution model corresponding to that confidence, are assigned to the corresponding pixel value of the initial image.
The confidences are obtained by classifying historical initial peanut images with a classification network, producing a mildew set of mildew pixel values and a normal set of normal pixel values, the network outputting a confidence for each pixel value. The classification network is an Encoder connected in series with a classifier: the encoder extracts a feature map, and the classifier processes the input feature map and outputs the final classification result; the classifier is a fully connected (FC) network. The training process is as follows. A large number of initial peanut images with class labels form the training data set; each initial image contains one peanut and is manually labelled with one of two classes, mildewed (label 0) or not mildewed (label 1). The neural network is trained on this data set: each initial peanut image is sent to the encoder for feature extraction, and the resulting feature map is sent to the classifier, which outputs a classification probability vector of 1 row and 2 columns whose entries are the confidences of the two classes.
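A minimal sketch of the classifier head described above, assuming a plain fully connected layer followed by softmax; `W`, `b` and the function name are hypothetical, and a real implementation would place a trained deep encoder in front of it.

```python
import numpy as np

def classify(features, W, b):
    """Stand-in for the classifier head: one fully connected layer plus
    softmax, yielding a 1x2 probability vector whose entries are the
    confidences of the two classes (label 0 = mildew, 1 = not mildew)."""
    logits = features @ W + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

The output always sums to 1, so its two entries can be read directly as the class confidences used throughout the method.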
The encoder may be implemented with existing classification networks such as ResNet or SENet.
To make the classification result of the neural network sensitive to changes of the saliency map and to guarantee its accuracy, the step of classifying the historical peanut images with the classification network further comprises the following step of optimizing the classification result through the attended salient region. Here a historical image means an initial peanut image from historical data, or a sample image used to train the network:
(1) Apply a global average pooling operation to the feature map extracted by the classification network to obtain a saliency map, and threshold the saliency map to obtain a binary map.
Specifically, the classification network further comprises a pooling layer: the output of the encoder is the input of the pooling layer, and the pooling layer outputs the final saliency map; that is, the feature map output by the encoder passes through global average pooling to produce the saliency map. The initial peanut image is fed to the encoder for feature extraction to obtain a feature map. On one hand the feature map is sent to the classifier, which outputs the classification probability vector; on the other hand, a Global Average Pooling (GAP) operation is applied to it in parallel to obtain the corresponding saliency map (CAM). The saliency map reflects the positions of the features the encoder extracts from the peanut image, and each of its pixel values lies in [0, 1].
The saliency map is thresholded into a binary map in which the pixel value of the region attended to by the neural network is 1 and all other pixel values are 0. Specifically, the fixed threshold preset for the thresholding operation is 0.8: pixels below 0.8 are set to 0 and pixels above 0.8 are set to 1, giving the binary map.
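The GAP-plus-thresholding step can be illustrated as below. This is a simplified sketch under assumptions: a channel-wise mean rescaled to [0, 1] stands in for the pooled class-activation map, and the 0.8 threshold follows the embodiment.

```python
import numpy as np

def cam_and_binary(feature_map, threshold=0.8):
    """Collapse a (C, H, W) encoder feature map into a saliency map in
    [0, 1] and threshold it into a binary attention map. The channel-wise
    mean rescaled to [0, 1] stands in for the pooled CAM."""
    cam = feature_map.mean(axis=0)
    lo, hi = cam.min(), cam.max()
    cam = (cam - lo) / (hi - lo) if hi > lo else np.zeros_like(cam)
    # Fixed threshold 0.8 as in the embodiment: below -> 0, above -> 1.
    binary = (cam >= threshold).astype(np.uint8)
    return cam, binary
```

The binary map marks the attended region whose pixel values later feed the pixel property classification sets.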
(2) Take the pixel-value difference at corresponding pixel points of the saliency map and the binary map as an attention loss, and jointly train the classification network with the attention loss and the cross-entropy loss between the network output and the label as a first joint loss function.
The first joint loss function is obtained as follows. Record the number of samples in a training batch as $N$, each sample image having size $H \times W$. For training sample $i$, denote the category label by $y_i$ and the classification result output by the network by $\hat{y}_i$; denote the pixel value at point $(x, y)$ of the saliency map $CAM_i$ of sample $i$ by $CAM_i(x, y)$, and the pixel value at the same point of the binary map obtained by thresholding that saliency map by $B_i(x, y) = T(CAM_i)(x, y)$, where $T(X)$ denotes the binary map obtained by thresholding an image $X$. The difference of pixel values at corresponding points of the saliency map and binary map of sample $i$ gives the attention loss $\| CAM_i - T(CAM_i) \|_1$, and the cross-entropy loss between the network output and the label is $L_{ce,i} = -[\, y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \,]$. The first joint loss function of the classification network is then:

$$L_1 = \frac{1}{N} \sum_{i=1}^{N} \Big( \| CAM_i - T(CAM_i) \|_1 + L_{ce,i} \Big),$$

where $\| \cdot \|_1$ is the 1-norm, the sum of the absolute values of all elements.
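A toy numeric version of the reconstructed first joint loss for a single sample: the attention loss is the 1-norm between the saliency map and its thresholded binary map, and the classification term is cross entropy against a hard label. The function name and exact reduction are assumptions.

```python
import numpy as np

def first_joint_loss(cam, probs, label, threshold=0.8):
    """One-sample sketch of the reconstructed first joint loss:
    L1 attention loss between the saliency map and its thresholded
    binary map, plus cross entropy against the hard class label."""
    binary = (cam >= threshold).astype(float)
    attention_loss = np.abs(cam - binary).sum()    # 1-norm over all pixels
    cross_entropy = -np.log(probs[label] + 1e-12)  # classification term
    return attention_loss + cross_entropy
```

When the saliency map is already binary and the network is confident in the correct class, both terms vanish, which matches the role of the two constraints described above.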
In the first joint loss function, the cross-entropy loss constrains the accuracy of the classification result, while the attention loss constrains the image region the neural network attends to during classification, associating the classification with the features of highly salient regions. This makes the classification result sensitive to changes of the saliency map and guarantees the accuracy of the pixel features of the normal and mildew regions used in the subsequent step S003.
After the first joint loss function converges, the initial image of a peanut to be recognized is fed into the trained classification network and the classification result is judged. Denote by $P_0$ the probability of no mildew and by $P_1$ the probability of mildew in the classification result; the recognition result is considered reliable when the absolute difference between them exceeds a preset reliability threshold. In the embodiment of the invention this threshold is 0.6, i.e. the result is reliable when $|P_0 - P_1| > 0.6$. Once a reliable result is obtained, the attended region in each peanut image is derived from the output saliency map; it is from the features of this region that the classification network reached its reliable classification. Specifically, the saliency map is multiplied by the initial peanut image input to the network to obtain the attention region corresponding to the classification result, and the pixel values of that region, with their corresponding confidences, are stored into the corresponding pixel property classification set: when the classification result is mildew they go into the mildew set, and when it is no mildew they go into the normal set.
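The reliability test is simple enough to state directly; the function name is hypothetical and the 0.6 default follows the embodiment.

```python
def is_reliable(p_no_mildew, p_mildew, threshold=0.6):
    """A classification result is accepted as reliable only when the
    absolute difference of the two class probabilities exceeds the
    preset reliability threshold (0.6 in the embodiment)."""
    return abs(p_no_mildew - p_mildew) > threshold
```

Since the two probabilities sum to 1, this is equivalent to requiring the winning class to score above 0.8.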
Because it is difficult to guarantee that the neural network extracts the global features of every initial peanut image, the embodiment of the invention perturbs the saliency map and observes the change of the classification result to build the different pixel point sets, so that the mildew identification of step S003 rests on global features of the peanut and yields an accurate result. Therefore, after the first joint loss function converges, the classification network further performs a step of optimizing the pixel property classification set according to how the classification result changes after an attention pixel is added to the saliency map; adding an attention pixel means randomly adding one pixel of value 1 to the saliency map each time, giving the perturbed saliency map. That is, once the classification network obtains a reliable result, the output saliency map is perturbed and the mildew set and normal set are updated according to the change of the classification result. Concretely, after the first joint loss function converges the classification network performs the following steps:
(1) Train again with a second joint loss function, which comprises a constraint that the difference between the probabilities of the second-training classification result and the first-training classification result is greater than or equal to zero, a pixel loss between the saliency map output in the second training and the perturbed saliency map, and a constraint that one perturbed pixel, of value 1, is added each time.
Assume a reliable classification result has been obtained with the first joint loss function and that the probability of mildew exceeds the probability of no mildew, i.e. $P_1 = \max(P_0, P_1)$, so the image is classified as mildewed; the corresponding saliency map is obtained at the same time.
The saliency map is perturbed as follows. To acquire data fully, the region with pixel value 1 in the saliency map is enlarged by one pixel point at a time, so that the region attended to by the classification network grows by one pixel and the influence of that pixel on the classification result can be observed. Record the perturbation matrix as $E = \widetilde{CAM} - CAM$, where $CAM$ is the saliency map and $\widetilde{CAM}$ the perturbed saliency map. To guarantee that each perturbation of the saliency map adds exactly one pixel point and that the perturbed map $\widetilde{CAM}$ is still a binary map, the constraint set on the perturbation matrix is:

$$\| E \|_0 = \| E \|_1 = 1,$$

where $\|E\|_0$ is the 0-norm of the perturbation matrix and $\|E\|_1$ its 1-norm. If, after an updated perturbation matrix $E$ is applied, the classification probability output for the saliency map is not reduced, the pixel value at the added pixel point still carries clear mildew features.
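One perturbation step, satisfying the constraint $\|E\|_0 = \|E\|_1 = 1$ on the binary saliency map, might look like the following sketch (function name hypothetical).

```python
import numpy as np

def perturb_saliency(cam_binary, rng):
    """Add one attention pixel to a binary saliency map: a random
    zero-valued pixel is set to 1, so the perturbation matrix
    E = perturbed - original satisfies ||E||_0 = ||E||_1 = 1 and the
    perturbed map stays binary."""
    zeros = np.argwhere(cam_binary == 0)
    r, c = zeros[rng.integers(len(zeros))]
    perturbed = cam_binary.copy()
    perturbed[r, c] = 1
    return perturbed
```

Setting exactly one zero pixel to 1 is the only way to satisfy both norm constraints at once while keeping the map binary.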
To keep the classification standard consistent before and after the saliency map is updated, the embodiment of the invention freezes the classifier in the classification network and performs the second training only on the encoder, so that only the saliency map output by the encoder is updated. Record the classification probability before the perturbation as $P$, the perturbed saliency map as $\widetilde{CAM} = CAM + E$, the saliency map output by the encoder during the second training as $CAM'$, and the classification result of the frozen classifier after the update as $\widetilde{P}$. One perturbation of the saliency map is completed each time the second joint loss function reaches 0. The second joint loss function of the second training is:

$$L_2 = \max\big(0,\ P - \widetilde{P}\big) + \big\| CAM' - \widetilde{CAM} \big\|_1 + \Big|\, \| CAM' - CAM \|_1 - 1 \,\Big|$$

The first part constrains the confidence of the classification result after the saliency map is updated not to decrease, i.e. $\widetilde{P} \ge P$; the second part ensures that the saliency map realises the perturbation; the third part constrains each perturbation of the saliency map to enlarge exactly one pixel point.
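The three reconstructed terms of the second joint loss can be checked numerically with a sketch like the following; the term forms are assumptions consistent with the description, not the patent's exact formula.

```python
import numpy as np

def second_joint_loss(p_before, p_after, cam_out, cam_perturbed, cam_orig):
    """Numeric sketch of the three reconstructed terms: (1) hinge keeping
    the confidence from dropping after the perturbation, (2) L1 pixel loss
    tying the retrained saliency map to the perturbed map, (3) constraint
    that exactly one pixel of value 1 was added to the original map."""
    confidence_term = max(0.0, p_before - p_after)
    pixel_term = np.abs(cam_out - cam_perturbed).sum()
    perturbation_term = abs(np.abs(cam_out - cam_orig).sum() - 1.0)
    return confidence_term + pixel_term + perturbation_term
```

The loss is 0 exactly when the retrained map equals the perturbed map, differs from the original by one unit pixel, and the confidence has not dropped, which is the stopping condition for one perturbation.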
(2) After the second joint loss function converges, multiply the finally obtained saliency map by the initial peanut image input to the classification network to obtain the mildew region and the confidence corresponding to each pixel value in it, and store the pixel values of the mildew region, with their confidences, into the mildew set.
In the same way as step (1) above, the updated saliency map is taken as the saliency map before the next perturbation, that map is perturbed again, and the network is trained once more with the second joint loss function; when the second joint loss function again reaches 0, a further updated saliency map is obtained on the basis of the perturbation result. This iteration is repeated continuously until the change in the classification result converges to 0; the iteration then stops, the saliency map is no longer updated, and the final saliency map and corresponding final classification result, denoted $CAM^{*}$ and $P^{*}$ respectively, are obtained, where $CAM^{*}$ is a binary map and $P^{*}$ indicates the probability that the input peanut image is mildewed.
The final saliency map $CAM^{*}$ is multiplied by the input initial peanut image to obtain the region whose mildew probability is $P^{*}$. The pixel values of that region and the corresponding mildew probability $P^{*}$ are stored into the mildew set; the stored pixel values are the values of the initial peanut image in the three RGB channels, and $P^{*}$ indicates the probability that each such pixel value belongs to a mildew region.
Step S003: substitute each pixel value of the initial image into its corresponding Gaussian distribution models to obtain the corresponding probability values; take the confidence corresponding to each Gaussian model as a weight and form the weighted sum of the probability values to obtain an initial score for each pixel value; compare each pixel value's initial score as a mildewed pixel with its initial score as a normal pixel to obtain a score difference; and determine that the peanut is mildewed when the score difference over all pixels of the initial image is greater than zero.
The Gaussian distribution model corresponding to each pixel value is obtained as follows. Using the same method as step S002, a large number of peanut images are classified to obtain a mildew pixel set holding the peanut mildew pixels and a normal pixel set holding the normal pixels; every pixel value in the two sets corresponds to a confidence. All pixel values in a set that share the same confidence are analysed to obtain a three-dimensional Gaussian model of the pixel values under each confidence. Record the three-dimensional Gaussian model under confidence $c$ in data set $s$ as $G_{s,c}(x)$, where $x$ is the pixel value over the three RGB channels. The initial score of the pixel value of pixel point $p$ as a mildew pixel is:

$$S_m(p) = \sum_{j=1}^{n_m} c_j \, G_{m, c_j}(x_p),$$

where $n_m$ is the number of different confidences in the mildew pixel set. The initial score of the pixel value of pixel point $p$ as a normal pixel is:

$$S_n(p) = \sum_{j=1}^{n_n} c_j \, G_{n, c_j}(x_p),$$

where $n_n$ is the number of different confidences in the normal pixel set. The score difference is then:

$$\Delta S = \sum_{p} \big( S_m(p) - S_n(p) \big).$$
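A sketch of the confidence-weighted Gaussian scoring for a single RGB pixel, under assumed data structures: each set is a hypothetical list of (confidence, mean, covariance) triples fitted at training time.

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a three-dimensional Gaussian over RGB pixel values."""
    d = x - mean
    norm = np.sqrt(((2 * np.pi) ** 3) * np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm)

def initial_score(pixel, models):
    """Confidence-weighted sum of Gaussian probabilities for one pixel;
    `models` is a list of (confidence, mean, covariance) triples from
    either the mildew set or the normal set."""
    return sum(c * gaussian_pdf(pixel, m, s) for c, m, s in models)

def score_difference(pixel, mildew_models, normal_models):
    """Positive when the pixel scores higher as mildew than as normal."""
    return initial_score(pixel, mildew_models) - initial_score(pixel, normal_models)
```

Summing this difference over all pixels of an initial image and testing against zero reproduces the decision rule of step S003 without running the neural network at recognition time.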
When the score difference is greater than 0, the peanut to be identified is a mildewed peanut, and the peanut mildew identification result is obtained.
In summary, the method provided by the embodiment of the invention matches each pixel value in the initial peanut image, by value, against the pixel values in the pixel property classification set to obtain its confidences and the Gaussian distribution model corresponding to each confidence; it then substitutes the pixel value into the corresponding Gaussian distribution model to obtain a probability value, and derives the peanut mildew result from the probability values and their corresponding confidences. It should be noted that the pixel property classification set, the confidences and the corresponding Gaussian distribution models in the embodiment of the present invention are all results obtained in the network training stage; in the actual recognition process they participate as known data obtained from historical processing, which reduces the amount of calculation while ensuring the accuracy of the recognition result.
Based on the same inventive concept as the method embodiments, another embodiment of the present invention further provides a neural network-based peanut mildew identification system, which includes a memory, a processor and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the steps of the neural network-based peanut mildew identification method provided by any one of the above embodiments. As the method is described in detail in the above embodiments, it is not repeated here.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (9)
1. A peanut mildew identification method based on a neural network is characterized by comprising the following steps:
acquiring initial images of peanuts, wherein each initial image comprises one peanut;
matching the pixel values in the initial image with the pixel values in a pixel property classification set according to the magnitude of the pixel values, and obtaining the confidences of the pixel values of the initial image in the pixel property classification set and a Gaussian distribution model corresponding to each confidence; wherein the pixel property classification set divides the pixel values of historical peanut images, according to whether they belong to mildew pixels, into a mildew set consisting of mildewed pixel values and a normal set consisting of normal, non-mildewed pixel values; each pixel value in the pixel property classification set corresponds to a confidence, and the different pixel values under each confidence correspond to a Gaussian distribution model;
and substituting each pixel value in the initial image into its corresponding Gaussian distribution model to obtain a corresponding probability value; taking the confidence corresponding to each Gaussian model as a weight, computing a weighted sum of the probability values to obtain an initial score for each pixel value in the initial image; comparing the initial score of each pixel value as a mildewed pixel with its initial score as a normal pixel to obtain a score difference; and determining that the peanuts are mildewed when the score difference of all the pixels in the initial image is greater than zero.
2. The neural network-based peanut mildew identification method according to claim 1, wherein the confidence level obtaining step comprises:
and classifying the peanut historical image by using a classification network to obtain a mildew set consisting of mildew pixel values and a normal set consisting of normal pixel values, wherein the classification network outputs the confidence coefficient of each pixel value.
3. The neural network-based peanut mildew identification method according to claim 2, wherein classifying the peanut historical images with the classification network comprises the following steps for optimizing the classification results by focusing on the salient regions:
obtaining a saliency map after the feature map extracted by the classification network is subjected to global average pooling operation, and carrying out thresholding operation on the saliency map to obtain a binary map;
and acquiring pixel value difference between the significance map and the binary map as attention loss, and performing joint training on the classification network by using the attention loss and cross entropy loss between the output of the classification network and a label as a first joint loss function.
4. The neural network-based peanut mildew identification method of claim 3, wherein the classification network further comprises, after convergence of the first joint loss function: inputting an initial image of peanuts to be identified into a trained classification network, multiplying a saliency map and the initial image of the peanuts input into the classification network to obtain an attention area corresponding to a classification result when the absolute difference between the probability value of not mildewing and the probability value of mildewing is larger than a preset reliable threshold, and storing pixel values in the attention area and confidence degrees corresponding to the pixel values into corresponding pixel property classification sets.
5. The method as claimed in claim 3, wherein the classification network further comprises, after convergence of the first joint loss function, a step of optimizing the pixel property classification set according to the change in the classification result obtained after attention pixels are added to the saliency map.
6. The neural network-based peanut mildew identification method according to claim 5, wherein the step of adding the attention pixels of the saliency map comprises:
randomly adding one pixel with a pixel value of 1 to the saliency map at each perturbation, to obtain the disturbed saliency map.
7. The neural network-based peanut mildew identification method of claim 4, wherein the classification network further comprises the following steps after the convergence of the first joint loss function:
a step of training again with a second joint loss function, wherein the second joint loss function comprises: a constraint that the probability difference between the classification result of the second training and that of the first training is greater than or equal to zero; a pixel loss between the saliency map output by the second training and the disturbed saliency map; and a constraint that one disturbed pixel, with a pixel value of 1, is added at each step;
after the second combined loss function is converged, multiplying the finally obtained significance map and the initial image of the peanuts input into the classification network to obtain a mildew area and a confidence coefficient corresponding to each pixel value in the mildew area; and storing the pixel values in the mildew region and the corresponding confidence degrees into a mildew set.
8. The neural network-based peanut mildew identification method of claim 1, wherein the step of obtaining an initial image of peanuts comprises: acquiring an original image of peanuts, wherein the original image comprises a plurality of peanut kernels; and segmenting the original image by using a watershed algorithm to obtain a binary image, and multiplying the binary image and the original image to obtain an initial image of each peanut.
9. A neural network-based peanut mildew identification system, comprising a memory, a processor and a computer program stored in said memory and runnable on said processor, wherein said processor, when executing said computer program, implements the steps of the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210046058.4A CN114067314B (en) | 2022-01-17 | 2022-01-17 | Neural network-based peanut mildew identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114067314A CN114067314A (en) | 2022-02-18 |
CN114067314B true CN114067314B (en) | 2022-04-26 |
Family
ID=80231394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210046058.4A Active CN114067314B (en) | 2022-01-17 | 2022-01-17 | Neural network-based peanut mildew identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114067314B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114240807B (en) * | 2022-02-28 | 2022-05-17 | 山东慧丰花生食品股份有限公司 | Peanut aflatoxin detection method and system based on machine vision |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102903124A (en) * | 2012-09-13 | 2013-01-30 | 苏州大学 | Moving object detection method |
CN103150738A (en) * | 2013-02-02 | 2013-06-12 | 南京理工大学 | Detection method of moving objects of distributed multisensor |
CN103390278A (en) * | 2013-07-23 | 2013-11-13 | 中国科学技术大学 | Detecting system for video aberrant behavior |
CN103679641A (en) * | 2012-09-26 | 2014-03-26 | 株式会社理光 | Depth image enhancing method and apparatus |
CN103793477A (en) * | 2014-01-10 | 2014-05-14 | 同观科技(深圳)有限公司 | System and method for video abstract generation |
CN110020621A (en) * | 2019-04-01 | 2019-07-16 | 浙江工业大学 | A kind of moving Object Detection method |
CN110517226A (en) * | 2019-07-24 | 2019-11-29 | 南京大树智能科技股份有限公司 | The offal method for extracting region of multiple features texture image fusion based on bilateral filtering |
CN111563902A (en) * | 2020-04-23 | 2020-08-21 | 华南理工大学 | Lung lobe segmentation method and system based on three-dimensional convolutional neural network |
CN112100435A (en) * | 2020-09-09 | 2020-12-18 | 沈阳帝信人工智能产业研究院有限公司 | Automatic labeling method based on edge end traffic audio and video synchronization sample |
CN113420614A (en) * | 2021-06-03 | 2021-09-21 | 江苏海洋大学 | Method for identifying mildewed peanuts by using near-infrared hyperspectral images based on deep learning algorithm |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112149717B (en) * | 2020-09-03 | 2022-12-02 | 清华大学 | Confidence weighting-based graph neural network training method and device |
CN113505820B (en) * | 2021-06-23 | 2024-02-06 | 北京阅视智能技术有限责任公司 | Image recognition model training method, device, equipment and medium |
Non-Patent Citations (4)
Title |
---|
A Memory- and Accuracy-Aware Gaussian Parameter-Based Stereo Matching Using Confidence Measure;Yeongmin Lee等;《IEEE Transactions on Pattern Analysis and Machine Intelligence》;20191218;第43卷(第6期);第1845-1858页 * |
A New Fuzzy Clustering Algorithm for Brain MR Image Segmentation Using Gaussian Probabilistic and Entropy-Based Likelihood Measures;Sayan Kahali等;《2018 International Conference on Communication, Computing and Internet of Things (IC3IoT)》;20190318;第54-59页 * |
Research on Moving Object Detection and Tracking Algorithms in Dynamic Scenes; Xie Wenhui; China Masters' Theses Full-text Database, Information Science and Technology; 20120630; Vol. 2012, No. 6; I138-1762 *
Video-based Moving Object Detection and Tracking Technology; Yang Yang; China Masters' Theses Full-text Database, Information Science and Technology; 20140630; Vol. 2014, No. 6; I138-941 *
Also Published As
Publication number | Publication date |
---|---|
CN114067314A (en) | 2022-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10282589B2 (en) | Method and system for detection and classification of cells using convolutional neural networks | |
CN110443143B (en) | Multi-branch convolutional neural network fused remote sensing image scene classification method | |
CN107609525B (en) | Remote sensing image target detection method for constructing convolutional neural network based on pruning strategy | |
CN107526785B (en) | Text classification method and device | |
CN112734775B (en) | Image labeling, image semantic segmentation and model training methods and devices | |
US20190228268A1 (en) | Method and system for cell image segmentation using multi-stage convolutional neural networks | |
CN109002755B (en) | Age estimation model construction method and estimation method based on face image | |
CN110222634B (en) | Human body posture recognition method based on convolutional neural network | |
CN111079639A (en) | Method, device and equipment for constructing garbage image classification model and storage medium | |
CN111652317B (en) | Super-parameter image segmentation method based on Bayes deep learning | |
CN112200121B (en) | Hyperspectral unknown target detection method based on EVM and deep learning | |
CN112561910A (en) | Industrial surface defect detection method based on multi-scale feature fusion | |
CN112381764A (en) | Crop disease and insect pest detection method | |
CN111898621A (en) | Outline shape recognition method | |
CN113221956B (en) | Target identification method and device based on improved multi-scale depth model | |
CN111815582B (en) | Two-dimensional code region detection method for improving background priori and foreground priori | |
CN112766170A (en) | Self-adaptive segmentation detection method and device based on cluster unmanned aerial vehicle image | |
CN114782761A (en) | Intelligent storage material identification method and system based on deep learning | |
CN114067314B (en) | Neural network-based peanut mildew identification method and system | |
CN109101984B (en) | Image identification method and device based on convolutional neural network | |
CN116206208B (en) | Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence | |
CN116245855B (en) | Crop variety identification method, device, equipment and storage medium | |
CN115496936A (en) | Vegetable identification method based on image cutting and residual error structure | |
CN115861790A (en) | Cultivated land remote sensing image analysis method, device, equipment, storage medium and product | |
CN111126513B (en) | Universal object real-time learning and recognition system and learning and recognition method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||