CN112949704B - Tobacco leaf maturity state identification method and device based on image analysis - Google Patents
Tobacco leaf maturity state identification method and device based on image analysis
- Publication number
- CN112949704B (application CN202110205039.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- tobacco
- tobacco leaf
- maturity
- leaf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a tobacco maturity state identification method based on image analysis, which is characterized by comprising the following steps of: collecting original data of a tobacco curing barn in a preset period; obtaining a primary judgment result of the maturity of the tobacco leaves according to the baking curve; preprocessing the baking image to obtain a tobacco leaf global image; processing the global image of the tobacco leaves by adopting a segmentation model of a full convolution neural network, and extracting a local image of the tobacco leaves; determining global and local images to be analyzed for judging the maturity of the tobacco leaves according to the primary judgment result of the maturity of the tobacco leaves; and acquiring a tobacco leaf global image and the maturity state of the local image of the tobacco leaf by adopting a tobacco leaf maturity state identification model of a convolutional neural network, and judging the tobacco leaf maturity state. According to the method, the tobacco leaf curing image is analyzed for multiple times, automatic tobacco leaf maturity state identification is realized based on the convolutional neural network model, the tobacco leaf curing quality can be greatly improved, and the tobacco leaf curing loss is reduced.
Description
Technical Field
The invention relates to the technical field of tobacco leaf baking, in particular to a tobacco leaf maturity state identification method and device based on image analysis.
Background
Image analysis technology is an important field of artificial intelligence. In short, image analysis uses a computer to analyze and process images in order to identify targets of various different patterns. In recent years, with the development and popularization of artificial intelligence, and especially the extensive research and development of deep learning technologies, image analysis has entered a new stage: the mainstream approach to image recognition has gradually shifted from simple processing combined with conventional machine learning to intelligent analysis methods built mainly on deep neural networks.
In the tobacco leaf baking process, changes in the temperature and humidity of the baking environment affect the baking quality of the tobacco leaves, so the maturity of the tobacco leaves needs to be tracked, observed and detected throughout the baking process, and factors such as temperature and humidity adjusted in time to avoid loss of baking quality. At present, the identification and detection of tobacco leaf maturity rely mainly on tobacco bakers specially trained by tobacco-related organizations, who manually observe and check the whole baking process and manually adjust the parameters and baking curve of each baking stage. However, the maturity changes of tobacco leaves during baking are not completely uniform, manual identification and detection depend mainly on visual observation and subjective judgment, and each baker judges tobacco leaf maturity differently; these uncertain factors ultimately affect the baking quality of the tobacco leaves to a certain extent and cause great baking losses. Therefore, images of the tobacco leaves being baked can be collected and observed, image analysis can then be applied to the tobacco leaf maturity, the maturity at each stage of the baking process can be identified intelligently, and the analysis result can be used to adjust the parameters and baking curve of the baking stage automatically. Such intelligent identification not only avoids the unstable factors caused by manual identification and adjustment, but also saves the related human resources through automatic adjustment.
Disclosure of Invention
The invention provides a tobacco leaf maturity state identification method and device based on image analysis, and aims to solve the problems of low identification precision, high cost and the like in the tobacco leaf curing maturity identification process. Therefore, the present invention adopts the following technical solutions.
In a first aspect, the invention provides a tobacco maturity state identification method based on image analysis, which comprises the following steps:
s1: in the tobacco leaf baking process, acquiring original data of a tobacco leaf baking room in a preset period, wherein the original data comprises a baking image, tobacco leaf baking room dry bulb and wet bulb temperatures and tobacco leaf baking time;
s2: according to a baking curve, judging the tobacco leaf maturity state of the tobacco leaf baking room according to the dry bulb and wet bulb temperatures of the tobacco leaf baking room and the tobacco leaf baking duration to obtain a primary tobacco leaf maturity judgment result;
s3: preprocessing the baked image, and extracting a tobacco leaf global image in the baked image to obtain a tobacco leaf global image;
s4: processing the tobacco leaf global image by adopting a segmentation model of a full convolution neural network, and extracting a local image of the tobacco leaf in the tobacco leaf global image;
s5: determining an image to be analyzed for judging the maturity of the tobacco leaves according to the primary judgment result of the maturity of the tobacco leaves, wherein the image comprises a global image of the tobacco leaves and a local image of the tobacco leaves;
s6: identifying the tobacco leaf global image and the tobacco leaf local image by adopting a tobacco leaf maturity state identification model of a convolutional neural network to obtain maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image;
s7: and outputting the tobacco maturity state and the tobacco maturity state probability according to the maturity states and the probabilities of the global tobacco image and the local tobacco image.
Further, the tobacco maturity status comprises: the early stage of yellowing, the middle stage of yellowing, the later stage of yellowing, the early stage of fixing color, the middle stage of fixing color, the later stage of fixing color, the early stage of dry tendon, the middle stage of dry tendon and the later stage of dry tendon.
Further, the step S3 specifically includes:
s31: reading the baked image;
s32: converting the RGB color space of the baked image into HSV color space, and finishing binarization segmentation by using an H channel image as a segmentation image to obtain a binarization segmentation image;
s33: the binarized segmentation image is mapped onto the baking image, and the foreground image of the tobacco leaf area in the binarized segmentation image is preprocessed to enhance the detailed description of the tobacco leaf area;
the foreground image preprocessing is specifically:
I_c(i, j) = I_c(i, j) − minI(i, j),  c ∈ {R, G, B},  1 ≤ i ≤ M, 1 ≤ j ≤ N
wherein I(i, j) is a pixel point in the intercepted tobacco RGB image, minI(i, j) is the minimum value of the three RGB channels of the pixel at coordinate (i, j), N is the number of pixels in each row, M is the number of pixels in each column, and the minimum value of the three RGB channels is subtracted from each channel of the pixel point I(i, j).
S34: and intercepting the image of the tobacco leaf area in the image processed in the step S33 to obtain a tobacco leaf global image.
Further, the local image of the tobacco leaf includes: leaf ear local images, main vein local images, branch vein local images and leaf tip local images.
Further, the specific steps of obtaining the segmentation model in step S4 are as follows:
firstly, acquiring a tobacco leaf image through a camera to be used as a tobacco leaf segmentation training sample set;
secondly, labeling and preprocessing the images of the tobacco leaf segmentation training sample set;
thirdly, establishing a convolutional neural network of a training model;
and fourthly, performing iterative training on the labeled and preprocessed tobacco leaf segmentation training sample set by adopting the convolutional neural network in the previous step to obtain the segmentation model.
Further, the image labeling sets the pixel points of all background (non-tobacco) parts to the value 0 and describes the pixel points of the leaf ear local image, main vein local image, branch vein local image and leaf tip local image of the tobacco leaf with four different values, finally forming the Ground-Truth image needed for training the segmentation-model convolutional neural network.
Further, the convolutional neural network construction step of the training model is as follows:
(1) inputting the tobacco leaf segmentation training sample set images into the training model convolutional neural network, and performing a convolution operation through a first convolution group module to obtain a convolution feature image whose resolution is 4 times lower than that of the input tobacco leaf image;
(2) reducing the resolution by 4 times again through a second convolution group module, and carrying out average pooling operations of different scales on this layer of feature image to form feature images;
(3) performing convolution operation on each pooled characteristic image through single convolution to obtain context characteristic images with different scales;
(4) carrying out channel weighting selection on the context characteristic image group through a channel attention module, and carrying out 5-time channel compression on the weighted context characteristic image group through a primary convolution operation module to form a final context characteristic image;
(5) and fusing the spatial feature image with the spatial description information output by the first convolution group module and the context feature image after channel compression, selecting a channel and a space by adopting a space and channel attention module to form a feature image with context description and space description, and outputting a segmentation mask image with the same size as the input image by image up-sampling.
Further, in the step S5:
for the early yellowing stage, the middle yellowing stage and the later yellowing stage, mainly analyzing the leaf tip local image and the change of the whole tobacco leaf area, and taking the leaf tip local image and the tobacco leaf global image as the images to be analyzed for judging the maturity of the tobacco leaves;
for the early stage of color fixing, mainly analyzing the changes of the branch part and the whole area of the tobacco leaves, and taking the local image of the branch and the global image of the tobacco leaves as images needing to be analyzed for judging the maturity of the tobacco leaves;
for the middle fixing period and the later fixing period, mainly analyzing the changes of the main vein part and the whole tobacco leaf area, and taking the main vein local image and the tobacco leaf global image as images needing to be analyzed for judging the maturity of the tobacco leaf;
and for the early stage, the middle stage and the later stage of the dry tendon, mainly analyzing the leaf ear local images and the change of the whole tobacco leaf area, and taking the leaf ear local images and the tobacco leaf global images as the images to be analyzed for judging the maturity of the tobacco leaves.
Further, the specific steps of obtaining the tobacco maturity state identification model in the step S6 are as follows:
firstly, acquiring a global image and a local image of tobacco leaves through a camera, and taking the global image and the local image as a tobacco leaf maturity state recognition training sample set;
secondly, classifying and preprocessing the images of the tobacco maturity state recognition training sample set;
thirdly, establishing a convolutional neural network of a training model;
and fourthly, performing iterative training on the classified and preprocessed maturity state identification training sample set by adopting the convolutional neural network in the previous step to obtain the tobacco maturity state identification model.
Further, the images of the tobacco maturity state recognition training sample set are classified by assigning classification labels from 0 to 9 in sequence according to the different maturity states of the tobacco leaves in the images.
Further, the convolutional neural network of the training model is constructed by the following steps:
(1) inputting the RGB images of the training sample set of the tobacco maturity state recognition model after classification and pretreatment into the training model convolutional neural network, and forming a characteristic image with resolution reduced by 16 times relative to the input RGB through two convolutional group modules;
(2) carrying out average pooling operation on the feature images reduced by 16 times by adopting different pooling scales through average pooling to form feature images of different scales, and carrying out single convolution operation on each pooled feature image to obtain context feature images of different scales;
(3) sampling each context characteristic image on the image, and combining channels to form a context characteristic image group;
(4) performing channel weighting selection on the feature image group through a channel attention module, selecting a feature channel with better description capacity, and reducing the number of feature image combination channels by 5 times through a single convolution operation with 1 x 1 convolution kernel to form a final feature image with context description;
(5) and further reducing the characteristic image by 2 times by adopting a convolution module, calculating the reduced characteristic image by adopting a full-connection module to form a one-dimensional vector consistent with the classification number of the tobacco leaf maturity states, and quantizing by adopting a Softmax function to obtain the tobacco leaf maturity states and probabilities.
Further, when the maturity states of the global tobacco leaf image and the local tobacco leaf image acquired in the step S6 are consistent, the consistent maturity state is output as the tobacco leaf maturity state, and the greater probability of the two is output as the tobacco leaf maturity state probability.
Further, when the maturity states of the global tobacco leaf image and the local tobacco leaf image acquired in step S6 are not consistent, weighting the probabilities of the global tobacco leaf image and the local tobacco leaf image, outputting the maturity state corresponding to the weighted maximum probability as the maturity state of the tobacco leaf, and outputting the probability corresponding to the weighted maximum probability as the maturity state probability of the tobacco leaf.
The weighting processing and judging formula is as follows:
C = Cj and P = Wj·Pj, if Wj·Pj ≥ Wq·Pq; otherwise C = Cq and P = Wq·Pq
wherein C is the final tobacco leaf maturity state, P is the output tobacco leaf maturity state probability, Cj is the maturity state of the tobacco leaf local image, Cq is the maturity state of the tobacco leaf global image, Pj is the tobacco leaf local image probability, Pq is the tobacco leaf global image probability, Wj is the tobacco leaf local image probability weight coefficient, and Wq is the tobacco leaf global image probability weight coefficient.
Further, the local image probability weight coefficient Wj of the tobacco leaf is 0.6, and the global image probability weight coefficient Wq of the tobacco leaf is 0.4.
In a second aspect, the present application provides an image analysis-based tobacco maturity state identification apparatus, including:
a data acquisition module: the acquired data comprises a baking image, the dry bulb and wet bulb temperatures of the tobacco leaf baking room and the baking duration of the tobacco leaves;
the primary judgment module of the tobacco maturity comprises: judging the tobacco leaf maturity state of the tobacco leaf curing barn according to the dry bulb and wet bulb temperatures of the tobacco leaf curing barn and the tobacco leaf curing duration;
the tobacco leaf global image preprocessing module: preprocessing the baked image, and extracting a tobacco leaf global image in the baked image to obtain a tobacco leaf global image;
a local image segmentation module of the tobacco leaves: processing the tobacco leaf global image by adopting a full convolution neural network semantic segmentation model, and extracting a local image of the tobacco leaf in the tobacco leaf global image;
the image analysis module for judging the maturity of the tobacco leaves comprises: determining an image to be analyzed for judging the maturity of the tobacco leaves according to the primary judgment result of the maturity of the tobacco leaves, wherein the image comprises a global image of the tobacco leaves and a local image of the tobacco leaves;
tobacco maturity identification module: identifying the tobacco leaf global image and the tobacco leaf local image by adopting a tobacco leaf maturity state identification model of a convolutional neural network to obtain maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image;
a tobacco maturity output module: and outputting the tobacco maturity state and the tobacco maturity state probability according to the maturity states and the probabilities of the global tobacco image and the local tobacco image.
In a third aspect, the present application provides a tobacco maturity state identification system based on image analysis, which includes a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor implements the steps of the tobacco maturity state identification method based on image analysis when executing the program.
In a fourth aspect, the present application provides a computer storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the tobacco maturity state identification method based on image analysis.
The invention has the beneficial effects that: according to the tobacco leaf curing image recognition method, the tobacco leaf curing image is analyzed for multiple times, automatic tobacco leaf maturity state recognition is achieved based on the convolutional neural network model, and the tobacco leaf curing maturity can be recognized and detected accurately. Compared with the existing manual identification and detection method, the method is more objective and accurate, can greatly improve the tobacco leaf baking quality, reduces the tobacco leaf baking loss, and can save a large amount of manpower and financial resource cost.
Drawings
FIG. 1 is a schematic flow chart of a tobacco maturity state identification method based on image analysis according to the present invention;
fig. 2 is a schematic diagram of the baking curve in step S2 of the tobacco maturity state identification method based on image analysis according to the present invention;
fig. 3 is a schematic diagram of an image processing result of step S32 of the tobacco maturity state identification method based on image analysis according to the present invention;
fig. 4 is a tobacco leaf global image obtained by processing in step S3 of the tobacco leaf maturity state identification method based on image analysis according to the present invention;
fig. 5 is a schematic diagram of iterative training of a tobacco leaf segmentation model in step S4 of the method for identifying the maturity state of tobacco leaf based on image analysis according to the present invention;
fig. 6 is a schematic diagram of a segmentation result output by the tobacco leaf segmentation model in step S4 of the tobacco leaf maturity state identification method based on image analysis according to the present invention;
fig. 7 is a schematic diagram of iterative training of a tobacco maturity state recognition model in step S6 of the tobacco maturity state recognition method based on image analysis provided by the present invention.
Fig. 8 is a schematic structural diagram of a tobacco maturity state identification device based on image analysis provided by the invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail with reference to the accompanying examples and figures 1-8.
Referring to fig. 1, an embodiment of a tobacco maturity state identification method based on image analysis according to the present invention is shown in fig. 1, and the tobacco maturity state identification method based on image analysis specifically includes the following steps:
s1: in the tobacco leaf baking process, acquiring original data of a tobacco leaf baking room in a preset period, wherein the original data comprises a baking image, tobacco leaf baking room dry bulb and wet bulb temperatures and tobacco leaf baking time;
s2: according to a baking curve, judging the tobacco leaf maturity state of the tobacco leaf baking room according to the dry bulb and wet bulb temperatures of the tobacco leaf baking room and the tobacco leaf baking duration to obtain a primary tobacco leaf maturity judgment result;
s3: preprocessing the baked image, and extracting a tobacco leaf global image in the baked image to obtain a tobacco leaf global image;
s4: processing the tobacco leaf global image by adopting a segmentation model of a full convolution neural network, and extracting a local image of the tobacco leaf in the tobacco leaf global image;
s5: determining an image to be analyzed for judging the maturity of the tobacco leaves according to the primary judgment result of the maturity of the tobacco leaves, wherein the image comprises a global image of the tobacco leaves and a local image of the tobacco leaves;
s6: identifying the tobacco leaf global image and the tobacco leaf local image by adopting a tobacco leaf maturity state identification model of a convolutional neural network to obtain maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image;
s7: and outputting the tobacco maturity state and the tobacco maturity state probability according to the maturity states and the probabilities of the global tobacco image and the local tobacco image.
In step S1, specifically, during the tobacco leaf baking process, the baking image, the dry bulb and wet bulb temperatures of the tobacco leaf baking room, and the tobacco leaf baking time data are collected in a preset collection period, where the preset collection period is 5min, 8min, 10min, 15min, 16min, 20min, and the like, and the numerical values are only used for illustration and are not limited specifically.
The baking curve is a unified preset baking curve in the tobacco intensive baking industry, and takes the baking time as an abscissa and the dry-bulb and wet-bulb temperatures as ordinates.
In step S2, according to the baking curve, the tobacco maturity state of the tobacco flue-curing barn is determined according to the dry-bulb and wet-bulb temperatures of the tobacco flue-curing barn and the tobacco baking duration, so as to obtain a primary tobacco maturity determination result.
Referring to fig. 2, the curing curve is divided into 9 stages according to the maturity state of the tobacco leaves, as shown by the dotted-line segmentation in the figure. The maturity states of the tobacco leaves include: the early yellowing stage, the middle yellowing stage, the later yellowing stage, the early color-fixing stage, the middle color-fixing stage, the later color-fixing stage, the early dry tendon stage, the middle dry tendon stage and the later dry tendon stage; the maturity state categories are sequentially coded from 0 to 9, and the stage codes are shown in brackets after the tobacco leaf maturity states in fig. 2.
Referring to fig. 2, the baking curve includes a dry bulb temperature curve and a wet bulb temperature curve, the upper number of each line indicates the temperature and humidity to be reached, and the lower number indicates the duration of the period of the stage. The level represents a constant temperature period, the rise represents a warming period, and each baking stage comprises a rise period and a constant temperature period. In this embodiment, according to the baking curve, the tobacco maturity state of the tobacco flue-curing barn is preliminarily determined according to the dry bulb and wet bulb temperatures of the tobacco flue-curing barn and the baking duration of the tobacco.
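As an illustration only, the primary judgment of step S2 can be sketched as a lookup against the preset baking curve. Every numeric value below is a hypothetical placeholder, since the actual curve of fig. 2 is not reproduced here.

```python
# Hypothetical stage table for the preset baking curve of Fig. 2.
# (stage code, stage name, target dry-bulb °C, target wet-bulb °C, end hour)
BAKING_CURVE = [
    (0, "yellowing-early",     36, 34,  24),
    (1, "yellowing-middle",    38, 36,  48),
    (2, "yellowing-late",      42, 37,  60),
    (3, "color-fixing-early",  46, 38,  72),
    (4, "color-fixing-middle", 50, 39,  84),
    (5, "color-fixing-late",   54, 39,  96),
    (6, "dry-tendon-early",    60, 40, 108),
    (7, "dry-tendon-middle",   65, 41, 132),
    (8, "dry-tendon-late",     68, 42, 168),
]

def primary_maturity(dry_bulb: float, wet_bulb: float, hours: float) -> int:
    """Step S2 sketch: find the curve segment that contains the elapsed baking
    time and return its stage code as the primary maturity judgment result.
    The measured dry/wet bulb temperatures could additionally be compared
    against the segment targets to confirm the barn follows the curve."""
    for code, _name, _dry, _wet, end_hour in BAKING_CURVE:
        if hours <= end_hour:
            return code
    return BAKING_CURVE[-1][0]
```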
And preprocessing the baking image in step S3, and extracting a tobacco leaf global image in the baking image to obtain a tobacco leaf global image.
The specific step S3 includes:
s31: reading the baking image to obtain an RGB color space of the baking image;
s32: converting the RGB color space of the baked image into HSV color space, and finishing binarization segmentation by using an H channel image as a segmentation image to obtain a binarization segmentation image;
specifically, the binarization segmentation method can be described by the following formula:
B(i, j) = 1 if I(i, j) > Th, otherwise B(i, j) = 0
wherein I(i, j) is a pixel point in the intercepted tobacco H-channel image, Th is the binarization threshold of the global image obtained by iterative mean calculation, and B(i, j) is the binarized result: if the pixel value of the pixel point I(i, j) is greater than the threshold Th, the pixel is set to 1; otherwise it is set to 0.
The converted H-channel image is shown in the left image of FIG. 3, and the binarized segmented image is shown in the right image of FIG. 3.
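A minimal sketch of the binarization of step S32 in Python with OpenCV is given below; the iterative-mean (isodata-style) threshold is one plausible realisation of the iterative mean calculation mentioned above.

```python
import cv2
import numpy as np

def binarize_tobacco(bgr_image: np.ndarray) -> np.ndarray:
    """Step S32 sketch: convert the baking image to HSV and threshold the H channel."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0].astype(np.float32)

    # Iterative mean thresholding: split pixels by the current threshold and
    # move Th to the midpoint of the two class means until it converges.
    th = float(h.mean())
    for _ in range(100):
        low, high = h[h <= th], h[h > th]
        if low.size == 0 or high.size == 0:
            break
        new_th = 0.5 * (low.mean() + high.mean())
        if abs(new_th - th) < 0.5:          # threshold has converged
            break
        th = new_th

    # Pixels whose H value exceeds Th become 1 (tobacco foreground), else 0.
    return (h > th).astype(np.uint8)
```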
S33: the binary segmentation image corresponds to the original image of the baking image, and the foreground image of the tobacco leaf area in the binary segmentation image is preprocessed to enhance the detailed description of the tobacco leaf area;
the detailed description is specifically as follows:
wherein I (I, j) is a pixel point in the intercepted tobacco RGB image, minI (I, j) is the minimum value of RGB three channels of a pixel at the coordinate point (I, j), N is the pixel number of each line, M is the pixel number of each column, and the minimum value of the RGB three channels is subtracted from each channel of the I (I, j) pixel point.
S34: and intercepting the image of the tobacco leaf area in the image processed in the step S33 to obtain a tobacco leaf global image.
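The foreground enhancement of step S33 and the cropping of step S34 can be sketched as follows; the bounding-box crop is an assumption about how the tobacco leaf area is intercepted.

```python
import numpy as np

def enhance_and_crop(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Steps S33-S34 sketch: enhance the tobacco foreground and crop it.

    `mask` is the binarized segmentation image from step S32. For every
    foreground pixel, the minimum of its R, G and B values (minI(i, j)) is
    subtracted from each channel; the bounding box of the mask is then
    cropped to give the tobacco-leaf global image.
    """
    out = rgb.astype(np.int16)
    min_channel = out.min(axis=2, keepdims=True)      # minI(i, j)
    fg = mask.astype(bool)
    out[fg] = out[fg] - min_channel[fg]               # per-channel subtraction
    out = np.clip(out, 0, 255).astype(np.uint8)

    ys, xs = np.nonzero(fg)
    if ys.size == 0:                                  # no tobacco region found
        return out
    return out[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```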
Step S4: processing the tobacco leaf global image with a deep fully convolutional neural network semantic segmentation model combining multi-scale semantic description with spatial and channel attention, and extracting the local images of the tobacco leaf from the tobacco leaf global image;
Specifically, in the tobacco leaf global image, each part of the tobacco leaf is further decomposed, and each local image of the tobacco leaf is segmented by the deep fully convolutional semantic segmentation method combining multi-scale semantic description with spatial and channel attention; the local images include: the leaf ear local image, main vein local image, branch vein local image and leaf tip local image.
The specific steps for obtaining the segmentation model are as follows:
s41: the method comprises the steps of collecting a tobacco leaf segmentation training sample set, collecting tobacco leaf images with various angles, sizes and different mature states under various illumination conditions through a camera to serve as the tobacco leaf segmentation training sample set, and collecting tobacco leaf images with various angles, sizes and different mature states under various illumination conditions through 2400 cameras to serve as the tobacco leaf training sample set.
S42: labeling and preprocessing the images of the tobacco leaf segmentation training sample set;
specifically, the specific operations of image annotation of the tobacco leaf segmentation training sample set are as follows:
image labeling is to carry out numerical 0 on pixel points of all parts of background non-tobacco leaves, carry out unified numerical description on pixel points in leaf ear local images, main vein local images, branch vein local images and leaf apex local images of the tobacco leaves by adopting four different numerical values, and finally form a group-Truth image which needs to be segmented for training a segmentation model convolutional neural network. In this embodiment, the pixel of the blade tip local image is represented by a pixel value 1, the pixel of the main vein local image is represented by a pixel value 2, the pixel of the branch vein local image is represented by a pixel value 3, the leaf ear local image is represented by a pixel value 4, and the pixels of the rest parts of the image are all set to be a pixel value 0.
Specifically, the tobacco leaf segmentation training sample set image preprocessing specifically operates as follows:
(1) and (4) cutting the marked images, and adjusting all the cut images to be 512 × 512 in size.
(2) And (3) carrying out rotation or turnover operation on the images in the training sample set in the step (1) to form an amplification training sample. The rotation angle randomly takes a value in the range of 0-20 degrees, the overturning direction randomly takes horizontal overturning or longitudinal overturning, the sample diversity is increased, and the generalization performance of the convolutional neural network training model is improved.
(3) Converting the training sample set images in step (1) from RGB to HSV color space, enhancing the converted H channel image, S channel image and V channel image with different coefficients (enhancement coefficient 0.7-1.4), and then converting the enhanced HSV color space image back to RGB, thereby realizing color enhancement of the training samples.
(4) And (4) merging the training sample sets formed in the step (1), the step (2) and the step (3) to be used as a convolutional neural network training sample set.
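The preprocessing steps (1)-(3) can be sketched as a paired augmentation routine; cropping before resizing is omitted here, and the random parameters follow the ranges stated above.

```python
import random
import cv2
import numpy as np

def augment_pair(rgb: np.ndarray, gt: np.ndarray):
    """Sketch of the S42 preprocessing: adjust to 512x512, random rotation or
    flip, and HSV channel gain in [0.7, 1.4]. The same geometric transform is
    applied to the RGB image and its Ground-Truth mask to keep labels aligned;
    colour enhancement is applied to the RGB image only."""
    rgb = cv2.resize(rgb, (512, 512), interpolation=cv2.INTER_LINEAR)
    gt = cv2.resize(gt, (512, 512), interpolation=cv2.INTER_NEAREST)

    # Random rotation in [0, 20] degrees, or a horizontal/vertical flip.
    if random.random() < 0.5:
        m = cv2.getRotationMatrix2D((256, 256), random.uniform(0, 20), 1.0)
        rgb = cv2.warpAffine(rgb, m, (512, 512))
        gt = cv2.warpAffine(gt, m, (512, 512), flags=cv2.INTER_NEAREST)
    else:
        axis = random.choice([0, 1])        # 0: vertical flip, 1: horizontal flip
        rgb, gt = cv2.flip(rgb, axis), cv2.flip(gt, axis)

    # Colour enhancement: scale each HSV channel by a factor in [0.7, 1.4].
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float32)
    for c in range(3):
        hsv[:, :, c] *= random.uniform(0.7, 1.4)
    hsv[:, :, 0] = np.clip(hsv[:, :, 0], 0, 179)      # OpenCV hue range
    hsv[:, :, 1:] = np.clip(hsv[:, :, 1:], 0, 255)
    rgb = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
    return rgb, gt
```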
S43: building a training model convolutional neural network;
specifically, the training model convolutional neural network is shown in fig. 5, in which CONV represents a convolution operation, CONVs represents a convolution group module formed by serially connecting convolution operations with convolution kernels having a size of 3, UPSAMPLE represents image upsampling, CA represents a channel attention module, and SCA represents a space and channel attention module.
The convolutional neural network construction steps of the training model are as follows:
(1) The RGB images of the convolutional neural network training sample set are input into the training model convolutional neural network, and a convolution operation is performed through the first CONVs module, formed by serially connected convolution operations, to obtain a convolution feature map whose resolution is 4 times lower than that of the input tobacco leaf image; the output feature map retains sufficient spatial description information.
(2) The resolution is reduced by 4 times again through the second CONVs module of serially connected convolutions, and average pooling operations of different scales are applied to this layer of feature image to form feature images;
the specific different scales are different pooling scale sizes, and are 4 in total, namely 32 × 32, 16 × 16, 8 × 8 and 4 × 4, and the obtained feature images are also different in size, so that the feature pyramid images are formed.
Average pooling can be described by the following equation:
g(m, n) = mean_{s,t}( f_mn(s, t) )
wherein g(m, n) is the feature image after pooling, m and n are the row and column positions in the feature image, mean denotes the mean-value calculation over the pooling window, and f_mn(s, t) denotes the pixel values of the feature image before pooling within the window corresponding to position (m, n).
(3) A convolution operation is performed on each pooled feature image through a single convolution to obtain context feature images of different scales, the different scales being the different pooling sizes, 4 in total, namely 32 × 32, 16 × 16, 8 × 8 and 4 × 4; each context feature image is up-sampled by UPSAMPLE and combined along the channel dimension to form a context feature image group;
UPSAMPLE enlarges a feature image by interpolation: the feature image value g(x, y) obtained by UPSAMPLE at position (x, y) is computed from the feature image values f(x1, y1) and f(x2, y2) at the neighbouring positions (x1, y1) and (x2, y2).
(4) Carrying out channel weighting selection on the context feature image group through a CA module, and carrying out 5 times of channel compression on the weighted context feature image group through a CONV (1 x 1) module to form a final context feature image;
the CA module performs channel weighting by multiplying each value in the vector by a respective channel by a vector equal to the number of channels in the feature image group, which can be described by the following formula:
wherein ICA(F) Representing a feature image group F obtained by channel weighting the feature image group FcEach of the feature images representing the feature image group F,representing a characteristic image fcThe characteristic channel weighting value of (1).
(5) The spatial feature image with spatial description information output by the first CONVs module is fused with the channel-compressed context feature image, channel and spatial selection is performed with the SCA (space and channel attention) module to form a feature image carrying both context description and spatial description, and UPSAMPLE is used again to output a segmentation mask image of the same size as the input image.
The SCA is composed of channel attention CA and space attention SA, SA is realized through a two-dimensional weighting matrix which is the same as the size of the characteristic image, and the numerical value of the two-dimensional weighting matrix is subjected to dot multiplication with each characteristic image of the characteristic image group to complete space position weighting. Can be described by the following formula:
I_SA(F) = [δ(i, j)·f_1(i, j), δ(i, j)·f_2(i, j), …, δ(i, j)·f_s(i, j)]
wherein I_SA(F) denotes the feature image group after spatial weighting, f_c(i, j) denotes the feature value of the c-th channel feature image of the feature image group at position (i, j), and δ(i, j) denotes the weighting value of the SA two-dimensional weighting matrix at position (i, j).
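The module order described above can be sketched in PyTorch as follows; the channel widths, the SE-style weight computation inside the CA and SCA modules, and the resolution at which spatial and context features are fused are assumptions, since only the sequence of operations is specified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_group(in_ch: int, out_ch: int) -> nn.Sequential:
    """CONVs: serially connected 3x3 convolutions followed by a 4x resolution reduction."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.MaxPool2d(4))

class ChannelAttention(nn.Module):
    """CA module; the SE-style computation of the channel weights is an assumption."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))               # one weighting value per channel
        return x * w.unsqueeze(-1).unsqueeze(-1)

class SpaceChannelAttention(nn.Module):
    """SCA module: channel attention followed by a 2-D spatial weighting matrix."""
    def __init__(self, ch: int):
        super().__init__()
        self.ca = ChannelAttention(ch)
        self.sa = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())
    def forward(self, x):
        x = self.ca(x)
        return x * self.sa(x)                         # delta(i, j) point-wise weighting

class TobaccoSegNet(nn.Module):
    """Sketch of the segmentation model of step S4; channel sizes are assumptions."""
    def __init__(self, num_classes: int = 5):         # background + 4 leaf parts
        super().__init__()
        self.convs1 = conv_group(3, 64)               # 1/4 resolution, spatial features
        self.convs2 = conv_group(64, 128)             # 1/16 resolution
        self.pool_sizes = (32, 16, 8, 4)              # pyramid pooling scales
        self.context_convs = nn.ModuleList(nn.Conv2d(128, 40, 1) for _ in self.pool_sizes)
        self.ca = ChannelAttention(160)               # 4 scales x 40 channels
        self.compress = nn.Conv2d(160, 32, 1)         # CONV(1x1): 5x channel compression
        self.sca = SpaceChannelAttention(64 + 32)
        self.head = nn.Conv2d(64 + 32, num_classes, 1)

    def forward(self, x):
        spatial = self.convs1(x)                      # spatial description information
        deep = self.convs2(spatial)
        ctx = []
        for size, conv in zip(self.pool_sizes, self.context_convs):
            p = conv(F.adaptive_avg_pool2d(deep, size))           # pooling + single conv
            ctx.append(F.interpolate(p, size=deep.shape[2:], mode="bilinear",
                                     align_corners=False))        # UPSAMPLE
        ctx = self.compress(self.ca(torch.cat(ctx, dim=1)))       # CA + channel compression
        ctx = F.interpolate(ctx, size=spatial.shape[2:], mode="bilinear",
                            align_corners=False)
        fused = self.sca(torch.cat([spatial, ctx], dim=1))        # fuse + SCA selection
        return F.interpolate(self.head(fused), size=x.shape[2:], mode="bilinear",
                             align_corners=False)                 # mask, same size as input
```

Calling TobaccoSegNet()(torch.rand(1, 3, 512, 512)) returns a 1 × 5 × 512 × 512 mask, matching the five pixel classes of the Ground-Truth labeling.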
And fourthly, performing iterative training on the tobacco leaf segmentation model by adopting the convolutional neural network in the previous step.
The specific iterative training process is as follows:
(1) The labeled and preprocessed tobacco leaf segmentation training sample set is grouped into groups of 8 samples, each sample comprising an RGB image and its corresponding Ground-Truth image.
(2) Using the training model convolutional neural network, the RGB images of each group of samples are input into the network, and the loss between the segmentation mask image output by the training model convolutional neural network and the labeled Ground-Truth image is calculated.
The loss function is shown by the following equation:
Loss=μCELoss+(1-μ)DiceLoss
where μ is a weight coefficient taking a value in the interval (0, 1), typically 0.5.
CELoss represents the cross-entropy loss, which is described by the formula:
CELoss = −Σ_{k=1..n} ŷ_k · log(y_k)
wherein ŷ_k represents the annotation information of a pixel point, y_k represents the feature value of that pixel output by the segmentation network model, and n represents the number of classes.
DiceLoss is defined as follows:
DiceLoss = 1 − 2·|x ∩ y| / (|x| + |y|)
wherein x represents the input labeled Ground-Truth image and y represents the segmentation mask image output by the training model convolutional neural network.
(3) And when the numerical value of the loss function is less than 0.1, the whole iterative training is finished to obtain a trained segmentation model.
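A sketch of the combined loss and the iterative training step is given below; the soft-Dice formulation and the Adam optimiser are assumptions, with only the loss combination, the group size of 8 and the 0.1 stopping criterion being fixed above.

```python
import torch
import torch.nn.functional as F

def seg_loss(logits: torch.Tensor, target: torch.Tensor, mu: float = 0.5) -> torch.Tensor:
    """Loss = mu*CELoss + (1-mu)*DiceLoss. `logits` is the B x C x H x W network
    output; `target` is a B x H x W long tensor of class indices 0-4."""
    ce = F.cross_entropy(logits, target)
    probs = logits.softmax(dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    dice = 1.0 - (2.0 * inter + 1.0) / (union + 1.0)      # smoothed soft Dice
    return mu * ce + (1.0 - mu) * dice.mean()

def train_segmentation(model, loader, epochs: int = 100, lr: float = 1e-3):
    """Iterate over groups of 8 (image, Ground-Truth) pairs and stop once the
    loss falls below 0.1, as in the iterative-training step above."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, targets in loader:                    # loader yields groups of 8 samples
            optimiser.zero_grad()
            loss = seg_loss(model(images), targets)
            loss.backward()
            optimiser.step()
            if loss.item() < 0.1:
                return model
    return model
```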
The tobacco leaf global image is input into the segmentation model, which outputs a segmentation image of the same size; a specific segmentation result is shown in fig. 6, and the image of each tobacco leaf part is then cropped out to obtain the local images of the tobacco leaf.
Step S5: and determining an image to be analyzed for judging the maturity of the tobacco leaves according to the primary judgment result of the maturity of the tobacco leaves, wherein the image comprises a global image of the tobacco leaves and a local image of the tobacco leaves.
Specifically, combining the primary judgment result of the tobacco maturity in the step S2, selecting the local position of the tobacco leaf needing to be analyzed in a focus manner according to the tobacco maturity state, taking the local image of the tobacco leaf and the global image of the tobacco leaf as the image needing to be analyzed in the current tobacco maturity state, and selecting the specific image of the tobacco leaf needing to be analyzed in a focus manner, wherein the whole selection process is as follows:
for the early yellowing stage, the middle yellowing stage and the later yellowing stage, mainly analyzing the changes of the leaf tip local image and the whole tobacco leaf area, and taking the leaf tip local image and the tobacco leaf global image as the images to be analyzed for judging the maturity of the tobacco leaves;
in the early stage of color fixing, the changes of the branch part and the whole area of the tobacco leaves are mainly analyzed, and the local image of the branch and the global image of the tobacco leaves are used as images needing to be analyzed for judging the maturity of the tobacco leaves;
for the middle period and the later period of the fixation, the changes of the main vein part and the whole tobacco leaf area are mainly analyzed, and the local main vein image and the global tobacco leaf image are used as images needing to be analyzed for judging the maturity of the tobacco leaf;
for the early stage, the middle stage and the later stage of the dry tendon, mainly analyzing the leaf ear local images and the change of the whole tobacco leaf area, and taking the leaf ear local images and the tobacco leaf global images as the images to be analyzed for judging the maturity of the tobacco leaves.
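A sketch of this selection rule as a simple lookup is given below; the stage codes follow the 0-9 coding of fig. 2 and the part names follow the segmentation labels used earlier.

```python
# Step S5 sketch: which local image is analysed together with the global
# image for each primary maturity stage.
LOCAL_PART_BY_STAGE = {
    0: "leaf_tip", 1: "leaf_tip", 2: "leaf_tip",     # yellowing early/middle/late
    3: "branch_vein",                                # color fixing, early
    4: "main_vein", 5: "main_vein",                  # color fixing, middle/late
    6: "leaf_ear", 7: "leaf_ear", 8: "leaf_ear",     # dry tendon early/middle/late
}

def images_to_analyse(stage_code: int, global_img, local_imgs: dict):
    """Return the (global image, local image) pair used for maturity recognition."""
    part = LOCAL_PART_BY_STAGE.get(stage_code, "leaf_tip")
    return global_img, local_imgs[part]
```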
And step S6, recognizing the tobacco leaf global image and the tobacco leaf local image by adopting a tobacco leaf maturity state recognition model of a deep convolutional neural network combining multi-scale semantic description and channel attention, and acquiring maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image.
The tobacco maturity status comprises: the early stage of yellowing, the middle stage of yellowing, the late stage of yellowing, the early stage of fixing color, the middle stage of fixing color, the late stage of fixing color, the early stage of dry tendon, the middle stage of dry tendon and the late stage of dry tendon.
The specific steps for obtaining the tobacco maturity state recognition model are as follows:
The first step is collecting a tobacco maturity state recognition model training sample set: tobacco leaf global images and tobacco leaf local images of different angles, sizes and maturity states under various illumination conditions are collected through a camera as the maturity state recognition training sample set; in a specific embodiment, 6400 tobacco leaf global images of various angles, sizes and maturity states under various illumination conditions, and 6400 tobacco leaf local-area images of various angles, sizes, maturity states and local regions under various illumination conditions, are collected as the training sample set.
Secondly, classifying and preprocessing the images of the tobacco maturity state recognition training sample set;
the specific image classification and preprocessing process comprises the following steps:
(1) the tobacco maturity status comprises: the early stage of yellowing, the middle stage of yellowing, the later stage of yellowing, the early stage of fixation, the middle stage of fixation, the later stage of fixation, the early stage of dry tendon, the middle stage of dry tendon and the later stage of dry tendon. And sequentially carrying out tobacco leaf maturity state category coding on the ten categories of maturity states from 0 to 9, and sequentially distributing category labels from 0 to 9 for classification according to different maturity states of tobacco leaves in the image.
(2) And (2) clipping the sample images classified in the step (1) by adopting different aspect ratio coefficients to form a certain amplification training sample, adjusting the clipped images into 256 × 256 images as a training sample set, and randomly selecting the aspect ratio coefficients to be 4:3 or 3: 4.
(3) And (3) carrying out image rotation and overturning operations of different angles on the classified sample image in the step (1) to form an amplified sample image with rotation and overturning characteristics. The rotation angle is randomly selected from the range of 0 to 20 degrees, and the overturning direction is randomly horizontally overturned or longitudinally overturned.
(4) And (2) converting the sample image classified in the step (1) from an RGB image into an HSV color space, enhancing different coefficients of the converted H channel image, S channel image and V channel image, wherein the enhancing coefficient takes a value of 0.8-1.2, and then converting the enhanced HSV space image back to the RGB image, thereby realizing color enhancement of the sample image.
(5) And (4) combining the sample images formed in the steps (1) to (4) and the classification labels thereof to be used as a training sample set of the tobacco maturity state recognition model after classification and pretreatment.
Thirdly, establishing a convolutional neural network of a training model;
specifically, the tobacco maturity state identification model identification network is shown in fig. 7, wherein CONV represents a convolution operation, CONVs represents a convolution group module formed by multiple convolution operations, UPSAMPLE represents image upsampling, CA represents a channel attention module, and FCs represents a full-connection module.
The convolutional neural network construction steps of the training model are as follows:
(1) and inputting the RGB images of the training sample set of the classified and preprocessed tobacco leaf maturity state recognition model into the training model convolutional neural network, and forming a characteristic image with resolution reduced by 16 times relative to the input RGB through two CONVs modules.
(2) And carrying out average pooling operation on the feature images reduced by 16 times by adopting different pooling scales through average pooling to form feature images with different scales, and carrying out single convolution operation on each pooled feature image to obtain context feature images with different scales.
(3) And (4) up-sampling each context feature image by using UPSAMPLE, and combining channels to form a context feature image group.
(4) And performing channel weighting selection on the feature image group through a CA module, selecting feature channels with more description capacity, and reducing the number of feature image combination channels by 5 times through a single convolution operation with a convolution kernel of 1 x 1 to form the final feature image with context description.
(5) And further reducing the characteristic image by 2 times by adopting a CONVs module, and calculating the reduced characteristic image by adopting a full-connection module FC to form a one-dimensional vector consistent with the tobacco maturity state number, namely 0-9 stage numbers. And finally, quantifying by adopting a Softmax function to obtain the state and the probability of the maturity of the tobacco leaves.
The Softmax function is described as follows:
p(y_i) = e^{y_i} / Σ_{j=1..n} e^{y_j}
wherein y_i represents a value in the one-dimensional vector and p(y_i) represents the probability quantization value of y_i; the larger the value of y_i, the larger the probability value.
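The recognition network described in steps (1)-(5) can be sketched in PyTorch as below; the pooling scales, channel widths and the assumed 256 × 256 input size are illustrative choices, since only the module sequence is specified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TobaccoMaturityNet(nn.Module):
    """Sketch of the maturity recognition model of step S6; sizes are assumptions."""
    def __init__(self, num_classes: int = 10):        # stage codes 0-9
        super().__init__()
        def convs(in_ch, out_ch):                     # CONVs group with 4x reduction
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(4))
        self.backbone = nn.Sequential(convs(3, 64), convs(64, 128))   # 1/16 resolution
        self.pool_sizes = (8, 4, 2, 1)                # pyramid pooling scales (assumed)
        self.context_convs = nn.ModuleList(nn.Conv2d(128, 40, 1) for _ in self.pool_sizes)
        self.ca_fc = nn.Sequential(                   # CA: one weight per channel
            nn.Linear(160, 40), nn.ReLU(inplace=True), nn.Linear(40, 160), nn.Sigmoid())
        self.compress = nn.Conv2d(160, 32, 1)         # 1x1 conv, 5x channel reduction
        self.reduce = nn.Sequential(                  # further 2x resolution reduction
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.fc = nn.Linear(32 * 8 * 8, num_classes)  # assumes 256x256 input images

    def forward(self, x):
        deep = self.backbone(x)                       # B x 128 x 16 x 16 for 256x256 input
        ctx = []
        for size, conv in zip(self.pool_sizes, self.context_convs):
            p = conv(F.adaptive_avg_pool2d(deep, size))
            ctx.append(F.interpolate(p, size=deep.shape[2:], mode="bilinear",
                                     align_corners=False))
        ctx = torch.cat(ctx, dim=1)                   # context feature image group
        w = self.ca_fc(ctx.mean(dim=(2, 3))).unsqueeze(-1).unsqueeze(-1)
        feat = self.reduce(self.compress(ctx * w))    # CA weighting, compression, reduction
        return self.fc(feat.flatten(1)).softmax(dim=1)    # stage probabilities 0-9
```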
And fourthly, performing iterative training on the labeled and preprocessed maturity state recognition sample training set by adopting the training model convolutional neural network.
The iterative training process specifically comprises:
(1) grouping the labeled and preprocessed training samples into 64 samples in each group, wherein each sample comprises an RGB image and a labeled class value corresponding to the RGB image.
(2) And inputting the RGB images of each group of samples into the training model convolutional neural network by adopting the training model convolutional neural network constructed in the way, and calculating the loss between the class value probability and the labeled class value output by the training model convolutional neural network.
The loss function is shown by the following equation:
Loss = −Σ_{k=1..n} ŷ_k · log(y_k)
wherein ŷ_k indicates the labeled class value, y_k represents the class probability output by the network, and n represents the number of classes.
(3) And when the numerical value of the loss function is less than 0.01, the whole iterative training is finished, and the trained tobacco maturity state recognition model is obtained.
Inputting the images (the tobacco leaf global image and the local image of the tobacco leaf) to be analyzed obtained in the step S5 into the tobacco leaf maturity state recognition model to output the tobacco leaf maturity state and the probability of the tobacco leaf global image and the tobacco leaf maturity state and the probability of the local image of the tobacco leaf.
And S7, outputting the tobacco maturity state and the tobacco maturity state probability according to the maturity states and the probabilities of the global tobacco image and the local tobacco image.
Since the images to be identified in the current stage include the local region image and the tobacco leaf global region image, the identification results of the two images need to be weighted and fused.
Specifically, the fusion process comprises the following steps:
and when the maturity states of the global tobacco leaf image and the local tobacco leaf image acquired according to the step S6 are consistent, outputting the consistent maturity state as the tobacco leaf maturity state, and outputting the greater probability of the two as the tobacco leaf maturity state probability.
And when the maturity states of the global tobacco leaf image and the local tobacco leaf image acquired according to the step S6 are not consistent, weighting the probabilities of the global tobacco leaf image and the local tobacco leaf image, outputting the maturity state corresponding to the weighted maximum probability as the tobacco leaf maturity state, and outputting the probability corresponding to the weighted maximum probability as the tobacco leaf maturity state probability.
The weighting processing and judging formula is as follows:
C = Cj and P = Wj·Pj, if Wj·Pj ≥ Wq·Pq; otherwise C = Cq and P = Wq·Pq
wherein C is the final tobacco leaf maturity state, P is the output tobacco leaf maturity state probability, Cj is the maturity state of the tobacco leaf local image, Cq is the maturity state of the tobacco leaf global image, Pj is the tobacco leaf local image probability, Pq is the tobacco leaf global image probability, Wj is the tobacco leaf local image probability weight coefficient, and Wq is the tobacco leaf global image probability weight coefficient.
Specifically, the local image probability weight coefficient Wj of the tobacco leaf is 0.6, and the global image probability weight coefficient Wq of the tobacco leaf is 0.4.
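The fusion rule of step S7, with the stated weights Wj = 0.6 and Wq = 0.4, can be sketched as:

```python
def fuse_maturity(cj: int, pj: float, cq: int, pq: float,
                  wj: float = 0.6, wq: float = 0.4):
    """Step S7 sketch: cj/pj are the stage and probability from the local image,
    cq/pq from the global image. When the stages agree, the larger probability
    is reported; otherwise the stage with the larger weighted probability wins."""
    if cj == cq:
        return cj, max(pj, pq)
    if wj * pj >= wq * pq:
        return cj, wj * pj
    return cq, wq * pq
```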
According to the embodiment, the tobacco leaf curing image is analyzed for multiple times through the method, automatic tobacco leaf maturity state identification is realized based on the convolutional neural network model, and the tobacco leaf curing maturity can be accurately identified and detected. Compared with the existing manual identification and detection method, the method is more objective and accurate, can greatly improve the tobacco leaf baking quality, reduces the tobacco leaf baking loss, and can save a large amount of manpower and financial resource cost.
In order to effectively improve the accuracy and reliability of tobacco maturity state identification and to improve the automation degree and efficiency of the identification process, the application provides an embodiment of a tobacco maturity state identification device based on image analysis that implements all or part of the tobacco maturity state identification method based on image analysis. Referring to fig. 8, the tobacco maturity state identification device based on image analysis comprises the following contents:
a data acquisition module: the acquired data comprises a baking image, the dry bulb and wet bulb temperatures of the tobacco leaf baking room and the baking duration of the tobacco leaves;
the primary judgment module of the tobacco maturity comprises: judging the tobacco leaf maturity state of the tobacco leaf curing barn according to the dry bulb and wet bulb temperatures of the tobacco leaf curing barn and the tobacco leaf curing duration;
the tobacco leaf global image preprocessing module: preprocessing the baked image, and extracting a tobacco leaf global image in the baked image to obtain a tobacco leaf global image;
a local image segmentation module of the tobacco leaves: processing the tobacco leaf global image by adopting a segmentation model of a deep full convolution neural network combining multi-scale semantic description, space and channel attention, and extracting a local image of tobacco leaves in the tobacco leaf global image;
the image analysis module for judging the maturity of the tobacco leaves comprises: determining an image to be analyzed for judging the maturity of the tobacco leaves according to the primary judgment result of the maturity of the tobacco leaves, wherein the image comprises a global image of the tobacco leaves and a local image of the tobacco leaves;
tobacco maturity identification module: identifying the tobacco leaf global image and the tobacco leaf local image by adopting a tobacco leaf maturity state identification model of a deep convolution neural network combining multi-scale semantic description and channel attention to obtain maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image;
and the tobacco maturity output module outputs the tobacco maturity state and the tobacco maturity state probability according to the tobacco global image and the maturity state and probability of the local image of the tobacco.
The tobacco leaf maturity state recognition device based on image analysis in this embodiment realizes automatic, high-precision recognition of the tobacco leaf maturity state, with low cost, simple operation and an obvious effect.
In order to effectively improve the accuracy and reliability of tobacco leaf curing maturity identification and to improve the automation degree and efficiency of the identification process, the application provides a tobacco maturity state identification system based on image analysis that implements all or part of the contents of the tobacco maturity state identification method based on image analysis, the system specifically comprising the following contents:
A processor, a memory and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the tobacco maturity state identification method based on image analysis;
A communication interface and a bus, through which the processor and the memory communicate with each other; the terminal may be a desktop computer, a tablet computer, a mobile terminal and the like.
In a specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program, and the computer program, when executed by a processor, may implement some or all of the steps of the method for identifying the maturity state of tobacco leaves based on image analysis provided by the present application. The computer storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
According to the above embodiments, automatic tobacco maturity state identification is realized based on convolutional neural network models, so that the curing maturity of tobacco leaves can be identified and detected accurately. Compared with existing manual identification and detection, the method is more objective and accurate, requires no manual work during identification, is fully automated, and is more efficient. The scheme can greatly improve tobacco curing quality, reduce curing losses, and save substantial labor and financial costs.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by way of software plus a required general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
Although the present invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention as defined in the following claims. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.
Claims (7)
1. A tobacco maturity state identification method based on image analysis is characterized by comprising the following steps:
S1: in the tobacco curing process, acquiring raw data of the tobacco curing barn at a preset period, wherein the raw data comprises a baking image, the dry-bulb and wet-bulb temperatures of the curing barn, and the tobacco baking duration;
S2: judging, according to a baking curve, the tobacco maturity state of the curing barn from the dry-bulb and wet-bulb temperatures of the curing barn and the tobacco baking duration, to obtain a primary tobacco maturity judgment result, wherein the tobacco maturity state comprises: early yellowing stage, middle yellowing stage, later yellowing stage, early color-fixing stage, middle color-fixing stage, later color-fixing stage, early dry-tendon stage, middle dry-tendon stage and later dry-tendon stage;
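Purely as an illustration of step S2, the sketch below implements the primary judgment as a lookup against a simplified baking curve; the stage thresholds are placeholder values invented for the example, not figures from the patent, and a real baking curve would combine dry-bulb temperature, wet-bulb temperature and elapsed curing time.

```python
# Toy baking-curve lookup for the primary maturity judgment of step S2.
# The dry-bulb thresholds are placeholders, NOT values taken from the patent.
BAKING_CURVE = [
    ("early yellowing stage", 35.0), ("middle yellowing stage", 38.0),
    ("later yellowing stage", 42.0), ("early color-fixing stage", 46.0),
    ("middle color-fixing stage", 48.0), ("later color-fixing stage", 54.0),
    ("early dry-tendon stage", 60.0), ("middle dry-tendon stage", 65.0),
    ("later dry-tendon stage", 68.0),
]

def primary_judgment(dry_bulb_c: float, wet_bulb_c: float, hours: float) -> str:
    """Coarse maturity stage implied by the curing-barn setpoints.

    In this toy rule only the dry-bulb temperature selects the stage; the
    wet-bulb temperature and curing duration are accepted but unused here,
    whereas the patented method evaluates all three against its baking curve.
    """
    stage = BAKING_CURVE[0][0]
    for name, threshold in BAKING_CURVE:
        if dry_bulb_c >= threshold:
            stage = name
    return stage
```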
S3: preprocessing the baking image and extracting the tobacco leaf region from it to obtain a tobacco leaf global image;
S4: processing the tobacco leaf global image with a segmentation model based on a fully convolutional neural network, and extracting local images of the tobacco leaf from the global image, wherein the local images comprise: a leaf-ear local image, a main-vein local image, a branch-vein local image and a leaf-tip local image;
the specific steps for obtaining the segmentation model are as follows:
firstly, acquiring tobacco leaf images through a camera as a tobacco leaf segmentation training sample set;
secondly, performing image labeling and preprocessing on the tobacco leaf segmentation training sample set, wherein the labeling assigns the value 0 to the pixels of all background (non-tobacco) parts and four distinct values to the pixels of the leaf-ear, main-vein, branch-vein and leaf-tip local regions of the tobacco leaf, so as to form the Ground-Truth segmentation images required for training the segmentation model convolutional neural network;
thirdly, building the convolutional neural network of the training model, wherein the building comprises the following steps:
(1) inputting the tobacco leaf segmentation training sample set images into the training model convolutional neural network, and performing a convolution operation through a first convolution group module to obtain a convolution feature image whose resolution is 4 times lower than that of the input tobacco leaf image;
(2) reducing the resolution by a further factor of 4 through a second convolution group module, and performing average pooling at different scales on the feature image output by the second convolution group module to form feature images of different scales;
(3) performing a single convolution on each pooled feature image to obtain context feature images of different scales, up-sampling the context feature images, and concatenating them along the channel dimension to form a context feature image group;
(4) performing channel-weighted selection on the context feature image group through a channel attention module, and compressing the channels of the weighted context feature image group by a factor of 5 through a single convolution operation to form a channel-compressed context feature image;
(5) fusing the spatial feature image carrying spatial description information output by the first convolution group module with the channel-compressed context feature image, performing channel and spatial selection with a spatial-and-channel attention module to form a feature image with both context and spatial description, and outputting, through image up-sampling, a segmentation mask image of the same size as the input image;
fourthly, iteratively training the convolutional neural network of the previous step on the labeled and preprocessed tobacco leaf segmentation training sample set to obtain the segmentation model;
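Read as a whole, steps (1) to (5) describe an encoder with multi-scale average pooling, channel attention over the pooled context features, 5-fold channel compression, and a fusion of spatial and context features before up-sampling to a mask. The PyTorch sketch below follows that reading under stated assumptions: layer widths, pooling scales and the internals of the attention blocks are invented for illustration, and the spatial-and-channel attention of step (5) is condensed into a plain fusion convolution.

```python
# Illustrative PyTorch reading of the segmentation network in steps (1)-(5).
# Channel counts, pooling scales and attention internals are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel-weighted selection via a squeeze-and-excitation style gate."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.gate(x.mean(dim=(2, 3)))        # global average pool -> weights
        return x * w[:, :, None, None]

class TobaccoSegNet(nn.Module):
    def __init__(self, num_classes: int = 5, pool_scales=(1, 2, 3, 6)):
        super().__init__()
        # (1) first convolution group: resolution / 4, keeps spatial cues
        self.group1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # (2) second convolution group: resolution / 4 again
        self.group2 = nn.Sequential(
            nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.pool_scales = pool_scales
        # (3) one single convolution per pooling scale -> context features
        self.context_convs = nn.ModuleList(
            [nn.Conv2d(256, 64, 1) for _ in pool_scales])
        ctx_channels = 64 * len(pool_scales)
        # (4) channel attention, then 5x channel compression
        self.ctx_attention = ChannelAttention(ctx_channels)
        self.compress = nn.Conv2d(ctx_channels, ctx_channels // 5, 1)
        # (5) fuse spatial and context features, then predict the mask
        self.fuse = nn.Conv2d(64 + ctx_channels // 5, 64, 3, padding=1)
        self.classifier = nn.Conv2d(64, num_classes, 1)  # background + 4 leaf parts

    def forward(self, x):
        spatial = self.group1(x)                 # 1/4 resolution spatial features
        deep = self.group2(spatial)              # 1/16 resolution features
        ctx = []
        for scale, conv in zip(self.pool_scales, self.context_convs):
            p = conv(F.adaptive_avg_pool2d(deep, scale))   # pool + single conv
            ctx.append(F.interpolate(p, size=spatial.shape[2:],
                                     mode="bilinear", align_corners=False))
        ctx = torch.cat(ctx, dim=1)              # context feature image group
        ctx = self.compress(self.ctx_attention(ctx))
        fused = F.relu(self.fuse(torch.cat([spatial, ctx], dim=1)))
        mask = self.classifier(fused)
        # up-sample the mask back to the input size, as in step (5)
        return F.interpolate(mask, size=x.shape[2:],
                             mode="bilinear", align_corners=False)
```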
S5: determining, according to the primary tobacco maturity judgment result, the images to be analyzed for maturity judgment, comprising the tobacco leaf global image and the tobacco leaf local images;
for the early yellowing stage, the middle yellowing stage and the later yellowing stage, analyzing the changes of the leaf-tip part and the whole tobacco leaf region, and taking the leaf-tip local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
for the early color-fixing stage, analyzing the changes of the branch-vein part and the whole tobacco leaf region, and taking the branch-vein local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
for the middle color-fixing stage and the later color-fixing stage, analyzing the changes of the main-vein part and the whole tobacco leaf region, and taking the main-vein local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
for the early, middle and later dry-tendon stages, analyzing the changes of the leaf-ear part and the whole tobacco leaf region, and taking the leaf-ear local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
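The per-stage selection rules above amount to a small lookup table; a minimal sketch, with stage and part names assumed for this example, might look like this:

```python
# Illustrative selection of the images to analyze per primary-judged stage.
# Dictionary keys and local-image part names are assumptions for this sketch.
STAGE_TO_LOCAL_PART = {
    "early yellowing stage": "leaf_tip",
    "middle yellowing stage": "leaf_tip",
    "later yellowing stage": "leaf_tip",
    "early color-fixing stage": "branch_vein",
    "middle color-fixing stage": "main_vein",
    "later color-fixing stage": "main_vein",
    "early dry-tendon stage": "leaf_ear",
    "middle dry-tendon stage": "leaf_ear",
    "later dry-tendon stage": "leaf_ear",
}

def select_images(stage: str, global_image, local_images: dict):
    """Return the (global image, stage-relevant local image) pair for step S6."""
    return global_image, local_images[STAGE_TO_LOCAL_PART[stage]]
```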
S6: identifying the tobacco leaf global image and the tobacco leaf local image with a tobacco maturity state identification model based on a convolutional neural network, to obtain the maturity states and probabilities of the global and local images;
S7: outputting the tobacco maturity state and the tobacco maturity state probability according to the maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image.
2. The image analysis-based tobacco maturity state identification method according to claim 1, wherein the step S3 specifically comprises the steps of:
S31: reading the baking image;
S32: converting the baking image from RGB color space to HSV color space, and performing binarization segmentation using the H-channel image as the segmentation image to obtain a binarized segmentation image;
S33: mapping the binarized segmentation image onto the baking image, and preprocessing the foreground image of the tobacco leaf region in the binarized segmentation image to enhance the detail description of the tobacco leaf region;
the foreground image preprocessing specifically comprises the following steps:
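The preprocessing formula itself appears only as an image in the original publication and is not reproduced in this text; judging from the symbol definitions that follow, a plausible reconstruction is the per-pixel subtraction

$$I'_c(i, j) = I_c(i, j) - \min\bigl(I_R(i, j),\ I_G(i, j),\ I_B(i, j)\bigr), \qquad c \in \{R, G, B\},$$

applied to every pixel (i, j) of the N × M foreground region.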
wherein I (I, j) is a pixel point in the intercepted tobacco RGB image, minI (I, j) is the RGB three-channel minimum value of the pixel at the (I, j) coordinate point, N is the pixel number of each line, M is the pixel number of each column, and the minimum value in the RGB three channels is subtracted from each channel of the I (I, j) pixel point;
S34: cropping the tobacco leaf region from the image processed in step S33 to obtain the tobacco leaf global image.
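A minimal sketch of steps S31 to S34 with OpenCV and NumPy is given below; the fixed H-channel threshold and the thresholding direction are assumptions, since claim 2 does not state how the binarization threshold is chosen.

```python
# Minimal sketch of the global-image preprocessing of claim 2 (S31-S34).
# The H-channel threshold of 90 and the thresholding direction are
# placeholders; the claim does not specify how binarization is performed.
import cv2
import numpy as np

def extract_global_image(path: str, h_threshold: int = 90) -> np.ndarray:
    bgr = cv2.imread(path)                       # S31: read the baking image
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # S32: RGB -> HSV
    h = hsv[:, :, 0].copy()
    _, mask = cv2.threshold(h, h_threshold, 255, cv2.THRESH_BINARY)
    fg = bgr.astype(np.int32)                    # S33: foreground preprocessing
    fg -= fg.min(axis=2, keepdims=True)          # subtract per-pixel channel minimum
    fg = np.where(mask[:, :, None] > 0, fg, 0).astype(np.uint8)
    ys, xs = np.nonzero(mask)                    # S34: crop the leaf region
    if ys.size == 0:
        raise ValueError("no tobacco leaf foreground found")
    return fg[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```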
3. The image analysis-based tobacco maturity state identification method according to claim 1, wherein the specific steps of obtaining the tobacco maturity state identification model in the step S6 are as follows:
firstly, acquiring tobacco leaf global images and local images through a camera as the tobacco maturity state recognition training sample set;
secondly, classifying and preprocessing the images of the tobacco maturity state recognition training sample set, wherein the images are classified by assigning category labels from 0 to 9 in sequence according to the different tobacco maturity states shown in the images;
thirdly, building the convolutional neural network of the training model, wherein the building comprises the following steps:
(1) inputting the classified and preprocessed RGB images of the tobacco maturity state recognition training sample set into the training model convolutional neural network, and forming, through two convolution group modules, a feature image whose resolution is 16 times lower than that of the input RGB image;
(2) performing average pooling at different scales on the 16-times-reduced feature image to form feature images of different scales, and performing a single convolution on each pooled feature image to obtain context feature images of different scales;
(3) up-sampling each context feature image and concatenating them along the channel dimension to form a context feature image group;
(4) performing channel-weighted selection on the feature image group through a channel attention module to select the feature channels with stronger descriptive capacity, and reducing the number of channels of the combined feature images by a factor of 5 through a single convolution with a 1 × 1 kernel to form the final context-descriptive feature image;
(5) reducing the feature image by a factor of 2 with a convolution module, processing the reduced feature image with a fully connected module to form a one-dimensional vector whose length equals the number of tobacco maturity classes, and applying a Softmax function to obtain the tobacco maturity state and its probability;
and fourthly, iteratively training the convolutional neural network of the previous step on the classified and preprocessed tobacco maturity state recognition training sample set to obtain the tobacco maturity state recognition model.
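Under the same caveats as the segmentation sketch, one possible PyTorch reading of the recognition network of claim 3 is given below; channel counts, pooling scales and the attention internals are assumptions, and the fully connected head operates on a globally pooled feature vector rather than on the full feature map.

```python
# Illustrative PyTorch reading of the maturity recognition network of claim 3.
# Channel counts and pooling scales are assumptions made for this sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TobaccoMaturityNet(nn.Module):
    def __init__(self, num_classes: int = 10, pool_scales=(1, 2, 3, 6)):
        super().__init__()
        # (1) two convolution groups: resolution / 16 overall
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.pool_scales = pool_scales
        # (2) single convolution per pooling scale -> multi-scale context features
        self.context_convs = nn.ModuleList(
            [nn.Conv2d(256, 64, 1) for _ in pool_scales])
        ctx_channels = 64 * len(pool_scales)
        # (4) channel attention gate, then 5x channel reduction with a 1x1 conv
        self.gate = nn.Sequential(
            nn.Linear(ctx_channels, ctx_channels // 8), nn.ReLU(inplace=True),
            nn.Linear(ctx_channels // 8, ctx_channels), nn.Sigmoid())
        self.reduce = nn.Conv2d(ctx_channels, ctx_channels // 5, 1)
        # (5) 2x reduction, then a fully connected head over the maturity classes
        self.down = nn.Conv2d(ctx_channels // 5, ctx_channels // 5, 3,
                              stride=2, padding=1)
        self.head = nn.Linear(ctx_channels // 5, num_classes)

    def forward(self, x):
        feat = self.backbone(x)                              # 1/16 resolution
        ctx = []
        for scale, conv in zip(self.pool_scales, self.context_convs):
            p = conv(F.adaptive_avg_pool2d(feat, scale))     # (2) pool + conv
            ctx.append(F.interpolate(p, size=feat.shape[2:], # (3) up-sample
                                     mode="bilinear", align_corners=False))
        ctx = torch.cat(ctx, dim=1)                          # channel concatenation
        w = self.gate(ctx.mean(dim=(2, 3)))                  # (4) channel weighting
        ctx = self.reduce(ctx * w[:, :, None, None])
        pooled = self.down(ctx).mean(dim=(2, 3))             # (5) reduce, then pool
        return F.softmax(self.head(pooled), dim=1)           # class probabilities
```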
4. The image-analysis-based tobacco maturity state identification method according to claim 1, wherein in step S7, when the maturity states of the tobacco leaf global image and the tobacco leaf local image obtained in step S6 are consistent, the consistent maturity state is output as the tobacco maturity state, and the larger of the two probabilities is output as the tobacco maturity state probability.
5. The image-analysis-based tobacco maturity state identification method according to claim 1, wherein in step S7, when the maturity states of the tobacco leaf global image and the tobacco leaf local image obtained in step S6 are not consistent, the probabilities of the tobacco leaf global image and the tobacco leaf local image are weighted, the maturity state corresponding to the maximum weighted probability is output as the tobacco maturity state, and the probability corresponding to the maximum weighted probability is output as the tobacco maturity state probability;
the weighting processing and judging formula is as follows:
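The formula referenced here is likewise rendered as an image in the original publication; a plausible reconstruction consistent with the symbol definitions below and with the wording of claim 5 is

$$C = \begin{cases} C_j, & W_j P_j \ge W_q P_q,\\ C_q, & W_j P_j < W_q P_q, \end{cases}$$

with the output probability taken from the branch that attains the larger weighted value.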
wherein C is the final tobacco maturity state, Cj is the maturity state of the tobacco leaf local image, Cq is the maturity state of the tobacco leaf global image, Pj is the probability of the tobacco leaf local image, Pq is the probability of the tobacco leaf global image, Wj is the probability weight coefficient of the tobacco leaf local image, and Wq is the probability weight coefficient of the tobacco leaf global image.
6. The image-analysis-based tobacco maturity state identification method according to claim 5, wherein the tobacco leaf local image probability weight coefficient Wj is 0.6, and the tobacco leaf global image probability weight coefficient Wq is 0.4.
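Putting claims 4 to 6 together, the output fusion can be sketched as below; whether the reported probability is the weighted value itself or the original unweighted probability of the winning branch is ambiguous in the text, so the weighted value is returned here as an assumption.

```python
# Sketch of the output fusion of claims 4-6 (weight coefficients from claim 6).
W_LOCAL, W_GLOBAL = 0.6, 0.4

def fuse(local_state: str, local_prob: float,
         global_state: str, global_prob: float):
    """Return (final maturity state, probability) from the two predictions."""
    if local_state == global_state:              # claim 4: states agree
        return local_state, max(local_prob, global_prob)
    weighted_local = W_LOCAL * local_prob        # claim 5: weighted comparison
    weighted_global = W_GLOBAL * global_prob
    if weighted_local >= weighted_global:
        return local_state, weighted_local
    return global_state, weighted_global
```

For example, a local prediction of the middle color-fixing stage at probability 0.7 and a global prediction of the later color-fixing stage at probability 0.9 would give weighted values 0.42 and 0.36, so the local prediction would be output.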
7. An image-analysis-based tobacco leaf maturity state recognition device, comprising:
a data acquisition module: acquires data comprising a baking image, the dry-bulb and wet-bulb temperatures of the tobacco curing barn, and the tobacco baking duration;
a tobacco maturity primary judgment module: judges the tobacco maturity state of the curing barn according to the dry-bulb and wet-bulb temperatures of the curing barn and the tobacco baking duration, wherein the tobacco maturity state comprises: early yellowing stage, middle yellowing stage, later yellowing stage, early color-fixing stage, middle color-fixing stage, later color-fixing stage, early dry-tendon stage, middle dry-tendon stage and later dry-tendon stage;
a tobacco leaf global image preprocessing module: preprocesses the baking image and extracts the tobacco leaf region from it to obtain a tobacco leaf global image;
a tobacco leaf local image segmentation module: processes the tobacco leaf global image with a semantic segmentation model based on a fully convolutional neural network, and extracts local images of the tobacco leaf from the global image;
the specific steps for obtaining the segmentation model are as follows:
firstly, acquiring tobacco leaf images through a camera as a tobacco leaf segmentation training sample set;
secondly, performing image labeling and preprocessing on the tobacco leaf segmentation training sample set, wherein the labeling assigns the value 0 to the pixels of all background (non-tobacco) parts and four distinct values to the pixels of the leaf-ear, main-vein, branch-vein and leaf-tip local regions of the tobacco leaf, so as to form the Ground-Truth segmentation images required for training the segmentation model convolutional neural network;
thirdly, building the convolutional neural network of the training model, wherein the building comprises the following steps:
(1) inputting the tobacco leaf segmentation training sample set images into the training model convolutional neural network, and performing a convolution operation through a first convolution group module to obtain a convolution feature image whose resolution is 4 times lower than that of the input tobacco leaf image;
(2) reducing the resolution by a further factor of 4 through a second convolution group module, and performing average pooling at different scales on the feature image output by the second convolution group module to form feature images of different scales;
(3) performing a single convolution on each pooled feature image to obtain context feature images of different scales, up-sampling the context feature images, and concatenating them along the channel dimension to form a context feature image group;
(4) performing channel-weighted selection on the context feature image group through a channel attention module, and compressing the channels of the weighted context feature image group by a factor of 5 through a single convolution operation to form a channel-compressed context feature image;
(5) fusing the spatial feature image carrying spatial description information output by the first convolution group module with the channel-compressed context feature image, performing channel and spatial selection with a spatial-and-channel attention module to form a feature image with both context and spatial description, and outputting, through image up-sampling, a segmentation mask image of the same size as the input image;
fourthly, iteratively training the convolutional neural network of the previous step on the labeled and preprocessed tobacco leaf segmentation training sample set to obtain the segmentation model;
a tobacco maturity image analysis module: determines, according to the primary tobacco maturity judgment result, the images to be analyzed for maturity judgment, comprising the tobacco leaf global image and the tobacco leaf local images;
for the early yellowing stage, the middle yellowing stage and the later yellowing stage, analyzing the changes of the leaf-tip part and the whole tobacco leaf region, and taking the leaf-tip local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
for the early color-fixing stage, analyzing the changes of the branch-vein part and the whole tobacco leaf region, and taking the branch-vein local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
for the middle color-fixing stage and the later color-fixing stage, analyzing the changes of the main-vein part and the whole tobacco leaf region, and taking the main-vein local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
for the early, middle and later dry-tendon stages, analyzing the changes of the leaf-ear part and the whole tobacco leaf region, and taking the leaf-ear local image and the tobacco leaf global image as the images to be analyzed for maturity judgment;
a tobacco maturity identification module: identifies the tobacco leaf global image and the tobacco leaf local image with a tobacco maturity state identification model based on a convolutional neural network, to obtain the maturity states and probabilities of the global and local images;
a tobacco maturity output module: outputs the tobacco maturity state and the tobacco maturity state probability according to the maturity states and probabilities of the tobacco leaf global image and the tobacco leaf local image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110205039.7A CN112949704B (en) | 2021-02-24 | 2021-02-24 | Tobacco leaf maturity state identification method and device based on image analysis |
CN202111312939.8A CN113919442B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state identification method based on convolutional neural network |
CN202111314758.9A CN113919443B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state probability calculation method based on image analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110205039.7A CN112949704B (en) | 2021-02-24 | 2021-02-24 | Tobacco leaf maturity state identification method and device based on image analysis |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111314758.9A Division CN113919443B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state probability calculation method based on image analysis |
CN202111312939.8A Division CN113919442B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state identification method based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112949704A CN112949704A (en) | 2021-06-11 |
CN112949704B true CN112949704B (en) | 2021-11-02 |
Family
ID=76245849
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110205039.7A Active CN112949704B (en) | 2021-02-24 | 2021-02-24 | Tobacco leaf maturity state identification method and device based on image analysis |
CN202111314758.9A Active CN113919443B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state probability calculation method based on image analysis |
CN202111312939.8A Active CN113919442B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state identification method based on convolutional neural network |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111314758.9A Active CN113919443B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state probability calculation method based on image analysis |
CN202111312939.8A Active CN113919442B (en) | 2021-02-24 | 2021-02-24 | Tobacco maturity state identification method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN112949704B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113793314A (en) * | 2021-09-13 | 2021-12-14 | 河南丹圣源农业开发有限公司 | Pomegranate maturity identification equipment and use method |
CN114397297B (en) * | 2022-01-19 | 2024-01-23 | 河南中烟工业有限责任公司 | Rapid nondestructive testing method for starch content of flue-cured tobacco |
CN114609135A (en) * | 2022-02-24 | 2022-06-10 | 河南中烟工业有限责任公司 | BP neural network-based flue-cured tobacco leaf field maturity mobile phone intelligent discrimination method |
CN114931230B (en) * | 2022-05-13 | 2023-10-27 | 中国烟草总公司郑州烟草研究院 | Process execution index analysis characterization method for tobacco leaf baking process |
CN114913100B (en) * | 2022-05-16 | 2023-09-15 | 中国烟草总公司四川省公司 | Tobacco leaf baking degree detection method based on image analysis |
CN115019090A (en) * | 2022-05-30 | 2022-09-06 | 河南中烟工业有限责任公司 | Method for detecting interlayer paper board in tobacco leaf packaging box based on neural network |
CN114862858B (en) * | 2022-07-08 | 2022-11-11 | 湖北省烟草科学研究院 | Cigar harvesting maturity identification method and system based on ensemble learning |
CN116434045B (en) * | 2023-03-07 | 2024-06-14 | 中国农业科学院烟草研究所(中国烟草总公司青州烟草研究所) | Intelligent identification method for tobacco leaf baking stage |
CN117893773B (en) * | 2024-01-18 | 2024-10-11 | 中国农业科学院烟草研究所(中国烟草总公司青州烟草研究所) | Tobacco leaf baking temperature and humidity key point judging method, medium and system |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2345357C1 (en) * | 2007-07-16 | 2009-01-27 | Государственное научное учреждение Всероссийский научно-исследовательский институт табака, махорки и табачных изделий Россельхозакадемии (ГНУ ВНИИТТИ Россельхозакадемии) | Method of determination of nicotine content in tobacco |
CN101762583B (en) * | 2009-12-16 | 2011-07-27 | 中国烟草总公司郑州烟草研究院 | Method for characterizing color of characteristic tobacco by place of origin |
CN103919258A (en) * | 2013-03-02 | 2014-07-16 | 重庆大学 | Densification tobacco flue-cure dry-wet bulb temperature automatic control technique based on tobacco image processing |
CN105069810A (en) * | 2015-08-31 | 2015-11-18 | 中国烟草总公司广东省公司 | Field tobacco leaf maturity quantitative assessment method |
GB201611596D0 (en) * | 2016-07-04 | 2016-08-17 | British American Tobacco Investments Ltd | Apparatus and method for classifying a tobacco sample into one of a predefined set of taste categories |
WO2019085369A1 (en) * | 2017-10-31 | 2019-05-09 | 高大启 | Electronic nose instrument and sensory quality evaluation method for tobacco and tobacco product |
CN108429819A (en) * | 2018-04-20 | 2018-08-21 | 云南佳叶现代农业发展有限公司 | Artificial intelligence flue-cured tobacco system and method based on Internet of Things |
CN109540894A (en) * | 2018-12-17 | 2019-03-29 | 云南省烟草公司红河州公司 | A kind of lossless rapid detection method of cured tobacco leaf maturity |
CN109886500A (en) * | 2019-03-05 | 2019-06-14 | 北京百度网讯科技有限公司 | Method and apparatus for determining processing technology information |
CN110646425B (en) * | 2019-09-12 | 2022-01-28 | 厦门海晟融创信息技术有限公司 | Tobacco leaf online auxiliary grading method and system |
CN110807760B (en) * | 2019-09-16 | 2022-04-08 | 北京农业信息技术研究中心 | Tobacco leaf grading method and system |
CN110705655A (en) * | 2019-11-05 | 2020-01-17 | 云南省烟草农业科学研究院 | Tobacco leaf classification method based on coupling of spectrum and machine vision |
CN111274860B (en) * | 2019-11-08 | 2023-08-22 | 杭州安脉盛智能技术有限公司 | Recognition method for online automatic tobacco grade sorting based on machine vision |
CN111079784B (en) * | 2019-11-11 | 2023-06-02 | 河南农业大学 | Flue-cured tobacco baking stage identification method in baking process based on convolutional neural network |
CN111860639B (en) * | 2020-07-17 | 2022-09-27 | 中国农业科学院烟草研究所 | System and method for judging quantized flue-cured tobacco leaf curing characteristics |
CN111915580A (en) * | 2020-07-27 | 2020-11-10 | 深圳市识农智能科技有限公司 | Tobacco leaf grading method, system, terminal equipment and storage medium |
CN112163527B (en) * | 2020-09-29 | 2022-06-14 | 华中科技大学 | Fusion model-based tobacco leaf baking state identification method, device and system |
- 2021-02-24 CN CN202110205039.7A patent/CN112949704B/en active Active
- 2021-02-24 CN CN202111314758.9A patent/CN113919443B/en active Active
- 2021-02-24 CN CN202111312939.8A patent/CN113919442B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113919442B (en) | 2022-05-27 |
CN113919442A (en) | 2022-01-11 |
CN113919443A (en) | 2022-01-11 |
CN112949704A (en) | 2021-06-11 |
CN113919443B (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112949704B (en) | Tobacco leaf maturity state identification method and device based on image analysis | |
CN110148120B (en) | Intelligent disease identification method and system based on CNN and transfer learning | |
CN111553240B (en) | Corn disease condition grading method and system and computer equipment | |
CN114359727A (en) | Tea disease identification method and system based on lightweight optimization Yolo v4 | |
CN113435254A (en) | Sentinel second image-based farmland deep learning extraction method | |
CN111539293A (en) | Fruit tree disease diagnosis method and system | |
Rai et al. | Classification of diseased cotton leaves and plants using improved deep convolutional neural network | |
CN113469233A (en) | Tobacco leaf automatic grading method and system based on deep learning | |
CN117934957A (en) | Garbage classification and identification method based on capsule network | |
CN117253192A (en) | Intelligent system and method for silkworm breeding | |
CN110363240B (en) | Medical image classification method and system | |
CN116245855B (en) | Crop variety identification method, device, equipment and storage medium | |
CN116206208A (en) | Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence | |
CN112613521B (en) | Multilevel data analysis system and method based on data conversion | |
Murthi et al. | A semi-automated system for smart harvesting of tea leaves | |
CN114201999A (en) | Abnormal account identification method, system, computing device and storage medium | |
CN112200222A (en) | Model training apparatus | |
CN118675203B (en) | Intelligent recognition method and system for pangolin scales | |
CN113011289B (en) | Improved handwriting signature recognition method of capsule neural network | |
CN117953349B (en) | Method, device, equipment and storage medium for detecting plant diseases and insect pests of traditional Chinese medicinal materials | |
CN117593591B (en) | Tongue picture classification method based on medical image segmentation | |
kaur et al. | Detection of Plant Leaf Disease Using Image Processing and Deep Learning Technique—A Review | |
Jayanthi et al. | Performance Evaluation of the Infected Rice Leaves Using RCNN | |
CN118864872A (en) | Control system and method of intelligent organic fertilizer applicator | |
CN118781345A (en) | Automatic detection and segmentation method for solar black seeds |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||