CN117893773A - Tobacco leaf baking temperature and humidity key point judging method, medium and system - Google Patents

Tobacco leaf baking temperature and humidity key point judging method, medium and system

Info

Publication number
CN117893773A
CN117893773A (application number CN202410075612.0A)
Authority
CN
China
Prior art keywords
features
feature
enhancement
mapping
module
Prior art date
Legal status
Granted
Application number
CN202410075612.0A
Other languages
Chinese (zh)
Other versions
CN117893773B (en)
Inventor
代英鹏
赵泮真
王松峰
孟令峰
孙福山
任杰
Current Assignee
Qingzhou Tobacco Research Institute of China National Tobacco Corp of Institute of Tobacco Research of CAAS
Original Assignee
Qingzhou Tobacco Research Institute of China National Tobacco Corp of Institute of Tobacco Research of CAAS
Priority date
Filing date
Publication date
Application filed by Qingzhou Tobacco Research Institute of China National Tobacco Corp of Institute of Tobacco Research of CAAS
Priority to CN202410075612.0A
Publication of CN117893773A
Application granted
Publication of CN117893773B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Arrangements using pattern recognition or machine learning, using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, medium and system for judging key points of tobacco leaf baking temperature and humidity, belonging to the technical field of intelligent tobacco leaf baking. The multi-scale enhancement layer method for judging tobacco leaf baking temperature and humidity key points comprises the following steps: acquiring a cured tobacco leaf image; filtering the cured tobacco leaf image with a feature mapping module to extract initial mapping features; extracting detail features from the cured tobacco leaf image with an enhancement mapping feature module; extracting high-level semantic features from the cured tobacco leaf image with the enhancement mapping feature module; inputting the initial mapping features, the detail features and the semantic features into a feature fusion module; extracting context features from the high-level semantic features; generating multi-scale features from the context features, the detail features and the enhancement mapping features; and judging the temperature and humidity key points of tobacco leaf baking in real time. The method can solve the problem of low judgment accuracy of models in the curing barn and realizes rapid and accurate judgment of the tobacco curing temperature and humidity key points.

Description

Tobacco leaf baking temperature and humidity key point judging method, medium and system
Technical Field
The invention belongs to the technical field of tobacco leaf baking, and particularly relates to a method, medium and system for judging key points of tobacco leaf baking temperature and humidity.
Background
Tobacco production generally involves links such as tobacco planting, tobacco harvesting, tobacco baking and post-baking processing, among which tobacco baking is the key link that determines the formation of flue-cured tobacco quality; the quality and yield of the baked tobacco directly affect the development of the cigarette industry and the improvement of economic benefits. During tobacco baking, the corresponding temperature and humidity key points must be adjusted in time according to the baking state of the tobacco leaves in order to obtain high-quality baked tobacco. Because the environment inside the curing barn is complex, the current regulation of the temperature and humidity key points mainly relies on manual judgment by visually observing the change of leaf color; this approach is highly subjective, yields uneven identification quality, and suffers a certain lag, so the curing quality of the tobacco cannot be guaranteed. Therefore, judging the baking temperature and humidity key points rapidly, accurately and intelligently, and matching the corresponding baking process in time, is an important problem to be solved for intelligent tobacco baking.
In recent years, neural network and deep learning technology have developed rapidly and are widely used in many fields. Stochastic neural networks such as the broad learning system offer good generalization performance, fast convergence and high learning efficiency on small and medium-scale data or in simple recognition environments. However, under the complex tasks and complex environmental conditions of tobacco baking, their simple structure often limits the feature expression of the model, seriously affecting the recognition speed and accuracy during the baking stages.
Disclosure of Invention
In view of the above, the invention provides a method, medium and system for judging tobacco leaf baking temperature and humidity key points, which can cope with the complex environment inside the curing barn and the large volume of baking image data, overcome the low judgment accuracy of existing models, and realize rapid and accurate judgment of the tobacco baking temperature and humidity key points.
The invention is realized in the following way:
the first aspect of the invention provides a method for distinguishing key points of baking temperature and humidity of tobacco leaves, which comprises the following steps:
S10, acquiring a cured tobacco leaf image;
S20, filtering the cured tobacco leaf image by using three basic residual error modules in the feature mapping module, and extracting initial mapping features;
S30, extracting detail features, including color and texture details, from the cured tobacco leaf image by using a bottom detail enhancement layer in an enhancement mapping feature module;
S40, extracting high-level semantic features from the cured tobacco leaf images by using a high-level semantic enhancement layer in the enhancement mapping feature module;
S50, inputting the initial mapping feature, the detail feature and the semantic feature into a feature fusion module;
S60, extracting context features of the high-level semantic features through pooling and convolution;
S70, performing operations such as up-sampling and convolution on the context feature, the detail feature and the enhancement mapping feature to generate a multi-scale feature, and performing feature accumulation and filtering operations on the multi-scale feature to generate a fusion feature;
S80, inputting the fusion feature into a classifier, and judging the temperature and humidity key points of tobacco baking according to the stage of tobacco baking.
On the basis of the technical scheme, the method for judging the key points of the baking temperature and humidity of the tobacco leaves can be improved as follows:
the specific step of S10 includes:
Setting an image acquisition device to align tobacco leaves in the curing barn;
determining the time interval of image acquisition according to the baking period;
At a preset time point, the image acquisition device automatically shoots tobacco leaves in the curing barn to acquire cured tobacco leaf images;
The captured image is stored in a storage device for later use.
The beneficial effect of the above improvement is: image data of the tobacco leaves during the baking process is acquired as input for the subsequent processing.
Further, the specific step of S20 includes:
inputting the obtained tobacco leaf image data into a first basic residual error module of the feature mapping module;
The first residual error module carries out filtering processing on the image and outputs a first transition characteristic;
continuously inputting the first transition characteristic into a second residual error module, filtering, and outputting the second transition characteristic;
And finally, inputting the second transition characteristic into a third residual error module, and outputting the mapping characteristic after filtering.
The beneficial effect of the above improvement is: the cured tobacco leaf image is processed by the feature mapping module, and the initial mapping features are extracted.
Further, the specific step of S30 includes:
the bottom layer detail enhancement layer comprises a plurality of groups of enhancement blocks, and each enhancement block comprises a plurality of residual error modules;
Inputting the mapping characteristic obtained in the step S20 into a first enhancement block, and outputting a first detail enhancement characteristic through filtering operation of a residual error module;
Inputting the first detail enhancement feature into a second enhancement block, filtering again, and outputting the second detail enhancement feature;
and finally, inputting the second detail enhancement feature into a third enhancement block, and outputting a third detail enhancement feature.
The beneficial effect of the above improvement is: feature enhancement is performed on the tobacco leaf image by the bottom detail enhancement layer, and detail features such as color and texture are extracted.
Further, the specific step of S40 includes:
The high-level semantic enhancement layer comprises a plurality of groups of enhancement blocks, and each enhancement block consists of a plurality of residual error modules;
inputting the mapping features obtained in the step S20 into a first enhancement block, downsampling before the enhancement block, and filtering by a residual error module to output a first semantic feature;
Continuously inputting the first semantic features into a second enhancement block, performing downsampling and filtering, and outputting second semantic features;
and the last enhancement block directly filters the second semantic feature and outputs the final semantic feature.
The beneficial effect of the above improvement is: feature enhancement is performed on the tobacco leaf image by the advanced semantic enhancement layer, and higher-level semantic features are extracted.
Further, the specific step of S60 includes:
in the feature fusion module, selecting the advanced semantic features S2 obtained in the step S40 as input;
Carrying out global average pooling on the S2;
Convolving the pooled features and outputting filtered context features;
The resolution of the context feature is then restored to the same size as S2 by upsampling.
The beneficial effect of the above improvement is: global context information is extracted from the high-level semantic features, laying the foundation for the subsequent multi-scale fusion.
Further, the specific step of S70 includes:
In the feature fusion module, fusing the context features obtained in the step S60, the detail features D1 in the step S30 and the semantic features S2 in the step S40;
up-sampling and convolution operation are carried out on the fused features to generate new features, and the process is repeated twice to obtain multi-scale features;
And accumulating and filtering the multi-scale features to obtain fusion features.
The beneficial effect of the above improvement is: different features are fused to generate a brand-new multi-scale feature.
The second aspect of the present invention provides a computer readable storage medium, where the computer readable storage medium stores program instructions, and the program instructions are used to execute the above-mentioned method for determining the key point of baking temperature and humidity of tobacco.
The third aspect of the invention provides a tobacco flue-curing temperature and humidity key point distinguishing system, which comprises the computer-readable storage medium.
Compared with the prior art, the method, the medium and the system for judging the key points of the baking temperature and humidity of the tobacco have the beneficial effects that:
1. Establishing a trainable multi-scale enhancement layer depth network architecture, wherein the trainable network focuses on the representation of multi-scale information in different enhancement layers, and enhances the accuracy of a model for identifying key points of tobacco leaf baking temperature and humidity;
2. The multi-scale enhancement layer depth network architecture is configured with adjustable weights, which removes the limitation imposed by the data scale and enables large-scale data to be processed effectively, so that high accuracy and fast inference are achieved in distinguishing the tobacco baking temperature and humidity key points.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for discriminating key points of tobacco leaf baking temperature and humidity;
FIG. 2 is a flow chart of a feature module algorithm of a method for discriminating key points of temperature and humidity of tobacco baking;
FIG. 3 is a flowchart of an enhanced mapping feature algorithm of a tobacco flue-curing temperature and humidity key point discrimination method;
FIG. 4 is a flowchart of a feature fusion module algorithm of a method for discriminating key points of temperature and humidity of tobacco baking;
fig. 5 is a flowchart of an algorithm module of a method for discriminating key points of the baking temperature and humidity of tobacco leaves.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Example 1
As shown in fig. 1, a first embodiment of a method for determining key points of temperature and humidity for baking tobacco leaves according to a first aspect of the present invention includes the following steps:
S10, acquiring a cured tobacco leaf image;
S20, filtering the cured tobacco leaf image by using three basic residual error modules in the feature mapping module, and extracting initial mapping features;
S30, extracting detail features, including color and texture details, from the cured tobacco leaf image by using a bottom detail enhancement layer in the enhancement mapping feature module;
S40, extracting high-level semantic features from the cured tobacco leaf images by using the high-level semantic enhancement layers in the enhancement mapping feature module;
S50, inputting the initial mapping features, the detail features and the semantic features into a feature fusion module;
S60, extracting context features of the high-level semantic features through pooling and convolution;
S70, performing operations such as up-sampling and convolution on the context features, the detail features and the enhancement mapping features to generate multi-scale features, and performing feature accumulation and filtering operations on the multi-scale features to generate fusion features;
S80, inputting the fusion features into a classifier, and judging the temperature and humidity key points of tobacco baking according to the stage of the tobacco baking.
S10, acquiring a cured tobacco leaf image:
The purpose of this step is to obtain image data of the tobacco leaves during the baking process as input for the subsequent processing. The specific implementation is as follows: an image acquisition device is set up and aimed at the tobacco leaves in the curing barn; the time interval of image acquisition is determined according to the baking period, for example an image every 5 minutes; at the predetermined time points, the image acquisition device automatically shoots the tobacco leaves in the curing barn and acquires the cured tobacco leaf images; the captured images are stored in a storage device for later use. The collected tobacco leaf images should clearly reflect the color, shape and other characteristics of the tobacco leaves to ensure the accuracy of the subsequent processing results. The image acquisition hardware used in this step can be a common digital camera or an industrial camera, or a computer or embedded device with a camera. Overall, the acquisition mode of this step ensures that image data reflecting the baking state of the tobacco leaves is obtained.
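As a concrete illustration of this acquisition loop, the following sketch (Python with OpenCV) captures a frame at fixed intervals and writes it to disk; the camera index, the 5-minute interval and the output directory are assumptions for the example, not requirements of the method:

```python
# Illustrative sketch of the timed acquisition loop described above.
# The camera index, the 5-minute interval and the output directory are
# assumptions for this example, not fixed by the method.
import time
from pathlib import Path

import cv2  # OpenCV, assumed available for camera access

CAPTURE_INTERVAL_S = 5 * 60           # acquisition interval chosen per baking period
OUT_DIR = Path("curing_barn_images")  # the storage device for later use
OUT_DIR.mkdir(exist_ok=True)

def capture_loop(camera_index: int = 0, num_shots: int = 10) -> None:
    cam = cv2.VideoCapture(camera_index)  # the image acquisition device
    try:
        for j in range(num_shots):        # predetermined acquisition time points
            ok, frame = cam.read()
            if ok:
                cv2.imwrite(str(OUT_DIR / f"leaf_{j:04d}.jpg"), frame)
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cam.release()
```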
S20, extracting initial mapping features by using a feature mapping module:
The purpose of this step is to process the cured tobacco leaf image with the feature mapping module and extract the initial mapping features. The specific implementation is as follows: the feature mapping module comprises 3 basic residual modules, and the obtained tobacco leaf image data is input into the first residual module; the first residual module filters the image and outputs a first transition feature; the first transition feature is then input into the second residual module, filtered, and a second transition feature is output; finally, the second transition feature is input into the third residual module, and the mapping feature is output after filtering. Each residual module uses a downsampling operation with a stride of 2 to eliminate redundant information and obtain the main local mapping features. The filtering calculation in each residual module follows the residual block structure of a residual network, and residual learning improves the model fitting capacity. The three residual modules are cascaded to form the feature mapping module, which extracts the initial mapping features of the tobacco leaf image and provides the basis for subsequent feature enhancement. The residual structure used in this step can draw on the design ideas of classical network models such as VGGNet and GoogLeNet.
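The patent does not fix a concrete implementation of these residual modules; the sketch below (PyTorch assumed) shows one plausible realization of a basic residual module with stride-2 downsampling and the three-module cascade, with illustrative channel widths:

```python
# Sketch of a basic residual module with stride-2 downsampling and the
# three-module cascade (PyTorch assumed; channel widths are illustrative).
import torch
import torch.nn as nn

class BasicResidualModule(nn.Module):
    """Residual block: two 3x3 convolutions plus a projection shortcut."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip path matches the shape after downsampling.
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

# Feature mapping module: three cascaded residual modules, each with a
# stride-2 downsampling, so the mapping feature sits at 1/8 resolution.
feature_mapping = nn.Sequential(
    BasicResidualModule(3, 32),
    BasicResidualModule(32, 64),
    BasicResidualModule(64, 128),
)
```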
S30 extracts detail features using the bottom detail enhancement layer:
The purpose of this step is to perform feature enhancement on the tobacco leaf image with the bottom detail enhancement layer and extract detail features such as color and texture. The specific implementation is as follows: the bottom detail enhancement layer comprises 3 groups of enhancement blocks, each containing several residual modules; the mapping feature obtained in step S20 is input into the first enhancement block, and a first detail enhancement feature is output after the filtering operations of its residual modules; the first detail enhancement feature is input into the second enhancement block, filtered again, and a second detail enhancement feature is output; finally, the second detail enhancement feature is input into the third enhancement block, and a third detail enhancement feature is output. Since each enhancement block works at 1/8 resolution, a high-resolution feature map can be provided. Repeated feature enhancement by the residual modules extracts the tobacco leaf detail information. The enhancement block structure proposed in this step is similar to the residual block structure in ResNet and strengthens the expressive power of the features (a combined sketch of this layer and the semantic layer of S40 follows after the next step).
S40 extracts semantic features using the advanced semantic enhancement layer:
The purpose of this step is to perform feature enhancement on the tobacco leaf image with the advanced semantic enhancement layer and extract higher-level semantic features. The specific implementation is as follows: the advanced semantic enhancement layer comprises 3 groups of enhancement blocks, each also containing several residual modules; the mapping feature obtained in step S20 is input into the first enhancement block, downsampled before the block, then filtered by its residual modules, and a first semantic feature is output; the first semantic feature is input into the second enhancement block, downsampled and filtered, and a second semantic feature is output; the last enhancement block directly filters the second semantic feature and outputs the final semantic feature. The output semantic feature has a 1/32 resolution relative to the original image and contains high-level abstract features. The downsampling operations in this step enlarge the receptive field, improve the semantic feature extraction capacity, and enhance the feature expression of the residual structure.
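One way to realize the two enhancement layers of S30 and S40, reusing the BasicResidualModule sketch above; the block counts follow the description (three enhancement blocks per layer, the first two with two residual modules and the last with one), while the channel width and the pooling operator are assumptions:

```python
# Sketch of the two enhancement layers (PyTorch assumed), reusing the
# BasicResidualModule defined in the previous sketch. Block counts follow
# the text; the channel width and the pooling operator are assumptions.
import torch.nn as nn

def enhancement_block(ch: int, n_modules: int) -> nn.Sequential:
    # An enhancement block is a small stack of residual modules (stride 1).
    return nn.Sequential(*[BasicResidualModule(ch, ch, stride=1)
                           for _ in range(n_modules)])

# Bottom detail enhancement layer: three blocks at a fixed 1/8 resolution,
# yielding the detail enhancement features D1, D2, D3.
detail_layer = nn.Sequential(
    enhancement_block(128, 2),
    enhancement_block(128, 2),
    enhancement_block(128, 1),
)

# Advanced semantic enhancement layer: 1/2 downsampling before each of the
# first two blocks (1/8 -> 1/16 -> 1/32), yielding S1 and S2.
semantic_layer = nn.ModuleList([
    nn.Sequential(nn.AvgPool2d(2), enhancement_block(128, 2)),
    nn.Sequential(nn.AvgPool2d(2), enhancement_block(128, 2)),
    enhancement_block(128, 1),
])
```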
S50, inputting the features into a feature fusion module:
The purpose of this step is to input the initial mapping features, the bottom detail features and the high-level semantic features extracted in steps S20-S40 into the feature fusion module. The specific implementation is as follows: the three types of features are input into the feature fusion module in their respective time order; the feature fusion module needs to have input ends matched with the three types of features, and each feature is transmitted to the corresponding input end according to the connection mode. This step mainly realizes the integration of the features and lays the foundation for the subsequent feature fusion processing.
S60 extracting context information of semantic features
The purpose of this step is to extract context information from the semantic features. The specific implementation is as follows: in the feature fusion module, the advanced semantic feature S2 obtained in step S40 is selected as input; global average pooling is carried out on S2 to integrate global information, a convolution operation is then applied, and the filtered context feature is output; the resolution of the context feature is then restored to the same size as S2 by upsampling. Global pooling acquires global context information, the convolution extracts effective features, and the upsampling brings the feature size to the required resolution. Drawing on the design ideas of the self-attention mechanism, this step learns context dependencies.
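A minimal sketch of this context-extraction step, assuming PyTorch; the 1x1 convolution and nearest-neighbour upsampling are illustrative choices where the text only specifies pooling, convolution and upsampling:

```python
# Sketch of the context-feature step: global average pooling, a 1x1
# convolution, then upsampling back to the size of S2 (PyTorch assumed;
# the 1x1 kernel and nearest-neighbour upsampling are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextExtractor(nn.Module):
    def __init__(self, ch: int = 128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.conv = nn.Conv2d(ch, ch, 1)     # filter the pooled feature
        self.act = nn.ReLU(inplace=True)

    def forward(self, s2: torch.Tensor) -> torch.Tensor:
        ctx = self.act(self.conv(self.pool(s2)))
        # Restore the resolution to the same size as S2 by upsampling.
        return F.interpolate(ctx, size=s2.shape[-2:], mode="nearest")

s2 = torch.randn(1, 128, 7, 7)       # stand-in for the 1/32 semantic feature
print(ContextExtractor()(s2).shape)  # torch.Size([1, 128, 7, 7])
```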
S70, feature fusion generates multi-scale features:
The purpose of this step is to fuse different features and generate a brand-new multi-scale feature. The specific implementation is as follows: in the feature fusion module, the context feature obtained in step S60, the detail feature D1 from step S30 and the semantic feature S2 from step S40 are taken as inputs; D1 is convolved with a stride of 4 to obtain the same spatial size; a dilated convolution is performed on S2 to enlarge the receptive field; the context feature and the processed D1 and S2 are fused; the fused feature is upsampled and convolved to generate a new feature, and this process is repeated twice. The fusion of the multi-scale features adopts a skip-connection structure, and features of different levels generate a new feature expression through fusion, which amounts to a feature reconstruction technique.
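The fusion step could be sketched as below (PyTorch assumed); the stride-4 convolution on D1, the dilated convolution on S2, the concatenation and the two upsample-plus-convolution rounds follow the text, while kernel sizes, activations and channel widths are illustrative:

```python
# Sketch of the fusion step (PyTorch assumed): a stride-4 convolution on
# D1, a dilated convolution on S2, concatenation with the context feature,
# then two rounds of upsample + convolution. Kernel sizes, activations and
# channel widths are illustrative; a 224x224 input keeps the shapes aligned.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionModule(nn.Module):
    def __init__(self, ch: int = 128):
        super().__init__()
        self.d_proj = nn.Conv2d(ch, ch, 3, stride=4, padding=1)    # D1: 1/8 -> 1/32
        self.s_proj = nn.Conv2d(ch, ch, 3, padding=4, dilation=4)  # S2: wider receptive field
        self.mix = nn.ModuleList([
            nn.Conv2d(3 * ch, ch, 3, padding=1),  # first pass, after concatenation
            nn.Conv2d(ch, ch, 3, padding=1),
        ])

    def forward(self, ctx, d1, s2):
        f = torch.cat([ctx,
                       torch.relu(self.d_proj(d1)),
                       torch.relu(self.s_proj(s2))], dim=1)
        for conv in self.mix:  # repeat upsample + convolution twice
            f = torch.relu(conv(F.interpolate(f, scale_factor=2, mode="nearest")))
        return f               # the multi-scale fusion feature F'
```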
S80, judging key points according to the multi-scale characteristics:
The purpose of this step is to judge the temperature and humidity key points in the tobacco leaf baking process according to the multi-scale fusion feature generated in S70. The specific implementation is as follows: the feature obtained in step S70 is input into a softmax classifier and a classification model is trained; in online use, the collected tobacco leaf images are processed sequentially through steps S10-S70 to extract the multi-scale features; the features are input into the classification model, the class is output, and the current temperature and humidity key point is judged. The classification model can also be constructed with algorithms such as support vector machines or random forests. The whole process realizes multi-scale feature extraction and representation of the tobacco baking image, and the key points are judged from these features, enabling intelligent control of the baking process.
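A minimal sketch of the discrimination head, assuming PyTorch; the number of key-point classes and the pooling-plus-linear layout are assumptions, since the text only requires a softmax classifier over the fused features:

```python
# Sketch of the discrimination head (PyTorch assumed): global pooling of
# the fused feature followed by a linear softmax classifier. The number of
# key-point classes is an assumption for the example.
import torch
import torch.nn as nn

NUM_KEYPOINT_CLASSES = 5  # assumed: one class per temperature/humidity key point

classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, NUM_KEYPOINT_CLASSES),
)

fused = torch.randn(1, 128, 28, 28)  # stand-in for the fusion feature
probs = torch.softmax(classifier(fused), dim=1)
print(probs.argmax(dim=1))           # the predicted key-point class
```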
Example two
As shown in fig. 1, the second embodiment of the method for judging the key points of the baking temperature and humidity of the tobacco leaves provided by the invention comprises the following steps:
S10, acquiring a cured tobacco leaf image;
The purpose of this step is to obtain the image data of the tobacco leaves during the baking process as input for the subsequent processing. Setting an image acquisition device, wherein parameters comprise:
P c: the image acquisition device can be a common digital camera, an industrial camera or a computing device with a camera;
Loc: the position coordinates of the image acquisition device are aligned with the tobacco leaf positions in the curing barn;
t a: the image acquisition time interval is determined according to the baking period;
t j: a predetermined image acquisition time point;
At each time t j, automatically shooting tobacco leaves in the curing barn according to the position Loc by using an image acquisition device P c, and acquiring a cured tobacco leaf image I j. I j is stored in turn in a storage device for later use.
Ij=Pc(Loc,tj),j=1,2,...,k;
The collected tobacco leaf image I j should clearly reflect the characteristics of the color, the form and the like of the tobacco leaves so as to ensure the accuracy of the subsequent processing results. Image acquisition hardware used in this step includes, but is not limited to, conventional digital cameras, industrial cameras, computers, embedded devices, and the like. Generally, the image acquisition mode of the step ensures that the image data reflecting the baking state of the tobacco leaves is acquired.
S20, filtering the cured tobacco leaf image by using three basic residual error modules in the feature mapping module, and extracting initial mapping features;
The purpose of this step is to process the cured tobacco leaf image with the feature mapping module and extract the initial mapping features. The feature mapping module comprises m basic residual modules, where m is typically set to 3. The input tobacco leaf image data is I_j.
In the first basic residual module, the mapping feature H_1 is extracted by a convolution operation:
H_1 = σ(I_j * W_1 + b_1);
wherein W_1 and b_1 are the first convolution kernel and the first bias parameter, respectively, and σ is the activation function, such as ReLU.
The second basic residual module takes H_1 as input and performs feature extraction again:
H_2 = σ(H_1 * W_2 + b_2);
wherein W_2 and b_2 are the second convolution kernel and the second bias parameter, respectively.
Similarly, in the third basic residual module we obtain:
H_3 = σ(H_2 * W_3 + b_3);
wherein W_3 and b_3 are the third convolution kernel and the third bias parameter, respectively.
H_3 is output as the final mapping feature. Each residual module uses a downsampling operation with a stride of 2 to eliminate redundant information and obtain the primary local mapping features. The filtering calculation in each residual module follows the residual block structure of a residual network, and residual learning improves the model fitting capacity. The three residual modules are cascaded to form the feature mapping module, which extracts the initial mapping features of the tobacco leaf image and provides the basis for subsequent feature enhancement. The residual structure used in this step can draw on the design ideas of classical network models such as VGGNet and GoogLeNet.
S30, extracting detail features, including color and texture details, from the cured tobacco leaf image by using a bottom detail enhancement layer in the enhancement mapping feature module;
The purpose of this step is to perform feature enhancement on the tobacco leaf image with the bottom detail enhancement layer and extract detail features such as color and texture. The bottom detail enhancement layer contains n groups of enhancement blocks, each containing l residual modules. The mapping feature H_3 obtained in S20 is the input.
In the first enhancement block, a first set of detail features D_1 is extracted by the l serially connected residual modules:
D_1 = σ_l(... σ_1(H_3 * W_1 + b_1) ... * W_l + b_l);
wherein W_i and b_i are the convolution kernel and bias in the i-th residual module, and σ_i is the activation function, i = 1, 2, ..., l.
Similarly, in the second enhancement block:
D_2 = σ_l(... σ_1(D_1 * W_1 + b_1) ... * W_l + b_l);
This process is repeated until the last enhancement block outputs the detail feature D_3. Because each enhancement block works at 1/8 resolution, a high-resolution feature map can be provided. Repeated feature enhancement by the residual modules extracts the tobacco leaf detail information. The enhancement block structure proposed in this step is similar to the residual block structure in ResNet and strengthens the expressive power of the features.
S40, extracting high-level semantic features from the cured tobacco leaf images by using the high-level semantic enhancement layers in the enhancement mapping feature module;
The purpose of this step is to perform feature enhancement on the tobacco leaf image with the advanced semantic enhancement layer and extract higher-level semantic features. The advanced semantic enhancement layer contains n groups of enhancement blocks, each also consisting of l residual modules. The mapping feature H_3 obtained in S20 is the input.
In the first enhancement block, the input is downsampled and a first set of semantic features S_1 is extracted through the residual modules:
S_1 = σ_2(σ_1(downsample(H_3) * W_1^s + b_1^s) * W_2^s + b_2^s);
wherein downsample(·) represents the downsampling operation, W_1^s and b_1^s represent the convolution kernel and bias parameter of the 1st residual module in the semantic enhancement layer, and W_2^s and b_2^s represent the convolution kernel and bias parameter of the 2nd residual module.
In the second enhancement block, S_1 is taken as input, and downsampling and feature extraction continue:
S_2 = σ_2(σ_1(downsample(S_1) * W_3^s + b_3^s) * W_4^s + b_4^s);
This process is repeated until the final output semantic feature S_2 has a 1/32 resolution relative to the original image and contains high-level abstract features. The downsampling operations in this step enlarge the receptive field, improve the semantic feature extraction capacity, and enhance the feature expression of the residual structure.
S50, inputting the initial mapping features, the detail features and the semantic features into a feature fusion module;
The purpose of this step is to input the initial mapping feature H_3 extracted in S20, the underlying detail features extracted in S30, and the advanced semantic features extracted in S40 into the feature fusion module.
The feature fusion module needs to have input ends matched with the three types of features, and the features are input as follows:
H_in = H_3, D_in = D_1, S_in = S_2;
wherein H_in represents the mapping feature output by step S20, D_in represents the detail feature output by step S30, and S_in represents the semantic feature output by step S40.
The method mainly realizes the integration of the features and lays a foundation for the subsequent feature fusion processing.
S60, extracting context features of the high-level semantic features through pooling and convolution;
The purpose of this step is to extract context information from the semantic features. In the feature fusion module, the semantic feature S_in obtained in step S40 is selected as input, and global average pooling is performed on S_in to obtain the summary feature S_pool:
S_pool = (1 / (h·w)) · Σ_{i=1..h} Σ_{j=1..w} S_in(i, j);
where h and w are the height and width of the feature map.
Then, a convolution operation is performed on S_pool, and the filtered context feature S_ctx is output:
S_ctx = σ(S_pool * W_ctx + b_ctx);
where W_ctx and b_ctx are, respectively, the convolution kernel and bias used to obtain the context feature.
The resolution of S_ctx is then restored to the same size as S_in by upsampling. Global pooling acquires global context information, the convolution extracts effective features, and the upsampling brings the feature size to the required resolution. Drawing on the design ideas of the self-attention mechanism, this step learns context dependencies.
S70, performing operations such as up-sampling and convolution on the context features, the detail features and the enhancement mapping features to generate multi-scale features;
The purpose of this step is to fuse different features and generate a brand-new multi-scale feature. In the feature fusion module, the context feature S_ctx obtained in step S60, the detail feature D_in from step S30 and the semantic feature S_in from step S40 are taken as inputs. D_in is convolved with a stride of 4 to obtain the same spatial size:
D_fusion = σ(D_in * W_D);
where D_fusion represents the processed detail feature and W_D represents the convolution kernel.
A dilated convolution is performed on S_in to enlarge the receptive field:
S_fusion = σ(dilated_conv(S_in));
where S_fusion represents the processed semantic feature and dilated_conv represents the dilated convolution used to adjust the semantic feature.
S_ctx, D_fusion and S_fusion are then fused:
F = concat(S_ctx, D_fusion, S_fusion);
where F represents the feature obtained by fusing the different features and concat represents the feature concatenation operation.
Upsampling and convolving F generates a new feature F′:
F′ = σ(upsample(F) * W_F + b_F);
wherein upsample denotes an upsampling operation.
The above process is repeated twice to finally obtain the multi-scale feature. The fusion of the multi-scale features adopts a skip-connection structure, and features of different levels generate a new feature expression through fusion, which amounts to a feature reconstruction technique.
S80, judging temperature and humidity key points of tobacco leaf baking in real time according to the multi-scale characteristics.
The purpose of this step is to judge the temperature and humidity key points in the tobacco leaf baking process according to the multi-scale fusion feature generated in S70. The feature obtained in step S70 is input into a softmax classifier and a classification model is trained:
y = softmax(W_c · F′ + b_c);
wherein F′ is the multi-scale feature output by S70, W_c and b_c are the classifier weight and bias, and y represents the class output by the classification model.
In online use, the collected tobacco leaf images are processed sequentially through steps S10-S70 to extract the multi-scale features; the features are input into the classification model, the class y is output, and the current temperature and humidity key point is judged. The classification model can also be constructed with algorithms such as support vector machines or random forests. The whole process realizes multi-scale feature extraction and representation of the tobacco baking image, and the key points are judged from these features, enabling intelligent control of the baking process.
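Tying the module sketches from the first embodiment together, one possible online inference pass (S10-S80) could be wired as follows; every module here refers to those earlier illustrative sketches, and the preprocessing constants are assumptions:

```python
# End-to-end sketch of the online discrimination pass (S10-S80), wiring
# together the illustrative module sketches from the first embodiment.
# Preprocessing constants are assumptions; a 224x224 BGR frame keeps the
# intermediate resolutions (1/8 and 1/32) aligned across modules.
import numpy as np
import torch

def discriminate(image_bgr: np.ndarray, mapping, detail, semantic,
                 ctx_extractor, fusion, head) -> int:
    # S10: captured image -> float tensor in [0, 1] (assumed normalization).
    x = torch.from_numpy(image_bgr).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    h = mapping(x)                       # S20: initial mapping feature (1/8)
    d, detail_feats = h, []
    for block in detail:                 # S30: D1, D2, D3 at 1/8 resolution
        d = block(d)
        detail_feats.append(d)
    s = h
    for block in semantic:               # S40: semantic features down to 1/32 (S2)
        s = block(s)
    ctx = ctx_extractor(s)               # S60: context feature from S2
    f = fusion(ctx, detail_feats[0], s)  # S70: multi-scale fusion (uses D1, S2)
    y = torch.softmax(head(f), dim=1)    # S80: key-point class probabilities
    return int(y.argmax(dim=1))

# Example wiring with the sketches above (all names are from those sketches):
# cls = discriminate(frame, feature_mapping, detail_layer, semantic_layer,
#                    ContextExtractor(), FusionModule(), classifier)
```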
Example III
As shown in fig. 1, the method for distinguishing the temperature and humidity key points of tobacco baking according to the present invention comprises three module algorithms: the feature mapping module algorithm computes the mapping features; the enhancement mapping feature module algorithm computes the detail and semantic enhancement features; and the feature fusion module algorithm fuses the mapping features and the enhancement features to generate new feature information. First, the cured tobacco leaf image is input to the feature mapping module to extract the initial mapping features. The mapping features are then input to the enhancement feature module, which enhances them from two aspects, detail information and semantic information, increasing feature diversity and strengthening feature representation capability. Finally, the mapping features, the detail enhancement features and the semantic enhancement features are input to the feature fusion module, which fuses the multi-scale features to generate a new, more representative feature describing the cured tobacco leaf state; the tobacco curing temperature and humidity key points are then judged from the curing state and regulated accordingly. The flow chart is shown in fig. 5.
The key point distinguishing method of the baking temperature and humidity of the tobacco leaves mainly comprises the following steps:
1. Feature mapping module algorithm:
The feature mapping module, which is directly connected with the cured tobacco leaf image to extract the original mapped features, consists of three basic residual modules, as shown in fig. 2.
For the extraction of the mapping features, the first residual module filters the cured tobacco leaf image to form the first transition feature, and the remaining two residual modules further enhance the nonlinear relation to form the second transition feature and the mapping feature. To remove redundant information and obtain the main local key mapping features, each basic residual module is provided with a downsampling operation with a stride of 2. The mapping feature extraction process of the feature mapping module is defined as follows:
R = f_n(f_{n-1}(... f_1(I) ...));
wherein n is set to 3 because the feature mapping module includes three basic residual module operations, I is defined as the input cured tobacco leaf image data, f_i denotes the filtering operation of the i-th basic residual module, and R is the output mapping feature.
2. Enhancement map feature module algorithm:
After the mapping feature module, the mapping feature is enhanced using an enhanced mapping feature module. In order to obtain richer feature information, multi-level feature enhancement layers are used, including an underlying detail enhancement layer and an advanced semantic enhancement layer, as shown in fig. 3.
The bottom detail enhancement layer is used for extracting bottom-level detail feature information, including color and texture. It consists of three groups of enhancement blocks: the first two enhancement blocks each comprise two basic residual modules, and the last group contains only one residual module. Since these enhancement blocks keep a resolution of 1/8, they can provide feature maps with high spatial resolution. The detail feature information is obtained by the following calculation:
D = g_k(g_{k-1}(... g_1(R) ...));
wherein k is set to 5 because the bottom detail enhancement layer includes 5 basic residual module operations, R is defined as the input mapping feature, and g_i denotes the filtering operation of the i-th residual module.
The high-level semantic enhancement layer is used for extracting high-level semantic feature information and is likewise composed of three groups of enhancement blocks. The first two groups each comprise two basic residual modules, with a 1/2 downsampling operation at the front of each of these enhancement blocks, and the last group contains only one basic residual module. The resulting feature map has a resolution of 1/32 relative to the original image and provides more advanced feature information, calculated as follows:
S = h_l(h_{l-1}(... h_1(R) ...));
wherein l is set to 5 because the advanced semantic enhancement layer contains 5 basic residual module operations, R is defined as the input mapping feature, and h_i denotes the operation of the i-th residual module, with downsampling applied before the first two groups.
The mapping features pass through the filtering operations of the enhancement mapping feature module to generate the enhanced detail information and the enhanced semantic information. Furthermore, to enhance feature reusability, the enhancement detail information of the bottom detail enhancement layer contains three feature groups, named D1, D2 and D3, coming from the three groups of detail enhancement blocks and representing different levels of detail feature enhancement; the high-level semantic enhancement layer emphasizes high-level abstract feature information and uses two downsampling operations, so the enhanced semantic information contains two feature groups from two resolutions, named S1 and S2, representing different levels of semantic feature enhancement.
3. Feature fusion module algorithm:
the feature fusion module is used for fusing the multi-level feature information such as the mapping feature, the bottom-layer detail enhancement feature, the high-level semantic enhancement feature and the like to generate a new feature representation, as shown in fig. 4.
The feature fusion module mainly comprises four steps. First, to extract the context features, the high-level semantic enhancement feature S2 is filtered through global average pooling and a convolution operation, and the resolution of the extracted context feature is then restored to the size of S2. Second, the output of the first step, one group of low-level detail enhancement features and one group of high-level semantic enhancement features are fused: to obtain the same spatial resolution, D1 is processed by a convolution operation with a stride of 4, and S2 is processed by a convolution operation whose dilation ratio is set to 4 to enlarge the receptive field. Then, the fused features are upsampled by a factor of 2 and filtered by a convolution operation. The last two steps operate similarly to the second step. For the given multi-scale feature information Fusion_DS = [R | D1, D2, D3 | S1, S2], the output of the feature fusion module can be represented by:
Out_MF = Fua(Fusion_DS);
wherein Fua denotes the fusion operation of the feature fusion module.
The second aspect of the invention provides a first embodiment of a computer-readable storage medium. In this embodiment, the computer-readable storage medium stores program instructions, and when the program instructions run, they execute the above method for judging the tobacco leaf baking temperature and humidity key points.
The third aspect of the invention provides a first embodiment of a tobacco flue-curing temperature and humidity key point judging system; in this embodiment, the system comprises the above computer-readable storage medium.
Specifically, the principle of the invention is as follows: a cured tobacco leaf image is acquired; the cured tobacco leaf image is filtered by three basic residual modules in the feature mapping module, and initial mapping features are extracted; detail features, including color and texture details, are extracted from the cured tobacco leaf image by the bottom detail enhancement layer in the enhancement mapping feature module; high-level semantic features are extracted from the cured tobacco leaf image by the advanced semantic enhancement layer in the enhancement mapping feature module; the initial mapping features, the detail features and the semantic features are input into the feature fusion module; context features are extracted from the advanced semantic features through pooling and convolution; operations such as up-sampling and convolution are performed on the context features, the detail features and the enhancement mapping features to generate multi-scale features; and the temperature and humidity key points of tobacco leaf baking are judged in real time according to the multi-scale features.

Claims (9)

1. The method for distinguishing the key points of the baking temperature and humidity of the tobacco leaves is characterized by comprising the following steps:
S10, acquiring a cured tobacco leaf image;
S20, filtering the cured tobacco leaf image by using three basic residual error modules in the feature mapping module, and extracting initial mapping features;
S30, extracting detail features, including color and texture details, from the cured tobacco leaf image by using a bottom detail enhancement layer in an enhancement mapping feature module;
S40, extracting high-level semantic features from the cured tobacco leaf images by using a high-level semantic enhancement layer in the enhancement mapping feature module;
S50, inputting the initial mapping feature, the detail feature and the semantic feature into a feature fusion module;
S60, extracting context features of the high-level semantic features through pooling and convolution;
S70, performing operations such as up-sampling and convolution on the context feature, the detail feature and the enhancement mapping feature to generate a multi-scale feature, and performing feature accumulation and filtering operations on the multi-scale feature to generate a fusion feature;
S80, inputting the fusion feature into a classifier, and judging the temperature and humidity key points of tobacco baking according to the stage of tobacco baking.
2. The method for discriminating key points of temperature and humidity for baking tobacco leaves according to claim 1, wherein the specific step of S10 includes:
Setting an image acquisition device to align tobacco leaves in the curing barn;
determining the time interval of image acquisition according to the baking period;
At a preset time point, the image acquisition device automatically shoots tobacco leaves in the curing barn to acquire cured tobacco leaf images;
The captured image is stored in a storage device for later use.
3. The method for determining key points of temperature and humidity for baking tobacco leaves according to claim 2, wherein the specific step of S20 includes:
inputting the obtained tobacco leaf image data into a first basic residual error module of the feature mapping module;
The first residual error module carries out filtering processing on the image and outputs a first transition characteristic;
continuously inputting the first transition characteristic into a second residual error module, filtering, and outputting the second transition characteristic;
and finally, inputting the second transition characteristic into a third residual error module, and outputting the mapping characteristic after filtering.
4. The method for determining key points of temperature and humidity for baking tobacco leaves according to claim 3, wherein the specific step of S30 comprises:
the bottom layer detail enhancement layer comprises a plurality of groups of enhancement blocks, and each enhancement block comprises a plurality of residual error modules;
Inputting the mapping characteristic obtained in the step S20 into a first enhancement block, and outputting a first detail enhancement characteristic through filtering operation of a residual error module;
Inputting the first detail enhancement feature into a second enhancement block, filtering again, and outputting the second detail enhancement feature;
and finally, inputting the second detail enhancement feature into a third enhancement block, and outputting a third detail enhancement feature.
5. The method for determining key points of temperature and humidity for baking tobacco leaves according to claim 4, wherein the specific step of S40 includes:
The high-level semantic enhancement layer comprises a plurality of groups of enhancement blocks, and each enhancement block consists of a plurality of residual error modules;
inputting the mapping features obtained in the step S20 into a first enhancement block, downsampling before the enhancement block, and filtering by a residual error module to output a first semantic feature;
Continuously inputting the first semantic features into a second enhancement block, performing downsampling and filtering, and outputting second semantic features;
and the last enhancement block directly filters the second semantic feature and outputs the final semantic feature.
6. The method for determining key points of temperature and humidity for baking tobacco leaves according to claim 5, wherein the specific step of S60 comprises:
in the feature fusion module, selecting the advanced semantic features S2 obtained in the step S40 as input;
Carrying out global average pooling on the S2;
Convolving the pooled features and outputting filtered context features;
The resolution of the context feature is then restored to the same size as S2 by upsampling.
7. The method for determining key points of temperature and humidity for baking tobacco leaves according to claim 6, wherein the specific step of S70 includes:
In the feature fusion module, fusing the context features obtained in the step S60, the detail features D1 in the step S30 and the semantic features S2 in the step S40;
up-sampling and convolution operation are carried out on the fused features to generate new features, and the process is repeated twice to obtain multi-scale features;
And accumulating and filtering the multi-scale features to obtain fusion features.
8. A computer-readable storage medium, wherein program instructions are stored in the computer-readable storage medium, and the program instructions, when run, are used for executing the method for judging the tobacco leaf baking temperature and humidity key points according to any one of claims 1-7.
9. A tobacco flue-curing temperature and humidity key point judging system, comprising the computer-readable storage medium of claim 8.
CN202410075612.0A 2024-01-18 2024-01-18 Tobacco leaf baking temperature and humidity key point judging method, medium and system Active CN117893773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410075612.0A CN117893773B (en) 2024-01-18 2024-01-18 Tobacco leaf baking temperature and humidity key point judging method, medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410075612.0A CN117893773B (en) 2024-01-18 2024-01-18 Tobacco leaf baking temperature and humidity key point judging method, medium and system

Publications (2)

Publication Number Publication Date
CN117893773A true CN117893773A (en) 2024-04-16
CN117893773B CN117893773B (en) 2024-10-11

Family

ID=90651792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410075612.0A Active CN117893773B (en) 2024-01-18 2024-01-18 Tobacco leaf baking temperature and humidity key point judging method, medium and system

Country Status (1)

Country Link
CN (1) CN117893773B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113919443A (en) * 2021-02-24 2022-01-11 北京优创新港科技股份有限公司 Tobacco maturity state probability calculation method based on image analysis
US20230368497A1 (en) * 2022-05-10 2023-11-16 Shandong Jianzhu University Image Recognition Method and System of Convolutional Neural Network Based on Global Detail Supplement
CN117372881A (en) * 2023-12-08 2024-01-09 中国农业科学院烟草研究所(中国烟草总公司青州烟草研究所) Intelligent identification method, medium and system for tobacco plant diseases and insect pests
CN117408924A (en) * 2023-10-19 2024-01-16 桂林电子科技大学 Low-light image enhancement method based on multiple semantic feature fusion network


Also Published As

Publication number Publication date
CN117893773B (en) 2024-10-11

Similar Documents

Publication Publication Date Title
WO2022036777A1 (en) Method and device for intelligent estimation of human body movement posture based on convolutional neural network
US11798132B2 (en) Image inpainting method and apparatus, computer device, and storage medium
CN107798381B (en) Image identification method based on convolutional neural network
CN112614077B (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN112287940A (en) Semantic segmentation method of attention mechanism based on deep learning
CN110929736B (en) Multi-feature cascading RGB-D significance target detection method
CN107239514A (en) A kind of plants identification method and system based on convolutional neural networks
CN112446302B (en) Human body posture detection method, system, electronic equipment and storage medium
CN106548159A (en) Reticulate pattern facial image recognition method and device based on full convolutional neural networks
CN110222718B (en) Image processing method and device
CN107657204A (en) The construction method and facial expression recognizing method and system of deep layer network model
CN110059593B (en) Facial expression recognition method based on feedback convolutional neural network
CN111582396A (en) Fault diagnosis method based on improved convolutional neural network
CN116030537B (en) Three-dimensional human body posture estimation method based on multi-branch attention-seeking convolution
CN113066037B (en) Multispectral and full-color image fusion method and system based on graph attention machine system
CN112749675A (en) Potato disease identification method based on convolutional neural network
CN111340011B (en) Self-adaptive time sequence shift neural network time sequence behavior identification method
CN109190666B (en) Flower image classification method based on improved deep neural network
CN114283320A (en) Target detection method based on full convolution and without branch structure
CN114398972A (en) Deep learning image matching method based on joint expression attention mechanism
CN113344077A (en) Anti-noise solanaceae disease identification method based on convolution capsule network structure
CN114492634B (en) Fine granularity equipment picture classification and identification method and system
CN114419406A (en) Image change detection method, training method, device and computer equipment
CN117576402A (en) Deep learning-based multi-scale aggregation transducer remote sensing image semantic segmentation method
CN109934835B (en) Contour detection method based on deep strengthening network adjacent connection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant