CN114677351A - Deep learning training method for flue-cured tobacco leaf grading - Google Patents

Deep learning training method for flue-cured tobacco leaf grading

Info

Publication number
CN114677351A
CN114677351A (application CN202210307372.3A)
Authority
CN
China
Prior art keywords
img
image
tobacco leaf
tobacco
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210307372.3A
Other languages
Chinese (zh)
Inventor
张宇阳 (Zhang Yuyang)
夏璐 (Xia Lu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Qidi Ruishi Intelligent Technology Co ltd
Original Assignee
Henan Qidi Ruishi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Qidi Ruishi Intelligent Technology Co ltd filed Critical Henan Qidi Ruishi Intelligent Technology Co ltd
Priority to CN202210307372.3A
Publication of CN114677351A
Legal status: Pending (Current)

Classifications

    • G06T 7/0002: Image analysis; Inspection of images, e.g. flaw detection
    • G06N 3/045: Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/08: Neural networks; Learning methods
    • G06T 7/10: Image analysis; Segmentation; Edge detection
    • G06T 7/60: Image analysis; Analysis of geometric attributes
    • G06T 2207/10004: Image acquisition modality; Still image; Photographic image
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/20084: Special algorithmic details; Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep learning training method for grading flue-cured tobacco leaves. The method acquires tobacco leaf images and removes impurities; acquires part (growth position) information of the tobacco leaf, including a length variation map, a width variation map, an area map and a thickness variation map of the leaf; acquires oil-content information, including an oil distribution map of the leaf; acquires integrity information, including a damage-coefficient map of the leaf; and trains a convolutional neural network with the part, oil-content and integrity information as training data. By combining a dedicated imaging setup with purpose-built feature operators, the method extracts the part, oil-content and integrity information of the leaf more accurately than existing image-processing approaches. With accurate, impurity-free part, oil-content and integrity information, the feature learning of the neural network is achieved with far less tobacco leaf data, which effectively shortens the development cycle of the deep learning algorithm and reduces the cost of tobacco leaf data.

Description

Deep learning training method for flue-cured tobacco leaf grading
Technical Field
The invention relates to the field of intelligent tobacco leaf grading, in particular to a deep learning training method for flue-cured tobacco leaf grading.
Background
Tobacco leaf grading groups leaves of the same production area and variety by growth position and color, and then grades them within each group by quality factors such as maturity, leaf structure and oil content. With advances in technology, tobacco leaves are now graded intelligently using techniques such as image recognition, which saves labor and improves efficiency.
At present, image-processing-based tobacco grading relies on generic RGB features or features obtained by combining RGB channels, which cannot effectively express the growth position, oil content and integrity of a leaf. Deep-learning-based grading instead lets the computer learn convolutional features on its own from large amounts of tobacco leaf data. Although such convolutional features are robust to factors such as illumination, viewing angle, pose and scale, obtaining good features typically requires hundreds of thousands to millions of leaf samples. Because tobacco is a state-controlled commodity and hard to obtain, the data acquisition cost of a neural network model for tobacco grading is extremely high; in addition, the huge data volume makes feature learning slow and difficult.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a deep learning training method for grading flue-cured tobacco leaves.
A deep learning training method for grading flue-cured tobacco leaves is provided. The method comprises: acquiring tobacco leaf images and removing impurities; acquiring part (growth position) information of the tobacco leaf, including a length variation map, a width variation map, an area map and a thickness variation map of the leaf; acquiring oil-content information of the leaf, including an oil distribution map; acquiring integrity information of the leaf, including a damage-coefficient map; and training a convolutional neural network with the part information, oil-content information and integrity information of the leaf as training data.
Based on the above, after the tobacco leaf image is acquired, the background outside the leaf is removed from the image so as to eliminate impurity information from the tobacco leaf image.
Based on the above, the area map of the tobacco leaf is calculated from the r-channel binary map and the b-channel binary map of the tobacco leaf image:
img_leaf = (img_r_orig - img_b_orig > 30) × 255
where img_leaf denotes the area map of the leaf, img_r_orig denotes the r-channel binary map of the leaf image, and img_b_orig denotes the b-channel binary map of the leaf image.
Based on the above, the centroid of the tobacco leaf in the image is taken as the center of a Gaussian distribution, the length value and the width value of the leaf are used in turn as the Gaussian radius, and each individual scalar is converted into a two-dimensional image, so that the length variation map and the width variation map of the leaf are calculated by the formula
g(x, y) = exp(-((x - x0)^2 + (y - y0)^2) / (2σ^2))
where σ denotes the Gaussian kernel (here the length value or the width value), g is the Gaussian function, (x, y) are pixel coordinates, and (x0, y0) is the centroid of the tobacco leaf in the image.
Based on the above, the tobacco leaf image is split into its R, G and B channels, and for each channel a binary map, a median map, a mode map, a maximum map and a minimum map are calculated:
(img_r, img_g, img_b) = split(img)
img_r_mid = median(img_r)
img_r_mode = mode(img_r)
img_r_min = min(img_r)
img_r_max = max(img_r)
where img denotes the original tobacco leaf image; img_r, img_g and img_b denote the binary maps of the r, g and b channels; img_r_mid, img_r_mode, img_r_min and img_r_max denote the median, mode, minimum and maximum maps of the r channel; the median, mode, minimum and maximum maps of the g and b channels are calculated in the same way. The 15 two-dimensional maps thus obtained are combined into one image to give a 15-channel image feature map, which is used as the thickness variation map of the tobacco leaf.
Based on the above, an image acquisition device is arranged above and below a transparent image-acquisition region holding the tobacco leaf, and a planar light source is arranged above the leaf to obtain a lit image of the leaf. The lit image is split into its R, G and B channels, and the oil distribution map of the leaf is obtained from the per-channel maps as follows:
img_s1 = img_r ÷ img_b
img_s2 = img_b ÷ img_g
img_s3 = img_g ÷ img_r
where img_s1 denotes the red-blue feature map, img_s2 the blue-green feature map, img_s3 the green-red feature map, and img_r, img_g and img_b the binary maps of the r, g and b channels.
Based on the above, the damage coefficient of the tobacco leaf is calculated from the hole area inside the leaf and the area of the whole leaf in the tobacco leaf image, and the coefficient is converted into a damage-coefficient map of the leaf via a Gaussian distribution.
Based on the above, the length variation map, width variation map, area map, thickness variation map, oil distribution map and damage-coefficient map of the tobacco leaf are fed into a convolutional neural network to train the convolutional features:
img_all_result = [img_r_mid, img_g_mid, ..., img_s1, img_s2, img_s3]
where img_all_result denotes the combination of the individual maps into one image.
Compared with the prior art, the invention has outstanding substantive features and represents notable progress. In particular, by combining a dedicated imaging setup with purpose-built feature operators, it extracts the part, oil-content and integrity information of the tobacco leaf more accurately than existing image-processing methods. With accurate, impurity-free part, oil-content and integrity information, the feature learning of the neural network is achieved with far less tobacco leaf data, which effectively shortens the development cycle of the deep learning algorithm and reduces the cost of tobacco leaf data.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
FIG. 2 shows the originally captured tobacco leaf image and the image after impurity removal according to the invention.
FIG. 3 shows the originally captured lit image of the tobacco leaf and the lit image after impurity removal according to the invention.
FIG. 4 is an area diagram of an image of tobacco leaves according to the present invention.
FIG. 5 is a graph of the change in length of an image of tobacco leaves according to the present invention.
FIG. 6 is a width variation graph of an image of tobacco leaves according to the present invention.
Fig. 7-11 are thickness variation diagrams of tobacco leaf images according to the present invention.
FIG. 12 is the oil distribution map of the lit tobacco leaf image according to the invention.
FIG. 13 is a graph of the damage factor of an image of tobacco leaves according to the present invention.
Fig. 14 is a schematic structural diagram of the lighting and image-capture device of the invention.
Description of reference numerals: 1. a camera; 2. a light emitting panel; 3. tobacco leaves; 4. acrylic plates.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
As shown in Fig. 1, a deep learning training method for flue-cured tobacco leaf grading acquires tobacco leaf images and removes impurities; acquires part (growth position) information of the tobacco leaf, including a length variation map, a width variation map, an area map and a thickness variation map of the leaf; acquires oil-content information, including an oil distribution map; acquires integrity information, including a damage-coefficient map; and trains a convolutional neural network with the part information, oil-content information and integrity information of the leaf as training data.
Specifically, the tobacco leaf image is captured in the conventional way. As shown in Fig. 14, an image-acquisition device such as a camera 1 is arranged above and below a transparent image-acquisition region for the tobacco leaf 3, e.g. a transparent acrylic plate 4, and a planar light source such as a flat LED light-emitting panel 2 with a luminance of 300 cd/m² is arranged above the leaf, giving a lit image of the leaf under a high-brightness light source. The background outside the leaf is removed from both the ordinary image and the lit image with a segmentation algorithm, and the background is filled with a solid color to remove impurity information, as shown in Figs. 2 and 3: the upper image in Fig. 2 is the conventionally captured leaf image and the lower image is the result after impurity removal; the upper image in Fig. 3 is the original lit image and the lower image is the lit image after impurity removal.
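For illustration, a minimal Python/OpenCV sketch of this impurity-removal step is given below. The patent only states that a segmentation algorithm removes the background and fills it with a solid color; the HSV color range and the largest-connected-component filter used here are assumptions, not the patent's specific algorithm.

```python
import cv2
import numpy as np

def remove_background(img_bgr, fill_value=0):
    """Keep only the tobacco leaf and fill everything else with a solid color.

    The HSV range below (yellow/brown cured leaf) and the connected-component
    filter are illustrative assumptions; the patent does not fix them.
    """
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([10, 40, 40]), np.array([35, 255, 255]))
    # Keep the largest connected component so stray impurities are dropped too.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    cleaned = img_bgr.copy()
    cleaned[mask == 0] = fill_value  # solid-color background
    return cleaned, mask
```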
The part, oil-content and integrity information of the tobacco leaf is then obtained by image processing. According to growth position, tobacco leaves can be divided into upper, middle and lower leaves: upper leaves are generally thick, large and wide; middle leaves are of moderate thickness, large and of moderate width; lower leaves are generally thin, small and round. A good-quality leaf has moderate oil content and high integrity.
The area map of the tobacco leaf is calculated from the r-channel binary map and the b-channel binary map of the tobacco leaf image:
img_leaf = (img_r_orig - img_b_orig > 30) × 255
where img_leaf denotes the area map of the leaf, and img_r_orig and img_b_orig denote the r-channel and b-channel binary maps of the leaf image. In the RGB binary maps of the original leaf image, pixels where the red channel minus the blue channel exceeds 30 are set to 255; this red-minus-blue range corresponds to the yellow part of the spectrum, so the computation yields the leaf area, shown as the white region in Fig. 4.
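A minimal sketch of this area-map computation, following the formula above (red channel minus blue channel, thresholded at 30). The cast to int16 is an implementation detail to avoid uint8 wrap-around, not part of the patent's formula.

```python
import cv2
import numpy as np

def leaf_area_map(img_bgr, threshold=30):
    """Area map: pixels where (r - b) > threshold are set to 255 (cf. Fig. 4)."""
    b, _, r = cv2.split(img_bgr)
    diff = r.astype(np.int16) - b.astype(np.int16)  # avoid uint8 wrap-around
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```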
Taking the centroid of the tobacco leaf in the image as the center of a Gaussian distribution and using the length value and the width value of the leaf in turn as the Gaussian radius, each individual scalar is converted into a two-dimensional image, which yields the length variation map and the width variation map of the leaf shown in Figs. 5 and 6. The calculation formula is
g(x, y) = exp(-((x - x0)^2 + (y - y0)^2) / (2σ^2))
where σ denotes the Gaussian kernel (here the length value or the width value), g is the Gaussian function, (x, y) are pixel coordinates, and (x0, y0) is the centroid of the tobacco leaf in the image.
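A sketch of the Gaussian encoding used for the length and width maps. The leaf centroid and the measured length/width in pixels are assumed to have been obtained beforehand (e.g. from the area map); the names below are illustrative.

```python
import numpy as np

def gaussian_map(shape, center, sigma):
    """Encode one scalar (length, width, ...) as a 2-D Gaussian image.

    center is the (x, y) leaf centroid in the image; sigma is the scalar
    used as the Gaussian radius.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * float(sigma) ** 2))
    return (g * 255).astype(np.uint8)

# length_map = gaussian_map(mask.shape, leaf_centroid, leaf_length_px)
# width_map  = gaussian_map(mask.shape, leaf_centroid, leaf_width_px)
```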
The tobacco leaf image is split into its R, G and B channels, and the binary map of each channel is calculated:
(img_r, img_g, img_b) = split(img)
where img denotes the original tobacco leaf image, split denotes the channel-splitting operation, and img_r, img_g and img_b denote the binary maps of the r, g and b channels; the resulting leaf image is shown in Fig. 7.
The median map of each channel's binary map is calculated:
img_r_mid = median(img_r)
where img_r_mid denotes the median map of the r channel; the median maps of the g and b channels are calculated in the same way. The resulting leaf image is shown in Fig. 8.
The mode map of each channel's binary map is calculated:
img_r_mode = mode(img_r)
where img_r_mode denotes the mode map of the r channel; the mode maps of the g and b channels are calculated in the same way. The resulting leaf image is shown in Fig. 9.
The minimum map of each channel's binary map is calculated:
img_r_min = min(img_r)
where img_r_min denotes the minimum map of the r channel; the minimum maps of the g and b channels are calculated in the same way. The resulting leaf image is shown in Fig. 10.
The maximum map of each channel's binary map is calculated:
img_r_max = max(img_r)
where img_r_max denotes the maximum map of the r channel; the maximum maps of the g and b channels are calculated in the same way. The resulting leaf image is shown in Fig. 11.
The 15 two-dimensional maps thus obtained are combined into one image to give a 15-channel image feature map, which is used as the thickness variation map of the tobacco leaf; that is, the thickness variation map has fifteen channels.
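A sketch of the 15-channel thickness feature map. The patent does not spell out how each scalar statistic becomes a two-dimensional map; here each statistic of the leaf pixels is broadcast to a constant plane, which is one plausible reading rather than the patent's exact operator.

```python
import cv2
import numpy as np

def thickness_feature_map(img_bgr, leaf_mask):
    """Stack binary, median, mode, min and max planes per r, g, b channel (3 x 5 = 15)."""
    b, g, r = cv2.split(img_bgr)
    planes = []
    for ch in (r, g, b):
        vals = ch[leaf_mask > 0]
        if vals.size == 0:            # degenerate mask: fall back to all pixels
            vals = ch.ravel()
        planes.extend([
            np.where(ch > 0, 255, 0).astype(np.uint8),          # binary map
            np.full_like(ch, int(np.median(vals))),             # median map
            np.full_like(ch, int(np.bincount(vals).argmax())),  # mode map
            np.full_like(ch, int(vals.min())),                  # minimum map
            np.full_like(ch, int(vals.max())),                  # maximum map
        ])
    return np.stack(planes, axis=-1)  # H x W x 15 thickness variation map
```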
The lit image of the tobacco leaf is split into its R, G and B channels, and the oil distribution map of the leaf, shown in Fig. 12, is obtained from the per-channel maps as follows:
img_s1 = img_r ÷ img_b
img_s2 = img_b ÷ img_g
img_s3 = img_g ÷ img_r
where img_s1 denotes the red-blue feature map, img_s2 the blue-green feature map, img_s3 the green-red feature map, and img_r, img_g and img_b the binary maps of the r, g and b channels. The red-blue, blue-green and green-red feature maps are each two-dimensional; combining the three into one image gives the oil distribution map, which therefore has three channels.
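A sketch of the three channel-ratio planes that make up the oil distribution map. The small epsilon guarding against division by zero is an implementation detail, not part of the patent's formulas.

```python
import cv2
import numpy as np

def oil_distribution_map(lit_img_bgr, eps=1.0):
    """Stack the r/b, b/g and g/r ratio planes of the lit image (cf. Fig. 12)."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(lit_img_bgr)]
    s1 = r / (b + eps)  # red-blue feature map
    s2 = b / (g + eps)  # blue-green feature map
    s3 = g / (r + eps)  # green-red feature map
    return np.stack([s1, s2, s3], axis=-1)  # H x W x 3 oil distribution map
```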
The damage coefficient of the tobacco leaf, i.e. the ratio of the hole area to the leaf area, is calculated from the hole area inside the leaf and the area of the whole leaf in the tobacco leaf image. The coefficient is then converted into a damage-coefficient map via a Gaussian distribution: taking the centroid of the leaf in the image as the center of the distribution and the damage coefficient as the Gaussian radius, the single scalar is converted into a two-dimensional image using
g(x, y) = exp(-((x - x0)^2 + (y - y0)^2) / (2σ^2))
where σ denotes the Gaussian kernel (here the damage coefficient), g is the Gaussian function, (x, y) are pixel coordinates, and (x0, y0) is the centroid of the tobacco leaf in the image. The resulting image is shown in Fig. 13.
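A sketch of the damage-coefficient map, reusing the gaussian_map helper from the length/width sketch above. Holes are found by filling the outer leaf contour and subtracting the leaf mask; using the raw coefficient directly as the Gaussian radius follows the patent text, although in practice a scale factor may be needed to make the map visibly non-trivial.

```python
import cv2
import numpy as np

def damage_coefficient_map(leaf_mask):
    """Hole-area / leaf-area ratio encoded as a Gaussian image (cf. Fig. 13)."""
    contours, _ = cv2.findContours(leaf_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(leaf_mask)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    holes = cv2.subtract(filled, leaf_mask)          # interior holes only
    leaf_area = max(int(np.count_nonzero(filled)), 1)
    coeff = np.count_nonzero(holes) / leaf_area      # damage coefficient
    m = cv2.moments(filled, binaryImage=True)
    cx = m["m10"] / max(m["m00"], 1.0)
    cy = m["m01"] / max(m["m00"], 1.0)
    # The coefficient itself serves as the Gaussian radius, as in the patent text.
    return gaussian_map(leaf_mask.shape, (cx, cy), max(coeff, 1e-3))
```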
Finally, the length variation map, width variation map, area map, thickness variation map, oil distribution map and damage-coefficient map of the tobacco leaf are fed into a convolutional neural network to train the convolutional features:
img_all_result = [img_r_mid, img_g_mid, ..., img_s1, img_s2, img_s3]
where img_all_result denotes the combination of the individual maps into one multi-channel image, which is fed into the neural network for training. In this way a single leaf provides multi-channel image features, so a small number of leaves can supply a large amount of training data. The number of leaves needed for training is greatly reduced, feature learning of the neural network is achieved with far less leaf data, and the development cycle of the deep learning algorithm and the cost of the leaf data are reduced.
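For illustration, a sketch of how the individual maps could be stacked into one multi-channel tensor and fed to a small convolutional network. With the shapes used in the sketches above this gives 1 + 1 + 1 + 15 + 3 + 1 = 22 input channels; the architecture, the channel count and the three-grade output are assumptions, since the patent does not fix a network design.

```python
import numpy as np
import torch
import torch.nn as nn

def stack_feature_maps(length_map, width_map, area_map, thickness_map, oil_map, damage_map):
    """Concatenate all per-leaf maps into one H x W x 22 training sample."""
    planes = [length_map[..., None], width_map[..., None], area_map[..., None],
              thickness_map, oil_map, damage_map[..., None]]
    return np.concatenate([p.astype(np.float32) for p in planes], axis=-1)

class LeafGrader(nn.Module):
    """Minimal CNN head for illustration only; the patent fixes no architecture."""
    def __init__(self, in_channels=22, num_grades=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_grades)

    def forward(self, x):  # x: N x C x H x W
        return self.classifier(self.features(x).flatten(1))

# sample = stack_feature_maps(...)                      # H x W x 22
# x = torch.from_numpy(sample).permute(2, 0, 1)[None]   # 1 x 22 x H x W
# logits = LeafGrader()(x)                              # grade scores for one leaf
```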
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. A deep learning training method for grading flue-cured tobacco leaves is characterized by comprising the following steps:
acquiring a tobacco leaf image, and removing impurities;
acquiring part (growth position) information of the tobacco leaves, including a length variation map, a width variation map, an area map and a thickness variation map of the tobacco leaves;
acquiring oil content information of tobacco leaves, including an oil content distribution map of the tobacco leaves;
acquiring integrity information of tobacco leaves, including a damage coefficient map of the tobacco leaves;
and respectively taking the part information, the oil content information and the integrity information of the tobacco leaves as training data to train the convolutional neural network.
2. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: and after the tobacco leaf image is obtained, removing the background image outside the tobacco leaf in the image so as to remove impurity information in the tobacco leaf image.
3. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: an area map of the tobacco leaf is calculated from the r-channel binary map and the b-channel binary map of the tobacco leaf image:
img_leaf = (img_r_orig - img_b_orig > 30) × 255
where img_leaf denotes the area map of the leaf, img_r_orig denotes the r-channel binary map of the leaf image, and img_b_orig denotes the b-channel binary map of the leaf image.
4. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: the centroid of the tobacco leaf in the image is taken as the center of a Gaussian distribution, the length value and the width value of the leaf are used in turn as the Gaussian radius, and each individual scalar is converted into a two-dimensional image, so that the length variation map and the width variation map of the leaf are calculated by the formula
g(x, y) = exp(-((x - x0)^2 + (y - y0)^2) / (2σ^2))
where σ denotes the Gaussian kernel (here the length value or the width value), g is the Gaussian function, (x, y) are pixel coordinates, and (x0, y0) is the centroid of the tobacco leaf in the image.
5. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: the tobacco leaf image is split into its R, G and B channels, and for each channel a binary map, a median map, a mode map, a maximum map and a minimum map are calculated:
(img_r, img_g, img_b) = split(img)
img_r_mid = median(img_r)
img_r_mode = mode(img_r)
img_r_min = min(img_r)
img_r_max = max(img_r)
where img denotes the original tobacco leaf image; img_r, img_g and img_b denote the binary maps of the r, g and b channels; img_r_mid, img_r_mode, img_r_min and img_r_max denote the median, mode, minimum and maximum maps of the r channel; the median, mode, minimum and maximum maps of the g and b channels are calculated in the same way; the 15 two-dimensional maps thus obtained are combined into one image to give a 15-channel image feature map, which is used as the thickness variation map of the tobacco leaf.
6. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: an image acquisition device is arranged above and below a transparent image-acquisition region holding the tobacco leaf, a planar light source is arranged above the leaf to obtain a lit image of the leaf, the lit image is split into its R, G and B channels, and the oil distribution map of the leaf is obtained from the per-channel maps as follows:
img_s1 = img_r ÷ img_b
img_s2 = img_b ÷ img_g
img_s3 = img_g ÷ img_r
where img_s1 denotes the red-blue feature map, img_s2 the blue-green feature map, img_s3 the green-red feature map, and img_r, img_g and img_b the binary maps of the r, g and b channels.
7. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: the damage coefficient of the tobacco leaf is calculated by calculating the hole area inside the tobacco leaf and the area of the whole tobacco leaf in the tobacco leaf image, and the damage coefficient is converted into a damage coefficient graph of the tobacco leaf by Gaussian distribution.
8. The deep learning training method for flue-cured tobacco leaf grading according to claim 1, characterized in that: the length variation map, width variation map, area map, thickness variation map, oil distribution map and damage-coefficient map of the tobacco leaf are fed into a convolutional neural network to train the convolutional features:
img_all_result = [img_r_mid, img_g_mid, ..., img_s1, img_s2, img_s3]
where img_all_result denotes the combination of the individual maps into one image.
CN202210307372.3A, filed 2022-03-25 (priority 2022-03-25): Deep learning training method for flue-cured tobacco leaf grading; status: Pending; published as CN114677351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210307372.3A CN114677351A (en) 2022-03-25 2022-03-25 Deep learning training method for flue-cured tobacco leaf grading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210307372.3A CN114677351A (en) 2022-03-25 2022-03-25 Deep learning training method for flue-cured tobacco leaf grading

Publications (1)

Publication Number Publication Date
CN114677351A true CN114677351A (en) 2022-06-28

Family

ID=82076285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210307372.3A Pending CN114677351A (en) 2022-03-25 2022-03-25 Deep learning training method for flue-cured tobacco leaf grading

Country Status (1)

Country Link
CN (1) CN114677351A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117589767A (en) * 2024-01-18 2024-02-23 北京香田智能科技有限公司 Tobacco leaf harvesting time determining method, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN113239954B (en) Attention mechanism-based image semantic segmentation feature fusion method
CN110400275B (en) Color correction method based on full convolution neural network and characteristic pyramid
CN115326809B (en) Tunnel lining apparent crack detection method and detection device
CN108491863A (en) Color image processing method based on Non-negative Matrix Factorization and convolutional neural networks
CN114677351A (en) Deep learning training method for flue-cured tobacco leaf grading
CN114511567B (en) Tongue body and tongue coating image identification and separation method
CN111008642A (en) High-resolution remote sensing image classification method and system based on convolutional neural network
Smith et al. Classification of archaeological ceramic fragments using texture and color descriptors
CN113505856A (en) Hyperspectral image unsupervised self-adaptive classification method
CN110503051B (en) Precious wood identification system and method based on image identification technology
CN109816629B (en) Method and device for separating moss based on k-means clustering
CN115984862A (en) Deep learning-based remote water meter digital identification method
CN109299295B (en) Blue printing layout database searching method
CN106649611B (en) Image retrieval method based on neighborhood rotation right-angle mode
CN116664431B (en) Image processing system and method based on artificial intelligence
Al Sasongko et al. Application of Gray Scale Matrix Technique for Identification of Lombok Songket Patterns Based on Backpropagation Learning
CN110070626A (en) A kind of three-dimension object search method based on multi-angle of view classification
CN111160257B (en) Monocular face in-vivo detection method stable to illumination transformation
CN116721345A (en) Morphology index nondestructive measurement method for pinus massoniana seedlings
CN111563536B (en) Bamboo strip color self-adaptive classification method based on machine learning
CN103871084A (en) Method for recognizing patterns of blueprint cloth
CN113724255A (en) Counting method for abalones in seedling raising period
CN106815314A (en) Image search method based on amplitude phase hybrid modeling
CN117876879B (en) Kiwi flower identification method based on spatial domain and frequency domain feature fusion
CN108090504A (en) Object identification method based on multichannel dictionary

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination