CN114549485A - Stem detection method based on X-ray vision - Google Patents

Stem detection method based on X-ray vision

Info

Publication number
CN114549485A
Authority
CN
China
Prior art keywords
stem
detection
ray
image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210179428.1A
Other languages
Chinese (zh)
Inventor
李超
王岩
姚建松
饶小燕
吴雅琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Center Line Electronic Technology Co ltd
Original Assignee
Henan Center Line Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Center Line Electronic Technology Co., Ltd.
Priority to CN202210179428.1A
Publication of CN114549485A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 - Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/24 - Classification techniques
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10116 - X-ray image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a stem detection method based on X-ray vision, comprising the following steps: randomly selecting cigarettes of different brands as detection objects and irradiating them with X-ray equipment to obtain the corresponding cigarette perspective images; generating several sets of pseudo-labeled samples from the cigarette perspective images with a generative adversarial network, and screening the pseudo-labeled samples according to a screening index to determine the final expanded labeled samples; acquiring manually labeled samples of the detection objects, pre-training a preset stem classification network on the expanded labeled samples, and fine-tuning the trained network with the manually labeled samples; and performing stem detection on the tested cigarette samples with the trained stem classification network. The method solves the problems of low efficiency and low accuracy in stem detection of existing cigarette products, improves stem detection efficiency, and reduces the false detection and missed detection rates of tobacco stem detection.

Description

Stem detection method based on X-ray vision
Technical Field
The invention relates to the technical field of cigarette product detection, in particular to a stem detection method based on X-ray vision.
Background
Owing to the particular physical properties of tobacco and to processing factors, stems of relatively large width or length readily appear during shredding. A stem stick is a piece of tobacco stem in the cut tobacco, similar in shape to a toothpick, that has not expanded or whose expansion does not meet rolling requirements. Stem sticks in cigarettes add off-flavors and irritation, can puncture the cigarette or cause air leakage, and during smoking can cause burst seams or extinguishing at the burning end, impairing the cigarette's combustibility and sensory quality. They also increase cigarette weight deviation during production and processing, degrade the stability of the cigarettes' physical indexes, hinder quality control and normal equipment operation, and affect equipment efficiency and material consumption indexes. The current means of detecting stems in cigarettes is generally manual spot checking: cigarettes are cut open one by one with a blade, the cut tobacco is peeled off, and the stems are then detected by the naked eye. This method is inefficient and, because it depends on human judgment, inaccurate. How to detect stem-containing cigarettes automatically and accurately, so as to improve stem detection efficiency and reduce the false detection and missed detection rates of tobacco stem detection, is therefore of great significance.
Disclosure of Invention
The invention provides a stem detection method based on X-ray vision, which solves the problems of low efficiency and low accuracy in stem detection of existing cigarette products, improves stem detection efficiency, and reduces the false detection and missed detection rates of tobacco stem detection.
To achieve this object, the invention provides the following technical solution:
an X-ray vision-based stem detection method comprises the following steps:
randomly selecting cigarettes of different brands as detection objects, and carrying out X-ray irradiation on the detection objects by utilizing X-ray equipment to obtain corresponding cigarette perspective images;
generating several sets of pseudo-labeled samples from the cigarette perspective images with a generative adversarial network, and screening the pseudo-labeled samples according to a screening index to determine the final expanded labeled samples;
acquiring manually labeled samples of the detection object, inputting the expanded labeled samples into a preset stem classification network for pre-training, and fine-tuning the trained network with the manually labeled samples;
and carrying out stem detection on the tested cigarette sample by utilizing the trained stem classification network.
Preferably, the method further comprises the following steps:
taking the overall classification accuracy as an evaluation index of the trained stem classification network, where the overall classification accuracy is calculated as

$$OA = \frac{z}{Z},$$

where $OA$ is the overall classification accuracy, $Z$ is the total number of samples, and $z$ is the number of correctly classified samples.
Preferably, the method further comprises the following steps:
taking the screening index as an evaluation index of the trained stem classification network, where the screening index is calculated as $SDF_n = \alpha\,NFID_n + \beta\,TR_n$, $n \in [0, N]$, where $SDF_n$ is the evaluation score of the $n$-th set of generated pseudo-labeled samples, $NFID_n \in [0,1]$ is the normalized FID score, $TR_n \in [0,1]$ is the normalized training evaluation score, $\alpha$ and $\beta$ are the weights of $NFID_n$ and $TR_n$ respectively, and $\alpha + \beta = 1$.
Preferably, a SinGAN model based on an improved loss function is used to generate the sets of pseudo-labeled samples, and sample training is performed on the SinGAN model, where the loss function of the discriminator of the SinGAN model is:

$$L_{D_n} = \mathbb{E}\big[D_n(\tilde{x}_n)\big] - \mathbb{E}\big[D_n(x_n)\big] + \mu\,\mathbb{E}_{\hat{x}\sim\chi}\Big[\big(\lVert \nabla_{\hat{x}} D_n(\hat{x}) \rVert_2 - 1\big)^2\Big],$$

and the loss function of the generator of the SinGAN model is:

$$L_{G_n} = -\mathbb{E}\big[D_n(\tilde{x}_n)\big] + \lambda_{rec}\,\big\lVert G_n\big(z^*, \tilde{x}^{rec}_{n+1}\big) - x_n \big\rVert^2 - \lambda_{div}\,L_{div},$$

where $G_n$ is the $n$-th generator; $\chi$ is the joint sampling space of $x_n$ and $\tilde{x}_n$; the third term of $L_{D_n}$ is the gradient penalty term with weight coefficient $\mu$; $\tilde{x}^{rec}_{n+1}$ is the pseudo image generated by the $(n+1)$-th generator; $\tilde{x}_n$ is the pseudo image generated by the $n$-th generator; $x_n$ is the corresponding real image at each scale; $z^*$ is a random value selected before training; and $L_{div}$ is the ratio of the distance between generated images to the distance between noise inputs, with $\lambda_{rec}$ and $\lambda_{div}$ the corresponding weight coefficients.
Preferably, screening the pseudo-labeled samples according to the screening index comprises:

after training, the SinGAN model generates $N+1$ sets of pseudo images $\{PS_N, \dots, PS_n, \dots, PS_0\}$, from which the pseudo-labeled samples are generated;

the authenticity and diversity of the pseudo-labeled samples are evaluated according to the selected screening index $SDF_n$, and the quality of the generated images is evaluated with the formula

$$FID_n = \big\lVert F_{RS} - F_{PS_n} \big\rVert^2 + \mathrm{Tr}\Big(C_{F_{RS}} + C_{F_{PS_n}} - 2\big(C_{F_{RS}} C_{F_{PS_n}}\big)^{1/2}\Big),$$

where $F_{RS}$ and $F_{PS_n}$ denote the means of the feature vectors of the real images $RS$ and the $n$-th set of generated images $PS_n$ respectively, $C_{F_{RS}}$ and $C_{F_{PS_n}}$ denote the covariance matrices computed from the feature vectors of $RS$ and $PS_n$ respectively, and $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix.
Preferably, performing stem detection on the tested cigarette sample with the trained stem classification network comprises:

classifying the target sample with a Softmax classifier that uses a focal loss function, and determining the loss value according to the formula $FL(p_i) = -\alpha_i (1 - p_i)^{\gamma} \log(p_i)$, where $FL(p_i)$ is the loss value, $p_i$ is the probability that the model predicts a stem in the sample, $\gamma$ is a hyperparameter controlling the "focus", and $\alpha_i$ controls the "contribution" of positive and negative samples to the total loss.
Preferably, the method further comprises the following steps:
selecting cigarettes of different brands as detection objects, making the collected data into a data set, and verifying the trained stem classification network on the data set to judge whether the final stem detection rate reaches a set threshold; if so, the training of the stem classification network is qualified.
Preferably, the step of creating the data set comprises:
firstly, irradiating the cigarettes with X-ray equipment to obtain their perspective images and labeling them manually; 100 perspective images are collected for each brand (50 containing stems and 50 without), giving 2000 images in total;
then, the trained SinGAN model is used to expand the stem-containing and stem-free images of each class at training rates of 20% and 50%, with the remaining 80% and 50% used for subsequent testing;
with an image expansion ratio of 1:20, at a 20% training rate the expanded data set comprises 20 classes of 400 images each, 8000 images in total;
at a 50% training rate, the expanded data set comprises 20 classes of 1000 images each, 20000 images in total.
Preferably, irradiating the detection object with X-ray equipment and obtaining the corresponding cigarette perspective image comprises:
performing X-ray transmission imaging, based on the different X-ray transmission imaging characteristics of cut tobacco and stem sticks, to form a black-and-white X-ray image;
preprocessing the black-and-white X-ray image with noise filtering to remove background noise;
segmenting the noise-filtered black-and-white X-ray image into tobacco stem pixels and background pixels with a region growing method;
obtaining the degree of membership of each segmented tobacco stem pixel to a tobacco stem center with a fuzzy C-means clustering algorithm, so as to filter out interference information and determine attribution;
after the fuzzy C-means processing, performing shape analysis on the clustered pixels and calculating the area and aspect ratio of each segmented region for shape recognition.
The invention provides a stem detection method based on X-ray vision in which data are collected from the tested cigarettes with X-ray equipment and labeled manually; the data are then expanded with a generative adversarial network, the generated samples are screened with the proposed screening index to determine the final expanded samples, the expanded samples are used to train the classification network, and the trained network is further fine-tuned with the real samples. The method solves the problems of low efficiency and low accuracy in stem detection of existing cigarette products, improves stem detection efficiency, and reduces the false detection and missed detection rates of stem detection.
Drawings
In order to more clearly describe the specific embodiments of the present invention, the drawings to be used in the embodiments will be briefly described below.
Fig. 1 is a schematic diagram of the stem detection method based on X-ray vision provided by the invention.
Fig. 2 is a schematic view of a stem detection process provided by the present invention.
Detailed Description
To help those skilled in the art better understand the solution of the embodiments of the invention, the embodiments are described in further detail below with reference to the drawings and implementations.
Aiming at the low efficiency and low accuracy of current in-cigarette stem detection, the invention provides a stem detection method based on X-ray vision that solves these problems for existing cigarette products, improves stem detection efficiency, and reduces the false detection and missed detection rates of tobacco stem detection.
As shown in fig. 1 and fig. 2, an X-ray vision-based stem detection method includes:
s1: the method comprises the steps of randomly selecting cigarettes of different brands as detection objects, carrying out X-ray irradiation on the detection objects by utilizing X-ray equipment, and obtaining corresponding cigarette perspective images.
S2: and generating a plurality of groups of pseudo-labeled samples from the cigarette perspective images by using a generating countermeasure network, and screening the pseudo-labeled samples according to screening indexes to determine the final expanded labeled samples.
S3: and acquiring an artificial labeling sample of the detection object, inputting the expanded labeling sample into a preset stem label classification network for pre-training, and adjusting the training network by using the artificial labeling sample.
S4: and carrying out stem detection on the tested cigarette sample by utilizing the trained stem classification network.
Specifically, cigarettes of different brands are randomly selected as the objects to be inspected. First, the cigarettes are imaged one by one with industrial X-ray equipment to obtain the original perspective images, which are numbered individually; the cigarettes are then stripped manually to check whether they contain stems, and the perspective image of each cigarette is labeled accordingly. Because raw data acquisition is time- and labor-consuming, a batch of pseudo-labeled samples usable for subsequent network training is generated from the raw images with a Generative Adversarial Network (GAN); the generated data are then screened with the proposed screening method to determine the final expanded samples; finally, the screened expanded samples are used to train the subsequent stem classification network. The method solves the problems of low efficiency and low accuracy in stem detection of existing cigarette products, improves detection efficiency, and reduces the false detection and missed detection rates of stem detection.
The method further comprises: taking the overall classification accuracy as an evaluation index of the trained stem classification network, where the overall classification accuracy is calculated as

$$OA = \frac{z}{Z},$$

where $OA$ is the overall classification accuracy, $Z$ is the total number of samples, and $z$ is the number of correctly classified samples.
The method further comprises: taking the screening index as an evaluation index of the trained stem classification network, where the screening index is calculated as $SDF_n = \alpha\,NFID_n + \beta\,TR_n$, $n \in [0, N]$, where $SDF_n$ is the evaluation score of the $n$-th set of generated pseudo-labeled samples, $NFID_n \in [0,1]$ is the normalized FID score, $TR_n \in [0,1]$ is the normalized training evaluation score, $\alpha$ and $\beta$ are the weights of $NFID_n$ and $TR_n$ respectively, and $\alpha + \beta = 1$.
Further, a SinGAN model based on an improved loss function is used to generate the sets of pseudo-labeled samples, and sample training is performed on the SinGAN model, where the loss function of the discriminator of the SinGAN model is:

$$L_{D_n} = \mathbb{E}\big[D_n(\tilde{x}_n)\big] - \mathbb{E}\big[D_n(x_n)\big] + \mu\,\mathbb{E}_{\hat{x}\sim\chi}\Big[\big(\lVert \nabla_{\hat{x}} D_n(\hat{x}) \rVert_2 - 1\big)^2\Big],$$

and the loss function of the generator of the SinGAN model is:

$$L_{G_n} = -\mathbb{E}\big[D_n(\tilde{x}_n)\big] + \lambda_{rec}\,\big\lVert G_n\big(z^*, \tilde{x}^{rec}_{n+1}\big) - x_n \big\rVert^2 - \lambda_{div}\,L_{div},$$

where $G_n$ is the $n$-th generator; $\chi$ is the joint sampling space of $x_n$ and $\tilde{x}_n$; the third term of $L_{D_n}$ is the gradient penalty term with weight coefficient $\mu$; $\tilde{x}^{rec}_{n+1}$ is the pseudo image generated by the $(n+1)$-th generator; $\tilde{x}_n$ is the pseudo image generated by the $n$-th generator; $x_n$ is the corresponding real image at each scale; $z^*$ is a random value selected before training; and $L_{div}$ is the ratio of the distance between generated images to the distance between noise inputs, with $\lambda_{rec}$ and $\lambda_{div}$ the corresponding weight coefficients.
In practical application, SinGAN is an unconditional generative model that can be learned from a single natural image: it captures the internal patch distribution of the image and generates high-quality, varied samples with the same visual content. Unlike a traditional GAN with a single generator (G) and discriminator (D), SinGAN has several generators and discriminators and can be regarded as a cascade of GANs arranged as a pyramid. Each GAN learns the distribution of the image at a different scale, so new samples of arbitrary size and aspect ratio can be generated; these samples vary significantly while preserving the overall structure and fine texture of the training image. Unlike previous single-image GAN schemes, the method is not limited to texture images and is unconditional (i.e., it generates samples from noise). Also, unlike other GANs that apply to only a single task, SinGAN can be applied to image generation, image segmentation, super-resolution, paint-to-image conversion, image editing, image harmonization, and similar tasks. SinGAN generates from coarse to fine, bottom to top. All generators and discriminators share the same structure, consisting of 5 blocks of 3 × 3 full convolutions, so both G and D have 11 × 11 receptive fields; this identical receptive-field setting lets the GAN at each layer attend to the overall layout of the image and the global structure of the target.
Learning the distribution of an image from a single image is SinGAN's most important feature compared with other GAN models. Because SinGAN is a pyramid, training proceeds layer by layer from bottom to top; once the GAN of a layer has been trained, it is fixed and its network parameters no longer change.
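As an illustration of this layout, a single-scale discriminator might look as follows in PyTorch. This is a minimal sketch, not the patent's code: the channel width (32), the padding, and the single-channel X-ray input are assumptions.

```python
import torch.nn as nn

class ConvBlock(nn.Sequential):
    """Conv-BN-LeakyReLU block; five stacked 3x3 blocks give the
    11x11 receptive field described for SinGAN's G and D."""
    def __init__(self, in_ch, out_ch):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
        )

class Discriminator(nn.Module):
    """Sketch of a single-scale SinGAN discriminator D_n: five fully
    convolutional 3x3 blocks ending in a 1-channel patch score map."""
    def __init__(self, channels=32):
        super().__init__()
        self.head = ConvBlock(1, channels)   # assumed 1-channel X-ray input
        self.body = nn.Sequential(
            *[ConvBlock(channels, channels) for _ in range(3)])
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.tail(self.body(self.head(x)))
```

Because the network is fully convolutional, the same discriminator can score images of any size at its scale, which is what allows SinGAN to generate samples of arbitrary size and aspect ratio.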
The generators $\{G_0, \dots, G_N\}$ are paired with discriminators $\{D_0, \dots, D_N\}$; the discriminator of each layer distinguishes the real image $x_n$, obtained by downsampling the original image $x$, from the pseudo image $\tilde{x}_n$ produced by the corresponding generator. The loss function of discriminator $D_n$ is the WGAN-GP (Wasserstein GAN with Gradient Penalty) loss, which increases the stability of network training, as shown in (1):

$$L_{D_n} = \mathbb{E}\big[D_n(\tilde{x}_n)\big] - \mathbb{E}\big[D_n(x_n)\big] + \mu\,\mathbb{E}_{\hat{x}\sim\chi}\Big[\big(\lVert \nabla_{\hat{x}} D_n(\hat{x}) \rVert_2 - 1\big)^2\Big]; \quad (1)$$

where $D_n(x_n)$ is the discriminator's output for the real image $x_n$, $D_n(\tilde{x}_n)$ is its output for the generated image, $\chi$ is the joint sampling space of $x_n$ and $\tilde{x}_n$, the third term is the gradient penalty term, and $\mu$ is a weight coefficient. To address the over-concentration of parameters caused by weight clipping in WGAN, and the gradient explosion and vanishing during training, WGAN-GP penalizes the gradient: a threshold is set, and a penalty is applied when the sample gradient deviates from it. This effectively solves these problems and stabilizes GAN training.
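The gradient penalty term of equation (1) can be sketched in PyTorch as follows. This is a hedged sketch of the standard WGAN-GP penalty, not the patent's exact code; the interpolation scheme for sampling $\hat{x}$ from $\chi$ and the default value of `mu` are assumptions.

```python
import torch

def gradient_penalty(D, real, fake, mu=0.1):
    """WGAN-GP penalty: interpolate between real and fake samples
    (the joint space chi) and penalize gradients whose L2 norm
    deviates from 1. `mu` is the weight coefficient (assumed value)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    return mu * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```

The full discriminator loss of equation (1) is then `D(fake).mean() - D(real).mean()` plus this penalty.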
The generator loss of $G_n$ also contains a term called the reconstruction loss, whose purpose is that a specific set of input noise should make the final output image reproduce the original image, thereby improving training stability. A particular random noise is chosen here:

$$z^{rec} = \{z^*_N, 0, \dots, 0\}; \quad (2)$$

where $z^*$ is a value randomly chosen before training and never changed afterwards.
The reconstruction loss of $G_n$ is thus:

$$L_{rec} = \big\lVert G_n\big(z^{rec}, \tilde{x}^{rec}_{n+1}\big) - x_n \big\rVert^2; \quad (3)$$

where $\tilde{x}^{rec}_{n+1}$ is the pseudo image generated by the $(n+1)$-th generator using the fixed noise above, and $x_n$ is the corresponding real image at each scale.
The generator takes noise as input, and the noise does not change once selected. Note also that a GAN is highly prone to mode collapse when generating images, producing images of only a few classes. In terms of the distribution, the sample data of those classes are widely distributed with large peaks, while the other classes are the opposite; most generated data then fall on the widely distributed classes, reducing the diversity of the generated samples. To further increase diversity, the method therefore adds a regularization term to the generator design. Intuitively, the term increases the diversity of the generated images by maximizing the ratio of the distance between generated images to the distance between the noise inputs: since the distance between the noises is fixed, maximizing the ratio directly pulls apart the generated images, forcing generated data onto the classes with small peaks and narrow ranges. It is defined as follows:
$$L_{div} = \frac{d\big(G(z_1), G(z_2)\big)}{d(z_1, z_2)}; \quad (4)$$

where $z_1, z_2$ are different samples of the same noise space, $G(\cdot)$ denotes the generated pseudo samples, and $d(\cdot)$ denotes distance.
The final loss function of generator $G_n$ is therefore:

$$L_{G_n} = -\mathbb{E}\big[D_n(\tilde{x}_n)\big] + \lambda_{rec} L_{rec} - \lambda_{div} L_{div}. \quad (5)$$
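A sketch of this combined objective in PyTorch follows. It assumes a generator signature `G(z, prev_up)` taking noise plus the upsampled output of the previous scale, uses mean absolute difference as the distance `d(.)`, and picks illustrative weights `lam_rec` and `lam_div`; none of these specifics are fixed by the patent.

```python
import torch

def generator_loss(D, G, x_n, z1, z2, z_rec, x_rec_up,
                   lam_rec=10.0, lam_div=1.0):
    """Sketch of the improved generator objective (equation (5)):
    adversarial term, reconstruction term (3), and the diversity
    regularizer L_div of equation (4)."""
    fake1, fake2 = G(z1, x_rec_up), G(z2, x_rec_up)
    adv = -D(fake1).mean()                          # fool D_n
    rec = ((G(z_rec, x_rec_up) - x_n) ** 2).mean()  # reproduce x_n
    # maximize pairwise image distance relative to noise distance
    l_div = (fake1 - fake2).abs().mean() / ((z1 - z2).abs().mean() + 1e-8)
    return adv + lam_rec * rec - lam_div * l_div
```

Subtracting `l_div` in the minimized loss is what maximizes the image-to-noise distance ratio and spreads the generated samples apart.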
the generation effect comparison before and after the SinGAN is improved shows that the image generated by the original SinGAN is not different from the real image, which indicates that the generated image has high authenticity but low diversity; the thickness, length and stem shape of the pseudo-labeled sample generated by the improved SinGAN are obviously changed, which shows that the improved SinGAN can generate a sample with higher diversity, and the robustness of a subsequent network is improved.
Further, screening the pseudo-labeled samples according to the screening index includes:

after training, the SinGAN model generates $N+1$ sets of pseudo images $\{PS_N, \dots, PS_n, \dots, PS_0\}$, from which the pseudo-labeled samples are generated;

the authenticity and diversity of the pseudo-labeled samples are evaluated according to the selected screening index $SDF_n$, and the quality of the generated images is evaluated with the formula

$$FID_n = \big\lVert F_{RS} - F_{PS_n} \big\rVert^2 + \mathrm{Tr}\Big(C_{F_{RS}} + C_{F_{PS_n}} - 2\big(C_{F_{RS}} C_{F_{PS_n}}\big)^{1/2}\Big),$$

where $F_{RS}$ and $F_{PS_n}$ denote the means of the feature vectors of the real images $RS$ and the $n$-th set of generated images $PS_n$ respectively, $C_{F_{RS}}$ and $C_{F_{PS_n}}$ denote the covariance matrices computed from the feature vectors of $RS$ and $PS_n$ respectively, and $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix.
In practical application, SinGAN generates $N+1$ sets of pseudo images after training, namely $\{PS_N, \dots, PS_n, \dots, PS_0\}$. However, not every set of pseudo images is suitable for model training. The method therefore proposes a quantitative screening index $SDF \in [0,1]$ for improved pseudo-sample screening, which jointly evaluates the authenticity and diversity of the pseudo samples:

$$SDF_n = \alpha\,NFID_n + \beta\,TR_n, \quad n \in [0, N]; \quad (6)$$

where $SDF_n$ is the evaluation score of $PS_n$, $NFID_n \in [0,1]$ and $TR_n \in [0,1]$ are the normalized FID (Fréchet Inception Distance) score and the training evaluation score of $PS_n$ respectively, and $\alpha$ and $\beta$ are the weights of $NFID_n$ and $TR_n$, satisfying $\alpha + \beta = 1$. To give $NFID_n$ and $TR_n$ equal contributions to $SDF_n$, both $\alpha$ and $\beta$ are set to 0.5.
$NFID_n$ in the formula is the normalized version of $FID_n$:

$$NFID_n = \frac{\min_{m \in [0,N]} FID_m}{FID_n}; \quad (7)$$

where $FID_n$ is the FID score of $PS_n$, and $\min(\cdot)$ denotes the minimization operation, used because the quality of the pseudo samples is inversely proportional to the FID score.
FID is a metric proposed in 2017 to evaluate the quality of generated images, used specifically to evaluate the performance of generative adversarial networks. Owing to its measurement properties, FID captures both the authenticity and the diversity of generated images. The formula is:

$$FID_n = \big\lVert F_{RS} - F_{PS_n} \big\rVert^2 + \mathrm{Tr}\Big(C_{F_{RS}} + C_{F_{PS_n}} - 2\big(C_{F_{RS}} C_{F_{PS_n}}\big)^{1/2}\Big); \quad (8)$$

where $F_{RS}$ and $F_{PS_n}$ denote the means of the feature vectors of the real images $RS$ and the $n$-th set of generated images $PS_n$, $C_{F_{RS}}$ and $C_{F_{PS_n}}$ denote the covariance matrices computed from those feature vectors, and $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix. The feature vectors are extracted by an Inception V3 network pre-trained on the ImageNet data set.
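A direct NumPy/SciPy sketch of equation (8) is shown below. It assumes the Inception V3 features have already been extracted into `(num_images, dim)` arrays, which is the standard way FID is computed.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Fréchet Inception Distance of equation (8) from feature arrays
    for the real set RS and a generated set PS_n."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c_r @ c_f)        # matrix square root
    if np.iscomplexobj(covmean):             # trim numerical imaginary parts
        covmean = covmean.real
    return float(((mu_r - mu_f) ** 2).sum()
                 + np.trace(c_r + c_f - 2.0 * covmean))
```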
Although FID can evaluate the quality of the generated images directly, by computing the distance between the generated and real images from the images themselves, it does not evaluate the generated samples from the standpoint of improving training quality and the model's classification accuracy, and improving the model's training performance is the core motivation for generating large numbers of pseudo samples. Therefore $TR_n$ is combined with FID for the evaluation of the generated samples. $TR_n$ consists of two parts:

$$TR_n = \lambda\, NSIM_n + \eta\, NDIV_n; \quad (9)$$

where $SIM_n$ denotes the similarity between $RS$ and $PS_n$, $DIV_n$ denotes the diversity of $PS_n$ relative to $RS$, $NSIM_n \in [0,1]$ and $NDIV_n \in [0,1]$ are the normalized versions of $SIM_n$ and $DIV_n$ respectively, and $\lambda$ and $\eta$ are the weights of $NSIM_n$ and $NDIV_n$, with $\lambda + \eta = 1$. Since the authenticity of the pseudo samples is as important as their diversity, both $\lambda$ and $\eta$ are set to 0.5.
Since the generated samples are ultimately used for model training, their quality is also considered from the perspective of model training. If the pseudo samples resemble the real samples, then a deep neural network trained on real samples and tested on the pseudo samples will obtain a high score, no worse than the score obtained when testing on real samples; conversely, if the diversity of the pseudo samples is low, they cannot fully cover the data distribution of the real samples, and a deep neural network trained on the pseudo samples cannot achieve high classification accuracy when tested on real samples, i.e., $DIV_n$ is very low. $SIM_n$ and $DIV_n$ are calculated as:

$$SIM_n = OA\big(DNN(RS),\, PS_n\big), \qquad DIV_n = OA\big(DNN(PS_n),\, RS\big); \quad (10)$$

where $DNN(RS)$ and $DNN(PS_n)$ denote deep neural networks (DNNs) trained on $RS$ and $PS_n$ respectively, $OA(DNN(RS), PS_n)$ denotes the result of testing the trained network $DNN(RS)$ on $PS_n$, and $OA(DNN(PS_n), RS)$ denotes the result of testing $DNN(PS_n)$ on $RS$.
Notably, since the original value ranges of $FID_n$, $SIM_n$, and $DIV_n$ differ, the normalization operation is necessary; this is the reason for the normalizations in formulas (7) and (9), which ensure final value ranges of $[0,1]$. Likewise, $\alpha + \beta = 1$ and $\lambda + \eta = 1$ ensure that $SDF_n$ and $TR_n$ are limited to $[0,1]$.
Finally, the optimal pseudo-sample set $PS_j$ is determined by the size of the $SDF_n$ scores; the larger $SDF_n$, the better the pseudo-sample quality:

$$j = \arg\max_{n \in [0,N]} SDF_n. \quad (11)$$
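The whole screening step can be sketched as follows. The min-based NFID normalization follows equation (7), while the max-based normalization of SIM and DIV is an assumption (the patent only states that normalized versions are used); the default weights match the 0.5 settings above.

```python
import numpy as np

def select_pseudo_set(fid_scores, sim_scores, div_scores,
                      alpha=0.5, beta=0.5, lam=0.5, eta=0.5):
    """Rank the N+1 pseudo-sample sets by SDF_n (equations (6)-(11))
    and return the index of the best set plus all scores."""
    fid = np.asarray(fid_scores, dtype=float)
    nfid = fid.min() / fid                  # lower FID -> higher score, eq. (7)
    sim = np.asarray(sim_scores, dtype=float)
    div = np.asarray(div_scores, dtype=float)
    nsim = sim / sim.max()                  # assumed normalization to [0, 1]
    ndiv = div / div.max()
    tr = lam * nsim + eta * ndiv            # equation (9)
    sdf = alpha * nfid + beta * tr          # equation (6)
    return int(sdf.argmax()), sdf           # equation (11)
```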
further, the stalk detection is carried out to the cigarette sample of test to the stalk classification network after utilizing the training, include:
predictively classifying the target sample by a Softmax classifier containing a focus loss function and according to the formula FL (p)i)=-αi(1-pi)γlog(pi) Determining a loss function value, wherein FL (p)i) For the value of the loss function, piRepresenting the probability of the model predicting the presence of stem in the sample, gamma representing the hyper-parameter controlling "focus", alphaiRepresenting the "contribution" of the control positive and negative samples to the total loss.
In practical application, to better handle samples that are difficult to identify, the traditional cross-entropy loss function is replaced with Focal Loss; applying this loss function further improves the accuracy of stem detection:

$$FL(p_i) = -\alpha_i (1 - p_i)^{\gamma} \log(p_i); \quad (12)$$

where $FL(\cdot)$ denotes the loss value, $p_i$ is the probability that the model predicts a stem in the sample, $\gamma$ is a hyperparameter controlling focusing, i.e., steering the model toward hard samples, and $\alpha_i$ controls the contribution of positive and negative samples to the total loss. When $\alpha_i$ down-weights the negative samples, the weight of the positive samples increases, reducing the influence of the negative samples on training and improving the final classification accuracy.
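A PyTorch sketch of equation (12) for multi-class Softmax outputs follows. For simplicity the class-dependent weight α_i is reduced to a single scalar, and the defaults match the experimental setting α_i = 0.25, γ = 2 used later.

```python
import torch
import torch.nn.functional as F

class FocalLoss(torch.nn.Module):
    """Focal loss of equation (12): FL(p) = -alpha * (1 - p)^gamma * log(p)."""
    def __init__(self, alpha=0.25, gamma=2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits, targets):
        # p_i: probability the Softmax output assigns to the true class
        log_p = F.log_softmax(logits, dim=1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-self.alpha * (1 - pt) ** self.gamma * log_pt).mean()
```

The `(1 - pt) ** gamma` factor shrinks the loss of well-classified samples, so training gradients concentrate on the hard, stem-like cases.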
After network training is finished, the network parameters are fixed, and the test samples are fed into the deep stem classification network to perform stem detection.
The method further comprises the following steps: selecting cigarettes of different brands as detection objects, making collected data into a data set, and detecting and verifying the trained stem label classification network according to the data set to judge whether the final detection rate of the stem labels reaches a set threshold value, wherein if yes, the training of the stem label classification network is qualified.
Further, 20 cigarette brands on the market are selected as detection objects, and the collected data are made into a data set XIC-20. The steps of creating the data set are:
firstly, the cigarettes are irradiated with X-ray equipment to obtain their perspective images, which are labeled manually; 100 perspective images are collected for each brand (50 containing stems and 50 without), giving 2000 images in total;
then, the trained SinGAN model is used to expand the stem-containing and stem-free images of each class at training rates of 20% and 50%, with the remaining 80% and 50% used for subsequent testing;
with an image expansion ratio of 1:20, at a 20% training rate the expanded data set comprises 20 classes of 400 images each, 8000 images in total;
at a 50% training rate, the expanded data set comprises 20 classes of 1000 images each, 20000 images in total.
Specifically, the SinGAN parameters use the default settings, i.e., N = 8, so a total of 9 sets of pseudo samples {PS_8, …, PS_0} are generated. The batch size is set to 1, the learning rates of the discriminator and generator are both 0.0005, and the optimizer is Adam. For training ResNet50, the batch size is set to 32, the learning rate is 0.01 for the last layer and 0.001 for all other layers, the optimizer is ASGD, and the Focal Loss hyperparameters follow the original defaults, i.e., α_i = 0.25 and γ = 2.
In the experimental verification stage of the stem detection algorithm fusing the pseudo samples and the improved loss function, the training rates of the data set are consistent with those of the generation stage. The experiment was repeated 10 times at each training rate.
The test workstation is configured with two E5-2650 v4 CPUs (2.2 GHz, 12 × 2 cores in total), 512 GB of memory, and NVIDIA TITAN RTX GPUs (24 GB × 8). PyTorch is selected as the deep learning platform.
Table 1 shows the results of the screening-effectiveness test. The first row of Table 1 lists the 9 sets of pseudo samples {PS_8, …, PS_0}, where PS_8 is generated by the bottom GAN and PS_0 by the top GAN; the second row is the quantitative screening score SDF (equation (6)); the third row is the overall classification accuracy.
TABLE 1
[Table 1 is reproduced as an image in the original: SDF scores and overall classification accuracies for the 9 pseudo-sample sets PS_8 through PS_0.]
Clearly, as shown in Table 1, the higher the SDF value, the higher the corresponding OA value, which directly verifies the effectiveness of the proposed quantitative screening index. Meanwhile, as the generation scale increases, the SDF and OA values both decrease, though not by much, meaning that the quality of the generated images gradually declines. The set of pseudo-labeled samples with the highest score is selected for the subsequent ResNet50 training; that is, the pseudo samples generated by the bottom GAN of the SinGAN, used as the initial GAN, are selected.
Table 2 compares the overall accuracy on the data set; the experimental results of the overall accuracy comparison are shown in Table 2. RS denotes the deep classification network model trained only on real samples, also called the reference method; PS is trained with pseudo samples instead of real samples; and RS + PS trains the deep classification network model with real and pseudo samples jointly. Replacing the traditional cross-entropy losses of RS, PS, and RS + PS with Focal Loss yields RS + FL, PS + FL, and RS + PS + FL respectively, where RS + PS + FL denotes the proposed method.
TABLE 2
[Table 2 is reproduced as an image in the original: overall accuracy of RS, PS, RS + PS and their Focal Loss variants on the data set.]
As the data in Table 2 show, PS outperforms RS overall, indicating that the generated pseudo samples are of good quality and can improve the performance of the deep classification network. Comparing RS + PS with PS shows that combining PS with RS further improves the network's performance. Comparing RS + FL, PS + FL, and RS + PS + FL with RS, PS, and RS + PS demonstrates that Focal Loss can replace the traditional cross-entropy loss function and improve the classification accuracy of the network.
Further, irradiating the detection object with X-ray equipment and obtaining the corresponding cigarette perspective image includes:
performing X-ray transmission imaging, based on the different X-ray transmission imaging characteristics of cut tobacco and stem sticks, to form a black-and-white X-ray image;
preprocessing the black-and-white X-ray image with noise filtering to remove background noise;
segmenting the noise-filtered black-and-white X-ray image into tobacco stem pixels and background pixels with a region growing method;
obtaining the degree of membership of each segmented tobacco stem pixel to a tobacco stem center with a fuzzy C-means clustering algorithm, so as to filter out interference information and determine attribution;
after the fuzzy C-means processing, performing shape analysis on the clustered pixels and calculating the area and aspect ratio of each segmented region for shape recognition.
Specifically, because cut tobacco and stem sticks differ in their X-ray transmission imaging characteristics, X-ray transmission imaging can be used effectively to detect and judge stem-containing cigarettes. The tobacco stem image recognition algorithm is therefore designed mainly for the grayscale image produced by X-ray transmission imaging. It comprises an image segmentation method for the X-ray transmission image of the cigarette, characteristic image parameters for stem recognition, and a classifier algorithm for distinguishing cut tobacco from stems, from which an image recognition method for stems is established. The algorithm consists of four parts: image preprocessing, image segmentation, interference-point attribution judgment, and shape-feature judgment. For image segmentation and attribution judgment, a region growing method (which simulates artificial intelligence) and a fuzzy C-means clustering algorithm (an unsupervised machine learning method) are researched and adopted respectively.
(1) Tobacco stem image preprocessing. The original image acquired by the system contains considerable background noise, which degrades subsequent image segmentation quality and easily causes stem misjudgment. The stem information therefore needs to be reinforced, and various random noises eliminated, through image preprocessing. To determine a good noise-elimination algorithm, mean filtering, adaptive Wiener filtering, median filtering, and morphological filtering were each tested through data simulation; based on the filtering results, a grayscale morphological noise filter was finally adopted, as sketched below.
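A minimal OpenCV sketch of such a grayscale morphological noise filter follows. The opening/closing combination and the 3 × 3 elliptical structuring element are illustrative assumptions, as the patent does not specify the operator sequence or kernel size.

```python
import cv2

def denoise_xray(gray):
    """Grayscale morphological noise filtering: an opening removes
    bright point noise, a closing fills small dark gaps."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```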
(2) Image segmentation by region growing. In the acquired image, the cut tobacco and stems in the cigarette have very similar gray levels and are intermixed; region growing can automatically mark the tobacco stems in the image according to an iteration rule. First, seed pixels are determined from the gray-level distribution of stem imaging; then pixels in the neighborhood of each seed with the same or similar properties are merged into the seed's region, and the new pixels are used as new seeds for iteration, so that pixels of similar properties aggregate into regions. A sketch of this iteration follows.
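This is a hedged sketch of the iteration rule described above; the 4-connected neighborhood and the gray-level tolerance `tol` are assumptions for illustration.

```python
import numpy as np
from collections import deque

def region_grow(gray, seeds, tol=12):
    """Grow regions from seed pixels: absorb 4-connected neighbors whose
    gray value is within `tol` of the current (growing) pixel's value,
    iterating with newly absorbed pixels as new seeds."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(int(gray[nr, nc]) - int(gray[r, c])) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```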
(3) Fuzzy C-means attribution judgment. After the image is divided into tobacco stem pixels and background pixels, many point-like interferences remain, which can cause considerable misjudgment. The interference information is therefore filtered, and attribution judged, with a fuzzy C-means clustering algorithm based on unsupervised machine learning. Fuzzy C-means applies fuzzy theory to the analysis and modeling of the data and establishes an uncertainty description of the sample categories. Because stem information is complex, some stems appear fragmented in the segmented image, and a traditional connectivity-labeling algorithm would mis-measure their shape or miss them. Fuzzy C-means yields the degree of membership of each segmented stem pixel to each stem center, so several broken fragments can be determined to belong to one stem, effectively increasing the recognition rate. A membership sketch follows.
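The sketch below shows the fuzzy C-means membership computation, i.e., how each stem pixel receives a degree of membership to each candidate stem center; the fuzziness exponent m = 2 is the common default, not a value stated in the patent.

```python
import numpy as np

def fcm_membership(points, centers, m=2.0):
    """Fuzzy C-means membership update: u[i, k] is the degree of membership
    of pixel i to stem center k, so broken fragments with high membership
    to the same center are attributed to one stem."""
    # pairwise distances: (n_points, n_centers)
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-10)                    # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)           # rows sum to 1
```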
(4) Shape recognition, added to the algorithm to further reduce the false detection rate of tobacco stems. After the fuzzy C-means processing, the shape of the clustered pixels is analyzed, and shape factors such as the area of the segmented region and its aspect ratio are calculated. The region area is represented by its pixel count S; the length L and diameter D of the region are determined by a circumscribed rectangle method, from which the aspect ratio R is calculated. When the area S > 120 and R > 10, the shape criteria are met and the region is judged to be a tobacco stem, as sketched below.
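The shape check can be sketched with OpenCV as follows, using the thresholds S > 120 and R > 10 given above; computing L and D from a minimum-area rectangle is an assumption about the exact circumscribed rectangle method.

```python
import cv2
import numpy as np

def is_stem(region_mask):
    """Shape-factor check: pixel count S of the clustered region and
    length-to-diameter ratio R = L / D from a rotated bounding rectangle;
    a region is judged a stem when S > 120 and R > 10."""
    pts = cv2.findNonZero(region_mask.astype(np.uint8))
    if pts is None:
        return False
    s = len(pts)                                  # region area in pixels
    (_, _), (w, h), _ = cv2.minAreaRect(pts)      # circumscribed rectangle
    length, diameter = max(w, h), max(min(w, h), 1e-6)
    r = length / diameter                         # aspect ratio R = L / D
    return s > 120 and r > 10
```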
To verify the detection efficiency and accuracy of the nondestructive stem-detection device for cigarettes, ordinary cut tobacco processed by a manufacturer was selected and the stems it contained were removed manually. While this cut tobacco was processed into cigarettes, it was divided into ten groups, stems were added manually, and the cigarettes with added stems accounted for 11%, 12%, 13%, 14%, 15%, 16%, 17%, 18%, 19%, and 20% of the total cigarettes in the respective groups. The stem-containing cigarettes were sampled 20 times each with the nondestructive detection device, 100 cigarettes per sample, with the stem-containing proportion of each sample consistent with that of its group. After this detection was finished, the detection order of the 10 groups was shuffled and manual detection was carried out. Finally, the detection efficiency and accuracy of the two detection modes were compared.
The results are shown in Table 3. The absolute error of device detection ranges from 0.1 to 0.9, versus 0.8 to 1.5 for manual detection; the standard deviation of device detection ranges from 0.9468 to 1.5898, versus 1.1548 to 1.5864 for manual detection. These ranges show that the accuracy and precision of device detection are slightly higher than those of manual detection, i.e., the nondestructive stem-detection device is more reliable. In terms of detection time, the device takes about half the time of manual detection, greatly improving detection efficiency.
TABLE 3
[Table 3 is reproduced as an image in the original: absolute error, standard deviation, and detection time of device detection versus manual detection for the ten groups.]
Both the standard deviation and the absolute error of the device detection mode are smaller than those of the manual detection mode, so device detection is more reliable. Compared with manual detection, the detection time of the nondestructive cigarette detection equipment is about half, improving detection efficiency while ensuring detection accuracy and precision.
In the stem detection method based on X-ray vision, data are collected from the tested cigarettes with X-ray equipment and labeled manually; the data are then expanded with a generative adversarial network, the generated samples are screened with the proposed screening index to determine the final expanded samples, the expanded samples are used to train the classification network, and the trained network is further fine-tuned with the real samples. The method solves the problems of low efficiency and low accuracy in stem detection of existing cigarette products, improves stem detection efficiency, and reduces the false detection and missed detection rates of stem detection.
The construction, features, and effects of the invention have been described in detail with reference to the embodiments shown in the drawings. The invention is not limited to those embodiments, however, and all equivalent embodiments modified within the spirit and scope of the invention fall under its protection.

Claims (9)

1. A stem detection method based on X-ray vision is characterized by comprising the following steps:
randomly selecting cigarettes of different brands as detection objects, and carrying out X-ray irradiation on the detection objects by utilizing X-ray equipment to obtain corresponding cigarette perspective images;
generating several sets of pseudo-labeled samples from the cigarette perspective image with a generative adversarial network, and screening the pseudo-labeled samples according to a screening index to determine the final expanded labeled samples;
acquiring manually labeled samples of the detection object, inputting the expanded labeled samples into a preset stem classification network for pre-training, and fine-tuning the trained network with the manually labeled samples;
and carrying out stem detection on the tested cigarette sample by utilizing the trained stem classification network.
2. The stem detection method based on X-ray vision according to claim 1, further comprising:
taking the overall classification accuracy as an evaluation index of the trained stem classification network, where the overall classification accuracy is calculated as

$$OA = \frac{z}{Z},$$

where $OA$ is the overall classification accuracy, $Z$ is the total number of samples, and $z$ is the number of correctly classified samples.
3. The stem detection method based on X-ray vision according to claim 2, further comprising:
taking the screening index as an evaluation index of the trained stem classification network, where the screening index is calculated as $SDF_n = \alpha\,NFID_n + \beta\,TR_n$, $n \in [0, N]$, where $SDF_n$ is the evaluation score of the $n$-th set of generated pseudo-labeled samples, $NFID_n \in [0,1]$ is the normalized FID score, $TR_n \in [0,1]$ is the normalized training evaluation score, $\alpha$ and $\beta$ are the weights of $NFID_n$ and $TR_n$ respectively, and $\alpha + \beta = 1$.
4. The stem detection method based on X-ray vision of claim 3, wherein a SinGAN model based on an improved loss function is used to generate the sets of pseudo-labeled samples and sample training is performed on the SinGAN model, the loss function of the discriminator of the SinGAN model being

$$L_{D_n} = \mathbb{E}\big[D_n(\tilde{x}_n)\big] - \mathbb{E}\big[D_n(x_n)\big] + \mu\,\mathbb{E}_{\hat{x}\sim\chi}\Big[\big(\lVert \nabla_{\hat{x}} D_n(\hat{x}) \rVert_2 - 1\big)^2\Big],$$

and the loss function of the generator of the SinGAN model being

$$L_{G_n} = -\mathbb{E}\big[D_n(\tilde{x}_n)\big] + \lambda_{rec}\,\big\lVert G_n\big(z^*, \tilde{x}^{rec}_{n+1}\big) - x_n \big\rVert^2 - \lambda_{div}\,L_{div},$$

where $G_n$ is the $n$-th generator; $\chi$ is the joint sampling space of $x_n$ and $\tilde{x}_n$; the third term of $L_{D_n}$ is the gradient penalty term with weight coefficient $\mu$; $\tilde{x}^{rec}_{n+1}$ is the pseudo image generated by the $(n+1)$-th generator; $\tilde{x}_n$ is the pseudo image generated by the $n$-th generator; $x_n$ is the corresponding real image at each scale; $z^*$ is a random value selected before training; and $L_{div}$ is the ratio of the distance between generated images to the distance between noise inputs, with $\lambda_{rec}$ and $\lambda_{div}$ the corresponding weight coefficients.
5. The stem detection method based on X-ray vision according to claim 4, wherein screening the pseudo-labeled samples according to a screening index comprises:

after training, the SinGAN model generates $N+1$ sets of pseudo images $\{PS_N, \dots, PS_n, \dots, PS_0\}$, from which the pseudo-labeled samples are generated;

the authenticity and diversity of the pseudo-labeled samples are evaluated according to the selected screening index $SDF_n$, and the quality of the generated images is evaluated with the formula

$$FID_n = \big\lVert F_{RS} - F_{PS_n} \big\rVert^2 + \mathrm{Tr}\Big(C_{F_{RS}} + C_{F_{PS_n}} - 2\big(C_{F_{RS}} C_{F_{PS_n}}\big)^{1/2}\Big),$$

where $F_{RS}$ and $F_{PS_n}$ denote the means of the feature vectors of the real images $RS$ and the $n$-th set of generated images $PS_n$ respectively, $C_{F_{RS}}$ and $C_{F_{PS_n}}$ denote the covariance matrices computed from the feature vectors of $RS$ and $PS_n$ respectively, and $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix.
6. The stem detection method based on X-ray vision according to claim 5, wherein performing stem detection on the cigarette sample under test using the trained stem classification network comprises:

classifying the target sample with a Softmax classifier that uses a focal loss function, and determining the loss value according to the formula $FL(p_i) = -\alpha_i (1 - p_i)^{\gamma} \log(p_i)$, where $FL(p_i)$ is the loss value, $p_i$ is the probability that the model predicts a stem in the sample, $\gamma$ is a hyperparameter controlling the "focus", and $\alpha_i$ controls the "contribution" of positive and negative samples to the total loss.
7. The stem detection method based on X-ray vision according to claim 6, further comprising:
selecting cigarettes of different brands as detection objects, making the collected data into a data set, and verifying the trained stem classification network on the data set to judge whether the final stem detection rate reaches a set threshold; if so, the training of the stem classification network is qualified.
8. The stem detection method based on X-ray vision according to claim 7, wherein the step of making the data set comprises:
firstly, irradiating the cigarettes with X-ray equipment to obtain their perspective images and labeling them manually, 100 perspective images being collected for each brand (50 containing stems and 50 without), giving 2000 images in total;
then, using the trained SinGAN model to expand the stem-containing and stem-free images of each class at training rates of 20% and 50%, with the remaining 80% and 50% used for subsequent testing;
with an image expansion ratio of 1:20, at a 20% training rate the expanded data set comprising 20 classes of 400 images each, 8000 images in total;
and at a 50% training rate, the expanded data set comprising 20 classes of 1000 images each, 20000 images in total.
9. The stem detection method based on X-ray vision according to claim 8, wherein irradiating the detection object with X-ray equipment and obtaining the corresponding cigarette perspective image comprises:
performing X-ray transmission imaging, based on the different X-ray transmission imaging characteristics of cut tobacco and stem sticks, to form a black-and-white X-ray image;
preprocessing the black-and-white X-ray image with noise filtering to remove background noise;
segmenting the noise-filtered black-and-white X-ray image into tobacco stem pixels and background pixels with a region growing method;
obtaining the degree of membership of each segmented tobacco stem pixel to a tobacco stem center with a fuzzy C-means clustering algorithm, so as to filter out interference information and determine attribution;
and after the fuzzy C-means processing, performing shape analysis on the clustered pixels and calculating the area and aspect ratio of each segmented region for shape recognition.
CN202210179428.1A 2022-02-25 2022-02-25 Stem detection method based on X-ray vision Pending CN114549485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210179428.1A CN114549485A (en) 2022-02-25 2022-02-25 Stem detection method based on X-ray vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210179428.1A CN114549485A (en) 2022-02-25 2022-02-25 Stem detection method based on X-ray vision

Publications (1)

Publication Number Publication Date
CN114549485A true CN114549485A (en) 2022-05-27

Family

ID=81679303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210179428.1A Pending CN114549485A (en) 2022-02-25 2022-02-25 Stem detection method based on X-ray vision

Country Status (1)

Country Link
CN (1) CN114549485A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393378A (en) * 2022-10-27 2022-11-25 深圳市大数据研究院 Low-cost and efficient cell nucleus image segmentation method


Similar Documents

Publication Publication Date Title
Kukreja et al. A Deep Neural Network based disease detection scheme for Citrus fruits
Mazen et al. Ripeness classification of bananas using an artificial neural network
Leemans et al. AE—automation and emerging technologies: On-line fruit grading according to their external quality using machine vision
Xiaobo et al. Apple color grading based on organization feature parameters
CN109886238A (en) Unmanned plane Image Change Detection algorithm based on semantic segmentation
CN112734734A (en) Railway tunnel crack detection method based on improved residual error network
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
CN110849828A (en) Saffron crocus classification method based on hyperspectral image technology
Rad et al. Classification of rice varieties using optimal color and texture features and BP neural networks
CN105427309A (en) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN110956212A (en) Threshing quality detection method based on visual feature fusion
CN110163101B (en) Method for rapidly distinguishing seeds of traditional Chinese medicinal materials and rapidly judging grades of seeds
CN111914902B (en) Traditional Chinese medicine identification and surface defect detection method based on deep neural network
Mengistu et al. An automatic coffee plant diseases identification using hybrid approaches of image processing and decision tree
CN107918487A (en) A kind of method that Chinese emotion word is identified based on skin electrical signal
CN109086794B (en) Driving behavior pattern recognition method based on T-LDA topic model
Sharma et al. Image processing techniques to estimate weight and morphological parameters for selected wheat refractions
CN115099297A (en) Soybean plant phenotype data statistical method based on improved YOLO v5 model
Erbaş et al. Classification of hazelnuts according to their quality using deep learning algorithms
CN113298780A (en) Child bone age assessment method and system based on deep learning
CN114549485A (en) Stem detection method based on X-ray vision
Supekar et al. Multi-parameter based mango grading using image processing and machine learning techniques
Mohamadzadeh Moghadam et al. Nondestructive classification of saffron using color and textural analysis
Nazulan et al. Detection of sweetness level for fruits (watermelon) with machine learning
CN111340098B (en) STA-Net age prediction method based on shoe print image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination