CN114636736A - Electronic tongue white spirit detection method based on AIF-1DCNN - Google Patents

Electronic tongue white spirit detection method based on AIF-1DCNN

Info

Publication number
CN114636736A
CN114636736A
Authority
CN
China
Prior art keywords
aif
1dcnn
white spirit
dimension
vector
Prior art date
Legal status
Pending
Application number
CN202111312263.2A
Other languages
Chinese (zh)
Inventor
Zhang Wei (章伟)
Zhu Yalong (朱亚龙)
Liu Jiaming (刘嘉明)
Zhu Xiaolong (朱晓龙)
Hu Xuefeng (胡雪峰)
Current Assignee
Chuzhou Yiran Sensing Technology Research Institute Co ltd
Original Assignee
Chuzhou Yiran Sensing Technology Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Chuzhou Yiran Sensing Technology Research Institute Co., Ltd.
Priority to CN202111312263.2A
Publication of CN114636736A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N27/00 - Investigating or analysing materials by the use of electric, electrochemical, or magnetic means
    • G01N27/26 - Investigating or analysing materials by the use of electric, electrochemical, or magnetic means by investigating electrochemical variables; by using electrolysis or electrophoresis
    • G01N27/28 - Electrolytic cell components
    • G01N27/30 - Electrodes, e.g. test electrodes; Half-cells
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/08 - Learning methods

Abstract

The invention discloses an electronic tongue white spirit detection method based on AIF-1DCNN. The method comprises the following steps: preprocessing the original response data matrix X of the white spirit acquired with an integrated electrode to form a sample set E; extracting electrode attention features with convolution kernels of size n to obtain a vector φ of length k; extracting square-wave attention features with convolution kernels of size u from the sample set E to obtain a vector Ψ of length w; weight-fusing the vectors φ and Ψ to obtain a vector Τ, and fusing Τ with the original input to obtain λ; and training an AIF-1DCNN neural network on the input λ with an adaptive momentum stochastic optimization method, then performing classification detection on the test set with the trained AIF-1DCNN neural network model. The method addresses the low accuracy that arises because existing deep learning algorithms use only the full-time-domain information measured by the electronic tongue and neglect the influence of each electrode and of the different square waves, and it improves the efficiency and accuracy of white spirit vintage analysis.

Description

Electronic tongue white spirit detection method based on AIF-1DCNN
Technical Field
The invention relates to the field of electronic tongues, and in particular to an electronic tongue white spirit detection method based on AIF-1DCNN.
Background
As a real-time, accurate, efficient, non-invasive and portable intelligent sensing instrument, the electronic tongue has great potential in the field of food detection. Pattern recognition is a key part of an electronic tongue system, and common pattern recognition techniques fall into two major categories: machine learning and deep learning. Machine learning requires manual feature extraction, a complex and time-consuming process that makes little use of the full-time-domain information measured by the electronic tongue; its accuracy depends on the quality of the features extracted beforehand, so a good classification effect is hard to obtain. In the deep learning field, the convolutional neural network (CNN) is powerful at selecting and extracting features from high-dimensional data, but existing deep learning algorithms applied to electronic tongue recognition only perform simple convolutional feature extraction on the whole sample data and ignore the deeper influence of each electrode and of the different pulse square-wave voltages on the data, so their efficiency and accuracy still need to be improved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an electronic tongue white spirit detection method based on AIF-1DCNN, which collects white spirit sample information with the integrated electrode in an electronic tongue and analyses the data with the AIF-1DCNN detection method, thereby detecting the white spirit sample and improving the efficiency and accuracy of white spirit composition analysis.
In order to solve the problems of the prior art, the invention adopts the technical scheme that:
an electronic tongue white spirit detection method based on AIF-1DCNN comprises the following steps:
s1: preprocessing an original response data matrix X of the white spirit obtained by utilizing the integrated electrode to form a sample set E
The single data points of the white spirit measured by the 6 electrodes of the integrated electrode are spliced transversely, and the resulting rows are spliced vertically according to the number of measurements to form the original response data matrix X, as shown in formula (1):
X = [[x_{1,1}, x_{1,2}, …, x_{1,j}],
     [x_{2,1}, x_{2,2}, …, x_{2,j}],
     …
     [x_{i,1}, x_{i,2}, …, x_{i,j}]]   (1)
where [x_{i,1}, x_{i,2}, …, x_{i,j}] is one row of sample data formed by transversely splicing the single data points of the white spirit measured by the multiple electrodes, i is the number of samples measured by the multiple electrodes, and j is the number of sampling points;
further, the original response data matrix X is subjected to zero-averaging processing by using the following formula (2):
e = (X - X_min) / (X_max - X_min)   (2)
where e is the normalized data, X_max is the maximum of each column of the original response data matrix, X_min is the minimum of each column of the original response data matrix, and X is the original response data matrix.
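As an illustration of step S1, the following minimal NumPy sketch splices per-electrode responses into the raw matrix X and applies the column-wise normalization of formula (2); the data layout, the function name and the small epsilon are assumptions made for the example, not details taken from the patent:

    import numpy as np

    def build_sample_set(measurements):
        # measurements: list of measurements, each a list of 6 one-dimensional
        # electrode responses (illustrative layout, not specified in the patent).
        # Transversely splice the electrode responses of one measurement into one row,
        # then stack the rows vertically to form the raw response matrix X (formula (1)).
        X = np.vstack([np.concatenate(electrodes) for electrodes in measurements])
        # Column-wise normalization as in formula (2); the epsilon avoids division by zero.
        X_min, X_max = X.min(axis=0), X.max(axis=0)
        E = (X - X_min) / (X_max - X_min + 1e-12)
        return E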
S2: extracting attention characteristics of the electrode with convolution kernel size of n to obtain a length k vector by using a sample set E
φ
First, the original signal is convolved with k convolution kernels of size n to obtain the feature F; F is then globally average-pooled to obtain F_GP; then two 1×1 convolution activations are applied to obtain the electrode-channel attention feature φ of length k,
as shown in formulas (3), (4) and (5):
F_l = Σ_i Σ_j ( w^l_{i,j} × e_{i,j} ) + b_l   (3)
where i, j index the dimensions of the input signal e, e ∈ E, e_{i,j} is the element of e at row i and column j, w^l_{i,j} is the element at row i and column j of the l-th convolution kernel, and b_l is the offset of the l-th convolution kernel;
F_GP(c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} F_c(i, j)   (4)
where H and W are the dimensions of the feature F, F_c is the c-th channel of the feature F, and F_c(i, j) is its value at position (i, j);
φ = f( Conv_{1×1}( f( Conv_{1×1}( F_GP ) ) ) )   (5)
where f(·) is the sigmoid activation function.
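A minimal Keras sketch of this electrode-attention branch follows; the Conv1D stride (one output position per electrode segment) and the use of the sigmoid activation for both 1×1 convolutions are assumptions made for illustration, since the patent does not spell out these details:

    import tensorflow as tf
    from tensorflow.keras import layers

    def electrode_attention(x, k, n):
        # x: input tensor of shape (batch, k*n, 1); k electrodes, n points per electrode.
        # k convolution kernels of size n; stride n is an assumption so that each
        # output position corresponds to one electrode segment (feature F).
        f = layers.Conv1D(filters=k, kernel_size=n, strides=n, padding="valid")(x)
        # Global average pooling over the length dimension gives F_GP (formula (4)).
        f_gp = layers.GlobalAveragePooling1D()(f)
        f_gp = layers.Reshape((1, k))(f_gp)
        # Two 1x1 convolutions with sigmoid activation give the attention vector phi (formula (5)).
        phi = layers.Conv1D(k, kernel_size=1, activation="sigmoid")(f_gp)
        phi = layers.Conv1D(k, kernel_size=1, activation="sigmoid")(phi)
        return phi  # shape (batch, 1, k)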
S3: extracting the attention feature of the square wave with the convolution kernel size of u from the sample set E to obtain a vector psi with the length of w
The original signal is convolved by w convolution kernels with the kernel size u, then global average pooling and two 1 × 1 convolution activations are carried out to obtain the attention feature psi of the pulse square wave channel with the length w, the operation principle is the same as that of step S2,
wherein the relationship among k, n, w, u satisfies the formulas (6), (7)
w=k×v (6)
k×n=w×u (7)
where v is the number of pulse square waves applied to a single electrode, k is the number of electrodes, n is the length of the data acquired by each electrode, and u is the length of the data acquired by a single electrode over the duration of a single square wave.
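The following small check illustrates the shape relations of formulas (6) and (7); the numerical values of k, v and n are made up for the example and are not taken from the patent:

    # Illustrative values only (not from the patent).
    k, v = 6, 10            # k electrodes, v pulse square waves per electrode
    n = 400                 # data length acquired per electrode
    u = n // v              # data length per electrode within one square wave
    w = k * v               # length of the square-wave attention vector Psi, formula (6)
    assert k * n == w * u   # total input length is consistent, formula (7)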
S4: will vector
φ
and the vector Ψ are weighted and fused to obtain the vector Τ
φ and Ψ are weight-fused to obtain Τ. Considering that the entries of Ψ correspond to the pulse square waves of each electrode in φ, φ and Ψ are first dimension-transformed to obtain Φ, of dimension (k, 1), and Ψ, of dimension (k, v), as shown in formulas (8) and (9); Φ and Ψ are then multiplied using a broadcasting mechanism to obtain the weighted fusion attention Τ, whose dimension is (k, v),
Φ = reshape(φ, (k, 1))   (8)
Ψ = reshape(Ψ, (k, v))   (9)
Τ=Φ×Ψ (10)
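Step S4 can be sketched with NumPy broadcasting as follows; the attention values and the sizes k and v are random placeholders used only to illustrate formulas (8)-(10):

    import numpy as np

    k, v = 6, 10
    phi = np.random.rand(k)        # electrode attention of length k (step S2)
    psi = np.random.rand(k * v)    # square-wave attention of length w = k*v (step S3)

    Phi = phi.reshape(k, 1)        # formula (8): dimension (k, 1)
    Psi = psi.reshape(k, v)        # formula (9): dimension (k, v)
    T = Phi * Psi                  # formula (10): broadcasting gives dimension (k, v)
    assert T.shape == (k, v)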
S5: fusing original input with vector gamma to obtain lambda
First, the weighted fusion attention Τ is fused with the original input e to obtain λ. The weighted fusion attention Τ and the input signal e are dimension-transformed: the dimension of Τ is changed from (k, v) to (w, 1), giving Γ, and the dimension of the input signal e is changed from (1, k×n) to (w, u); the attention and the input signal are then fused using a broadcasting mechanism to obtain Λ, as shown in formulas (11) and (12); finally, Λ is dimension-transformed again to obtain λ, whose dimension is (k×n, 1):
Γ = reshape(Τ, (w, 1))   (11)
Λ=Γ×e (12)
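A NumPy sketch of step S5 follows; the shapes are illustrative and Τ here is a random placeholder standing in for the attention produced in step S4:

    import numpy as np

    k, v, n = 6, 10, 400           # illustrative values
    u, w = n // v, k * v
    T = np.random.rand(k, v)       # weighted fusion attention from step S4 (placeholder)
    e = np.random.rand(1, k * n)   # one preprocessed input sample

    Gamma = T.reshape(w, 1)        # formula (11): T reshaped from (k, v) to (w, 1)
    e_r = e.reshape(w, u)          # input e reshaped from (1, k*n) to (w, u)
    Lam = Gamma * e_r              # formula (12): broadcasting gives shape (w, u)
    lam = Lam.reshape(k * n, 1)    # fused input lambda, dimension (k*n, 1)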
S6: training an AIF-1DCNN neural network for input lambda by adopting a self-adaptive momentum random optimization method, and performing classification detection on a test set by utilizing the trained AIF-1DCNN neural network model
The 1DCNN convolutional neural network module in the AIF-1DCNN comprises five convolutional layers, the convolution kernels being 1-dimensional kernels of sizes (1 × 32 × 16, 1 × 32 × 8, 1 × 64 × 4, 1 × 64 × 2). During convolution, padding is set to 'SAME' and the stride is 1, and the ReLU activation function is selected to increase the nonlinearity of the network. In the AIF-1DCNN neural network model, Flatten is applied after the last pooling layer, and the final label is generated by a Softmax classifier. The model uses cross entropy as the loss function, and the Adam adaptive momentum stochastic optimization algorithm is used to make the loss function converge rapidly to the global minimum. Finally, the trained AIF-1DCNN neural network model is used to perform classification detection on the test set.
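The Keras sketch below reflects one possible reading of the 1DCNN module described above; the patent mentions five convolutional layers but lists four kernel sizes, and it does not specify the pooling placement or the number of output classes, so those choices (and the filter counts read from the listed sizes) are assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_1dcnn(input_length, num_classes=5):
        # 1D convolutional classifier: SAME padding, stride 1, ReLU, Flatten, Softmax.
        model = models.Sequential([
            layers.Input(shape=(input_length, 1)),
            layers.Conv1D(32, 16, strides=1, padding="same", activation="relu"),
            layers.Conv1D(32, 8, strides=1, padding="same", activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(64, 4, strides=1, padding="same", activation="relu"),
            layers.Conv1D(64, 2, strides=1, padding="same", activation="relu"),
            layers.MaxPooling1D(2),
            layers.Flatten(),
            layers.Dense(num_classes, activation="softmax"),
        ])
        # Cross entropy loss with the Adam adaptive momentum optimizer, as described above.
        model.compile(optimizer=tf.keras.optimizers.Adam(),
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model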
Advantageous effects:
Compared with the prior art, the electronic tongue white spirit detection method based on AIF-1DCNN collects white spirit sample information with the integrated electrodes in the electronic tongue and analyses the white spirit samples in the upper computer with the AIF-1DCNN neural network algorithm, thereby overcoming the complex and time-consuming manual feature extraction of machine learning algorithms, their limited use of the full-time-domain information measured by the electronic tongue, and the low accuracy of a plain 1DCNN network.
Drawings
FIG. 1 is a flow chart of the electronic tongue white spirit detection method based on AIF-1DCNN of the present invention;
FIG. 2 is a diagram of an AIF-1 DCNN-based network model according to an embodiment;
FIG. 3 is a block diagram of an attention module of the AIF-1DCNN in an embodiment;
FIG. 4 is a diagram of the 1DCNN module of the AIF-1DCNN in an implementation;
fig. 5 is a result graph of classification accuracy and iteration number of the electronic tongue data of white spirit in different years in an embodiment, where (a) is a relation between loss of the training set and the iteration number, and (b) is a relation between accuracy of the training set and the iteration number.
Detailed Description
As shown in fig. 2 and 3, the electronic tongue white spirit detection method based on AIF-1DCNN of the present invention includes the following steps:
s1: utilizing an integrated electrode to obtain an original response data matrix X of the white spirit to carry out pretreatment to form a sample set E;
transversely splicing single data points of white spirit measured by 6 electrodes of the integrated electrode, and vertically splicing the single data points according to the measurement times to form an original response data matrix X, as shown in formula (1)
X = [[x_{1,1}, x_{1,2}, …, x_{1,j}],
     [x_{2,1}, x_{2,2}, …, x_{2,j}],
     …
     [x_{i,1}, x_{i,2}, …, x_{i,j}]]   (1)
where [x_{i,1}, x_{i,2}, …, x_{i,j}] is one row of sample data formed by transversely splicing the single data points of the white spirit measured by the multiple electrodes, i is the number of samples measured by the multiple electrodes, and j is the number of sampling points;
further, the original response data matrix X is subjected to zero-averaging processing by using the following formula (2):
e = (X - X_min) / (X_max - X_min)   (2)
where e is the normalized data, X_max is the maximum of each column of the original response data matrix, X_min is the minimum of each column of the original response data matrix, and X is the original response data matrix.
S2: extracting attention characteristics of the electrode with convolution kernel size of n to obtain a length k vector by using a sample set E
φ
First, the original signal is convolved with k convolution kernels of size n to obtain the feature F; F is then globally average-pooled to obtain F_GP; then two 1×1 convolution activations are applied to obtain the electrode-channel attention feature φ of length k,
as shown in formulas (3), (4) and (5):
F_l = Σ_i Σ_j ( w^l_{i,j} × e_{i,j} ) + b_l   (3)
where i, j index the dimensions of the input signal e, e ∈ E, e_{i,j} is the element of e at row i and column j, w^l_{i,j} is the element at row i and column j of the l-th convolution kernel, and b_l is the offset of the l-th convolution kernel;
F_GP(c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} F_c(i, j)   (4)
where H and W are the dimensions of the feature F, F_c is the c-th channel of the feature F, and F_c(i, j) is its value at position (i, j);
φ = f( Conv_{1×1}( f( Conv_{1×1}( F_GP ) ) ) )   (5)
where f(·) is the sigmoid activation function.
S3: extracting the attention feature of the square wave with the convolution kernel size of u from the sample set E to obtain a vector psi with the length of w
The original signal is convolved with w convolution kernels of size u, then global average pooling and two 1×1 convolution activations are performed to obtain the pulse square-wave channel attention feature Ψ of length w; the operating principle is the same as in step S2.
Wherein the relationship among k, n, w, u satisfies the formulas (6), (7)
w=k×v (6)
k×n=w×u (7)
Where v is the number of pulses per electrode, k is the number of electrodes, n is the length of data acquired per electrode, and u is the length of data acquired by a single electrode over a single square wave duration.
S4: will vector
φ
and the vector Ψ are weighted and fused to obtain the vector Τ
φ and Ψ are weight-fused to obtain Τ. Considering that the entries of Ψ correspond to the pulse square waves of each electrode in φ, φ and Ψ are first dimension-transformed to obtain Φ, of dimension (k, 1), and Ψ, of dimension (k, v), as shown in formulas (8) and (9); Φ and Ψ are then multiplied using a broadcasting mechanism to obtain the weighted fusion attention Τ, whose dimension is (k, v),
Φ = reshape(φ, (k, 1))   (8)
Ψ = reshape(Ψ, (k, v))   (9)
Τ=Φ×Ψ (10)
s5: fusing original input with vector gamma to obtain lambda
First, the weighted fusion attention Τ is fused with the original input e to obtain λ. The weighted fusion attention Τ and the input signal e are dimension-transformed: the dimension of Τ is changed from (k, v) to (w, 1), giving Γ, and the dimension of the input signal e is changed from (1, k×n) to (w, u); the attention and the input signal are then fused using a broadcasting mechanism to obtain Λ, as shown in formulas (11) and (12); finally, Λ is dimension-transformed again to obtain λ, whose dimension is (k×n, 1):
Γ = reshape(Τ, (w, 1))   (11)
Λ=Γ×e (12)
S6: training an AIF-1DCNN neural network for input lambda by adopting a self-adaptive momentum random optimization method, and performing classification detection on a test set by utilizing the trained AIF-1DCNN neural network model
The 1DCNN convolutional neural network module in the AIF-1DCNN comprises five convolutional layers, the convolution kernels being 1-dimensional kernels of sizes (1 × 32 × 16, 1 × 32 × 8, 1 × 64 × 4, 1 × 64 × 2). During convolution, padding is set to 'SAME' and the stride is 1, and the ReLU activation function is selected to increase the nonlinearity of the network. In the AIF-1DCNN neural network model, Flatten is applied after the last pooling layer, and the final label is generated by a Softmax classifier. The model uses cross entropy as the loss function, and the Adam adaptive momentum stochastic optimization algorithm is used to make the loss function converge rapidly to the global minimum. Finally, the trained AIF-1DCNN neural network model is used to perform classification detection on the test set.
The detection method of the present invention is described below with a specific embodiment. The experiments of this embodiment were implemented on a Dell T792 computer running Windows 10, with an Intel Xeon 20-core processor, 64 GB of memory, two RTX 2080 Ti GPUs (11 GB each), PyCharm 2019, Python 3.7, scikit-learn 0.21.3, TensorFlow 2.1.0 and Keras 2.3.1.
In step S6, the Adam gradient descent optimization algorithm (lr = 1e-3) is used during training, Batch_size is set to 32, and Epoch is 100; the ratio of training set, validation set and test set is 6:2:2.
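A sketch of this training setup is shown below; the random stand-in data, the input length and the reuse of the build_1dcnn sketch from step S6 are placeholders for illustration and do not reproduce the actual experimental data:

    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import train_test_split

    # Random stand-in data: 500 samples, 5 vintage classes (shapes are illustrative).
    X = np.random.rand(500, 2400, 1).astype("float32")
    y = tf.keras.utils.to_categorical(np.random.randint(0, 5, 500), 5)

    # 6:2:2 split into training, validation and test sets.
    X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.6, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    model = build_1dcnn(input_length=2400, num_classes=5)   # sketch from step S6
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),   # lr = 1e-3
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, batch_size=32, epochs=100,
              validation_data=(X_val, y_val))
    model.evaluate(X_test, y_test)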
The data used in the experiment are electronic tongue measurements of blended white spirits of five different vintages; the amount of data collected by the integrated electrode is shown in Table 1:
TABLE 1
[Table 1 not reproduced: number of samples collected by the integrated electrode for each vintage]
In the experiment, 0.6 × 500 samples are used for training. As shown in FIG. 5, the AIF-1DCNN model gradually converges as Epoch increases, so the model trained for Epoch = 100 is taken as the final trained model; 0.2 of the total samples are then taken as the test set, and the results of the trained AIF-1DCNN model are shown in Table 3.
To verify that the AIF-1DCNN model outperforms a plain one-dimensional convolutional neural network model, a comparison experiment was performed; Table 3 compares the classification results of the AIF-1DCNN model with those of the plain 1DCNN model.
TABLE 3: accuracy of the AIF-1DCNN model compared with a plain one-dimensional convolutional neural network
Model          Average accuracy    Maximum accuracy difference
AIF-1DCNN      96.8%               1%
Plain 1DCNN    93.4%               6%
The table shows that the average accuracy of the AIF-1DCNN model is 96.8%, and the maximum difference of the accuracy is 1%; the average accuracy of the simple 1DCNN model is 93.4%, and the maximum difference of the accuracy is 6%.
The above description is only a few of the preferred embodiments of the present application and is not intended to limit the present application, which may be modified and varied by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. An electronic tongue white spirit detection method based on AIF-1DCNN is characterized by comprising the following steps:
s1: utilizing an integrated electrode to obtain an original response data matrix X of the white spirit to carry out pretreatment to form a sample set E;
s2: extracting attention characteristics of the electrode with convolution kernel size of n to obtain a length k vector
φ;
S3: extracting attention features with convolution kernel size of u square waves from the sample set E to obtain a vector psi with length of w;
s4: will vector
φ
Carrying out weighted fusion on the vector psi to obtain a vector T;
s5: fusing the vector T to the original input to obtain lambda;
s6: training an AIF-1DCNN neural network for input lambda by adopting a self-adaptive momentum random optimization method, and carrying out classification detection on a test set by utilizing a trained AIF-1DCNN neural network model;
and finally, generating a final label through a Softmax classifier, wherein the model selects cross entropy as a loss function, and ensures that the loss function is rapidly converged to the global minimum by using an Adam gradient descent optimization algorithm.
2. The method for detecting electronic tongue white spirit based on AIF-1DCNN as claimed in claim 1, wherein step S1 specifically comprises: transversely splicing single data points of white spirit measured by 6 electrodes of the integrated electrode, and vertically splicing the single data points according to the measurement times to form an original response data matrix X, as shown in formula (1)
X = [[x_{1,1}, x_{1,2}, …, x_{1,j}], [x_{2,1}, x_{2,2}, …, x_{2,j}], …, [x_{i,1}, x_{i,2}, …, x_{i,j}]]   (1)
wherein [x_{1,1}, x_{1,2}, …, x_{1,j}] is one row of sample data formed by transversely splicing the single data points of the white spirit measured by the multiple electrodes, i is the number of samples measured by the multiple electrodes, and j is the number of sampling points;
further, the original response data matrix X is subjected to zero-averaging processing by using the following formula (2):
e = (X - X_min) / (X_max - X_min)   (2)
wherein e is the normalized data, X_max is the maximum of each column of the original response data matrix, X_min is the minimum of each column of the original response data matrix, and X is the original response data matrix.
3. The method for detecting electronic tongue white spirit based on AIF-1DCNN as claimed in claim 1, wherein in step S2, the original signal is first convolved with k convolution kernels of size n to obtain the feature F, F is then globally average-pooled to obtain F_GP, and two 1×1 convolution activations are then applied to obtain the electrode-channel attention feature φ of length k, as shown in formulas (3), (4) and (5):
F_l = Σ_i Σ_j ( w^l_{i,j} × e_{i,j} ) + b_l   (3)
wherein i, j index the dimensions of the input signal e, e ∈ E, e_{i,j} is the element of e at row i and column j, w^l_{i,j} is the element at row i and column j of the l-th convolution kernel, and b_l is the offset of the l-th convolution kernel;
F_GP(c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} F_c(i, j)   (4)
wherein H and W are the dimensions of the feature F, F_c is the c-th channel of the feature F, and F_c(i, j) is its value at position (i, j);
φ = f( Conv_{1×1}( f( Conv_{1×1}( F_GP ) ) ) )   (5)
wherein f(·) is the sigmoid activation function.
4. The method for detecting AIF-1 DCNN-based electronic tongue white spirit according to claim 1, wherein in step S3, the original signal is convolved with w convolution kernels with kernel size u, and then global average pooling and two 1 × 1 convolution activations are performed to obtain the attention feature Ψ of the pulse square wave channel with length w,
wherein the relationship among k, n, w, u satisfies the formulas (6), (7)
w=k×v (6)
k×n=w×u (7)
Where v is the number of pulses per electrode, k is the number of electrodes, n is the length of data acquired per electrode, and u is the length of data acquired by a single electrode over a single square wave duration.
5. The method for detecting electronic tongue white spirit based on AIF-1DCNN as claimed in claim 1, wherein in step S4, φ and Ψ are weight-fused to obtain Τ; considering that the entries of Ψ correspond to the pulse square waves of each electrode in φ, φ and Ψ are first dimension-transformed to obtain Φ, of dimension (k, 1), and Ψ, of dimension (k, v), as shown in formulas (8) and (9); Φ and Ψ are then multiplied using a broadcasting mechanism to obtain the weighted fusion attention Τ, whose dimension is (k, v):
Φ = reshape(φ, (k, 1))   (8)
Ψ = reshape(Ψ, (k, v))   (9)
Τ=Φ×Ψ (10)
6. The method for detecting electronic tongue white spirit based on AIF-1DCNN according to claim 1, wherein in step S5, the weighted fusion attention Τ is first fused with the original input e to obtain λ: the weighted fusion attention Τ and the input signal e are dimension-transformed, the dimension of Τ being changed from (k, v) to (w, 1), giving Γ, and the dimension of the input signal e being changed from (1, k×n) to (w, u); the attention and the input signal are then fused using a broadcasting mechanism to obtain Λ, as shown in formulas (11) and (12); finally, Λ is dimension-transformed again to obtain λ, whose dimension is (k×n, 1),
Γ = reshape(Τ, (w, 1))   (11)
Λ=Γ×e (12)。
7. The method according to claim 1, wherein in step S6, the 1DCNN convolutional neural network module in the AIF-1DCNN comprises five convolutional layers, the convolution kernels being 1-dimensional kernels of sizes 1 × 32 × 16, 1 × 32 × 8, 1 × 64 × 4, 1 × 64 × 2; during convolution, padding is set to 'SAME' and the stride is 1, and the ReLU activation function is selected to increase the nonlinearity of the network; in the AIF-1DCNN neural network model, Flatten is applied after the last pooling layer, and the final label is generated by a Softmax classifier; the model uses cross entropy as the loss function, and the Adam adaptive momentum stochastic optimization algorithm is used to make the loss function converge rapidly to the global minimum; finally, the trained AIF-1DCNN neural network model is used to perform classification detection on the test set.
CN202111312263.2A 2021-11-08 2021-11-08 Electronic tongue white spirit detection method based on AIF-1DCNN Pending CN114636736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111312263.2A CN114636736A (en) 2021-11-08 2021-11-08 Electronic tongue white spirit detection method based on AIF-1DCNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111312263.2A CN114636736A (en) 2021-11-08 2021-11-08 Electronic tongue white spirit detection method based on AIF-1DCNN

Publications (1)

Publication Number Publication Date
CN114636736A true CN114636736A (en) 2022-06-17

Family

ID=81946680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111312263.2A Pending CN114636736A (en) 2021-11-08 2021-11-08 Electronic tongue white spirit detection method based on AIF-1DCNN

Country Status (1)

Country Link
CN (1) CN114636736A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090306741A1 (en) * 2006-10-26 2009-12-10 Wicab, Inc. Systems and methods for altering brain and body functions and for treating conditions and diseases of the same
CN103645182A (en) * 2013-12-13 2014-03-19 重庆大学 Method for identifying white spirit flavor type by using electronic tongue system
US20140099729A1 (en) * 2012-08-13 2014-04-10 Andreas Mershin Methods and Apparatus for Artificial Olfaction
US20140201182A1 (en) * 2012-05-07 2014-07-17 Alexander Himanshu Amin Mobile communications device with electronic nose
CN108348748A (en) * 2015-09-04 2018-07-31 赛恩神经刺激有限责任公司 System, apparatus and method for the nerve stimulation with packet modulation
CN110070069A (en) * 2019-04-30 2019-07-30 重庆大学 A kind of Classification of Tea method based on convolutional neural networks Automatic Feature Extraction
CN113313150A (en) * 2021-05-17 2021-08-27 南京益得冠电子科技有限公司 Electronic tongue detection method and system based on PCA and random forest
CN113486981A (en) * 2021-07-30 2021-10-08 西安电子科技大学 RGB image classification method based on multi-scale feature attention fusion network
CN113537278A (en) * 2021-05-17 2021-10-22 南京益得冠电子科技有限公司 Wine detection method and system based on 1D-CNN electronic tongue

Similar Documents

Publication Publication Date Title
CN109767438B (en) Infrared thermal image defect feature identification method based on dynamic multi-objective optimization
CN110414554B (en) Stacking ensemble learning fish identification method based on multi-model improvement
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
Feng et al. Convolutional neural network based on bandwise-independent convolution and hard thresholding for hyperspectral band selection
CN107122375B (en) Image subject identification method based on image features
CN112200211B (en) Small sample fish identification method and system based on residual network and transfer learning
CN110770752A (en) Automatic pest counting method combining multi-scale feature fusion network with positioning model
CN108647588A (en) Goods categories recognition methods, device, computer equipment and storage medium
CN109767437B (en) Infrared thermal image defect feature extraction method based on k-means dynamic multi-target
CN109559309B (en) Multi-objective optimization infrared thermal image defect feature extraction method based on uniform evolution
CN106951915B (en) One-dimensional range profile multi-classifier fusion recognition method based on category confidence
CN103942749B (en) A kind of based on revising cluster hypothesis and the EO-1 hyperion terrain classification method of semi-supervised very fast learning machine
CN108830312B (en) Integrated learning method based on sample adaptive expansion
Hussain et al. Automatic disease detection in wheat crop using convolution neural network
CN107528824B (en) Deep belief network intrusion detection method based on two-dimensional sparsification
CN110210625A (en) Modeling method, device, computer equipment and storage medium based on transfer learning
CN109816638B (en) Defect extraction method based on dynamic environment characteristics and weighted Bayes classifier
Bonifacio et al. Determination of common Maize (Zea mays) disease detection using Gray-Level Segmentation and edge-detection technique
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN110414587A (en) Depth convolutional neural networks training method and system based on progressive learning
CN113537278A (en) Wine detection method and system based on 1D-CNN electronic tongue
CN116310425B (en) Fine-grained image retrieval method, system, equipment and storage medium
CN114636736A (en) Electronic tongue white spirit detection method based on AIF-1DCNN
CN109872319B (en) Thermal image defect extraction method based on feature mining and neural network
Wang et al. Crop pest detection by three-scale convolutional neural network with attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination