CN117607120A - Food additive Raman spectrum detection method and device based on improved Resnext model - Google Patents
Food additive Raman spectrum detection method and device based on improved Resnext model
- Publication number
- CN117607120A CN117607120A CN202311296175.7A CN202311296175A CN117607120A CN 117607120 A CN117607120 A CN 117607120A CN 202311296175 A CN202311296175 A CN 202311296175A CN 117607120 A CN117607120 A CN 117607120A
- Authority
- CN
- China
- Prior art keywords
- resnext
- model
- improved
- raman
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/62—Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
- G01N21/63—Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
- G01N21/65—Raman scattering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2201/00—Features of devices classified in G01N21/00
- G01N2201/12—Circuits of general importance; Signal processing
- G01N2201/129—Using chemometrical methods
- G01N2201/1296—Using chemometrical methods using neural networks
Abstract
A food additive detection method and device based on an improved ResNext model combined with Raman spectroscopy. The method comprises the following steps: 1) constructing an improved ResNext module for feature extraction; 2) constructing an improved ResNext neural network model, setting parameters, and training; 3) identifying Raman spectra of mixtures containing food additives using the trained improved ResNext neural network. The invention reduces the number of model hyperparameters, which makes the model easier to port, and provides stronger feature-extraction capability, which improves the accuracy of substance identification.
Description
Technical Field
The invention relates to Raman spectroscopy technology, and in particular to a method and device for rapidly identifying substances using Raman spectroscopy and deep learning.
Background
Although food additives can enrich dietary nutrition (as nutrition enhancers), extend the shelf life of food, and improve its sensory properties to meet consumers' sensory requirements, their negative effects cannot be ignored. Studies have shown that excessive use of food additives may cause diseases such as allergies, diabetes, obesity, and metabolic disorders. Food additives used or produced in industrial processes should therefore be identified and quantified. In addition, some manufacturers use illegal food additives that are extremely harmful to the human body, which also requires careful monitoring.
However, conventional detection methods often have drawbacks: they require specially trained personnel, involve complex and time-consuming sample preparation, must be carried out in a laboratory, and are unsuitable for large-scale screening. Raman spectroscopy is an advanced, non-invasive detection technique; it is simple and fast to operate, allows non-destructive measurement, and can be used to detect food additives. When food additives are tested, Raman spectra can be used to determine their chemical composition, structure, and purity: each additive has a unique molecular vibration pattern and therefore a Raman spectrum with characteristic peak positions and intensities. By comparison with known standard samples, food additives can be identified, and even quantitatively analyzed, from their characteristic Raman peaks.
In recent years, deep learning, with its flexible architectures and efficient algorithms, has learned multi-level feature representations from large amounts of raw data and achieved strong performance in fields such as computer vision, natural language processing, and speech recognition. Deep learning can likewise be applied to substance identification, combined with Raman spectroscopy to detect food additives.
In view of this, this patent proposes food additive identification based on an improved ResNext model combined with Raman spectroscopy.
Disclosure of Invention
To solve the problem of food additive detection, the invention provides a food additive Raman spectrum detection method and device based on an improved ResNext model. It combines the residual structure unique to ResNet with the grouped convolution of Inception and adopts a repeated sub-module topology, which reduces the number of model hyperparameters and makes the model easier to port.
To achieve the above purpose, the technical solution adopted by the invention is as follows:
A food additive detection method based on an improved ResNext model combined with Raman spectroscopy comprises the following steps:
1) Constructing an improved ResNext module for feature extraction;
2) Constructing an improved ResNext neural network model, setting parameters and training;
3) Identifying Raman spectra of mixtures containing food additives using the trained improved ResNext neural network.
Further, the step 1) includes the steps of:
(11) The ResNext model is improved. The improved neural network module works as follows: the spliced two-dimensional feature map is fed into a residual block with 32 output channels for feature extraction, and the 32 extracted features are then passed through an Inception block that performs grouped convolution and fuses features at different scales to obtain 32 new features;
(12) The ordinary convolution module is replaced by this improved ResNext module, which strengthens the feature-extraction capability of the network.
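By way of illustration, a minimal PyTorch sketch of such a module is given below. The class and layer names, channel counts, and padding choices are assumptions made for this sketch; the patent text only fixes the overall structure (a residual block with a skip connection, followed by an Inception-style grouped convolution whose branch outputs are concatenated).

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual part: two 3x3 Conv->BN->ReLU stages plus a 1x1 skip connection,
    followed by BN, ReLU and max pooling applied to the sum."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)    # direct-mapping branch
        self.post = nn.Sequential(
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=(2, 1)),                  # pool along the spectral axis only
        )

    def forward(self, x):
        return self.post(self.body(x) + self.skip(x))


class InceptionBlock(nn.Module):
    """Grouped multi-scale branches (1x1, 3x3, 5x5 and pooled 1x1) fused by concatenation."""

    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, kernel_size=1),
                                nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, kernel_size=1),
                                nn.Conv2d(branch_ch, branch_ch, kernel_size=5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, kernel_size=1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)


class ImprovedResNextModule(nn.Module):
    """Improved ResNext module: a residual block followed by an Inception block."""

    def __init__(self, in_ch=32, out_ch=32):
        super().__init__()
        self.res = ResidualBlock(in_ch, out_ch)
        self.inc = InceptionBlock(out_ch, branch_ch=out_ch // 4)   # 4 branches -> out_ch channels

    def forward(self, x):
        return self.inc(self.res(x))
```

In this sketch pooling is applied only along the spectral axis, so the module also works on the nearly one-dimensional feature maps produced from spectra.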
Further, the step 2) includes the steps of:
(21) The overall Raman spectrum recognition network based on the improved ResNext is divided into three parts. The first part consists of two one-dimensional convolution layers and a feature-splicing layer. Each one-dimensional convolution has size 7 × 1 × 16; the inputs are the Raman spectrum of a pure substance and the Raman spectrum of the sample to be tested, each of size 1753 × 1. Features are extracted by the one-dimensional convolutions, and the two groups of 16 features are concatenated (Concat) to obtain 32 features. The second part is the improved ResNext module used for feature extraction, which consists of one residual block and one Inception block. The feature output X of the first part first passes through the residual block: a 2D convolution layer with a 3 × 3 kernel followed by batch normalization and the nonlinear activation function ReLU, with this operation applied twice, giving the output Y1, which can be expressed as:
Y1 = ReLU(BN(Conv3(X1)))   (1)
where X1 is:
X1 = ReLU(BN(Conv3(X)))   (2)
In the skip connection of the residual block, a 2D convolution with a 1 × 1 kernel processes the input feature X to obtain the output feature Y2:
Y2 = Conv1(X)   (3)
Y1 and Y2 are then summed, passed through batch normalization and the ReLU activation function, and max-pooled to obtain the output feature Y3:
Y3 = MP(ReLU(BN(Y1 + Y2)))   (4)
The output features of the residual block then enter the Inception structure: after dimensionality reduction by 1 × 1 convolutions, the grouped branches pass through convolutions with different kernel sizes, and the branch outputs are concatenated (Concat) to give the final output feature:
Y = CAT(Y4, Y5, Y6, Y7)   (5)
where Y4, Y5, Y6 and Y7 are:
Y4 = Conv1(Y3)   (6)
Y5 = Conv3(Conv1(Y3))   (7)
Y6 = Conv5(Conv1(Y3))   (8)
Y7 = Conv1(MP(Y3))   (9)
The third part is the output network. The output features of the second part pass through a ReLU function into a fully connected layer, a dropout operation increases the robustness of the model, and a second fully connected layer produces the final output: two probability values representing whether or not the sample to be tested contains the pure substance.
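A hedged sketch of the three-part architecture described by equations (1)-(9) follows. It reuses the ImprovedResNextModule class sketched after step 1); the hidden fully connected size, the dropout rate, and the way the concatenated 1D features are treated as a two-dimensional feature map are assumptions, since the patent does not state them.

```python
import torch
import torch.nn as nn


class FoodAdditiveNet(nn.Module):
    """Three-part network: two 1D convolution branches + Concat (part 1),
    the improved ResNext module (part 2), and the fully connected output head (part 3)."""

    def __init__(self, n_outputs=2):
        super().__init__()
        # Part 1: one 7-wide 1D convolution with 16 filters per input spectrum.
        self.branch_pure = nn.Conv1d(1, 16, kernel_size=7, padding=3)
        self.branch_test = nn.Conv1d(1, 16, kernel_size=7, padding=3)
        # Part 2: improved ResNext module (residual block + Inception block).
        self.resnext = ImprovedResNextModule(in_ch=32, out_ch=32)
        # Part 3: ReLU -> FC -> dropout -> FC -> two probability values.
        self.head = nn.Sequential(
            nn.ReLU(inplace=True), nn.Flatten(),
            nn.LazyLinear(128),            # hidden size is an assumption
            nn.Dropout(p=0.5),
            nn.Linear(128, n_outputs),
            nn.Softmax(dim=1),
        )

    def forward(self, pure, test):
        # pure, test: (batch, 1, 1753) Raman spectra of the pure substance and the sample.
        f = torch.cat([self.branch_pure(pure), self.branch_test(test)], dim=1)  # (B, 32, 1753)
        f = f.unsqueeze(-1)   # treat the 32 concatenated features as a 2D map (assumption)
        f = self.resnext(f)
        return self.head(f)


# Shape check on random data:
net = FoodAdditiveNet()
probs = net(torch.randn(4, 1, 1753), torch.randn(4, 1, 1753))
print(probs.shape)   # torch.Size([4, 2])
```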
(22) Raman spectra of various pure substances are collected and preprocessed by denoising, baseline correction, and similar operations. For each pure substance, multiple simulated mixture spectra are generated in random proportions, with positive spectra (containing that pure substance) and negative spectra (not containing it) each accounting for half.
(23) The model parameters are determined, and the one-dimensional Raman spectrum data in the training set are loaded into the model for training.
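A possible NumPy implementation of the simulated-mixture generation in step (22) is sketched below. The mixing model (a random convex combination of pure-substance spectra plus small Gaussian noise) and the number of components per mixture are assumptions; the patent only specifies random proportions with positive and negative spectra in equal numbers.

```python
import numpy as np


def simulate_mixtures(pure_spectra, target_idx, n_samples=1000, rng=None):
    """Generate simulated mixture spectra for one target pure substance.

    pure_spectra : (n_substances, n_points) array of preprocessed pure spectra.
    target_idx   : index of the pure substance the classifier should detect.
    Returns (mixtures, labels) with the target present in exactly half the samples.
    """
    rng = np.random.default_rng(rng)
    n_sub, n_points = pure_spectra.shape
    mixtures = np.empty((n_samples, n_points))
    labels = np.zeros(n_samples, dtype=np.int64)
    others = [j for j in range(n_sub) if j != target_idx]
    for i in range(n_samples):
        positive = i < n_samples // 2                     # first half: target present
        chosen = list(rng.choice(others, size=int(rng.integers(1, 4)), replace=False))
        if positive:
            chosen.append(target_idx)
        weights = rng.dirichlet(np.ones(len(chosen)))     # random proportions summing to 1
        spectrum = weights @ pure_spectra[chosen]
        spectrum = spectrum + rng.normal(0.0, 0.01, n_points)  # small noise term (assumption)
        mixtures[i] = spectrum
        labels[i] = int(positive)
    return mixtures, labels
```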
Still further, the procedure of step 3) is as follows: mixture solutions containing food additives in different proportions are prepared; the Raman spectra of the mixtures to be tested are acquired with a laboratory-built Raman spectrum acquisition system and preprocessed; the data to be tested are then input into the trained network, which finally outputs the substances the mixture may contain, completing the detection of the food additives.
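Assuming the FoodAdditiveNet sketch above has been trained, the detection step could look like the following; the preprocessing helper and the 0.5 decision threshold are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np
import torch


def preprocess(spectrum):
    """Stand-in for the denoising / baseline-correction / normalisation step."""
    spectrum = spectrum - spectrum.min()
    return spectrum / (spectrum.max() + 1e-8)


def detect_additives(net, mixture_spectrum, pure_library, names, threshold=0.5):
    """Screen one measured mixture spectrum against a library of pure additive spectra."""
    net.eval()
    mix = torch.tensor(preprocess(mixture_spectrum), dtype=torch.float32).view(1, 1, -1)
    detected = []
    with torch.no_grad():
        for name, pure in zip(names, pure_library):
            ref = torch.tensor(preprocess(pure), dtype=torch.float32).view(1, 1, -1)
            p_present = net(ref, mix)[0, 1].item()   # index 1 = "present" (assumed ordering)
            if p_present > threshold:
                detected.append((name, p_present))
    return detected
```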
A second aspect of the invention relates to a food additive detection device based on an improved ResNext model combined with Raman spectroscopy, comprising a memory and one or more processors, the memory storing executable code which, when executed by the one or more processors, implements the food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to the invention.
A third aspect of the invention relates to a computer-readable storage medium on which a program is stored; when the program is executed by a processor, it implements the food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to the invention.
The main body of the network uses a residual structure and an Inception structure for feature extraction. The skip connections introduced in the residual structure allow information to flow directly through the network, so effective gradients are obtained more easily during training; this alleviates the vanishing-gradient problem, and the residual structure generally converges faster than a conventional convolutional structure. The convolution kernels of different sizes in the Inception structure reduce the number of parameters to a certain extent, improve the parameter efficiency of the network, and lower the risk of overfitting.
Compared with the prior art, the invention has the following beneficial effects: 1. The number of model hyperparameters is reduced, which makes the model easier to port. 2. The feature-extraction capability is stronger, which improves the accuracy of substance identification.
Drawings
FIG. 1 is a network architecture of the present invention;
FIG. 2 is a residual block of the present invention;
FIG. 3 is an Inception structure of the present invention;
fig. 4 is a network flow diagram of the present invention.
Detailed Description
The invention aims to solve the problem of food additive detection and is further described below with reference to the accompanying drawings and embodiments:
example 1
Referring to FIG. 4, the food additive Raman spectrum detection method based on the improved ResNext model comprises the following steps:
1) The construction of the improved ResNext module for feature extraction specifically comprises the following steps:
(11) The improved ResNext module first consists of a residual block; the input features enter the residual part and the direct-mapping part of the residual block respectively. The structure of the residual block is shown in FIG. 2: the residual part consists of two convolution layers with 3 × 3 kernels, the direct-mapping part consists of one convolution layer with a 1 × 1 kernel, and the outputs of the two parts are then summed;
(12) The output features of the residual block then enter the Inception structure. The Inception structure, shown in FIG. 3, consists of four groups of convolution layers with different kernel sizes; after the output features of the residual block pass through these grouped convolutions, the branch outputs are concatenated (Concat).
2) Referring to FIG. 1, the improved ResNext neural network model is constructed, parameters are set, and training is performed; the overall training process of the model is shown in FIG. 4.
(21) The whole neural network model consists of three parts. The first part consists of two one-dimensional convolution layers and a feature-splicing layer. Each one-dimensional convolution has size 7 × 1 × 16; the inputs are the Raman spectrum of a pure substance and the Raman spectrum of the sample to be tested, each of size 1753 × 1. Features are extracted by the one-dimensional convolutions, and the two groups of 16 features are concatenated (Concat) to obtain 32 features. The second part, responsible for feature extraction, is the improved ResNext module. The feature output X of the first part first passes through the residual block: a 2D convolution layer with a 3 × 3 kernel followed by batch normalization and the nonlinear activation function ReLU, with this operation applied twice, giving the output Y1, which can be expressed as:
Y1 = ReLU(BN(Conv3(X1)))   (1)
where X1 is:
X1 = ReLU(BN(Conv3(X)))   (2)
In the skip connection of the residual block, a 2D convolution with a 1 × 1 kernel processes the input feature X to obtain the output feature Y2:
Y2 = Conv1(X)   (3)
Y1 and Y2 are then summed, passed through batch normalization and the ReLU activation function, and max-pooled to obtain the output feature Y3:
Y3 = MP(ReLU(BN(Y1 + Y2)))   (4)
The output features of the residual block then enter the Inception structure: after dimensionality reduction by 1 × 1 convolutions, the grouped branches pass through convolutions with different kernel sizes, and the branch outputs are concatenated (Concat) to give the final output feature:
Y = CAT(Y4, Y5, Y6, Y7)   (5)
where Y4, Y5, Y6 and Y7 are:
Y4 = Conv1(Y3)   (6)
Y5 = Conv3(Conv1(Y3))   (7)
Y6 = Conv5(Conv1(Y3))   (8)
Y7 = Conv1(MP(Y3))   (9)
The third part is the output network. The output features of the second part pass through a ReLU function into a fully connected layer, a dropout operation increases the robustness of the model, and a second fully connected layer produces the final output: two probability values representing whether or not the sample to be tested contains the pure substance.
(22) The Raman spectra of 50 different pure substances are acquired with a Raman spectrum acquisition system and preprocessed by denoising, baseline correction, and similar operations. For each pure substance, multiple simulated mixture spectra are generated in random proportions, with positive spectra (containing that pure substance) and negative spectra (not containing it) each accounting for half.
(23) The model parameters are determined, and the one-dimensional Raman spectrum data in the training set are loaded into the model for training.
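A minimal training-loop sketch for step (23) is given below, assuming the FoodAdditiveNet and simulate_mixtures sketches from the description; the optimiser (Adam), learning rate, and loss function are assumptions, as the patent does not specify them. Because the sketched network already ends in a Softmax, the loss is computed as negative log-likelihood on the log of the output probabilities rather than with CrossEntropyLoss.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def train(net, pure_refs, mixtures, labels, epochs=50, lr=1e-3, batch_size=32):
    """pure_refs, mixtures: (N, 1753) float arrays; labels: (N,) array of 0/1."""
    ds = TensorDataset(torch.tensor(pure_refs, dtype=torch.float32).unsqueeze(1),
                       torch.tensor(mixtures, dtype=torch.float32).unsqueeze(1),
                       torch.tensor(labels, dtype=torch.long))
    loader = DataLoader(ds, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.NLLLoss()
    for epoch in range(epochs):
        net.train()
        total = 0.0
        for ref, mix, y in loader:
            opt.zero_grad()
            log_probs = torch.log(net(ref, mix) + 1e-8)   # network already ends in Softmax
            loss = loss_fn(log_probs, y)
            loss.backward()
            opt.step()
            total += loss.item() * y.size(0)
        print(f"epoch {epoch + 1}: mean loss {total / len(ds):.4f}")
```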
3) Raman spectra of mixtures containing food additives are identified using the trained improved ResNext neural network. The process is as follows:
Mixture solutions containing food additives in different proportions are prepared; the Raman spectra of the mixtures to be tested are acquired with a laboratory-built Raman spectrum acquisition system and preprocessed; the data to be tested are then input into the trained network, which finally outputs the substances the mixture may contain, completing the detection of the food additives.
Example 2
This embodiment relates to a food additive detection device based on an improved ResNext model combined with Raman spectroscopy, comprising a memory and one or more processors, the memory storing executable code; when the one or more processors execute the code, the food additive detection method based on an improved ResNext model combined with Raman spectroscopy of Embodiment 1 is implemented.
Example 3
This embodiment relates to a computer-readable storage medium on which a program is stored; when the program is executed by a processor, it implements the food additive detection method of Embodiment 1 based on an improved ResNext model combined with Raman spectroscopy.
The embodiments described in this specification are merely illustrative of ways in which the inventive concept may be implemented. The scope of the present invention should not be construed as being limited to the specific forms set forth in the embodiments; it also covers equivalents that would occur to one skilled in the art based on the inventive concept.
Claims (6)
1. A food additive detection method based on an improved ResNext model combined with Raman spectroscopy, comprising the following steps:
1) Constructing an improved ResNext module for feature extraction;
2) Constructing an improved ResNext neural network model, setting parameters and training;
3) Identifying Raman spectra of mixtures containing food additives using the trained improved ResNext neural network.
2. The food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to claim 1, wherein step 1) comprises the following steps:
(11) The ResNext model is improved. The improved neural network module works as follows: the spliced two-dimensional feature map is fed into a residual block with 32 output channels for feature extraction, and the 32 extracted features are then passed through an Inception block that performs grouped convolution and fuses features at different scales to obtain 32 new features;
(12) The ordinary convolution module is replaced by this improved ResNext module, which strengthens the feature-extraction capability of the network.
3. The food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to claim 1, wherein step 2) comprises the following steps:
(21) The whole Raman spectrum recognition network based on the improved ResNext is divided into three parts: the first part consists of two one-dimensional convolution layers and a feature-splicing layer, each one-dimensional convolution having size 7 × 1 × 16; the inputs are the Raman spectrum of a pure substance and the Raman spectrum of the sample to be tested, each of size 1753 × 1; features are extracted by the one-dimensional convolutions, and the two groups of 16 extracted features are concatenated (Concat) to obtain 32 features; the second part is the improved ResNext module for feature extraction, consisting of one residual block and one Inception block; the feature output X of the first part first passes through the residual block: a 2D convolution layer with a 3 × 3 kernel followed by batch normalization and the nonlinear activation function ReLU, with this operation applied twice, giving the output Y1, which can be expressed as:
Y1 = ReLU(BN(Conv3(X1)))   (1)
where X1 is:
X1 = ReLU(BN(Conv3(X)))   (2)
In the skip connection of the residual block, a 2D convolution with a 1 × 1 kernel processes the input feature X to obtain the output feature Y2:
Y2 = Conv1(X)   (3)
Y1 and Y2 are then summed, passed through batch normalization and the ReLU activation function, and max-pooled to obtain the output feature Y3:
Y3 = MP(ReLU(BN(Y1 + Y2)))   (4)
The output features of the residual block then enter the Inception structure: after dimensionality reduction by 1 × 1 convolutions, the grouped branches pass through convolutions with different kernel sizes, and the branch outputs are concatenated (Concat) to give the final output feature:
Y = CAT(Y4, Y5, Y6, Y7)   (5)
where Y4, Y5, Y6 and Y7 are:
Y4 = Conv1(Y3)   (6)
Y5 = Conv3(Conv1(Y3))   (7)
Y6 = Conv5(Conv1(Y3))   (8)
Y7 = Conv1(MP(Y3))   (9)
The third part is the output network: the output features of the second part pass through a ReLU function into a fully connected layer, a dropout operation increases the robustness of the model, and a second fully connected layer produces the final output, namely two probability values representing whether or not the sample to be tested contains the pure substance;
(22) Collecting Raman spectra of various pure substances and performing denoising and baseline-correction preprocessing; generating multiple simulated mixture spectra for each pure substance in random proportions, with positive and negative spectra each accounting for half;
(23) Determining the model parameters, and loading the one-dimensional Raman spectrum data in the training set into the model for training.
4. The food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to claim 1, wherein the process of step 3) is as follows: mixture solutions containing food additives in different proportions are prepared; the Raman spectra of the mixtures to be tested are acquired with a laboratory-built Raman spectrum acquisition system and preprocessed; the data to be tested are then input into the trained network, which finally outputs the substances the mixture may contain, completing the detection of the food additives.
5. A food additive detection device based on an improved ResNext model combined with Raman spectroscopy, comprising a memory and one or more processors, the memory storing executable code, wherein the one or more processors, when executing the executable code, implement the food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to any one of claims 1 to 4.
6. A computer-readable storage medium on which a program is stored, wherein the program, when executed by a processor, implements the food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311296175.7A CN117607120A (en) | 2023-10-08 | 2023-10-08 | Food additive Raman spectrum detection method and device based on improved Resnext model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311296175.7A CN117607120A (en) | 2023-10-08 | 2023-10-08 | Food additive Raman spectrum detection method and device based on improved Resnext model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117607120A true CN117607120A (en) | 2024-02-27 |
Family
ID=89952202
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311296175.7A Pending CN117607120A (en) | 2023-10-08 | 2023-10-08 | Food additive Raman spectrum detection method and device based on improved Resnext model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117607120A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117912599A (en) * | 2024-03-20 | 2024-04-19 | 西安大业食品有限公司 | Food additive detection method based on artificial intelligence |
CN117912599B (en) * | 2024-03-20 | 2024-05-28 | 西安大业食品有限公司 | Food additive detection method based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||