CN117607120A - Food additive Raman spectrum detection method and device based on improved Resnext model


Info

Publication number
CN117607120A
Authority
CN
China
Prior art keywords: resnext, model, improved, raman, convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311296175.7A
Other languages
Chinese (zh)
Inventor
浦世亮
杜康
张怡龙
毛慧
陈朋
王海霞
蔡宏
朱镇峰
张世峰
梁荣华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Zhejiang University of Technology ZJUT
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT and Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202311296175.7A
Publication of CN117607120A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 - Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 - Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/65 - Raman scattering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00 - Features of devices classified in G01N21/00
    • G01N2201/12 - Circuits of general importance; Signal processing
    • G01N2201/129 - Using chemometrical methods
    • G01N2201/1296 - Using chemometrical methods using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biochemistry (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

A food additive detection method and device based on an improved ResNext model combined with Raman spectroscopy. The method comprises the following steps: 1) constructing an improved ResNext module for feature extraction; 2) constructing an improved ResNext neural network model, setting its parameters, and training it; 3) identifying Raman spectra of mixtures containing food additives using the trained improved ResNext neural network. The invention reduces the number of model hyper-parameters, making the model easier to port, and it has stronger feature extraction capability, improving the accuracy of substance identification.

Description

Food additive Raman spectrum detection method and device based on improved Resnext model
Technical Field
The invention relates to Raman spectroscopy, and in particular to a method and device for rapidly identifying substances using Raman spectroscopy and deep learning.
Background
Although food additives can increase dietary nutrition (as nutrition enhancers), prolong the shelf life of food and improve its sensory properties to meet consumer expectations, their negative effects are not negligible. Studies have shown that excessive use of food additives may cause diseases such as allergies, diabetes, obesity and metabolic disorders. Food additives used or produced in industrial processes should therefore be identified and quantified. In addition, some manufacturers use illegal additives that are extremely harmful to the human body, which also requires careful monitoring.
Conventional detection methods, however, have drawbacks: they require specially trained personnel, involve complex and time-consuming sample preparation, must be carried out in a laboratory, and are poorly suited to large-scale screening. Raman spectroscopy is an advanced, non-invasive technique that is simple and fast to operate and allows non-destructive detection, making it suitable for detecting food additives. Each additive has a unique pattern of molecular vibrations and therefore a Raman spectrum with characteristic peak positions and intensities, from which the chemical composition, structure and purity of the additive can be determined. By comparison with known standard samples, food additives can be identified and even quantified from the characteristic peaks of their Raman spectra.
In recent years, deep learning, with its flexible architectures and efficient algorithms, has learned multi-level representations from large amounts of raw data and achieved strong performance in fields such as computer vision, natural language processing and speech recognition. Deep learning can likewise be applied to substance identification, combined with Raman spectroscopy to detect food additives.
In view of this, this patent proposes identifying food additives with an improved ResNext model combined with Raman spectroscopy.
Disclosure of Invention
To address the problem of food additive detection, the invention provides a food additive Raman spectrum detection method and device based on an improved ResNext model. It combines ResNet's residual structure with Inception-style grouped convolution and adopts a repeated sub-module topology, which reduces the number of model hyper-parameters and makes the model easier to port.
To achieve the above purpose, the technical scheme adopted by the invention is as follows:
The food additive detection method based on the improved ResNext model combined with Raman spectroscopy comprises the following steps:
1) Constructing an improved ResNext module for feature extraction;
2) Constructing an improved ResNext neural network model, setting parameters and training;
3) Identifying Raman spectra of mixtures containing food additives using the trained improved ResNext neural network.
Further, the step 1) includes the steps of:
(11) The ResNext model is improved, and the improved neural network module works as follows: the spliced two-dimensional feature map is sent into a residual block with 32 output channels to extract features, the 32 extracted features are then processed by grouped convolution in an Inception block, and features of different scales are fused to obtain 32 new features;
(12) The ordinary convolution module is replaced by the improved ResNext module, which enhances the feature extraction capability of the network.
Further, the step 2) includes the steps of:
(21) The improved-ResNext-based Raman spectrum recognition network is divided into three parts. The first part consists of two one-dimensional convolution networks of size 7×1×16 and a feature splicing layer; the input data are the Raman spectrum of a pure substance and the Raman spectrum of the object under test, each of size 1753×1. Features are extracted by the one-dimensional convolution networks, and the two sets of 16 features are concatenated (Concat) to obtain 32 features. The second part is the improved ResNext module used for feature extraction, which consists of one residual block and one Inception block. The feature output X of the first part first passes through the residual block: a 2D convolution layer with kernel size 3×3, followed by batch normalization and the nonlinear activation function ReLU; this operation is then repeated to obtain the output Y1, which can be expressed as:
Y1 = ReLU(BN(Conv3(X1)))   (1)
where X1 is:
X1 = ReLU(BN(Conv3(X)))   (2)
In the skip connection of the residual block, a 2D convolution with a 1×1 kernel processes the input feature X to obtain the output feature Y2:
Y2 = Conv1(X)   (3)
Y1 and Y2 are then summed, passed through batch normalization and the ReLU activation, and max-pooled to obtain the output feature Y3:
Y3 = MP(ReLU(BN(Y1 + Y2)))   (4)
The output features of the residual block then enter the Inception structure: after dimensionality reduction, the groups pass through convolution branches with different kernel sizes and are concatenated (Concat) to give the final output features:
Y = CAT(Y4, Y5, Y6, Y7)   (5)
where Y4, Y5, Y6, Y7 are expressed as:
Y4 = Conv1(Y3)   (6)
Y5 = Conv3(Conv1(Y3))   (7)
Y6 = Conv5(Conv1(Y3))   (8)
Y7 = Conv1(MP(Y3))   (9)
The third part is the output network: the output features of the second part pass through a ReLU function and enter a fully-connected layer; a dropout operation increases the robustness of the model; a second fully-connected layer then produces the final output, two probability values indicating whether the object under test contains the pure substance. A code sketch of this architecture is given below.
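The following is a minimal PyTorch sketch of the three-part network described in (21). Sizes stated in the text (two 7×1×16 one-dimensional stems, 1753×1 inputs, 32-channel concatenation, 3×3 and 1×1 residual convolutions, 1×1/3×3/5×5/pooling Inception branches, two output values) are followed where given; everything the text leaves open (padding, pooling sizes, the per-branch channel count of 8, the hidden width of the first fully-connected layer, the dropout rate) is an illustrative assumption rather than the patent's actual configuration.

```python
import torch
import torch.nn as nn

class ImprovedResNeXtRamanNet(nn.Module):
    """Sketch of the three-part network: 1-D stems + improved ResNext block + classifier."""

    def __init__(self, spectrum_len: int = 1753, branch_ch: int = 8, dropout: float = 0.5):
        super().__init__()
        # Part 1: two 1-D convolution stems (7 x 1 x 16), one for the pure-substance
        # spectrum and one for the spectrum under test, followed by concatenation.
        self.stem_pure = nn.Conv1d(1, 16, kernel_size=7, padding=3)
        self.stem_test = nn.Conv1d(1, 16, kernel_size=7, padding=3)

        # Part 2a: residual block, equations (1)-(4).
        self.res_main = nn.Sequential(
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),                         # X1 = ReLU(BN(Conv3(X)))
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),                         # Y1 = ReLU(BN(Conv3(X1)))
        )
        self.res_skip = nn.Conv2d(32, 32, kernel_size=1)   # Y2 = Conv1(X)
        self.res_post = nn.Sequential(
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=(2, 1)),              # Y3 = MP(ReLU(BN(Y1 + Y2)))
        )

        # Part 2b: Inception block, equations (5)-(9); branch_ch channels per branch
        # so that the four concatenated branches return to 32 channels.
        self.b1 = nn.Conv2d(32, branch_ch, kernel_size=1)                        # Y4
        self.b3 = nn.Sequential(nn.Conv2d(32, branch_ch, kernel_size=1),
                                nn.Conv2d(branch_ch, branch_ch, 3, padding=1))   # Y5
        self.b5 = nn.Sequential(nn.Conv2d(32, branch_ch, kernel_size=1),
                                nn.Conv2d(branch_ch, branch_ch, 5, padding=2))   # Y6
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(32, branch_ch, kernel_size=1))         # Y7

        # Part 3: output network -> two values (softmax of these gives the two probabilities).
        feat_len = spectrum_len // 2                       # length after the (2, 1) max pool
        self.head = nn.Sequential(
            nn.ReLU(inplace=True), nn.Flatten(),
            nn.Linear(4 * branch_ch * feat_len, 128), nn.Dropout(dropout),
            nn.Linear(128, 2),
        )

    def forward(self, pure: torch.Tensor, test: torch.Tensor) -> torch.Tensor:
        # pure, test: (batch, 1, 1753) one-dimensional Raman spectra.
        f = torch.cat([self.stem_pure(pure), self.stem_test(test)], dim=1)   # (B, 32, L)
        x = f.unsqueeze(-1)                    # treat the spliced features as a 2-D map
        y3 = self.res_post(self.res_main(x) + self.res_skip(x))
        y = torch.cat([self.b1(y3), self.b3(y3), self.b5(y3), self.bp(y3)], dim=1)
        return self.head(y)                    # logits; softmax yields the two probabilities
```

For example, calling ImprovedResNeXtRamanNet()(torch.randn(4, 1, 1753), torch.randn(4, 1, 1753)) returns a (4, 2) tensor of logits, one pair per spectrum pair in the batch.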
(22) Raman spectra of various pure substances are collected and subjected to preprocessing operations such as denoising and baseline correction. For each pure substance, multiple simulated mixture spectra are generated in random proportions, with positive spectra (containing that pure substance) and negative spectra (not containing it) each accounting for half.
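As an illustration of the simulated-mixture step in (22), the sketch below linearly combines preprocessed pure-substance spectra in random proportions, producing half positive and half negative samples for one target substance. The linear mixing model, the Dirichlet weights and the number of co-mixed substances are assumptions made here for illustration; the patent does not state how its simulated spectra are generated.

```python
import numpy as np

def simulate_mixtures(pure_spectra: np.ndarray, target_idx: int,
                      n_samples: int = 1000, seed: int = 0):
    """pure_spectra: (n_pure, n_points) preprocessed pure-substance spectra.
    Returns (spectra, labels) where label 1 means the target substance is present."""
    rng = np.random.default_rng(seed)
    n_pure, _ = pure_spectra.shape
    others = [k for k in range(n_pure) if k != target_idx]
    spectra, labels = [], []
    for i in range(n_samples):
        positive = i < n_samples // 2                       # half positive, half negative
        picks = list(rng.choice(others, size=int(rng.integers(1, 4)), replace=False))
        if positive:
            picks.append(target_idx)
        weights = rng.dirichlet(np.ones(len(picks)))        # random proportions summing to 1
        spectra.append(weights @ pure_spectra[picks])       # simple linear mixing assumption
        labels.append(int(positive))
    return np.asarray(spectra), np.asarray(labels)
```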
(23) Model parameters are determined, and the one-dimensional Raman spectrum data in the training set are loaded into the model for training.
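A minimal training-loop sketch for step (23), assuming the network class and the simulated data from the sketches above, with batches of (pure spectrum, mixture spectrum, 0/1 label). The Adam optimizer, learning rate and epoch count are illustrative assumptions; the patent does not state its training hyper-parameters.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-3, device: str = "cpu"):
    """loader yields (pure, test, label): (B,1,1753) floats, (B,1,1753) floats, (B,) long."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for pure, test, label in loader:
            pure, test, label = pure.to(device), test.to(device), label.to(device)
            opt.zero_grad()
            loss = loss_fn(model(pure, test), label)   # cross-entropy on the two-way logits
            loss.backward()
            opt.step()
    return model
```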
Still further, the procedure of step 3) is as follows: mixture solutions containing food additives in different proportions are prepared; Raman spectra of the mixtures to be tested are acquired with a self-developed laboratory Raman spectrum acquisition system and preprocessed; the data to be tested are input into the trained network, which finally outputs the substances the mixture may contain, completing the detection of the food additives.
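A hedged sketch of detection step 3): the preprocessed spectrum of the mixture under test is paired with each pure-substance spectrum in a reference library and passed through the trained network, and substances whose "present" probability exceeds a threshold are reported. The index convention for the two probabilities and the 0.5 threshold are assumptions, and the spectra are assumed to be preprocessed already.

```python
import torch

@torch.no_grad()
def detect_additives(model, mixture_spectrum, pure_library, names, threshold: float = 0.5):
    """Return the names of library substances the model judges to be present in the mixture."""
    model.eval()
    test = torch.as_tensor(mixture_spectrum, dtype=torch.float32).view(1, 1, -1)
    found = []
    for name, pure in zip(names, pure_library):           # one forward pass per candidate
        ref = torch.as_tensor(pure, dtype=torch.float32).view(1, 1, -1)
        probs = torch.softmax(model(ref, test), dim=1)    # [P(present), P(absent)] assumed
        if probs[0, 0].item() >= threshold:
            found.append(name)
    return found
```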
A second aspect of the invention relates to a food additive detection device based on an improved ResNext model combined with Raman spectroscopy, comprising a memory and one or more processors, the memory storing executable code which, when executed by the one or more processors, implements the food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to the invention.
A third aspect of the invention relates to a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the food additive detection method based on an improved ResNext model combined with Raman spectroscopy according to the invention.
The main body of the network uses a residual structure and an Inception structure for feature extraction. The skip connection introduced in the residual structure allows information to be transmitted directly through the network, so that effective gradients are obtained more easily during training, alleviating the vanishing-gradient problem; the residual structure also generally converges faster than a traditional convolutional structure. The convolution kernels of different sizes in the Inception structure reduce the number of parameters to some extent, improving the parameter efficiency of the network and lowering the risk of overfitting.
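To make the parameter-efficiency point concrete, the short snippet below counts weights for a single plain 5×5 convolution versus a 1×1-then-5×5 bottleneck branch, using the 32-input / 8-per-branch channel widths assumed in the earlier network sketch (these widths are assumptions, not figures from the patent):

```python
# Illustrative weight counts (biases ignored) showing why the 1x1 "bottleneck"
# branches of the Inception block are cheaper than one wide convolution.
direct_5x5 = 32 * 32 * 5 * 5                  # plain 5x5 conv, 32 -> 32 channels
bottleneck = 32 * 8 * 1 * 1 + 8 * 8 * 5 * 5   # 1x1 reduction to 8, then 5x5, 8 -> 8
print(direct_5x5, bottleneck)                 # 25600 vs 1856 weights
```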
Compared with the prior art, the invention has the following beneficial effects: 1. the number of model hyper-parameters is reduced, making the model easier to port; 2. the method has stronger feature extraction capability, improving the accuracy of substance identification.
Drawings
FIG. 1 is a network architecture of the present invention;
FIG. 2 is a residual block of the present invention;
FIG. 3 is the Inception structure of the present invention;
fig. 4 is a network flow diagram of the present invention.
Detailed Description
The invention aims to solve the problem of food additive detection and is further described below with reference to the accompanying drawings and embodiments:
example 1
Referring to fig. 4, the food additive Raman spectrum detection method based on the improved ResNext model comprises the following steps:
1) The construction of the improved ResNext module for feature extraction specifically comprises the following steps:
(11) The improved ResNext module first consists of one residual block, with the input features entering the residual part and the direct-mapping part of the block respectively. The structure of the residual block is shown in fig. 2: the residual part consists of two convolution layers with kernel size 3×3, the direct-mapping part consists of one convolution layer with kernel size 1×1, and the outputs of the two parts are then summed;
(12) The output features of the residual block enter the following Inception structure. The Inception structure is shown in fig. 3; it consists of four groups of convolution layers with different kernel sizes, and after the output features of the residual block pass through the grouped convolutions, the outputs are concatenated (Concat).
2) Referring to fig. 1, an improved ResNext neural network model is constructed, parameters are set, training is performed, and the whole training process of the model is shown in fig. 4.
(21) The whole neural network model consists of three parts. The first part consists of two one-dimensional convolution networks of size 7×1×16 and a feature splicing layer; the input data are the Raman spectrum of a pure substance and the Raman spectrum of the object under test, each of size 1753×1. Features are extracted by the one-dimensional convolution networks, and the two sets of 16 features are concatenated (Concat) to obtain 32 features. The second part is responsible for feature extraction, which is performed by the improved ResNext module. The feature output X of the first part first passes through the residual block: a 2D convolution layer with kernel size 3×3, followed by batch normalization and the nonlinear activation function ReLU; this operation is then repeated to obtain the output Y1, which can be expressed as:
Y1 = ReLU(BN(Conv3(X1)))   (1)
where X1 is:
X1 = ReLU(BN(Conv3(X)))   (2)
In the skip connection of the residual block, a 2D convolution with a 1×1 kernel processes the input feature X to obtain the output feature Y2:
Y2 = Conv1(X)   (3)
Y1 and Y2 are then summed, passed through batch normalization and the ReLU activation, and max-pooled to obtain the output feature Y3:
Y3 = MP(ReLU(BN(Y1 + Y2)))   (4)
The output features of the residual block then enter the Inception structure: after dimensionality reduction, the groups pass through convolution branches with different kernel sizes and are concatenated (Concat) to give the final output features:
Y = CAT(Y4, Y5, Y6, Y7)   (5)
where Y4, Y5, Y6, Y7 are expressed as:
Y4 = Conv1(Y3)   (6)
Y5 = Conv3(Conv1(Y3))   (7)
Y6 = Conv5(Conv1(Y3))   (8)
Y7 = Conv1(MP(Y3))   (9)
The third part is the output network: the output features of the second part pass through a ReLU function and enter a fully-connected layer; a dropout operation increases the robustness of the model; a second fully-connected layer then produces the final output, two probability values indicating whether the object under test contains the pure substance.
(22) The Raman spectra of 50 different pure substances are acquired with a Raman spectrum acquisition system and subjected to preprocessing operations such as denoising and baseline correction. For each pure substance, multiple simulated mixture spectra are generated in random proportions, with positive spectra (containing that pure substance) and negative spectra (not containing it) each accounting for half.
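As a concrete illustration of the "denoising, baseline correction" preprocessing mentioned in (22), the sketch below applies Savitzky-Golay smoothing followed by an iterative polynomial baseline fit and max normalisation. These are common chemometric choices standing in for the unspecified operations in the text; the methods and all parameters are assumptions, not the patent's procedure. It complements the mixture-simulation sketch given earlier.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(spectrum: np.ndarray, window: int = 11, order: int = 3,
               baseline_deg: int = 5, iters: int = 30) -> np.ndarray:
    """Denoise, subtract an estimated baseline and normalise one Raman spectrum."""
    x = np.arange(spectrum.size)
    smoothed = savgol_filter(spectrum, window_length=window, polyorder=order)
    baseline = smoothed.copy()
    for _ in range(iters):                 # iteratively clip peaks above the polynomial fit
        fit = np.polyval(np.polyfit(x, baseline, baseline_deg), x)
        baseline = np.minimum(baseline, fit)
    corrected = smoothed - baseline
    return corrected / (np.abs(corrected).max() + 1e-12)
```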
(23) Model parameters are determined, and the one-dimensional Raman spectrum data in the training set are loaded into the model for training.
3) Raman spectra of mixtures containing food additives are identified using the trained improved ResNext neural network. The process is as follows:
preparing mixture solutions containing food additives in different proportions, acquiring Raman spectra of the mixture to be detected by adopting a laboratory self-grinding Raman spectrum acquisition system, preprocessing, inputting data to be detected into a trained network, and finally outputting substances possibly contained in the mixture to be detected to finish the aim of detecting the food additives.
Example 2
This embodiment relates to a food additive detection device based on the improved ResNext model combined with Raman spectroscopy, comprising a memory and one or more processors, the memory storing executable code which, when executed by the one or more processors, implements the food additive detection method based on the improved ResNext model combined with Raman spectroscopy of Embodiment 1.
Example 3
This embodiment relates to a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the food additive detection method based on the improved ResNext model combined with Raman spectroscopy of Embodiment 1.
The embodiments described in this specification merely illustrate ways in which the inventive concept may be implemented. The scope of the present invention should not be construed as limited to the specific forms set forth in the embodiments; it also covers equivalents that would occur to one skilled in the art based on the inventive concept.

Claims (6)

1. A food additive detection method based on an improved ResNext model combined with Raman spectroscopy, comprising the following steps:
1) Constructing an improved ResNext module for feature extraction;
2) Constructing an improved ResNext neural network model, setting parameters and training;
3) Identifying Raman spectra of mixtures containing food additives using the trained improved ResNext neural network.
2. The food additive detection method based on the improved ResNext model combined with Raman spectroscopy of claim 1, wherein step 1) comprises the following steps:
(11) The ResNext model is improved, and the improved neural network module works as follows: the spliced two-dimensional feature map is sent into a residual block with 32 output channels to extract features, the 32 extracted features are processed by grouped convolution in an Inception block, and features of different scales are fused to obtain 32 new features;
(12) The ordinary convolution module is replaced by the improved ResNext module, which enhances the feature extraction capability of the network.
3. The food additive detection method based on the improved ResNext model combined with Raman spectroscopy of claim 1, wherein step 2) comprises the following steps:
(21) The improved-ResNext-based Raman spectrum recognition network is divided into three parts: the first part consists of two one-dimensional convolution networks of size 7×1×16 and a feature splicing layer, the input data being the Raman spectrum of a pure substance and the Raman spectrum to be tested, each of size 1753×1; features are extracted by the one-dimensional convolution networks, and the two sets of 16 features are concatenated (Concat) to obtain 32 features; the second part is the improved ResNext module used for feature extraction, consisting of one residual block and one Inception block; the feature output X of the first part first passes through the residual block: a 2D convolution layer with kernel size 3×3, followed by batch normalization and the nonlinear activation function ReLU; this operation is then repeated to obtain the output Y1, which can be expressed as:
Y1 = ReLU(BN(Conv3(X1)))   (1)
where X1 is:
X1 = ReLU(BN(Conv3(X)))   (2)
in the skip connection of the residual block, a 2D convolution with a 1×1 kernel processes the input feature X to obtain the output feature Y2:
Y2 = Conv1(X)   (3)
Y1 and Y2 are then summed, passed through batch normalization and the ReLU activation, and max-pooled to obtain the output feature Y3:
Y3 = MP(ReLU(BN(Y1 + Y2)))   (4)
the output features of the residual block then enter the Inception structure: after dimensionality reduction, the groups pass through convolution branches with different kernel sizes and are concatenated (Concat) to give the final output features:
Y = CAT(Y4, Y5, Y6, Y7)   (5)
where Y4, Y5, Y6, Y7 are expressed as:
Y4 = Conv1(Y3)   (6)
Y5 = Conv3(Conv1(Y3))   (7)
Y6 = Conv5(Conv1(Y3))   (8)
Y7 = Conv1(MP(Y3))   (9)
the third part is an output network: the output features of the second part pass through a ReLU function and enter a fully-connected layer; a dropout operation increases the robustness of the model; a second fully-connected layer then produces the final output, two probability values indicating whether the object under test contains the pure substance;
(22) Collecting Raman spectra of various pure substances and carrying out preprocessing operations of denoising and baseline correction; generating a plurality of simulated mixture spectra for each pure substance in random proportions, the positive spectra and negative spectra each accounting for half;
(23) Determining model parameters, and loading the one-dimensional Raman spectrum data in the training set into the model for training.
4. The food additive detection method based on the improved ResNext model combined with Raman spectroscopy of claim 1, wherein the process of step 3) is as follows: mixture solutions containing food additives in different proportions are prepared; Raman spectra of the mixtures to be tested are acquired with a self-developed laboratory Raman spectrum acquisition system and preprocessed; the data to be tested are input into the trained network, which finally outputs the substances the mixture may contain, completing the detection of the food additives.
5. A food additive detection device based on an improved ResNext model combined with Raman spectroscopy, comprising a memory and one or more processors, the memory storing executable code, wherein the one or more processors, when executing the executable code, implement the food additive detection method based on an improved ResNext model combined with Raman spectroscopy of any one of claims 1-4.
6. A computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the food additive detection method based on an improved ResNext model combined with Raman spectroscopy of any one of claims 1-4.
CN202311296175.7A (priority date 2023-10-08, filing date 2023-10-08): Food additive Raman spectrum detection method and device based on improved Resnext model; published as CN117607120A (pending)

Priority Applications (1)

Application number CN202311296175.7A; priority date 2023-10-08; filing date 2023-10-08; title: Food additive Raman spectrum detection method and device based on improved Resnext model

Applications Claiming Priority (1)

Application number CN202311296175.7A; priority date 2023-10-08; filing date 2023-10-08; title: Food additive Raman spectrum detection method and device based on improved Resnext model

Publications (1)

Publication number CN117607120A, published 2024-02-27

Family

ID=89952202

Family Applications (1)

CN202311296175.7A (pending; published as CN117607120A): Food additive Raman spectrum detection method and device based on improved Resnext model

Country Status (1)

CN: CN117607120A

Cited By (2)

* Cited by examiner, † Cited by third party
CN117912599A * (priority date 2024-03-20, published 2024-04-19, 西安大业食品有限公司) - Food additive detection method based on artificial intelligence
CN117912599B * (priority date 2024-03-20, published 2024-05-28, 西安大业食品有限公司) - Food additive detection method based on artificial intelligence

Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination