CN113077002A - Machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectrum - Google Patents
Machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectrum
- Publication number: CN113077002A (application CN202110381702.9A)
- Authority: CN (China)
- Prior art keywords: image, gas, gas classification, classification, value
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/24 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Classification techniques
- G06N3/045 — Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
- G06N3/084 — Computing arrangements based on biological models; Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
- G06V10/462 — Image or video recognition or understanding; Extraction of image or video features; Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT]; Salient features
Abstract
The invention provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, comprising the following steps: S1, acquiring gas classification data information through a gas sensor, collecting gas classification image data, and performing grayscale conversion and noise reduction on the image data; S2, performing convolution processing on the image data and defining an objective function for the gas classification image, then computing the convolutional-neural-network expected value of the image error according to the objective function; S3, training on the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification image, and classifying and screening the gases to form an integrated classifier.
Description
Technical Field
The invention relates to the field of machine olfaction intelligent analysis, and in particular to a machine olfaction visual sensing data analysis method based on spatial heterodyne Raman spectroscopy.
Background
Machine olfaction, also called the electronic nose, is a technology derived from the principles of bionics: multiple gas sensors with mutually overlapping responses simulate human olfactory cells, and pattern recognition methods simulate the human process of recognizing and judging olfactory information, thereby realizing intelligent perception of gases.
One reason for the problems of limited application scenarios, environmental stability and concentrated detection time is that the gas sensing technology of existing machine olfaction systems is still imperfect, and their data processing algorithms are relatively simple. Existing machine olfaction data processing algorithms give good analysis results on small samples of one-dimensional sensing data, but suffer from high computational complexity and low analysis efficiency when processing large samples of visual sensing data, and cannot classify massive visual sensing data. Those skilled in the art therefore urgently need to solve these technical problems.
Disclosure of Invention
The invention aims to solve at least the above technical problems of the prior art, and in particular provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy.
To achieve the above purpose, the present invention provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, which includes the following steps:
S1, acquiring gas classification data information through a gas sensor, collecting gas classification image data, and performing grayscale conversion and noise reduction on the image data;
S2, performing convolution processing on the image data and defining an objective function for the gas classification image; computing the convolutional-neural-network expected value of the image error according to the objective function;
S3, training on the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification image, and classifying and screening the gases to form an integrated classifier.
Further, the S1 includes:
S1-1, after collecting the gas data, classifying the gases and splitting each gas image by color channel, obtaining the L, a and b channel images I_L(n), I_a(n) and I_b(n);
wherein I_L(n), I_a(n) and I_b(n) denote the images of channels L, a and b respectively, and I_i(n) denotes the grayscale image; a gas classification screening feature description histogram M_feature = N(I_i(n)) is obtained, where N() extracts the screening features of the grayscale image I_i(n) in any gas classification.
Further, the S1 further includes:
S1-2, in the grayscale image I_i(n), modeling white Gaussian noise as p = {p(i)·λ(i) | i ∈ I}: the noisy grayscale estimate p(i) in any gas classification image i is multiplied by a filtering parameter λ(i), and the weighted average of the resulting images is obtained as p̄(i) = Σ_{j∈I} δ(i,j)·λ(j)·p(j), where δ(i,j) is the weight of the image grayscale processing, I is the pixel set of the grayscale image, and j is the noisy gas classification image;
during denoising of the grayscale image, because the gas classification screening features are distributed differently while abnormal gases share feature similarity, a feature factor and a texture factor are added to the grayscale-processing weight δ(i,j), the similarity of the abnormal gases is measured again, and the denoising process is completed,
wherein δ_1(i,j) is the feature factor of the image grayscale processing and δ_2(i,j) is the texture factor, given by the square of the absolute difference between the image feature P_i in gas classification image i and the image feature P_j in the noisy gas classification image j; the value a is the number of noisy-gas-image similarity measures.
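Step S1-2 describes a weighted-average denoising in which each pixel's weight δ(i,j) decays with feature dissimilarity. Below is a minimal sketch of that idea, assuming a Gaussian decay over squared intensity differences in a 3×3 neighbourhood; the patent's exact feature and texture factors are not given, so the decay constant h and window size are our assumptions.

```python
import numpy as np

def weighted_denoise(noisy, window=1, h=10.0):
    """Sketch of S1-2: replace each pixel by a weighted average of its
    neighbourhood, with weights delta(i, j) decaying with the squared
    difference from the centre pixel (a stand-in for the patent's
    feature/texture factors)."""
    H, W = noisy.shape
    pad = np.pad(noisy, window, mode='reflect')
    out = np.zeros_like(noisy, dtype=float)
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2 * window + 1, x:x + 2 * window + 1]
            # delta(i, j): similarity weight from squared difference
            diff2 = (patch - noisy[y, x]) ** 2
            w = np.exp(-diff2 / (h * h))
            # weighted average over the neighbourhood
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```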
Further, the S2 includes:
S2-1, after noise reduction is finished, setting the scale parameters of the image and extracting the gas classification image for convolution output processing;
the convolution output is y = r(Σ_{i=1}^{I} ω_i·x_i), where r() is a nonlinear function, I is the total number of pixels in the grayscale image of the gas classification image, ω_i are the convolution output weights of the gas classification image, and x_i are the input gas classification image feature vectors.
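The convolution output of S2-1, a nonlinear function r() applied to a weighted sum of input features, can be sketched as follows. Taking r = ReLU is our assumption (the patent leaves r() unspecified), and the kernel is applied in the usual CNN sense (cross-correlation, no flip).

```python
import numpy as np

def conv2d_relu(img, kernel):
    """Sketch of S2-1: convolution output r(sum_i w_i * x_i) with
    r = ReLU, 'valid' output size, CNN convention (no kernel flip)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # weighted sum over the window, then the nonlinearity r()
            out[y, x] = max(np.sum(img[y:y + kh, x:x + kw] * kernel), 0.0)
    return out
```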
Further, the S2 further includes:
S2-2, forming an objective function through convolutional neural network training: a gas classification image loss function over the predicted output image samples d̂ and the actual output image samples d, where l(d̂_i) is the predicted output label of the i-th gas classification image and l(d_i) is the actual output label of the i-th gas classification image.
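The S2-2 objective compares predicted output labels l(d̂_i) with actual labels l(d_i). A hedged sketch using mean squared error follows; the patent only names "a gas classification image loss function", so the squared-error form is our assumption.

```python
import numpy as np

def classification_loss(pred_labels, true_labels):
    """Sketch of the S2-2 objective: mean squared error between the
    predicted labels l(d_hat_i) and the actual labels l(d_i)."""
    pred = np.asarray(pred_labels, dtype=float)
    true = np.asarray(true_labels, dtype=float)
    # average squared label error over the training images
    return float(np.mean((pred - true) ** 2))
```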
Further, the S2 further includes:
S2-3, after the gas classification image to be identified has been run through the objective function, judging from the result output by the convolutional neural network its degree of conformity with the expected value, back-propagating, solving for the error between the gas classification image result and the expected value, and then updating the weights; the gas classification image is adjusted and acquired through the training samples and the expected value;
the expected value is calculated from G_h and G_k, where G_h is the output value of the h-th normal gas classification image data after the neural network operation, G_k is the output value of the k-th abnormal gas classification image data after the neural network operation, and ω is a characteristic regulating value.
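S2-3 back-propagates the error between the network output and the expected value and updates the weights. The single linear unit below is an illustrative stand-in for the patent's full CNN; the learning rate and the squared-error criterion are assumptions.

```python
import numpy as np

def backprop_step(w, x, target, lr=0.1):
    """Sketch of S2-3: compare the output with the expected value,
    back-propagate the error and update the weights. A single linear
    unit stands in for the patent's CNN."""
    y = float(np.dot(w, x))   # forward pass
    err = y - target          # error against the expected value
    grad = err * x            # gradient of 0.5 * err**2 w.r.t. w
    return w - lr * grad      # weight update (gradient descent)
```

Iterating this step drives the output toward the expected value, which is the "continuous optimization of the weight parameters" the method describes.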
Further, the S3 includes:
S3-1, after training on the expected value, optimizing the weight parameters of the gas classification image,
where D_{x,y} = ||h_{x,y}(k)|| is the feature verification value extracted from the gas classification image, x is the direction of feature extraction, y is the scale of feature extraction, ||·|| denotes norm calculation, h_{x,y} is the occurrence frequency of the feature at direction x and scale y, and k is the position coordinate in the gas image;
S3-2, after the optimization calculation, forming the classifier
U = U_in·K·σ
wherein U_in is the feature authentication set of the input gas classification image and K is the selected classification authentication parameter.
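Steps S3-1 and S3-2 can be sketched as computing feature-verification norms D_{x,y} and scaling them into a classifier score U = U_in·K·σ. The finite-difference reading of h_{x,y} and the scalar treatment of σ (which the source does not define) are our assumptions.

```python
import numpy as np

def feature_verification(img):
    """Sketch of S3-1: D_{x,y} read as the norm of directional
    responses h_{x,y}; finite differences are an assumed stand-in."""
    hx = np.diff(img, axis=1)   # response in the x direction
    hy = np.diff(img, axis=0)   # response in the y direction
    # D_{x,y} = ||h_{x,y}||: one verification value per direction
    return np.array([np.linalg.norm(hx), np.linalg.norm(hy)])

def classifier_score(u_in, K=1.0, sigma=1.0):
    """Sketch of S3-2: U = U_in * K * sigma, with sigma treated as a
    scalar scale factor (not defined in the source)."""
    return feature_verification(u_in) * K * sigma
```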
In summary, by adopting the above technical scheme, the invention achieves the following beneficial effects:
starting from the basic requirements that a machine olfaction system places on its gas sensor array, the invention establishes a mathematical model of high-resolution visual gas sensing based on spatial heterodyne Raman spectroscopy, and then applies it to gas sensing to construct a novel machine olfaction gas sensing system. The analysis method obtains detection results quickly and accurately when faced with large-sample, two-dimensional visual data.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a schematic diagram of the effect of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in FIG. 1 and FIG. 2, the invention provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, which comprises the following steps:
S1, acquiring gas classification data information through a gas sensor, collecting gas classification image data, and performing grayscale conversion and noise reduction on the image data;
S2, performing convolution processing on the image data and defining an objective function for the gas classification image; computing the convolutional-neural-network expected value of the image error according to the objective function;
S3, training on the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification image, and classifying and screening the gases to form an integrated classifier.
The S1 includes:
S1-1, after collecting the gas data, classifying the gases and splitting each gas image by color channel, obtaining the L, a and b channel images I_L(n), I_a(n) and I_b(n);
wherein I_L(n), I_a(n) and I_b(n) denote the images of channels L, a and b respectively, and I_i(n) denotes the grayscale image; a gas classification screening feature description histogram M_feature = N(I_i(n)) is obtained, where N() extracts the screening features of the grayscale image I_i(n) in any gas classification;
S1-2, in the grayscale image I_i(n), modeling white Gaussian noise as p = {p(i)·λ(i) | i ∈ I}: the noisy grayscale estimate p(i) in any gas classification image i is multiplied by a filtering parameter λ(i), and the weighted average of the resulting images is obtained as p̄(i) = Σ_{j∈I} δ(i,j)·λ(j)·p(j), where δ(i,j) is the weight of the image grayscale processing, I is the pixel set of the grayscale image, and j is the noisy gas classification image;
during denoising of the grayscale image, because the gas classification screening features are distributed differently while abnormal gases share feature similarity, a feature factor and a texture factor are added to the grayscale-processing weight δ(i,j), the similarity of the abnormal gases is measured again, and the denoising process is completed,
wherein δ_1(i,j) is the feature factor of the image grayscale processing and δ_2(i,j) is the texture factor, given by the square of the absolute difference between the image feature P_i in gas classification image i and the image feature P_j in the noisy gas classification image j; the value a is the number of noisy-gas-image similarity measures.
the S2 includes:
S2-1, after noise reduction is finished, setting the scale parameters of the image and extracting the gas classification image for convolution output processing;
the convolution output is y = r(Σ_{i=1}^{I} ω_i·x_i), where r() is a nonlinear function, I is the total number of pixels in the grayscale image of the gas classification image, ω_i are the convolution output weights of the gas classification image, and x_i are the input gas classification image feature vectors;
S2-2, forming an objective function through convolutional neural network training: a gas classification image loss function over the predicted output image samples d̂ and the actual output image samples d, where l(d̂_i) is the predicted output label of the i-th gas classification image and l(d_i) is the actual output label of the i-th gas classification image;
S2-3, after the gas classification image to be identified has been run through the objective function, judging from the result output by the convolutional neural network its degree of conformity with the expected value, back-propagating, solving for the error between the gas classification image result and the expected value, and then updating the weights; the gas classification image is adjusted and acquired through the training samples and the expected value;
the expected value is calculated from G_h and G_k, where G_h is the output value of the h-th normal gas classification image data after the neural network operation, G_k is the output value of the k-th abnormal gas classification image data after the neural network operation, and ω is a characteristic regulating value;
the S3 includes:
S3-1, after training on the expected value, optimizing the weight parameters of the gas classification image,
where D_{x,y} = ||h_{x,y}(k)|| is the feature verification value extracted from the gas classification image, x is the direction of feature extraction, y is the scale of feature extraction, ||·|| denotes norm calculation, h_{x,y} is the occurrence frequency of the feature at direction x and scale y, and k is the position coordinate in the gas image;
S3-2, after the optimization calculation, forming the classifier
U = U_in·K·σ
wherein U_in is the feature authentication set of the input gas classification image and K is the selected classification authentication parameter.
The machine olfaction sensing system based on the spatial heterodyne Raman spectrometer is mainly used to detect air pollution, soil pollution, water pollution, toxic substances (such as drugs and poison gases), and the like, ultimately realizing intelligent detection and real-time monitoring of air, water quality, soil, etc., for the purpose of environmental protection.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (7)
1. A machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy is characterized by comprising the following steps:
S1, acquiring gas classification data information through a gas sensor, collecting gas classification image data, and performing grayscale conversion and noise reduction on the image data;
S2, performing convolution processing on the image data and defining an objective function for the gas classification image; computing the convolutional-neural-network expected value of the image error according to the objective function;
S3, training on the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification image, and classifying and screening the gases to form an integrated classifier.
2. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 1, wherein S1 includes:
S1-1, after collecting the gas data, classifying the gases and splitting each gas image by color channel, obtaining the L, a and b channel images I_L(n), I_a(n) and I_b(n);
wherein I_L(n), I_a(n) and I_b(n) denote the images of channels L, a and b respectively, and I_i(n) denotes the grayscale image; a gas classification screening feature description histogram M_feature = N(I_i(n)) is obtained, where N() extracts the screening features of the grayscale image I_i(n) in any gas classification.
3. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 2, wherein S1 further includes:
S1-2, in the grayscale image I_i(n), modeling white Gaussian noise as p = {p(i)·λ(i) | i ∈ I}: the noisy grayscale estimate p(i) in any gas classification image i is multiplied by a filtering parameter λ(i), and the weighted average of the resulting images is obtained as p̄(i) = Σ_{j∈I} δ(i,j)·λ(j)·p(j), where δ(i,j) is the weight of the image grayscale processing, I is the pixel set of the grayscale image, and j is the noisy gas classification image;
during denoising of the grayscale image, because the gas classification screening features are distributed differently while abnormal gases share feature similarity, a feature factor and a texture factor are added to the grayscale-processing weight δ(i,j), the similarity of the abnormal gases is measured again, and the denoising process is completed,
wherein δ_1(i,j) is the feature factor of the image grayscale processing and δ_2(i,j) is the texture factor, given by the square of the absolute difference between the image feature P_i in gas classification image i and the image feature P_j in the noisy gas classification image j; the value a is the number of noisy-gas-image similarity measures.
4. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 1, wherein S2 includes:
S2-1, after noise reduction is finished, setting the scale parameters of the image and extracting the gas classification image for convolution output processing.
5. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 4, wherein S2 further includes:
S2-2, forming an objective function through convolutional neural network training.
6. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 4, wherein S2 further includes:
S2-3, after the gas classification image to be identified has been run through the objective function, judging from the result output by the convolutional neural network its degree of conformity with the expected value, back-propagating, solving for the error between the gas classification image result and the expected value, and then updating the weights; the gas classification image is adjusted and acquired through the training samples and the expected value.
7. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 1, wherein S3 includes:
S3-1, after training on the expected value, optimizing the weight parameters of the gas classification image,
where D_{x,y} = ||h_{x,y}(k)|| is the feature verification value extracted from the gas classification image, x is the direction of feature extraction, y is the scale of feature extraction, ||·|| denotes norm calculation, h_{x,y} is the occurrence frequency of the feature at direction x and scale y, and k is the position coordinate in the gas image;
S3-2, after the optimization calculation, forming the classifier
U = U_in·K·σ
wherein U_in is the feature authentication set of the input gas classification image and K is the selected classification authentication parameter.
Priority Applications (1)
- CN202110381702.9A (granted as CN113077002B) — Priority/filing date: 2021-04-09 — Machine olfaction visual sensing data analysis method based on space heterodyne Raman spectrum
Publications (2)
- CN113077002A — 2021-07-06
- CN113077002B — 2023-07-21
Family ID: 76615727
Family Applications (1)
- CN202110381702.9A — patent CN113077002B — priority date 2021-04-09 — status: Active
Citations (4)
- CN107545281A (2018-01-05, Zhejiang Gongshang University): A single harmful-gas infrared image classification and identification method based on deep learning
- US20190005330A1 (2019-01-03, HRL Laboratories, LLC): System and method for the fusion of bottom-up whole-image features and top-down entity classification for accurate image/video scene classification
- CN110146642A (2019-08-20, Shanghai University): An odor analysis method and device
- CN110309867A (2019-10-08, Beijing Technology and Business University): A mixed-gas identification method based on convolutional neural networks
Non-Patent Citations (5)
- Pai Peng et al., "Gas Classification Using Deep Convolutional Neural Networks", Sensors, vol. 18, no. 1, pp. 1-11
- Roberlânio Oliveira Melo et al., "Applying Convolutional Neural Networks to Detect Natural Gas Leaks in Wellhead Images", IEEE Access, pp. 191775-191784
- Liu Huixiang, "Research on key problems of MOS electronic noses and their applications", China Doctoral Dissertations Full-text Database (Information Science and Technology)
- Zhang Wenli, "Research on composite optical gas sensing methods for electronic noses", China Doctoral Dissertations Full-text Database (Engineering Science and Technology I), no. 01
- Zang Jianlian, "Application of image processing technology in the design of organic gas sensors", Electronic Components and Materials, pp. 77-78
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant