CN113077002A - Machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectrum - Google Patents


Info

Publication number
CN113077002A
CN113077002A (application CN202110381702.9A)
Authority
CN
China
Prior art keywords
image
gas
gas classification
classification
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110381702.9A
Other languages
Chinese (zh)
Other versions
CN113077002B (en)
Inventor
张文理
孙要伟
郑晶月
刘兆瑜
王毅
陈宇
梁坤
赵贞贞
王恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Aeronautics
Original Assignee
Zhengzhou University of Aeronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Aeronautics filed Critical Zhengzhou University of Aeronautics
Priority to CN202110381702.9A priority Critical patent/CN113077002B/en
Publication of CN113077002A publication Critical patent/CN113077002A/en
Application granted granted Critical
Publication of CN113077002B publication Critical patent/CN113077002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

The invention provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, which comprises the following steps: S1, acquiring gas classification data through a gas sensor, collecting gas classification image data, and performing grayscale processing and noise reduction on the image data; S2, performing convolution processing on the image data and defining a target function for the gas classification images, then calculating the convolutional neural network expected value for the image error according to the target function; S3, training against the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification images, and classifying and screening the gases to form an integrated classifier.

Description

Machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectrum
Technical Field
The invention relates to the field of machine olfaction intelligent analysis, and in particular to a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy.
Background
Machine olfaction, also called the electronic nose, is a technology based on the principle of bionics: an array of gas sensors with mutually overlapping responses simulates human olfactory cells, and a pattern recognition method simulates the human process of recognizing and judging olfactory information, thereby realizing intelligent perception of gases.
One reason for the problems of limited application scenarios, poor environmental stability and concentrated detection time is that the gas sensing technology of existing machine olfaction systems is still imperfect, and their data processing algorithms are relatively simple. Existing machine olfaction data processing algorithms give good analysis results for small samples of one-dimensional sensing data, but suffer from high computational complexity and low analysis efficiency when processing large samples of visual sensing data, and cannot classify massive visual sensing data. Those skilled in the art therefore urgently need to solve these technical problems.
Disclosure of Invention
The invention aims to solve at least the technical problems existing in the prior art, and in particular provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy.
In order to achieve the above purpose, the present invention provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, which comprises the following steps:
S1, acquiring gas classification data through a gas sensor, collecting gas classification image data, and performing grayscale processing and noise reduction on the image data;
S2, performing convolution processing on the image data and defining a target function for the gas classification images; calculating the convolutional neural network expected value for the image error according to the target function;
S3, training against the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification images, and classifying and screening the gases to form an integrated classifier.
Further, S1 includes:
S1-1, after collecting the gas data, classify the gases and divide the gas image by color type into the L, a and b channel images I_L(n), I_a(n) and I_b(n);
compute
[equation image BDA0003013257630000021]
where I_L(n), I_a(n) and I_b(n) represent the images of channels L, a and b, respectively, and I_i(n) represents the grayscale image; obtain the gas classification screening feature description histogram M_feature = N(I_i(n)), where N() screens features of the grayscale image I_i(n) in any gas classification.
Further, S1 also includes:
S1-2, in the grayscale image I_i(n), set the white Gaussian noise p = {p(i)·λ(i) | i ∈ I}: the noisy grayscale estimate p(i) at any pixel i of a gas classification image is multiplied by the filtering parameter λ(i), and the weighted average of the resulting images is
[equation image BDA0003013257630000022]
where δ(i, j) is the weight of the image grayscale processing, I is the pixel set of the grayscale image, and j is the noisy gas classification image;
in the denoising of the grayscale image, because the gas classification screening features are distributed differently while abnormal gases share feature similarity, a feature factor and a texture factor are added to the grayscale-processing weight δ(i, j), the similarity of the abnormal gases is measured again, and the denoising is completed:
[equation image BDA0003013257630000023]
where δ_1(i, j) is the feature factor of the image grayscale processing, δ_2(i, j) is the texture factor of the image grayscale processing, given by the square of the absolute difference between the image feature P_i in gas classification image i and the image feature P_j in the noisy gas classification image j, and a is the number of noise-image similarity terms.
Further, S2 includes:
S2-1, after the noise reduction is finished, set the scale parameter of the image and extract the gas classification image for convolution output processing;
the output of the convolution is
[equation image BDA0003013257630000031]
where r() is a nonlinear function, I is the total number of pixels of the grayscale image of the gas classification image,
[equation image BDA0003013257630000032]
is the weight of the convolution output for the gas classification image, and
[equation image BDA0003013257630000033]
is the input gas classification image feature vector of the convolution.
Further, S2 also includes:
S2-2, form an objective function through convolutional neural network training:
[equation image BDA0003013257630000034]
where
[equation image BDA0003013257630000035]
is the predicted output image sample, d is the actual output image sample,
[equation image BDA0003013257630000036]
is the gas classification image loss function,
[equation image BDA0003013257630000037]
is the predicted output label of the i-th gas classification image, and l(d_i) is the actual output label of the i-th gas classification image.
Further, S2 also includes:
S2-3, after the gas classification image to be identified has been processed through the target function, judge, according to the result output by the convolutional neural network, how closely it conforms to the expected value, carry out back propagation, solve for the error between the gas classification result and the expected value, and then update the weights; adjust and acquire the gas classification image through the training samples and the expected value;
the expected value is calculated as
[equation image BDA0003013257630000038]
where G_h is the output value of the h-th normal gas classification image data after the neural network operation, G_k is the output value of the k-th abnormal gas classification image data after the neural network operation, and ω is a feature adjustment value.
Further, S3 includes:
S3-1, after training against the expected value, optimize the weight parameters of the gas classification image;
the optimization is calculated as
[equation image BDA0003013257630000041]
where D_{x,y} is the feature verification value extracted from the gas classification image, x is the direction of feature extraction, y is the scale of feature extraction, ||·|| is the norm, h_{x,y} is the occurrence frequency of features at direction x and scale y, and k is the position coordinate in the gas image;
S3-2, after the optimization calculation, form the classifier
U = U_in · K · σ
where U_in is the feature verification set of the input gas classification image and K is the selected classification verification parameter.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
the invention firstly starts with the basic requirement of the machine olfaction system on the gas sensor array, establishes a mathematical model of high-resolution visual gas sensing based on the spatial heterodyne Raman spectroscopy technology, and then is used for gas sensing to construct a novel machine olfaction gas sensing system. The analysis method can quickly and accurately obtain a detection result when large-sample and two-dimensional visual data are faced.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a general flow diagram of the present invention;
fig. 2 is a schematic diagram of the effect of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1 and 2, the invention provides a machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, which comprises the following steps:
S1, acquiring gas classification data through a gas sensor, collecting gas classification image data, and performing grayscale processing and noise reduction on the image data;
S2, performing convolution processing on the image data and defining a target function for the gas classification images; calculating the convolutional neural network expected value for the image error according to the target function;
S3, training against the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification images, and classifying and screening the gases to form an integrated classifier.
S1 includes:
S1-1, after collecting the gas data, classify the gases and divide the gas image by color type into the L, a and b channel images I_L(n), I_a(n) and I_b(n);
compute
[equation image BDA0003013257630000051]
where I_L(n), I_a(n) and I_b(n) represent the images of channels L, a and b, respectively, and I_i(n) represents the grayscale image; obtain the gas classification screening feature description histogram M_feature = N(I_i(n)), where N() screens features of the grayscale image I_i(n) in any gas classification;
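The grayscale conversion and feature-histogram idea of step S1-1 can be sketched as follows. The patent's channel-combination formula is shown only as an image in the source, so a plain per-pixel channel average stands in for it here, and all function and variable names are illustrative, not from the patent:

```python
import numpy as np

def grayscale_histogram(rgb, bins=16):
    """Sketch of S1-1: derive a grayscale image I_i(n) from the colour
    channels of a gas classification image and summarise it as a
    normalised feature-description histogram (stand-in for M_feature)."""
    # Simple channel average as a stand-in for the patent's combination
    # of the L, a and b channel images I_L(n), I_a(n), I_b(n).
    gray = rgb.mean(axis=2)
    # Histogram of grayscale intensities as the screening-feature summary.
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 255.0))
    return gray, hist / hist.sum()

# Usage on a synthetic 32x32 colour image.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(32, 32, 3))
gray, m_feature = grayscale_histogram(img)
```

The normalised histogram plays the role of the gas classification screening feature descriptor that later steps consume.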
S1-2, in the grayscale image I_i(n), set the white Gaussian noise p = {p(i)·λ(i) | i ∈ I}: the noisy grayscale estimate p(i) at any pixel i of a gas classification image is multiplied by the filtering parameter λ(i), and the weighted average of the resulting images is
[equation image BDA0003013257630000052]
where δ(i, j) is the weight of the image grayscale processing, I is the pixel set of the grayscale image, and j is the noisy gas classification image;
in the denoising of the grayscale image, because the gas classification screening features are distributed differently while abnormal gases share feature similarity, a feature factor and a texture factor are added to the grayscale-processing weight δ(i, j), the similarity of the abnormal gases is measured again, and the denoising is completed:
[equation image BDA0003013257630000061]
where δ_1(i, j) is the feature factor of the image grayscale processing, δ_2(i, j) is the texture factor of the image grayscale processing, given by the square of the absolute difference between the image feature P_i in gas classification image i and the image feature P_j in the noisy gas classification image j, and a is the number of noise-image similarity terms.
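The weighted-average denoising of step S1-2, with weights built from the squared feature difference |P_i − P_j|², reads like a non-local-means-style filter. The sketch below assumes a Gaussian weighting kernel and illustrative parameter names, since the patent's weight formula is reproduced only as an image:

```python
import numpy as np

def weighted_denoise(gray, patch=3, h=10.0):
    """Sketch of S1-2: replace each pixel by a weighted average of its
    neighbours, with weight delta(i, j) decaying in the squared
    difference |P_i - P_j|^2 (Gaussian kernel is an assumption)."""
    H, W = gray.shape
    r = patch // 2
    padded = np.pad(gray, r, mode="reflect")
    out = np.empty_like(gray)
    for y in range(H):
        for x in range(W):
            window = padded[y:y + patch, x:x + patch]
            diff2 = (window - gray[y, x]) ** 2     # |P_i - P_j|^2
            w = np.exp(-diff2 / (h * h))           # weight delta(i, j)
            out[y, x] = (w * window).sum() / w.sum()  # weighted average
    return out
```

A constant image passes through unchanged, while Gaussian noise is smoothed toward the local weighted mean.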
S2 includes:
S2-1, after the noise reduction is finished, set the scale parameter of the image and extract the gas classification image for convolution output processing;
the output of the convolution is
[equation image BDA0003013257630000062]
where r() is a nonlinear function, I is the total number of pixels of the grayscale image of the gas classification image,
[equation image BDA0003013257630000063]
is the weight of the convolution output for the gas classification image, and
[equation image BDA0003013257630000064]
is the input gas classification image feature vector of the convolution.
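The convolution output of step S2-1, a nonlinear function r() applied to a weighted sum of input features, can be illustrated with a minimal single-channel convolution. ReLU is assumed for the unspecified nonlinearity r(), and the function name is illustrative:

```python
import numpy as np

def conv2d_relu(image, kernel, bias=0.0):
    """Sketch of S2-1: one convolution output r(sum_i w_i * x_i + b),
    with r() taken to be ReLU (an assumption; the patent only says
    'nonlinear function')."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Weighted sum of the input feature window plus bias.
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum() + bias
    return np.maximum(out, 0.0)  # r() = ReLU
```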
S2-2, form an objective function through convolutional neural network training:
[equation image BDA0003013257630000065]
where
[equation image BDA0003013257630000066]
is the predicted output image sample, d is the actual output image sample,
[equation image BDA0003013257630000067]
is the gas classification image loss function,
[equation image BDA0003013257630000068]
is the predicted output label of the i-th gas classification image, and l(d_i) is the actual output label of the i-th gas classification image.
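The objective of step S2-2 compares predicted output labels with actual output labels across the training images. Since the loss form appears only as an image in the source, the sketch below assumes a mean squared error over per-image labels:

```python
import numpy as np

def objective(pred_labels, actual_labels):
    """Sketch of S2-2: average a per-image loss between predicted
    labels l(y_i) and actual labels l(d_i). Mean squared error is an
    assumption; the patent does not fix the loss form in text."""
    pred = np.asarray(pred_labels, dtype=float)
    actual = np.asarray(actual_labels, dtype=float)
    return ((pred - actual) ** 2).mean()
```

A perfect prediction gives zero loss; each mislabelled image adds its squared error to the average.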
S2-3, after the gas classification image to be identified has been processed through the target function, judge, according to the result output by the convolutional neural network, how closely it conforms to the expected value, carry out back propagation, solve for the error between the gas classification result and the expected value, and then update the weights; adjust and acquire the gas classification image through the training samples and the expected value;
the expected value is calculated as
[equation image BDA0003013257630000071]
where G_h is the output value of the h-th normal gas classification image data after the neural network operation, G_k is the output value of the k-th abnormal gas classification image data after the neural network operation, and ω is a feature adjustment value;
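The cycle in step S2-3 of comparing the network output against the expected value, back-propagating the error and updating the weights can be illustrated with a one-layer gradient-descent step. The real network is convolutional; this shows only the update pattern, with illustrative names:

```python
import numpy as np

def backprop_step(w, x, d, lr=0.1):
    """Sketch of S2-3: forward pass, error against the expected output
    d, back-propagated gradient, and weight update (one linear layer)."""
    y = x @ w                  # forward pass: network output
    err = y - d                # error between output and expected value
    grad = x.T @ err / len(x)  # back-propagated gradient of squared error
    return w - lr * grad       # updated weights

# Usage: iterating the step drives the weights toward the target mapping.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
d = x @ true_w
w = np.zeros(3)
for _ in range(200):
    w = backprop_step(w, x, d)
```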
S3 includes:
S3-1, after training against the expected value, optimize the weight parameters of the gas classification image;
the optimization is calculated as
[equation image BDA0003013257630000072]
where D_{x,y} is the feature verification value extracted from the gas classification image, x is the direction of feature extraction, y is the scale of feature extraction, ||·|| is the norm, h_{x,y} is the occurrence frequency of features at direction x and scale y, and k is the position coordinate in the gas image;
S3-2, after the optimization calculation, form the classifier
U = U_in · K · σ
where U_in is the feature verification set of the input gas classification image and K is the selected classification verification parameter.
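The classifier U = U_in · K · σ of step S3-2 is a scaled score over the input feature-verification set. The sketch below adds a hypothetical decision threshold to turn the score into a normal/abnormal gas label; the threshold and names are assumptions, as the patent does not specify how U is read out:

```python
import numpy as np

def integrated_classifier(u_in, k, sigma, threshold=0.5):
    """Sketch of S3-2: score U = U_in * K * sigma per input
    feature-verification value, thresholded into class labels
    (threshold is an assumption, not from the patent)."""
    u = np.asarray(u_in, dtype=float) * k * sigma
    return (u > threshold).astype(int)
```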
The machine olfaction sensing system based on the spatial heterodyne Raman spectrometer is mainly used to detect air pollution, soil pollution, water pollution, toxic substances (such as drugs and poison gases) and the like, and ultimately realizes intelligent detection and real-time monitoring of air, water quality, soil, etc., for the purpose of environmental protection.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (7)

1. A machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy, characterized by comprising the following steps:
S1, acquiring gas classification data through a gas sensor, collecting gas classification image data, and performing grayscale processing and noise reduction on the image data;
S2, performing convolution processing on the image data and defining a target function for the gas classification images; calculating the convolutional neural network expected value for the image error according to the target function;
S3, training against the expected value of the convolutional neural network, continuously optimizing the weight parameters of the gas classification images, and classifying and screening the gases to form an integrated classifier.
2. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 1, wherein S1 includes:
S1-1, after collecting the gas data, classify the gases and divide the gas image by color type into the L, a and b channel images I_L(n), I_a(n) and I_b(n);
compute
[equation image FDA0003013257620000011]
where I_L(n), I_a(n) and I_b(n) represent the images of channels L, a and b, respectively, and I_i(n) represents the grayscale image; obtain the gas classification screening feature description histogram M_feature = N(I_i(n)), where N() screens features of the grayscale image I_i(n) in any gas classification.
3. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 2, wherein S1 further includes:
S1-2, in the grayscale image I_i(n), set the white Gaussian noise p = {p(i)·λ(i) | i ∈ I}: the noisy grayscale estimate p(i) at any pixel i of a gas classification image is multiplied by the filtering parameter λ(i), and the weighted average of the resulting images is
[equation image FDA0003013257620000021]
where δ(i, j) is the weight of the image grayscale processing, I is the pixel set of the grayscale image, and j is the noisy gas classification image;
in the denoising of the grayscale image, because the gas classification screening features are distributed differently while abnormal gases share feature similarity, a feature factor and a texture factor are added to the grayscale-processing weight δ(i, j), the similarity of the abnormal gases is measured again, and the denoising is completed:
[equation image FDA0003013257620000022]
where δ_1(i, j) is the feature factor of the image grayscale processing, δ_2(i, j) is the texture factor of the image grayscale processing, given by the square of the absolute difference between the image feature P_i in gas classification image i and the image feature P_j in the noisy gas classification image j, and a is the number of noise-image similarity terms.
4. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 1, wherein S2 includes:
S2-1, after the noise reduction is finished, set the scale parameter of the image and extract the gas classification image for convolution output processing;
the output of the convolution is
[equation image FDA0003013257620000023]
where r() is a nonlinear function, I is the total number of pixels of the grayscale image of the gas classification image,
[equation image FDA0003013257620000024]
is the weight of the convolution output for the gas classification image, and
[equation image FDA0003013257620000025]
is the input gas classification image feature vector of the convolution.
5. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 4, wherein S2 further includes:
S2-2, form an objective function through convolutional neural network training:
[equation image FDA0003013257620000026]
where
[equation image FDA0003013257620000031]
is the predicted output image sample, d is the actual output image sample,
[equation image FDA0003013257620000032]
is the gas classification image loss function,
[equation image FDA0003013257620000033]
is the predicted output label of the i-th gas classification image, and l(d_i) is the actual output label of the i-th gas classification image.
6. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 4, wherein S2 further includes:
S2-3, after the gas classification image to be identified has been processed through the target function, judge, according to the result output by the convolutional neural network, how closely it conforms to the expected value, carry out back propagation, solve for the error between the gas classification result and the expected value, and then update the weights; adjust and acquire the gas classification image through the training samples and the expected value;
the expected value is calculated as
[equation image FDA0003013257620000034]
where G_h is the output value of the h-th normal gas classification image data after the neural network operation, G_k is the output value of the k-th abnormal gas classification image data after the neural network operation, and ω is a feature adjustment value.
7. The machine olfaction visualization sensing data analysis method based on spatial heterodyne Raman spectroscopy according to claim 1, wherein S3 includes:
S3-1, after training against the expected value, optimize the weight parameters of the gas classification image;
the optimization is calculated as
[equation image FDA0003013257620000035]
where D_{x,y} is the feature verification value extracted from the gas classification image, x is the direction of feature extraction, y is the scale of feature extraction, ||·|| is the norm, h_{x,y} is the occurrence frequency of features at direction x and scale y, and k is the position coordinate in the gas image;
S3-2, after the optimization calculation, form the classifier
U = U_in · K · σ
where U_in is the feature verification set of the input gas classification image and K is the selected classification verification parameter.
CN202110381702.9A 2021-04-09 2021-04-09 Machine olfaction visual sensing data analysis method based on space heterodyne Raman spectrum Active CN113077002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110381702.9A CN113077002B (en) 2021-04-09 2021-04-09 Machine olfaction visual sensing data analysis method based on space heterodyne Raman spectrum


Publications (2)

Publication Number Publication Date
CN113077002A true CN113077002A (en) 2021-07-06
CN113077002B CN113077002B (en) 2023-07-21

Family

ID=76615727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110381702.9A Active CN113077002B (en) 2021-04-09 2021-04-09 Machine olfaction visual sensing data analysis method based on space heterodyne Raman spectrum

Country Status (1)

Country Link
CN (1) CN113077002B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545281A (en) * 2017-09-29 2018-01-05 浙江工商大学 A kind of single pernicious gas infrared image classifying identification method based on deep learning
US20190005330A1 (en) * 2016-02-09 2019-01-03 Hrl Laboratories, Llc System and method for the fusion of bottom-up whole-image features and top-down enttiy classification for accurate image/video scene classification
CN110146642A (en) * 2019-05-14 2019-08-20 上海大学 A kind of smell analysis method and device
CN110309867A (en) * 2019-06-21 2019-10-08 北京工商大学 A kind of Mixed gas identification method based on convolutional neural networks


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
PAI PENG et al.: "Gas Classification Using Deep Convolutional Neural Networks", Sensors, vol. 18, no. 1, pages 1-11
ROBERLÂNIO OLIVEIRA MELO et al.: "Applying Convolutional Neural Networks to Detect Natural Gas Leaks in Wellhead Images", IEEE Access, pages 191775-191784
LIU Huixiang: "Research on Key Problems of MOS-type Electronic Noses and Their Applications", China Doctoral Dissertations Full-text Database (Information Science & Technology), pages 140-109
ZHANG Wenli: "Research on Composite Optical Gas Sensing Methods for Electronic Noses", China Doctoral Dissertations Full-text Database (Engineering Science & Technology I), no. 01, pages 014-540
ZANG Jianlian: "Application of Image Processing Technology in the Design of Organic Gas Sensors", Electronic Components and Materials, pages 77-78

Also Published As

Publication number Publication date
CN113077002B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN109765053B (en) Rolling bearing fault diagnosis method using convolutional neural network and kurtosis index
Martins et al. Automatic detection of surface defects on rolled steel using computer vision and artificial neural networks
CN105678332B (en) Converter steelmaking end point judgment method and system based on flame image CNN recognition modeling
CN106295124B (en) The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts
CN110717481B (en) Method for realizing face detection by using cascaded convolutional neural network
CN109919241B (en) Hyperspectral unknown class target detection method based on probability model and deep learning
CN110188774B (en) Eddy current scanning image classification and identification method based on deep learning
EP0363828A2 (en) Method and apparatus for adaptive learning type general purpose image measurement and recognition
CN110400293B (en) No-reference image quality evaluation method based on deep forest classification
KR20070060496A (en) Method for classification of geological materials using image processing and apparatus thereof
CN112258490A (en) Low-emissivity coating intelligent damage detection method based on optical and infrared image fusion
Xu et al. D4Net: De-deformation defect detection network for non-rigid products with large patterns
CN114463843A (en) Multi-feature fusion fish abnormal behavior detection method based on deep learning
CN111402249B (en) Image evolution analysis method based on deep learning
CN115294109A (en) Real wood board production defect identification system based on artificial intelligence, and electronic equipment
CN114119551A (en) Quantitative analysis method for human face image quality
CN116996665B (en) Intelligent monitoring method, device, equipment and storage medium based on Internet of things
CN114065798A (en) Visual identification method and device based on machine identification
CN106682604B (en) Blurred image detection method based on deep learning
CN117237736A (en) Daqu quality detection method based on machine vision and deep learning
CN116309493A (en) Method and system for detecting defects of textile products
CN113077002B (en) Machine olfaction visual sensing data analysis method based on space heterodyne Raman spectrum
CN116109849A (en) SURF feature matching-based high-voltage isolating switch positioning and state identification method
CN116309270A (en) Binocular image-based transmission line typical defect identification method
CN113671599A (en) Global climate mode-based login cyclone identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant