CN112595672B - Mixed gas photoacoustic spectrum identification method and device based on deep learning - Google Patents

Mixed gas photoacoustic spectrum identification method and device based on deep learning

Info

Publication number
CN112595672B
Authority
CN
China
Prior art keywords
photoacoustic
spectrum
photoacoustic spectrum
derivative
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110236015.8A
Other languages
Chinese (zh)
Other versions
CN112595672A (en)
Inventor
陈斌
罗浩
李俊逸
代犇
黄杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Infotech Co ltd
Original Assignee
Hubei Infotech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Infotech Co ltd filed Critical Hubei Infotech Co ltd
Priority to CN202110236015.8A priority Critical patent/CN112595672B/en
Publication of CN112595672A publication Critical patent/CN112595672A/en
Application granted granted Critical
Publication of CN112595672B publication Critical patent/CN112595672B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/1702Systems in which incident light is modified in accordance with the properties of the material investigated with opto-acoustic detection, e.g. for gases or analysing solids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/1702Systems in which incident light is modified in accordance with the properties of the material investigated with opto-acoustic detection, e.g. for gases or analysing solids
    • G01N2021/1704Systems in which incident light is modified in accordance with the properties of the material investigated with opto-acoustic detection, e.g. for gases or analysing solids in gases

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The invention relates to a mixed gas photoacoustic spectrum identification method and device based on deep learning. The method comprises the following steps: acquiring photoacoustic spectra of a plurality of mixed gases and recording them as first photoacoustic spectra; sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra; separating the peaks of the second photoacoustic spectra by a derivative method and then extracting the waveform features and gas feature information of each wave band; constructing a multi-dimensional vector from the waveform features and the gas information and building a sample data set; and training a target recognition neural network with the sample data set and inputting the photoacoustic spectrum to be identified into the trained target recognition neural network to obtain the identification information. The method combines traditional filtering with a derivative method to separate overlapping peaks and then uses a target recognition neural network to identify the photoacoustic spectrum, offering fast recognition, low cost, and good stability.

Description

Mixed gas photoacoustic spectrum identification method and device based on deep learning
Technical Field
The invention belongs to the field of photoacoustic spectroscopy data processing and deep learning, and particularly relates to a mixed gas photoacoustic spectroscopy identification method and device based on deep learning.
Background
Photoacoustic spectroscopy is a new analysis and testing technique that is simple, highly sensitive, highly selective, wide in dynamic range, broadly applicable, and non-destructive to samples. Unlike absorption-spectroscopy methods for detecting trace gases, it directly measures the absorbed energy rather than the intensity of transmitted or reflected light. It is considered one of the best tools for trace gas detection and is widely used in many fields, and its basic principle is the photoacoustic effect. The photoacoustic effect is the light-to-sound conversion phenomenon first observed in solids in 1880 by Alexander Graham Bell (A. G. Bell), founder of the Bell Telephone Company. He found that when a sample in a closed container is intermittently irradiated with sunlight, acoustic waves are generated inside the container; this phenomenon is called the "photoacoustic effect".
Each gas molecule has its own absorption peaks, and the absorption peaks of different gases differ from one another to some extent; in some regions, however, the absorption peaks overlap, and when gas analysis is performed using light in such a wavelength band, cross-interference between gases is likely to occur. In general, to avoid the problem of overlapping absorption peaks and reduce the difficulty of identification, a frequency-modulation device or optical filters are used so that the excitation light enters different photoacoustic cells. However, this increases cost and compromises the stability of the apparatus.
Disclosure of Invention
In order to solve the problems that, in existing mixed-gas photoacoustic spectrometry detection, the overlapping peaks in the photoacoustic spectrum of a mixed gas are difficult to identify and existing identification methods are costly and offer poor equipment stability, the invention provides a mixed gas photoacoustic spectrum identification method based on deep learning, which comprises the following steps: acquiring photoacoustic spectra of a plurality of mixed gases and recording them as first photoacoustic spectra; sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra; determining the derivative order according to the number of single peaks contained in the overlapping peaks in each second photoacoustic spectrum, so that the number of overlapping peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold value; extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, and mapping them into a multi-dimensional vector, the gas information including the concentration of a gas; taking the first photoacoustic spectra and the multi-dimensional vectors as samples and labels, respectively, to construct a sample data set; training a target recognition neural network with the sample data set until the error falls below a threshold value and stabilizes, thereby obtaining a trained target recognition neural network; and inputting a photoacoustic spectrum to be identified into the trained target recognition neural network to obtain identification information of the photoacoustic spectrum, the identification information including the composition of the mixed gas and the maximum absorption position, absorption depth, and symmetry of the absorption peaks.
In some embodiments of the present invention, sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra includes: performing Fourier deconvolution on the overlapping peaks in the first photoacoustic spectra, and then performing bilateral filtering on the Fourier-deconvolved first photoacoustic spectra to obtain the second photoacoustic spectra. The bilateral filtering is calculated as follows:
g(i,j) = \frac{\sum_{(k,l)\in S(i,j)} f(k,l)\, w(i,j,k,l)}{\sum_{(k,l)\in S(i,j)} w(i,j,k,l)}
where g(i,j) denotes an output point; S(i,j) denotes the neighborhood of size (2N+1)×(2N+1) centered at (i,j); f(k,l) denotes the input points of the photoacoustic spectrum; and w(i,j,k,l) denotes the weight calculated from two Gaussian functions.
In some embodiments of the present invention, determining the derivative order according to the number of single peaks contained in the overlapping peaks in each second photoacoustic spectrum, so that the number of overlapping peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than the threshold value, comprises the following steps: taking the initial derivative order as 1, counting the number of single peaks contained in each overlapping peak in each second photoacoustic spectrum, and differentiating the overlapping peak containing the most single peaks: if the number of single peaks obtained by differentiating the overlapping peak containing the most single peaks is greater than or equal to a threshold value, taking this order as the derivative order for each second photoacoustic spectrum; and if the number of single peaks obtained by differentiation is smaller than the threshold value, increasing the derivative order in steps of 1 until the number of single peaks obtained by differentiation is greater than or equal to the threshold value, and then taking that order as the derivative order for each second photoacoustic spectrum.
In some embodiments of the present invention, extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, and mapping them into the multi-dimensional vector, comprises the following steps: extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, the gas information comprising a concentration or volume fraction of a gas; taking the maximum absorption position, absorption depth, and symmetry of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum as a first feature vector and the corresponding gas information as a second feature vector; and fusing the first feature vector and the second feature vector and mapping them into a multi-dimensional vector.
In some embodiments of the present invention, the target recognition neural network includes a first YOLO neural network and a second YOLO neural network whose fully connected layers are connected to each other; the first YOLO neural network identifies the components of the mixed gas, and the second YOLO neural network identifies the maximum absorption position, depth, and symmetry of the absorption peaks. Preferably, the second YOLO neural network is a YOLO V4 neural network.
In a second aspect, the invention provides a gas photoacoustic spectrum recognition device based on deep learning, which comprises an acquisition module, a determining module, an extraction module, a training module, and an identification module. The acquisition module is used for acquiring photoacoustic spectra of a plurality of mixed gases, recording them as first photoacoustic spectra, and sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra; the determining module is used for determining the derivative order according to the number of single peaks contained in the overlapping peaks in each second photoacoustic spectrum, so that the number of overlapping peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold value; the extraction module is used for extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum and mapping them into a multi-dimensional vector, the gas information including the concentration of a gas; the training module is used for taking the first photoacoustic spectra and the multi-dimensional vectors as samples and labels, respectively, to construct a sample data set, and for training a target recognition neural network with the sample data set until the error falls below a threshold value and stabilizes, thereby obtaining a trained target recognition neural network; and the identification module is used for inputting the photoacoustic spectrum to be identified into the trained target recognition neural network to obtain identification information of the photoacoustic spectrum, the identification information including the composition of the mixed gas and the maximum absorption position, absorption depth, and symmetry of the absorption peaks.
In some embodiments of the present invention, the identification module includes a first identification module and a second identification module; the first identification module is used for identifying the components of the mixed gas from the photoacoustic spectrum, and the second identification module is used for identifying the maximum absorption position, depth, and symmetry of the absorption peaks in the photoacoustic spectrum.
In a third aspect of the present invention, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the deep learning-based mixed gas photoacoustic spectrum identification method provided by the first aspect of the present invention.
In a fourth aspect of the present invention, a computer readable medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the deep learning-based mixed gas photoacoustic spectrum identification method provided by the first aspect of the present invention.
The invention has the beneficial effects that:
1. First, Fourier deconvolution of the photoacoustic spectrum preliminarily separates the overlapping peaks, and bilateral filtering sharpens the waveform edges while preserving edge features, improving the clarity of the photoacoustic spectrum; the derivative method then separates the overlapping peaks into a plurality of single peaks; finally, the maximum absorption position, absorption depth, symmetry, and corresponding gas information are used as features for feature extraction from the photoacoustic spectrum, which reduces the data dimensionality while ensuring that the main features of the mixed-gas photoacoustic spectrum are covered;
2. The YOLO neural network is a fast, lightweight target recognition network that can identify and output multiple targets and related information, offering fast recognition and high accuracy;
3. Because a neural network model is used for spectrum identification, the recognition speed is higher than that of traditional analysis methods; and because the detection conditions are relaxed, the device does not depend on excitation-light-source modulation, optical filters, or other additional equipment, which reduces the identification cost and improves the stability of the identification device or system. As hardware computing power improves, real-time and accurate measurement of the components of the mixed gas can be achieved.
Drawings
FIG. 1 is a schematic flow diagram of a deep learning based mixed gas photoacoustic spectroscopy identification method in some embodiments of the present invention;
FIG. 2 is a schematic representation of a simulation of photoacoustic spectroscopy of water vapor and carbon dioxide;
FIG. 3 is a schematic structural diagram of a mixed gas photoacoustic spectrum identification apparatus based on deep learning in some embodiments of the present invention;
FIG. 4 is a basic block diagram of an electronic device in some embodiments of the invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Referring to FIG. 1, in a first aspect of the present invention, there is provided a mixed gas photoacoustic spectrum identification method based on deep learning, comprising the following steps: S101, acquiring photoacoustic spectra of a plurality of mixed gases and recording them as first photoacoustic spectra; sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra; S102, determining the derivative order according to the number of single peaks contained in the overlapping peaks in each second photoacoustic spectrum, so that the number of overlapping peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold value; S103, extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, and mapping them into a multi-dimensional vector, the gas information including the concentration of a gas; S104, taking the first photoacoustic spectra and the multi-dimensional vectors as samples and labels, respectively, to construct a sample data set, and training a target recognition neural network with the sample data set until the error falls below a threshold value and stabilizes, thereby obtaining a trained target recognition neural network; S105, inputting a photoacoustic spectrum to be identified into the trained target recognition neural network to obtain identification information of the photoacoustic spectrum, the identification information including the composition of the mixed gas and the maximum absorption position, absorption depth, and symmetry of the absorption peaks. The photoacoustic spectra of the plurality of mixed gases may be obtained from an existing database of completed measurements (such as HITRAN) or from actual measurements. Optionally, the bilateral filtering may be replaced by another filtering method, such as a wavelet transform based on the Mallat algorithm or median filtering.
It can be understood that, when the recognition model of the invention is used to recognize the photoacoustic spectrum, it breaks through the limitation of traditional photoacoustic spectrum recognition methods, which depend on a specific excitation light source, its modulation equipment, and the number of photoacoustic cells: in traditional gas measurement, to ensure high precision, the gas is introduced into different photoacoustic cells and the concentration of each gas is measured separately through different optical filters. In the present invention, the mixed gas includes at least one of water, hydrogen, methane, ethane, ethylene, acetylene, carbon monoxide, carbon dioxide, oxygen, or nitrogen.
In step S101 of some embodiments of the present invention, sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra includes: performing Fourier deconvolution on the overlapping peaks in the first photoacoustic spectra, and then performing bilateral filtering on the Fourier-deconvolved first photoacoustic spectra to obtain the second photoacoustic spectra. The bilateral filtering is calculated as follows:
g(i,j) = \frac{\sum_{(k,l)\in S(i,j)} f(k,l)\, w(i,j,k,l)}{\sum_{(k,l)\in S(i,j)} w(i,j,k,l)}
where g(i,j) denotes an output point; S(i,j) denotes the neighborhood of size (2N+1)×(2N+1) centered at (i,j); f(k,l) denotes the input points of the photoacoustic spectrum; and w(i,j,k,l) denotes the weight calculated from two Gaussian functions (a spatial kernel and a range kernel).
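For readers who wish to experiment with the preprocessing of step S101, the sketch below is a minimal NumPy illustration, specialized to a one-dimensional spectrum (the patent states the bilateral weights in two-dimensional (i,j,k,l) notation). The Lorentzian line shape and apodization assumed for the Fourier deconvolution, the window half-width N, and the Gaussian widths sigma_s and sigma_r are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def fourier_self_deconvolve(y, hwhm=2.0, apod_frac=0.8):
    """Rough Fourier self-deconvolution: divide the spectrum's Fourier transform
    by that of an assumed Lorentzian line shape (half-width `hwhm`, in samples)
    and re-apodize to limit noise amplification; both choices are assumptions."""
    n = len(y)
    Y = np.fft.rfft(y)
    t = np.arange(len(Y))
    line = np.exp(-2.0 * np.pi * hwhm * t / n)                 # FT of the Lorentzian
    apod = np.clip(1.0 - t / (apod_frac * len(Y)), 0.0, None)  # triangular apodization
    return np.fft.irfft(Y / line * apod, n=n)

def bilateral_filter_1d(f, N=5, sigma_s=2.0, sigma_r=0.1):
    """1-D bilateral filter: each output point is a normalized weighted sum of the
    (2N+1) neighbouring input points, the weight being the product of a spatial
    Gaussian (index distance) and a range Gaussian (value difference)."""
    f = np.asarray(f, dtype=float)
    g = np.empty_like(f)
    for i in range(len(f)):
        k = np.arange(max(0, i - N), min(len(f), i + N + 1))
        w = (np.exp(-((k - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((f[k] - f[i]) ** 2) / (2 * sigma_r ** 2)))
        g[i] = np.sum(w * f[k]) / np.sum(w)
    return g

# Example (step S101): second = bilateral_filter_1d(fourier_self_deconvolve(first))
```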
As schematically shown in FIG. 2, H2O and CO2 exhibit a number of overlapping peaks in the 3600 cm⁻¹ to 3640 cm⁻¹ band that are difficult to distinguish, which makes photoacoustic spectrum identification difficult or inaccurate; it is therefore necessary to perform peak separation (separating one or more overlapping peaks into a plurality of single peaks) or to analyze the overlapping peaks separately. In step S102 of some embodiments of the present invention, determining the derivative order according to the number of single peaks contained in the overlapping peaks in each second photoacoustic spectrum, so that the number of overlapping peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than the threshold value, includes: taking the initial derivative order as 1, counting the number of single peaks contained in each overlapping peak in each second photoacoustic spectrum (a single peak being a wave band that contains only one peak), and differentiating the overlapping peak containing the most single peaks: if the number of single peaks obtained by differentiating that overlapping peak is greater than or equal to a threshold value, this order is taken as the derivative order for each second photoacoustic spectrum; if it is smaller than the threshold value, the derivative order is increased in steps of 1 until the number of single peaks obtained by differentiation is greater than or equal to the threshold value, and that order is then taken as the derivative order for each second photoacoustic spectrum.
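A schematic Python sketch of this order-selection loop is given below. The patent does not specify how single peaks are counted, so SciPy's find_peaks is used here as a stand-in peak counter, and `band_slices` is a hypothetical list of index ranges covering the overlapping-peak bands; the cap `max_order` is likewise an assumption.

```python
import numpy as np
from scipy.signal import find_peaks

def choose_derivative_order(spectrum, band_slices, threshold, max_order=6):
    """Start at order 1 and differentiate the band whose overlapping peak contains
    the most single peaks; increase the order in steps of 1 until the derivative
    resolves at least `threshold` single peaks in that band."""
    spectrum = np.asarray(spectrum, dtype=float)
    # Band whose overlapping peak currently contains the most single peaks.
    worst = max(band_slices, key=lambda s: len(find_peaks(spectrum[s])[0]))
    for order in range(1, max_order + 1):
        derivative = np.diff(spectrum[worst], n=order)
        n_single = len(find_peaks(np.abs(derivative))[0])
        if n_single >= threshold:
            return order  # used as the derivative order for every second spectrum
    return max_order      # fallback if the threshold is never reached (assumption)
```

With, say, `band_slices = [slice(360, 420), slice(500, 560)]`, the returned order would then be used to differentiate every second photoacoustic spectrum before feature extraction.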
To better extract the features of the photoacoustic spectra, in step S103 of some embodiments of the present invention, extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, and mapping them into a multi-dimensional vector, comprises the following steps: extracting the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, the gas information comprising a concentration or volume fraction of a gas; taking the maximum absorption position, absorption depth, and symmetry of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum as a first feature vector and the corresponding gas information as a second feature vector; and fusing the first feature vector and the second feature vector and mapping them into a multi-dimensional vector. The absorption depth refers to the distance between the extreme point (peak or trough) of the absorption intensity or absorption coefficient within a wave band and the normalized envelope line; the absorption width refers to the spectral bandwidth at half of the maximum absorption depth; and the symmetry refers to the ratio of the area of the region to the right of the vertical line through the absorption position to the area of the region to its left (usually, a single peak within a wave band is selected as the reference point).
Optionally, in addition to the maximum absorption position, the absorption depth, and the symmetry characteristic, the first feature vector further includes one or more characteristic parameters that can characterize the photoacoustic spectrum, such as an absorption width, an area enclosed by a waveform and a coordinate axis, a waveform slope, a slope direction, or an absorption index.
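The sketch below illustrates how the three base features defined above might be computed for one wave band and fused with the gas information into the multi-dimensional label vector. Treating the normalized envelope as a constant and using trapezoidal areas for the symmetry ratio are conventions assumed here for illustration, not requirements of the patent.

```python
import numpy as np

def band_features(signal, wavenumbers, envelope=1.0):
    """Maximum absorption position, absorption depth and symmetry of one wave band.

    Depth    : distance between the band's extreme point and the normalized envelope.
    Symmetry : area to the right of the vertical line through the extreme point
               divided by the area to the left of it.
    """
    signal = np.asarray(signal, dtype=float)
    wavenumbers = np.asarray(wavenumbers, dtype=float)
    dev = np.abs(signal - envelope)          # deviation from the envelope line
    i0 = int(np.argmax(dev))
    position = wavenumbers[i0]               # maximum absorption position
    depth = dev[i0]                          # absorption depth
    left = np.trapz(dev[: i0 + 1], wavenumbers[: i0 + 1])
    right = np.trapz(dev[i0:], wavenumbers[i0:])
    symmetry = right / left if left > 0 else float("inf")
    return position, depth, symmetry

def build_label_vector(band_feature_triples, gas_info):
    """Fuse the first feature vector (per-band position/depth/symmetry triples)
    with the second feature vector (gas concentrations or volume fractions)."""
    first = np.ravel(np.asarray(band_feature_triples, dtype=float))
    second = np.asarray(gas_info, dtype=float)
    return np.concatenate([first, second])
```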
In some embodiments of the present invention, the target recognition neural network comprises a first YOLO (You Only Look Once) neural network and a second YOLO neural network, the fully connected layers of the first YOLO neural network and the second YOLO neural network being connected to each other; the first YOLO neural network identifies the components of the mixed gas, and the second YOLO neural network identifies the maximum absorption position, depth, and symmetry of the absorption peaks. Preferably, the second YOLO neural network is a YOLO V4 neural network. Optionally, the first and second neural networks are YOLO V3 or YOLO V5 neural networks.
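The patent does not disclose the concrete layer configuration of the two networks, so the PyTorch sketch below only illustrates the stated topology: two branches (stand-ins for the first and second YOLO networks, not an actual YOLO V3/V4/V5 implementation) whose fully connected layers are joined, with one head for the gas components and one for the absorption-peak attributes. All layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class DualBranchRecognizer(nn.Module):
    """Schematic stand-in for the two coupled target-recognition networks."""

    def __init__(self, n_gases=10, n_peak_outputs=3):
        super().__init__()

        def backbone():  # simplified 1-D convolutional feature extractor (not YOLO V4)
            return nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(16), nn.Flatten())

        self.branch_gas = backbone()    # "first" network: mixed-gas components
        self.branch_peak = backbone()   # "second" network: absorption-peak attributes
        # The fully connected layers of the two branches are connected to each other.
        self.fc_shared = nn.Linear(2 * 32 * 16, 256)
        self.head_gas = nn.Linear(256, n_gases)          # component logits
        self.head_peak = nn.Linear(256, n_peak_outputs)  # position, depth, symmetry

    def forward(self, x):  # x: (batch, 1, spectrum_length)
        h = torch.cat([self.branch_gas(x), self.branch_peak(x)], dim=1)
        h = torch.relu(self.fc_shared(h))
        return self.head_gas(h), self.head_peak(h)
```

Training such a model on the (first photoacoustic spectrum, multi-dimensional vector) pairs would minimize a combined loss, for example cross-entropy on the gas head plus a regression loss on the peak head, until the error falls below a threshold and stabilizes, as described for step S104.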
Referring to FIG. 3, in a second aspect of the present invention, there is provided a gas photoacoustic spectrum identification apparatus 1 based on deep learning, including an acquisition module 11, a determining module 12, an extraction module 13, a training module 14, and an identification module 15. The acquisition module 11 is configured to acquire photoacoustic spectra of a plurality of mixed gases, record them as first photoacoustic spectra, and sequentially perform Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra; the determining module 12 is configured to determine the derivative order according to the number of single peaks included in the overlapping peaks in each second photoacoustic spectrum, so that the number of overlapping peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold; the extraction module 13 is configured to extract the maximum absorption position, absorption depth, symmetry, and corresponding gas information of each wave band in each second photoacoustic spectrum and its derivative photoacoustic spectrum, and map them into a multi-dimensional vector, the gas information including the concentration of a gas; the training module 14 is configured to use the first photoacoustic spectra and the multi-dimensional vectors as samples and labels, respectively, to construct a sample data set, and to train a target recognition neural network with the sample data set until the error falls below a threshold value and stabilizes, obtaining a trained target recognition neural network; and the identification module 15 is configured to input the photoacoustic spectrum to be identified into the trained target recognition neural network to obtain identification information of the photoacoustic spectrum, the identification information including the composition of the mixed gas and the maximum absorption position, absorption depth, and symmetry of the absorption peaks.
In some embodiments of the present invention, the identification module 15 includes a first identification module and a second identification module; the first identification module is used for identifying the components of the mixed gas from the photoacoustic spectrum, and the second identification module is used for identifying the maximum absorption position, depth, and symmetry of the absorption peaks in the photoacoustic spectrum.
Referring to fig. 4, in a third aspect of the present invention, there is provided an electronic apparatus comprising: one or more processors; the storage device is used for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method provided by the first aspect of the invention.
The electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following devices may be connected to the I/O interface 505 in general: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; a storage device 508 including, for example, a hard disk; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in the figures may represent one device or a plurality of devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more computer programs which, when executed by the electronic device, cause the electronic device to carry out the deep learning-based mixed gas photoacoustic spectrum identification method described above.
computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + +, Python, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A mixed gas photoacoustic spectrum identification method based on deep learning is characterized by comprising the following steps:
acquiring photoacoustic spectrums of a plurality of mixed gases, and recording the photoacoustic spectrums as first photoacoustic spectrums; carrying out Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectrums in sequence to obtain a plurality of second photoacoustic spectrums;
determining the derivative order according to the number of single peaks contained in the overlapped peaks in each second photoacoustic spectrum, so that the number of overlapped peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold value;
extracting the maximum absorption position, the absorption depth, the symmetry and the corresponding gas information of each wave band in each second photoacoustic spectrum and the derivative photoacoustic spectrum thereof, and mapping the maximum absorption position, the absorption depth and the symmetry into a multi-dimensional vector; the gas information includes a concentration of a gas;
respectively taking the first photoacoustic spectrum and the multi-dimensional vector as a sample and a label to construct a sample data set; training a target recognition neural network by using the sample data set until the error is lower than a threshold value and tends to be stable, and obtaining a trained target recognition neural network;
inputting a photoacoustic spectrum to be identified into a trained target identification neural network to obtain identification information in the photoacoustic spectrum; the identification information includes the composition of the mixed gas, the maximum absorption position of the absorption peak, the depth and the symmetry.
2. The mixed gas photoacoustic spectrum identification method based on deep learning of claim 1, wherein the step of sequentially performing Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectra to obtain a plurality of second photoacoustic spectra comprises the steps of:
fourier deconvolving the overlapping peaks in the plurality of first photoacoustic spectra;
carrying out bilateral filtering on the first photoacoustic spectrums subjected to Fourier deconvolution to obtain second photoacoustic spectrums; the calculation method of the bilateral filtering is represented as follows:
g(i,j) = \frac{\sum_{(k,l)\in S(i,j)} f(k,l)\, w(i,j,k,l)}{\sum_{(k,l)\in S(i,j)} w(i,j,k,l)}
where g(i,j) denotes an output point; S(i,j) denotes the neighborhood of size (2N+1)×(2N+1) centered at (i,j); f(k,l) denotes the input points of the photoacoustic spectrum; and w(i,j,k,l) denotes the weight calculated from two Gaussian functions.
3. The mixed gas photoacoustic spectrum identification method based on deep learning of claim 1, wherein the step of determining the derivative order according to the number of single peaks contained in the overlapped peaks in each second photoacoustic spectrum, so that the number of overlapped peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold value, comprises the following steps:
taking the initial derivative order as 1, counting the number of single peaks contained in each overlapped peak in each second photoacoustic spectrum, and differentiating the overlapped peak containing the most single peaks:
if the number of single peaks obtained by differentiating the overlapped peak containing the most single peaks is greater than or equal to a threshold value, taking this order as the derivative order for each second photoacoustic spectrum;
and if the number of single peaks obtained by differentiation is smaller than the threshold value, increasing the derivative order in steps of 1 until the number of single peaks obtained by differentiation is greater than or equal to the threshold value, and taking that order as the derivative order for each second photoacoustic spectrum.
4. The mixed gas photoacoustic spectrum identification method based on deep learning of claim 2 or 3, wherein the step of extracting the maximum absorption position, the absorption depth, the degree of symmetry and the corresponding gas information of each wavelength band in each second photoacoustic spectrum and its derivative photoacoustic spectrum and mapping them into a multidimensional vector comprises the steps of:
extracting the maximum absorption position, the absorption depth, the symmetry and the corresponding gas information of each wave band in each second photoacoustic spectrum and the derivative photoacoustic spectrum thereof; the gas information includes a concentration of a gas;
taking the maximum absorption position, the absorption depth and the symmetry of each wave band in each second photoacoustic spectrum and the derivative photoacoustic spectrum thereof as a first feature vector; the corresponding gas information is used as a second feature vector;
and fusing the first feature vector and the second feature vector and mapping them into a multi-dimensional vector.
5. The deep learning based mixed gas photoacoustic spectroscopy identification method of claim 1, wherein the target recognition neural network comprises a first YOLO neural network and a second YOLO neural network, the fully connected layers of the first YOLO neural network and the second YOLO neural network being connected to each other,
the first YOLO neural network is used for identifying the components of the mixed gas;
the second YOLO neural network is used for identifying the maximum absorption position, the depth and the symmetry degree of an absorption peak.
6. The deep learning based mixed gas photoacoustic spectrometry identification method of claim 5, wherein the second YOLO neural network is a YOLO V4 neural network.
7. A gas photoacoustic spectrum recognition device based on deep learning is characterized by comprising an acquisition module, a determination module, an extraction module, a training module and a recognition module,
the acquisition module is used for acquiring the photoacoustic spectrums of a plurality of mixed gases and recording the photoacoustic spectrums as first photoacoustic spectrums; carrying out Fourier deconvolution and bilateral filtering on the plurality of first photoacoustic spectrums in sequence to obtain a plurality of second photoacoustic spectrums;
the determining module is used for determining the derivative order according to the number of single peaks contained in the overlapped peaks in each second photoacoustic spectrum, so that the number of overlapped peaks in the derivative photoacoustic spectrum of each second photoacoustic spectrum is lower than a threshold value;
the extraction module is used for extracting the maximum absorption position, the absorption depth, the symmetry degree and the corresponding gas information of each wave band in each second photoacoustic spectrum and the derivative photoacoustic spectrum thereof, and mapping the maximum absorption position, the absorption depth, the symmetry degree and the corresponding gas information into a multidimensional vector; the gas information includes a concentration of a gas;
the training module is used for respectively taking the first photoacoustic spectrum and the multi-dimensional vector as a sample and a label to construct a sample data set; training a target recognition neural network by using the sample data set until the error is lower than a threshold value and tends to be stable, and obtaining a trained target recognition neural network;
the identification module is used for inputting the photoacoustic spectrum to be identified into a trained target identification neural network to obtain identification information in the photoacoustic spectrum; the identification information includes the composition of the mixed gas, the maximum absorption position of the absorption peak, the depth and the symmetry.
8. The deep learning based gas photoacoustic spectroscopy identification apparatus of claim 7, wherein the identification module comprises a first identification module and a second identification module,
the first identification module is used for identifying the components of the mixed gas of the photoacoustic spectrum;
the second identification module is used for identifying the maximum absorption position, the depth and the symmetry of the absorption peak in the photoacoustic spectrum.
9. An electronic device, comprising: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the deep learning-based mixed gas photoacoustic spectroscopy identification method according to any one of claims 1 to 6.
10. A computer readable medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the deep learning-based mixed gas photoacoustic spectroscopy identification method according to any one of claims 1 to 6.
CN202110236015.8A 2021-03-03 2021-03-03 Mixed gas photoacoustic spectrum identification method and device based on deep learning Active CN112595672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110236015.8A CN112595672B (en) 2021-03-03 2021-03-03 Mixed gas photoacoustic spectrum identification method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110236015.8A CN112595672B (en) 2021-03-03 2021-03-03 Mixed gas photoacoustic spectrum identification method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN112595672A CN112595672A (en) 2021-04-02
CN112595672B (en) 2021-05-14

Family

ID=75210201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110236015.8A Active CN112595672B (en) 2021-03-03 2021-03-03 Mixed gas photoacoustic spectrum identification method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN112595672B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111944B (en) * 2021-04-13 2022-05-31 湖北鑫英泰系统技术股份有限公司 Photoacoustic spectrum identification method and device based on deep learning and gas photoacoustic effect
CN113723011B (en) * 2021-09-10 2024-04-26 上海无线电设备研究所 Method for rapidly calculating infrared radiation characteristics of high-temperature mixed gas
CN113740268B (en) * 2021-09-15 2022-09-09 同济大学 Photoacoustic time spectrum-based puncture tissue strip grading method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3095384B1 (en) * 2015-05-19 2020-01-01 SAMTD GmbH & Co. KG Method and device for the non-invasive determination of a measurement parameter of an analyte in a biological body
CN112304869A (en) * 2019-07-26 2021-02-02 英飞凌科技股份有限公司 Gas sensing device for sensing gas in gas mixture and method for operating the same
CN112384785A (en) * 2018-05-11 2021-02-19 开利公司 Photoacoustic detection system
CN112432905A (en) * 2021-01-28 2021-03-02 湖北鑫英泰系统技术股份有限公司 Voiceprint recognition method and device based on photoacoustic spectrum of characteristic gas in transformer oil

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3095384B1 (en) * 2015-05-19 2020-01-01 SAMTD GmbH & Co. KG Method and device for the non-invasive determination of a measurement parameter of an analyte in a biological body
CN112384785A (en) * 2018-05-11 2021-02-19 开利公司 Photoacoustic detection system
CN112304869A (en) * 2019-07-26 2021-02-02 英飞凌科技股份有限公司 Gas sensing device for sensing gas in gas mixture and method for operating the same
CN112432905A (en) * 2021-01-28 2021-03-02 湖北鑫英泰系统技术股份有限公司 Voiceprint recognition method and device based on photoacoustic spectrum of characteristic gas in transformer oil

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Characterization of a photoacoustic system through neural networks to determine multicomponent samples; N.M. Zajarevich et al.; Infrared Physics & Technology; 2016-12-31; pp. 485-489 *
Research on multi-component gas detection technology based on photoacoustic spectroscopy; 金星阁; China Master's Theses Full-text Database, Engineering Science and Technology I; 2021-01-15; p. 41 *
Coal gangue detection based on multispectral imaging and improved YOLOv4; 来文豪 et al.; Acta Optica Sinica; 2020-12-31; pp. 1-9 *
Research on multi-component detection of boiler flue gas based on derivative photoacoustic spectroscopy; 郑学丽; China Master's Theses Full-text Database, Engineering Science and Technology I; 2017-03-15; p. 17 *
A review of domestic and international research on overlapping-peak decomposition by mathematical methods; 沈晴 et al.; Value Engineering; 2011-12-31; p. 197 *

Also Published As

Publication number Publication date
CN112595672A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112595672B (en) Mixed gas photoacoustic spectrum identification method and device based on deep learning
CN112504971B (en) Photoacoustic spectrum identification method and device for characteristic gas in transformer oil
Ahrabian et al. Synchrosqueezing-based time-frequency analysis of multivariate data
US20180276540A1 (en) Modeling of the latent embedding of music using deep neural network
CN112432905B (en) Voiceprint recognition method and device based on photoacoustic spectrum of characteristic gas in transformer oil
US20150199974A1 (en) Detecting distorted audio signals based on audio fingerprinting
US9523635B2 (en) Apparatus and methods of spectral searching using wavelet transform coefficients
CN108780048A (en) A kind of method, detection device and the readable storage medium storing program for executing of determining detection device
CN114428324B (en) Pre-stack high-angle fast Fourier transform seismic imaging method, system and equipment
CN113111944B (en) Photoacoustic spectrum identification method and device based on deep learning and gas photoacoustic effect
CN116858785A (en) Soil nickel concentration inversion method, device, storage medium and computer equipment
Richardson et al. SRMD: Sparse random mode decomposition
Zhou et al. An improved algorithm for peak detection based on weighted continuous wavelet transform
CN112504970B (en) Gas photoacoustic spectrum enhanced voiceprint recognition method and device based on deep learning
CN102880861B (en) High-spectrum image classification method based on linear prediction cepstrum coefficient
CN117054396A (en) Raman spectrum detection method and device based on double-path multiplicative neural network
CN113642629B (en) Visualization method and device for improving reliability of spectroscopy analysis based on random forest
US11120820B2 (en) Detection of signal tone in audio signal
Gao et al. Combining direct orthogonal signal correction and wavelet packet transform with partial least squares to analyze overlapping voltammograms of nitroaniline isomers
CN117935963B (en) Qualitative analysis method, system and equipment for Raman spectrum of mixture
US11821863B2 (en) System and method for detecting structural change of a molecule or its environment with NMR spectroscopy
CN117574245B (en) Intelligent detector index self-checking method and system applied to mountain exploration
CN117934019B (en) Copper concentrate sample tracing method and system based on deep learning
CN117849875B (en) Earthquake signal analysis method, system, device and storage medium
WO2023165018A1 (en) Method and device for extracting element in chemical reaction flow chart

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method and device for photoacoustic spectrum recognition of mixed gas based on deep learning

Effective date of registration: 20220610

Granted publication date: 20210514

Pledgee: Guanggu Branch of Wuhan Rural Commercial Bank Co.,Ltd.

Pledgor: HUBEI INFOTECH CO.,LTD.

Registration number: Y2022420000153

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230922

Granted publication date: 20210514

Pledgee: Guanggu Branch of Wuhan Rural Commercial Bank Co.,Ltd.

Pledgor: HUBEI INFOTECH CO.,LTD.

Registration number: Y2022420000153

PC01 Cancellation of the registration of the contract for pledge of patent right