CN115206335A - Noise monitoring method for automatic sample retention and evidence collection - Google Patents
- Publication number
- CN115206335A (application CN202211118403.7A)
- Authority
- CN
- China
- Prior art keywords
- sound
- energy spectrum
- neural network
- network model
- noise
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0224—Processing in the time domain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0272—Voice signal separating
- G10L21/0308—Voice signal separating characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
Abstract
The invention discloses a noise monitoring method for automatic sample retention and evidence collection, which relates to the intersection of computer applications and chemical engineering and comprises the following steps: acquiring a sound signal expressed as a function of time; intercepting the sound signal at different times to obtain a plurality of local sound sequences and transforming them to obtain a plurality of transform sequences; calculating energy spectra to obtain a plurality of energy spectrum vectors and generating an energy spectrum matrix; establishing a neural network model; obtaining a learning cost function of the neural network model through learning; iteratively computing to obtain a learned neural network model; inputting the energy spectrum matrix into the learned neural network model to obtain an output value; and, when the output value is larger than a preset value, storing the sound signal in a database as a sample for sample retention and evidence collection. The invention arranges two synchronized sound collection devices in the environment to be monitored and analyzes the sound signals with automatic intelligent processing, completing the extraction of sound features and the monitoring of noise; it can effectively identify chemical production noise and realizes automatic noise monitoring.
Description
Technical Field
The invention relates to the intersection of computer applications and chemical engineering, and in particular to a noise monitoring method for automatic sample retention and evidence collection.
Background
In recent years, with the rapid development of the modern chemical industry, large-scale chemical plants have made the problem of noise disturbing nearby residents increasingly prominent. To reduce the influence of noise on the surrounding environment and on residents' lives, measures are needed to attenuate the noise and to prevent and control noise pollution.
The noise sources in chemical production are wide-ranging: airflow noise generated by sudden gas pressure changes in compressed air, high-pressure steam, and heating furnaces; mechanical noise generated by friction, vibration, impact, or high-speed rotation of ball mills and pulverizers; electromagnetic noise generated by alternating magnetic fields and pulsation in transformers; and so on. Noise pollution in chemical production is both widespread and persistent: on the one hand, the complexity of chemical production processes makes the noise sources extensive and the affected area large; on the other hand, as long as a sound source keeps operating, its noise does not stop. Therefore, the noise needs to be continuously monitored. Even after prevention measures are taken, whether the suppression of the noise meets the standard must still be monitored continuously, and cases where the noise exceeds the standard must be found and corrected in time, so that the influence of the noise is reduced as much as possible.
For example, utility model CN202058028U provides a centralized evidence-collecting noise monitoring system, but that method judges noise by monitoring the amplitude of the noise signal and cannot distinguish noise categories in the environment, such as chemical production noise sources versus general environmental noise sources. Invention patent CN113340401A provides an online noise monitoring method and device that use spectrum analysis and peak matching; they suit sound sources that simultaneously produce noise and large vibrations, such as construction noise sources, but cannot adapt well to the complex noise patterns of chemical production sound sources.
Disclosure of Invention
The invention provides a noise monitoring method for automatic sample retention and evidence collection, which is used for overcoming at least one technical problem in the prior art.
The embodiment of the invention provides a noise monitoring method for automatic sample reservation and evidence collection, which comprises the following steps:
arranging two sound collection devices in an environment to be monitored, wherein the two sound collection devices are spaced at a set distance and simultaneously collect sound signals;
representing the sound signal as a function of time, wherein i denotes the number of the sound collection device, T denotes the sound sampling period, and n denotes the discrete sample index;
intercepting the sound signal at different times to obtain a plurality of local sound sequences, wherein the local sound sequence size is fixed, k denotes the interception time point, and ω denotes an offset;
performing a discrete Fourier transform on each of the local sound sequences to obtain a plurality of transform sequences, wherein i denotes the imaginary unit;
calculating the energy spectra of the transform sequences to obtain a plurality of energy spectrum vectors, wherein the calculation uses local window functions;
recombining the energy spectrum vectors to generate an energy spectrum matrix, wherein u denotes the order of the energy spectra and v denotes the number of an energy spectrum vector;
taking the energy spectrum matrix as input and a classification value as output, establishing a neural network model, wherein the output layer applies linear mapping parameters and a linear bias parameter; the neural network model comprises a first hidden layer and a second hidden layer, the first hidden layer convolving the energy spectrum matrix, where m denotes an offset centered on u, a convolution window is applied, p denotes the window number, a deviation parameter is used, and σ is a nonlinear function; the second hidden layer applies linear mapping parameters and a linear bias parameter, where q denotes the dimension of the energy spectrum vector;
obtaining a learning cost function of the neural network model by learning the neural network model, wherein o denotes the output value, a flag value marks each learning sample, and ε denotes a learning parameter;
iteratively calculating the neural network model by using the learning cost function to obtain a learned neural network model;
collecting sound signals, processing the sound signals to obtain an energy spectrum matrix, and inputting the energy spectrum matrix into the learned neural network model to obtain an output value;
when the output value is larger than a preset value, the noise of the sound signal exceeds the standard, and the sound signal is stored in a database as a sample for sample retention and evidence collection.
Optionally, the set distance ranges from 50m to 100m.
The innovation points of the embodiments of the invention include:
1. In the embodiments, two synchronized sound collection devices are arranged in the environment to be monitored, the sound signals are analyzed with automatic intelligent processing, and sound features are extracted while the noise is monitored, so that chemical production noise can be effectively identified and noise monitoring is automated. This is one of the innovation points of the embodiments of the invention.
2. In the embodiments, the sound signals collected by the two devices are processed to generate digital features usable for identification; the energy spectrum is calculated from the input sound signal, identification features of the noise are constructed from the windowed energy spectrum, and chemical production noise is distinguished. This is another innovation point of the embodiments of the invention.
3. In the embodiments, the energy spectrum matrix is used to extract and separate chemical production noise from other background noise, and a neural network model identifies the noise, enabling detection of excessive noise and automatic sample retention and evidence collection when the noise exceeds the standard. This realizes continuous monitoring and real-time evidence collection of chemical production noise and helps reduce its impact.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a noise monitoring method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a noise monitoring method for automatic sample retention and evidence collection. The following are detailed below.
Fig. 1 is a flowchart of a noise monitoring method according to an embodiment of the present invention. Referring to fig. 1, the noise monitoring method for automatic sample retention and evidence collection according to this embodiment includes:
Step 1: arranging two sound collection devices in the environment to be monitored, wherein the two sound collection devices are spaced a set distance apart and collect sound signals simultaneously;
Step 2: representing the sound signals as functions of time, wherein 1 and 2 denote the numbers of the sound collection devices, T denotes the sound sampling period, and n denotes the discrete sample index;
Step 3: intercepting the sound signals at different times to obtain a plurality of local sound sequences, wherein the local sound sequence size is fixed, k denotes the interception time point, and ω denotes an offset;
Step 4: performing a discrete Fourier transform on each local sound sequence to obtain a plurality of transform sequences, wherein i denotes the imaginary unit;
Step 5: calculating the energy spectra of the transform sequences to obtain a plurality of energy spectrum vectors, wherein the calculation uses local window functions;
Step 6: recombining the energy spectrum vectors to generate an energy spectrum matrix, wherein u denotes the order of the energy spectra and v denotes the number of an energy spectrum vector;
Step 7: taking the energy spectrum matrix as input and a classification value as output, establishing a neural network model, wherein the output layer applies linear mapping parameters and a linear bias parameter; the neural network model comprises a first hidden layer and a second hidden layer, the first hidden layer convolving the energy spectrum matrix, where m denotes an offset centered on u, a convolution window is applied, p denotes the window number, a deviation parameter is used, and σ is a nonlinear function; the second hidden layer applies linear mapping parameters and a linear bias parameter, where q denotes the dimension of the energy spectrum vector;
Step 8: obtaining a learning cost function of the neural network model by learning the neural network model, wherein o denotes the output value, a flag value marks each learning sample, and ε denotes a learning parameter;
Step 9: iteratively calculating the neural network model by using the learning cost function to obtain a learned neural network model;
Step 10: collecting sound signals, processing them to obtain an energy spectrum matrix, and inputting the energy spectrum matrix into the learned neural network model to obtain an output value;
Step 11: when the output value is larger than the preset value, the noise of the sound signal exceeds the standard, and the sound signal is stored in a database as a sample for sample retention and evidence collection.
Specifically, referring to fig. 1, in the noise monitoring method for automatic sample retention and evidence collection provided in this embodiment, two synchronized sound collection devices are first arranged in the environment to be monitored in step 1, so that they collect time-synchronized sound signals. To make the time-synchronized signals collected by the two devices differ, the devices are spaced a set distance apart; the collected signals then differ and can be used to identify sound features. At the same time, if the two sound collection devices are spaced too far apart, deployment becomes difficult and the sound signals are attenuated. Therefore, the set distance between the two sound collection devices preferably ranges from 50 m to 100 m, so that the signals collected by the two devices can be distinguished while signal attenuation and deployment difficulties are avoided.
Each sound collection device is provided with a wireless network module. After the sound signals are collected, the wireless network module is used in step 2 to synchronize the clocks of the sound collection devices, and the sound signals are expressed as functions of time, wherein 1 and 2 denote the numbers of the sound collection devices, t = nT, T denotes the sound sampling period, and n denotes the discrete sample index. Representing the sound signals as functions of time attaches time stamps to the collected signals, so that the different sound channels can be synchronized in subsequent processing.
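As a small illustration of this representation, the following Python sketch pairs each discrete sample index n with its time stamp t = nT for the two devices. The 16 kHz sampling rate, the function name, and the dictionary layout are assumptions for illustration only; the embodiment does not specify them.

```python
import numpy as np

FS = 16_000            # assumed sampling rate in Hz; the embodiment does not state one
T = 1.0 / FS           # sampling period T

def as_function_of_time(samples_ch1, samples_ch2):
    """Pair each discrete sample index n with its time stamp t = n*T for the
    two sound collection devices (numbered 1 and 2)."""
    n = np.arange(len(samples_ch1))
    t = n * T                                   # discrete time axis t = nT
    return {1: (t, np.asarray(samples_ch1, dtype=float)),
            2: (t, np.asarray(samples_ch2, dtype=float))}
```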
After the sound signal is represented as a function of time, local sound sequences are obtained in step 3 by intercepting the complete sound signal. To obtain local sequences containing sufficient sound characteristics, the signal is intercepted multiple times at different time points, yielding a plurality of local sound sequences, where the local sound sequence size is fixed, k denotes the interception time point with k = 0, 1, …, and ω denotes an offset with ω = 0, 1, ….
It should be noted that in practical applications the value of k may be set according to actual needs. To make the local sequences include sufficient sound features while reducing data redundancy, the interception time point k may take the values 0, 150, 300, …, 150 × n, that is, a local sound sequence is intercepted every 150 sampling periods. The larger the local sound sequence, the richer the sound characteristics, but the data volume grows with it. To ensure that the sound signal includes sufficient sound features while avoiding excessive computation, the size of the local sound sequence is preferably set to 400 in this embodiment. Of course, the value 400 is only one embodiment and does not limit the application; in other embodiments the size may take other values, such as 300 or 500.
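As a concrete illustration of this interception step, the sketch below slices a discretely sampled signal into local sound sequences using the preferred values above (length 400, one interception every 150 sampling periods, nine interceptions). The function name and the zero-padding of a short final segment are assumptions, not part of the embodiment.

```python
import numpy as np

def intercept_local_sequences(signal, seq_len=400, hop=150, num_slices=9):
    """Cut a discretely sampled sound signal into short local sound sequences.

    Assumed defaults (seq_len=400, hop=150, num_slices=9) follow the preferred
    embodiment described in the text; they are not mandated by it.
    """
    slices = []
    for j in range(num_slices):
        k = j * hop                      # interception time point k = 0, 150, 300, ...
        segment = signal[k:k + seq_len]  # local sound sequence of length seq_len
        if len(segment) < seq_len:       # pad the tail so every slice has equal length
            segment = np.pad(segment, (0, seq_len - len(segment)))
        slices.append(segment)
    return np.stack(slices)              # shape: (num_slices, seq_len)
```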
In step 4, a discrete Fourier transform is performed on each local sound sequence to obtain its Fourier transform sequence. The energy spectrum of each transform sequence is then calculated in step 5 by windowing it with a local window function; in this embodiment one local window function is defined for each component of the energy spectrum vector.
In this embodiment, the energy spectra of the sound signals collected by the two sound collection devices are windowed separately, and different window functions are adopted, so that sound differences at different frequencies can be reflected. Calculating the energy spectrum produces digital features that can be used for identification, both to recognize the sound signals and to distinguish chemical production noise from other background noise.
Referring to step 5, after a local sound sequence is processed, the energy spectrum obtained by calculation is a six-dimensional vector. In order to fully reflect the change characteristics of the sound signal, multiple times of interception are carried out in step 3 to obtain a plurality of local sound sequences, and then a plurality of six-dimensional energy spectrum vectors are obtained after processing in step 4 and step 5.
In step 6, the six-dimensional energy spectrum vectors are recombined into a 6×u energy spectrum matrix, where u denotes the order of the energy spectrum, taking values 1 to u and corresponding to the time sequence, and v denotes the number of the energy spectrum vector, taking the values 11, 12, 13, 21, 22, 23 and corresponding to the six energy spectrum components. Preferably, the number of interceptions should be neither too small, so that the sound features are reflected sufficiently, nor too large, so that data redundancy is avoided; in this embodiment nine interceptions are performed, yielding nine six-dimensional energy spectrum vectors and, after recombination, a 6×9 energy spectrum matrix.
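Steps 4 to 6 can be illustrated with a sketch like the one below, which applies a DFT to the local sequences of both devices and sums the spectral energy over three bands per device to form a 6×u matrix. The band edges and the use of simple band sums in place of the embodiment's local window functions (whose exact formulas are not reproduced above) are assumptions for illustration only.

```python
import numpy as np

# Hypothetical FFT-bin ranges for three spectral windows per channel; the
# embodiment's actual local window functions are not reproduced here.
BANDS = [(0, 60), (60, 130), (130, 200)]

def energy_spectrum_matrix(slices_ch1, slices_ch2):
    """Build a 6 x u energy spectrum matrix from the local sound sequences of
    the two synchronized devices (u = number of interception points)."""
    rows = []
    for ch_slices in (slices_ch1, slices_ch2):
        spectra = np.abs(np.fft.rfft(ch_slices, axis=1)) ** 2   # DFT -> energy
        for lo, hi in BANDS:
            rows.append(spectra[:, lo:hi].sum(axis=1))          # windowed band energy
    return np.stack(rows)                                       # shape: (6, u), e.g. 6 x 9
```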
Then, a neural network model is established through step 7, with the energy spectrum matrix as input and the classification of the noise as meeting or exceeding the standard as output. The output of the neural network model is defined in this embodiment as a nonlinear function σ applied to a linear mapping of the second hidden layer vector, with corresponding linear mapping parameters and a linear bias parameter.
The neural network model comprises a first hidden layer that convolves the energy spectrum matrix generated in step 6, where m denotes an offset centered on u, a convolution window spans the time axis, and p denotes the window number, taking values 1 to 64. The convolution windows model the temporal characteristics of the energy spectrum and extract the short-time variation pattern of the sound signal, thereby providing features for distinguishing the noise. The layer also uses a deviation parameter and a nonlinear function σ, whose role is to let the neural network model better fit the nonlinear relations found in reality.
The neural network model further comprises a second hidden layer, where q denotes the dimension index of the second hidden layer vector, taking values 1 to 256, with its own linear mapping parameters and linear bias parameter. The second hidden layer establishes correlations among the energy spectra of different frequencies and maps the output of the first hidden layer onto a 256-dimensional feature vector.
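A minimal sketch of such a two-hidden-layer model, assuming a PyTorch implementation, is given below. The 64 convolution windows and the 256-dimensional second hidden layer follow the embodiment, while the kernel width of 3 and the choice of the sigmoid for the nonlinear function σ are assumptions, since the formula for σ is not reproduced above.

```python
import torch
import torch.nn as nn

class NoiseNet(nn.Module):
    """Sketch of the two-hidden-layer model described above (assumptions noted)."""
    def __init__(self, bands=6, steps=9, kernel=3, windows=64, hidden=256):
        super().__init__()
        # first hidden layer: convolution along the time axis of the 6 x 9 matrix
        self.conv = nn.Conv1d(in_channels=bands, out_channels=windows,
                              kernel_size=kernel, padding=kernel // 2)
        # second hidden layer: linear mapping of all convolution outputs to 256 dims
        self.fc = nn.Linear(windows * steps, hidden)
        # output layer: linear mapping plus bias, squashed to (0, 1)
        self.out = nn.Linear(hidden, 1)

    def forward(self, spectrum_matrix):                      # (batch, 6, 9)
        h1 = torch.sigmoid(self.conv(spectrum_matrix))       # first hidden layer
        h2 = torch.sigmoid(self.fc(h1.flatten(1)))           # second hidden layer
        return torch.sigmoid(self.out(h2)).squeeze(-1)       # output value o in (0, 1)
```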
After the neural network model is obtained, it is trained in step 8 to obtain its learning cost function, where o denotes the output value, a flag value marks each learning sample, and ε denotes a learning parameter. During learning, sound signals are first collected as in step 1, the six-dimensional energy spectrum vectors of each group of sound signals are calculated as in steps 2 to 5, and the energy spectrum vectors are assembled into energy spectrum matrices over the time sequence as in step 6. The energy spectrum matrices serve as learning samples and are manually marked with a flag value of 0 or 1: when the flag value is 1, the noise of the sound signal corresponding to the learning sample exceeds the standard; when it is 0, the noise does not exceed the standard. The learning cost function of the neural network model is defined in terms of the output value o, the flag value, and the learning parameter ε, whose value is preferably chosen so that learning does not fall into local extrema too quickly.
In step 9, the neural network model is iteratively computed using the learning cost function to find the parameters of the first and second hidden layers, such as the convolution windows, deviation parameters, and linear bias parameters, thereby obtaining the learned neural network model. The iterative calculation may use the back-propagation (BP) algorithm, which is documented in the existing literature and is not repeated here.
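The learning step might be sketched as follows, assuming the cost is a binary cross-entropy in the output value o and the flag value, with the learning parameter ε standing in as a small smoothing constant; the exact cost defined in the embodiment is not reproduced above, so this form is an assumption, and back-propagation is delegated to the framework.

```python
import torch

def train(model, spectra, labels, epochs=200, lr=0.01, eps=1e-3):
    """Iterate an assumed cross-entropy cost with back-propagation.

    spectra: float tensor of shape (N, 6, 9); labels: float tensor of 0/1 flag values.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        o = model(spectra)                                    # output values
        cost = -(labels * torch.log(o + eps)
                 + (1 - labels) * torch.log(1 - o + eps)).mean()
        cost.backward()                                       # BP algorithm
        optimizer.step()
    return model
```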
Then, in step 10, sound signals are collected and processed to generate an energy spectrum matrix; the collection follows step 1 and the processing follows steps 2 to 6, so they are not repeated here. When sound signals are collected continuously, segments of the defined length are intercepted continuously, according to the local sound sequence duration and the energy spectrum matrix size defined in steps 3 to 6, and the energy spectrum matrix is calculated. Each time an energy spectrum matrix is generated, it is input into the learned neural network model and an output value is calculated.
After the output value is obtained, step 11 determines whether the noise in the sound signal exceeds the standard. A preset value is set, for example 0.5; when the output value is greater than the preset value, the noise of the corresponding sound signal is judged to exceed the standard, and the sound signal is stored in the database as a sample for sample retention and evidence collection.
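Steps 10 and 11 might look like the following sketch, which thresholds the model output at 0.5 and stores the raw signal in an SQLite database when the noise exceeds the standard. The database schema, file name, and function names are illustrative assumptions; the embodiment only requires that the signal be stored as a retained sample.

```python
import sqlite3
import numpy as np
import torch

def monitor_and_retain(model, spectrum_matrix, raw_signal,
                       db_path="noise_samples.db", threshold=0.5):
    """Compute the output value and retain the raw signal if it exceeds the preset value."""
    with torch.no_grad():
        x = torch.tensor(spectrum_matrix[None], dtype=torch.float32)   # (1, 6, 9)
        output = float(model(x))
    if output > threshold:                                             # noise exceeds standard
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS samples (output REAL, signal BLOB)")
        conn.execute("INSERT INTO samples VALUES (?, ?)",
                     (output, np.asarray(raw_signal, dtype=np.float32).tobytes()))
        conn.commit()
        conn.close()
    return output
```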
The application provides a noise monitoring method with automatic sample retention and evidence collection. By processing the collected sound signals, digital features usable for identification are generated; the energy spectrum matrix is used to extract and separate chemical production noise from other background noise; and a neural network model identifies the noise, so that samples are retained and evidence is collected automatically when the noise exceeds the standard. This realizes continuous monitoring and real-time evidence collection of chemical production noise and helps reduce the impact of chemical production noise.
Based on the method provided by the inventors, the set distance between the two sound collection devices was set to 50 m to 100 m, the local sound sequence size to 400, a local sound sequence was intercepted every 150 sampling periods, and nine interceptions were made, so that the obtained energy spectrum matrix is a 6×9 matrix. Experimental verification was carried out, and the results are shown in Table 1. The experimental results show that the noise monitoring method for automatic sample retention and evidence collection provided by the invention achieves a monitoring accuracy above 90% and a response time within 10 seconds, with high monitoring efficiency and accurate identification results.
TABLE 1
Those of ordinary skill in the art will understand that: the figures are schematic representations of one embodiment, and the blocks or processes shown in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (3)
1. A noise monitoring method for automatic sample retention and evidence collection is characterized by comprising the following steps:
arranging two sound collection devices in an environment to be monitored, wherein the two sound collection devices are spaced at a set distance and simultaneously collect sound signals;
representing the sound signal as a function of time, wherein i denotes the number of the sound collection device, T denotes the sound sampling period, and n denotes the discrete sample index;
intercepting the sound signal at different times to obtain a plurality of local sound sequences, wherein the local sound sequence size is fixed, k denotes the interception time point, and ω denotes an offset;
performing a discrete Fourier transform on each of the local sound sequences to obtain a plurality of transform sequences, wherein i denotes the imaginary unit;
calculating the energy spectra of the transform sequences to obtain a plurality of energy spectrum vectors, wherein the calculation uses local window functions;
recombining the energy spectrum vectors to generate an energy spectrum matrix, wherein u denotes the order of the energy spectra and v denotes the number of an energy spectrum vector;
taking the energy spectrum matrix as input and a classification value as output, establishing a neural network model, wherein the output layer applies linear mapping parameters and β3 denotes a linear bias parameter; the neural network model comprises a first hidden layer and a second hidden layer, the first hidden layer convolving the energy spectrum matrix I(m+u, v), where m denotes an offset centered on u, a convolution window is applied, p denotes the window number, β1 denotes a deviation parameter, and σ is a nonlinear function; the second hidden layer applies linear mapping parameters and a linear bias parameter, where q denotes the dimension of the energy spectrum vector;
obtaining a learning cost function of the neural network model by learning the neural network model, wherein o denotes the output value, a flag value marks each learning sample, and ε denotes a learning parameter;
iteratively calculating the neural network model by using the learning cost function to obtain a learned neural network model;
collecting sound signals, processing the sound signals to obtain an energy spectrum matrix, and inputting the energy spectrum matrix into the learned neural network model to obtain an output value;
and when the output value is larger than a preset value, the noise of the sound signal exceeds the standard, and the sound signal is stored in a database to be used as a sample for sample reservation and evidence collection.
2. The method according to claim 1, wherein the set distance is in a range of 50m to 100m.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211118403.7A CN115206335B (en) | 2022-09-15 | 2022-09-15 | Noise monitoring method for automatic sample retention and evidence collection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211118403.7A CN115206335B (en) | 2022-09-15 | 2022-09-15 | Noise monitoring method for automatic sample retention and evidence collection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115206335A true CN115206335A (en) | 2022-10-18 |
CN115206335B CN115206335B (en) | 2022-12-02 |
Family
ID=83571973
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211118403.7A Active CN115206335B (en) | 2022-09-15 | 2022-09-15 | Noise monitoring method for automatic sample retention and evidence collection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115206335B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5799276A (en) * | 1995-11-07 | 1998-08-25 | Accent Incorporated | Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals |
CN101320566A (en) * | 2008-06-30 | 2008-12-10 | 中国人民解放军第四军医大学 | Non-air conduction speech reinforcement method based on multi-band spectrum subtraction |
CN104616663A (en) * | 2014-11-25 | 2015-05-13 | 重庆邮电大学 | Music separation method of MFCC (Mel Frequency Cepstrum Coefficient)-multi-repetition model in combination with HPSS (Harmonic/Percussive Sound Separation) |
CN105390141A (en) * | 2015-10-14 | 2016-03-09 | 科大讯飞股份有限公司 | Sound conversion method and sound conversion device |
CN109524014A (en) * | 2018-11-29 | 2019-03-26 | 辽宁工业大学 | A kind of Application on Voiceprint Recognition analysis method based on depth convolutional neural networks |
CN111292762A (en) * | 2018-12-08 | 2020-06-16 | 南京工业大学 | Single-channel voice separation method based on deep learning |
-
2022
- 2022-09-15 CN CN202211118403.7A patent/CN115206335B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5799276A (en) * | 1995-11-07 | 1998-08-25 | Accent Incorporated | Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals |
CN101320566A (en) * | 2008-06-30 | 2008-12-10 | 中国人民解放军第四军医大学 | Non-air conduction speech reinforcement method based on multi-band spectrum subtraction |
CN104616663A (en) * | 2014-11-25 | 2015-05-13 | 重庆邮电大学 | Music separation method of MFCC (Mel Frequency Cepstrum Coefficient)-multi-repetition model in combination with HPSS (Harmonic/Percussive Sound Separation) |
CN105390141A (en) * | 2015-10-14 | 2016-03-09 | 科大讯飞股份有限公司 | Sound conversion method and sound conversion device |
CN109524014A (en) * | 2018-11-29 | 2019-03-26 | 辽宁工业大学 | A kind of Application on Voiceprint Recognition analysis method based on depth convolutional neural networks |
CN111292762A (en) * | 2018-12-08 | 2020-06-16 | 南京工业大学 | Single-channel voice separation method based on deep learning |
Non-Patent Citations (2)
Title |
---|
AYA Y. KHUDHAIR et al.: "Reduction of the Noise Effect to Detect the DSSS Signal using the Artificial Neural Network", 2021 1st Babylon International Conference on Information Technology and Science (BICITS) *
Wang Jinchao: "Research on Speech Enhancement Algorithms Based on Neural Networks" (基于神经网络的语音增强算法研究), Research and Design (研究与设计) *
Also Published As
Publication number | Publication date |
---|---|
CN115206335B (en) | 2022-12-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111325095B (en) | Intelligent detection method and system for equipment health state based on acoustic wave signals | |
CN111985567B (en) | Automatic pollution source type identification method based on machine learning | |
CN108805269B (en) | Method for picking seismic facies arrival time based on LSTM recurrent neural network | |
CN102163427B (en) | Method for detecting audio exceptional event based on environmental model | |
CN113298134B (en) | System and method for remotely and non-contact health monitoring of fan blade based on BPNN | |
CN109473119B (en) | Acoustic target event monitoring method | |
CN112367273B (en) | Flow classification method and device of deep neural network model based on knowledge distillation | |
CN115640915A (en) | Intelligent gas pipe network compressor safety management method and Internet of things system | |
CN110440148A (en) | A kind of leakage loss acoustical signal classifying identification method, apparatus and system | |
CN111506635A (en) | System and method for analyzing residential electricity consumption behavior based on autoregressive naive Bayes algorithm | |
CN112348052A (en) | Power transmission and transformation equipment abnormal sound source positioning method based on improved EfficientNet | |
CN110020637A (en) | A kind of analog circuit intermittent fault diagnostic method based on more granularities cascade forest | |
CN104951553A (en) | Content collecting and data mining platform accurate in data processing and implementation method thereof | |
CN115499185A (en) | Method and system for analyzing abnormal behavior of network security object of power monitoring system | |
CN111600878A (en) | Low-rate denial of service attack detection method based on MAF-ADM | |
CN115376526A (en) | Power equipment fault detection method and system based on voiceprint recognition | |
CN115034671A (en) | Secondary system information fault analysis method based on association rule and cluster | |
CN113593605B (en) | Industrial audio fault monitoring system and method based on deep neural network | |
CN115206335B (en) | Noise monitoring method for automatic sample retention and evidence collection | |
CN109409216B (en) | Speed self-adaptive indoor human body detection method based on subcarrier dynamic selection | |
CN117789081A (en) | Dual-attention mechanism small object identification method based on self-information | |
CN116388865B (en) | PON optical module-based automatic screening method for abnormal optical power | |
CN112801033A (en) | AlexNet network-based construction disturbance and leakage identification method along long oil and gas pipeline | |
CN107742162A (en) | A kind of multidimensional characteristic association analysis method based on auxiliary tone monitoring information | |
Jiang et al. | Fault detection and diagnosis of wind turbine gearbox based on acoustic analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240422
Address after: 101400 28-3, floor 1, building 28, yard 13, Paradise West Street, Huairou District, Beijing
Patentee after: Central Carbon and (Beijing) Technology Co.,Ltd.
Country or region after: China
Address before: 101200 No. 81-1904, Shunfu Road, Daxingzhuang Town, Pinggu District, Beijing
Patentee before: BEIJING ZHONGHUA HIGH-TECH ENVIRONMENTAL MANAGEMENT CO.,LTD.
Country or region before: China
TR01 | Transfer of patent right |