CN115206335A - Noise monitoring method for automatic sample retention and evidence collection - Google Patents

Noise monitoring method for automatic sample retention and evidence collection

Info

Publication number
CN115206335A
Authority
CN
China
Prior art keywords
sound
energy spectrum
neural network
network model
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211118403.7A
Other languages
Chinese (zh)
Other versions
CN115206335B (en)
Inventor
王延敦
秦云松
宋博
王岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central Carbon and (Beijing) Technology Co., Ltd.
Original Assignee
Beijing Zhonghua High-Tech Environmental Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhonghua High-Tech Environmental Management Co., Ltd.
Priority to CN202211118403.7A
Publication of CN115206335A
Application granted
Publication of CN115206335B
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208: Noise filtering
    • G10L21/0216: Noise filtering characterised by the method used for estimating noise
    • G10L21/0224: Processing in the time domain
    • G10L21/0232: Processing in the frequency domain
    • G10L21/0272: Voice signal separating
    • G10L21/0308: Voice signal separating characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27: Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30: Speech or voice analysis techniques using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00: Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a noise monitoring method for automatic sample retention and evidence collection, in the field where computer applications intersect chemical engineering. The method comprises the following steps: acquiring a sound signal and expressing it as a function of time; intercepting the sound signal at different times to obtain a plurality of local sound sequences, and transforming them to obtain a plurality of transform sequences; calculating energy spectra to obtain a plurality of energy spectrum vectors and generating an energy spectrum matrix; establishing a neural network model; obtaining the learning cost function of the neural network model through training; iterating to obtain the learned neural network model; inputting the energy spectrum matrix into the learned model to obtain an output value; and, when the output value is greater than a preset value, storing the sound signal in a database as a retained sample for evidence. Two time-synchronized sound collection devices are arranged in the environment to be monitored, and the sound signals are analyzed by automatic intelligent processing to extract sound features and monitor noise; the method can effectively identify chemical-production noise and realizes automatic noise monitoring.

Description

Noise monitoring method for automatic sample retention and evidence collection
Technical Field
The invention relates to the intersection of computer applications and chemical engineering, and in particular to a noise monitoring method for automatic sample retention and evidence collection.
Background
In recent years, with the rapid development of the modern chemical industry and the growth of large-scale chemical plants, noise disturbance to nearby residents has become increasingly prominent. To reduce the influence of noise on the surrounding environment and residents' lives, measures are needed to attenuate the noise and to prevent and control noise pollution.
Noise sources in chemical production are diverse: airflow noise from abrupt gas-pressure changes in compressed air systems, high-pressure steam, and heating furnaces; mechanical noise from the friction, vibration, impact, or high-speed rotation of ball mills, pulverizers, and similar machinery; and electromagnetic noise from the alternating and pulsating magnetic fields of transformers and the like. Noise pollution in chemical production is both widespread and persistent: the complexity of chemical production processes makes the noise sources extensive and the affected area large, and as long as a sound source keeps operating, its noise does not stop. Noise must therefore be monitored continuously; even after abatement measures are taken, whether the suppression meets the standard must still be monitored continuously, so that exceedances are found and corrected promptly and the impact of the noise is reduced as much as possible.
For example, utility model CN202058028U provides a forensic noise centralized monitoring system, but it judges noise by monitoring the amplitude of the noise signal and cannot distinguish noise categories in the environment, such as chemical-production noise sources versus general environmental noise sources. Invention patent CN113340401A provides an online noise monitoring method and device that use spectrum analysis and peak matching; these suit sound sources that generate noise and large vibration simultaneously, such as construction noise sources, but do not adapt well to the complex patterns of chemical-production noise sources.
Disclosure of Invention
The invention provides a noise monitoring method for automatic sample retention and evidence collection, which is used for overcoming at least one technical problem in the prior art.
An embodiment of the invention provides a noise monitoring method for automatic sample retention and evidence collection, comprising the following steps:
arranging two sound collection devices in an environment to be monitored, wherein the two sound collection devices are spaced at a set distance and simultaneously collect sound signals;
representing the sound signal as a function of time [formula image], wherein i denotes the number of the sound collection device, T denotes the sound sampling period, and n denotes the discrete sample index;
intercepting the sound signal at different times to obtain a plurality of local sound sequences [formula image], wherein [image] denotes the local sound sequence size, k denotes the interception time point, and 𝜔 denotes an offset;
performing a discrete Fourier transform on each local sound sequence to obtain a plurality of transform sequences [formula image], wherein 𝑖 denotes the imaginary unit;
calculating the energy spectra of the transform sequences to obtain a plurality of energy spectrum vectors [formula image], wherein [image] denotes a local window function;
recombining the energy spectrum vectors to generate an energy spectrum matrix [formula image], wherein 𝑢 denotes the order of the energy spectra and 𝑣 denotes the number of an energy spectrum vector;
taking the energy spectrum matrix as input and [formula image] as output, establishing a neural network model, wherein [image] denotes the corresponding linear mapping parameter and [image] denotes a linear bias parameter; the neural network model comprises a first hidden layer and a second hidden layer; the first hidden layer is [formula image], wherein [image] denotes the energy spectrum matrix, 𝑚 denotes an offset centered on 𝑢, [image] denotes a convolution window, 𝑝 denotes a window number, [image] denotes a deviation parameter, and 𝜎 is a nonlinear function defined as [formula image]; the second hidden layer is [formula image], wherein 𝑞 denotes the dimension of the energy spectrum vector, [image] denotes a linear mapping parameter, and [image] denotes a linear bias parameter;
obtaining a learning cost function of the neural network model [formula image] by training the model, wherein 𝑜 denotes the output value, [image] denotes the learning sample label value, and 𝜀 denotes a learning parameter;
iteratively calculating the neural network model with the learning cost function to obtain a learned neural network model;
collecting sound signals, processing them to obtain an energy spectrum matrix, and inputting the matrix into the learned neural network model to obtain an output value;
and when the output value is greater than a preset value, the noise of the sound signal exceeds the standard, and the sound signal is stored in a database as a retained sample for evidence.
Optionally, the set distance ranges from 50 m to 100 m.
Optionally, the local sound sequence size [image] is 400.
the innovation points of the embodiment of the invention comprise:
1. In this embodiment, two synchronized sound collection devices are arranged in the environment to be monitored, and the sound signals are analyzed by automatic intelligent processing to extract sound features and monitor noise. This effectively identifies chemical-production noise and realizes automatic noise monitoring, which is one of the innovations of this embodiment.
2. In this embodiment, the sound signals collected by the two devices are processed into digital features usable for identification: the energy spectrum is calculated from the input sound signal, and the identification features of the noise are constructed from the windowed energy spectrum so as to distinguish chemical-production noise. This is one of the innovations of this embodiment.
3. In this embodiment, the energy spectrum matrix is used to extract and separate chemical-production noise from other background noise, and a neural network model identifies the noise. Exceedance is recognized and a sample is retained automatically as evidence when the noise exceeds the standard, realizing continuous monitoring and real-time evidence collection of chemical-production noise and helping reduce its impact.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a noise monitoring method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a noise monitoring method for automatic sample retention and evidence collection. Details are given below.
Fig. 1 is a flowchart of a noise monitoring method according to an embodiment of the present invention. Referring to fig. 1, the noise monitoring method for automatic sample retention and evidence collection of this embodiment comprises:
Step 1: arranging two sound collection devices in the environment to be monitored, wherein the two devices are spaced a set distance apart and collect sound signals simultaneously;
Step 2: representing the sound signals as functions of time [formula image], wherein 1 and 2 denote the numbers of the sound collection devices, T denotes the sound sampling period, and n denotes the discrete sample index;
Step 3: intercepting the sound signals at different times to obtain a plurality of local sound sequences [formula image], wherein [image] denotes the local sound sequence size, k denotes the interception time point, and 𝜔 denotes an offset;
Step 4: performing a discrete Fourier transform on each local sound sequence to obtain a plurality of transform sequences [formula image], wherein 𝑖 denotes the imaginary unit;
Step 5: calculating the energy spectra of the transform sequences to obtain a plurality of energy spectrum vectors [formula image], wherein [image] denotes a local window function;
Step 6: recombining the energy spectrum vectors to generate an energy spectrum matrix [formula image], wherein 𝑢 denotes the order of the energy spectra and 𝑣 denotes the number of an energy spectrum vector;
Step 7: taking the energy spectrum matrix as input and [formula image] as output, establishing a neural network model, wherein [image] denotes the corresponding linear mapping parameter and [image] denotes a linear bias parameter; the neural network model comprises a first hidden layer and a second hidden layer; the first hidden layer is [formula image], wherein [image] denotes the energy spectrum matrix, 𝑚 denotes an offset centered on 𝑢, [image] denotes a convolution window, 𝑝 denotes a window number, [image] denotes a deviation parameter, and 𝜎 is a nonlinear function defined as [formula image]; the second hidden layer is [formula image], wherein 𝑞 denotes the dimension of the energy spectrum vector, [image] denotes a linear mapping parameter, and [image] denotes a linear bias parameter;
Step 8: obtaining a learning cost function of the neural network model [formula image] by training the model, wherein 𝑜 denotes the output value, [image] denotes the learning sample label value, and 𝜀 denotes a learning parameter;
Step 9: iteratively calculating the neural network model with the learning cost function to obtain a learned neural network model;
Step 10: collecting sound signals, processing them to obtain an energy spectrum matrix, and inputting the matrix into the learned neural network model to obtain an output value;
Step 11: when the output value is greater than a preset value, the noise of the sound signal exceeds the standard, and the sound signal is stored in the database as a retained sample for evidence.
Specifically, referring to fig. 1, the noise monitoring method for automatic sample retention and evidence collection of this embodiment first deploys, in step 1, two time-synchronized sound collection devices in the environment to be monitored, so that they collect sound signals in time synchronization. So that the two synchronized recordings differ, the devices are spaced a set distance apart; the resulting differences between the collected signals can be used to identify sound features. At the same time, if the two devices are spaced too far apart, deployment becomes difficult and the sound signals attenuate. The set distance therefore preferably ranges from 50 m to 100 m, so that the signals collected by the two devices are distinguishable while signal attenuation and deployment difficulties are avoided.
Each sound collection device is equipped with a wireless network module. After the sound signals are collected, step 2 synchronizes the devices' clocks through the wireless network modules and expresses each sound signal as a function of time [formula image], wherein 1 and 2 denote the device numbers, 𝑡 = 𝑛𝑇, T denotes the sound sampling period, and n denotes the discrete sample index. Expressing the sound signals as functions of time time-stamps the collected signals, so that the different sound channels can be synchronized in subsequent processing.
After the sound signal is expressed as a function of time, step 3 intercepts the complete sound signal to obtain local sound sequences. To obtain local sequences that include sufficient sound features, multiple interceptions are made at different time points, yielding a plurality of local sound sequences [formula image], wherein [image] denotes the local sound sequence size, k denotes the interception time point with k = 0, 1, …, and 𝜔 denotes an offset with 𝜔 = 0, 1, …, [image]. In practice the value of 𝑘 can be set according to actual needs; so that each local sequence includes sufficient sound features while data redundancy is kept low, the interception time points may be k = 0, 150, 300, …, 150n, i.e., a local sound sequence is intercepted every 150 sampling periods. A larger local sound sequence carries richer sound features, but the data volume grows with it; to include sufficient sound features while avoiding excessive computation from too much data, this embodiment preferably sets the local sound sequence size to 400. Of course, 400 is only one choice of this embodiment and does not limit the application; in other embodiments the size may take other values, such as 300 or 500.
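For concreteness, here is a minimal NumPy sketch of this interception step, assuming one mono sample stream per device; the function and variable names are ours, not the patent's, and the hop of 150 samples and sequence size of 400 follow the preferences stated above.

```python
import numpy as np

def intercept_local_sequences(signal: np.ndarray, size: int = 400, hop: int = 150) -> np.ndarray:
    """Cut a 1-D sampled sound signal into local sequences of length
    `size`, one starting every `hop` samples (k = 0, 150, 300, ...),
    as described for step 3."""
    starts = range(0, len(signal) - size + 1, hop)
    return np.stack([signal[k:k + size] for k in starts])

# Example: one second of a synthetic signal sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
s = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)
frames = intercept_local_sequences(s)
print(frames.shape)  # (number of local sequences, 400)
```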
In step 4, a discrete Fourier transform is applied to each local sound sequence to obtain its Fourier transform sequence [formula image]. Step 5 then calculates the energy spectrum of each transform sequence [formula image], wherein [image] denotes a local window function. In this embodiment the local window functions are defined by six expressions [formula images]. The energy spectra of the sound signals collected by the two sound collection devices are windowed separately, and different window functions are used so that sound differences at different frequencies are reflected. Calculating the energy spectrum generates digital features usable for identification, which serve to recognize the sound signals and to distinguish chemical-production noise from the other background noise to be identified.
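The six window-function definitions are rendered as images in the source and cannot be reproduced here. Purely as a hedged sketch of the shape of this computation, the windowed energy spectrum of one local sequence might be computed as below, with a generic Hann window standing in for the patent's (unknown) local window functions and an equal six-band split standing in for its six energy-spectrum directions.

```python
import numpy as np

def windowed_energy(frame: np.ndarray, window: np.ndarray, bands: int = 6) -> np.ndarray:
    """Window a local sound sequence, apply the discrete Fourier
    transform (step 4), and sum |X|^2 over `bands` equal frequency
    bands, yielding one six-dimensional energy spectrum vector (step 5).
    The equal band split is our assumption."""
    spectrum = np.fft.rfft(frame * window)   # DFT of the windowed sequence
    power = np.abs(spectrum) ** 2            # energy per frequency bin
    return np.array([band.sum() for band in np.array_split(power, bands)])

frame = np.random.randn(400)                 # one 400-sample local sequence
window = np.hanning(400)                     # stand-in for the patent's windows
print(windowed_energy(frame, window))        # six energy values
```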
Referring to step 5, after one local sound sequence is processed, the calculated energy spectrum is a six-dimensional vector. To fully reflect the changing characteristics of the sound signal, step 3 performs multiple interceptions to obtain multiple local sound sequences, which after the processing of steps 4 and 5 yield multiple six-dimensional energy spectrum vectors.
Step 6 recombines the six-dimensional energy spectrum vectors, producing a 6×u energy spectrum matrix [formula image], wherein 𝑢 denotes the order of the energy spectra, taking values 1 to u in time order, and 𝑣 numbers the energy spectrum vectors, taking the values 11, 12, 13, 21, 22, 23, corresponding to six energy spectrum directions. To reflect the sound features sufficiently the number of interceptions should not be too small, and to avoid data redundancy it should not be too large; this embodiment therefore preferably performs 9 interceptions, yielding 9 six-dimensional energy spectrum vectors that are recombined into a 6×9 energy spectrum matrix.
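Continuing the sketch under the same assumptions, assembling nine such six-dimensional vectors into the 6×9 energy spectrum matrix is a column-wise stack in time order:

```python
import numpy as np

# Nine six-dimensional energy spectrum vectors in time order (u = 1..9);
# stacking them as columns gives the 6x9 energy spectrum matrix, with rows
# corresponding to the six energy-spectrum directions (v).
vectors = [np.random.rand(6) for _ in range(9)]  # placeholders for real vectors
energy_matrix = np.stack(vectors, axis=1)
print(energy_matrix.shape)  # (6, 9)
```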
A neural network model is then established in step 7, with the energy spectrum matrix as input and the classification of the noise as compliant or non-compliant as output. This embodiment defines [formula image] as the output of the neural network model, wherein [image] denotes the corresponding linear mapping parameter, [image] denotes the linear bias parameter, [image] denotes the vector of the second hidden layer, and 𝜎 is a nonlinear function.
The neural network model comprises a first hidden layer defined as [formula image], wherein [image] is the energy spectrum matrix generated in step 6, 𝑚 denotes an offset centered on 𝑢, [image] denotes a convolution window, and 𝑝 denotes the window number, taking values 1 to 64. The convolution windows model the temporal characteristics of the energy spectrum and extract the short-time variation of the sound signal, providing features that distinguish the noise. [image] denotes a deviation parameter, and 𝜎 is a nonlinear function defined as [formula image]; its role is to let the neural network model better fit the nonlinear relationships found in reality.
The neural network model further comprises a second hidden layer [formula image], wherein 𝑞 denotes the vector dimension of the second hidden layer, taking values 1 to 256, [image] denotes a linear mapping parameter, and [image] denotes a linear bias parameter. The second hidden layer establishes correlations between the energy spectra of different frequencies, mapping the output of the first hidden layer onto a 256-dimensional feature vector.
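The exact layer formulas are images in the source, so the following NumPy forward pass is only a structural sketch consistent with the prose: 64 convolution windows sliding over the time axis of the 6×9 matrix, a 256-dimensional fully connected second hidden layer, and a sigmoid output. Every shape, the window width of 3, and the initializers are our assumptions; the patent only specifies 64 windows (p = 1..64) and a 256-dimensional second hidden layer (q = 1..256).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed shapes: 64 convolution windows of width 3 over the time axis (u),
# a 256-dimensional second hidden layer, and a scalar output.
W_conv = rng.normal(scale=0.1, size=(64, 6, 3))   # convolution windows (p = 1..64)
b_conv = np.zeros(64)                              # deviation parameters
W_fc = rng.normal(scale=0.1, size=(256, 64 * 7))   # linear mapping of hidden layer 2
b_fc = np.zeros(256)                               # linear bias of hidden layer 2
W_out = rng.normal(scale=0.1, size=256)            # output linear mapping
b_out = 0.0                                        # output linear bias

def forward(I: np.ndarray) -> float:
    """I is the 6x9 energy spectrum matrix; returns the output value o in (0, 1)."""
    # First hidden layer: convolution over the time offset m, centered on u.
    h1 = np.empty((64, 9 - 3 + 1))
    for p in range(64):
        for u in range(9 - 3 + 1):
            h1[p, u] = sigmoid(np.sum(W_conv[p] * I[:, u:u + 3]) + b_conv[p])
    # Second hidden layer: map the flattened conv features to 256 dimensions.
    h2 = sigmoid(W_fc @ h1.ravel() + b_fc)
    # Output: scalar compliance score.
    return float(sigmoid(W_out @ h2 + b_out))

print(forward(np.random.rand(6, 9)))
```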
After the neural network model is obtained, step 8 trains it and obtains its learning cost function [formula image], wherein 𝑜 denotes the output value, [image] denotes the learning sample label value, and 𝜀 denotes a learning parameter. For training, sound signals are first collected by the method of step 1; the six-dimensional energy spectrum vectors of each group of sound signals are then calculated by the methods of steps 2 to 5; the energy spectrum vectors are assembled in time order into energy spectrum matrices as in step 6; and these matrices are used as learning samples, which are manually labelled with a label value [image] of 0 or 1. In this embodiment, a label value of 1 means the noise of the sound signal corresponding to the learning sample exceeds the standard, and a label value of 0 means it does not. The learning cost function of the neural network model is defined as [formula image], wherein 𝑜 denotes the output value and 𝜀 denotes the learning parameter, whose value [image] is chosen to avoid falling into local extrema too quickly during learning, preferably [image].
In step 9 the neural network model is calculated iteratively with the learning cost function to find the parameters of the first and second hidden layers, such as the convolution windows [image], the deviation parameters [image], and the linear bias parameters [image], thereby obtaining the learned neural network model. The iterative calculation may use the back-propagation (BP) algorithm, which is documented in the existing literature and is not described again here.
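Since the cost function itself is an image in the source, the training sketch below substitutes an assumed squared-error cost with an ε-weighted regularizer and trains a deliberately simplified single-layer variant of the model by gradient descent; it illustrates the iterative calculation only, not the patent's actual cost function or a full BP pass through both hidden layers.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy learning set: flattened 6x9 energy spectrum matrices with manual
# labels y = 1 (noise exceeds the standard) or y = 0 (compliant).
X = rng.random((32, 54))
y = rng.integers(0, 2, size=32).astype(float)

w = np.zeros(54)          # weights of the simplified single-layer model
b = 0.0
eps = 1e-3                # learning parameter epsilon (assumed role: L2 weight)
lr = 0.5                  # gradient step size (our choice)

for _ in range(500):      # iterative calculation in place of full BP
    o = sigmoid(X @ w + b)                 # output values
    # Assumed cost: sum of (o - y)^2 plus epsilon-weighted regularization.
    grad_o = 2 * (o - y) * o * (1 - o)     # d(cost)/d(pre-activation)
    w -= lr * (X.T @ grad_o / len(y) + 2 * eps * w)
    b -= lr * grad_o.mean()

print(((sigmoid(X @ w + b) > 0.5) == (y > 0.5)).mean())  # training accuracy
```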
Then, in step 10, sound signals are collected and processed to generate an energy spectrum matrix. Collection follows step 1, and the processing that generates the energy spectrum matrix follows steps 2 to 6, so neither is repeated here. When sound signals are collected continuously, local sound sequences of the length defined above are intercepted continuously according to the duration defined in steps 3 to 6 and the size of the energy spectrum matrix, and the energy spectrum matrix is calculated. Whenever an energy spectrum matrix is generated, it is input to the learned neural network model, which calculates an output value.
After the output value is obtained, step 11 determines whether the noise in the sound signal exceeds the standard against a preset value, for example 0.5. When the output value is greater than the preset value, the noise of the corresponding sound signal is judged to exceed the standard, and the sound signal is stored in the database as a retained sample for evidence.
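A sketch of the monitoring loop of steps 10 and 11, reusing the helper functions from the earlier sketches (intercept_local_sequences, windowed_energy, forward) and using SQLite as a stand-in for the unspecified database; the 0.5 threshold follows the example above.

```python
import sqlite3
import time
import numpy as np

def monitor(signal: np.ndarray, preset: float = 0.5, db_path: str = "noise_evidence.db") -> None:
    """If the learned model's output exceeds the preset value, store the
    raw sound signal in a database as a retained sample for evidence.
    Assumes `signal` yields at least 9 local sequences of 400 samples."""
    frames = intercept_local_sequences(signal)[:9]        # 9 interceptions
    window = np.hanning(400)
    vectors = [windowed_energy(f, window) for f in frames]
    I = np.stack(vectors, axis=1)                         # 6x9 energy spectrum matrix
    output = forward(I)                                   # learned model's output value
    if output > preset:                                   # noise exceeds the standard
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS evidence (ts REAL, score REAL, signal BLOB)")
        con.execute("INSERT INTO evidence VALUES (?, ?, ?)",
                    (time.time(), output, signal.astype(np.float32).tobytes()))
        con.commit()
        con.close()
```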
In summary, the present application provides a noise monitoring method with automatic sample retention and evidence collection. The collected sound signals are processed into digital features usable for identification; the energy spectrum matrix is used to extract and separate chemical-production noise from other background noise; and a neural network model identifies the noise. When the noise exceeds the standard, a sample is retained automatically as evidence, realizing continuous monitoring and real-time evidence collection of chemical-production noise and helping reduce its impact.
Based on the method provided by the inventors, the set distance between the two sound collection devices was set to 50 m to 100 m, the local sound sequence size was set to 400, a local sound sequence was intercepted every 150 sampling periods, and 9 interceptions were made, i.e., the resulting energy spectrum matrix was a 6×9 matrix. Experimental verification was carried out; the results, shown in Table 1 [table rendered as an image in the source], indicate that the noise monitoring method for automatic sample retention and evidence collection provided by the invention achieves a monitoring accuracy above 90% and a response time within 10 seconds, with high monitoring efficiency and accurate identification.
Those of ordinary skill in the art will understand that: the figures are schematic representations of one embodiment, and the blocks or processes shown in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (3)

1. A noise monitoring method for automatic sample retention and evidence collection is characterized by comprising the following steps:
arranging two sound collection devices in an environment to be monitored, wherein the two sound collection devices are spaced at a set distance and simultaneously collect sound signals;
representing the sound signal as a function of time [formula image], wherein i denotes the number of the sound collection device, T denotes the sound sampling period, and n denotes the discrete sample index;
intercepting the sound signal at different times to obtain a plurality of local sound sequences [formula image], wherein [image] denotes the local sound sequence size, k denotes the interception time point, and 𝜔 denotes an offset;
performing a discrete Fourier transform on each of the local sound sequences to obtain a plurality of transform sequences [formula image], wherein 𝑖 denotes the imaginary unit;
calculating energy spectra of the transform sequences to obtain a plurality of energy spectrum vectors [formula image], wherein [image] denotes a local window function;
recombining the plurality of energy spectrum vectors to generate an energy spectrum matrix [formula image], wherein 𝑢 denotes the order of the energy spectra and 𝑣 denotes the number of an energy spectrum vector;
taking the energy spectrum matrix as input and [formula image] as output, establishing a neural network model, wherein [image] denotes the corresponding linear mapping parameter and 𝛽3 denotes a linear bias parameter; the neural network model comprises a first hidden layer and a second hidden layer; the first hidden layer is [formula image], wherein 𝐼(𝑚+𝑢,𝑣) denotes the energy spectrum matrix, 𝑚 denotes an offset centered on 𝑢, [image] denotes a convolution window, 𝑝 denotes a window number, 𝛽1 denotes a deviation parameter, and 𝜎 is a nonlinear function defined as [formula image]; the second hidden layer is [formula image], wherein 𝑞 denotes the dimension of the energy spectrum vector, [image] denotes a linear mapping parameter, and [image] denotes a linear bias parameter;
obtaining a learning cost function of the neural network model [formula image] by training the neural network model, wherein 𝑜 denotes the output value, [image] denotes the learning sample label value, and 𝜀 denotes a learning parameter;
iteratively calculating the neural network model with the learning cost function to obtain a learned neural network model;
collecting sound signals, processing them to obtain an energy spectrum matrix, and inputting the energy spectrum matrix into the learned neural network model to obtain an output value;
and when the output value is greater than a preset value, the noise of the sound signal exceeds the standard, and the sound signal is stored in a database as a retained sample for evidence.
2. The method according to claim 1, wherein the set distance is in a range of 50m to 100m.
3. The noise monitoring method for automatic sample retention and evidence collection according to claim 1, wherein the local sound sequence size [image] is 400.
CN202211118403.7A 2022-09-15 2022-09-15 Noise monitoring method for automatic sample retention and evidence collection Active CN115206335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211118403.7A CN115206335B (en) 2022-09-15 2022-09-15 Noise monitoring method for automatic sample retention and evidence collection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211118403.7A CN115206335B (en) 2022-09-15 2022-09-15 Noise monitoring method for automatic sample retention and evidence collection

Publications (2)

Publication Number Publication Date
CN115206335A 2022-10-18
CN115206335B CN115206335B (en) 2022-12-02

Family

ID=83571973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211118403.7A Active CN115206335B (en) 2022-09-15 2022-09-15 Noise monitoring method for automatic sample retention and evidence collection

Country Status (1)

Country Link
CN (1) CN115206335B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5799276A (en) * 1995-11-07 1998-08-25 Accent Incorporated Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals
CN101320566A (en) * 2008-06-30 2008-12-10 中国人民解放军第四军医大学 Non-air conduction speech reinforcement method based on multi-band spectrum subtraction
CN104616663A (en) * 2014-11-25 2015-05-13 重庆邮电大学 Music separation method of MFCC (Mel Frequency Cepstrum Coefficient)-multi-repetition model in combination with HPSS (Harmonic/Percussive Sound Separation)
CN105390141A (en) * 2015-10-14 2016-03-09 科大讯飞股份有限公司 Sound conversion method and sound conversion device
CN109524014A (en) * 2018-11-29 2019-03-26 辽宁工业大学 A kind of Application on Voiceprint Recognition analysis method based on depth convolutional neural networks
CN111292762A (en) * 2018-12-08 2020-06-16 南京工业大学 Single-channel voice separation method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AYA Y. KHUDHAIR et al.: "Reduction of the Noise Effect to Detect the DSSS Signal using the Artificial Neural Network", 2021 1st Babylon International Conference on Information Technology and Science (BICITS) *
王金超: "Research on Speech Enhancement Algorithms Based on Neural Networks" (基于神经网络的语音增强算法研究), 《研究与设计》 (Research and Design) *

Also Published As

Publication number Publication date
CN115206335B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN111325095B (en) Intelligent detection method and system for equipment health state based on acoustic wave signals
CN111985567B (en) Automatic pollution source type identification method based on machine learning
CN108805269B (en) Method for picking seismic facies arrival time based on LSTM recurrent neural network
CN102163427B (en) Method for detecting audio exceptional event based on environmental model
CN113298134B (en) System and method for remotely and non-contact health monitoring of fan blade based on BPNN
CN109473119B (en) Acoustic target event monitoring method
CN112367273B (en) Flow classification method and device of deep neural network model based on knowledge distillation
CN115640915A (en) Intelligent gas pipe network compressor safety management method and Internet of things system
CN110440148A (en) A kind of leakage loss acoustical signal classifying identification method, apparatus and system
CN111506635A (en) System and method for analyzing residential electricity consumption behavior based on autoregressive naive Bayes algorithm
CN112348052A (en) Power transmission and transformation equipment abnormal sound source positioning method based on improved EfficientNet
CN110020637A (en) A kind of analog circuit intermittent fault diagnostic method based on more granularities cascade forest
CN104951553A (en) Content collecting and data mining platform accurate in data processing and implementation method thereof
CN115499185A (en) Method and system for analyzing abnormal behavior of network security object of power monitoring system
CN111600878A (en) Low-rate denial of service attack detection method based on MAF-ADM
CN115376526A (en) Power equipment fault detection method and system based on voiceprint recognition
CN115034671A (en) Secondary system information fault analysis method based on association rule and cluster
CN113593605B (en) Industrial audio fault monitoring system and method based on deep neural network
CN115206335B (en) Noise monitoring method for automatic sample retention and evidence collection
CN109409216B (en) Speed self-adaptive indoor human body detection method based on subcarrier dynamic selection
CN117789081A (en) Dual-attention mechanism small object identification method based on self-information
CN116388865B (en) PON optical module-based automatic screening method for abnormal optical power
CN112801033A (en) AlexNet network-based construction disturbance and leakage identification method along long oil and gas pipeline
CN107742162A (en) A kind of multidimensional characteristic association analysis method based on auxiliary tone monitoring information
Jiang et al. Fault detection and diagnosis of wind turbine gearbox based on acoustic analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240422

Address after: 101400 28-3, floor 1, building 28, yard 13, Paradise West Street, Huairou District, Beijing

Patentee after: Central Carbon and (Beijing) Technology Co.,Ltd.

Country or region after: China

Address before: 101200 No. 81-1904, Shunfu Road, Daxingzhuang Town, Pinggu District, Beijing

Patentee before: BEIJING ZHONGHUA HIGH-TECH ENVIRONMENTAL MANAGEMENT CO.,LTD.

Country or region before: China
