CN114758674A - Sound information monitoring method, device, equipment and medium for target area - Google Patents

Sound information monitoring method, device, equipment and medium for target area

Info

Publication number
CN114758674A
Authority
CN
China
Prior art keywords
sound
data
target
value
pressure level
Prior art date
Legal status
Pending
Application number
CN202210224155.8A
Other languages
Chinese (zh)
Inventor
张承云
孟瑞德
黄云峰
尹天鑫
陈鹏旭
余上
Current Assignee
Guangzhou Inspiration Ecological Technology Co ltd
Guangzhou University
Original Assignee
Guangzhou Inspiration Ecological Technology Co ltd
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou Inspiration Ecological Technology Co Ltd and Guangzhou University
Priority to CN202210224155.8A
Publication of CN114758674A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/21 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being power information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a method, a device, equipment and a medium for monitoring sound information of a target area. First sound information of the target area is acquired by a sound acquisition device, and second sound information generated by a monitoring loudspeaker at the position of the sound acquisition device is acquired. A sound pressure level correction value is calculated from the second sound information, noise screening is performed according to the first sound information and the sound pressure level correction value, and target sound data remaining after noise screening is determined. Similarity comparison processing is then performed according to the target sound data and pre-stored template data to determine useful sound data, and monitoring data is generated from the useful sound data. Because the data finally used for monitoring is the useful sound data of interest remaining after noise screening, the amount of data transmitted by the sound acquisition device and the difficulty of analyzing the monitoring data are reduced, improving convenience.

Description

Sound information monitoring method, device, equipment and medium for target area
Technical Field
The invention relates to the field of audio processing, in particular to a method, a device, equipment and a medium for monitoring sound information of a target area.
Background
In recent years, ecological environment monitoring based on sound information has attracted increasing attention. Environmental sounds contain a large amount of information and can, to a certain extent, reflect the dynamic changes of the current environment. Examples of sounds of interest include: 1) sounds emitted by animals, which can be analyzed from the sound signals to obtain information such as the activity level of animals in the region and, further, species composition and abundance; 2) gunshots, from which the time and place of poaching can be determined. In theory, such acoustic monitoring of the ecological environment could run 24 hours a day, but this uninterrupted operation generates a large amount of sound data containing noise, which makes it difficult to upload the data and to analyze the sound data of interest.
Disclosure of Invention
In view of the above, in order to solve at least one of the above technical problems, the present invention provides a method, an apparatus, a device and a medium for conveniently monitoring sound information of a target area.
The embodiment of the invention adopts the technical scheme that:
a method for monitoring sound information of a target area comprises the following steps:
acquiring first sound information of a target area through a sound acquisition device, and acquiring second sound information generated at the position of the sound acquisition device through a monitoring loudspeaker;
calculating a sound pressure level correction value according to the second sound information;
performing noise screening processing according to the first sound information and the sound pressure level correction value, and determining target sound data after noise screening; the noise screened by the noise screening process comprises at least one of background noise, rain noise and wind noise;
and performing similarity comparison processing according to the target sound data and pre-stored template data, determining useful sound data, and generating monitoring data according to the useful sound data.
Further, the step of determining the target sound data after the noise screening by performing the noise screening process according to the first sound information and the sound pressure level correction value includes:
determining a target data segment from the first data segment, and calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value;
when the sequence number corresponding to the target data segment is equal to a preset sequence number threshold, determining, from the calculated average sound pressure levels, a number of smallest average sound pressure levels equal to the preset sequence number threshold, and calculating the average sound pressure value of these smallest average sound pressure levels;
calculating a background noise value according to the minimum average sound pressure level and the average sound pressure value, and generating a background noise sound pressure level threshold according to the average sound pressure value;
when the average sound pressure level is less than or equal to the background noise sound pressure level threshold, taking the target data segment as a candidate background noise segment and adding the candidate background noise segment into a first set, determining a new target data segment and returning to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value;
when the average sound pressure level is larger than the background noise sound pressure level threshold and the number of the candidate background noise sections in the first set is smaller than a count value threshold, adding the candidate background noise sections in the first set into a second set and emptying the first set, determining a new target data section and returning to the step of calculating the average sound pressure level of the target data section according to the number of the data points of the target data section and the sound pressure level correction value until all the first data sections are used as target data sections; the second set comprises at least one first target data segment;
and at least one of rain noise screening and wind noise screening is carried out according to the first target data segment to obtain target sound data.
Further, the obtaining of the target sound data by performing at least one of rain noise screening and wind noise screening according to the first target data segment includes:
calculating a power spectrum of the first target data segment, and determining a first power value and a second power value that are the largest and second largest values in the power spectrum, a first frequency corresponding to the first power value, and a second frequency corresponding to the second power value;
acquiring a sampling rate of the first sound information, and calculating a weighted frequency according to the first power value, the second power value, the first frequency, the second frequency, the sampling rate and the number of data points;
when the weighted frequency is greater than or equal to a rain noise threshold value, determining the first target data segment as a second target data segment;
alternatively,
when the weighted frequency is smaller than a rain noise threshold value, determining a first frequency band by taking the weighted frequency as a center; the frequency band has an upper limit frequency and a lower limit frequency;
calculating a first average power value of the power spectrum in the first frequency band, and calculating the variance of the power spectrum according to the first average power value;
determining a second frequency band according to the upper limit frequency and the upper limit frequency of a preset multiple, and calculating a second average power value of the power spectrum in the second frequency band;
when a first ratio of the first average power value to the second average power value is smaller than a first preset threshold value and a second ratio of the first average power value to the variance is smaller than a second preset threshold value, determining the first target data segment as a second target data segment;
and carrying out wind noise screening according to the second target data segment to obtain target sound data.
Further, the wind noise screening according to the second target data segment to obtain target sound data includes:
when the weighted frequency of the second target data segment is greater than or equal to a wind noise threshold, taking the second target data segment as target sound data;
alternatively,
when the weighted frequency of the second target data segment is smaller than a wind noise threshold value, calculating a module value of the second target data segment, and calculating the variance of the second target data segment according to the maximum module value;
and when the variance of the second target data segment is greater than or equal to a variance threshold value, taking the second target data segment as target sound data.
Further, the calculating a sound pressure level correction value according to the second sound information includes:
acquiring the recording length of the second sound information and a second data segment of the second sound information;
calculating a sum of squares of the second data segment;
determining a correction parameter according to the ratio of the sum to the recording length;
and determining the sound pressure level correction value according to the difference value between a preset numerical value and the correction parameter.
Further, the generating of the pre-stored template data comprises:
acquiring attention sound data;
classifying the concerned sound data according to the number of preset frequency points to obtain a plurality of audio segments;
performing first Fourier transform processing on the audio segment, and calculating a first modulus of a first Fourier transform processing result;
and calculating the average value of the first modulus values according to the number of the preset frequency points, and performing normalization processing on the maximum average value to obtain pre-stored template data.
Further, the generating monitoring data according to the useful sound data includes:
calculating sound information according to the useful sound data, and performing second Fourier transform processing on the useful sound data; the sound information comprises at least one of an acoustic complexity index, an acoustic diversity index, an acoustic evenness index, an acoustic richness index, a bioacoustic index, an acoustic entropy index and a normalized difference soundscape index;
generating monitoring data according to the sound information, a second Fourier transform processing result and the average sound pressure level corresponding to the useful sound data; and the monitoring data is used for being uploaded to a monitoring platform for displaying.
An embodiment of the present invention further provides a device for monitoring sound information of a target area, including:
the acquisition module is used for acquiring first sound information of a target area through a sound acquisition device and acquiring second sound information generated at the position of the sound acquisition device through a monitoring loudspeaker;
the calculation module is used for calculating a sound pressure level correction value according to the second sound information;
the screening module is used for carrying out noise screening processing according to the first sound information and the sound pressure level correction value and determining target sound data after noise screening; the noise screened by the noise screening process comprises at least one of background noise, rain noise and wind noise;
and the generating module is used for carrying out similarity comparison processing according to the target sound data and pre-stored template data, determining useful sound data and generating monitoring data according to the useful sound data.
An embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method.
Embodiments of the present invention also provide a computer-readable storage medium, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method.
The invention has the beneficial effects that: first sound information of a target area is acquired by a sound acquisition device, second sound information generated by a monitoring loudspeaker at the position of the sound acquisition device is acquired, a sound pressure level correction value is calculated from the second sound information, noise screening is performed according to the first sound information and the sound pressure level correction value, and target sound data remaining after noise screening is determined, which improves the accuracy of the noise screening processing. Similarity comparison processing is performed according to the target sound data and pre-stored template data, useful sound data is determined, and monitoring data is generated from the useful sound data, so that the data finally used for monitoring is the useful sound data of interest remaining after noise screening; when the sound acquisition device transmits data and when the monitoring data is analyzed, the amount of transmitted data and the analysis difficulty are reduced, improving convenience.
Drawings
FIG. 1 is a flowchart illustrating the steps of a method for monitoring audio information in a target area according to the present invention;
FIG. 2 is a schematic diagram of a system structure according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1, an embodiment of the present invention provides a method for monitoring sound information of a target area, including steps S100-S400:
s100, acquiring first sound information of a target area through a sound acquisition device, and acquiring second sound information generated at the position of the sound acquisition device through a monitoring loudspeaker.
As shown in FIG. 2, information collection can be performed by a plurality of sound collection devices. Each sound collection device has an omnidirectional directivity and includes a microphone, an amplifier, an A/D converter, a microprocessor, a communication module and an SD memory card, the microprocessor communicating with the monitoring platform through the communication module (e.g., WiFi, Bluetooth, radio frequency, etc.). In the embodiment of the invention, the microphone used for sound collection has a sensitivity of at least 50 mV/Pa, a frequency range of 100 Hz-16000 Hz, a sound pressure level range of 30-115 dB, a signal-to-noise ratio of at least 60 dB and a harmonic distortion of at most 10%. The amplifier amplifies the sound signal received by the microphone, and the amplification factor is fixed at ns when the sound pressure level is 30-95 dB. It should be noted that when the sound pressure level is greater than 95 dB, the circuit has a voltage limiting function with a limiting ratio of nv:1; the values of ns and nv are determined by experiment, and the output voltage range of the amplifier matches the input voltage range of the A/D converter. For example: when the sound pressure level is in the 30-95 dB range, for example 95 dB, and the microphone output voltage is 30 mV, then assuming an amplification factor ns = 10 the amplifier output is 30 mV × 10 = 300 mV. If the sound pressure level is 100 dB and the microphone output voltage is 50 mV, then assuming ns = 10 and nv = 2 the amplifier output is calculated in two parts: the microphone output corresponding to 95 dB is 30 mV and is amplified by the fixed factor ns, giving 30 mV × ns = 300 mV; the remaining 50 mV - 30 mV = 20 mV is caused by exceeding 95 dB and is not amplified by ns but limited, giving 20 mV × ns/nv = 20 × 10/2 = 100 mV, so the total output is 300 mV + 100 mV = 400 mV. The analog audio signal output by the amplifier is converted into a digital signal by the A/D converter, with a sampling rate fs of at least 32000 and a sampling bit depth of at least 16 bits; the A/D output signal read by the microprocessor is recorded as the first sound information x(n).
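For illustration, the amplifier and limiter arithmetic in the example above can be reproduced with the following Python sketch; the constants (30 mV microphone output at the 95 dB knee, ns = 10, nv = 2) come from the worked example, and the function name is illustrative rather than part of the invention.

```python
def amplifier_output_mv(mic_mv, ns=10, nv=2, knee_mv=30.0):
    """Sketch of the amplifier/limiter described in the embodiment.

    Below the 95 dB knee (microphone output <= knee_mv) the signal is
    amplified by the fixed factor ns; the portion above the knee is
    limited by the ratio nv:1, i.e. amplified by ns / nv.  Constants
    follow the worked example; real values of ns and nv are determined
    by experiment."""
    if mic_mv <= knee_mv:
        return mic_mv * ns
    limited_part = (mic_mv - knee_mv) * ns / nv
    return knee_mv * ns + limited_part

# Worked example from the description:
print(amplifier_output_mv(30.0))  # 95 dB  -> 300 mV
print(amplifier_output_mv(50.0))  # 100 dB -> 400 mV
```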
In the embodiment of the invention, the monitoring loudspeaker plays a 1000 Hz sinusoidal signal, and the sound pressure level it generates at the microphone of the sound acquisition device is 94 dB. Therefore, the signal output by the A/D converter, namely the second sound information, is read by the microprocessor and recorded as xr(n), with recording length Nr = fs × 10.
And S200, calculating a sound pressure level correction value according to the second sound information.
Optionally, step S200 includes steps S210-S240:
and S210, acquiring the recording length of the second sound information.
Note that the second sound information includes a second data segment, and the recording length of the second data segment is Nr = fs × 10.
And S220, calculating the sum of the squares of the second data segment.
And S230, determining a correction parameter according to the ratio of the sum to the recording length.
Specifically, the correction parameter Lpr is calculated by the formula Lpr = 20 × lg(pr), where pr is determined from the ratio of the sum of squares of the second data segment, xr(0)² + xr(1)² + ... + xr(Nr−1)², to the recording length Nr.
and S240, determining a sound pressure level correction value according to the difference value between the preset value and the correction parameter.
Specifically, the sound pressure level correction value ΔL is calculated by the formula:
ΔL = A − Lpr, where Lpr = 20 × lg(pr)
Where A is a predetermined number, including but not limited to 94.
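A minimal Python sketch of steps S210-S240 follows; it assumes pr is the root-mean-square of the calibration recording (the square root of the sum of squares divided by the recording length) and A = 94, and the function name is illustrative.

```python
import numpy as np

def spl_correction(x_r, fs, a=94.0):
    """Sketch of S210-S240: sound pressure level correction value.

    x_r : second sound information recorded while the monitoring
          loudspeaker plays a 1000 Hz tone producing 94 dB at the mic.
    Assumes pr is the RMS of the recording, i.e. the square root of the
    sum of squares divided by the recording length Nr = fs * 10."""
    n_r = fs * 10                              # recording length Nr
    x_r = np.asarray(x_r[:n_r], dtype=float)
    pr = np.sqrt(np.sum(x_r ** 2) / n_r)       # correction parameter basis
    l_pr = 20.0 * np.log10(pr)                 # Lpr = 20 * lg(pr)
    return a - l_pr                            # delta_L = A - Lpr

# Example with a synthetic 1000 Hz tone (10 s at fs = 32000):
fs = 32000
t = np.arange(fs * 10) / fs
delta_l = spl_correction(0.5 * np.sin(2 * np.pi * 1000 * t), fs)
```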
S300, carrying out noise screening processing according to the first sound information and the sound pressure level correction value, and determining target sound data after noise screening.
It should be noted that the noise screened by the noise screening process includes one or more of background noise, rain noise and wind noise, and the order of screening the noise may be adjusted according to needs. It is to be understood that the target sound data is sound information remaining after screening background noise, rain noise, and wind noise from among the first sound information.
Optionally, step S300 includes steps S311-S316:
s311, determining a target data segment from the first data segment, and calculating the average sound pressure level of the target data segment according to the number of the data points of the target data segment and the sound pressure level correction value.
It should be noted that the first sound information x(n) includes j' first data segments arranged in sequence, each first data segment has a plurality of data blocks xi(n), and each data block xi(n) has a plurality of data points. In the microprocessor, every N data points of the received first sound signal are taken as one data block xi(n), where i is the sequence number of the data block; a first data segment thus has a plurality of data blocks xi(n), with n = 0, 1, 2, ..., N−1 and i = 1, 2, ..., M, i.e., a total of M data blocks make up one first data segment. The time length of a first data segment is set as needed, for example between 1 second and 10 seconds. For example: with a signal sampling rate of 32k (32000) and 100 s of acquired data, if every 10 s is taken as one first data segment there are 10 first data segments in total, i.e., j' = 10; the sequence number of a first data segment is denoted by j, with j = 1, 2, 3, ..., 10. If every 0.01 s of signal is taken as one data block, each 10 s first data segment is divided into 1000 data blocks; the sequence number of a data block is denoted by i, with i = 1, 2, 3, ..., 1000. Each data block lasts 0.01 s and contains 32000 × 0.01 = 320 data points; the sequence number of a data point is denoted by n, with n = 0, 1, 2, ..., 319.
In an embodiment of the present invention, a target data segment is determined from the first data segments, for example the segments j = 1, 2, ..., j' are taken in turn from the j' first data segments as the target data segment. Then, the average sound pressure level Lp(j) of the target data segment is calculated according to the number of data points of the target data segment and the sound pressure level correction value, where xi(n) in the following formula (1) refers to a data block of the jth target data segment. Specifically:
Lp(j) = 10 × lg(E) + ΔL    (1)
where E = (1/(M × N)) × Σ xi(n)², the sum being taken over i = 1, 2, ..., M and n = 0, 1, ..., N−1.
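The segment and block bookkeeping together with formula (1) can be sketched as follows; it assumes E is the mean square of all M × N data points of the jth segment, and all names are illustrative.

```python
import numpy as np

def average_spl(segment, block_len, delta_l):
    """Sketch of S311: average sound pressure level Lp(j) of one first
    data segment.  Assumes E is the mean square of all M * N data points
    of the segment (a reconstruction of formula (1))."""
    m = len(segment) // block_len                 # M data blocks
    x = np.asarray(segment[:m * block_len], dtype=float)
    e = np.mean(x ** 2)                           # E: mean square of the segment
    return 10.0 * np.log10(e) + delta_l           # Lp(j) = 10 * lg(E) + delta_L

# Example: 100 s of signal at fs = 32000, 10 s segments, 0.01 s blocks.
fs = 32000
x = 0.01 * np.random.randn(fs * 100)
seg_len, block_len = fs * 10, int(fs * 0.01)      # 320 points per block
segments = [x[j * seg_len:(j + 1) * seg_len] for j in range(10)]
lp = [average_spl(seg, block_len, delta_l=0.0) for seg in segments]
```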
And S312, when the sequence number corresponding to the target data segment is equal to the preset sequence number threshold, determining, from the calculated average sound pressure levels, a number of smallest average sound pressure levels equal to the preset sequence number threshold, and calculating the average sound pressure value of these smallest average sound pressure levels.
It should be noted that the preset sequence number threshold NL can be set as required. Optionally, when j = NL, the NL smallest average sound pressure levels MIN_Lp are determined from the calculated average sound pressure levels; NL is set as desired (including but not limited to 600). For example, when NL = 2, the smallest and the second smallest average sound pressure levels are determined. The average sound pressure value MEAN_Lp of the NL smallest average sound pressure levels MIN_Lp is then calculated.
And S313, calculating a background noise value according to the minimum average sound pressure level and the average sound pressure value, and generating a background noise sound pressure level threshold according to the average sound pressure value.
It should be noted that generating the background noise sound pressure level threshold means creating the threshold when it does not yet exist, or, when the threshold already has an initial value, updating that initial value to obtain a new background noise sound pressure level threshold.
Optionally, a background noise value is calculated as BG_Lp = α × MIN_Lp + (1 - α) × MEAN_Lp, where α is tuned according to the actual scene and its default value includes but is not limited to 0.8. Assuming the current value of the background noise sound pressure level threshold is TH_'Lp, the updated background noise sound pressure level threshold is TH_Lp = β × TH_'Lp + (1 - β) × BG_Lp, where β is adjusted according to the actual scene and its default value includes but is not limited to 0.7.
And S314, when the average sound pressure level is less than or equal to the background noise sound pressure level threshold, taking the target data segment as a candidate background noise segment and adding the candidate background noise segment into the first set, determining a new target data segment, and returning to the step of calculating the average sound pressure level of the target data segment according to the number of the data points of the target data segment and the sound pressure level correction value.
Optionally, when the average sound pressure level Lp(j) ≤ TH_Lp, the target data segment is taken as a candidate background noise segment and added to the first set, and the count value CandidateBNCnt of the candidate background noise segment counter is updated to CandidateBNCnt (initial value 0) + 1; this count value represents the number of candidate background noise segments in the first set. A new target data segment (e.g., j + 1) is then determined from the first data segments according to the sequence number, and the process returns to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value in step S311.
S315, when the average sound pressure level is larger than the background noise sound pressure level threshold and the number of the candidate background noise sections in the first set is smaller than the count value threshold, adding the candidate background noise sections in the first set into the second set and emptying the first set, determining a new target data section, and returning to the step of calculating the average sound pressure level of the target data section according to the number of the data points of the target data section and the sound pressure level correction value until all the first data sections are used as the target data sections; the second set includes at least one first target data segment.
Optionally, when the average sound pressure level Lp(j) > TH_Lp, the value of CandidateBNCnt, i.e., the number of candidate background noise segments in the first set, is checked. When CandidateBNCnt is greater than or equal to the count value threshold TH_BNN, the previous CandidateBNCnt segments are background noise, i.e., all candidate background noise segments in the first set are background noise; these background noise segments are discarded without further processing, and CandidateBNCnt is set to 0, i.e., the first set is emptied. The count value threshold TH_BNN is determined by experiment, and its default value includes but is not limited to 3.
It can be understood that when CandidateBNCnt is smaller than the count value threshold TH_BNN, the previous CandidateBNCnt segments are not background noise, i.e., all candidate background noise segments in the first set are not background noise and need to be preserved; therefore, the candidate background noise segments in the first set are added to the second set and the first set is emptied, i.e., CandidateBNCnt is set to 0. Then a new target data segment (e.g., j + B, with B = 1, 2, 3, 4, ...) is determined from the first data segments according to the sequence number, and the process returns to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value in step S311, until all the first data segments have been used as target data segments. At this point the final second set is obtained; the candidate background noise segments in the second set are not background noise and are marked as first target data segments, and the second set may include at least one first target data segment.
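A Python sketch of the background noise screening loop of steps S312-S315 follows, using the default values given above (NL, TH_BNN, α, β); how segments are treated before the threshold first exists, and whether the loud segment that triggers the check is itself kept, are assumptions marked in the comments.

```python
def screen_background_noise(lp, nl=2, th_bnn=3, alpha=0.8, beta=0.7):
    """Sketch of S312-S315.  lp is the list of average sound pressure
    levels Lp(j); returns the 'second set', i.e. the sequence numbers of
    segments judged not to be background noise."""
    first_set, second_set = [], []
    th_lp = None
    for j, lp_j in enumerate(lp, start=1):
        if j == nl:                                   # S312: build the threshold
            min_lps = sorted(lp[:j])[:nl]             # NL smallest average SPLs
            mean_lp = sum(min_lps) / nl               # MEAN_Lp
            bg_lp = alpha * min_lps[0] + (1 - alpha) * mean_lp        # S313: BG_Lp
            th_lp = bg_lp if th_lp is None else beta * th_lp + (1 - beta) * bg_lp
        if th_lp is None or lp_j <= th_lp:            # S314 (no threshold yet: assumption)
            first_set.append(j)                       # candidate background noise segment
        else:                                         # S315
            if len(first_set) < th_bnn:
                second_set.extend(first_set)          # candidates kept as first target segments
            first_set = []                            # in both cases the first set is emptied
            second_set.append(j)                      # assumption: the loud segment itself
                                                      # also goes on to rain/wind screening
    return second_set
```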
S316, at least one of rain noise screening and wind noise screening is carried out according to the first target data segment, and target sound data are obtained.
It should be noted that the embodiment of the present invention includes both rain noise screening and wind noise screening; other embodiments may include only one of them, which is not particularly limited.
Optionally, the step of rain noise screening includes step S321 or S322, and includes step S323, specifically, step S321 includes steps S3211-S3213, and step S322 includes steps S3221-S3224:
s3211, calculating a power spectrum of the first target data segment, and determining a first power value, a second power value, a first frequency corresponding to the first power value, and a second frequency corresponding to the second power value, which are arranged in the power spectrum from large to small.
In the embodiment of the present invention, the jth first data segment is taken as the first target data segment for explanation. The power spectrum of the first target data segment is calculated as
P(k) = (1/M) × Σ |Xi(k)|², the sum being taken over i = 1, 2, ..., M, for k = 0, 1, 2, ..., N/2 or N−1,
where Xi(k) is the result of performing a Discrete Fourier Transform (DFT) on xi(n) from formula (1). Then the first power value P(km1) (i.e., the maximum value) and the second power value P(km2) (i.e., the second largest value), ranked in descending order in the power spectrum, and the corresponding first frequency km1 and second frequency km2 are determined.
S3212, obtaining a sampling rate of the first sound information, and calculating a weighted frequency according to the first power value, the second power value, the first frequency, the second frequency, the sampling rate, and the number of data points.
Specifically, the weighted frequency fPm is calculated by the formula:
fPm = ( (P(km1) × km1 + P(km2) × km2) / (P(km1) + P(km2)) ) × (fs / N),
where fs is the sampling rate and N is the number of data points.
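The power spectrum and weighted frequency of steps S3211-S3212 can be sketched as follows; it assumes P(k) is the squared DFT magnitude averaged over the M data blocks and that the weighted frequency is the power-weighted mean of the two dominant bins converted to Hz, as in the formula above, with illustrative names.

```python
import numpy as np

def weighted_frequency(segment, fs, block_len):
    """Sketch of S3211-S3212 for one first target data segment.
    Assumes P(k) is the squared DFT magnitude averaged over the M data
    blocks; fPm is the power-weighted mean of the two largest peaks,
    converted to Hz via fs / N."""
    n = block_len
    m = len(segment) // n
    blocks = np.asarray(segment[:m * n], dtype=float).reshape(m, n)
    p = (np.abs(np.fft.rfft(blocks, axis=1)) ** 2).mean(axis=0)   # P(k), k = 0..N/2
    order = np.argsort(p)
    km1, km2 = order[-1], order[-2]               # largest and second largest bins
    f_pm = (p[km1] * km1 + p[km2] * km2) / (p[km1] + p[km2]) * fs / n
    return p, f_pm
```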
And S3213, when the weighting frequency is greater than or equal to the rain noise threshold, determining the first target data segment as a second target data segment.
Optionally, when the weighted frequency fPm ≥ TH_Frain (the rain noise threshold), the first target data segment is not rain noise and is marked as a second target data segment, for which further wind noise screening is required. It should be noted that the rain noise threshold may be set as needed, for example its default value is 1500.
S3221, when the weighting frequency is smaller than the rain noise threshold, determining a first frequency band by taking the weighting frequency as a center; the frequency band has an upper limit frequency and a lower limit frequency.
When the weighted frequency fPm < TH_Frain (the rain noise threshold), a first frequency band is determined with the weighted frequency fPm as its center; the lower limit frequency of the first frequency band is fL = fPm - Δf and the upper limit frequency is fH = fPm + Δf, where Δf may be adjusted by experiment, e.g., its default value is 300.
S3222, a first average power value of the power spectrum in the first frequency band is calculated, and the variance of the power spectrum is calculated according to the first average power value.
S3223, determining a second frequency band according to the upper limit frequency and the preset multiple upper limit frequency, and calculating a second average power value of the power spectrum in the second frequency band.
Optionally, fL and fH are converted into the corresponding frequency index values by the formulas:
kL = ⌊fL × N / fs⌋, kH = ⌊fH × N / fs⌋,
where the symbol ⌊ ⌋ means rounding down. Then the power spectrum P(k) is averaged over kL ≤ k ≤ kH, and the average value is recorded as meanPm (the first average power value); the variance of the power spectrum over kL ≤ k ≤ kH is calculated according to this average power value and recorded as σ. For the second frequency band kH < k ≤ 3kH (the preset multiple being 3 as an example, but not limited to 3), the average value is recorded as meanPs (the second average power value).
S3224, when the first ratio of the first average power value to the second average power value is smaller than a first preset threshold, and the second ratio of the first average power value to the variance is smaller than a second preset threshold, determining the first target data segment as a second target data segment.
When the first ratio of the first average power value to the second average power value is smaller than a first preset threshold TH_MEAN and the second ratio of the first average power value to the variance is smaller than a second preset threshold TH_STD, i.e.,
meanPm / meanPs < TH_MEAN and meanPm / σ < TH_STD,
the current data segment is determined not to be rain noise, the first target data segment is determined as a second target data segment, and further wind noise screening is performed; otherwise, a new target data segment (e.g., j + B, with B = 1, 2, 3, 4, ...) is determined from the first data segments according to the sequence number, and the process returns to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value in step S311. Optionally, TH_MEAN and TH_STD may be adjusted by experiment, e.g., their default values are 2 and 3, respectively.
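Continuing the sketch, the band-based rain noise check of steps S3213 and S3221-S3224 might look like the following; the defaults (TH_Frain = 1500, Δf = 300, TH_MEAN = 2, TH_STD = 3) come from the description, and the band index handling is an assumption.

```python
import numpy as np

def is_not_rain_noise(p, f_pm, fs, n, th_f_rain=1500, delta_f=300,
                      th_mean=2.0, th_std=3.0):
    """Sketch of S3213 / S3221-S3224: True if the first target data
    segment passes rain noise screening and goes on to wind noise
    screening.  p is the power spectrum P(k) and f_pm the weighted
    frequency computed above."""
    if f_pm >= th_f_rain:                          # S3213
        return True
    f_l, f_h = f_pm - delta_f, f_pm + delta_f      # first frequency band
    k_l = max(int(np.floor(f_l * n / fs)), 0)
    k_h = int(np.floor(f_h * n / fs))
    band = p[k_l:k_h + 1]
    mean_pm = band.mean()                          # first average power value
    sigma = band.var()                             # variance of P(k) in the band
    band2 = p[k_h + 1:min(3 * k_h + 1, len(p))]    # second frequency band (kH, 3*kH]
    mean_ps = band2.mean()                         # second average power value
    return (mean_pm / mean_ps < th_mean) and (mean_pm / sigma < th_std)
```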
And S323, carrying out wind noise screening according to the second target data segment to obtain target sound data.
Optionally, step S323 includes step S3231, or includes S3232-S3233:
and S3231, when the weighting frequency of the second target data segment is greater than or equal to the wind noise threshold value, taking the second target data segment as target sound data.
Optionally, when the weighted frequency fPm ≥ TH_Fwind (the wind noise threshold), the second target data segment is not wind noise, and the second target data segment is taken as target sound data. It should be noted that the target sound data finally obtained through the screening iterations may include a plurality of second target data segments; TH_Fwind may be adjusted by experiment, for example its default value is 750.
S3232, when the weighted frequency of the second target data segment is smaller than the wind noise threshold, calculating the module value of the second target data segment, and calculating the variance of the second target data segment according to the maximum module value.
And S3233, when the variance of the second target data segment is larger than or equal to the variance threshold value, taking the second target data segment as the target sound data.
Optionally, when the weighted frequency fPm < TH_Fwind (the wind noise threshold), the second target data segment may be wind noise. The modulus values of the second target data segment are calculated, and the maximum modulus value of each data block is found and stored into an array am(i), where i = 1, 2, ..., M. The variance of am(i) is recorded as δam (the variance of the second target data segment). If δam ≥ TH_δam (the variance threshold), the current second target data segment is not wind noise and is taken as target sound data; otherwise, a new target data segment (e.g., j + B, with B = 1, 2, 3, ..., M−1) is determined from the first data segments according to the sequence number, and the process returns to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value in step S311. Optionally, TH_δam may be adjusted by experiment, for example its default value is 0.08.
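A sketch of the wind noise screening in steps S3231-S3233 follows; it assumes am(i) holds the maximum absolute sample value of each data block and that δam is the variance of that array, with TH_Fwind = 750 and TH_δam = 0.08 as the defaults mentioned above.

```python
import numpy as np

def is_not_wind_noise(segment, block_len, f_pm,
                      th_f_wind=750, th_delta_am=0.08):
    """Sketch of S3231-S3233: True if the second target data segment is
    kept as target sound data."""
    if f_pm >= th_f_wind:                          # S3231
        return True
    m = len(segment) // block_len
    blocks = np.asarray(segment[:m * block_len], dtype=float).reshape(m, block_len)
    am = np.abs(blocks).max(axis=1)                # am(i): max modulus per block
    delta_am = am.var()                            # variance of the segment
    return delta_am >= th_delta_am                 # S3233
```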
S400, carrying out similarity contrast processing according to the target sound data and the pre-stored template data, determining useful sound data, and generating monitoring data according to the useful sound data.
Optionally, the generating of the pre-stored template data comprises S411-S414:
s411, acquiring the attention sound data.
S412, classifying the concerned sound data according to the number of the preset frequency points to obtain a plurality of audio segments.
And S413, performing first Fourier transform processing on the audio segment, and calculating a first modulus of a first Fourier transform processing result.
And S414, calculating the average value of the first modulus values according to the number of the preset frequency points, and normalizing the maximum average value to obtain prestored template data.
Optionally, the sound data of interest may be set according to requirements, for example sound data files of rare animal calls, gunshots, explosion sounds and the like. The data of such a sound data file is denoted y(n), and every N points of y(n) (N having a default value of 512) form one audio segment, so that y(n) has a plurality of audio segments. A first Fourier transform (including but not limited to a discrete Fourier transform) is performed on each audio segment, and the first modulus values of the first Fourier transform results are calculated; then the average value of the first modulus values is calculated according to the preset number N of frequency points, and the result is normalized by its maximum average value to obtain the pre-stored template data Yavg(k), k = 0, 1, ..., N/2, where k denotes a frequency point. It should be noted that the pre-stored template data is stored in the SD memory card.
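The template generation of steps S411-S414 can be sketched as follows; it assumes the template is the per-frequency mean of the DFT magnitudes over all N-point audio segments, normalized by its maximum value, with illustrative names.

```python
import numpy as np

def build_template(y, n=512):
    """Sketch of S411-S414: pre-stored template Yavg(k) for one sound of
    interest (e.g. a gunshot recording).  Assumes the average is taken
    over all N-point audio segments and normalized by its maximum."""
    y = np.asarray(y, dtype=float)
    n_seg = len(y) // n
    segs = y[:n_seg * n].reshape(n_seg, n)         # audio segments of N points
    mags = np.abs(np.fft.rfft(segs, axis=1))       # first modulus values
    y_avg = mags.mean(axis=0)                      # average per frequency point
    return y_avg / y_avg.max()                     # normalization, k = 0..N/2
```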
Optionally, the step of generating the monitoring data according to the useful sound data comprises S421-S422:
s421 calculates acoustic information from the useful sound data, and performs a second fourier transform process on the useful sound data.
Optionally, the useful sound data is regarded as sound data containing a sound of interest, and the useful sound data is stored in the SD memory card in the wav file format for retrieval when needed. Optionally, the acoustic information includes, but is not limited to, the Acoustic Complexity Index (ACI), Acoustic Diversity Index (ADI), Acoustic Evenness Index (AEI), Acoustic Richness index (AR), Bioacoustic Index (BI), Acoustic Entropy index (H) and Normalized Difference Soundscape Index (NDSI). A second Fourier transform, including but not limited to a discrete Fourier transform, is performed on the useful sound data to obtain the second Fourier transform processing result
Xl(k) = Σ xl(n) × e^(−j2πkn/N), the sum being taken over n = 0, 1, ..., N−1,
where l is the sequence number of the second target data segment retained in the useful sound data.
And S422, generating monitoring data according to the sound information, the second Fourier transform processing result and the average sound pressure level corresponding to the useful sound data.
Optionally, the monitoring data is used for being uploaded to a monitoring platform for display. Specifically, the sound information, the modulus of the second Fourier transform processing result and the average sound pressure level corresponding to the useful sound data are taken as the monitoring data, and the monitoring data is uploaded to the monitoring platform for display through the communication module during time periods without useful sound signals, i.e., periods of background noise, rain noise or wind noise, which are idle periods of the microprocessor.
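Purely as an illustration, the monitoring data of steps S421-S422 might be packaged as below before being uploaded during idle (noise-only) periods; the field names and the structure of the returned record are hypothetical, and the computation of the acoustic indices is not shown.

```python
import numpy as np

def build_monitoring_data(useful_segment, lp_j, indices, n=512):
    """Sketch of S421-S422: package acoustic indices, the modulus of the
    second Fourier transform result and the average SPL of one useful
    sound segment.  'indices' is a dict of precomputed acoustic indices
    (ACI, ADI, AEI, ...); its computation is not shown here."""
    x = np.asarray(useful_segment[:n], dtype=float)
    spectrum_modulus = np.abs(np.fft.rfft(x, n))   # |Xl(k)|
    return {
        "acoustic_indices": indices,               # hypothetical field names
        "spectrum_modulus": spectrum_modulus.tolist(),
        "average_spl": lp_j,
    }
```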
Optionally, the monitoring platform may process and display the monitoring data, for example: drawing and displaying a space-time distribution diagram on the monitoring platform according to the monitoring data of the plurality of sound acquisition devices; and, for a sound that may require special attention, further identifying and confirming the modulus of the second Fourier transform processing result by a neural network method and outputting and displaying the judgment result. In addition, the space-time distribution diagram and the judgment result can be displayed or used for early warning on the monitoring platform, or sent to a terminal.
Optionally, the step of performing similarity comparison processing according to the target sound data and the pre-stored template data specifically includes: calculating the modulus result
Xabs(k) = |Xi(k)|, k = 0, 1, 2, ..., N/2,
where Xi(k) is the result obtained by Fourier transform of the target sound data; finding the maximum value max[Xabs(k)] of Xabs(k) and normalizing Xabs(k), i.e.,
Xavg(k) = Xabs(k) / max[Xabs(k)];
then performing similarity comparison with the pre-stored template data Yavg(k) stored in advance in the SD memory card: calculating Ad(k) = Xavg(k) − Yavg(k), then calculating the sum of squares of Ad(k), SAd = Σ Ad(k)², and the variance of Ad(k), recorded as σAd. Optionally, if SAd < TH_AD and σAd < TH_σAD, the current target sound data may be a sound of special interest, i.e., useful sound data. Optionally, each sound requiring special attention has a corresponding third threshold TH_AD and fourth threshold TH_σAD, and the specific values are determined through experiments.
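A sketch of the similarity comparison follows; it assumes the sum of squares and the variance of the difference Ad(k) are compared against the thresholds TH_AD and TH_σAD, whose values are placeholders determined by experiment.

```python
import numpy as np

def is_useful_sound(target_block, template_y_avg, th_ad, th_sigma_ad):
    """Sketch of the similarity comparison: compares the normalized
    spectrum of the target sound data against one pre-stored template
    Yavg(k) of the same length."""
    x = np.asarray(target_block, dtype=float)
    x_abs = np.abs(np.fft.rfft(x))                 # Xabs(k) = |Xi(k)|
    x_avg = x_abs / x_abs.max()                    # normalization
    a_d = x_avg - template_y_avg                   # Ad(k)
    sum_sq = float(np.sum(a_d ** 2))               # sum of squares of Ad(k)
    sigma_ad = float(np.var(a_d))                  # variance of Ad(k)
    return sum_sq < th_ad and sigma_ad < th_sigma_ad
```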
According to the sound information monitoring method for a target area of the embodiment of the invention, invalid sound signals such as background noise, rain noise and wind noise in 24-hour daily recordings can be identified, and the useful sound data of interest is determined for transmission and monitoring, so that the amount of monitoring data is greatly reduced and the power consumption and bandwidth of remote wireless transmission are greatly reduced; meanwhile, an early warning can be issued when a sound signal of special interest is collected, achieving a good monitoring effect.
An embodiment of the present invention further provides a device for monitoring sound information of a target area, including:
the acquisition module is used for acquiring first sound information of a target area through the sound acquisition device and acquiring second sound information generated at the position of the sound acquisition device through the monitoring loudspeaker;
the calculation module is used for calculating a sound pressure level correction value according to the second sound information;
the screening module is used for carrying out noise screening processing according to the first sound information and the sound pressure level correction value and determining target sound data after noise screening; the noise screened by the noise screening process comprises at least one of background noise, rain noise and wind noise;
and the generating module is used for carrying out similarity comparison processing according to the target sound data and the pre-stored template data, determining useful sound data and generating monitoring data according to the useful sound data.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
The embodiment of the present invention further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the sound information monitoring method for the target area in the foregoing embodiments. The electronic equipment of the embodiment of the invention comprises but is not limited to any intelligent terminal such as a mobile phone, a tablet computer, a vehicle-mounted computer and the like.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the beneficial effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
The embodiment of the present invention further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the sound information monitoring method for the target area of the foregoing embodiment.
Embodiments of the present invention also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the sound information monitoring method of the target area of the foregoing embodiment.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b and c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for monitoring acoustic information of a target area, comprising:
acquiring first sound information of a target area through a sound acquisition device, and acquiring second sound information generated at the position of the sound acquisition device through a monitoring loudspeaker;
calculating a sound pressure level correction value according to the second sound information;
performing noise screening processing according to the first sound information and the sound pressure level correction value, and determining target sound data after noise screening; the noise screened by the noise screening process comprises at least one of background noise, rain noise and wind noise;
and performing similarity comparison processing according to the target sound data and pre-stored template data, determining useful sound data, and generating monitoring data according to the useful sound data.
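To make the claimed similarity comparison concrete, the following is a minimal Python (numpy) sketch of one plausible reading of that step: the spectral magnitude profile of a candidate segment is compared against a pre-stored template (claim 6 describes how such a template is generated; a matching builder is sketched after claim 6) using cosine similarity. The function name, the frame length n_fft, and the use of cosine similarity are illustrative assumptions, not details given in the claims.

import numpy as np

def template_similarity(segment, template, n_fft=1024):
    # Cosine similarity between a segment's averaged FFT magnitude profile and a
    # pre-stored template of length n_fft // 2 + 1 (the length of an rfft of
    # n_fft points). All parameters here are assumed, not taken from the patent.
    x = np.asarray(segment, dtype=np.float64)
    if len(x) < n_fft:
        x = np.pad(x, (0, n_fft - len(x)))            # zero-pad short segments
    n_frames = len(x) // n_fft
    frames = x[:n_frames * n_fft].reshape(n_frames, n_fft)
    profile = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    profile = profile / (profile.max() + 1e-12)       # same normalization as the template
    return float(np.dot(profile, template) /
                 (np.linalg.norm(profile) * np.linalg.norm(template) + 1e-12))

A segment whose best similarity over all stored templates exceeds a chosen threshold would then be treated as useful sound data.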
2. The method for monitoring the sound information of the target area according to claim 1, wherein: the first sound information includes a plurality of first data segments arranged in sequence, each of the first data segments has a plurality of data points, and the noise screening process is performed according to the first sound information and the sound pressure level correction value to determine target sound data after noise screening, including:
determining a target data segment from the first data segments, and calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value;
when the serial number corresponding to the target data segment is equal to a preset serial number threshold, selecting, from the calculated average sound pressure levels, the smallest average sound pressure levels, the number of which equals the preset serial number threshold, and calculating the average sound pressure value of these smallest average sound pressure levels;
calculating a background noise value according to the minimum average sound pressure level and the average sound pressure value, and generating a background noise sound pressure level threshold according to the average sound pressure value;
when the average sound pressure level is less than or equal to the background noise sound pressure level threshold, taking the target data segment as a candidate background noise segment and adding the candidate background noise segment into a first set, determining a new target data segment and returning to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value;
when the average sound pressure level is larger than the background noise sound pressure level threshold and the number of the candidate background noise segments in the first set is smaller than a count value threshold, adding the candidate background noise segments in the first set into a second set and emptying the first set, determining a new target data segment and returning to the step of calculating the average sound pressure level of the target data segment according to the number of data points of the target data segment and the sound pressure level correction value, until all the first data segments have been used as target data segments; the second set comprises at least one first target data segment;
and performing at least one of rain noise screening and wind noise screening according to the first target data segment to obtain target sound data.
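As a minimal sketch of the background-noise screening described in claim 2, the Python (numpy) fragment below estimates a background-noise sound pressure level threshold from the quietest segments and keeps only segments above it. The segment SPL formula, the number k of quietest segments, and the 3 dB margin are illustrative assumptions; they stand in for the claimed serial-number threshold, background noise value, and first/second set bookkeeping rather than reproducing them exactly.

import numpy as np

def segment_average_spl(segment, correction_db):
    # Average sound pressure level of one data segment: mean square of the data
    # points expressed in decibels, plus the loudspeaker-derived correction value.
    mean_square = np.mean(np.square(np.asarray(segment, dtype=np.float64)))
    return 10.0 * np.log10(mean_square + 1e-12) + correction_db

def screen_background_noise(segments, correction_db, k=10, margin_db=3.0):
    # Estimate the background level from the k quietest segments, form a
    # threshold, and keep only segments that rise above it.
    spls = np.array([segment_average_spl(s, correction_db) for s in segments])
    threshold = np.sort(spls)[:k].mean() + margin_db
    return [s for s, spl in zip(segments, spls) if spl > threshold]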
3. The method for monitoring the sound information of the target area according to claim 2, wherein: the obtaining of the target sound data by at least one of rain noise screening and wind noise screening according to the first target data segment includes:
calculating a power spectrum of the first target data segment, and determining, from the power values of the power spectrum arranged from largest to smallest, a first power value, a second power value, a first frequency corresponding to the first power value, and a second frequency corresponding to the second power value;
acquiring a sampling rate of the first sound information, and calculating a weighted frequency according to the first power value, the second power value, the first frequency, the second frequency, the sampling rate and the number of data points;
when the weighted frequency is greater than or equal to a rain noise threshold value, determining the first target data segment as a second target data segment;
or,
when the weighted frequency is smaller than a rain noise threshold value, determining a first frequency band by taking the weighted frequency as a center; the first frequency band has an upper limit frequency and a lower limit frequency;
calculating a first average power value of the power spectrum in the first frequency band, and calculating the variance of the power spectrum according to the first average power value;
determining a second frequency band according to the upper limit frequency and a preset multiple of the upper limit frequency, and calculating a second average power value of the power spectrum in the second frequency band;
when a first ratio of the first average power value to the second average power value is smaller than a first preset threshold value and a second ratio of the first average power value to the variance is smaller than a second preset threshold value, determining the first target data segment as a second target data segment;
and carrying out wind noise screening according to the second target data segment to obtain target sound data.
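A minimal Python (numpy) sketch of rain-noise screening in the spirit of claim 3 follows. The weighted frequency is formed from the two largest peaks of the power spectrum; segments below the rain-noise frequency threshold are then tested with the two band-power ratios. Every numeric threshold, the bandwidth of the first frequency band, and the preset multiple are illustrative assumptions, not values disclosed in the claims.

import numpy as np

def weighted_peak_frequency(segment, fs):
    # Power-weighted average of the frequencies of the two largest spectral peaks.
    x = np.asarray(segment, dtype=np.float64)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    i2, i1 = np.argsort(spectrum)[-2:]                 # second-largest, largest
    wf = (spectrum[i1] * freqs[i1] + spectrum[i2] * freqs[i2]) / (spectrum[i1] + spectrum[i2] + 1e-12)
    return wf, spectrum, freqs

def is_rain_noise(segment, fs, rain_threshold_hz=2000.0, bandwidth_hz=500.0,
                  multiple=2.0, ratio1_limit=0.5, ratio2_limit=0.5):
    # Returns True when the segment looks like rain noise and should be dropped.
    wf, spectrum, freqs = weighted_peak_frequency(segment, fs)
    if wf >= rain_threshold_hz:
        return False                                    # passes straight through
    lower, upper = wf - bandwidth_hz / 2, wf + bandwidth_hz / 2
    band1 = spectrum[(freqs >= lower) & (freqs <= upper)]
    band2 = spectrum[(freqs > upper) & (freqs <= multiple * upper)]
    if band1.size == 0 or band2.size == 0:
        return False
    ratio1 = band1.mean() / (band2.mean() + 1e-12)      # first average power / second average power
    ratio2 = band1.mean() / (np.var(spectrum) + 1e-12)  # first average power / spectrum variance
    return not (ratio1 < ratio1_limit and ratio2 < ratio2_limit)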
4. The method for monitoring the sound information of the target area according to claim 3, wherein: the wind noise screening according to the second target data segment to obtain target sound data includes:
when the weighted frequency of the second target data segment is greater than or equal to a wind noise threshold, taking the second target data segment as target sound data;
or,
when the weighted frequency of the second target data segment is smaller than a wind noise threshold value, calculating modulus values of the second target data segment, and calculating the variance of the second target data segment according to the maximum modulus value;
and when the variance of the second target data segment is greater than or equal to a variance threshold value, taking the second target data segment as target sound data.
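Similarly, a minimal Python (numpy) sketch of wind-noise screening in the spirit of claim 4 is given below; it reuses the weighted_peak_frequency helper from the rain-noise sketch above, and the wind-noise frequency threshold, the normalization by the maximum modulus value, and the variance limit are all illustrative assumptions.

import numpy as np

def is_wind_noise(segment, fs, wind_threshold_hz=300.0, variance_limit=0.01):
    # Returns True when the segment looks like steady wind noise and should be
    # dropped; weighted_peak_frequency is defined in the rain-noise sketch above.
    wf, _, _ = weighted_peak_frequency(segment, fs)
    if wf >= wind_threshold_hz:
        return False                                     # keep: unlikely to be wind
    magnitudes = np.abs(np.asarray(segment, dtype=np.float64))
    normalized = magnitudes / (magnitudes.max() + 1e-12)   # scale by the maximum modulus value
    return float(np.var(normalized)) < variance_limit      # low variance resembles steady wind

A second target data segment that this test does not flag would then be retained as target sound data.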
5. The method for monitoring the sound information of the target area according to claim 1, wherein: the calculating of the sound pressure level correction value according to the second sound information includes:
acquiring the recording length of the second sound information; the second sound information comprises a second data segment;
calculating a sum of squares of the second data segment;
determining a correction parameter according to the ratio of the sum to the recording length;
and determining the sound pressure level correction value according to the difference value between a preset numerical value and the correction parameter.
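For illustration, a minimal Python (numpy) sketch of a claim-5 style correction-value calculation is shown below. Converting the mean-square "correction parameter" to decibels and using 94 dB as the preset numerical value are assumptions (94 dB is a common acoustic calibrator level), not details stated in the claim.

import numpy as np

def spl_correction(second_sound, preset_db=94.0):
    # The sum of squares of the reference recording divided by its length gives
    # the correction parameter; the correction value is the preset value minus
    # that parameter expressed in decibels.
    x = np.asarray(second_sound, dtype=np.float64)
    correction_parameter = np.sum(x ** 2) / len(x)
    return preset_db - 10.0 * np.log10(correction_parameter + 1e-12)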
6. The method for monitoring the sound information of the target area according to claim 1, wherein: the pre-stored template data generation step comprises:
acquiring sound data of interest;
dividing the sound data of interest according to a preset number of frequency points to obtain a plurality of audio segments;
performing first Fourier transform processing on the audio segment, and calculating a first modulus of a first Fourier transform processing result;
and calculating the average value of the first modulus values according to the preset number of frequency points, and normalizing the average values by the maximum average value to obtain the pre-stored template data.
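A minimal Python (numpy) sketch of claim-6 style template generation follows: the recording of interest is framed by the preset number of frequency points, each frame is Fourier-transformed, the magnitudes are averaged per frequency point, and the average is normalized by its maximum. The frame length of 1024 points and the rectangular (no-window) framing are illustrative assumptions.

import numpy as np

def build_template(sound_of_interest, n_points=1024):
    # n_points stands in for the preset number of frequency points.
    x = np.asarray(sound_of_interest, dtype=np.float64)
    n_frames = len(x) // n_points
    if n_frames == 0:
        raise ValueError("recording is shorter than one frame")
    frames = x[:n_frames * n_points].reshape(n_frames, n_points)
    magnitudes = np.abs(np.fft.rfft(frames, axis=1))   # first modulus values
    average = magnitudes.mean(axis=0)                    # average per frequency point
    return average / (average.max() + 1e-12)             # normalize by the maximum average value

The resulting vector has length n_points // 2 + 1 and can be compared against candidate segments as in the similarity sketch after claim 1.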
7. The method for monitoring acoustic information of a target area according to any one of claims 2 to 6, wherein: the generating of monitoring data from the useful sound data comprises:
calculating acoustic information according to the useful sound data, and performing second Fourier transform processing on the useful sound data; the acoustic information comprises at least one of an acoustic index, an acoustic diversity index, an acoustic uniformity index, an acoustic abundance index, a bioacoustic index, an acoustic entropy index, and a normalized difference soundscape index;
generating monitoring data according to the acoustic information, a second Fourier transform processing result, and the average sound pressure level corresponding to the useful sound data; the monitoring data is to be uploaded to a monitoring platform for display.
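As one hedged illustration of assembling monitoring data per claim 7, the Python (numpy) fragment below computes a normalized spectral entropy as a stand-in for the acoustic entropy index, takes the dominant frequency from a second Fourier transform, and packs these with the segment's average sound pressure level into a JSON record for upload. The index formula, the field names, and the record layout are assumptions; the other indices listed in the claim are not reproduced here.

import json
import numpy as np

def spectral_entropy(segment):
    # Normalized entropy of the power spectrum, one common way to compute an
    # acoustic-entropy-style index (value in [0, 1]).
    spectrum = np.abs(np.fft.rfft(np.asarray(segment, dtype=np.float64))) ** 2
    p = spectrum / (spectrum.sum() + 1e-12)
    return float(-np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p)))

def build_monitoring_record(useful_segment, fs, average_spl_db):
    # Bundle acoustic information, a spectral summary, and the average SPL into
    # one uploadable record; all field names are illustrative.
    x = np.asarray(useful_segment, dtype=np.float64)
    magnitudes = np.abs(np.fft.rfft(x))                  # second Fourier transform result
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return json.dumps({
        "acoustic_entropy_index": spectral_entropy(x),
        "dominant_frequency_hz": float(freqs[int(np.argmax(magnitudes))]),
        "average_spl_db": float(average_spl_db),
    })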
8. An apparatus for monitoring sound information of a target area, comprising:
the acquisition module is used for acquiring first sound information of a target area through a sound acquisition device and acquiring second sound information generated at the position of the sound acquisition device through a monitoring loudspeaker;
the calculation module is used for calculating a sound pressure level correction value according to the second sound information;
the screening module is used for carrying out noise screening processing according to the first sound information and the sound pressure level correction value and determining target sound data after noise screening; the noise screened by the noise screening process comprises at least one of background noise, rain noise and wind noise;
and the generating module is used for carrying out similarity comparison processing according to the target sound data and pre-stored template data, determining useful sound data and generating monitoring data according to the useful sound data.
9. An electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the method according to any one of claims 1-7.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 7.
CN202210224155.8A 2022-03-07 2022-03-07 Sound information monitoring method, device, equipment and medium for target area Pending CN114758674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210224155.8A CN114758674A (en) 2022-03-07 2022-03-07 Sound information monitoring method, device, equipment and medium for target area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210224155.8A CN114758674A (en) 2022-03-07 2022-03-07 Sound information monitoring method, device, equipment and medium for target area

Publications (1)

Publication Number Publication Date
CN114758674A true CN114758674A (en) 2022-07-15

Family

ID=82326246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210224155.8A Pending CN114758674A (en) 2022-03-07 2022-03-07 Sound information monitoring method, device, equipment and medium for target area

Country Status (1)

Country Link
CN (1) CN114758674A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359807A (en) * 2022-10-21 2022-11-18 金叶仪器(山东)有限公司 Noise online monitoring system for urban noise pollution
CN115359807B (en) * 2022-10-21 2023-01-20 金叶仪器(山东)有限公司 Noise online monitoring system for urban noise pollution

Similar Documents

Publication Publication Date Title
CN111210021B (en) Audio signal processing method, model training method and related device
CN107943583B (en) Application processing method and device, storage medium and electronic equipment
CN112270913B (en) Pitch adjusting method and device and computer storage medium
CN109979469B (en) Signal processing method, apparatus and storage medium
CN107343085A (en) Method for playing music and Related product
CN110633067A (en) Sound effect parameter adjusting method and mobile terminal
CN110211578B (en) Sound box control method, device and equipment
CN108388340B (en) Electronic equipment control method and related product
CN114758674A (en) Sound information monitoring method, device, equipment and medium for target area
CN112751648A (en) Packet loss data recovery method and related device
CN115884032B (en) Smart call noise reduction method and system for feedback earphone
CN107728772B (en) Application processing method and device, storage medium and electronic equipment
CN113726940A (en) Recording method and device
CN111613246A (en) Audio classification prompting method and related equipment
CN109363660B (en) Heart rate monitoring method and server based on BP neural network
CN108230104A (en) Using category feature generation method, mobile terminal and readable storage medium storing program for executing
CN111951021A (en) Method and device for discovering suspicious communities, storage medium and computer equipment
KR100580783B1 (en) Method and apparatus for measuring the speech quality according to measuring mode
CN116612778A (en) Echo and noise suppression method, related device and medium
CN113808566B (en) Vibration noise processing method and device, electronic equipment and storage medium
CN113990363A (en) Audio playing parameter adjusting method and device, electronic equipment and storage medium
CN113056756B (en) Sleep recognition method and device, storage medium and electronic equipment
CN109636445A A kind of advertisement update method, device and terminal device based on user's operation information
CN116936132B (en) Intelligent medical condition monitoring method and system based on big data
CN116863957B (en) Method, device, equipment and storage medium for identifying operation state of industrial equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination