WO2014024382A1 - Noise observation apparatus and noise observation method - Google Patents

Noise observation apparatus and noise observation method

Info

Publication number
WO2014024382A1
Authority
WO
WIPO (PCT)
Prior art keywords
time delay
time
noise
axis
cross
Prior art date
Application number
PCT/JP2013/004343
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
篠原 健二
恵司 廻田
Original Assignee
リオン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by リオン株式会社 filed Critical リオン株式会社
Priority to DE112013003958.3T
Priority to CN201380041613.3A
Publication of WO2014024382A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/80 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
    • G01S3/802 - Systems for determining direction or deviation from predetermined direction
    • G01S3/808 - Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
    • G01S3/8083 - Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems determining direction of source
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00 - Measuring characteristics of vibrations by using a detector in a fluid

Definitions

  • the present invention relates to a noise observation apparatus and a noise observation method suitable for use in an environment where a plurality of sound sources exist in an observation target area.
  • A prior art effective for the automatic identification of aircraft flight noise observed under a flight route has been known (see, for example, Japanese Patent Laid-Open No. 7-43203: Patent Document 1).
  • In Patent Document 1, the arrival direction vector (elevation angle, azimuth angle) of a moving sound source is calculated from the time delays at which the cross-correlation coefficients of the sound reaching four microphones, arranged at intervals on the X, Y, and Z axes, reach their maxima, and the movement trajectory of the moving sound source is automatically identified from the obtained vector series.
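The basic principle referenced here, estimating an arrival angle on one axis from the time delay at the cross-correlation maximum of two microphone signals, can be sketched as follows. This is a minimal illustration, not the patent's implementation; the sampling rate, microphone spacing, and sound speed are assumed values.

```python
import numpy as np

def arrival_angle(sig_a, sig_b, fs=48000, d=0.2, c=340.0):
    """Estimate the sound arrival angle for one axis from two mic signals.

    Returns (tau, angle_rad): the time delay at the cross-correlation
    maximum and the corresponding arrival angle relative to the axis.
    fs: sampling rate (Hz), d: mic spacing (m), c: sound speed (m/s),
    all illustrative assumptions.
    """
    n = len(sig_a)
    # Full cross-correlation; positive lag means sig_b lags sig_a.
    corr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    lag = np.argmax(corr) - (n - 1)
    tau = lag / fs                            # time delay in seconds
    # Path difference c*tau over spacing d gives the incidence angle.
    sin_t = np.clip(c * tau / d, -1.0, 1.0)
    return tau, np.arcsin(sin_t)
```

In use, a noise frame from each microphone of a pair is passed in and the returned delay feeds the per-axis processing described later.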
  • Ground noise includes, for example, noise generated by operation of the auxiliary power unit (APU) in parked aircraft, noise from aircraft operating their engines for propulsion while moving between the terminal and the runway (taxiing), and noise generated when an aircraft performs an engine test run in an engine test operation area within the airfield.
  • The noise observed around an airfield is complex: various noises such as cars and sirens arrive at the observation point from the surroundings, so it is difficult to pick out only the ground noise generated by aircraft and distinguish it from the other noise.
  • Aircraft noise includes transient single-shot noise that occurs as an aircraft operates at the airfield, and quasi-stationary noise, such as engine test runs and APU operation observed around the airfield during aircraft maintenance, which continues for a long time and is steady but accompanied by considerable level fluctuation; this makes noise identification more difficult.
  • In Patent Document 1, when noise is generated simultaneously from a plurality of sound sources, the time delay at which the cross-correlation coefficient is maximum on each axis does not necessarily indicate the direction of arrival of sound from the same sound source. Since only the sound having the maximum cross-correlation is used when a plurality of noises overlap, it is difficult to automatically identify the other noises.
  • This solution calculates the time-delay cross-correlation coefficient for each axis at regular intervals and, for the time delays at which the cross-correlation coefficient shows a peak (local maximum) tendency, extracts a plurality of time-delay fluctuations in the time domain in descending order of the cross-correlation coefficient, forming a continuous set of time delays for each axis.
  • When there are a plurality of sound sources, the sets of time delays formed for each axis are separated by sound source.
  • For the time-delay sets formed on each axis, if the cross-correlation between different axes is then examined, the sets of time delays originating from the same sound source can be combined. As a result, a plurality of simultaneously occurring noises can be automatically separated and identified.
  • the noise with the highest sound pressure level tends to dominate the maximum peak of the cross-correlation coefficient for each axis.
  • When a plurality of peaks are examined in order from the top, attention is paid to the fact that some of the peaks reflect the influence of noise from sound sources other than the loudest one.
  • When a sound source moves within the observation space, the time delay that gives the maximum peak of the cross-correlation coefficient in one time zone may not give the maximum peak in another time zone, where the source may instead appear at the time delay of a lower peak. In that case, the time delay at which the cross-correlation coefficient reaches its maximum peak indicates a different sound source, so simply tracing the maximum peak through the time domain does not identify the sound source.
  • This solution therefore extracts a plurality of time-delay fluctuations in the time domain from the higher peaks of the cross-correlation coefficient calculated at regular intervals (for example, from the first to the third peak), forming a continuous set for each axis.
  • The sets separated on each axis are then combined between different axes. That is, the present invention pays attention to the fact that, when the cross-correlation coefficient is considered for a set instead of the individual time delays, sets belonging to the same sound source show very similar fluctuations in the cross-correlation coefficient. This is presumably because the sound emitted from the actual source is affected by, for example, output fluctuations and weather changes, and these changes appear in common in the sound input to the microphones. Therefore, if sets exist at the same time on different axes in a certain time domain, whether they can be combined is determined from the normalized cross-correlation coefficient at time delay 0 of the time variations of the cross-correlation coefficients of the sets. Since the sets combined between the axes represent the arrival directions of the same sound source, a plurality of sound sources existing in the observation space can be automatically identified from them.
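The cross-axis combination test described above, comparing the fluctuation patterns of two segments' cross-correlation peak values via the normalized cross-correlation at time delay 0, might be sketched like this. The segment data layout and the 0.8 threshold are assumptions for illustration, not values from the patent.

```python
import numpy as np

def same_source(seg_x, seg_y, threshold=0.8):
    """Decide whether a segment on one axis and a segment on another axis
    belong to the same sound source.

    seg_x, seg_y: dicts with 't' (frame indices on a shared fixed-interval
    grid) and 'r' (cross-correlation peak value per frame). The threshold
    is an illustrative value.
    """
    # Use only the time region where the two segments overlap.
    t0 = max(seg_x["t"][0], seg_y["t"][0])
    t1 = min(seg_x["t"][-1], seg_y["t"][-1])
    if t1 <= t0:
        return False
    rx = [r for t, r in zip(seg_x["t"], seg_x["r"]) if t0 <= t <= t1]
    ry = [r for t, r in zip(seg_y["t"], seg_y["r"]) if t0 <= t <= t1]
    # Normalized cross-correlation of the two fluctuation patterns at lag 0.
    rho = np.corrcoef(rx, ry)[0, 1]
    return bool(rho >= threshold)
```

Segments whose coefficient fluctuations rise and fall together (a common imprint of the source's output and weather variation) score near 1 and are merged; temporally disjoint segments are never merged.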
  • the noise observation apparatus has a configuration including a calculation unit, an aggregation unit, and an integration unit.
  • the noise observation method of the present solution can be executed using these configurations.
  • The calculation means executes, at regular intervals, a step of calculating the cross-correlation coefficient with respect to the time delay representing the difference in sound arrival time for each axis, using two microphones arranged at an interval on each of a plurality of axes defined in an observation space where a plurality of sound sources exist.
  • The aggregation means executes a step of extracting, for the plurality of time delays at which the cross-correlation coefficient calculated at regular intervals by the calculation means shows a peak tendency, a plurality of time-delay fluctuations in the time domain in descending order of the cross-correlation coefficient, and forming a continuous set of time delays for each axis. The integration means then executes a step of combining, from the cross-correlation between different axes, the sets of time delays that share the same sound source among the per-axis sets formed by the aggregation means.
  • the following various aspects can be provided in forming a continuous set of time delays.
  • Prior to forming a continuous set of time delays, the aggregation means can execute a step of confirming whether each of the plurality of time delays showing a peak tendency at a given interval is an initial value for a sound source.
  • The steps performed by the aggregation means can include the following. (1) Determining whether, among the plurality of time delays showing a peak tendency at each interval, there is a single value whose difference from a specific time delay that showed a peak tendency in the previous interval is less than a predetermined threshold. (2) Adding that single value to the same set as the specific time delay if it is determined in (1) that such a value exists. (3) Calculating a virtual time delay by the least-squares method using at least the specific time delay (the latest several values) when it is determined in (1) that no such single value exists.
  • Thus, for the time delays of the upper peaks calculated at each interval, only those confirmed to originate from the same sound source as the previous time delay (noise data expected to share the source) are added to the continuous set. This improves the accuracy of the identification result and increases the reliability of the noise observation. If none of the current upper-peak time delays can be predicted to come from the same sound source, a virtual time delay is obtained by the least-squares method using the last several time delays, including the previous value, and is used in the subsequent steps.
  • The aggregation means further executes the following steps. (4) Determining whether there is no single value because there are a plurality of time delays whose difference from the specific time delay is less than the predetermined threshold, or because there is no time delay at all whose difference from the specific time delay is less than the threshold.
  • (5) When it is determined in (4) that a plurality of candidates exist, determining whether there is a single value among them whose difference from the virtual time delay is less than a specific threshold. (6) Adding that value to the same set as the specific time delay when it is determined in (5) that it exists. (7) Adding the virtual time delay to the same set as the specific time delay when it is determined in (5) that no such value exists, or when it is determined in (4) that no time delay has a difference from the specific time delay below the threshold.
  • The virtual time delay calculated in (3) represents the latest trend of the time-delay fluctuation in the time domain. Therefore, when a plurality of time delays exist and a single value cannot be selected, the one closest to the virtual time delay can be added to the set on the prediction that it shares the same sound source (6). On the other hand, if nothing is close to the virtual time delay and no single value exists, the virtual time delay itself can be added to keep the set going (7).
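Steps (1) through (7) can be sketched as a per-interval update of one time-delay set. This is a minimal sketch under assumptions: the threshold values and the fit length are illustrative, and the virtual time delay is obtained by a straight-line least-squares fit over the most recent delays.

```python
import numpy as np

def extend_segment(segment, candidates, threshold=3.0, fit_len=5):
    """Extend one time-delay set by one interval, following steps (1)-(7).

    segment: time delays (in samples) accumulated so far for one set.
    candidates: time delays of this interval's upper cross-correlation peaks.
    The threshold and fit length are illustrative values, not the patent's.
    Returns the value appended to the segment.
    """
    last = segment[-1]
    near = [c for c in candidates if abs(c - last) < threshold]
    if len(near) == 1:                        # (1)(2): a unique close value
        segment.append(near[0])
        return near[0]
    # (3): virtual time delay by least squares over the latest few values.
    recent = segment[-fit_len:]
    if len(recent) >= 2:
        t = np.arange(len(recent))
        slope, intercept = np.polyfit(t, recent, 1)
        virtual = slope * len(recent) + intercept
    else:
        virtual = last
    if len(near) > 1:                         # (4)(5): narrow down by prediction
        best = min(near, key=lambda c: abs(c - virtual))
        if abs(best - virtual) < threshold:   # (6): add the closest candidate
            segment.append(best)
            return best
    segment.append(virtual)                   # (7): keep the set alive
    return virtual
```

Calling this once per fixed interval with the current upper-peak delays grows each per-axis set while bridging intervals where the source's peak is momentarily missing.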
  • the aggregation means can further execute the following steps.
  • (8) If the last time delay in the set is a virtual time delay and all of the immediately preceding predetermined number are also virtual time delays, the consecutive virtual time delays are deleted and further aggregation of the set is terminated.
  • This allows the collection of continuous time delays to be properly closed in accordance with, for example, the end time of a noise event.
  • (9) If, after the predetermined number of virtual time delays has been deleted in the end determination of (8) and the formation of the set has been completed, the number of time delays included in the set is equal to or less than a specified number, the aggregation means can invalidate the set.
  • a plurality of sound sources that are simultaneously generated in the observation space can be automatically separated and identified.
  • Since the calculation load can be reduced by processing periodically rather than after accumulating a large amount of calculation results, real-time processing using a computer can easily be realized.
  • FIG. 1 is a schematic diagram showing one embodiment when a noise observation device is installed in an airfield
  • FIG. 2 is a diagram schematically showing the configuration of the noise observation apparatus and the noise identification method based on the cross-correlation method.
  • FIG. 3 is a diagram explaining the noise event detection method for single noise, along with the temporal change of the noise level under the flight path
  • FIG. 4 is a diagram explaining the noise event detection method for quasi-stationary noise, along with the temporal change of the noise level in the airfield (or nearby)
  • FIG. 5 is a simplified model diagram for explaining the principle of the identification method.
  • FIG. 6 is a flowchart illustrating an example of a procedure of sound source separation processing executed by the sound source separation processing unit.
  • FIG. 7 is a schematic diagram showing ranking of time delays due to the upper peak of the cross-correlation coefficient
  • FIG. 8 is a flowchart showing a procedure example of the same sound source segmentation processing for each axis
  • FIG. 9 is a diagram showing the variation of the time delay on the X axis in the simplified model
  • FIG. 10 is a diagram showing an example in which the variation in time delay on the X-axis shown in FIG. 9 is separated into sound source segments
  • FIG. 11 is a diagram illustrating an example when the simplified model is applied to a moving sound source
  • FIG. 12 is a diagram showing fluctuations in time delay in the X axis and the Y axis.
  • FIG. 13 is a diagram showing an example of segment separation in the X-axis.
  • FIG. 14 is a diagram showing the fluctuation results of the time delay in the X axis and the Y axis for the measured data.
  • FIG. 15 is a diagram showing a result of separating the time delay variation in the X axis and the Y axis into segments for the measured data.
  • FIG. 16 is a diagram showing a variation pattern of the cross-correlation coefficient of each segment.
  • FIG. 17 is a diagram illustrating a calculation example of a normalized cross-correlation coefficient between overlapping segments in the time domain.
  • FIG. 1 is a schematic diagram showing one embodiment when a noise observation apparatus is installed in an airfield.
  • In a target area such as an airfield (or its surroundings), a noise environment is formed in which noise coming from the sky as aircraft fly over, take-off and landing noise generated on the runway, and reverse-thrust noise during landing (hereinafter referred to as "flight noise") are mixed with noise associated with aircraft operations and maintenance in the airfield, such as taxiing, engine test runs, and APU operation (hereinafter referred to as "ground noise").
  • the noise observation apparatus can be used with a microphone unit 10 installed at an observation point in an airfield.
  • An observation unit (not shown) is connected to the microphone unit 10.
  • Noise sources include, for example, the landing area 20, the taxiway 30, the landing aircraft 40 and the take-off aircraft 50 that run on or fly over the runway 25, and the engine test operation area 60.
  • various noises are generated from these places and arrive at the observation point from each direction.
  • the noise observation apparatus of the present embodiment is suitable for an application for automatically identifying a plurality of noises arriving at an observation point by using the microphone unit 10. In the following, explanation will be made for each area that is a source of noise.
  • The auxiliary power unit (APU) is a small engine used as a power source for supplying compressed air, hydraulic pressure, electric power, and the like within the parked aircraft AP.
  • The taxiway 30 is a path along which the aircraft AP moves between the parking area and the runway 25. During taxiing, the aircraft AP operates its engines to obtain the propulsive force necessary for ground travel, thereby generating noise.
  • When the aircraft AP arrives, the landing aircraft 40 approaches and descends toward the runway 25, in many cases applies reverse engine thrust (reverse) on the runway 25 for deceleration, and generates noise associated with the flight up to the runway 25.
  • When the aircraft AP departs, the take-off aircraft 50 starts its ground run at the start position of the runway 25 and generates noise associated with the operation until it lifts off partway down the runway 25, climbs, and flies away.
  • FIG. 2 is a diagram schematically showing the configuration of the noise observation apparatus and the noise identification method based on the cross-correlation method.
  • the noise observation apparatus has a function of performing calculation processing using the microphone unit 10 and identifying noise by a cross-correlation method.
  • the microphone unit 10 includes, for example, four microphones M0, M1, M2, and M3.
  • The individual microphones M0 to M3 are arranged on the X, Y, and Z axes virtually defined in the observation space, and at the origin of the three-axis coordinate system. Specifically, the microphone M0 is placed at the origin, and the microphone M1 is placed on the Z axis extending vertically from the origin.
  • Another microphone M2 is installed on the Y axis, which extends horizontally from the origin at 90° to the X axis, and another microphone M3 is installed on the X axis extending horizontally from the origin.
  • the microphone unit 10 holds the relative positions of the microphones M0 to M3 in an installed state while mechanically fixing the individual microphones M0 to M3.
  • two microphones are disposed on each axis on the X axis, the Y axis, and the Z axis in the observation space.
  • the microphone unit 10 includes a microphone MB different from the four microphones M0 to M3 described above.
  • the four microphones M0 to M3 are for noise identification by the cross correlation method, while the microphone MB is for measuring ambient noise. That is, the microphone MB is used to measure the noise level at the observation point alone, for example.
  • the noise observation apparatus includes an observation unit 100, and a microphone unit 10 is connected to the observation unit 100.
  • the observation unit 100 includes, for example, computer equipment including a central processing unit (CPU), a semiconductor memory (ROM, RAM), a hard disk drive (HDD), an input / output interface, a liquid crystal display, and the like (not shown).
  • This elevation angle θ information can be used to identify flight noise (see Patent Document 1 cited above). That is, for example, when the noise level detected by the microphone MB exceeds a certain threshold (when a noise event occurs), if the elevation angle change θ(t) is recorded at the same time as sound arrival direction data, noise whose arrival direction data exceeds a pre-specified elevation angle can be determined to be flight noise caused by the aircraft AP.
  • If the sound arrival direction is expanded not only to the vertical plane but to all three axis pairs (X-Y, Y-Z, Z-X), the azimuth angle φ can be obtained by calculation in addition to the elevation angle θ. By obtaining the elevation angle θ and the azimuth angle φ, a noise arrival direction vector (unit vector) can be calculated in the three-axis observation space (vector space) referenced to the observation point. Furthermore, the movement of the sound source (aircraft AP), that is, from which direction to which direction it travels, can be known more reliably from the series of calculated arrival direction vectors.
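Turning the three per-axis time delays into an arrival direction vector, elevation θ, and azimuth φ can be sketched as follows; far-field (plane-wave) propagation, the microphone spacing d, and the sound speed c are assumptions for illustration.

```python
import numpy as np

def arrival_direction(tau_x, tau_y, tau_z, d=0.2, c=340.0):
    """Arrival direction (unit vector, elevation, azimuth) from per-axis delays.

    tau_*: time delays (s) between the origin microphone M0 and the
    microphones on the X, Y, and Z axes; d is the mic spacing (m), c the
    sound speed (m/s); both are illustrative values.
    """
    # Direction cosines along each axis from the path differences c*tau.
    v = np.array([c * tau_x, c * tau_y, c * tau_z]) / d
    v = v / np.linalg.norm(v)                 # unit arrival direction vector
    elevation = np.arcsin(v[2])               # angle above the horizontal plane
    azimuth = np.arctan2(v[1], v[0])          # angle in the X-Y plane
    return v, elevation, azimuth
```

The normalization step guards against small numerical inconsistencies between the three measured delays.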
  • the observation unit 100 includes a noise event detection unit 102 and a direction-of-arrival vector calculation unit 106 as functional elements thereof, and further includes a sound source separation processing unit 110 and a separated sound source integration unit 120 including a plurality of functional elements.
  • The noise event detection unit 102 detects the noise level generated in the target area based on the noise detection signals from the microphones MB and M0 to M3. Specifically, it samples the digitally converted noise detection signal and calculates the noise level value (dB) at the observation point.
  • Aircraft noise can be broadly divided into single noise and quasi-stationary noise.
  • single noise is transient noise that occurs once, such as noise observed in the vicinity of an airfield as the aircraft AP operates.
  • Taxiing noise is often observed as single noise.
  • the quasi-stationary noise is noise that continues for a long time and is steady but accompanied by a considerable level fluctuation.
  • It is regarded as ground noise of the aircraft, and includes engine test runs observed in the vicinity of an airfield during maintenance of the aircraft AP, APU operation noise, noise during standby before takeoff at the end of the runway, and the like.
  • helicopter idling and hovering noises often continue constantly and may be observed as quasi-stationary noises.
  • For the noise event detection unit 102, conditions (threshold levels) for detecting a noise event of single or quasi-stationary noise from the noise level value are registered in the observation unit 100.
  • The noise event detection unit 102 applies the calculated noise level value (dB) to the registered conditions and can detect a single-shot flight or ground noise event, or a quasi-stationary ground noise event.
  • An example of noise event detection will be described later.
  • The arrival direction vector calculation unit 106 calculates the sound arrival direction vector (elevation angle θ, azimuth angle φ) by the three-axis cross-correlation method described above, based on the detection signals from the four microphones M0 to M3. The arrival direction vector calculation unit 106 also records the elevation angle θ(t) and the azimuth angle φ(t), expressed as functions of time, as sound arrival direction data.
  • the sound source separation processing unit 110 includes a cross correlation coefficient calculation unit 112, a peak search processing unit 114, and a segmentation processing unit 116.
  • the sound source separation processing unit 110 has a function of separating the time delay obtained for each axis into sound sources on each axis.
  • The segmentation processing unit 116 gathers, in the time domain, the fluctuations of the plural upper-peak time delays extracted by the peak search processing unit 114, and performs processing to collect those expected to share the same sound source into the same set, forming time-delay sets. Hereinafter, such processing is referred to as "segmentation", and a formed set is referred to as a "segment". Details of the processing by the sound source separation processing unit 110 will be described later with reference to a separate flowchart.
  • the separated sound source integration unit 120 includes a normalized cross correlation coefficient calculation unit 122 and a segment integration processing unit 124.
  • The separated sound source integration unit 120 has a function of combining the per-axis segments formed by the segmentation processing unit 116 by sound source. Since the segments formed by the segmentation processing unit 116 are separated for each sound source on each axis as described above, the sets must be combined between the axes in order to identify sound sources in the observation space. Here, the segments between the axes are combined using the fluctuation of the cross-correlation coefficient R(τ).
  • For the segments formed on each axis, the normalized cross-correlation coefficient calculation unit 122 calculates the normalized cross-correlation coefficient at time delay 0 for the time variation of the cross-correlation coefficient R(τ), rather than for the individual time delays τ.
  • the segment integration processing unit 124 integrates segments having a sufficiently large calculated cross-correlation coefficient R (0) as segments of the same sound source. Details of processing performed by the separated sound source integration unit 120 will also be described later.
  • the observation unit 100 includes an identification result output unit 130.
  • The segments integrated by the segment integration processing unit 124 are provided to the identification result output unit 130.
  • the identification result output unit 130 identifies a plurality of types of sound sources from the information of the segments integrated into the same sound source and the arrival direction vector calculated by the arrival direction vector calculation unit 106, and outputs the result.
  • the output result can be displayed on a display device (not shown) or transmitted as data to an external computer of the observation unit 100, for example.
  • FIG. 3 is a diagram illustrating a noise event detection method for single noise, along with a temporal change in the noise level under the flight path.
  • the observation unit 100 calculates the background noise level (BGN) at the observation point by, for example, continuously detecting the noise level in the noise event detection unit 102.
  • Single noise occurs as transient noise when the aircraft AP passes overhead, as described above. The single-noise level therefore rises with time, reaching a level 10 dB above the background noise level at time t1; thereafter the noise level reaches its maximum value (Nmax) and then falls back to the background noise level (BGN).
  • the observation unit 100 starts noise event detection at the noise event detection unit 102 from time t1. That is, when the noise level of the microphone MB rises to a level 10 dB higher than the background noise level (BGN), noise event detection processing is started.
  • A threshold level (Na) for determining that single noise has occurred is set in advance. The noise event detection unit 102 therefore identifies single noise only when the observed value exceeds the threshold level (Na). In this example, since the observed value actually exceeds the threshold level (Na), the noise event detection unit 102 can determine the single-noise occurrence time as time t3, when the noise level reaches its maximum value (Nmax).
  • The noise event detection unit 102 determines time t4, when the noise level has fallen 10 dB from the maximum value (Nmax), as the end time of the single noise.
  • The noise event detection unit 102 cuts out the period during which the noise level exceeds the value 10 dB below the maximum (Nmax) and determines this as the noise event section.
  • the noise event section is regarded as the time when single noise continues at the observation point.
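The single-noise event logic described above (detection starts at BGN + 10 dB, the event must exceed the threshold Na, and the event section is the period above Nmax - 10 dB) can be sketched as follows. This is a simplified sketch assuming one event in the sampled level series.

```python
import numpy as np

def detect_single_noise_event(levels, bgn, na):
    """Detect one single-noise event in a sampled noise-level series (dB).

    levels: noise level per sample; bgn: background noise level (dB);
    na: threshold level for accepting the event. Returns (t_start, t_peak,
    t_end) as sample indices of the event section, or None.
    """
    levels = np.asarray(levels, dtype=float)
    above = np.where(levels >= bgn + 10.0)[0]   # detection starts at BGN+10 dB
    if len(above) == 0:
        return None
    t1, t4 = above[0], above[-1]
    nmax = levels[t1:t4 + 1].max()
    if nmax < na:                               # must exceed threshold Na
        return None
    t3 = t1 + int(np.argmax(levels[t1:t4 + 1])) # time of the maximum Nmax
    # Event section: the period where the level exceeds Nmax - 10 dB.
    sect = np.where(levels >= nmax - 10.0)[0]
    return int(sect[0]), int(t3), int(sect[-1])
```

Quasi-stationary detection (FIG. 4) differs in that no threshold Na is applied and the event section is taken directly as the period above BGN + 10 dB.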
  • FIG. 4 is a diagram explaining the noise event detection method for the quasi-stationary noise along with the temporal change of the noise level in the airfield (or in the vicinity).
  • the observation unit 100 calculates the background noise level (BGN) at the observation point by continuously detecting the noise level in the noise event detection unit 102.
  • The observation unit 100 starts noise event detection in the noise event detection unit 102 at time t12, that is, when the level rises to 10 dB above the background noise level (BGN) (NP1).
  • Unlike the single-noise case, no threshold level (Na) is set here.
  • The noise event detection unit 102 cuts out the period during which the observed value was 10 dB or more above the background noise level (BGN) and determines this as the noise event section.
  • The noise event section in this case is regarded as the time during which quasi-stationary noise continues at the observation point, provided it continues for a sufficiently long time.
  • The quasi-stationary noise caused by an engine test run performed by the aircraft AP in the airfield, or by its APU, may have a relatively long duration (quasi-stationary noise section).
  • the quasi-stationary noise may include a plurality of sound sources in the noise section, it is difficult to automatically identify what kind of noise event it is after the quasi-stationary noise section is detected.
  • The sound source separation processing unit 110 and the separated sound source integration unit 120 described above use the upper peaks of the cross-correlation coefficient (not just its maximum value) to identify the arrival direction of each sound among a plurality of simultaneously generated sound sources.
  • Identification processing can proceed concurrently with the occurrence of a noise event, enabling so-called real-time processing.
  • FIG. 5 is a simplified model diagram for explaining the principle of the identification method in the present embodiment.
  • Three microphones M0, M2, and M3 are arranged on two axes (the X and Y axes) in an anechoic chamber AR. That is, to focus on ground noise in the airfield, the microphone M1 on the Z axis is omitted, and the observation space is simplified to two dimensions. An observation horizontal plane PL is virtually defined on the two X-Y axes, and two fixed sound sources SS1 and SS2 are placed there.
  • Sound is output from the two sound sources SS1 and SS2 at mutually shifted times, and the sound source separation processing unit 110 performs segmentation by sound source.
  • FIG. 6 is a flowchart illustrating a procedure example of the sound source separation processing executed by the sound source separation processing unit 110.
  • the sound source separation processing unit 110 can execute the sound source separation processing at regular intervals (for example, every 200 ms), triggered by, for example, a timer interrupt.
  • Step S12: Next, the sound source separation processing unit 110 performs cross-correlation coefficient calculation for each axis using the cross-correlation coefficient calculation unit 112 described above.
  • specifically, the cross-correlation coefficient calculation unit 112 calculates a cross-correlation coefficient R(axis, i, τ) for each axis (here, the X axis and the Y axis; the same applies hereinafter).
  • "axis" takes the values X, Y, and Z, but the simplified model omits the Z axis.
  • the subsequent processing is executed for each axis.
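The per-axis calculation can be sketched as evaluating a normalized cross-correlation R(τ) over a range of integer sample lags between the two microphone signals of that axis. The direct-sum estimator below is an illustrative choice; the patent does not specify the estimator.

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation R(tau) between two microphone signals
    for integer lags -max_lag..max_lag (positive lag: y lags x)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    denom = np.sqrt(np.sum(x * x) * np.sum(y * y))
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.zeros(lags.size)
    for k, lag in enumerate(lags):
        if lag >= 0:
            r[k] = np.dot(x[:n - lag], y[lag:]) / denom   # y delayed by lag
        else:
            r[k] = np.dot(x[-lag:], y[:n + lag]) / denom  # x delayed by -lag
    return lags, r
```

A signal delayed by five samples then produces its largest R(τ) at lag 5, which is the kind of peak the next processing stage searches for.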
  • Step S14: Next, the sound source separation processing unit 110 performs peak search processing for each axis using the peak search processing unit 114 described above.
  • FIG. 7 is a schematic diagram showing ranking of time delays due to the upper peak of the cross-correlation coefficient.
  • when the cross-correlation coefficient R(τ) is calculated on each axis, a mutual phase relationship is obtained at a plurality of time delays τ1, τ2, and τ3.
  • at these delays the coefficient shows a peak (local maximum) tendency, and the first peak R(τ1), the second peak R(τ2), and the third peak R(τ3) are observed in the upper ranks.
  • the time delays τ1, τ2, and τ3 can thus be ranked in descending order of the peak value of the cross-correlation coefficient.
  • the result of the ranking performed for each axis by the peak search processing unit 114 is the τ(axis, i, j) described above.
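The ranking step can be sketched as finding the local maxima of R(τ) and keeping the top few, ordered by peak height; the function name and the top-3 cutoff are illustrative assumptions.

```python
def rank_peaks(lags, r, top=3):
    """Rank time delays by the height of the local maxima of R(tau).
    Returns the lags of the first, second, ... peaks in descending order."""
    peaks = [(r[i], lags[i]) for i in range(1, len(r) - 1)
             if r[i] > r[i - 1] and r[i] >= r[i + 1]]   # strict rise, flat-tolerant fall
    peaks.sort(key=lambda p: p[0], reverse=True)
    return [lag for _, lag in peaks[:top]]
```

Applied to a curve with maxima of heights 3, 2, and 1, it returns the corresponding lags in that order, which is exactly the τ(axis, i, j) ranking used downstream.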
  • Step S16: The sound source separation processing unit 110 uses the segmentation processing unit 116 to execute sound source initial value determination for each axis and each peak.
  • specifically, the segmentation processing unit 116 checks whether each peak τ(axis, i, j) is the initial value of a sound source. For example, if for some τ(axis, i, j) there is no immediately preceding τ(axis, i-1, j) that can be regarded as belonging to the same sound source, the segmentation processing unit 116 regards that peak τ(axis, i, j) as the start point τ(axis, s, k) of a sound source (s being one of the time indexes i, and k one of the ranks j). Thereafter, segmentation is determined from this initial value.
  • Step S18: The sound source separation processing unit 110 causes the segmentation processing unit 116 to perform same-sound-source segmentation processing for each axis.
  • specifically, the segmentation processing unit 116 determines whether the current τ(axis, i, j) belongs to the same sound source as a specific τ(axis, i-1, j) one time step earlier; when they belong to the same sound source, they are placed in one segment. If τ(axis, i-1, j) is already part of a segment, the current τ(axis, i, j) is added to that segment, and the segment grows in the time domain. The specific contents of this process are described later using a further flowchart.
  • Step S20: The sound source separation processing unit 110 executes per-segment end determination using the segmentation processing unit 116. This process is performed for every segment currently being formed.
  • FIG. 8 is a flowchart showing a procedure example of the same sound source segmentation processing for each axis. Hereinafter, the contents of the process will be described according to a procedure example.
  • Step S100: The segmentation processing unit 116 determines whether there exists exactly one (unique) current time delay that can be regarded as belonging to the same sound source as one of the initial values τ(axis, s, k) or an already segmented τ(axis, i-1, j). Specifically, the following equation (2) is evaluated: |τ(axis, i, j′) - τ(axis, i-1, j)| < ε, where ε is a constant depending on the moving speed of the sound source (a predetermined threshold).
  • Step S102: When there is exactly one j′ satisfying equation (2) (step S100: Yes), the segmentation processing unit 116 segments using the time delay τ(axis, i, j′). Specifically, τ(axis, i, j′) is added as a member of the same segment as τ(axis, i-1, j).
  • Step S104: On the other hand, if two or more j′ satisfy equation (2), or none does (step S100: No), the segmentation processing unit 116 calculates a virtual time delay τ_virtual from the preceding τ values by the least squares method. Not all of the immediately preceding τ need be used; a few data points are sufficient, since it is enough here to capture the recent trend of τ as τ_virtual.
  • empirically, a second-order least squares fit is effective.
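The τ_virtual extrapolation of step S104 can be sketched as a low-order polynomial fit over the last few segmented delays, evaluated one step ahead; the helper name and graceful degradation for short histories are assumptions of the sketch.

```python
import numpy as np

def tau_virtual(recent_taus, order=2):
    """Least-squares extrapolation of the next time delay from the last
    few segmented delays; order 2 follows the embodiment's observation."""
    t = np.arange(len(recent_taus))
    deg = min(order, len(recent_taus) - 1)   # fewer points: lower the order
    coeffs = np.polyfit(t, recent_taus, deg)
    return float(np.polyval(coeffs, len(recent_taus)))
```

A quadratic trend such as 0, 1, 4, 9, 16 extrapolates to 25, so a smoothly accelerating delay track is followed even when the current peak is ambiguous or missing.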
  • Step S106: The segmentation processing unit 116 checks whether there are two or more current time delays that could be regarded as the same sound source.
  • Step S108: When two or more j′ satisfy equation (2) (step S106: Yes), the segmentation processing unit 116 determines whether there is a time delay that can be segmented. Specifically, the following equation (3) is evaluated: |τ(axis, i, j′) - τ_virtual| < δ, where δ is a preset constant (a specific threshold) whose value can be set experimentally or empirically.
  • Step S110: When there is an optimum τ(axis, i, j′) that satisfies equation (3) with the smallest deviation (step S108: Yes), the segmentation processing unit 116 segments using that optimum τ(axis, i, j′). Specifically, the optimum τ(axis, i, j′) is added as a member of the same segment as τ(axis, i-1, j).
  • Step S112: On the other hand, if no j′ satisfies equation (3) (step S108: No), or none satisfies equation (2) (step S106: No), the segmentation processing unit 116 performs segmentation using the τ_virtual calculated in step S104. Specifically, τ_virtual is added as a member of the same segment as τ(axis, i-1, j).
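The decision flow of steps S100 through S112 can be condensed into one function per segment and time step, where eps and delta play the roles of ε in equation (2) and δ in equation (3). This is a sketch of the flow under assumed data shapes (lists of candidate delays, a short history of the segment's delays), not the patented implementation itself.

```python
import numpy as np

def segment_step(prev_tau, candidates, recent_taus, eps, delta):
    """One segmentation step: choose which current-time delay (if any)
    continues the segment, following the flow of FIG. 8."""
    near = [t for t in candidates if abs(t - prev_tau) < eps]   # eq. (2)
    if len(near) == 1:
        return near[0]                                          # step S102: unique match
    # step S104: virtual delay from a 2nd-order least-squares fit of recent taus
    t = np.arange(len(recent_taus))
    deg = min(2, len(recent_taus) - 1)
    tv = float(np.polyval(np.polyfit(t, recent_taus, deg), len(recent_taus)))
    if near:                                                    # steps S106/S108: ambiguous
        best = min(near, key=lambda x: abs(x - tv))
        if abs(best - tv) < delta:                              # eq. (3)
            return best                                         # step S110: closest to trend
    return tv                                                   # step S112: fall back to tau_virtual
```

With a unique nearby candidate the candidate itself is returned; with two candidates the one closer to the extrapolated trend wins; with none, the extrapolated τ_virtual keeps the segment alive.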
  • FIG. 9 is a diagram illustrating a variation in time delay on the X axis in the simplified model.
  • the horizontal axis in FIG. 9 indicates the number of time indexes, and the vertical axis indicates the time delay on the X axis.
  • the white circles shown in FIG. 9 indicate time delays ranked at the first peak of the cross-correlation coefficient, and the hatched circles indicate time delays ranked at the second peak.
  • in the earlier part of the record, the time delay on the X axis appears at a plurality of ranks, at both the first and the second peak of the cross-correlation coefficient.
  • the time delay (> 0) ranked at the first peak corresponds to the first sound source SS1, and the time delay (< 0) ranked at the second peak corresponds to the second sound source SS2. Therefore, in this time domain, the time delays ranked at the first peak of the cross-correlation coefficient are continuously added to the segment of the sound source SS1.
  • the time delay ranked at the second peak of the cross-correlation coefficient at time index Ti1 is regarded as an initial value for the second sound source SS2.
  • thereafter, the time delays ranked at the second peak of the cross-correlation coefficient are added to the segment of the sound source SS2.
  • in the later part of the record, the time delay on the X axis appears only at the first peak of the cross-correlation coefficient.
  • here the time delay ranked at the first peak corresponds to the second sound source SS2, so from this point the time delays ranked at the first peak of the cross-correlation coefficient are added to the segment of the sound source SS2.
  • FIG. 10 is a diagram showing an example in which the variation in time delay on the X axis shown in FIG. 9 is separated into sound source segments. Among these, (A) in FIG. 10 shows the segment of the first sound source SS1, and (B) in FIG. 10 shows the segment of the second sound source SS2.
  • the time delay variation on each axis can be separated into segments for each sound source.
  • although only the X axis is shown here, the Y axis can similarly be divided into segments for each sound source.
  • FIG. 11 is a diagram illustrating an example when the simplified model is applied to a moving sound source.
  • here, two moving sound sources SS1 and SS2 are arranged in the observation horizontal plane PL. Further, the orientation of the axes (X and Y) of the microphone unit 10 differs from that of the simplified model.
  • Example 2: Of the two sound sources SS1 and SS2 arranged in the anechoic chamber AR, the first sound source SS1 was moved from the vicinity of one wall toward the other wall and then back to the vicinity of the first wall. Conversely, the second sound source SS2 was moved from the vicinity of the other wall toward the first wall and then back to the vicinity of the other wall. The two sound sources SS1 and SS2 were moved in parallel at the same time.
  • FIG. 12 is a diagram illustrating a variation in time delay in the X-axis and the Y-axis.
  • (A) in FIG. 12 shows the fluctuation of the time delay on the X axis
  • (B) in FIG. 12 shows the fluctuation of the time delay on the Y axis.
  • in each, the horizontal axis represents the time index number and the vertical axis represents the ranked time delay τ.
  • the time delays ranked from the first peak through the third peak are displayed.
  • white circles indicate the time delays ranked at the first peak of the cross-correlation coefficient, hatched circles those ranked at the second peak, and black circles those ranked at the third peak.
  • FIG. 13 is a diagram illustrating an example of segment separation along the X axis.
  • (A) in FIG. 13 represents a segment corresponding to the sound source SS1
  • (B) in FIG. 13 represents a segment corresponding to the sound source SS2.
  • the horizontal axis in the figure indicates the number of time indexes
  • the vertical axis indicates the ranked time delay.
  • black rhombus marks shown in (A) and (B) indicate ⁇ virtual (virtual time delay) used in the segmentation process (the same applies hereinafter).
  • FIG. 14 is a diagram showing a result of fluctuation in time delay in the X axis and the Y axis for the measured data.
  • (A) in FIG. 14 shows the time delay variation on the X axis
  • FIG. 14B shows the time delay variation on the Y axis.
  • shown are the fluctuation results for a situation in which a landing sound is observed in the middle of a taxiing sound.
  • the white circles in FIG. 14 indicate time delays ranked in the first peak of the cross-correlation coefficient
  • the hatched circles indicate time delays ranked in the second peak of the cross-correlation coefficient.
  • the black circles indicate time delays ranked in the third peak of the cross correlation coefficient.
  • FIG. 15 is a diagram illustrating a result of separating the time delay variation on the X-axis and the Y-axis into segments for the measured data.
  • (A) and (B) in FIG. 15 show examples of time-delayed segment separation on the X axis
  • (C) and (D) in FIG. 15 show examples of time-delayed segment separation on the Y-axis.
  • the white circles in FIG. 15 indicate the time delays ranked at the first peak of the cross-correlation coefficient, and the hatched circles indicate those ranked at the second peak.
  • the black rhombus marks indicate ⁇ virtual (virtual time delay) used in the segmentation process.
  • the time delay variation is separated into, for example, four segments X1, X2, X3, and X4 on the X axis, and three segments Y1, Y2, and Y3 on the Y axis.
  • the number of segments after separation basically represents the number of sound sources.
  • when noise with a large sound pressure level becomes dominant partway through and breaks the continuity of the preceding time delay, as in the separation example of FIG. 15, it is entirely possible that the number of segments does not match the number of sound sources.
  • therefore, the segments separated on each axis are further integrated across the different axes according to sound source.
  • segment integration processing: the segment integration processing is executed by the separated sound source integration unit 120 described above.
  • the segments separated on each axis as described above are combined between different axes.
  • segments between different axes are integrated using fluctuations in the cross-correlation coefficient R ( ⁇ ).
  • FIG. 16 is a diagram showing a variation pattern of the cross-correlation coefficient R ( ⁇ ) of each segment.
  • (A), (B), and (C) in FIG. 16 show the variation patterns of the cross-correlation coefficient R(τ) for the X-axis segments
  • (D), (E), and (F) in FIG. 16 show the variation patterns of the cross-correlation coefficient R(τ) for the Y-axis segments.
  • in the figure, white circles indicate the first peak value of the cross-correlation coefficient R(τ), hatched circles the second peak value, and black circles the third peak value.
  • segments considered to originate from the same sound source have very similar variation patterns of R(τ). This is thought to be because the sound generated from an actual source fluctuates, for example with changes in engine output, or is affected by the weather before reaching the observation point, and these fluctuations appear in common in the observations on each axis.
  • FIG. 17 is a diagram illustrating a calculation example of a normalized cross-correlation coefficient between overlapping segments in the time domain.
  • segments on the X axis are arranged in the vertical direction and segments on the Y axis are arranged in the horizontal direction, and the normalized cross-correlation coefficients between the segments are shown in a 4 ⁇ 3 matrix.
  • example of segment integration: In FIG. 17 it can be seen that selecting combinations whose normalized cross-correlation coefficient exceeds 0.9 yields four pairs: X1-Y1, X2-Y2, X3-Y3, and X4-Y3. Reflecting this result, an example of segment integration between the axes is indicated in FIG. 16 by dashed arrows.
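The pairing step can be sketched as computing a normalized correlation between the peak-value sequences of each X-axis segment and each Y-axis segment over their common overlap, keeping pairs above the 0.9 threshold. The segment names, the dictionary layout, and the assumption that the sequences are already aligned are all illustrative.

```python
import numpy as np

def normalized_corr(a, b):
    """Normalized correlation coefficient between two aligned
    peak-value sequences (mean removed)."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

def integrate_segments(x_segs, y_segs, threshold=0.9):
    """Pair X-axis and Y-axis segments whose R(tau) variation patterns
    correlate above threshold (cf. the 0.9 criterion of FIG. 17)."""
    pairs = []
    for xn, xv in x_segs.items():
        for yn, yv in y_segs.items():
            if normalized_corr(xv, yv) > threshold:
                pairs.append((xn, yn))
    return pairs
```

Segments whose level fluctuations move together are merged into one sound source; an anti-correlated pattern is rejected outright.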
  • the peak value of the taxiing sound is not detected between the time indexes Tia and Tib because the actual noise level of the landing sound was sufficiently higher than that of the taxiing sound.
  • as described above, in this embodiment the time delay variation in the time domain obtained from the peaks of the cross-correlation coefficient on each axis is extracted and divided into source-specific segments, and the segments on the different axes are then integrated using the variation of the cross-correlation coefficient of each segment. This makes it possible to separate the sound arrival directions of a plurality of noise sources occurring simultaneously in the observation space.
  • with the identification method of this embodiment, the number of aircraft emitting significant noise levels, which could not be identified by a method using only the first peak of the cross-correlation coefficient and the fluctuation of the sound pressure level, can be recognized automatically. Thereby, for example, in the noise level evaluation of a single noise event, information for inferring the influence of background noise can be obtained. At the same time, since the direction of arrival of each sound is obtained, each sound source can be reliably identified by using airport structural information such as runways, taxiways, and surrounding roads.
  • the observation on the Z-axis is omitted for simplification, but it is naturally possible to use the three-axis correlation of the X-axis, the Y-axis, and the Z-axis when implementing the present invention.
  • on the Z axis, the peak tendency of the cross-correlation coefficient caused mainly by ground reflection is prominent; exploiting this tendency is very useful for making aircraft ground-noise identification more accurate.
  • in the embodiment, the airfield is the target area, but the noise observation apparatus and the noise observation method of the present invention can also be applied to observation spaces (target areas) other than an airfield.
  • the conditions (ε, δ, 0.9) for forming and integrating segments given in the embodiment are examples; their settings can be changed as appropriate to match the characteristics of the observation target area and the noise sources.
  • the cross-correlation coefficient is calculated at each scheduled interrupt, but the "scheduled" interval need not be constant.
  • for example, a calculation may be performed at a 200 ms interval at one time, while the next interval may be shorter (e.g., 100 ms) or longer (e.g., 300 ms).
  • a taxiing sound and a landing sound are exemplified as the plurality of noises, but other combinations of noises are possible. Further, the disclosed invention can be applied even when three or more noises occur simultaneously.

PCT/JP2013/004343 2012-08-09 2013-07-16 騒音観測装置及び騒音観測方法 WO2014024382A1 (ja)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112013003958.3T DE112013003958T5 (de) 2012-08-09 2013-07-16 Geräuschbeobachtungsvorrichtung und Geräuschbeobachtungsverfahren
CN201380041613.3A CN104583737B (zh) 2012-08-09 2013-07-16 噪声观测装置和噪声观测方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-177181 2012-08-09
JP2012177181A JP5150004B1 (ja) 2012-08-09 2012-08-09 騒音観測装置及び騒音観測方法

Publications (1)

Publication Number Publication Date
WO2014024382A1 true WO2014024382A1 (ja) 2014-02-13

Family

ID=47890586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/004343 WO2014024382A1 (ja) 2012-08-09 2013-07-16 騒音観測装置及び騒音観測方法

Country Status (4)

Country Link
JP (1) JP5150004B1 (de)
CN (1) CN104583737B (de)
DE (1) DE112013003958T5 (de)
WO (1) WO2014024382A1 (de)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140873B2 (en) * 2016-08-16 2018-11-27 The Boeing Company Performance-based track variation for aircraft flight management
CN110501674A (zh) * 2019-08-20 2019-11-26 长安大学 一种基于半监督学习的声信号非视距识别方法
EP4181000A1 (de) 2021-11-15 2023-05-17 Siemens Mobility GmbH Verfahren und rechenumgebung zum erstellen und anwenden eines prüfalgorithmus für rechenvorgänge

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959842A (zh) * 2016-04-29 2016-09-21 歌尔股份有限公司 一种耳机降噪处理的方法、装置及耳机
CN110907895A (zh) * 2019-12-05 2020-03-24 重庆商勤科技有限公司 噪声监测识别定位方法、系统及计算机可读存储介质
CN114001758B (zh) * 2021-11-05 2024-04-19 江西洪都航空工业集团有限责任公司 一种捷联导引头捷联解耦准确确定时间延迟的方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59216022A (ja) * 1983-05-24 1984-12-06 Yoichi Ando 音場評価計測器
JPH023126A (ja) * 1987-11-26 1990-01-08 Ricoh Co Ltd 情報記録媒体
JPH0743203A (ja) * 1993-07-30 1995-02-14 Kobayashi Rigaku Kenkyusho 移動音源識別方法及び装置
JPH11190777A (ja) * 1997-10-24 1999-07-13 Sekisui Chem Co Ltd 振動検出器及びその取り付け方法とその取り付け器具、並びにそれを使用した地盤の振動波の伝搬速度の測定方法
JP2005184426A (ja) * 2003-12-19 2005-07-07 Chiyuuden Plant Kk 音源方向検出装置および方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3350713B2 (ja) * 2000-08-15 2002-11-25 神戸大学長 騒音源の種類を特定する方法、その装置および媒体
JP4113169B2 (ja) * 2004-08-18 2008-07-09 日本電信電話株式会社 信号源数の推定方法、推定装置、推定プログラム及び記録媒体
DE102008062291B3 (de) * 2008-12-15 2010-07-22 Abb Technology Ag Messeinrichtung und Verfahren zur Diagnose von Geräuschen in fluidischen Systemen
JP5016724B1 (ja) * 2011-03-18 2012-09-05 リオン株式会社 騒音観測装置及び騒音観測方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59216022A (ja) * 1983-05-24 1984-12-06 Yoichi Ando 音場評価計測器
JPH023126A (ja) * 1987-11-26 1990-01-08 Ricoh Co Ltd 情報記録媒体
JPH0743203A (ja) * 1993-07-30 1995-02-14 Kobayashi Rigaku Kenkyusho 移動音源識別方法及び装置
JPH11190777A (ja) * 1997-10-24 1999-07-13 Sekisui Chem Co Ltd 振動検出器及びその取り付け方法とその取り付け器具、並びにそれを使用した地盤の振動波の伝搬速度の測定方法
JP2005184426A (ja) * 2003-12-19 2005-07-07 Chiyuuden Plant Kk 音源方向検出装置および方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140873B2 (en) * 2016-08-16 2018-11-27 The Boeing Company Performance-based track variation for aircraft flight management
US10565885B2 (en) 2016-08-16 2020-02-18 The Boeing Company Performance-based track variation for aircraft flight management
CN110501674A (zh) * 2019-08-20 2019-11-26 长安大学 一种基于半监督学习的声信号非视距识别方法
EP4181000A1 (de) 2021-11-15 2023-05-17 Siemens Mobility GmbH Verfahren und rechenumgebung zum erstellen und anwenden eines prüfalgorithmus für rechenvorgänge

Also Published As

Publication number Publication date
DE112013003958T5 (de) 2015-04-23
JP5150004B1 (ja) 2013-02-20
CN104583737A (zh) 2015-04-29
CN104583737B (zh) 2016-11-23
JP2014035287A (ja) 2014-02-24

Similar Documents

Publication Publication Date Title
WO2014024382A1 (ja) 騒音観測装置及び騒音観測方法
AU2014250633B2 (en) Dynamic alarm zones for bird detection systems
JP5016724B1 (ja) 騒音観測装置及び騒音観測方法
JP7232543B2 (ja) 雷脅威情報の提供装置、雷脅威情報の提供方法及びプログラム
US10134292B2 (en) Navigating and guiding an aircraft to a reachable airport during complete engine failure
CN103482071A (zh) 结冰情况探测系统
CN103903101A (zh) 一种通用航空多源信息监管平台及其方法
JP2012126394A (ja) 航空機に対する気象の位置を予測するシステム及び方法
CN113869379A (zh) 一种基于数据驱动的航空器能量异常识别方法
US20240221358A1 (en) System for automatic stop sign violation identification
JP5016726B1 (ja) 騒音観測装置及び騒音観測方法
CN109190325A (zh) 基于行人拥挤度分析的人群疏散路径规划仿真方法
CN110781457A (zh) 离场阶段油耗数据的处理方法及装置、电子设备和存储介质
Asensio et al. Implementation of a thrust reverse noise detection system for airports
Penkin et al. Detection of the aircraft vortex wake with the aid of a coherent Doppler lidar
Yoshikawa et al. Wake vortex observation campaign by ultra fast-scanning lidar in Narita airport, Japan
JP5561424B1 (ja) 表示制御装置、表示制御方法およびプログラム
KR102475554B1 (ko) 학습용 데이터 생성 방법, 학습용 데이터 생성 장치 및 학습용 데이터 생성 프로그램
JP5685802B2 (ja) レーダ制御装置、該装置に用いられるレーダ監視覆域設定方法及びレーダ監視覆域設定プログラム
Timar et al. Analysis of s-turn approaches at John F. Kennedy airport
CN105427676A (zh) 航空器通过出港点时间的确定方法和装置
CN118762493A (zh) 一种点线网融合的边坡灾害监测预警方法及系统
JP6366540B2 (ja) 監視装置
JPH10246780A (ja) 空港面レーダ監視装置
CN110609574A (zh) 飞行数据的处理方法及装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13828374

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 1120130039583

Country of ref document: DE

Ref document number: 112013003958

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13828374

Country of ref document: EP

Kind code of ref document: A1