WO2014024382A1 - Noise observation device and noise observation method - Google Patents

Noise observation device and noise observation method Download PDF

Info

Publication number
WO2014024382A1
WO2014024382A1 (PCT/JP2013/004343)
Authority
WO
WIPO (PCT)
Prior art keywords
time delay
time
noise
axis
cross
Prior art date
Application number
PCT/JP2013/004343
Other languages
French (fr)
Japanese (ja)
Inventor
篠原 健二
恵司 廻田
Original Assignee
リオン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by リオン株式会社 filed Critical リオン株式会社
Priority to DE112013003958.3T priority Critical patent/DE112013003958T5/en
Priority to CN201380041613.3A priority patent/CN104583737B/en
Publication of WO2014024382A1 publication Critical patent/WO2014024382A1/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/80Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
    • G01S3/802Systems for determining direction or deviation from predetermined direction
    • G01S3/808Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
    • G01S3/8083Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems determining direction of source
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01HMEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00Measuring characteristics of vibrations by using a detector in a fluid

Definitions

  • the present invention relates to a noise observation apparatus and a noise observation method suitable for use in an environment where a plurality of sound sources exist in an observation target area.
  • A prior art effective for the automatic identification of aircraft flight noise observed under a flight route has been known (see, for example, Japanese Patent Laid-Open No. 7-43203: Patent Document 1).
  • In this prior art, the arrival direction vector (elevation angle, azimuth angle) of a moving sound source is calculated from the time delays at which the cross-correlation of the sound reaching four microphones, arranged at intervals on the X, Y, and Z axes, peaks, and the movement trajectory of the moving sound source is automatically identified from the obtained set of vectors.
  • Ground noise includes, for example, noise generated by operation of the auxiliary power unit (APU) of a parked aircraft, noise from aircraft operating their engines for propulsion while moving between the terminal and the runway (taxiing), and noise generated when an aircraft performs an engine test run in the engine test area of an airfield.
  • However, the noise observed around an airfield is complex: various noises such as cars and sirens arrive at the observation point from the surroundings, so it is difficult to pinpoint only the ground noise generated by aircraft among the other noises.
  • Moreover, aircraft noise includes transient single-shot noise that occurs as aircraft operate on the airfield, and quasi-stationary noise, observed around the airfield due to engine test runs, APU operation, and other maintenance work, that continues for a long time and is steady but has considerable level fluctuations; this makes noise identification even more difficult.
  • In the technique of Patent Document 1, when noise is generated simultaneously from a plurality of sound sources, the time delay at which the cross-correlation coefficient on each axis is maximum does not necessarily indicate the direction of arrival of sound from the same sound source. When a plurality of noises overlap, only the sound having the maximum cross-correlation is used, and it is difficult to automatically identify the other noises.
  • To address this, the present solution calculates the time-delay cross-correlation coefficient for each axis at regular intervals and, for the time delays at which the cross-correlation coefficient shows a peak (maximum) tendency, extracts a plurality of time-delay variations in the time domain, starting from the largest coefficient, to form a continuous set of time delays for each axis.
  • When there are a plurality of sound sources, the sets of time delays formed on each axis are separated by sound source.
  • Then, for the time-delay sets formed on each axis, if the cross-correlation between different axes is observed, the sets of time delays having the same sound source can be combined. As a result, a plurality of simultaneously occurring noises can be automatically separated and identified.
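As an illustration of this per-axis set formation, successive top-peak delays that stay within a continuity threshold can be chained into one set per source. The sketch below assumes per-interval lists of candidate delays; the function name, data layout, and threshold are illustrative, not from the patent.

```python
# Minimal sketch: delays whose successive values stay within a continuity
# threshold are collected into the same set (one set per sound source).
# The threshold of 2.0 and the list-of-lists input are assumptions.

def group_delays(delay_sequence, threshold=2.0):
    """delay_sequence: list of lists; entry k holds the top-peak time delays
    found at interval k. Returns sets of (interval_index, delay) pairs."""
    sets = []  # each set: list of (interval_index, delay)
    for k, delays in enumerate(delay_sequence):
        for d in delays:
            # try to extend an existing set whose last delay is close to d
            for s in sets:
                if s[-1][0] == k - 1 and abs(s[-1][1] - d) < threshold:
                    s.append((k, d))
                    break
            else:
                sets.append([(k, d)])  # start a new set (new sound source)
    return sets
```

With two stationary sources the delays form two nearly constant tracks, so two separate sets emerge.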
  • the noise with the highest sound pressure level tends to dominate the maximum peak of the cross-correlation coefficient for each axis.
  • Therefore, when looking at a plurality of peaks in descending order, attention is paid to the fact that one of the peaks reflects the influence of noise other than the loudest noise (i.e., other sound sources).
  • Further, when a sound source moves in the observation space, a time delay that gives the maximum peak of the cross-correlation coefficient in one time zone may not give the maximum peak in another time zone, where the source may instead appear at the time delay of a lower peak. In such a case, the time delay at which the cross-correlation coefficient reaches its maximum peak in that time zone indicates a different sound source, so simply tracing the maximum peak through the time domain does not identify the sound source.
  • Accordingly, this solution extracts a plurality of the higher-ranked time-delay fluctuations of the cross-correlation coefficient calculated at regular intervals (for example, from the first to the third peak) and forms continuous sets for each axis.
  • Furthermore, the sets separated on each axis are combined between different axes. That is, the present invention pays attention to the fact that, when the cross-correlation coefficient is considered for a set instead of its individual time delays, sets belonging to the same sound source show very similar fluctuations of the cross-correlation coefficient. This is presumably because the sound emitted from the actual source is affected by, for example, output fluctuations and weather changes, and these changes appear in common in the sound input to the microphones. Therefore, if sets exist at the same time on different axes in a certain time domain, whether they can be combined is determined from the normalized cross-correlation coefficient at time delay 0 of the time variations of the cross-correlation coefficients of the sets. Since the sets combined between the axes represent the arrival direction of the same sound source, a plurality of sound sources existing in the observation space can be automatically identified from them.
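The pairing criterion can be sketched as a zero-lag normalized cross-correlation between the time variations of two segments' cross-correlation coefficients; segments from the same source should score close to 1. A minimal illustration, with assumed names:

```python
import numpy as np

def zero_lag_similarity(rx, ry):
    """Normalized cross-correlation at time delay 0 between the time
    variations of two segments' cross-correlation coefficients (equal
    length assumed: the overlapping time domain of the two segments)."""
    rx = rx - np.mean(rx)
    ry = ry - np.mean(ry)
    denom = np.sqrt(np.sum(rx**2) * np.sum(ry**2))
    return float(np.sum(rx * ry) / denom) if denom > 0 else 0.0
```

Segments whose coefficient traces fluctuate alike (same source) score near 1; unrelated segments score much lower, so a threshold on this value decides the combination.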
  • the noise observation apparatus has a configuration including a calculation unit, an aggregation unit, and an integration unit.
  • the noise observation method of the present solution can be executed using these configurations.
  • The calculation means executes, at regular intervals and for each axis, a step of calculating the cross-correlation coefficient with respect to the time delay representing the difference in sound arrival time between two microphones arranged at an interval on each of a plurality of axes defined in an observation space where a plurality of sound sources exist.
  • The aggregation means executes a step of extracting, for the plurality of time delays at which the cross-correlation coefficient calculated at regular intervals by the calculation means shows a peak tendency, a plurality of time-delay fluctuations in the time domain in descending order of the cross-correlation coefficient, and forming a continuous set of time delays for each axis. The integration means then executes a step of combining, among the per-axis sets of time delays formed by the aggregation means, those sets having the same sound source, based on the cross-correlation between different axes.
  • the following various aspects can be provided in forming a continuous set of time delays.
  • Prior to forming a continuous set of time delays, the aggregation means can execute a step of confirming whether each of the plurality of time delays showing a peak tendency at each fixed interval becomes an initial value for a sound source.
  • In this case, the steps executed by the aggregation means can include the following. (1) A step of determining whether, among the plurality of time delays showing a peak tendency at each fixed interval, there is a unique value whose difference from a specific time delay that showed a peak tendency in the preceding interval is less than a predetermined threshold. (2) A step of adding the unique value to the same set as the specific time delay when it is determined in (1) that the unique value exists. (3) A step of calculating a virtual time delay by the least-squares method using at least the specific time delay (the most recent several values) when it is determined in (1) that no unique value exists.
  • In this aspect, after confirming whether each upper-peak time delay calculated at each fixed interval is caused by the same sound source as the previous time delay, only the time delay expected to share that sound source (unique noise data) is added to the continuous set. This improves the accuracy of the identification result and increases the reliability of the noise observation result. Also, if none of the current upper-peak time delays can be predicted to belong to the same sound source, a virtual time delay is obtained by the least-squares method using the last several time delays, including the previous value, and can be used in the subsequent steps.
  • In this aspect, the aggregation means further executes the following step. (4) A step of determining whether no unique value exists because there are a plurality of time delays whose difference from the specific time delay is less than the predetermined threshold, or because there is no time delay at all whose difference from the specific time delay is less than the predetermined threshold.
  • (5) A step of determining, when it is determined in (4) that a plurality of such time delays exist, whether there is among them a specific value whose difference from the virtual time delay is less than a specific threshold. (6) A step of adding the specific value to the same set as the specific time delay when it is determined in (5) that the specific value exists. (7) A step of adding the virtual time delay to the same set as the specific time delay when it is determined in (5) that no such value exists, or when it is determined in (4) that no unique value exists because no time delay has a difference from the specific time delay less than the predetermined threshold.
  • The virtual time delay calculated in (3) above represents the latest variation of the time delay in the time domain. Therefore, when there are a plurality of candidate time delays and a unique value cannot be determined, the candidate close to the virtual time delay can be predicted to share the sound source and added to the set (6). On the other hand, if nothing is close to the virtual time delay and no unique value exists, the virtual time delay itself can be added to the set to continue the aggregation (7).
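Steps (1) through (7) can be sketched as one aggregation step that first looks for a unique nearby candidate and otherwise falls back to a straight-line least-squares prediction. The thresholds, fit length, and all names below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def next_delay(segment, candidates, threshold=2.0, fit_len=5):
    """One aggregation step for a per-axis segment (a list of delays).
    (1)-(2): a single candidate close to the last delay is appended;
    (3)-(7): otherwise a virtual delay, extrapolated by a least-squares
    line over the most recent delays, breaks ties or fills the gap."""
    last = segment[-1]
    close = [c for c in candidates if abs(c - last) < threshold]
    if len(close) == 1:                      # (1)-(2): unique value found
        segment.append(close[0])
        return
    recent = segment[-fit_len:]              # (3): least-squares prediction
    x = np.arange(len(recent))
    slope, intercept = np.polyfit(x, recent, 1)
    virtual = slope * len(recent) + intercept
    near_virtual = [c for c in close if abs(c - virtual) < threshold]
    if len(near_virtual) == 1:               # (5)-(6): tie broken by prediction
        segment.append(near_virtual[0])
    else:                                    # (4)/(7): keep the set alive
        segment.append(virtual)
```

When the candidates contain nothing near the running track, the predicted value is appended, which is what lets a set survive short dropouts until the end-determination step.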
  • the aggregation means can further execute the following steps.
  • (8) When the last time delay in a set is a virtual time delay and all of the immediately preceding predetermined number of entries are also virtual time delays, the consecutive virtual time delays are deleted and further aggregation for that set can be terminated.
  • This makes it possible to end the aggregation of a continuous set of time delays in accordance with, for example, the end of a noise event.
  • Further, when the number of time delays included in a set is equal to or less than a specified number after the aggregation means has deleted the predetermined number of virtual time delays in the end-determination step of (8) and completed the formation of the set, the set can be invalidated.
  • a plurality of sound sources that are simultaneously generated in the observation space can be automatically separated and identified.
  • Since the calculation load can be reduced by processing periodically rather than after accumulating a large amount of calculation results, real-time processing on a computer can easily be realized.
  • FIG. 1 is a schematic diagram showing one embodiment when a noise observation device is installed in an airfield
  • FIG. 2 is a diagram schematically showing the configuration of the noise observation apparatus and the noise identification method based on the cross-correlation method.
  • Fig. 3 is a diagram explaining the noise event detection method for single noise, along with the temporal change of the noise level under the flight path
  • FIG. 4 is a diagram explaining the noise event detection method for quasi-stationary noise, along with the temporal change of the noise level in the airfield (or nearby)
  • FIG. 5 is a simplified model diagram for explaining the principle of the identification method.
  • FIG. 6 is a flowchart illustrating an example of a procedure of sound source separation processing executed by the sound source separation processing unit.
  • FIG. 7 is a schematic diagram showing ranking of time delays due to the upper peak of the cross-correlation coefficient
  • FIG. 8 is a flowchart showing a procedure example of the same sound source segmentation processing for each axis
  • FIG. 9 is a diagram showing the variation of the time delay on the X axis in the simplified model
  • FIG. 10 is a diagram showing an example in which the variation in time delay on the X-axis shown in FIG. 9 is separated into sound source segments
  • FIG. 11 is a diagram illustrating an example when the simplified model is applied to a moving sound source
  • FIG. 12 is a diagram showing fluctuations in time delay in the X axis and the Y axis.
  • FIG. 13 is a diagram showing an example of segment separation in the X-axis.
  • FIG. 14 is a diagram showing the fluctuation results of the time delay in the X axis and the Y axis for the measured data.
  • FIG. 15 is a diagram showing a result of separating the time delay variation in the X axis and the Y axis into segments for the measured data.
  • FIG. 16 is a diagram showing a variation pattern of the cross-correlation coefficient of each segment.
  • FIG. 17 is a diagram illustrating a calculation example of a normalized cross-correlation coefficient between overlapping segments in the time domain.
  • FIG. 1 is a schematic diagram showing one embodiment when a noise observation apparatus is installed in an airfield.
  • In a target area such as an airfield (and its surroundings), noise coming from the sky as aircraft fly over, take-off and landing noise generated on the runway, and reverse-thrust noise during landing (hereinafter referred to as "flight noise") are observed.
  • In addition to this flight noise, a noise environment is formed in which noise associated with aircraft operations and maintenance in the airfield, such as taxiing, engine test runs, and APU operation (hereinafter referred to as "ground noise"), is mixed in.
  • the noise observation apparatus can be used with a microphone unit 10 installed at an observation point in an airfield.
  • An observation unit (not shown) is connected to the microphone unit 10.
  • In the airfield, there are noise sources such as the parking area 20, the taxiway 30, the landing aircraft 40 and take-off aircraft 50 that run on or fly over the runway 25, and the engine test area 60.
  • various noises are generated from these places and arrive at the observation point from each direction.
  • The noise observation apparatus of the present embodiment is suitable for applications that automatically identify a plurality of noises arriving at an observation point using the microphone unit 10. In the following, each area that is a source of noise is explained.
  • The auxiliary power unit (APU) is a small engine used as a power source for supplying compressed air, hydraulic pressure, electric power, and the like inside the parked aircraft AP.
  • The taxiway 30 is a path on which the aircraft AP moves between the parking area and the runway 25. During taxiing, the aircraft AP operates its engines to obtain the propulsive force necessary for ground running, and noise is thereby generated.
  • The landing aircraft 40 approaches and descends toward the runway 25 on arrival and, in many cases, applies reverse thrust on the runway 25 for deceleration; it generates noise associated with the flight until it finally reaches the runway 25.
  • The take-off aircraft 50 starts its run at the start position of the runway 25 on departure and generates noise associated with the operation until it lifts off partway along the runway 25, climbs, and flies away.
  • FIG. 2 is a diagram schematically showing the configuration of the noise observation apparatus and the noise identification method based on the cross-correlation method.
  • the noise observation apparatus has a function of performing calculation processing using the microphone unit 10 and identifying noise by a cross-correlation method.
  • the microphone unit 10 includes, for example, four microphones M0, M1, M2, and M3.
  • The individual microphones M0 to M3 are arranged on the X, Y, and Z axes virtually defined in the observation space and at the origin of the three-axis coordinate system. Specifically, the microphone M0 is placed at the origin, and the microphone M1 is placed on the Z axis extending vertically from the origin.
  • The microphone M2 is placed on the Y axis, which extends horizontally from the origin at 90° to the X axis, and the microphone M3 is placed on the X axis extending horizontally from the origin.
  • The microphone unit 10 mechanically fixes the individual microphones M0 to M3 and thereby holds their relative positions in the installed state.
  • In this way, two microphones (the origin microphone M0 and one of M1 to M3) are disposed on each of the X, Y, and Z axes in the observation space.
  • the microphone unit 10 includes a microphone MB different from the four microphones M0 to M3 described above.
  • The four microphones M0 to M3 are used for noise identification by the cross-correlation method, while the microphone MB is used to measure ambient noise, that is, to measure the noise level at the observation point on its own.
  • the noise observation apparatus includes an observation unit 100, and a microphone unit 10 is connected to the observation unit 100.
  • the observation unit 100 includes, for example, computer equipment including a central processing unit (CPU), a semiconductor memory (ROM, RAM), a hard disk drive (HDD), an input / output interface, a liquid crystal display, and the like (not shown).
  • The elevation angle information can be used to identify flight noise (see Patent Document 1 cited in the prior art section). That is, for example, when the noise level detected by the microphone MB exceeds a certain threshold value (when a noise event occurs), if the change of the elevation angle over time is recorded as sound arrival direction data at the same time, noise whose arrival direction data shows an elevation angle larger than a pre-designated value can be determined to be flight noise caused by the aircraft AP.
  • Further, if the sound arrival direction is expanded from the vertical direction alone to the three axis pairs X-Y, Y-Z, and Z-X, the azimuth angle can be obtained by calculation in addition to the elevation angle. By obtaining the elevation angle and the azimuth angle, a noise arrival direction vector (unit vector) can be calculated in the three-axis observation space (vector space) with the observation point as a reference. The movement of the sound source (aircraft AP), i.e. from which direction to which direction it moved, can then be grasped more reliably with the cross product of the calculated arrival direction vectors as a reference.
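To make the geometry concrete, the sketch below derives the elevation and azimuth from the three per-axis time delays, assuming one microphone at the origin and one at distance d on each axis. The spacing d, sound speed c, and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def arrival_direction(tau_x, tau_y, tau_z, d=0.2, c=340.0):
    """Arrival direction (elevation, azimuth in degrees) from three
    per-axis time delays. For a far-field source, the delay tau (seconds)
    between the origin microphone M0 and the microphone at distance d (m)
    on an axis gives that axis component of the unit arrival vector as
    c*tau/d; d and c here are assumed example values."""
    u = np.array([tau_x, tau_y, tau_z]) * c / d
    u = u / np.linalg.norm(u)            # renormalize against measurement noise
    elevation = np.degrees(np.arcsin(np.clip(u[2], -1.0, 1.0)))
    azimuth = np.degrees(np.arctan2(u[1], u[0]))
    return elevation, azimuth
```

Recording the resulting vector at each interval yields the time series from which the movement of the source can be traced.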
  • the observation unit 100 includes a noise event detection unit 102 and a direction-of-arrival vector calculation unit 106 as functional elements thereof, and further includes a sound source separation processing unit 110 and a separated sound source integration unit 120 including a plurality of functional elements.
  • the noise event detection unit 102 detects the ground noise level generated in the target area based on the noise detection signal from the microphones MB, M0 to M3, for example. Specifically, the result of digital conversion of the noise detection signal is sampled, and the noise level value (dB) at the observation point is calculated.
  • Aircraft noise can be broadly divided into single noise and quasi-stationary noise.
  • single noise is transient noise that occurs once, such as noise observed in the vicinity of an airfield as the aircraft AP operates.
  • Taxiing noise is often observed as single noise.
  • the quasi-stationary noise is noise that continues for a long time and is steady but accompanied by a considerable level fluctuation.
  • it is regarded as ground noise of the aircraft.
  • this includes engine test operation observed in the vicinity of an airfield accompanying the maintenance of the aircraft AP, operation noise of the APU, noise during standby before takeoff at the end of the runway, and the like.
  • helicopter idling and hovering noises often continue constantly and may be observed as quasi-stationary noises.
  • In the observation unit 100, conditions (threshold levels) for detecting a noise event of single noise or quasi-stationary noise from the noise level value are registered in the noise event detection unit 102.
  • The noise event detection unit 102 can apply the calculated noise level value (dB) to the registered conditions and thereby detect a single-shot flight noise or ground noise event, or a quasi-stationary ground noise event.
  • An example of noise event detection will be described later.
  • The arrival direction vector calculation unit 106 calculates the sound arrival direction vector (elevation angle, azimuth angle) by the three-axis cross-correlation method described above, based on the detection signals from the four microphones M0 to M3. It also records the elevation angle and azimuth angle, expressed as functions of time, as sound arrival direction data.
  • the sound source separation processing unit 110 includes a cross correlation coefficient calculation unit 112, a peak search processing unit 114, and a segmentation processing unit 116.
  • the sound source separation processing unit 110 has a function of separating the time delay obtained for each axis into sound sources on each axis.
  • The segmentation processing unit 116 performs processing that collects, in the time domain, the fluctuations of the plurality of upper-peak time delays extracted by the peak search processing unit 114, gathering those expected to share the same sound source into the same set and thereby forming time-delay sets. Hereinafter, this processing is referred to as "segmentation", and a formed set is referred to as a "segment". Details of the processing by the sound source separation processing unit 110 will be described later with reference to a flowchart.
  • the separated sound source integration unit 120 includes a normalized cross correlation coefficient calculation unit 122 and a segment integration processing unit 124.
  • The separated sound source integration unit 120 has a function of combining the per-axis segments formed by the segmentation processing unit 116 by sound source. Since the segments formed by the segmentation processing unit 116 are separated for each sound source on each axis as described above, the sets must be combined between the axes in order to identify sound sources in the observation space. Therefore, here, the segments between the axes are combined using the fluctuation of the cross-correlation coefficient R(τ).
  • For the segments formed on each axis, the normalized cross-correlation coefficient calculation unit 122 calculates the normalized cross-correlation coefficient at time delay 0 for the time variation of the cross-correlation coefficient R(τ), rather than for the individual time delays τ.
  • the segment integration processing unit 124 integrates segments having a sufficiently large calculated cross-correlation coefficient R (0) as segments of the same sound source. Details of processing performed by the separated sound source integration unit 120 will also be described later.
  • the observation unit 100 includes an identification result output unit 130.
  • The segments integrated by the segment integration processing unit 124 are provided to the identification result output unit 130.
  • the identification result output unit 130 identifies a plurality of types of sound sources from the information of the segments integrated into the same sound source and the arrival direction vector calculated by the arrival direction vector calculation unit 106, and outputs the result.
  • the output result can be displayed on a display device (not shown) or transmitted as data to an external computer of the observation unit 100, for example.
  • FIG. 3 is a diagram illustrating the noise event detection method for single noise, along with the temporal change in the noise level under the flight path.
  • the observation unit 100 calculates the background noise level (BGN) at the observation point by, for example, continuously detecting the noise level in the noise event detection unit 102.
  • Single noise occurs as transient noise when the aircraft AP passes overhead, as described above. The single-noise level therefore rises with time, reaching a level 10 dB above the background noise level (BGN) at time t1; it then reaches its maximum value (Nmax) and finally falls back to the background noise level (BGN).
  • the observation unit 100 starts noise event detection at the noise event detection unit 102 from time t1. That is, when the noise level of the microphone MB rises to a level 10 dB higher than the background noise level (BGN), noise event detection processing is started.
  • In addition, a threshold level (Na) for determining that single noise has occurred is set in advance, and the noise event detection unit 102 identifies single noise only when the observed value exceeds this threshold level (Na). In this example, the observed value actually exceeds the threshold level (Na), so the noise event detection unit 102 can determine that the single noise occurred at time t3, when the noise level reached its maximum value (Nmax).
  • The noise event detection unit 102 determines the time t4, when the noise level has fallen 10 dB below the maximum value (Nmax), as the end time of the single noise.
  • The noise event detection unit 102 then cuts out the period during which the noise level is higher than the value 10 dB below the maximum (Nmax), and determines this as the noise event section.
  • the noise event section is regarded as the time when single noise continues at the observation point.
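The single-noise detection described above can be sketched as follows. The sampled-level representation, function name, and return format are assumptions for illustration; the 10 dB margins and the threshold Na come from the text.

```python
def detect_single_event(levels, bgn, na):
    """Single-noise event detection sketch: detection starts when the level
    rises 10 dB above background (BGN), the maximum must exceed the preset
    threshold Na, and the event section is the span where the level stays
    within 10 dB of the maximum. levels: list of (time, dB) samples.
    Returns (t_peak, (t_start, t_end)) or None."""
    active = [(t, lv) for t, lv in levels if lv >= bgn + 10.0]
    if not active:
        return None                       # never rose 10 dB above background
    t_peak, nmax = max(active, key=lambda p: p[1])
    if nmax <= na:
        return None                       # never reached the single-noise threshold
    event = [t for t, lv in levels if lv > nmax - 10.0]
    return t_peak, (min(event), max(event))
```

The returned section corresponds to the noise event section that is regarded as the duration of the single noise at the observation point.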
  • FIG. 4 is a diagram explaining the noise event detection method for the quasi-stationary noise along with the temporal change of the noise level in the airfield (or in the vicinity).
  • the observation unit 100 calculates the background noise level (BGN) at the observation point by continuously detecting the noise level in the noise event detection unit 102.
  • the observation unit 100 starts detection of the noise event at time t12 in the noise event detection unit 102.
  • the noise event detection process is started when the level rises to 10 dB higher than the background noise level (BGN) (NP1).
  • For quasi-stationary noise, no threshold level is set.
  • The noise event detection unit 102 cuts out the period during which the observed value was 10 dB higher than the background noise level (BGN) and determines this as the noise event section.
  • The noise event section in this case is regarded as the time during which quasi-stationary noise continued at the observation point for a certain long time.
  • the quasi-stationary noise caused by the engine test performed by the aircraft AP in the airport or the APU may have a relatively long duration (quasi-stationary noise section).
  • Since the quasi-stationary noise may include a plurality of sound sources within the noise section, it is difficult to automatically identify what kind of noise event it is once a quasi-stationary noise section has been detected.
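For contrast with the single-noise case, the quasi-stationary detection can be sketched as a duration test with no absolute threshold. The text only says the section must continue "for a certain long time", so `min_duration`, the function name, and the data layout are assumptions.

```python
def detect_quasi_stationary(levels, bgn, min_duration):
    """Quasi-stationary event sketch: no threshold level Na is applied;
    instead, a period that stays 10 dB above background (BGN) and lasts
    at least min_duration is cut out as the noise event section.
    levels: list of (time, dB) samples. Returns (t_start, t_end) or None."""
    above = [t for t, lv in levels if lv >= bgn + 10.0]
    if not above:
        return None
    start, end = min(above), max(above)
    return (start, end) if end - start >= min_duration else None
```

A short excursion above background is thus rejected, while a long engine test run or APU operation yields a (possibly very long) event section.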
  • Therefore, for a plurality of simultaneously generated sound sources, the sound source separation processing unit 110 and the separated sound source integration unit 120 described above identify the arrival direction of each sound using not only the maximum value of the cross-correlation coefficient but also a number of its upper peaks.
  • This identification processing can be performed simultaneously with the occurrence of a noise event, enabling so-called real-time processing.
  • FIG. 5 is a simplified model diagram for explaining the principle of the identification method in the present embodiment.
  • Three microphones M0, M2, and M3 are arranged on two axes (the X and Y axes) in an anechoic chamber AR. That is, to focus on ground noise in the airfield, the Z-axis microphone M1 is omitted here and the observation space is simplified to two dimensions. An observation horizontal plane PL is then virtually defined on the two X-Y axes, and two fixed sound sources SS1 and SS2 are placed on it.
  • sound is output from the two sound sources SS1 and SS2 at mutually shifted times, and the sound source separation processing unit 110 performs segmentation by sound source.
  • FIG. 6 is a flowchart illustrating a procedure example of the sound source separation processing executed by the sound source separation processing unit 110.
  • the sound source separation processing unit 110 can execute the sound source separation process at regular intervals (for example, every 200 ms), for example by timer interrupt.
  • Step S12: Next, the sound source separation processing unit 110 performs the cross-correlation coefficient calculation process for each axis using the cross-correlation coefficient calculation unit 112 described above.
  • the cross-correlation coefficient calculation unit 112 calculates a cross-correlation coefficient R(axis, i, τ) for each axis (here, the X-axis and the Y-axis; the same applies hereinafter).
  • in general, "axis" is one of X, Y, and Z, but the simplified model omits the Z-axis.
  • the subsequent processing is executed for each axis.
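The per-axis cross-correlation of step S12 can be illustrated as follows (a minimal sketch with an assumed energy normalization; the patent does not give the exact formula, and all names are illustrative):

```python
# Minimal sketch of a per-axis cross-correlation: for one axis, the signals
# of its two microphones are correlated over a range of integer sample lags.
def cross_correlation(x, y, max_lag):
    """Return {tau: R(tau)} for tau in [-max_lag, max_lag], with an assumed
    energy normalization so that coefficients lie roughly in [-1, 1]."""
    n = len(x)
    ex = sum(v * v for v in x) ** 0.5
    ey = sum(v * v for v in y) ** 0.5
    out = {}
    for tau in range(-max_lag, max_lag + 1):
        s = sum(x[i] * y[i + tau] for i in range(n) if 0 <= i + tau < n)
        out[tau] = s / (ex * ey) if ex and ey else 0.0
    return out

# y is x delayed by 2 samples, so R(tau) should peak at tau = 2
x = [0, 0, 1, 2, 1, 0, 0, 0]
y = [0, 0, 0, 0, 1, 2, 1, 0]
R = cross_correlation(x, y, max_lag=3)
print(max(R, key=R.get))  # → 2
```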
  • Step S14: Next, the sound source separation processing unit 110 performs the peak search process for each axis using the peak search processing unit 114 described above.
  • FIG. 7 is a schematic diagram showing the ranking of time delays by the upper peaks of the cross-correlation coefficient.
  • when the cross-correlation coefficient R(τ) is calculated on each axis, peaks appear at a plurality of time delays τ1, τ2, and τ3.
  • the coefficient shows a peak (local maximum) tendency at these delays, and the first peak R(τ1), the second peak R(τ2), and the third peak R(τ3) are observed from the top.
  • the time delays τ1, τ2, and τ3 can therefore be ranked in descending order of their cross-correlation peak values.
  • the result of the ranking performed for each axis by the peak search processing unit 114 is the τ_axis,i,j described above.
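The peak search and ranking of step S14 can be sketched as follows (assuming R is available as a map from integer lag to coefficient; the helper name is illustrative):

```python
# Sketch of step S14: find local maxima (upper peaks) of R(tau) and rank the
# corresponding time delays in descending order of the coefficient.
def rank_peak_delays(R, top=3):
    """R: {tau: coefficient} with consecutive integer lags. Return up to `top`
    delays tau1, tau2, ... ranked by peak height (first, second, ... peak)."""
    taus = sorted(R)
    peaks = [t for t in taus[1:-1]
             if R[t] >= R[t - 1] and R[t] >= R[t + 1]]   # local-maximum tendency
    peaks.sort(key=lambda t: R[t], reverse=True)
    return peaks[:top]

R = {-3: 0.1, -2: 0.6, -1: 0.2, 0: 0.1, 1: 0.3, 2: 0.9, 3: 0.2}
print(rank_peak_delays(R))  # → [2, -2]  (first peak at tau=2, second at tau=-2)
```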
  • Step S16: The sound source separation processing unit 110 uses the segmentation processing unit 116 to execute the sound source initial value determination process for each axis and each peak.
  • the segmentation processing unit 116 checks whether each peak τ_axis,i,j is the initial value of a sound source. For example, if for some τ_axis,i,j there is no τ_axis,i−1,j in the immediately preceding interval that can be regarded as the same sound source, the segmentation processing unit 116 regards that τ_axis,i,j as the start point τ_axis,s,k of a new sound source (s is one of the time indexes i, and k is one of the ranks j). Segmentation is then determined from this initial value.
  • Step S18: The sound source separation processing unit 110 causes the segmentation processing unit 116 to perform the same-sound-source segmentation process for each axis.
  • the segmentation processing unit 116 determines whether the current τ_axis,i,j belongs to the same sound source as a specific τ_axis,i−1,j one interval earlier; when they belong to the same sound source, they are placed in one segment. If τ_axis,i−1,j is already part of a segment, the current τ_axis,i,j is added to that same segment, and the segment grows in the time domain. The specific contents of the process are described later using another flowchart.
  • Step S20: The sound source separation processing unit 110 executes the segment end determination process using the segmentation processing unit 116. This process is performed for every segment currently being formed.
  • FIG. 8 is a flowchart showing a procedure example of the same sound source segmentation processing for each axis. Hereinafter, the contents of the process will be described according to a procedure example.
  • Step S100: The segmentation processing unit 116 determines whether there exists exactly one current time delay (a unique value) that can be regarded as coming from the same sound source as one of the initial values τ_axis,s,k or as a specific already-segmented τ_axis,i−1,j. Specifically, the following equation (2) is evaluated: |τ_axis,i,j′ − τ_axis,i−1,j| < α. Here, α is a constant (a predetermined threshold) that depends on the moving speed of the sound source.
  • Step S102: As a result, when there is exactly one j′ that satisfies equation (2) (step S100: Yes), the segmentation processing unit 116 segments using the time delay τ_axis,i,j′. Specifically, τ_axis,i,j′ is added as a member of the same segment as τ_axis,i−1,j.
  • Step S104: On the other hand, if two or more j′ satisfy equation (2), or none does (step S100: No), the segmentation processing unit 116 calculates a virtual time delay τ_virtual from the preceding values of τ by the least-squares method. It is not necessary to use all the preceding τ; several recent data points are sufficient, since the aim here is to estimate the latest trend of τ as τ_virtual.
  • empirically, a second-order least-squares fit is effective.
  • Step S106: The segmentation processing unit 116 checks whether there are two or more current time delays that can be regarded as the same sound source.
  • Step S108: As a result, when two or more j′ satisfy equation (2) (step S106: Yes), the segmentation processing unit 116 determines whether there is a time delay that can be segmented. Specifically, the following equation (3) is evaluated: |τ_axis,i,j′ − τ_virtual| < β. Here, β is a preset constant (a specific threshold) whose value can be set experimentally or empirically.
  • Step S110: When there is an optimum τ_axis,i,j′ that satisfies equation (3) with the minimum difference (step S108: Yes), the segmentation processing unit 116 segments using that optimum τ_axis,i,j′. Specifically, the optimum τ_axis,i,j′ is added as a member of the same segment as τ_axis,i−1,j.
  • Step S112: On the other hand, if no j′ satisfies equation (3) (step S108: No), or if no j′ satisfies equation (2) (step S106: No), the segmentation processing unit 116 performs segmentation using the τ_virtual calculated in step S104. Specifically, τ_virtual is added as a member of the same segment as τ_axis,i−1,j.
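Steps S100 to S112 above can be sketched as the following decision procedure (α, β, and the second-order least-squares idea follow the text; here the fit uses only the last three points, where a second-order fit reduces to exact quadratic extrapolation, and all names are assumptions):

```python
# Illustrative sketch of the segmentation decision in steps S100-S112.
# `prev` is the segment's time-delay history; `candidates` are this
# interval's ranked peak delays on the same axis.
def fit_virtual(prev):
    """tau_virtual: extrapolate the next delay from the recent trend; with
    three points the 2nd-order least-squares fit is exact extrapolation."""
    if len(prev) < 3:
        return prev[-1]
    y0, y1, y2 = prev[-3:]
    return y0 - 3 * y1 + 3 * y2            # quadratic through last 3 points

def extend_segment(prev, candidates, alpha, beta):
    """Return the delay appended to the segment for this interval."""
    last = prev[-1]
    close = [t for t in candidates if abs(t - last) < alpha]   # Eq. (2)
    if len(close) == 1:                    # S100 Yes -> S102: unique value
        return close[0]
    t_virtual = fit_virtual(prev)          # S104: virtual time delay
    if len(close) >= 2:                    # S106 Yes
        best = min(close, key=lambda t: abs(t - t_virtual))
        if abs(best - t_virtual) < beta:   # Eq. (3), S108 Yes -> S110
            return best
    return t_virtual                       # S112: fall back to tau_virtual

seg = [1.0, 1.1, 1.2, 1.3]
print(extend_segment(seg, candidates=[1.45, -0.8], alpha=0.3, beta=0.2))  # → 1.45
```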
  • FIG. 9 is a diagram illustrating a variation in time delay on the X axis in the simplified model.
  • the horizontal axis in FIG. 9 indicates the number of time indexes, and the vertical axis indicates the time delay on the X axis.
  • the white circles shown in FIG. 9 indicate the time delays ranked at the first peak of the cross-correlation coefficient, and the hatched circles indicate those ranked at the second peak.
  • in this time domain, the time delay on the X-axis appears at a plurality of ranks, at the first and second peaks of the cross-correlation coefficient.
  • the time delay (> 0) ranked at the first peak corresponds to the first sound source SS1, and the time delay (< 0) ranked at the second peak corresponds to the second sound source SS2. Therefore, in this time domain, the time delay ranked at the first peak of the cross-correlation coefficient continues to be added to the segment of the same sound source SS1.
  • the time delay ranked in the second peak of the cross-correlation coefficient when the time index number is Ti1 is regarded as an initial value for the second sound source SS2.
  • the time delay ranked at the second peak of the cross-correlation coefficient is added to the segment of the sound source SS2.
  • in the subsequent time domain, the time delay on the X-axis appears only at the first peak of the cross-correlation coefficient.
  • here the time delay ranked at the first peak corresponds to the second sound source SS2, so from this point the time delay ranked at the first peak of the cross-correlation coefficient is added to the segment of the sound source SS2.
  • FIG. 10 is a diagram showing an example in which the variation in time delay on the X axis shown in FIG. 9 is separated into sound source segments. Among these, (A) in FIG. 10 shows the segment of the first sound source SS1, and (B) in FIG. 10 shows the segment of the second sound source SS2.
  • the time delay variation on each axis can be separated into segments for each sound source.
  • the X axis is shown here, the Y axis can be similarly divided into segments for each sound source.
  • FIG. 11 is a diagram illustrating an example when the simplified model is applied to a moving sound source.
  • two moving sound sources SS1 and SS2 are arranged in the observation horizontal plane PL. Further, the orientations of the axes (X-axis and Y-axis) of the microphone unit 10 differ from those in the simplified model.
  • Example 2: For example, of the two sound sources SS1 and SS2 placed in the anechoic room AR, the first sound source SS1 was moved from the vicinity of one wall toward the other wall and then back to the vicinity of the first wall. Conversely, the second sound source SS2 was moved from the vicinity of the other wall toward the first wall and then back to the vicinity of the other wall. The two sound sources SS1 and SS2 were moved in parallel at the same time.
  • FIG. 12 is a diagram illustrating a variation in time delay in the X-axis and the Y-axis.
  • (A) in FIG. 12 shows the fluctuation of the time delay on the X axis
  • (B) in FIG. 12 shows the fluctuation of the time delay on the Y axis.
  • the horizontal axis represents the number of time indexes
  • the vertical axis represents the ranked time delay τ.
  • the time delays ranked from the first peak down to the third peak are displayed.
  • white circles indicate the time delays ranked at the first peak of the cross-correlation coefficient, hatched circles indicate those ranked at the second peak, and black circles indicate those ranked at the third peak.
  • FIG. 13 is a diagram illustrating an example of segment separation along the X axis.
  • (A) in FIG. 13 represents a segment corresponding to the sound source SS1
  • (B) in FIG. 13 represents a segment corresponding to the sound source SS2.
  • the horizontal axis in the figure indicates the number of time indexes
  • the vertical axis indicates the ranked time delay.
  • the black rhombus marks shown in (A) and (B) indicate τ_virtual (the virtual time delay) used in the segmentation process (the same applies hereinafter).
  • FIG. 14 is a diagram showing a result of fluctuation in time delay in the X axis and the Y axis for the measured data.
  • 14A shows the time delay variation on the X axis
  • FIG. 14B shows the time delay variation on the Y axis.
  • shown are the fluctuation results for a situation in which a landing sound is observed in the middle of a taxiing sound.
  • the white circles in FIG. 14 indicate time delays ranked in the first peak of the cross-correlation coefficient
  • the hatched circles indicate time delays ranked in the second peak of the cross-correlation coefficient.
  • the black circles indicate time delays ranked in the third peak of the cross correlation coefficient.
  • FIG. 15 is a diagram illustrating a result of separating the time delay variation on the X-axis and the Y-axis into segments for the measured data.
  • (A) and (B) in FIG. 15 show examples of segment separation of the time delays on the X-axis
  • (C) and (D) in FIG. 15 show examples of segment separation of the time delays on the Y-axis.
  • the white circles in FIG. 15 indicate the time delays ranked at the first peak of the cross-correlation coefficient, and the hatched circles indicate those ranked at the second peak.
  • the black rhombus marks indicate τ_virtual (the virtual time delay) used in the segmentation process.
  • the time delay variation is separated into, for example, four segments X1, X2, X3, and X4 on the X-axis and three segments Y1, Y2, and Y3 on the Y-axis.
  • the number of segments after separation basically represents the number of sound sources.
  • however, when a noise with a large sound pressure level becomes dominant partway through and breaks the continuity of the preceding time delays, as in the separation example of FIG. 15, it is entirely possible that the number of segments does not match the number of sound sources.
  • the segments separated on each axis are therefore further integrated by sound source between different axes.
  • Segment integration processing: the segment integration processing is executed by the separated sound source integration unit 120 described above.
  • the segments separated on each axis as described above are combined between different axes.
  • segments between different axes are integrated using fluctuations in the cross-correlation coefficient R(τ).
  • FIG. 16 is a diagram showing the variation pattern of the cross-correlation coefficient R(τ) of each segment.
  • (A), (B), and (C) in FIG. 16 show the variation patterns of the cross-correlation coefficient R(τ) of the X-axis segments
  • (D), (E), and (F) in FIG. 16 show the variation patterns of the cross-correlation coefficient R(τ) of the Y-axis segments.
  • the white circles in the figure indicate the first peak value of the cross-correlation coefficient R(τ), the hatched circles the second peak value, and the black circles the third peak value.
  • segments considered to originate from the same sound source have very similar variation patterns of R(τ). This is thought to be because the sound generated by an actual source fluctuates, for example with changes in engine output, or is affected by the weather before reaching the observation point, and these variations appear in common in the signals on each axis.
  • FIG. 17 is a diagram illustrating a calculation example of a normalized cross-correlation coefficient between overlapping segments in the time domain.
  • segments on the X axis are arranged in the vertical direction and segments on the Y axis are arranged in the horizontal direction, and the normalized cross-correlation coefficients between the segments are shown in a 4 ⁇ 3 matrix.
  • Example of segment integration: in FIG. 17 it can be seen that selecting combinations whose normalized cross-correlation coefficient exceeds 0.9 yields four pairs: X1-Y1, X2-Y2, X3-Y3, and X4-Y3. Reflecting this result, an example of segment integration between axes is indicated in FIG. 16 by dashed arrows.
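The inter-axis integration described above (pairing segments whose normalized cross-correlation exceeds 0.9 over their overlapping time region) could be sketched as follows; the data layout and helper names are assumptions:

```python
# Sketch of inter-axis segment integration: for the time indexes where an X
# segment and a Y segment overlap, correlate the fluctuation of their
# cross-correlation peak values at zero lag, and pair segments whose
# normalized coefficient exceeds 0.9 (threshold from the text).
def normalized_corr(a, b):
    """Zero-lag normalized cross-correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def integrate_segments(x_segs, y_segs, thresh=0.9):
    """x_segs/y_segs: {name: {time_index: peak value}}. Return matched pairs."""
    pairs = []
    for xn, xs in x_segs.items():
        for yn, ys in y_segs.items():
            common = sorted(set(xs) & set(ys))      # overlapping time region
            if len(common) < 3:
                continue
            r = normalized_corr([xs[t] for t in common], [ys[t] for t in common])
            if r > thresh:
                pairs.append((xn, yn))
    return pairs

x_segs = {"X1": {0: 0.9, 1: 0.7, 2: 0.8, 3: 0.6}}
y_segs = {"Y1": {0: 0.85, 1: 0.65, 2: 0.75, 3: 0.55},
          "Y2": {0: 0.2, 1: 0.9, 2: 0.1, 3: 0.8}}
print(integrate_segments(x_segs, y_segs))  # → [('X1', 'Y1')]
```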
  • the peak value of the taxiing sound is not detected between time indexes Tia and Tib because the noise level of the landing sound was sufficiently higher than that of the taxiing sound.
  • as described above, the time delays are divided into sound-source-specific segments by extracting, in the time domain, the variation of the time delays at the peaks of the cross-correlation coefficient on each axis, and the segments are integrated between axes using the variation of the cross-correlation coefficient of each segment. This makes it possible to separate the arrival directions of the sounds of a plurality of noise sources occurring simultaneously in the observation space.
  • with the identification method of this embodiment, the number of aircraft emitting significant noise levels, which could not be identified by a method using only the first peak of the cross-correlation coefficient and the fluctuation of the sound pressure level, can be recognized automatically. Thereby, for example, in the noise level evaluation of a single noise event, information for inferring the influence of background noise can be obtained. At the same time, since the arrival direction of each sound is obtained, each sound source can be reliably identified by using structural information about the airport, such as runways, taxiways, and surrounding roads.
  • the observation on the Z-axis is omitted for simplification, but it is naturally possible to use the three-axis correlation of the X-axis, the Y-axis, and the Z-axis when implementing the present invention.
  • the peak tendency of the cross-correlation coefficient due mainly to ground reflection is prominent, and utilizing this tendency is very useful for making the identification of aircraft ground noise more accurate.
  • in the embodiment, the airfield is the target area, but the noise observation apparatus and the noise observation method of the present invention can also be applied to observation spaces (target areas) other than airfields.
  • the conditions (α, β, 0.9) relating to segment formation and segment integration mentioned in the embodiment are examples; the settings can be changed as appropriate to match the characteristics of the observation target area and the noise sources.
  • the cross-correlation coefficient is calculated at each timer interrupt, but the interval between calculations need not be constant.
  • for example, after a calculation at a 200 ms interval, the next interval may be shorter (e.g. 100 ms) or longer (e.g. 300 ms).
  • a taxiing sound and a landing sound are exemplified as the plurality of noises, but other combinations of noises are possible. Further, the disclosed invention can be applied even when three or more noises occur simultaneously.


Abstract

A noise observation device comprises: a cross-correlation coefficient calculation unit (112) which, using four microphones (M0-M3) positioned pairwise on the X-axis, the Y-axis, and the Z-axis, computes for each axis the cross-correlation coefficients of the sound reaching each microphone (M0-M3); a peak search processing unit (114) which extracts, in descending order of the cross-correlation coefficient, a plurality of time lags at which the coefficients show a peak tendency; a segmentation processing unit (116) which collects the fluctuations of the plurality of time lags in the time domain and forms a set of contiguous time lags for each axis; a normalized cross-correlation coefficient calculation unit (122) which, for the sets of time lags of each axis, computes normalized cross-correlations between different axes; and a segment integration processing unit (124) which combines sets of time lags having the same sound source based on the normalized cross-correlation coefficients.

Description

Noise observation apparatus and noise observation method
The present invention relates to a noise observation apparatus and a noise observation method suitable for use in an environment where a plurality of sound sources exist in an observation target area.
Conventionally, prior art effective for the automatic identification of aircraft flight noise observed under a flight route is known (see, for example, Japanese Patent Laid-Open No. 7-43203: Patent Document 1). In this prior art, the arrival direction vector (elevation angle and azimuth angle) of a moving sound source is calculated from the cross-correlation coefficients of the time delays of the sound reaching four microphones arranged at intervals on the X-, Y-, and Z-axes, and the movement trajectory of the moving sound source is automatically identified from the resulting set of vectors.
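The prior-art arrival-direction computation can be illustrated with a minimal model (orthogonal axes, a common microphone spacing d, speed of sound c; the spacing, constants, and names are assumptions for illustration only):

```python
import math

# Direction cosines from inter-microphone time delays: for a pair of
# microphones a distance d apart on one axis, a plane wave arriving with
# time delay tau has direction cosine c * tau / d along that axis.
def arrival_direction(tau_x, tau_y, tau_z, d=1.0, c=340.0):
    """Return (azimuth, elevation) in degrees from per-axis time delays [s]."""
    ux, uy, uz = c * tau_x / d, c * tau_y / d, c * tau_z / d
    azimuth = math.degrees(math.atan2(uy, ux))
    elevation = math.degrees(math.asin(max(-1.0, min(1.0, uz))))
    return azimuth, elevation

# A source on the X-axis at ground level: full delay on X, none on Y or Z
az, el = arrival_direction(tau_x=1.0 / 340.0, tau_y=0.0, tau_z=0.0)
print(round(az, 1), round(el, 1))  # → 0.0 0.0
```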
According to the prior-art identification method, even at an observation point affected by the noise of aircraft taking off from and landing at an airport, those influences can be grasped accurately while being distinguished from one another, and the moving course of an aircraft can be identified with high accuracy.
Until now, it was sufficient to observe only flight noise, such as noise from the sky, takeoff roll noise generated on the runway, and reverse-thrust noise at landing. Now, and in the future, however, the ground noise generated by aircraft operation and airframe maintenance within the airfield must also be observed around the airfield. Ground noise includes, for example, noise generated by a parked aircraft operating its auxiliary power unit (APU), noise generated for propulsion by an aircraft moving (taxiing) between the terminal and the runway, and noise generated when an aircraft performs an engine test run in the engine test area of the airfield.
Furthermore, the noise observed around an airfield is complex: various noises such as automobiles and sirens arrive at the observation point mixed together, so it is difficult to pinpoint only the ground noise generated by aircraft and distinguish it from other noise from the ground.
Aircraft noise also includes transient single events observed at the airfield as aircraft operate, and quasi-stationary noise, such as engine test runs and APU operation observed around the airfield during aircraft maintenance, which continues for a long time and is steady but accompanied by considerable level fluctuation, making noise identification even more difficult.
In the sound source identification method using the three-axis cross-correlation method of the above prior art (Patent Document 1), when noise is generated simultaneously from a plurality of sound sources, the time delay at which the cross-correlation coefficient is maximum on each axis does not necessarily indicate the arrival direction of sound from the same source. Moreover, when a plurality of noises overlap, only the sound with the maximum cross-correlation is adopted, so it is difficult to automatically identify the other noises.
Against this background, there is a need for a technology that can identify noise using the cross-correlation method even when a plurality of noises occur simultaneously in a noise environment with multiple sound sources, such as an airfield.
Japanese Unexamined Patent Publication No. 7-43203
The invention disclosed herein employs the following solution.
That is, the present solution calculates the cross-correlation coefficient of the time delay for each axis at regular intervals and, for the time delays at which the cross-correlation coefficient shows a peak (local maximum) tendency, extracts the variation of a plurality of time delays in the time domain in descending order of the cross-correlation coefficient, thereby forming a set of continuous time delays for each axis. When a plurality of sound sources exist, the sets of time delays formed for each axis are separated by sound source. Then, for the sets of time delays formed for each axis, examining the cross-correlation between different axes makes it possible to combine the sets of time delays that share the same sound source. As a result, a plurality of simultaneously occurring noises can be automatically separated and identified.
For example, when a plurality of noises (such as landing and taxiing) occur simultaneously in an airfield, the noise with the highest sound pressure level tends to dominate the maximum peak of the cross-correlation coefficient on each axis; the present solution, however, focuses on the fact that, when a plurality of peaks are examined in order from the top, one of them shows the influence of noise other than the dominant noise (another sound source).
That is, when a sound source moves within the observation space, its time delay may appear at the maximum peak of the cross-correlation coefficient in one time period but only at a lower-ranked peak in another. In the latter case, the time delay at the maximum peak in the same period indicates a different sound source, so simply tracing the maximum peak in the time domain does not identify the sound source.
The present solution therefore extracts, in the time domain, the variation of a plurality of time delays ranked high among the peaks of the cross-correlation coefficient calculated at regular intervals (for example, the first through third peaks), and forms a set of continuous time delays for each axis. Since a set of continuous time delays represents the variation of time delays (noise data indicating the arrival direction) expected to belong to the same sound source in the time domain, when a plurality of sound sources exist in the observation space, sets of time delays separated by sound source are formed on each axis. The noise data of a plurality of sound sources can thereby be separated by source on each axis.
Furthermore, the sets separated on each axis are combined between different axes. That is, the present invention focuses on the fact that, when the cross-correlation coefficient rather than the individual time delays is considered for a given set, sets regarded as the same sound source show very similar fluctuations of the cross-correlation coefficient. This is presumably because the sound emitted from an actual source is affected, for example, by output fluctuations and weather changes, and these changes appear in common in the sound input to the microphones. Therefore, if sets exist simultaneously on different axes in a certain time domain, the sets can be combined using the normalized cross-correlation coefficient, with the time delay set to 0, of the time variations of each set's cross-correlation coefficient. Since the sets combined between axes represent the arrival direction of the same sound source, the plurality of sound sources existing in the observation space can be automatically identified from them.
The noise observation apparatus of the present solution comprises calculation means, aggregation means, and integration means, and the noise observation method of the present solution can be executed using these components.
That is, the calculation means executes a step of calculating, at regular intervals, the cross-correlation coefficient of the time delay representing the difference in sound arrival time for each axis, using two microphones arranged at an interval on each of a plurality of axes defined in an observation space where a plurality of sound sources exist. The aggregation means executes a step of extracting, for the plurality of time delays at which the cross-correlation coefficient calculated at each interval shows a peak tendency, the variation of the time delays in the time domain in descending order of the cross-correlation coefficient, and forming a set of continuous time delays for each axis. The integration means executes a step of combining, based on the cross-correlation between different axes, the sets of time delays that share the same sound source among the sets formed for each axis.
This makes it possible to automatically identify sound sources using the cross-correlation method even when a plurality of sound sources exist in the observation space and a plurality of noises occur simultaneously.
In the noise observation apparatus and noise observation method of the present solution, the following aspects can be provided in forming the sets of continuous time delays.
Prior to forming a set of continuous time delays, the aggregation means can execute a step of confirming whether the plurality of time delays showing a peak tendency at each interval constitute initial values for individual sound sources.
With this aspect, after confirming the initial values at the onset of noise from the upper peaks of the cross-correlation coefficient, sets of time delays continuing over the subsequent time domain can be formed appropriately.
The steps performed by the aggregation means can include the following steps.
(1) A step of determining whether, among the plurality of time delays showing a peak tendency at the current interval, there exists a unique time delay whose difference from a specific time delay that showed a peak tendency one interval earlier is less than a predetermined threshold.
(2) A step of adding the unique value to the same set as the specific time delay when it is determined in (1) above that the unique value exists.
(3) A step of calculating a virtual time delay by the least-squares method using at least the specific time delays (the most recent several) when it is determined in (1) above that no unique value exists.
In this aspect, for the upper-peak time delays calculated at each regular interval, it is first confirmed whether each is a time delay caused by the same sound source as the previous time delay, and only the time delays expected to share the same source (noise data forming a unique value) are added to the continuous set. This improves the accuracy of the identification result and increases the reliability of the noise observation result. If none of the current upper-peak time delays can be attributed to the same sound source, a virtual time delay is obtained by the least-squares method using the most recent several time delays including the previous value, and can be used in the subsequent steps.
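The least-squares prediction of step (3) can be sketched as follows. This is a minimal illustration assuming NumPy; the function name and the window length `n_fit` are assumptions, since the text only specifies "the most recent several" delays.

```python
import numpy as np

def virtual_time_delay(recent_delays, n_fit=5):
    """Predict the next time delay by a least-squares line fit
    over the most recent delays in a set (step (3)).
    n_fit (the "most recent several") is an assumed parameter."""
    y = np.asarray(recent_delays[-n_fit:], dtype=float)
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)  # degree-1 least squares
    return float(slope * len(y) + intercept)  # extrapolate one step

# A delay drifting linearly: the prediction continues the trend.
print(round(virtual_time_delay([1.0, 1.2, 1.4, 1.6, 1.8]), 6))  # 2.0
```

Because the fit uses only a handful of recent points, the prediction tracks slow drift of the delay without being pulled by older history.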
That is, the aggregation means further executes the following steps:
(4) A step of determining whether no unique value exists because there are a plurality of time delays whose difference from the specific time delay is less than the predetermined threshold, or because none of the plural time delays has a difference from the specific time delay less than the predetermined threshold.
(5) A step of determining, when it is determined in (4) that no unique value exists because there are a plurality of time delays whose difference from the specific time delay is less than the predetermined threshold, whether among them there is a specific value whose difference from the virtual time delay is less than a specific threshold.
(6) A step of adding the specific value to the same set as the specific time delay when it is determined in (5) that the specific value exists.
(7) A step of adding the virtual time delay to the same set as the specific time delay when it is determined in (5) that no specific value exists, or when it is determined in (4) that no unique value exists because none of the plural time delays has a difference from the specific time delay less than the predetermined threshold.
In this case, the virtual time delay calculated in (3) above represents the most recent variation of the time delay in the time domain. Therefore, when a plurality of time delays exist and a unique value cannot be singled out, the one closest to the virtual time delay can be predicted to belong to the same sound source and added to the set (6). On the other hand, if there is neither a value close to the virtual time delay nor a unique value, the virtual time delay itself is added to the set so that set formation can continue (7).
Further, the aggregation means can additionally execute the following step:
(8) An end determination step of, when the virtual time delay has been added to the set consecutively a predetermined number of times, deleting those virtual time delays and terminating the formation of the set.
In this case, when the last time delay in the set is a virtual time delay and the immediately preceding predetermined number of entries are all virtual time delays, the consecutive virtual time delays are deleted and no further entries are added to that set. This allows the formation of a continuous time-delay set to be terminated (end processing) in accordance with the end of a noise event or the like.
Furthermore, when, as a result of deleting the predetermined number of virtual time delays and terminating the formation of the set in the end determination step of (8) above, the number of time delays contained in the set is equal to or less than a specified number, the aggregation means can invalidate that set.
In this aspect, invalidating sets whose absolute number of entries is insufficient avoids the intrusion of noise into subsequent identification. This improves the identification accuracy and further increases the reliability of the noise observation results.
According to the noise observation device and noise observation method disclosed above, a plurality of sound sources occurring simultaneously in the observation space can be automatically separated and identified.
In addition, because the data are processed at regular intervals rather than only after a huge amount of calculation results has been accumulated, the computational load is kept light, and real-time processing on a computer can easily be realized.
FIG. 1 is a schematic diagram showing one embodiment in which the noise observation device is installed in an airfield;
FIG. 2 is a diagram schematically showing the configuration of the noise observation device and the noise identification technique based on the cross-correlation method;
FIG. 3 is a diagram explaining the noise event detection technique for single-event noise, together with the temporal change of the noise level under a flight path;
FIG. 4 is a diagram explaining the noise event detection technique for quasi-stationary noise, together with the temporal change of the noise level in (or near) an airfield;
FIG. 5 is a simplified model diagram for explaining the principle of the identification technique;
FIG. 6 is a flowchart showing an example procedure of the sound source separation processing executed by the sound source separation processing unit;
FIG. 7 is a schematic diagram showing the ranking of time delays by the upper peaks of the cross-correlation coefficient;
FIG. 8 is a flowchart showing an example procedure of the same-sound-source segmentation processing for each axis;
FIG. 9 is a diagram showing the variation of the time delay on the X axis in the simplified model;
FIG. 10 is a diagram showing an example in which the time-delay variation on the X axis shown in FIG. 9 is separated into segments for each sound source;
FIG. 11 is a diagram showing an example in which the simplified model is applied to a moving sound source;
FIG. 12 is a diagram showing the time-delay variations on the X axis and the Y axis;
FIG. 13 is a diagram showing an example of segment separation on the X axis;
FIG. 14 is a diagram showing the time-delay variation results on the X axis and the Y axis for measured data;
FIG. 15 is a diagram showing the result of separating the time-delay variations on the X axis and the Y axis into segments for the measured data;
FIG. 16 is a diagram showing the variation pattern of the cross-correlation coefficient of each segment; and
FIG. 17 is a diagram showing a calculation example of the normalized cross-correlation coefficient between segments that overlap in the time domain.
Hereinafter, embodiments will be described with reference to the drawings.
FIG. 1 is a schematic diagram showing one embodiment in which the noise observation device is installed in an airfield. In a target area such as an airfield (or its surroundings), the noise environment is a mixture of noise arriving from the sky as aircraft fly past, take-off and landing noise generated on the runway, and reverse-thrust noise at landing (hereinafter referred to as "flight noise"), together with noise accompanying aircraft operation and maintenance within the airfield, such as the noise of taxiing, engine test runs, and APU operation (hereinafter referred to as "ground noise").
As shown in FIG. 1, the noise observation device can be used with a microphone unit 10 installed at an observation point in the airfield. An observation unit (not shown) is connected to the microphone unit 10.
Within the airfield serving as the target area, there are areas that act as noise sources in various places: for example, a parking area 20, a taxiway 30, landing aircraft 40 and departing aircraft 50 running on or flying over a runway 25, and an engine test run area 60. Various noises are generated from these places within the airfield and arrive at the observation point from their respective directions. By using the microphone unit 10, the noise observation device of the present embodiment is well suited to automatically identifying the multiple noises arriving at the observation point. Each noise source area is described below.
[APU]
From the parking area 20, noise is generated by the operation of auxiliary power units (APU: Auxiliary Power Unit). An auxiliary power unit is a small engine used as a power source for supplying compressed air, hydraulic pressure, electric power, and the like to a parked aircraft AP.
[Taxiing]
The taxiway 30 is a path along which an aircraft AP moves between the parking area and the runway 25. On a taxiing aircraft AP, the engines run to produce the thrust needed for ground movement, thereby generating noise.
[Landing sound]
A landing aircraft 40 generates noise throughout its operation, from when the aircraft AP descends on approach toward the runway 25 and touches down, through the reverse thrust often applied on the runway 25 for deceleration, until it finally leaves the runway 25.
[Takeoff sound]
A departing aircraft 50 generates noise throughout its operation, from when the aircraft AP begins its takeoff roll at the start of the runway 25 until it lifts off partway along the runway 25, climbs, and flies away.
[Engine test run]
In the engine test run area 60, noise is generated during test runs performed to check the operation of aircraft AP engines (main engines).
Although not shown in FIG. 1, the following noises also occur within the airfield.
[Touch and go]
When an aircraft AP performs, for takeoff and landing training or the like, a flight pattern in which it approaches the runway 25, touches down, decelerates, then raises engine power again and takes off (touch and go), noise accompanies this series of maneuvers.
[Hovering]
When a helicopter lifts off and adopts a nearly stationary flight attitude, the accompanying noise is generated.
[Urban area]
In addition, when there is, for example, an urban area 70 in the vicinity of the airfield, other ground noise arises from the various social activities in the urban area 70 (public transportation, road traffic, daily life, and so on).
FIG. 2 is a diagram schematically showing the configuration of the noise observation device and the noise identification technique based on the cross-correlation method. The noise observation device performs arithmetic processing using the microphone unit 10 described above and has a function of identifying noise by the cross-correlation method.
[Microphone unit]
The microphone unit 10 comprises, for example, four microphones M0, M1, M2, and M3. The individual microphones M0 to M3 are arranged on the X axis, Y axis, and Z axis virtually defined in the observation space, and at the origin of the three-axis coordinate system. Specifically, the microphone M0 is placed at the origin, and another microphone M1 is placed on the Z axis extending vertically from the origin. A further microphone M2 is installed on the Y axis, which extends horizontally from the origin at 90° to the X axis, and another microphone M3 is installed on the X axis extending horizontally from the origin. The microphone unit 10 mechanically fixes the individual microphones M0 to M3 and maintains their relative positional relationship in the installed state. As described above, two microphones are thus arranged on each of the X, Y, and Z axes in the observation space.
In addition, the microphone unit 10 includes a microphone MB separate from the four microphones M0 to M3 described above. The four microphones M0 to M3 are used for noise identification by the cross-correlation method, whereas the microphone MB is used for measuring ambient noise. That is, the microphone MB is used, for example, on its own to measure the noise level at the observation point.
[Observation unit]
The noise observation device includes an observation unit 100, to which the microphone unit 10 is connected. The observation unit 100 is composed of computer equipment including, for example, a central processing unit (CPU), semiconductor memory (ROM, RAM), a hard disk drive (HDD), an input/output interface, and a liquid crystal display (none of which are shown).
[Noise identification technique based on the cross-correlation method]
Next, the noise identification technique based on the cross-correlation method using the four microphones M0 to M3 will be described. Since this technique is already well known, only an outline is given here.
For example, consider the identification of sound from the sky. When two microphones M1 and M0 are installed vertically on the vertical line (Z axis) in the observation space, let d [m] be the spacing between them. When the sound of a flying aircraft AP arrives at an elevation angle θ, the time difference τ [s] with which the sound reaches the two microphones M1 and M0 is expressed by the following equation (1), where c [m/s] is the speed of sound. The time difference τ is the time delay at which the cross-correlation of the sounds reaching the microphones M1 and M0 takes its maximum value.
 τ = (d/c)·sin(θ) ... (1)
From equation (1), the elevation angle θ of the sound source as seen from the observation point can be obtained.
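Equation (1) can be inverted to recover the elevation angle from a measured time delay. A minimal sketch follows; the spacing d = 1.5 m and sound speed c = 340 m/s are assumed example values, not figures from the text.

```python
import math

def elevation_from_delay(tau, d=1.5, c=340.0):
    """Invert equation (1), tau = (d/c)*sin(theta), to obtain the
    elevation angle theta in degrees. d (microphone spacing, m)
    and c (speed of sound, m/s) are assumed example values."""
    return math.degrees(math.asin(c * tau / d))

# A sound arriving at 30 degrees elevation produces a delay of
# d/(2c) seconds between the two Z-axis microphones.
print(round(elevation_from_delay(1.5 / (2 * 340.0)), 6))  # 30.0
```

In practice τ would come from the lag of the cross-correlation peak between the M1 and M0 signals, converted from samples to seconds.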
When the sound can be considered to arrive from sufficiently high above (θ > 0), this elevation angle θ can be used to identify flight noise (see Patent Document 1 cited in the prior art). That is, when the noise level detected by, for example, the microphone MB exceeds a certain threshold (when a noise event occurs), if the moment-by-moment elevation angle change θ(t) is recorded simultaneously as sound arrival direction data, noise whose arrival direction data exceed a pre-specified elevation angle can be judged to be flight noise caused by an aircraft AP.
[Calculation of the arrival direction vector]
If the sound arrival direction is expanded not only in the vertical direction but over the three axis pairs X-Y, Y-Z, and Z-X, the azimuth angle δ can be calculated in addition to the elevation angle θ. By obtaining the elevation angle θ and azimuth angle δ, a noise arrival direction vector (unit vector) can be calculated in the three-axis observation space (vector space) referenced to the observation point. Furthermore, from the cross product of successive arrival direction vectors, the direction of movement of the sound source (aircraft AP), that is, from which direction to which direction it traveled relative to the observation point, can be known more reliably.
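The arrival direction vector and the use of the cross product can be sketched as follows. The axis convention (X and Y horizontal, Z vertical, azimuth measured from the X axis) is an assumption made for illustration.

```python
import math

def arrival_vector(theta_deg, delta_deg):
    """Unit vector of the sound arrival direction from the elevation
    angle theta and azimuth angle delta. X/Y horizontal and Z
    vertical match the microphone layout; measuring delta from
    the X axis is an assumption."""
    t, d = math.radians(theta_deg), math.radians(delta_deg)
    return (math.cos(t) * math.cos(d),
            math.cos(t) * math.sin(d),
            math.sin(t))

def cross(a, b):
    # Cross product of two successive arrival direction vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# As the source moves, the sign of the Z component of the cross
# product shows which way it swept around the observation point.
v1 = arrival_vector(30.0, 10.0)
v2 = arrival_vector(30.0, 20.0)
print(cross(v1, v2)[2] > 0)  # True: azimuth increased
```

Tracking the cross product over time therefore disambiguates the direction of travel even when individual vectors are noisy.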
[Application of the flight sound identification technique to ground noise discrimination]
As described above, if the three-axis arrival direction vector of the sound can be calculated, then when the elevation angle θ points to the ground, the arrival direction of ground noise can be determined from the azimuth angle δ. However, with this discrimination technique, when a plurality of sound sources exist in the observation space and a plurality of noises are thereby generated simultaneously, the time delay τ at which the cross-correlation coefficient for each axis is maximized does not necessarily indicate the same sound source on every axis. Therefore, when a plurality of noises occur simultaneously, the sound arrival direction may not be calculated accurately.
Various studies on the identification of multiple sound sources have been reported, but they suffer from problems such as being unable to separate the sources when their movement is complicated, or being unable to process in real time because the processing is performed only after a huge amount of data has been accumulated.
For this reason, the present embodiment adopts an algorithm that is as simple as possible, with real-time processing inside the noise observation device in mind. The noise observation method used in the present embodiment is described in more detail below.
[Configuration as a noise observation device]
In addition to a noise event detection unit 102 and an arrival direction vector calculation unit 106 as functional elements, the observation unit 100 includes a sound source separation processing unit 110 and a separated sound source integration unit 120, each containing a plurality of functional elements.
Of these, the noise event detection unit 102 detects the level of ground noise occurring in the target area based on the noise detection signals from, for example, the microphones MB and M0 to M3. Specifically, it samples the digitally converted noise detection signal and calculates the noise level value (dB) at the observation point.
[Single-event noise / quasi-stationary noise]
Aircraft noise can be broadly divided into single-event noise and quasi-stationary noise. Single-event noise is transient noise that occurs as an isolated event, such as the noise observed around an airfield as an aircraft AP operates. In the case of ground noise, taxiing noise is often observed as single-event noise.
Quasi-stationary noise is noise that continues for a long time and is steady but accompanied by considerable level fluctuation; when it originates from an aircraft AP, it is regarded as aircraft ground noise. Specifically, this includes the engine test runs observed around an airfield during aircraft AP maintenance, APU operating noise, and the noise of aircraft waiting at the runway end before takeoff. Helicopter idling and hovering noise also often continue steadily and may be observed as quasi-stationary noise.
Although not specifically illustrated, conditions (threshold levels) for detecting a noise event of single-event noise or quasi-stationary noise from the noise level value are registered in the observation unit 100 for the noise event detection unit 102. The noise event detection unit 102 applies the calculated noise level value (dB) to the registered conditions and can thereby detect single-event flight noise or ground noise events, or quasi-stationary ground noise events. Examples of noise event detection are described further below.
The arrival direction vector calculation unit 106 calculates the sound arrival direction vector (elevation angle θ, azimuth angle δ) by the three-axis cross-correlation method described above, based on the detection signals from the four microphones M0 to M3. The arrival direction vector calculation unit 106 also records the elevation angle θ(t) and azimuth angle δ(t), expressed as functions of time, as sound arrival direction data.
[Sound source separation processing unit]
The sound source separation processing unit 110 includes a cross-correlation coefficient calculation unit 112, a peak search processing unit 114, and a segmentation processing unit 116. The sound source separation processing unit 110 has the function of separating the time delays obtained for each axis into sound sources on each axis.
Of these, the cross-correlation coefficient calculation unit 112 calculates, at each regular interval and for each axis, the time delays representing the arrival time differences of the sounds at the upper cross-correlation peaks, based on the detection signals from the four microphones M0 to M3. By calculating for each axis, a cross-correlation coefficient R(axis, i, τ) is obtained here (where i is the time index of each regular interval, updated as i = i + 1, and axis = X, Y, Z). The calculation results here are also provided to the arrival direction vector calculation unit 106 described above.
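The per-axis, per-interval computation of R(axis, i, τ) can be sketched as follows. This is a minimal illustration assuming NumPy; a real device would additionally window the signals and convert lags from samples to seconds.

```python
import numpy as np

def cross_correlation(sig_a, sig_b, max_lag):
    """Normalized cross-correlation R(tau) between the two
    microphone signals of one axis, for lags -max_lag..max_lag
    in samples. Sketch of the per-interval computation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.sum(a[max(0, -k):len(a) - max(0, k)] *
                         b[max(0, k):len(b) - max(0, -k)])
                  for k in lags]) / denom
    return lags, r

# A broadband signal and a copy offset by 5 samples: the
# cross-correlation peaks at a time delay of 5 samples.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
lags, r = cross_correlation(s[5:], s[:-5], 20)
print(int(lags[np.argmax(r)]))  # 5
```

Converting the peak lag to seconds (dividing by the sampling rate) gives the time delay τ used in equation (1).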
The peak search processing unit 114 searches the cross-correlation coefficients calculated for each axis by the cross-correlation coefficient calculation unit 112 for a plurality of time delays showing a peak tendency, and ranks the top several (for example, first to third) time delays from the maximum peak downward. Since the cross-correlation peaks are ranked for each axis, a plurality of ranked time delays τ(axis, i, j) are obtained here (j = 1, ..., M; for example, M = 3).
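The peak ranking step can be sketched as below; a minimal local-maximum search, with the neighbor comparison rule chosen as an assumption.

```python
import numpy as np

def top_delay_peaks(lags, r, m=3):
    """Return the m highest local maxima of R(tau) as (lag, value)
    pairs in descending order (m = 3 mirrors ranking the first-
    to third-place peaks)."""
    idx = [i for i in range(1, len(r) - 1)
           if r[i] >= r[i - 1] and r[i] > r[i + 1]]
    idx.sort(key=lambda i: r[i], reverse=True)
    return [(int(lags[i]), float(r[i])) for i in idx[:m]]

# Three peak candidates; the two strongest are kept, best first.
lags = np.arange(-5, 6)
r = np.array([0.0, 0.1, 0.0, 0.3, 0.1, 0.0, 0.2, 0.9, 0.2, 0.1, 0.0])
peaks = top_delay_peaks(lags, r, m=2)
print(peaks)  # [(2, 0.9), (-2, 0.3)]
```

Keeping several ranked peaks rather than only the maximum is what lets the later segmentation separate simultaneous sources on one axis.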
The segmentation processing unit 116 collects, in the time domain, the variations of the plural time delays at the upper peak values extracted by the peak search processing unit 114, and groups those expected to belong to the same sound source into the same set, forming continuous time-delay sets. Hereinafter this processing is called "segmentation", and each set formed is called a "segment". Details of the processing by the sound source separation processing unit 110 are described later using separate flowcharts.
[Separated sound source integration unit]
Next, the separated sound source integration unit 120 includes a normalized cross-correlation coefficient calculation unit 122 and a segment integration processing unit 124. The separated sound source integration unit 120 has the function of combining the per-axis segments formed by the segmentation processing unit 116 according to the same sound source. Since the segments formed by the segmentation processing unit 116 are separated by sound source on each axis as described above, identifying a sound source in the observation space requires combining the sets across the axes. Accordingly, the segments on different axes are combined here using the variation of the correlation coefficient R(τ).
For this purpose, the normalized cross-correlation coefficient calculation unit 122 calculates, for the segments formed on each axis, the normalized cross-correlation coefficient R(0) assuming no time delay (τ = 0), applied not to the individual time delays τ but to the temporal variation of the cross-correlation coefficient R(τ).
The segment integration processing unit 124 then integrates, as segments of the same sound source, those segments whose calculated normalized cross-correlation coefficient R(0) is sufficiently large. Details of the processing performed by the separated sound source integration unit 120 are also described later.
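The zero-lag comparison of two segments' R(τ) variation patterns can be sketched as follows; the interpretation that R(0) is the normalized correlation between the two level-variation series over their overlapping time span is an assumption drawn from the description above.

```python
import numpy as np

def zero_lag_correlation(ra, rb):
    """Normalized cross-correlation at zero lag, R(0), between the
    R(tau) variation patterns of two segments that overlap in time
    (the quantity used to pair segments across axes)."""
    a = ra - ra.mean()
    b = rb - rb.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Segments whose correlation levels rise and fall together score
# near 1 and are merged as the same source.
t = np.linspace(0.0, 10.0, 200)
seg_x = 0.8 + 0.1 * np.sin(t)   # X-axis segment pattern
seg_y = 0.7 + 0.1 * np.sin(t)   # Y-axis segment, same source
print(zero_lag_correlation(seg_x, seg_y) > 0.9)  # True
```

Because the comparison is between level-variation shapes rather than delay values, it works even though the delay ranges on different axes are unrelated.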
In addition, the observation unit 100 includes an identification result output unit 130. The integrated segments are provided to the identification result output unit 130. The identification result output unit 130 identifies the types of the plural sound sources from the information on the segments integrated into the same sound source and from the arrival direction vectors calculated by the arrival direction vector calculation unit 106, and outputs the result. The output result can, for example, be displayed on a display device (not shown) or transmitted as data to a computer external to the observation unit 100.
[Noise event detection technique]
FIG. 3 is a diagram explaining the noise event detection technique for single-event noise, together with the temporal change of the noise level under a flight path. The observation unit 100 calculates the background noise level (BGN) at the observation point by, for example, continuously detecting the noise level in the noise event detection unit 102.
Single-event noise occurs as transient noise, for example when an aircraft AP passes overhead as described above. The temporal change of a single-event noise level is therefore as follows: the noise level rises over time and, at time t1, reaches a level 10 dB above the background noise level. The noise level then reaches its maximum value (Nmax) and afterwards returns to the background noise level (BGN).
 In this case, the observation unit 100 starts detecting a noise event in the noise event detection unit 102 at time t1. That is, when the noise level at the microphone MB rises to a level 10 dB above the background noise level (BGN), the noise event detection processing is started.
 A threshold level (Na) for determining that a single noise event has occurred is set in advance in the noise event detection unit 102. The noise event detection unit 102 therefore identifies a single noise event only when the observed value exceeds the threshold level (Na). In this example, since the observed value actually exceeds the threshold level (Na), the noise event detection unit 102 can determine time t3, at which the noise level reaches its maximum value (Nmax), as the occurrence time of the single noise event.
 At this time, the noise event detection unit 102 also determines time t4, at which the noise level has dropped 10 dB from the maximum value (Nmax), as the end time of the single noise event. As a result, the period from time t1 (start) to time t4 (end) is the period during which the noise event is being detected (detection processing).
 The noise event detection unit 102 then cuts out the period during which the noise level was above the value 10 dB below the maximum value (Nmax), and determines this as the noise event section. The noise event section is regarded as the time during which the single noise event continued at the observation point.
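The single-noise detection rules described above (detection starts at BGN + 10 dB, classification against the threshold Na, event section cut out above Nmax - 10 dB) can be sketched as follows. This is an illustrative sketch only; the function name, the sampled level series, and the layout of the returned tuple are assumptions, not part of the embodiment.

```python
def detect_single_noise_event(levels, bgn, na):
    """Detect a single noise event in a sampled noise-level series (dB).

    Returns (is_event, t_peak, section), where `section` is the index range
    during which the level stayed above Nmax - 10 dB, or (False, None, None)
    if no event is detected. Thresholds follow the description above.
    """
    # Detection starts when the level rises 10 dB above background noise (t1).
    try:
        t1 = next(i for i, lv in enumerate(levels) if lv >= bgn + 10.0)
    except StopIteration:
        return (False, None, None)

    nmax = max(levels[t1:])
    t3 = levels.index(nmax, t1)   # time of maximum level
    if nmax <= na:                # threshold Na not exceeded: not a single event
        return (False, None, None)

    # Event section: contiguous period around t3 with level above Nmax - 10 dB.
    lo = t3
    while lo > 0 and levels[lo - 1] > nmax - 10.0:
        lo -= 1
    hi = t3
    while hi + 1 < len(levels) and levels[hi + 1] > nmax - 10.0:
        hi += 1
    return (True, t3, (lo, hi))

# Example: background 40 dB, event peaking at 75 dB.
levels = [40, 41, 52, 60, 70, 75, 68, 58, 50, 42, 40]
print(detect_single_noise_event(levels, bgn=40.0, na=60.0))  # → (True, 5, (4, 6))
```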
 Next, FIG. 4 illustrates the noise event detection method for quasi-stationary noise, together with the temporal change in the noise level inside (or near) the airfield. Here again, the observation unit 100 calculates the background noise level (BGN) at the observation point by continuously detecting the noise level in the noise event detection unit 102.
[Detection of quasi-stationary noise]
 Assume that quasi-stationary noise is generated by an aircraft AP inside the airfield. In the period leading up to a certain time t12, the observed value at the observation point rises, for example because of movement on the taxiway 30, to a level (NP1) 10 dB above the background noise level (BGN). It then rises further, remains at a quasi-stationary high level for a fairly long time, falls to a level (NP2) 10 dB above the background noise level (BGN), and finally returns to the background noise level (BGN).
 In this case, the observation unit 100 starts detecting a noise event in the noise event detection unit 102 at time t12. That is, here too, the noise event detection processing is started when the level rises to a level (NP1) 10 dB above the background noise level (BGN). For quasi-stationary noise, however, no threshold level is set.
 The noise event detection unit 102 then cuts out the period during which the observed value was at least 10 dB above the background noise level (BGN), and determines this as the noise event section. In this case, the noise event section is regarded as the time during which the quasi-stationary noise continued, provided it continued for a sufficiently long time at the observation point.
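The quasi-stationary rule above (cut out periods at least 10 dB above background, keep only those lasting a sufficiently long time) can be sketched as follows; the function name and the minimum duration of 5 samples are assumed example values, since the text does not specify a concrete duration.

```python
def detect_quasi_stationary(levels, bgn, min_len=5):
    """Cut out periods where the level stays 10 dB or more above the
    background noise level `bgn`; a period counts as a quasi-stationary
    noise event section only when it lasts at least `min_len` samples
    (an assumed threshold). Returns a list of (start, end) index pairs."""
    sections, start = [], None
    for i, lv in enumerate(levels):
        if lv >= bgn + 10.0:
            if start is None:
                start = i          # period begins
        elif start is not None:
            if i - start >= min_len:
                sections.append((start, i - 1))
            start = None           # period ended
    if start is not None and len(levels) - start >= min_len:
        sections.append((start, len(levels) - 1))
    return sections

levels = [40, 41, 55, 56, 57, 58, 57, 56, 41, 40]
print(detect_quasi_stationary(levels, bgn=40.0, min_len=5))  # → [(2, 7)]
```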
 The detection of noise events that occur singly, such as takeoffs, landings, and overhead passages, is a useful technique for identifying whether a given noise is aircraft noise in, for example, the continuous monitoring of actual aircraft noise. On the other hand, quasi-stationary noise from engine tests performed by aircraft AP in the airport, from APUs, and the like may, as described above, last for a relatively long time (quasi-stationary noise section). Moreover, because a quasi-stationary noise section may contain a plurality of sound sources, it is difficult to automatically identify, after detecting the section, what kind of noise event it was.
 In the present embodiment, the sound source separation processing unit 110 and separated sound source integration unit 120 described above realize a technique for identifying the directions of arrival of the individual sounds of a plurality of simultaneously occurring sound sources by using the local maxima of the cross-correlation coefficient (the several highest peaks). This technique also allows the identification processing to be performed at the same time as the noise event occurs, enabling so-called real-time processing. Details of the identification technique in the present embodiment are described below.
[Explanation of principle]
 First, the principle of the identification technique in the present embodiment will be described.
 FIG. 5 is a simplified model diagram for explaining the principle of the identification technique in the present embodiment. In the simplified model, three microphones M0, M2, and M3 are arranged on two axes (the X and Y axes) in an anechoic chamber AR. That is, to focus here on ground noise inside the airfield, the microphone M1 on the Z axis is omitted and the observation space is simplified to two dimensions. An observation horizontal plane PL is virtually defined on the two X and Y axes, and two fixed sound sources SS1 and SS2 are placed on it.
 Under the above conditions, sounds are output from the two sound sources SS1 and SS2 with a time offset between them, and the sound source separation processing unit 110 performs segmentation by sound source.
[Sound source separation processing]
 FIG. 6 is a flowchart showing an example of the procedure of the sound source separation processing executed by the sound source separation processing unit 110. The sound source separation processing unit 110 can execute this processing at regular intervals (for example, every 200 ms) using, for example, a timer interrupt. The processing is described below along the example procedure.
 Step S10: First, the sound source separation processing unit 110 defines the current time index i. As described above, the time index i is incremented by 1 at each fixed interval (i = i + 1).
 Step S12: Next, the sound source separation processing unit 110 executes the per-axis cross-correlation coefficient calculation processing in the cross-correlation coefficient calculation unit 112 described above. In this processing, the cross-correlation coefficient calculation unit 112 calculates the cross-correlation coefficient R(axis, i, τ) for each axis (here the X and Y axes; likewise below). In the actual configuration, axis = X, Y, Z, but the simplified model omits the Z axis. The subsequent processing is executed for each axis.
 Step S14: The sound source separation processing unit 110 then executes the per-axis peak search processing in the peak search processing unit 114 described above. In this processing, the peak search processing unit 114 searches for the highest peaks of the cross-correlation coefficient R(axis, i, τ) on each axis and ranks them in descending order of value. With the rank denoted j, a plurality of time delays τ_axis,i,j ranked from the first to the third peak are obtained here, as described above (j = 1, 2, 3). The subsequent processing is executed for each peak Peak(τ_axis,i,j).
[Ranking by highest peaks]
 FIG. 7 is a schematic diagram showing the ranking of time delays by the highest peaks of the cross-correlation coefficient. For example, when a plurality of sound sources exist in the observation space (a single source is also possible), calculating the cross-correlation coefficient R(τ) on each axis shows local maximum (peak) tendencies at a plurality of time delays τ1, τ2, and τ3, with the first peak R(τ1), second peak R(τ2), and third peak R(τ3) observed at the top. The plurality of time delays τ1, τ2, and τ3 can thereby be ranked in descending order of cross-correlation peak value. The result of this ranking, performed for each axis by the peak search processing unit 114, is the τ_axis,i,j described above.
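Steps S12 and S14 can be sketched as follows: the cross-correlation of the two microphone signals on one axis is computed, its local maxima are located, and the time delays of the top three peaks are ranked by peak value. The function name, the use of NumPy, and the sample-based lag convention are assumptions for illustration.

```python
import numpy as np

def top_peak_delays(x, y, n_peaks=3):
    """Cross-correlate two microphone signals on one axis and return the
    time delays (in samples) of the top `n_peaks` local maxima of R(tau),
    ranked by peak value. Illustrative sketch only."""
    r = np.correlate(x - x.mean(), y - y.mean(), mode="full")
    lags = np.arange(-(len(y) - 1), len(x))   # lag axis for the full correlation
    # Local maxima: r[k] greater than the previous sample, not below the next.
    interior = np.arange(1, len(r) - 1)
    peaks = interior[(r[interior] > r[interior - 1]) & (r[interior] >= r[interior + 1])]
    ranked = peaks[np.argsort(r[peaks])[::-1][:n_peaks]]
    return [int(lags[k]) for k in ranked]

# Example: y leads x by 3 samples, so the first-ranked delay is 3.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, -3)
print(top_peak_delays(x, y)[0])  # → 3
```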
 Step S16: The sound source separation processing unit 110 executes, in the segmentation processing unit 116 described above, the per-axis, per-peak sound source initial value determination processing. In this processing, the segmentation processing unit 116 checks whether each Peak(τ_axis,i,j) is an initial value of a sound source. For example, when no specific τ_axis,i-1,j expected to belong to the same sound source exists immediately before a given τ_axis,i,j, the segmentation processing unit 116 regards that Peak(τ_axis,i,j) as the start point τ_axis,s,k of a sound source (s being one of the values of i, and k one of the ranks j). Segmentation decisions are subsequently made against this initial value.
 Step S18: Next, the sound source separation processing unit 110 executes the per-axis same-source segmentation processing in the segmentation processing unit 116. In this processing, the segmentation processing unit 116 determines whether the current τ_axis,i,j belongs to the same sound source as a specific τ_axis,i-1,j one interval earlier, and if so, puts them into one segment. If the τ_axis,i-1,j one interval earlier was already part of a segment, the current τ_axis,i,j is added to the same segment, growing the segment in the time domain. The specific content of this processing is described later with reference to a separate flowchart.
 Step S20: The sound source separation processing unit 110 then executes the per-segment end determination processing in the segmentation processing unit 116. This processing is performed for each of all segments being formed. For example, when the segmentation processing unit 116 confirms that the current τ_axis,i,j is τ_virtual and that all γ (e.g., γ = 10) of the most recent τ values, including τ_axis,i,j, are τ_virtual, it deletes all of those consecutive τ_virtual values and closes the range from the initial value τ_axis,s,k to τ_axis,i-γ,j as one segment (termination processing). "τ_virtual" is described later. If the number of τ values constituting the segment is small at this point (for example, fewer than 50), the segment is invalidated.
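The termination rule of step S20 can be sketched as follows, assuming a segment is kept as a list of time delays together with flags marking which members were virtual delays; this data layout and the function name are assumptions, not from the embodiment.

```python
GAMMA = 10        # consecutive virtual delays that end a segment (example value)
MIN_MEMBERS = 50  # segments shorter than this are invalidated (example value)

def finalize_segment(members, is_virtual):
    """Per-segment end determination for one growing segment.

    `members` is the list of time delays; `is_virtual[i]` marks whether
    members[i] was a virtual delay tau_virtual. When the last GAMMA entries
    are all virtual, they are deleted and the segment is closed; a closed
    segment with too few members is invalidated (returns None). Otherwise
    the segment is still growing and is returned unchanged."""
    if len(is_virtual) >= GAMMA and all(is_virtual[-GAMMA:]):
        closed = members[:-GAMMA]
        return closed if len(closed) >= MIN_MEMBERS else None
    return members  # still growing

# 60 real delays followed by 10 virtual ones: close the segment at 60 members.
seg = list(range(60)) + [0.0] * 10
flags = [False] * 60 + [True] * 10
print(len(finalize_segment(seg, flags)))  # → 60
```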
[Per-axis same-source segmentation processing]
 FIG. 8 is a flowchart showing an example of the procedure of the per-axis same-source segmentation processing. The content of the processing is described below along the example procedure.
 Step S100: The segmentation processing unit 116 determines whether there exists exactly one (unique) current time delay that can be regarded as belonging to the same sound source as one of the initial values τ_axis,s,k or as a specific, already segmented τ_axis,i-1,j. Specifically, the following expression (2) is evaluated:

    |τ_axis,i,j' - τ_axis,i-1,j| ≤ α   ...(2)

where
 α: a constant (predetermined threshold) that depends on the moving speed of the sound source.
 Step S102: If, as a result, exactly one j' satisfying expression (2) exists (step S100: Yes), the segmentation processing unit 116 performs segmentation using that time delay τ_axis,i,j'. Specifically, the time delay τ_axis,i,j' is added as a member of the same segment as τ_axis,i-1,j.
 Step S104: If, on the other hand, two or more j' satisfy expression (2), or none does (step S100: No), the segmentation processing unit 116 calculates τ_virtual from the preceding τ values by the least-squares method. Not all preceding τ values need to be used; a few data points are sufficient, since all that is needed here is the recent variation of τ as τ_virtual. As for the order of the least-squares fit, second order is empirically effective.
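A minimal sketch of the τ_virtual calculation in step S104: a second-order least-squares fit over a handful of the most recent delays, extrapolated one step ahead. The window size of six values is an assumed example, since the text only states that a few data points suffice.

```python
import numpy as np

def tau_virtual(recent_taus, degree=2, window=6):
    """Extrapolate a virtual time delay from the last few delays of a
    segment by a least-squares polynomial fit. A second-order fit is
    used, as the text suggests; the window of 6 is an assumed value."""
    taus = np.asarray(recent_taus[-window:], dtype=float)
    t = np.arange(len(taus))
    coeffs = np.polyfit(t, taus, deg=min(degree, len(taus) - 1))
    return float(np.polyval(coeffs, len(taus)))  # value at the next time index

# A segment whose delay follows tau = 0.5 * t**2 continues that trend:
print(round(tau_virtual([0.0, 0.5, 2.0, 4.5, 8.0, 12.5]), 2))  # → 18.0
```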
 Step S106: The segmentation processing unit 116 checks whether there are two or more current time delays that can be regarded as belonging to the same sound source.
 Step S108: If, as a result, two or more j' satisfying expression (2) exist (step S106: Yes), the segmentation processing unit 116 determines whether one of them is a time delay that can be used for segmentation. Specifically, the following expression (3) is evaluated:

    |τ_axis,i,j' - τ_virtual| ≤ β   ...(3)

where
 β: a preset constant (specific threshold) whose value can be set experimentally or empirically.
 Step S110: If there is an optimal τ_axis,i,j' that satisfies expression (3) with the minimum value (step S108: Yes), the segmentation processing unit 116 performs segmentation using that optimal τ_axis,i,j'. Specifically, the optimal τ_axis,i,j' is added as a member of the same segment as τ_axis,i-1,j.
 Step S112: If, on the other hand, no j' satisfies expression (3) (step S108: No), or no j' satisfies expression (2) (step S106: No), the segmentation processing unit 116 performs segmentation using the τ_virtual calculated in step S104. Specifically, τ_virtual is added as a member of the same segment as τ_axis,i-1,j.
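The branch logic of steps S100 to S112 can be summarized in code as follows. This is a sketch under the assumption that expressions (2) and (3) compare, against the thresholds α and β, the distance of a candidate delay from the previous delay of the segment and from τ_virtual, respectively; the function and variable names are illustrative.

```python
def choose_next_tau(prev_tau, candidates, tau_virtual, alpha, beta):
    """Decide which current time delay extends the segment whose latest
    member is prev_tau. `candidates` are the ranked peak delays at the
    current time index. Sketch of the FIG. 8 branch logic."""
    # Candidates close enough to the previous delay of this segment (S100).
    near = [t for t in candidates if abs(t - prev_tau) <= alpha]
    if len(near) == 1:                       # unique match -> use it (S102)
        return near[0]
    if len(near) >= 2:                       # several matches (S106/S108)
        best = min(near, key=lambda t: abs(t - tau_virtual))
        if abs(best - tau_virtual) <= beta:  # closest to the extrapolation
            return best                      # segment with the best match (S110)
    return tau_virtual                       # fall back to tau_virtual (S112)

# One candidate lies within alpha of the previous delay: it is chosen directly.
print(choose_next_tau(5.0, [5.2, -3.0], tau_virtual=5.1, alpha=0.5, beta=0.3))  # → 5.2
```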
[Variation of the time delay]
 FIG. 9 shows the variation of the time delay on the X axis in the simplified model. The horizontal axis of FIG. 9 shows the time index number, and the vertical axis shows the time delay on the X axis. The white circles in FIG. 9 indicate time delays ranked as the first peak of the cross-correlation coefficient, and the hatched circles indicate time delays ranked as the second peak.
[Execution condition 1]
 In this example, in the simplified model above (FIG. 5), sound was first output from the first sound source SS1, then the second sound source SS2 was started so that both sources SS1 and SS2 output sound simultaneously for a while, after which the output of the first sound source SS1 was stopped first. Since the two sound sources SS1 and SS2 are fixed in the observation horizontal plane PL, the time delay between the detection signals obtained from the two microphones M0 and M3 on the X axis is expected to show an almost stable value based on the relative positional relationship between the microphones M0 and M3 and each of the sound sources SS1 and SS2.
[Time index numbers 0 to Ti1]
 Accordingly, when the output of the first sound source SS1 starts at the beginning of the time domain (time index number = 0), the time delay on the X axis is represented only by values ranked as the first peak of the cross-correlation coefficient. In the time domain until sound is output from the second sound source SS2 (time index numbers 0 to Ti1), a segment can therefore be formed solely from the time delays ranked as the first peak of the cross-correlation coefficient.
[Time index numbers Ti1 to Ti2]
 When output starts from the second sound source SS2, a plurality of sound sources exist in the observation horizontal plane PL. In this case, the time delay on the X axis is represented by a plurality of values ranked as the first and second peaks of the cross-correlation coefficient. Here, the time delay (> 0) ranked as the first peak corresponds to the first sound source SS1, and the time delay (< 0) ranked as the second peak corresponds to the second sound source SS2. In this time domain, therefore, the time delays ranked as the first peak of the cross-correlation coefficient continue to be added to the segment of the same sound source SS1.
 Meanwhile, the time delay ranked as the second peak of the cross-correlation coefficient at time index number Ti1 is regarded as the initial value for the second sound source SS2. In the subsequent time domain, the time delays ranked as the second peak of the cross-correlation coefficient are added to the segment of the sound source SS2.
[Time index numbers Ti2 and later]
 When the output of the first sound source SS1 stops, only the sound source SS2 remains. In this case, the time delay on the X axis is again represented only by values ranked as the first peak of the cross-correlation coefficient. In this subsequent time domain, however, the time delay ranked as the first peak corresponds to the second sound source SS2, so from this point on the time delays ranked as the first peak of the cross-correlation coefficient are added to the segment of the sound source SS2.
[Separation by sound source]
 FIG. 10 shows an example in which the variation of the X-axis time delay shown in FIG. 9 is separated into segments by sound source. In FIG. 10, (A) shows the segment of the first sound source SS1, and (B) shows the segment of the second sound source SS2.
 In this way, in the present embodiment, the variation of the time delay on each axis can be separated into segments by sound source. Although the X axis is shown here, the variation on the Y axis can likewise be separated into segments by sound source.
[Application example with moving sound sources]
 FIG. 11 shows an example in which the simplified model is applied to moving sound sources. In this application example, two moving sound sources SS1 and SS2 are placed in the observation horizontal plane PL. The angles of the microphone unit 10 (the X and Y axes) also differ from those in the earlier simplified model.
[Execution condition 2]
 For example, of the two sound sources SS1 and SS2 placed in the anechoic chamber AR, the first sound source SS1 was moved from near one wall toward the other wall and then back to near the first wall. Conversely, the second sound source SS2 was moved from near the other wall toward the first wall and then back to near the other wall. The two sound sources SS1 and SS2 were moved simultaneously and in parallel.
[Results of time delay variation]
 FIG. 12 shows the variation of the time delay on the X and Y axes. In FIG. 12, (A) shows the variation of the time delay on the X axis, and (B) shows that on the Y axis. The horizontal axis of each plot shows the time index number, and the vertical axis shows the ranked time delay τ. The time delays ranked from the first to the third peak are displayed here. As before, in (A) and (B) of FIG. 12, white circles indicate time delays ranked as the first peak of the cross-correlation coefficient, hatched circles indicate those ranked as the second peak, and black circles indicate those ranked as the third peak.
 When the plural sound sources SS1 and SS2 move, even in an ideal environment such as the anechoic chamber AR, extracting the variation of the time delay in the time domain reveals, at many points, a plurality of time delays ranked from the first to the third peak of the cross-correlation coefficient. Even in this case, applying the sound source separation processing of the present embodiment makes it possible to separate the variation of the time delay on each axis into segments by sound source.
[Example of segment separation]
 FIG. 13 shows an example of segment separation on the X axis. In FIG. 13, (A) represents the segment corresponding to the sound source SS1, and (B) represents the segment corresponding to the sound source SS2. As before, the horizontal axis of each plot shows the time index number, and the vertical axis shows the ranked time delay. The black diamond marks in (A) and (B) of FIG. 13 indicate the τ_virtual (virtual time delay) values used in the segmentation processing (likewise below).
 Thus, applying the sound source separation processing of the present embodiment makes it possible to separate the time delay variations into segments by sound source even when the variations on each axis are intricately intertwined (FIG. 12).
[Application example at an airfield]
 The examples so far apply the technique to the simplified model using the anechoic chamber AR; an application example using an actual airfield as the observation space is described below. Here, an observation point was set beside the runway 25 (FIG. 1) in an actual airfield, and the technique of the present embodiment was applied to noise data measured with the microphone unit 10. The positional relationship between the runway 25 and the taxiway 30 may differ from the arrangement shown in FIG. 1.
[Results of time delay variation]
 FIG. 14 shows the variation of the time delay on the X and Y axes for the measured data. In FIG. 14, (A) shows the variation of the time delay on the X axis, and (B) shows that on the Y axis. The results shown here are for a situation in which a landing sound was observed partway through, overlapping a taxiing sound. The white circles in FIG. 14 indicate time delays ranked as the first peak of the cross-correlation coefficient, the hatched circles indicate those ranked as the second peak, and the black circles indicate those ranked as the third peak.
 As shown in FIG. 14, when the variation of the time delay on each axis is extracted in the time domain for measured data from inside an airfield, a plurality of time delays ranked from the first to the third peak of the cross-correlation coefficient are observed at many points. It is also clear that, even when the entire time delay variation is surveyed as post-processing, it is extremely difficult to identify which portions of the variation are due to the same sound source.
 In the present embodiment, however, performing segmentation by real-time processing as described above makes it possible to separate the variation of the time delay on each axis into segments by sound source.
[Example of segment separation]
 FIG. 15 shows the result of separating the time delay variations on the X and Y axes into segments for the measured data. In FIG. 15, (A) and (B) show examples of time delay segment separation on the X axis, and (C) and (D) show examples on the Y axis. As before, the white circles in FIG. 15 indicate time delays ranked as the first peak of the cross-correlation coefficient, and the hatched circles indicate those ranked as the second peak. The black diamond marks indicate the τ_virtual (virtual time delay) values used in the segmentation processing.
 For the measured data as well, executing the segmentation processing described above separates the time delay variation into, for example, four segments X1, X2, X3, and X4 on the X axis and three segments Y1, Y2, and Y3 on the Y axis.
 Since segments are in principle separated by sound source, the number of segments after separation fundamentally represents the number of sound sources. However, in a situation where a landing sound occurs during a taxiing sound, the noise with the higher sound pressure level (the landing sound) becomes dominant partway through and breaks the continuity of the earlier time delays; it is therefore quite possible, as in the separation example of FIG. 15, for the number of segments not to match the number of sound sources.
In the present embodiment, the segments separated on each axis are further integrated across different axes where they share the same sound source.
[Segment integration processing]
That is, the segment integration processing is executed by the separated sound source integration unit 120 described above. In this processing, the segments separated on each axis as described above are combined across different axes. In the present embodiment, the variation of the cross-correlation coefficient R(τ) is used to integrate segments on different axes.
FIG. 16 is a diagram showing the variation patterns of the cross-correlation coefficient R(τ) of each segment. Of these, (A), (B), and (C) in FIG. 16 show the variation patterns of R(τ) for the X-axis segments, and (D), (E), and (F) in FIG. 16 show the variation patterns of R(τ) for the Y-axis segments. The white circles in the figure indicate the first peak value of R(τ), the hatched circles the second peak value, and the black circles the third peak value.
In the present invention, it has been found that, when the cross-correlation coefficient R(τ) rather than the time delay τ is considered for a given segment, segments attributable to the same sound source have very similar variation patterns of R(τ). This is presumably because the sound generated by an actual source fluctuates, for example with changes in engine output, and is affected by the weather before reaching the observation point, and these changes also appear in the microphone input signals.
Therefore, when a pair of segments on different axes exists in a certain time region, computing the normalized cross-correlation coefficient R(0) at τ = 0 yields a comparatively large value for segments attributable to the same sound source. For this reason, in the present embodiment, combinations of segments satisfying R(0) > 0.9 are integrated as belonging to the same sound source. When a plurality of combinations satisfy R(0) > 0.9, the combination with the maximum R(0) is integrated.
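The criterion above can be sketched in code. `r_zero` computes a zero-lag correlation between the R(τ)-variation patterns of two time-overlapping segments; normalizing as a mean-removed (Pearson-style) correlation is an assumption on my part, and the function names are illustrative. The 0.9 threshold follows the embodiment.

```python
# Sketch of the integration criterion: zero-lag normalized cross-correlation
# between two segments' R(τ)-variation series. Mean-removed normalization is
# an assumption; the 0.9 threshold is the value given in the embodiment.
import math

def r_zero(a, b):
    """Normalized cross-correlation at τ = 0 of two equal-length series."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def same_source(seg_x, seg_y, threshold=0.9):
    """True when the two variation patterns are similar enough to integrate."""
    return r_zero(seg_x, seg_y) > threshold
```

Two segments whose patterns rise and fall together score near 1.0 and are integrated; an anti-correlated pair scores near -1.0 and is rejected.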
[Normalized cross-correlation coefficient]
FIG. 17 is a diagram showing a calculation example of the normalized cross-correlation coefficients between segments that overlap in the time domain. In FIG. 17, the segments on the X-axis are arranged vertically and the segments on the Y-axis horizontally, and the normalized cross-correlation coefficients between segments are shown as a 4 × 3 matrix.
[Example of segment integration]
In FIG. 17, selecting the combinations whose normalized cross-correlation coefficient exceeds 0.9 yields the four combinations X1-Y1, X2-Y2, X3-Y3, and X4-Y3. Reflecting this result, FIG. 16 shows an example of segment integration between the axes by dashed arrows.
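The selection over the matrix of FIG. 17 can be mimicked as follows. The numeric R(0) values below are invented for illustration (the actual values in FIG. 17 are not reproduced here); they are chosen only so that the four combinations named in the text exceed 0.9.

```python
# Hypothetical 4 x 3 matrix of R(0) values (rows: X1..X4, columns: Y1..Y3).
# The numbers are illustrative assumptions, not the measured values of FIG. 17.
R0 = [
    [0.95, 0.40, 0.30],   # X1
    [0.35, 0.93, 0.25],   # X2
    [0.20, 0.30, 0.97],   # X3
    [0.25, 0.20, 0.92],   # X4
]

def integrate(matrix, threshold=0.9):
    """Return every (X, Y) segment pair whose R(0) exceeds the threshold."""
    pairs = []
    for i, row in enumerate(matrix):
        for j, r in enumerate(row):
            if r > threshold:
                pairs.append((f"X{i + 1}", f"Y{j + 1}"))
    return pairs
```

Note that a single Y-axis segment may pair with more than one X-axis segment, as with X3-Y3 and X4-Y3 in the text, since one sound source may span several segments on one axis.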
[Verification results]
A comparison with the noise events at the time of measurement confirmed that the combinations X1-Y1, X3-Y3, and X4-Y3 were in fact taxiing sounds, and that X2-Y2 was a landing sound.
Meanwhile, in (A) and (B) of FIG. 16, no peak value of the taxiing sound is detected between the time indexes Tia and Tib because the noise level of the actual landing sound was sufficiently higher than that of the taxiing sound.
As described above, the identification method of the present embodiment extracts the time delay variations in the time domain from the peaks of the cross-correlation coefficient on each axis, thereby dividing the time delays into segments by sound source, and then integrates the segments across the axes using the variation of the cross-correlation coefficient of each segment. This makes it possible to separate the sound arrival direction of each source even when a plurality of noise sources occur simultaneously in the observation space.
Furthermore, the identification method of the present embodiment can automatically recognize the number of aircraft simultaneously emitting significant noise levels, which could not be identified from sound pressure level variations alone or by the conventional method using only the first peak of the cross-correlation coefficient. This provides, for example in the noise level evaluation of a single noise event, information for inferring the influence of background noise. At the same time, since the arrival direction of each sound is obtained, each sound source can be reliably identified by using structural information about the airport, such as runways, taxiways, and surrounding roads.
In the one embodiment above, observation on the Z-axis is omitted for simplicity, but it is of course possible to use the correlations of all three axes, X, Y, and Z, in implementing the present invention. In particular, for the time delay on the Z-axis, the peak tendency of the cross-correlation coefficient due mainly to ground reflection is prominent, and exploiting this tendency is extremely useful in that it can further improve the accuracy of identifying aircraft ground noise.
The disclosed invention is not limited to the embodiment described above and can be implemented with various modifications. Although one embodiment takes an airfield as the target area, the noise observation device and noise observation method of the present invention can use an observation space (target area) other than an airfield.
The conditions (α, β, 0.9) relating to segment formation and segment integration given in the one embodiment are merely examples, and the settings can be changed as appropriate to suit the characteristics of the observation area and the noise sources.
In one embodiment, the cross-correlation coefficient is calculated at every scheduled interrupt, but the "regular" interval need not be constant. For example, a calculation may be performed at an interval of 200 ms at one scheduled time, while the next scheduled time may come after 100 ms, shorter than 200 ms, or conversely after 300 ms, longer than 200 ms.
In one embodiment, a taxiing sound and a landing sound are given as examples of the plurality of noises, but other combinations of noises are possible. The disclosed invention is also applicable when three or more noises occur simultaneously.
[Explanation of symbols]
10 Microphone unit
100 Observation unit
102 Noise event detection unit
106 Arrival direction vector calculation unit
110 Sound source separation processing unit
112 Cross-correlation coefficient calculation unit
114 Peak search processing unit
116 Segmentation processing unit
120 Separated sound source integration unit
122 Normalized cross-correlation coefficient calculation unit
124 Segment integration processing unit
130 Identification result output unit

Claims (12)

1. A noise observation device comprising:
    a calculation means for calculating, at regular intervals and for each axis, the cross-correlation coefficient of sound arriving at two microphones arranged at an interval on each of a plurality of axes defined in an observation space in which a plurality of sound sources exist;
    an aggregation means for collecting, in the time domain, the variations of a plurality of time delays extracted in descending order of cross-correlation coefficient from among the time delays at which the cross-correlation coefficient calculated by the calculation means at each regular interval shows a peak tendency, and forming a set of continuous time delays for each axis; and
    an integration means for combining, from among the sets of time delays formed for each axis by the aggregation means, those sets attributable to the same sound source based on the cross-correlation between different axes.
2. The noise observation device according to claim 1, wherein
    the aggregation means confirms, prior to forming a set of continuous time delays, whether the plurality of time delays showing a peak tendency at each regular interval serve as initial values for each sound source.
3. The noise observation device according to claim 1 or 2, wherein,
    among the plurality of time delays showing a peak tendency at each regular interval, when there exists a unique time delay whose difference from a specific time delay that showed a peak tendency one interval earlier is less than a predetermined threshold, the aggregation means adds that unique value to the same set as the specific time delay, whereas when no such unique value exists, it calculates a virtual time delay by the least squares method using at least the specific time delay and earlier time delays.
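The virtual time delay of claim 3 can be sketched as a least-squares line fit over the segment's recent delays, extrapolated one interval ahead. The fit window and all names are assumptions; only the least-squares extrapolation itself is taken from the claim.

```python
# Sketch of the virtual time delay τ_virtual of claim 3: fit delay = a*t + b
# by ordinary least squares and extrapolate one regular interval ahead.
# Window handling and names are illustrative assumptions.

def virtual_time_delay(times, delays):
    """times, delays: the segment's recent (interval index, delay) history."""
    n = len(times)
    st, sd = sum(times), sum(delays)
    stt = sum(t * t for t in times)
    std = sum(t * d for t, d in zip(times, delays))
    a = (n * std - st * sd) / (n * stt - st * st)   # slope
    b = (sd - a * st) / n                           # intercept
    return a * (times[-1] + 1) + b                  # extrapolate one step
```

For a segment drifting linearly (delays 1.0, 1.5, 2.0, 2.5 at intervals 0 to 3), the extrapolated virtual delay at the next interval is 3.0.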
4. The noise observation device according to claim 3, wherein,
    when the unique value does not exist because there are a plurality of time delays whose difference from the specific time delay is less than the predetermined threshold, and among them there is a specific value whose difference from the virtual time delay is less than a specific threshold, the aggregation means adds that specific value to the same set as the specific time delay; whereas when there is no such specific value, or when the unique value does not exist because there is no time delay whose difference from the specific time delay is less than the predetermined threshold, it adds the virtual time delay to the same set as the specific time delay.
5. The noise observation device according to claim 4, wherein,
    when the virtual time delay has been added to a set consecutively for a predetermined number of regular intervals, the aggregation means deletes the virtual time delays for that predetermined number of times and terminates the formation of the set.
6. The noise observation device according to claim 5, wherein,
    when, as a result of deleting the predetermined number of virtual time delays and terminating the formation of the set, the number of time delays contained in the set is equal to or less than a specified number, the aggregation means invalidates that set.
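Claims 5 and 6 together describe how a set is closed and possibly discarded. The following sketch applies both rules; the limits `max_virtual` and `min_count` are illustrative assumptions standing in for the claimed "predetermined number" and "specified number".

```python
# Sketch of the termination (claim 5) and invalidation (claim 6) rules:
# drop a run of trailing virtual time delays once it reaches max_virtual,
# then invalidate the set if too few entries remain. Limits are assumptions.

def finalize_segment(entries, max_virtual=5, min_count=3):
    """entries: list of (delay, is_virtual) in interval order.
    Returns the cleaned set, or None when the set is invalidated."""
    trailing = 0
    for _, is_virtual in reversed(entries):
        if is_virtual:
            trailing += 1
        else:
            break
    if trailing >= max_virtual:
        entries = entries[:len(entries) - trailing]  # delete trailing virtuals
    if len(entries) <= min_count:
        return None                                  # invalidate the set
    return entries
```

A set with four real delays followed by five virtual ones is trimmed back to the four real entries; a set left with too few entries after trimming is invalidated.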
7. A noise observation method comprising:
    a calculation step of calculating, at regular intervals and for each axis, the cross-correlation coefficient of sound arriving at two microphones arranged at an interval on each of a plurality of axes defined in an observation space in which a plurality of sound sources exist;
    an aggregation step of collecting, in the time domain, the variations of a plurality of time delays extracted in descending order of cross-correlation coefficient from among the time delays at which the cross-correlation coefficient calculated at each regular interval in the calculation step shows a peak tendency, and forming a set of continuous time delays for each axis; and
    an integration step of combining, from among the sets of time delays formed for each axis through the aggregation step, those sets attributable to the same sound source based on the cross-correlation between different axes.
8. The noise observation method according to claim 7, further comprising
    a confirmation step of confirming, prior to the formation of a set of continuous time delays in the aggregation step, whether the plurality of time delays showing a peak tendency at each regular interval serve as initial values for each sound source.
9. The noise observation method according to claim 7 or 8, wherein the aggregation step includes:
    a step of determining whether, among the plurality of time delays showing a peak tendency at each regular interval, there exists a unique time delay whose difference from a specific time delay that showed a peak tendency one interval earlier is less than a predetermined threshold;
    a step of adding the unique value to the same set as the specific time delay when it is determined that the unique value exists; and
    a step of calculating, when it is determined that the unique value does not exist, a virtual time delay by the least squares method using at least the specific time delay and earlier time delays.
10. The noise observation method according to claim 9, wherein the aggregation step further includes:
    a step of determining whether the unique value does not exist because there are a plurality of time delays whose difference from the specific time delay is less than the predetermined threshold, or because there is no time delay whose difference from the specific time delay is less than the predetermined threshold;
    a step of determining, when it is determined that the unique value does not exist because there are a plurality of such time delays, whether among them there is a specific value whose difference from the virtual time delay is less than a specific threshold;
    a step of adding the specific value to the same set as the specific time delay when it is determined that the specific value exists; and
    a step of adding the virtual time delay to the same set as the specific time delay when it is determined that there is no such specific value, or that the unique value does not exist because there is no time delay whose difference from the specific time delay is less than the predetermined threshold.
11. The noise observation method according to claim 10, further comprising
    an end determination step of deleting, when the virtual time delay has been added to a set consecutively for a predetermined number of regular intervals in the aggregation step, the virtual time delays for that predetermined number of times and terminating the formation of the set.
12. The noise observation method according to claim 11, wherein,
    in the end determination step, when, as a result of deleting the predetermined number of virtual time delays and terminating the formation of the set, the number of time delays contained in the set is equal to or less than a specified number, the set is invalidated.
PCT/JP2013/004343 2012-08-09 2013-07-16 Noise observation device and noise observation method WO2014024382A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112013003958.3T DE112013003958T5 (en) 2012-08-09 2013-07-16 Noise Observation Device and Noise Observation Method
CN201380041613.3A CN104583737B (en) 2012-08-09 2013-07-16 Noise observation device and noise observation procedure

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-177181 2012-08-09
JP2012177181A JP5150004B1 (en) 2012-08-09 2012-08-09 Noise observation apparatus and noise observation method

Publications (1)

Publication Number Publication Date
WO2014024382A1 true WO2014024382A1 (en) 2014-02-13

Family

ID=47890586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/004343 WO2014024382A1 (en) 2012-08-09 2013-07-16 Noise observation device and noise observation method

Country Status (4)

Country Link
JP (1) JP5150004B1 (en)
CN (1) CN104583737B (en)
DE (1) DE112013003958T5 (en)
WO (1) WO2014024382A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140873B2 (en) * 2016-08-16 2018-11-27 The Boeing Company Performance-based track variation for aircraft flight management
CN110501674A (en) * 2019-08-20 2019-11-26 长安大学 A kind of acoustical signal non line of sight recognition methods based on semi-supervised learning
EP4181000A1 (en) 2021-11-15 2023-05-17 Siemens Mobility GmbH Method and computing environment for creating and applying a test algorithm for computing operations

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959842A (en) * 2016-04-29 2016-09-21 歌尔股份有限公司 Earphone noise reduction processing method and device, and earphone
CN110907895A (en) * 2019-12-05 2020-03-24 重庆商勤科技有限公司 Noise monitoring, identifying and positioning method and system and computer readable storage medium
CN114001758B (en) * 2021-11-05 2024-04-19 江西洪都航空工业集团有限责任公司 Method for accurately determining time delay through strapdown guide head strapdown decoupling

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59216022A (en) * 1983-05-24 1984-12-06 Yoichi Ando Sound field evaluating and measuring instrument and acoustic device
JPH023126A (en) * 1987-11-26 1990-01-08 Ricoh Co Ltd Information recording medium
JPH0743203A (en) * 1993-07-30 1995-02-14 Kobayashi Rigaku Kenkyusho Method and device for discriminating traveling sound source
JPH11190777A (en) * 1997-10-24 1999-07-13 Sekisui Chem Co Ltd Vibration detector, fixing method and fitting therefor and method for measuring ground propagation speed of vibration wave using it
JP2005184426A (en) * 2003-12-19 2005-07-07 Chiyuuden Plant Kk Apparatus and method for detecting sound source direction

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3350713B2 (en) * 2000-08-15 2002-11-25 神戸大学長 Method, apparatus and medium for identifying type of noise source
JP4113169B2 (en) * 2004-08-18 2008-07-09 日本電信電話株式会社 Method for estimating the number of signal sources, estimation apparatus, estimation program, and recording medium
DE102008062291B3 (en) * 2008-12-15 2010-07-22 Abb Technology Ag Measuring device and method for the diagnosis of noise in fluidic systems
JP5016724B1 (en) * 2011-03-18 2012-09-05 リオン株式会社 Noise observation apparatus and noise observation method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140873B2 (en) * 2016-08-16 2018-11-27 The Boeing Company Performance-based track variation for aircraft flight management
US10565885B2 (en) 2016-08-16 2020-02-18 The Boeing Company Performance-based track variation for aircraft flight management
CN110501674A (en) * 2019-08-20 2019-11-26 长安大学 A kind of acoustical signal non line of sight recognition methods based on semi-supervised learning
EP4181000A1 (en) 2021-11-15 2023-05-17 Siemens Mobility GmbH Method and computing environment for creating and applying a test algorithm for computing operations

Also Published As

Publication number Publication date
DE112013003958T5 (en) 2015-04-23
JP5150004B1 (en) 2013-02-20
CN104583737A (en) 2015-04-29
CN104583737B (en) 2016-11-23
JP2014035287A (en) 2014-02-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13828374

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 1120130039583

Country of ref document: DE

Ref document number: 112013003958

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13828374

Country of ref document: EP

Kind code of ref document: A1