KR101748270B1 - Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same - Google Patents

Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same

Info

Publication number
KR101748270B1
Authority
KR
South Korea
Prior art keywords
sound source
sound
filtering
frequency band
data
Prior art date
Application number
KR1020150166392A
Other languages
Korean (ko)
Other versions
KR20170061407A (en)
Inventor
김재광 (Kim Jae-gwang)
장윤호 (Jang Yun-ho)
Original Assignee
Hyundai Motor Company (현대자동차주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyundai Motor Company (현대자동차주식회사)
Priority to KR1020150166392A
Publication of KR20170061407A
Application granted
Publication of KR101748270B1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q5/00Arrangement or adaptation of acoustic signal devices
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B3/00Audible signalling systems; Audible personal calling systems
    • G08B3/10Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B7/00Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B7/06Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general

Abstract

The present invention relates to a method for providing sound tracking information capable of accurately recognizing sounds generated in the vicinity of a vehicle, a vehicle sound tracking apparatus, and a vehicle including the same. A method for providing sound tracking information according to an embodiment of the present invention includes the steps of storing sound data generated by sensing sounds generated in the vicinity of a vehicle, extracting characteristics of the sound data to determine a target sound source, determining a filtering frequency based on a main frequency band of the target sound source, and performing a filtering operation on the sound data according to the filtering frequency.

Description

TECHNICAL FIELD [0001] The present invention relates to a method for providing sound tracking information, a vehicle sound tracking device, and a vehicle including the same.

More particularly, the present invention relates to a sound tracking information providing method capable of accurately recognizing sounds generated in the vicinity of a vehicle, a vehicle sound tracking apparatus, and a vehicle including the same.

Various sounds are generated around a vehicle in operation. However, elderly drivers with hearing loss or drivers with a poor sense of hearing may be insensitive to certain sounds (e.g., horn sounds, siren sounds, etc.) of which the driver should be aware. In addition, due to the development of vehicle sound insulation technology, even a person with good hearing often fails to accurately hear sounds generated outside the vehicle. Also, a driver who recognizes a specific sound coming from the rear may look back to find it, which can be a threat to safe driving.

Therefore, it is necessary to inform the driver, without interfering with safe driving, of a specific sound generated in the vicinity of the vehicle and of the direction from which it originates. However, it may be difficult to provide accurate information about a particular sound because the various sounds generated during vehicle operation can act as noise with respect to one another.

An object of the present invention is to provide a sound tracking information providing method, a vehicle sound tracking apparatus, and a vehicle including the same, which can provide accurate information on sound around a vehicle occurring during vehicle operation.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are not restrictive of the invention as defined by the appended claims.

According to an aspect of the present invention, there is provided a method for providing sound tracking information, the method comprising: storing sound data generated by sensing sound generated in the vicinity of a vehicle; extracting characteristics of the sound data to determine a target sound source; determining a filtering frequency based on a main frequency band of the target sound source; and performing a filtering operation on the sound data according to the filtering frequency.

A sound tracker according to an embodiment of the present invention includes a data storage unit for storing sound data generated by sensing a sound generated in the vicinity of a vehicle, an acoustic recognition unit for extracting characteristics of the sound data to determine a target sound source, a filtering control unit for determining a filtering frequency based on a main frequency band of the target sound source, and a data filtering unit for performing a filtering operation on the sound data according to the filtering frequency.

A vehicle according to an embodiment of the present invention includes a multi-channel microphone for generating sound data by sensing sounds generated in the vicinity of the vehicle, a sound tracker for extracting characteristics of the sound data to determine a target sound source, determining a filtering frequency based on a main frequency band of the target sound source, and performing a filtering operation on the sound data according to the filtering frequency, and an acoustic notifier for visually or audibly informing the driver of information on the direction of the target sound source transmitted from the sound tracker.

According to the sound tracking information providing method, the vehicle sound tracking apparatus, and the vehicle including the same, direction tracking is performed on sound data from which noise has been filtered out, so that sound source tracking robust against noise can be performed.

In addition, in setting the filtering frequency of the filtering operation, the filtering performance can be improved by taking into consideration not only the main frequency band of the target sound source but also the frequency bands of other noise sources that overlap with it.

The effects obtained by the present invention are not limited to the above-mentioned effects, and other effects not mentioned can be clearly understood by those skilled in the art from the following description.

FIG. 1 is a view showing a vehicle according to an embodiment of the present invention.
FIG. 2 is a detailed block diagram of the sound tracker shown in FIG. 1.
FIG. 3 is a flowchart illustrating an operation method of the sound tracker shown in FIG. 2.
FIG. 4 is a flowchart illustrating step S40 shown in FIG. 3 in more detail.
FIG. 5 is a table showing an example of acoustic classification results generated by the acoustic recognition unit shown in FIG. 2.
FIG. 6 is a table showing an example of the frequency band for each sound source stored in the filtering control unit shown in FIG. 2.
FIG. 7 is a diagram illustrating an embodiment of a method by which the filtering control unit shown in FIG. 2 determines a filtering frequency band.
FIG. 8 is a diagram showing measurement results according to the application of filtering in a specific situation.

Hereinafter, at least one embodiment of the present invention will be described in detail with reference to the drawings. The suffixes "module" and "part" for the components used in the following description are given or used interchangeably merely for ease of drafting the specification, and do not by themselves have distinct meanings or roles.

FIG. 1 is a view showing a vehicle according to an embodiment of the present invention.

Referring to FIG. 1, the vehicle 10 may generate information on a specific sound, such as a sound generated in its vicinity and the direction from which the sound originates.

The vehicle 10 includes a multi-channel microphone 50 capable of collecting sounds outside the vehicle 10 and a sound tracking device 100 capable of generating information on a specific sound based on the sound information collected by the microphone 50. Each microphone of the multi-channel microphone 50 can be understood as a single channel. The number of microphones (three) in the multi-channel microphone 50 and their installation positions on the vehicle 10 are not limited to those shown in FIG. 1.

The specific operation of the sound tracker 100 will be described later with reference to FIG. 2.

FIG. 2 is a detailed block diagram of the sound tracker shown in FIG. 1.

Referring to FIG. 2, the sound tracking apparatus 100 includes a signal processing unit 110, a data storage unit 120, an acoustic recognition unit 130, a filtering control unit 140, a data filtering unit 150, and a sound tracker 160. The sound tracker 100 may be implemented as a part of the head unit of the vehicle 10, but the scope of the present invention is not limited thereto.

The multi-channel microphone 50 senses a sound generated in the vicinity of the vehicle 10, generates sound data through analog-to-digital conversion, and transmits the sound data to the signal processing unit 110.

There are various sounds around the vehicle: engine sounds of other vehicles located nearby, tire friction sounds, sounds from traffic lights and electric sign boards, and general natural sounds. However, the driver is not interested in most of these sounds, and some sounds do not penetrate the vehicle's soundproofing and never reach the driver. When a horn sounds, however, the driver wants to know from which direction it originated and whether it is directed at his or her own vehicle. Depending on the recognition of the horn sound, the driver can take various actions such as reducing the speed of the vehicle, changing lanes, or turning on the hazard lights.

In addition, the driver may not be able to hear a nearby horn when the volume of the vehicle's audio system is set too high. In this case, it may be necessary to inform the driver visually or audibly that a horn has sounded in the vicinity of the driver's vehicle.

The driver may also be interested in other sounds. For example, when a vehicle suddenly stops, a loud friction sound is generated between the tires and the ground. Such a friction sound can be related to a traffic accident or to a situation immediately preceding one, and therefore requires the driver's attention. As another example, a collision sound occurs when the vehicle collides with another vehicle. By recognizing a collision sound and informing the driver of the direction from which it came, such as the front or the side, a subsequent accident can be prevented.

If the siren of a police car or an ambulance sounds near the driver, the driver should take measures such as changing lanes so that the vehicle can pass. In certain cases, the driver may be subject to legal penalties for failing to take the necessary action. Therefore, the driver needs to recognize the siren sound of a public-service vehicle.

The signal processing unit 110 may perform noise filtering on the acquired sound data. This noise filtering can eliminate a variety of noises whose nature or source is difficult to identify. In addition, most of the sounds that the user is interested in, such as horn sounds, siren sounds, tire friction sounds, and crash sounds, have a sufficiently high level (e.g., greater than 70 dB). Accordingly, the signal processing unit 110 can determine whether the decibel level (i.e., magnitude) of the noise-removed sound data is equal to or greater than a reference value; sound data whose magnitude is less than the reference value can be discarded by the signal processing unit 110.
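The level gate described above can be sketched as follows. The 70 dB figure comes from the text; the mapping from digital amplitude to a sound-pressure level (`full_scale_db`) is a hypothetical calibration constant that a real system would obtain from a calibrated microphone.

```python
import numpy as np

def passes_level_gate(frame, reference_db=70.0, full_scale_db=94.0):
    """Return True if the frame's estimated sound level meets the reference.

    full_scale_db is an assumed calibration: the SPL assigned to a
    full-scale (amplitude 1.0) sine wave.
    """
    rms = np.sqrt(np.mean(np.square(frame)))
    if rms == 0.0:
        return False
    level_db = full_scale_db + 20.0 * np.log10(rms)
    return level_db >= reference_db
```

Frames failing the gate would simply be dropped before storage, mirroring the discard step above.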

The data storage unit 120 may store the noise-removed sound data in units of frames and may provide it to the sound recognition unit 130 or the data filtering unit 150 on a frame-by-frame basis. A frame means sound data collected at the same time, and the interval between frames may have a specific period (for example, 100 ms), but the scope of the present invention is not limited thereto.

The sound recognition unit 130 determines the characteristics of the sound data. Even acoustic data whose level exceeds the reference value may not be important to the driver. For example, the sound of a passing train, or the noise of an airplane near an airport, has a very high decibel level but may not significantly affect driving. Noise is also generated by road repair and maintenance work. Continuously informing the driver of such acoustic data may rather slow down, or even prevent, the driver's response to situations of which the driver genuinely needs to be aware.

The sound recognition unit 130 extracts feature values in the time domain and the frequency domain from the sound data received from the data storage unit 120, and can store an average value and a variance value of the feature values in a database. Here, the feature values may be Mel-Frequency Cepstral Coefficients (MFCC), total spectrum power, sub-band spectrum power, and/or pitch frequency. The sound recognition unit 130 may store in the database the average value and variance value of the feature values over a predetermined time period, for example, 100 ms.

In the field of speech signal processing, the Mel-Frequency Cepstrum (MFC) is one way of representing the power spectrum of a short-term signal. It is obtained by taking a cosine transform of the log power spectrum on the non-linear Mel frequency scale; MFCCs are the coefficients that make up an MFC. To compute MFCCs, a pre-emphasis filter is generally applied to the short-term sound data (signal), and a DFT (Discrete Fourier Transform) is applied to the result. A Mel-scale filter bank is then used to obtain the power in each Mel band, and the logarithm of each Mel-scale power is taken. Performing a DCT (Discrete Cosine Transform) on the resulting values yields the MFCC values.
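As a rough illustration of this pipeline (pre-emphasis, DFT, Mel filter bank, log, DCT), a minimal sketch might look like the following. The filter count, FFT size, and 0.97 pre-emphasis coefficient are conventional choices, not values stated in the patent.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sample_rate, n_filters=26, n_coeffs=13):
    # 1. Pre-emphasis filter
    emphasized = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])
    # 2. DFT power spectrum
    n_fft = 512
    spectrum = np.abs(np.fft.rfft(emphasized, n_fft))
    power = (spectrum ** 2) / n_fft
    # 3. Triangular Mel-scale filter bank
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                             n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    # 4. Log of the Mel-scale powers
    log_energies = np.log(np.maximum(np.dot(fbank, power), 1e-10))
    # 5. DCT yields the cepstral coefficients
    return dct(log_energies, type=2, norm='ortho')[:n_coeffs]
```

A real recognizer would then average these coefficients per frame interval, as the text describes.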

The total spectrum power means the energy distribution of the entire spectrum within a predetermined frame interval, and the sub-band power means the energy distribution of the spectrum within each of a number of sub-band intervals, usually four. The pitch frequency can be obtained by detecting the peak of the normalized autocorrelation function.

The sound recognition unit 130 can classify the feature values of the acquired sound data through a classifier to determine whether the acquired sound data is a sound that the user is interested in. The classifier may be, for example, an NN (Neural Network) classifier, an SVM (Support Vector Machine) classifier, or a Bayesian classifier.

In the present specification, the classifier will be described as an NN classifier.

The classifier of the acoustic recognition unit 130 defines a class for each type of sound and, using the feature values of the acquired acoustic data, can calculate a confidence level for each class based on the similarity between the acoustic data and that class. That is, the confidence level means the probability that the acoustic data corresponds to a given class of sound, and the confidence levels sum to 1.

The sound classification result generated by the classifier of the sound recognition unit 130 may include information about each class, the type of sound corresponding to each class, and the level of trust corresponding to each class.

The sound recognition unit 130 may generate a determination result according to whether the confidence level is equal to or greater than the first reference value a (for example, 0.7), and include the determination result in the sound classification result. That is, when the confidence level is equal to or greater than the first reference value (a), the acoustic recognition unit 130 can determine the acoustic type of the class corresponding to the confidence level as the type of the current acoustic data.
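As an illustration of how confidence levels that sum to 1 combine with the first-reference-value decision, here is a sketch using a softmax over hypothetical classifier scores. The class names mirror the FIG. 5 example; the scores themselves are invented.

```python
import numpy as np

CLASSES = ("vehicle", "horn", "siren", "ambient noise")  # hypothetical class set

def classify(logits, first_reference=0.7):
    """Convert raw scores to confidence levels and apply the threshold.

    Softmax guarantees the confidence levels sum to 1; the acoustic type
    is accepted only when the top confidence reaches the first reference
    value (a).
    """
    exp = np.exp(logits - np.max(logits))  # stable softmax
    confidence = exp / exp.sum()
    top = int(np.argmax(confidence))
    decided = bool(confidence[top] >= first_reference)
    return CLASSES[top], float(confidence[top]), decided
```

When no class clears the threshold, the data's type is left undetermined, matching the No path of step S41 described later.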

Accordingly, the sound recognition unit 130 may analyze the characteristics of the sound data and generate a sound classification result, which is information on what type of sound the sound data represents.

The filtering control unit 140 may determine the filtering frequency based on the sound classification result of the sound recognition unit 130 and transmit the filtering frequency to the data filtering unit 150. Here, the filtering frequency means at least one cut-off frequency that determines a filter such as a low-pass filter, a high-pass filter, or a band-pass filter.

The detailed operation of the filtering control unit 140 will be described later with reference to FIGS. 3 to 7.

The data filtering unit 150 may perform a filtering operation on the sound data provided from the data storage unit 120 based on the filtering frequency determined by the filtering control unit 140. That is, the data filtering unit 150 may perform a filtering operation that cancels, in the sound data, the frequency bands outside the pass band determined by the filtering frequency.
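A band-pass step of this kind might be sketched with a standard Butterworth filter; the filter order is an arbitrary choice, and the 800–1400 Hz band used in the test below is the pass band from the patent's FIG. 7 example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(sound, sample_rate, low_hz, high_hz, order=4):
    """Attenuate everything outside the pass band set by the filtering frequency."""
    sos = butter(order, [low_hz, high_hz], btype='bandpass',
                 fs=sample_rate, output='sos')
    # Zero-phase filtering so the waveform timing used for direction
    # tracking is not shifted.
    return sosfiltfilt(sos, sound)
```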

The sound tracker 160 may track the direction in which the sound is generated based on the filtered acoustic data for the acoustic type of the class whose confidence level is equal to or greater than the first reference value a (e.g., 0.7). The sound type may be provided from the sound recognition unit 130 or the filtering control unit 140.

The sound tracker 160 accumulates sound data corresponding to consecutive frames, identifies the same sound in each microphone's input through its temporal characteristics (waveform), compares the magnitudes of the same sound across microphones, and can calculate the differences in the arrival time of the sound reaching each microphone. The temporal features may be provided by the sound recognition unit 130.

Since the intensity of a sound is inversely proportional to the square of the distance, when the distance from the sound generation position doubles, the intensity of the sound is reduced to 1/4 (a reduction of about 6 dB). Assuming that the width of a typical vehicle is about 2 m and its length about 3 m, the difference in the magnitudes of the sensed sound can be sufficiently significant depending on the position of the point where the sound is generated.
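The 6 dB-per-doubling relationship follows directly from the inverse-square law and can be checked numerically:

```python
import math

def intensity_ratio(distance_ratio):
    """Sound intensity falls with the square of distance."""
    return 1.0 / distance_ratio ** 2

def level_drop_db(distance_ratio):
    """Corresponding drop in sound level, in decibels."""
    return -10.0 * math.log10(intensity_ratio(distance_ratio))
```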

For example, when the multi-channel microphone 50 is disposed as shown in FIG. 1 and a sound is generated at the upper right of the vehicle, the magnitude of the sound detected by the microphone located at the top is larger than the average magnitude of the sounds detected by the microphones located at the lower left and lower right. In addition, the sound detected by the microphone located on the lower right is larger than the sound detected by the microphone located on the lower left.

Using this characteristic, the approximate direction relative to the center of the vehicle 10 can be tracked from the magnitude of the sound collected by each microphone.

In addition, the angle to the sound generation position can be calculated using the differences (signal delays) in the arrival times of the sound reaching each microphone. To this end, the sound tracker 160 stores in advance a table in which angles to sound generation positions are mapped to the signal delay of each microphone. By applying the delay values for all angles to the current signal, the probability of a tracking target existing at each angle can be obtained, and the sound generation position can be estimated from it. This is possible because each angle to a sound generation position corresponds one-to-one to a combination of signal delays at the microphones.
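The table lookup described here might be sketched as follows. The microphone layout, the 5° angle grid, and the 10 m source radius are assumptions for illustration, and a real system would score delay patterns probabilistically rather than taking an exact nearest match.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def build_delay_table(mic_positions, angles_deg, radius=10.0):
    """Precompute, for each candidate angle, the arrival-time delay at each
    microphone for a source at that angle (delays relative to the first mic)."""
    table = {}
    for angle in angles_deg:
        theta = np.deg2rad(angle)
        src = np.array([radius * np.cos(theta), radius * np.sin(theta)])
        times = [np.linalg.norm(src - np.asarray(p)) / SPEED_OF_SOUND
                 for p in mic_positions]
        table[angle] = tuple(t - times[0] for t in times)
    return table

def estimate_angle(measured_delays, table):
    """Pick the angle whose stored delay pattern best matches the measurement
    (the angle/delay combinations correspond one-to-one)."""
    return min(table, key=lambda a: sum((m - s) ** 2
                                        for m, s in zip(measured_delays, table[a])))
```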

The sound tracker 160 can provide the sound announcement unit 200 with information on the approximate direction and/or angle relative to the center of the vehicle 10, together with information on the identified acoustic type (the acoustic type of the class whose confidence level is equal to or higher than the first reference value a).

The sound announcement unit 200 provides the driver with information about the sound generation region based on the information provided by the sound tracking apparatus 100. The sound announcement unit 200 may provide the information visually or audibly, or both.

The sound announcement unit 200 may be implemented as a HUD (Head-Up Display) or a cluster mounted in the vehicle 10 to provide information on the sound generation region. Alternatively, the sound announcement unit 200 may provide the driver with information on the sound generation region through a smart device (a smart watch or the like) connected to the sound tracker 100 through wired communication such as a CAN bus or short-range wireless communication such as Bluetooth, NFC, or Wi-Fi.

FIG. 3 is a flowchart illustrating an operation method of the sound tracker shown in FIG. 2. FIG. 4 is a flowchart illustrating step S40 shown in FIG. 3 in more detail. FIG. 5 is a table showing an example of acoustic classification results generated by the acoustic recognition unit shown in FIG. 2. FIG. 6 is a table showing an example of the frequency band for each sound source stored in the filtering control unit shown in FIG. 2. FIG. 7 is a diagram illustrating an embodiment of a method by which the filtering control unit shown in FIG. 2 determines a filtering frequency band.

Referring to FIGS. 2 to 7, the signal processing unit 110 may receive sound data generated through analog-to-digital conversion of a sound sensed in the vicinity of the vehicle 10 (S10).

The signal processing unit 110 performs noise filtering on the acquired sound data, and the data storage unit 120 can store the noise-removed sound data (S20).

The sound recognition unit 130 extracts feature values in the time domain and the frequency domain from the sound data received from the data storage unit 120, classifies the feature values through the classifier, and can generate a sound classification result (S30).

The filtering control unit 140 and the data filtering unit 150 may perform the filtering operation on the sound data based on the result of the sound classification (S40).

FIG. 4 shows the detailed steps of step S40.

The filtering control unit 140 can determine whether the confidence level of the highest class, i.e., the class with the highest confidence level in the sound classification result of the sound recognition unit 130, is equal to or greater than the first reference value a (S41). The acoustic type corresponding to the highest class is an embodiment of the "target sound source". That is, the target sound source refers to the sound source that is the object of sound tracking as a result of the sound recognition of the sound data.

The first reference value (a) may be set to, for example, 0.7 or more, as the reference value at which the acoustic type of the class corresponding to the confidence level can be determined to be the type of the current acoustic data.

FIG. 5 shows an example of a sound classification result, in which the confidence levels corresponding to the vehicle of the first class, the horn of the second class, the siren of the third class, and the ambient noise of the fourth class are 0.82, 0.02, 0.00, and 0.16, respectively. In addition, the determination result corresponding to step S41 may be included in the sound classification result, in which case step S41 need not actually be performed by the filtering controller 140. That is, the determination result records whether the corresponding confidence level is 0.7 or more; since the confidence level corresponding to the vehicle of the first class is 0.82, which is 0.7 or more, a positive determination result is recorded for it.

Hereinafter, each step will be described with reference to the example of FIG. 5.

If the confidence level of the highest class is less than the first reference value a (the No path of S41), that is, if the acoustic type of the class corresponding to the highest confidence level cannot be determined to be the type of the current acoustic data, the subsequent filtering steps are not performed.

In FIG. 5, when a = 0.7, the confidence level corresponding to the vehicle of the first class, the highest class, is 0.82, which is greater than the first reference value (a).

If the confidence level of the highest class is equal to or higher than the first reference value a (the Yes path of S41), that is, if the acoustic type of the highest class can be determined to be the type of the current acoustic data, step S42 is performed.

In FIG. 5, since the confidence level corresponding to the vehicle of the first class is 0.82, which is greater than the first reference value (a), the type of the current acoustic data can be judged to be a vehicle.

The filtering control unit 140 can determine whether, among the confidence levels corresponding to the subclasses of the highest class (the second to fourth classes of FIG. 5), there is a class whose confidence level is equal to or higher than a second reference value b (S42). The acoustic type of a subclass whose confidence level is equal to or higher than the second reference value (b) may be defined as an "interference sound source". That is, although an interference sound source is not itself the object of sound tracking, it is a sound source that has a high probability of being included in the sound data and therefore a high possibility of acting as noise in sound tracking.

This is done so that, in performing the filtering operation of the data filtering unit 150, the frequency band of an acoustic type that is likely to be included in the acoustic data is considered even though it was not determined to be the type of the current acoustic data.

The second reference value b may be set to a value, for example 0.1, that serves as the threshold for a significant possibility of being included in the acoustic data, and it may be set to any value equal to or lower than the first reference value (a).

If the confidence level of a subclass is equal to or greater than the second reference value b (the Yes path of S42), that is, if there is a subclass with a significant possibility of being included in the sound data, the filtering control unit 140 modifies the frequency band of the highest class using the frequency band of the subclass, and determines a frequency whose pass band is the modified frequency band as the filtering frequency (S43).

In FIG. 5, since the confidence level corresponding to the ambient noise of the fourth class is 0.16, which is equal to or greater than the second reference value b, the filtering control unit 140 modifies the frequency band of the first class using the frequency band of the fourth class.

The filtering control unit 140 stores the table of frequency bands for each sound source shown in FIG. 6; the frequency bands are distributed differently for the acoustic types corresponding to the respective classes.

The noise generated by a vehicle is mainly distributed in the frequency band of 800 to 2000 Hz. Ambient noise (other noise such as driving wind or airplane noise) is mainly distributed in the frequency band of 1400 to 2800 Hz.

A horn sound has a fundamental frequency of 300 to 500 Hz based on a typical horn design standard, with harmonics distributed at 700 to 900 Hz, 1100 to 1300 Hz, and 1500 to 1700 Hz. Although frequency components related to the horn sound may exist in higher bands, the bands above 1500 to 1700 Hz can be ignored because their magnitude gradually becomes negligibly small.

Likewise, a siren sound has a fundamental frequency of 600 to 800 Hz based on a typical siren design standard, with harmonics distributed at 1300 to 1500 Hz, 2000 to 2200 Hz, and 2700 to 2900 Hz.

The frequency band of the acoustic type of each class can be predetermined by experiment, by design standards, or the like.

Referring to FIG. 7, the frequency band of the vehicle, the highest class, is 800 to 2000 Hz, and the frequency band of the ambient noise whose confidence level is higher than the second reference value b is 1400 to 2800 Hz. Therefore, in the frequency band of 1400 to 2000 Hz, the vehicle sound judged to be the current sound type overlaps with the frequency band of the ambient noise, a subclass with a significant possibility of being included in the sound data. In this overlapped frequency band, the ambient noise is mixed with the vehicle sound and acts as noise from the viewpoint of sound tracking, so this band should be filtered out as shown in the lower part of FIG. 7.

Therefore, the final filtering frequency band of 800 to 1400 Hz is highly likely to contain purely vehicle sound.

According to another embodiment, when the overlapping frequency band exceeds a certain proportion (for example, 60%) of the frequency band of the vehicle sound, most of the information about the vehicle sound could be lost; in that case, the frequency band of the highest class may be determined as the filtering frequency without modification. This balances system performance between noise reduction and preservation of target information.

Also, when there is no overlapping frequency band, the frequency band of the highest class may be determined as the filtering frequency, as in step S44.
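Steps S43/S44 and the overlap-ratio exception can be summarized in a small sketch. The band endpoints in the test follow the patent's FIG. 6/FIG. 7 example; the handling of an overlap is simplified to the single-sided cases described in the text.

```python
def determine_filtering_band(target_band, interference_band, overlap_limit=0.6):
    """Remove from the target sound's band the part overlapped by the
    interference sound source, unless doing so would discard more than
    overlap_limit of the target band.

    Example from the patent: vehicle 800-2000 Hz, ambient noise
    1400-2800 Hz -> pass band 800-1400 Hz.
    """
    t_lo, t_hi = target_band
    i_lo, i_hi = interference_band
    lo, hi = max(t_lo, i_lo), min(t_hi, i_hi)
    if lo >= hi:                       # no overlap: keep the whole target band (S44)
        return target_band
    overlap_ratio = (hi - lo) / (t_hi - t_lo)
    if overlap_ratio > overlap_limit:  # too much would be lost: keep whole band
        return target_band
    if lo > t_lo:                      # overlap sits at the top of the band
        return (t_lo, lo)
    return (hi, t_hi)                  # overlap sits at the bottom
```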

If the highest class, or a class whose confidence level is equal to or higher than the second reference value (b), is an acoustic class having a plurality of frequency bands, such as a horn or a siren, the plurality of frequency bands should be considered together in determining the pass band or the overlapping frequency band.

When the trust level of every subclass is less than the second reference value (b) (No path of S42), that is, when there is no significant subclass likely to be included in the sound data, the filtering control unit 140 determines as the filtering frequency the frequency at which the frequency band of the highest class is directly set as the pass band (S44).

That is, since there is no sound type that acts as noise from the viewpoint of sound tracking, the main frequency band of the vehicle sound to be tracked is directly determined as the filtering frequency and the other frequency bands are excluded, so that the probability that the filtered sound data includes only the vehicle sound becomes very high.
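The decision logic of steps S42 to S44 described above can be sketched as follows. This is a minimal illustration assuming a single main band per class (a multi-band class such as a horn or siren would need all of its bands considered together); the function name and the 60% threshold are illustrative only.

```python
def decide_pass_band(target_band, interference_band, max_overlap_ratio=0.6):
    """Sketch of the filtering-frequency decision (steps S42-S44).

    target_band / interference_band are (low, high) tuples in Hz;
    interference_band is None when no significant subclass exists.
    Returns the pass band (low, high) to apply when filtering.
    """
    t_lo, t_hi = target_band
    if interference_band is None:
        # S44: no interfering class -> the whole target band is the pass band.
        return (t_lo, t_hi)

    i_lo, i_hi = interference_band
    ov_lo, ov_hi = max(t_lo, i_lo), min(t_hi, i_hi)
    overlap = max(0.0, ov_hi - ov_lo)
    if overlap == 0.0:
        # No overlapping band -> keep the whole target band (as in S44).
        return (t_lo, t_hi)

    if overlap / (t_hi - t_lo) > max_overlap_ratio:
        # Excluding the overlap would lose most of the target information,
        # so keep the whole target band despite the mixed-in noise.
        return (t_lo, t_hi)

    # S43: exclude the overlapped portion from the pass band. For simplicity
    # this sketch assumes the overlap touches one edge of the target band.
    if ov_lo > t_lo:
        return (t_lo, ov_lo)
    return (ov_hi, t_hi)
```

With the FIG. 7 example values, `decide_pass_band((800, 2000), (1400, 2800))` yields the final pass band (800, 1400).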

The data filtering unit 150 may perform a filtering operation on the sound data to attenuate the frequency bands outside the pass band defined by the determined filtering frequency (S45).
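A minimal way to realize such a filtering operation is an FFT mask that zeroes spectral content outside the pass band. This is only a sketch under that assumption; a production system would more likely use a designed FIR/IIR band-pass filter.

```python
import numpy as np

def bandpass_filter(sound_data, fs, pass_band):
    """Zero out spectral content outside pass_band (low, high) in Hz.

    A simple FFT-mask sketch of the data filtering operation; fs is the
    sampling rate of sound_data.
    """
    spectrum = np.fft.rfft(sound_data)
    freqs = np.fft.rfftfreq(len(sound_data), d=1.0 / fs)
    lo, hi = pass_band
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0  # cancel out-of-band content
    return np.fft.irfft(spectrum, n=len(sound_data))
```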

The sound tracker 160 may generate sound tracking information by tracking, based on the filtered sound data, the direction in which the sound of the highest class is generated (S50).
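Sound-direction tracking from multi-channel data typically relies on the signal delay between microphones. The following is a generic cross-correlation sketch of that idea, not the patent's exact tracker; the function names and the two-microphone geometry are assumptions.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the delay (seconds) of sig_b relative to sig_a by
    cross-correlation; positive means sig_b arrives later than sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

def delay_to_angle(delay, mic_distance, c=343.0):
    """Convert an inter-microphone delay to an arrival angle (radians)
    for two microphones mic_distance metres apart; c is the speed of
    sound in m/s."""
    x = np.clip(delay * c / mic_distance, -1.0, 1.0)
    return float(np.arcsin(x))
```

Applied to each microphone pair of a multi-channel array, such delays (together with signal magnitudes) can be combined into a direction estimate like the time-angle patterns shown in FIG. 8.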

The sound announcement unit 200 provides the driver with information on the sound generation region based on the information provided by the sound tracking apparatus 100 (S60).

FIG. 8 is a diagram showing measurement results with and without filtering applied in a specific situation.

Referring to FIG. 8, the measurement results show, for each frame over time, the angle (with respect to the center of the vehicle 10) at which sound is detected while the vehicle 10 is running in a specific situation.

It is assumed that the specific situation is one in which strong wind is blowing (intense driving wind occurs), a vehicle in the right rear is traveling at constant speed, and vehicles in the opposite lane pass occasionally.

The tables on the left and right show the direction-estimation results before filtering and after filtering, respectively.

That is, the table on the left shows the measurement result when the filtering operation of the data filtering unit 150 is not performed on the sound data, and the table on the right shows the measurement result when the filtering operation is performed.

In each table, the probability that an object is recognized at a given time-angle value decreases in the order red, yellow, green, and blue.

When filtering is not applied, it is difficult to recognize a pattern indicating the existence of nearby vehicles or their movement over time, as shown in the left table. This is because, without the filter, the wind noise caused by the windy environment is so large that a proper measurement result cannot be obtained.

However, when filtering is applied (filtering based on the final filtering frequency shown in FIG. 7), the red patterns corresponding to vehicle sounds can be analyzed: as shown in the right table, the vehicle traveling at constant speed in the right rear and the vehicles passing three times in the opposite lane (180° to 270°) can be recognized.

This is because, when filtering is applied, the band containing the driving wind, along with all other bands outside the frequency band of the vehicle sound, is removed, so that the sound direction is tracked only for the frequency band of the vehicle sound.

Therefore, according to the sound tracking information providing method, the vehicle sound tracking apparatus, and the vehicle including the same according to an embodiment of the present invention, the filtering operation is performed in consideration of the main frequency band of the recognized target sound, so that sound source tracking robust to noise can be performed.

In addition, in setting the filtering frequency of the filtering operation, filtering performance can be improved by considering not only the main frequency band of the target sound but also the frequency bands of other noises that overlap it.

The sound tracking information providing method described above can be implemented as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording media that store data decodable by a computer system, for example, ROM (Read Only Memory), RAM (Random Access Memory), magnetic tape, magnetic disk, flash memory, and optical data storage devices. In addition, the computer-readable recording medium may be distributed over computer systems connected through a computer network, and stored and executed as code readable in a distributed manner.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined in the appended claims.

Claims (23)

Storing sound data generated by sensing sounds generated in the vicinity of the vehicle;
Extracting features of the sound data and determining a target sound source;
Determining a filtering frequency based on a main frequency band of the target sound source; And
And performing a filtering operation on the acoustic data according to the filtering frequency,
Wherein the determining the filtering frequency comprises:
Determining a frequency at which a main frequency band of the interference sound source is excluded from a main frequency band of the target sound source as a pass band as a filtering frequency when an interference sound source is present,
and determining, when the frequency band in which the main frequency band of the target sound source overlaps with the main frequency band of the interference sound source exceeds a predetermined ratio of the main frequency band of the target sound source, a frequency at which the entire main frequency band of the target sound source is a pass band as the filtering frequency.
The method according to claim 1,
Wherein the target sound source is an acoustic type having a confidence level equal to or higher than a first reference value as a result of acoustic recognition of the acoustic data.
delete
delete
The method according to claim 1,
Wherein the determining the filtering frequency comprises:
And determining a frequency at which a main frequency band of the target sound source is a pass band as a filtering frequency when the interference sound source is absent.
The method according to claim 1,
Wherein the interference sound source is a type of sound having a confidence level higher than a second reference value in addition to the target sound source as a result of sound recognition of the sound data.
The method according to claim 1,
And generating information on the direction of the target sound source using the size and the signal delay of the filtered sound data.
The method according to claim 1,
Wherein the target sound source is determined according to a trust level for each sound type of an NN (Neural Network) classifier.
A data storage unit for storing sound data generated by sensing sounds generated in the vicinity of the vehicle;
An acoustic recognition unit for extracting characteristics of the sound data and determining a target sound source;
A filtering control unit for determining a filtering frequency based on a main frequency band of the target sound source; And
And a data filtering unit for performing a filtering operation on the sound data according to the filtering frequency,
Wherein the filtering control unit comprises:
Determining a frequency at which a main frequency band of the interference sound source is excluded from a main frequency band of the target sound source as a pass band as a filtering frequency when an interference sound source is present,
and determining, when the frequency band in which the main frequency band of the target sound source overlaps with the main frequency band of the interference sound source exceeds a predetermined ratio of the main frequency band of the target sound source, a frequency at which the entire main frequency band of the target sound source is a pass band as the filtering frequency.
10. The method of claim 9,
Wherein the target sound source is an acoustic type having a confidence level equal to or higher than a first reference value as a result of acoustic recognition of the acoustic data.
delete
delete
10. The method of claim 9,
Wherein the filtering control unit comprises:
Wherein a frequency at which the main frequency band of the target sound source is a pass band is determined as the filtering frequency when the interference sound source is absent.
10. The method of claim 9,
Wherein the interference sound source is an acoustic type having a confidence level equal to or higher than a second reference value in addition to the target sound source as a result of acoustic recognition of the acoustic data.
10. The method of claim 9,
And a sound tracker for generating information on the direction of the target sound source using the size and the signal delay of the filtered sound data.
10. The method of claim 9,
Wherein the target sound source is determined according to a trust level for each sound type of an NN (Neural Network) classifier.
A multi-channel microphone for generating sound data by sensing sound generated in the vicinity of the vehicle;
A sound tracker for determining a filtering frequency based on a main frequency band of the target sound source determined by extracting the characteristics of the sound data, and performing a filtering operation on the sound data according to the filtering frequency; And
And an acoustic notification unit for visually or audibly informing the driver of information on the direction of the target sound source transmitted from the sound tracker,
The sound tracker comprises:
Determining a frequency at which a main frequency band of the interference sound source is excluded from a main frequency band of the target sound source as a pass band as a filtering frequency when an interference sound source is present,
and determining, when the frequency band in which the main frequency band of the target sound source overlaps with the main frequency band of the interference sound source exceeds a predetermined ratio of the main frequency band of the target sound source, a frequency at which the entire main frequency band of the target sound source is a pass band as the filtering frequency.
18. The method of claim 17,
Wherein the target sound source is an acoustic type having a confidence level equal to or higher than a first reference value as a result of acoustic recognition of the acoustic data.
delete
delete
18. The method of claim 17,
The sound tracker comprises:
Wherein when the interference sound source is absent, a frequency at which the main frequency band of the target sound source is a pass band is determined as the filtering frequency.
18. The method of claim 17,
Wherein the interference sound source is an acoustic type having a confidence level higher than a second reference value in addition to the target sound source as a result of acoustic recognition of the acoustic data.
18. The method of claim 17,
Wherein the multi-channel microphone includes microphones installed at the upper, lower left, and lower right sides of the center of the vehicle.
KR1020150166392A 2015-11-26 2015-11-26 Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same KR101748270B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150166392A KR101748270B1 (en) 2015-11-26 2015-11-26 Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same

Publications (2)

Publication Number Publication Date
KR20170061407A KR20170061407A (en) 2017-06-05
KR101748270B1 true KR101748270B1 (en) 2017-06-16

Family

ID=59223189

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150166392A KR101748270B1 (en) 2015-11-26 2015-11-26 Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same

Country Status (1)

Country Link
KR (1) KR101748270B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102135896B1 (en) * 2018-08-28 2020-07-20 국방과학연구소 A Robust Tracking Device and Method for Passive Sensor System

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101228749B1 (en) * 2011-08-24 2013-01-31 한국과학기술원 Position detecting system and method using audio frequency and and recording medium for the same


Similar Documents

Publication Publication Date Title
KR101759143B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR101892028B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR101748276B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR101768145B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR101759144B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR101807616B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
EP2876639B1 (en) Using external sounds to alert vehicle occupants of external events
US10607488B2 (en) Apparatus and method of providing visualization information of rear vehicle
WO2012097150A1 (en) Automotive sound recognition system for enhanced situation awareness
KR101519255B1 (en) Notification System for Direction of Sound around a Vehicle and Method thereof
Sammarco et al. Crashzam: Sound-based Car Crash Detection.
KR101250668B1 (en) Method for recogning emergency speech using gmm
Lee et al. Acoustic hazard detection for pedestrians with obscured hearing
KR101748270B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
Valiveti et al. Soft computing based audio signal analysis for accident prediction
KR101901800B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR102331758B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
Sathyanarayana et al. Leveraging speech-active regions towards active safety in vehicles
KR102601171B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
KR102378940B1 (en) Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
Marciniuk et al. Acoustic Road Monitoring

Legal Events

Date Code Title Description
GRNT Written decision to grant