KR101748270B1 - Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same - Google Patents
Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same
- Publication number
- KR101748270B1 (application number KR1020150166392A)
- Authority
- KR
- South Korea
- Prior art keywords
- sound source
- sound
- filtering
- frequency band
- data
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q5/00—Arrangement or adaptation of acoustic signal devices
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B3/00—Audible signalling systems; Audible personal calling systems
- G08B3/10—Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B7/00—Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
- G08B7/06—Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
Abstract
The present invention relates to a method for providing sound tracking information capable of accurately recognizing sounds generated in the vicinity of a vehicle, a vehicle sound tracking apparatus, and a vehicle including the same. A method for providing sound tracking information according to an embodiment of the present invention includes storing sound data generated by sensing sounds in the vicinity of a vehicle, extracting characteristics of the sound data to determine a target sound source, determining a filtering frequency based on a main frequency band of the target sound source, and performing a filtering operation on the sound data according to the filtering frequency.
Description
The present invention relates to a method for providing sound tracking information, a vehicle sound tracking apparatus, and a vehicle including the same, and more particularly, to a sound tracking information providing method capable of accurately recognizing sounds generated in the vicinity of a vehicle, a vehicle sound tracking apparatus, and a vehicle including the same.
Various sounds are generated around a vehicle in operation. However, elderly drivers with hearing loss or drivers with poor hearing may be insensitive to certain sounds (e.g., horn sounds, siren sounds, etc.) of which the driver should be aware. In addition, owing to advances in vehicle sound insulation technology, even a person with good hearing often fails to accurately hear sounds generated outside the vehicle. Also, a driver who notices a specific sound coming from behind may look back to find it, which can be a threat to safe driving.
Therefore, it is necessary to inform the driver, without interfering with safe operation, about specific sounds generated in the vicinity of the vehicle and the direction from which each sound originates. However, it may be difficult to provide accurate information about a particular sound, because the various sounds generated during vehicle operation can act as noise with respect to one another.
An object of the present invention is to provide a sound tracking information providing method, a vehicle sound tracking apparatus, and a vehicle including the same, which can provide accurate information on sound around a vehicle occurring during vehicle operation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are not restrictive of the invention as defined by the appended claims.
According to an aspect of the present invention, there is provided a method for providing sound tracking information, the method comprising: storing sound data generated by sensing sounds in the vicinity of a vehicle; extracting characteristics of the sound data to determine a target sound source; determining a filtering frequency based on a main frequency band of the target sound source; and performing a filtering operation on the sound data according to the filtering frequency.
A sound tracker according to an embodiment of the present invention includes a data storage unit for storing sound data generated by sensing sounds in the vicinity of a vehicle, an acoustic recognition unit for extracting characteristics of the sound data to determine a target sound source, a filtering control unit for determining a filtering frequency based on a main frequency band of the target sound source, and a data filtering unit for performing a filtering operation on the sound data according to the filtering frequency.
A vehicle according to an embodiment of the present invention includes a multi-channel microphone for generating sound data by sensing sounds in the vicinity of the vehicle, a sound tracker for extracting characteristics of the sound data to determine a target sound source, determining a filtering frequency based on a main frequency band of the target sound source, and performing a filtering operation on the sound data according to the filtering frequency, and an acoustic notifier for visually or audibly informing the driver of information on the direction of the target sound source transmitted from the sound tracker.
According to the sound tracking information providing method, the vehicle sound tracking apparatus, and the vehicle including the same, a filtering operation is performed in consideration of the main frequency band of the recognized target sound before direction tracking, so that sound source tracking robust against noise can be achieved.
In addition, in setting the filtering frequency of the filtering operation, the filtering performance can be improved by taking into consideration not only the main frequency band of the target sound but also the frequency band of other noise in which the frequency band overlaps.
The effects obtainable by the present invention are not limited to the above-mentioned effects, and other effects not mentioned herein will be clearly understood by those skilled in the art from the following description.
FIG. 1 is a view showing a vehicle according to an embodiment of the present invention.
FIG. 2 is a detailed block diagram of the sound tracker shown in FIG. 1.
FIG. 3 is a flowchart illustrating an operation method of the sound tracker shown in FIG. 2.
FIG. 4 is a flowchart illustrating step S40 of FIG. 3 in more detail.
FIG. 5 is a table showing an example of the sound classification results generated by the acoustic recognition unit shown in FIG. 2.
FIG. 6 is a table showing an example of the frequency band of each sound source stored in the filtering control unit shown in FIG. 2.
FIG. 7 is a diagram illustrating an embodiment of a method by which the filtering control unit shown in FIG. 2 determines a filtering frequency band.
FIG. 8 is a diagram showing measurement results according to whether filtering is applied in a specific situation.
Hereinafter, at least one embodiment of the present invention will be described in detail with reference to the drawings. The suffixes "module" and "unit" for the components used in the following description are given or used interchangeably merely for ease of description, and do not by themselves have distinct meanings or roles.
FIG. 1 is a view showing a vehicle according to an embodiment of the present invention.
Referring to FIG. 1, the
The
The specific operation of the
FIG. 2 is a detailed block diagram of the sound tracker shown in FIG. 1.
Referring to FIG. 2, the
The
There are various sounds around the vehicle: engine sounds of other nearby vehicles, tire friction sounds, signal sounds from traffic lights and electronic sign boards, and ordinary natural sounds. However, the driver is not interested in most of them, and some sounds do not penetrate the vehicle's soundproofing and never reach the driver. When a horn sounds, however, the driver wants to know from which direction it originated and whether it is directed at his or her own vehicle. Depending on how the horn sound is recognized, the driver can take various actions such as reducing the speed of the vehicle, changing lanes, or turning on the hazard lights.
In addition, the driver may not be able to hear a nearby horn when the volume of the vehicle's audio system is set too high. In this case, it may be necessary to visually inform the driver that a horn has sounded in the vicinity of the vehicle.
The driver may also be interested in other sounds. For example, when a vehicle suddenly stops, a loud friction sound is generated between the tires and the road surface. Such a friction sound can be related to a traffic accident, or to a situation immediately preceding one, and therefore requires the driver's attention. As another example, a collision sound occurs when the vehicle collides with another vehicle. By recognizing a collision sound and informing the driver of the direction from which it came, such as the front or the side, a subsequent accident can be prevented.
If a siren of a police car or an ambulance sounds near the driver, the driver should take measures such as changing lanes so that the vehicle can pass. In certain cases, a driver may even be subject to legal penalties for failing to take the necessary action. Therefore, the driver needs to recognize the siren of a vehicle belonging to a public institution.
The
The
The
The
In the field of speech signal processing, the Mel-Frequency Cepstrum (MFC) is one way of representing the short-term power spectrum of a signal. It is obtained by taking a cosine transform of the log power spectrum on the non-linear mel frequency scale. MFCCs (Mel-Frequency Cepstral Coefficients) are the coefficients that make up an MFC. To compute MFCCs, a pre-emphasis filter is generally applied to short-term sound data (a signal frame), and a DFT (Discrete Fourier Transform) is applied to the result. A mel-scale filter bank is then used to obtain the power in each mel band, and the logarithm of each mel-band power is taken. Performing a DCT (Discrete Cosine Transform) on the resulting values yields the MFCC values.
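As an illustrative sketch only, and not the patent's actual implementation, the MFCC pipeline described above (pre-emphasis, DFT, mel filter bank, log, DCT) can be written roughly as follows. All parameter values (sampling rate, FFT size, number of mel bands and coefficients) are assumptions:

```python
import numpy as np

def mfcc_sketch(signal, sr=16000, n_fft=512, n_mels=20, n_ceps=12):
    """Toy MFCC of one short-term frame: pre-emphasis -> DFT -> mel
    filter bank -> log -> DCT, following the steps in the text."""
    # 1. Pre-emphasis filter (boosts high frequencies).
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Power spectrum of one windowed short-term frame via DFT.
    frame = emphasized[:n_fft] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
    # 3. Triangular mel-scale filter bank.
    mel_max = 2595 * np.log10(1 + (sr / 2) / 700)
    mel_pts = np.linspace(0, mel_max, n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # 4. Log of each mel-band power, then DCT-II gives the coefficients.
    log_mel = np.log(fbank @ power + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return dct @ log_mel
```

In practice a library routine would be used; this sketch only mirrors the order of operations named in the paragraph above.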
The total power spectrum means the energy distribution of the entire spectrum within a predetermined frame interval, and the subband power means the energy distribution of the spectrum within each of a number of subband intervals (typically four) into which the full band is divided. The pitch frequency can be obtained by detecting the peak of the normalized autocorrelation function.
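A minimal sketch of the pitch estimate mentioned above, the peak of the normalized autocorrelation, might look like this; the 60 to 500 Hz search range is an assumption, not from the patent:

```python
import numpy as np

def pitch_from_autocorr(frame, sr):
    # Normalized autocorrelation of one short-term frame; the pitch is
    # sr / lag, where lag is the strongest peak in a plausible pitch range.
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]                       # normalize so ac[0] == 1
    lo, hi = int(sr / 500), int(sr / 60)  # assumed 60-500 Hz search range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag
```

For a pure 200 Hz tone sampled at 16 kHz, the strongest autocorrelation peak falls at a lag of 80 samples, giving 200 Hz back.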
The
In the present specification, the classifier will be described as an NN classifier.
The classifier of the
The sound classification result generated by the classifier of the
The
Accordingly, the
The
The detailed operation of the
The
The
The
Since the magnitude of the sound is inversely proportional to the square of the distance, when the distance from the sound generation position is doubled, the magnitude of the sound is reduced to 1/4 (about 6 dB reduction). Assuming that the width of a typical vehicle is about 2 m and the length is about 3 m, the size difference of the sensed sound may have a sufficiently significant value depending on the position of the point where the sound is generated.
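The inverse-square relationship above can be checked numerically. This helper is hypothetical and not part of the patent; it converts a distance ratio into the corresponding level difference in dB:

```python
import math

def attenuation_db(r1, r2):
    # Sound intensity falls off as 1/r^2, so the level difference between
    # distances r1 and r2 is 10*log10((r2/r1)^2) = 20*log10(r2/r1) dB.
    return 10 * math.log10((r2 / r1) ** 2)
```

Doubling the distance yields roughly 6 dB of attenuation, matching the intensity dropping to 1/4 as stated in the text.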
For example, when a
With this characteristic, the approximate direction based on the center of the
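One simple way to turn per-microphone sound levels into an approximate direction, purely an illustrative assumption since the patent does not specify the computation, is a level-weighted circular mean of the microphone directions:

```python
import math

def coarse_direction_deg(levels, mic_angles_deg):
    # Weight each microphone's nominal direction by its measured level and
    # take the circular mean; louder microphones pull the estimate toward them.
    x = sum(l * math.cos(math.radians(a)) for l, a in zip(levels, mic_angles_deg))
    y = sum(l * math.sin(math.radians(a)) for l, a in zip(levels, mic_angles_deg))
    return math.degrees(math.atan2(y, x)) % 360
```

With three microphones at 0°, 120°, and 240°, a sound heard only by the first microphone is placed at 0°.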
In addition, the angle to the sound generation position can be calculated using the difference value (signal delay) of the arrival time of the sound reaching each microphone. At this time, the
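The time-delay relationship described above is the classical far-field direction-of-arrival formula sin(θ) = c·Δt/d. A sketch follows; the microphone spacing and speed of sound are assumed values, not figures from the patent:

```python
import math

def doa_angle_deg(delay_s, mic_spacing_m, c=343.0):
    # Far-field assumption: the extra path length to the farther microphone
    # is d*sin(theta) = c*delay, so theta = asin(c*delay/d).
    # The argument is clamped to [-1, 1] to guard against measurement noise.
    s = max(-1.0, min(1.0, c * delay_s / mic_spacing_m))
    return math.degrees(math.asin(s))
```

For example, with microphones 0.5 m apart, the delay corresponding to a 30° arrival angle is recovered exactly by the inverse formula.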
The
The
The
FIG. 3 is a flowchart illustrating an operation method of the sound tracker shown in FIG. 2. FIG. 4 is a flowchart illustrating step S40 of FIG. 3 in more detail. FIG. 5 is a table showing an example of the sound classification results generated by the acoustic recognition unit shown in FIG. 2. FIG. 6 is a table showing an example of the frequency band of each sound source stored in the filtering control unit shown in FIG. 2. FIG. 7 is a diagram illustrating an embodiment of a method by which the filtering control unit shown in FIG. 2 determines a filtering frequency band.
Referring to FIGS. 2 to 7, the
The
The
The
FIG. 4 shows the detailed steps of step S40.
The
The first reference value (a) is a threshold for deciding that the acoustic type of the class with the corresponding confidence level can be determined as the type of the current sound data; it may be set to 0.7, for example.
FIG. 5 shows an example of a sound classification result, in which the confidence levels corresponding to the vehicle (first class), the horn (second class), and the ambient noise (fourth class) are 0.82, 0.02, and 0.16, respectively. In addition, the result of the determination corresponding to step S41 may be included in the sound classification result, and in this case step S41 may not be performed.
Hereinafter, each step will be described with reference to the example of FIG.
If the confidence level of the highest class is less than the first reference value (a) (No path of S41), that is, if the acoustic type of the highest class cannot be determined as the type of the current sound data, the subsequent steps are not performed for the current sound data.
In FIG. 5, when a = 0.7, the confidence level corresponding to the vehicle of the first class, which is the highest class, is 0.82, which is greater than the first reference value (a).
If the confidence level of the highest class is equal to or greater than the first reference value (a) (Yes path of S41), that is, if the acoustic type of the highest class can be determined as the type of the current sound data, step S42 is performed.
In FIG. 5, since the confidence level corresponding to the vehicle of the first class is 0.82 and is greater than the first reference value (a), the type of current acoustic data can be judged as a vehicle.
The
This is because an acoustic type that is not determined to be the type of the current sound data, but that is likely to be included in the sound data, still has a frequency band that must be considered in performing the filtering operation.
The second reference value (b) is a threshold for judging that an acoustic type has a considerable possibility of being included in the sound data; it may be set to 0.1, for example, and is set equal to or lower than the first reference value (a).
If the trust level of the subclass is equal to or greater than the second reference value b (Yes path of S42), that is, if there is a subclass having a considerable possibility to be included in the sound data, the
In FIG. 5, since the confidence level corresponding to the ambient noise of the fourth class is 0.16, which is equal to or greater than the second reference value (b), the ambient noise is treated as an interference sound source.
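The two threshold checks, S41 with the first reference value (a) and S42 with the second reference value (b), can be sketched as follows. The function and dictionary names are hypothetical; only the example confidences of FIG. 5 are taken from the text:

```python
def classify_decision(confidences, a=0.7, b=0.1):
    # S41: the highest class must reach the first reference value (a),
    # otherwise the current sound type cannot be determined.
    top = max(confidences, key=confidences.get)
    if confidences[top] < a:
        return None, []
    # S42: any other class at or above the second reference value (b)
    # is treated as a possible interference sound source.
    interferers = [c for c, p in confidences.items() if c != top and p >= b]
    return top, interferers
```

With the FIG. 5 values, the vehicle class (0.82) becomes the target and the ambient noise (0.16) is flagged as an interference sound source.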
The
Noise generated by a vehicle is mainly distributed in the frequency band of 800 to 2000 Hz. Ambient noise (other noise such as driving wind or airplane noise) is mainly distributed in the frequency band of 1400 to 2800 Hz.
In the case of a horn sound, the fundamental frequency is distributed in the range of 300 to 500 Hz based on typical horn design standards, with harmonics at 700 to 900 Hz, 1100 to 1300 Hz, and 1500 to 1700 Hz. Although horn-related frequency components may exist in higher bands, frequencies above 1500 to 1700 Hz can be ignored because their magnitude gradually becomes negligibly small.
Likewise, a siren sound has a fundamental frequency distributed at 600 to 800 Hz based on typical siren design standards, with harmonics distributed at 1300 to 1500 Hz, 2000 to 2200 Hz, and 2700 to 2900 Hz.
The frequency band of the acoustic type of each of these classes can be predetermined by experiment or by design criteria or the like.
In FIG. 7, the frequency band of the vehicle, the highest class, is 800 to 2000 Hz, and the frequency band of the ambient noise, whose confidence level is equal to or greater than the second reference value (b), is 1400 to 2800 Hz. Therefore, in the band of 1400 to 2000 Hz, the vehicle sound judged to be the current sound type overlaps with the frequency band of the ambient noise, the sub-class with a considerable possibility of being included in the sound data. In this overlapped band, the ambient noise, which acts as noise from the viewpoint of sound tracking, is mixed with the vehicle sound, so this band should be filtered out as shown in the lower part of FIG. 7.
Therefore, the band of 800 to 1400 Hz, which becomes the final filtering frequency, is highly likely to contain purely the vehicle sound.
According to another embodiment, when the overlapping frequency band exceeds a certain ratio (for example, 60%) of the frequency band of the vehicle sound, excluding it would risk losing most of the information about the vehicle sound. In that case, the entire main frequency band of the vehicle sound may be determined as the filtering frequency. This balances system performance between noise reduction and preservation of target information.
Also, when there is no overlapping frequency band, the main frequency band of the highest class may be determined as the filtering frequency, as in step S44.
If the highest class, or an acoustic class whose confidence level is equal to or greater than the second reference value (b), has multiple frequency bands, as a horn or a siren does, those multiple bands should be considered together when determining the pass band and the overlapping frequency bands.
When the confidence level of every sub-class is less than the second reference value (b) (No path of S42), that is, when no sub-class has a considerable possibility of being included in the sound data, the main frequency band of the highest class is determined as the filtering frequency.
That is, since there is no acoustic type that acts as noise from the viewpoint of sound tracking, the main frequency band of the vehicle sound to be tracked is directly determined as the filtering frequency and the other frequency bands are excluded, so that the probability that the filtered data contains only the vehicle sound becomes very high.
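The band-selection logic just described (exclude the overlap with an interference band, keep the whole target band when the overlap exceeds a certain ratio, 60% in the example, and pass the whole band when there is no interference) can be sketched as follows. Band edges are (lo, hi) tuples in Hz, and the one-sided-overlap handling is a simplifying assumption of this sketch:

```python
def filtering_band(target, interferers, max_overlap_ratio=0.6):
    # target: (lo, hi) main band of the target sound source in Hz.
    # interferers: list of (lo, hi) bands of interference sound sources.
    lo, hi = target
    for ilo, ihi in interferers:
        olo, ohi = max(lo, ilo), min(hi, ihi)
        if olo >= ohi:
            continue  # no overlap with this interferer
        if (ohi - olo) / (hi - lo) > max_overlap_ratio:
            # Excluding the overlap would lose most of the target's
            # information, so keep the whole main band.
            return (lo, hi)
        # Otherwise exclude the overlapped portion (one-sided overlap assumed).
        if ilo <= lo:
            lo = ohi
        else:
            hi = olo
    return (lo, hi)
```

With the FIG. 7 example (vehicle 800 to 2000 Hz, ambient noise 1400 to 2800 Hz), the overlap of 1400 to 2000 Hz is 50% of the target band, so it is excluded and the final filtering band is 800 to 1400 Hz.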
The
The
The
FIG. 8 is a diagram showing measurement results according to whether filtering is applied in a specific situation.
Referring to Fig. 8, when a specific situation occurs while the
It is assumed that, in this specific situation, strong wind is blowing (intense driving wind occurs), a vehicle at the right rear is traveling at constant speed, and vehicles in the opposite lane pass by occasionally.
The tables on the left and right sides show the measurement results of direction estimation before filtering and after filtering, respectively.
That is, the left table shows the measurement result obtained without applying the filtering operation, and the right table shows the result obtained with it applied.
In each table, the color scale from red through yellow and green to blue indicates a decreasing probability that a recognized object exists at the corresponding time-angle value.
When filtering is not applied, it is difficult to recognize any pattern indicating the existence of nearby vehicles and their movement over time, as shown in the left table. This is because, without the filter, the wind noise in the windy environment is so large that a proper measurement result cannot be obtained.
However, when filtering is applied (filtering based on the final filtering frequency shown in FIG. 7), red patterns corresponding to vehicle sounds can be discerned. As shown in the right table, the vehicles traveling in the opposite lane (180° to 270°) can be recognized three times.
This is because, when filtering is applied, the bands containing the driving wind, as well as all other bands outside the frequency band of the vehicle sound, are filtered out, so that sound direction tracking is performed only on the frequency band of the vehicle sound.
Therefore, according to the sound tracking information providing method, the vehicle sound tracking apparatus, and the vehicle including the same according to an embodiment of the present invention, the filtering operation is performed in consideration of the main frequency band of the recognized target sound, making it possible to perform sound source tracking that is robust against noise.
In addition, in setting the filtering frequency of the filtering operation, the filtering performance can be improved by taking into consideration not only the main frequency band of the target sound but also the frequency band of other noise in which the frequency band overlaps.
The sound tracking information providing method described above can be implemented as a computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording media storing data that can be decoded by a computer system. For example, it may be a ROM (Read Only Memory), a RAM (Random Access Memory), a magnetic tape, a magnetic disk, a flash memory, an optical data storage device, or the like. In addition, the computer-readable recording medium may be distributed and executed in a computer system connected to a computer network, and may be stored and executed as a code readable in a distributed manner.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined in the appended claims.
Claims (23)
Extracting features of the sound data and determining a target sound source;
Determining a filtering frequency based on a main frequency band of the target sound source; And
And performing a filtering operation on the acoustic data according to the filtering frequency,
Wherein the determining the filtering frequency comprises:
Determining a frequency at which a main frequency band of the interference sound source is excluded from a main frequency band of the target sound source as a pass band as a filtering frequency when an interference sound source is present,
When a frequency band in which the main frequency band of the target sound source overlaps with the main frequency band of the interference sound source exceeds a predetermined ratio of the main frequency band of the target sound source, determining the main frequency band of the target sound source as the filtering frequency.
Wherein the target sound source is an acoustic type having a confidence level equal to or higher than a first reference value as a result of acoustic recognition of the acoustic data.
Wherein the determining the filtering frequency comprises:
And determining a frequency at which a main frequency band of the target sound source is a pass band as a filtering frequency when the interference sound source is absent.
Wherein the interference sound source is a type of sound having a confidence level higher than a second reference value in addition to the target sound source as a result of sound recognition of the sound data.
And generating information on the direction of the target sound source using the size and the signal delay of the filtered sound data.
Wherein the target sound source is determined according to a trust level for each sound type of an NN (Neural Network) classifier.
An acoustic recognition unit for extracting characteristics of the sound data and determining a target sound source;
A filtering control unit for determining a filtering frequency based on a main frequency band of the target sound source; And
And a data filtering unit for performing a filtering operation on the sound data according to the filtering frequency,
Wherein the filtering control unit comprises:
Determining a frequency at which a main frequency band of the interference sound source is excluded from a main frequency band of the target sound source as a pass band as a filtering frequency when an interference sound source is present,
When a frequency band in which the main frequency band of the target sound source overlaps with the main frequency band of the interference sound source exceeds a predetermined ratio of the main frequency band of the target sound source, the sound tracking apparatus determines the main frequency band of the target sound source as the filtering frequency.
Wherein the target sound source is an acoustic type having a confidence level equal to or higher than a first reference value as a result of acoustic recognition of the acoustic data.
Wherein the filtering control unit comprises:
Wherein the frequency of the main frequency band of the target sound source is determined as a filtering frequency when the interference sound source is absent.
Wherein the interference sound source is an acoustic type having a confidence level equal to or higher than a second reference value in addition to the target sound source as a result of acoustic recognition of the acoustic data.
And a sound tracker for generating information on the direction of the target sound source using the size and the signal delay of the filtered sound data.
Wherein the target sound source is determined according to a trust level for each sound type of an NN (Neural Network) classifier.
A sound tracker for determining a filtering frequency based on a main frequency band of the target sound source determined by extracting the characteristics of the sound data, and performing a filtering operation on the sound data according to the filtering frequency; And
And an acoustic notification unit for visually or audibly informing the driver of information on the direction of the target sound source transmitted from the sound tracker,
The sound tracker comprises:
Determining a frequency at which a main frequency band of the interference sound source is excluded from a main frequency band of the target sound source as a pass band as a filtering frequency when an interference sound source is present,
When a frequency band in which the main frequency band of the target sound source overlaps with the main frequency band of the interference sound source exceeds a predetermined ratio of the main frequency band of the target sound source, the sound tracker determines the main frequency band of the target sound source as the filtering frequency.
Wherein the target sound source is an acoustic type having a confidence level equal to or higher than a first reference value as a result of acoustic recognition of the acoustic data.
The sound tracker comprises:
Wherein when the interference sound source is absent, a frequency at which the main frequency band of the target sound source is a pass band is determined as the filtering frequency.
Wherein the interference sound source is an acoustic type having a confidence level higher than a second reference value in addition to the target sound source as a result of acoustic recognition of the acoustic data.
Wherein the multi-channel microphone includes a microphone installed at the upper, lower left, and lower right sides of the center of the vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150166392A KR101748270B1 (en) | 2015-11-26 | 2015-11-26 | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150166392A KR101748270B1 (en) | 2015-11-26 | 2015-11-26 | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170061407A KR20170061407A (en) | 2017-06-05 |
KR101748270B1 true KR101748270B1 (en) | 2017-06-16 |
Family
ID=59223189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150166392A KR101748270B1 (en) | 2015-11-26 | 2015-11-26 | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101748270B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102135896B1 (en) * | 2018-08-28 | 2020-07-20 | 국방과학연구소 | A Robust Tracking Device and Method for Passive Sensor System |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101228749B1 (en) * | 2011-08-24 | 2013-01-31 | 한국과학기술원 | Position detecting system and method using audio frequency and and recording medium for the same |
- 2015
- 2015-11-26: KR application KR1020150166392A; patent KR101748270B1/en; active, IP Right Grant
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101228749B1 (en) * | 2011-08-24 | 2013-01-31 | 한국과학기술원 | Position detecting system and method using audio frequency and and recording medium for the same |
Also Published As
Publication number | Publication date |
---|---|
KR20170061407A (en) | 2017-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101759143B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR101892028B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR101748276B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR101768145B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR101759144B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR101807616B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
EP2876639B1 (en) | Using external sounds to alert vehicle occupants of external events | |
US10607488B2 (en) | Apparatus and method of providing visualization information of rear vehicle | |
WO2012097150A1 (en) | Automotive sound recognition system for enhanced situation awareness | |
KR101519255B1 (en) | Notification System for Direction of Sound around a Vehicle and Method thereof | |
Sammarco et al. | Crashzam: Sound-based Car Crash Detection. | |
KR101250668B1 (en) | Method for recogning emergency speech using gmm | |
Lee et al. | Acoustic hazard detection for pedestrians with obscured hearing | |
KR101748270B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
Valiveti et al. | Soft computing based audio signal analysis for accident prediction | |
KR101901800B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR102331758B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
Sathyanarayana et al. | Leveraging speech-active regions towards active safety in vehicles | |
KR102601171B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
KR102378940B1 (en) | Method for providing sound detection information, apparatus detecting sound around vehicle, and vehicle including the same | |
Marciniuk et al. | Acoustic Road Monitoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GRNT | Written decision to grant |