CN112532941A - Vehicle source intensity monitoring method and device, electronic equipment and storage medium - Google Patents

Vehicle source intensity monitoring method and device, electronic equipment and storage medium

Info

Publication number
CN112532941A
CN112532941A
Authority
CN
China
Prior art keywords
vehicle
source intensity
voiceprint
information
audio information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011374536.1A
Other languages
Chinese (zh)
Inventor
陈志菲
段立峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhongke Shengshi Intelligent Technology Co ltd
Original Assignee
Nanjing Zhongke Shengshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhongke Shengshi Intelligent Technology Co ltd
Priority to CN202011374536.1A
Publication of CN112532941A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/18Artificial neural networks; Connectionist approaches
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166Microphone arrays; Beamforming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a vehicle source intensity monitoring method and device, an electronic device and a storage medium, relating to the field of data processing. The vehicle source intensity monitoring method comprises the following steps: acquiring, through a video capture device, the start time and the end time of a vehicle passing through a monitoring area; acquiring, through sonar equipment, the audio information within the range of the start time and the end time, the audio information comprising various types of ambient sounds; and extracting the source intensity information of the vehicle from the audio information through a filtering algorithm. In the embodiments of the application, the video information and the audio information are jointly analyzed through their correlation in time to obtain the audio information corresponding to the vehicle's presence in the monitoring area, and the source intensity information of the vehicle is extracted from that audio information with a filtering algorithm. The main voiceprint characteristics of the vehicle's running noise can thus be identified, solving the problem that prior-art noise monitoring collects only a single type of data and improving the monitoring of noise data.

Description

Vehicle source intensity monitoring method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of data processing, in particular to a vehicle source intensity monitoring method and device, electronic equipment and a storage medium.
Background
In recent years, with the rapid development of the urban economy, the number of motor vehicles has grown rapidly and traffic noise pollution has become increasingly serious. Some road sections where traffic noise exceeds the standard still fail to meet it even after sound-insulation and noise-reduction treatment such as sound barriers, seriously affecting production and life in the surrounding residential and business areas. Traffic noise is therefore the most important source of urban environmental noise, and its prevention and control is a key task in improving the urban acoustic environment.
Traffic noise monitoring is an important means of assessing the traffic noise pollution of urban roads and a prerequisite for drawing an urban noise map. Conventional audio-video linked road noise monitoring equipment is mainly law-enforcement equipment for illegal honking: sonar detects whistle signals in real time, and once a vehicle whistle is detected its position is calculated immediately, guiding a camera to capture a picture, superimposing an acoustic cloud-map marker at the whistle position in the picture, and extracting the license plate as a basis for law enforcement.
Within traffic noise, however, whistle noise is only one component; with the existing noise monitoring methods the monitored data are of a single type and the effect is poor.
Disclosure of Invention
In order to solve the problems in the prior art, the application provides a vehicle source intensity monitoring method and device, electronic equipment and a storage medium.
The application provides a vehicle source intensity monitoring method in a first aspect, which includes:
acquiring the starting time and the ending time of a vehicle passing through a monitoring area through video acquisition equipment; the start time is used for marking the time when the vehicle completely enters the monitoring area, and the end time is used for marking the time when the vehicle completely leaves the monitoring area;
obtaining the audio information in the starting time and the ending time range through sonar equipment, wherein the audio information comprises: various types of ambient sounds;
and extracting the source intensity information of the vehicle from the audio information through a filtering algorithm.
Optionally, the acquiring, by the video capture device, a start time and an end time of the vehicle passing through the monitoring area includes:
respectively acquiring video images of each monitoring area in a monitoring coverage range through the video acquisition equipment, wherein the monitoring areas are road areas divided according to lanes;
the method includes acquiring a starting time and an ending time of the vehicle passing through a monitoring area.
Optionally, after the extracting, by the filtering algorithm, the source strength information of the vehicle from the audio information, the method further includes:
identifying the voiceprint type in the source intensity information according to a preset recognition model, wherein the preset recognition model is obtained by training on voiceprint samples, and the voiceprint samples comprise: at least one of the following voiceprints for different vehicle types: a whistle voiceprint, an engine-sound voiceprint, a roar voiceprint, and a tire-noise voiceprint; and a voiceprint of at least one road sound;
and marking the voiceprint types corresponding to different voiceprints in the source intensity information according to the recognized voiceprint types.
Optionally, the identifying the voiceprint type in the source intensity information according to a preset identification model includes:
and extracting voiceprint characteristics in the source intensity information according to a preset identification model, and identifying at least one voiceprint type in the source intensity information according to the voiceprint characteristics.
Optionally, before identifying the voiceprint type in the source intensity information according to a preset identification model, the method further includes:
acquiring the vehicle type of the vehicle according to the video image and a preset vehicle identification model, wherein the preset vehicle identification model is acquired by training of an image sample set, and the image sample set comprises: vehicle images of different vehicle types;
the identifying the voiceprint type in the source intensity information according to the preset identification model comprises the following steps:
and identifying the voiceprint type in the source intensity information according to a preset identification model and the vehicle type of the vehicle.
Optionally, the method further comprises:
and performing FFT analysis, time-frequency analysis and line-spectrum feature analysis on the source intensity information to obtain the time-domain and frequency-domain noise characteristics of the vehicle.
Optionally, the method further comprises:
acquiring audio information of a monitored area at any moment through sonar equipment;
and calculating and acquiring the position information of each sound source in the environment according to the audio information.
Optionally, after the calculating and acquiring the position information of each sound source in the environment according to the audio information, the method further includes:
and generating an acoustic image of the monitoring area according to the position information of each sound source and the video image acquired by the video acquisition equipment.
Optionally, the extracting, by a filtering algorithm, the source strength information of the vehicle from the audio information includes:
and extracting the source intensity information of the vehicle from the audio information through a beam forming algorithm.
The second aspect of the present application provides a vehicle source intensity monitoring device, including: an acquisition unit and an extraction unit;
the acquisition unit is used for acquiring the starting time and the ending time of the vehicle passing through the monitoring area through the video acquisition equipment; the start time is used for marking the time when the vehicle completely enters the monitoring area, and the end time is used for marking the time when the vehicle completely leaves the monitoring area;
obtaining the audio information in the starting time and the ending time range through sonar equipment, wherein the audio information comprises: various types of ambient sounds;
the extracting unit is used for extracting the source intensity information of the vehicle from the audio information through a filtering algorithm.
Optionally, the acquiring unit is specifically configured to acquire, by the video acquisition device, video images of each monitoring area within a monitoring coverage area, where the monitoring area is a road area divided according to lanes;
the method includes acquiring a starting time and an ending time of the vehicle passing through a monitoring area.
Optionally, the apparatus further comprises: an identification unit and a marking unit;
the identification unit is configured to identify the voiceprint type in the source intensity information according to a preset recognition model, where the preset recognition model is obtained by training on voiceprint samples, and the voiceprint samples comprise: at least one of the following voiceprints for different vehicle types: a whistle voiceprint, an engine-sound voiceprint, a roar voiceprint, and a tire-noise voiceprint; and a voiceprint of at least one road sound;
and the marking unit is used for marking the voiceprint types corresponding to different voiceprints in the source intensity information according to the recognized voiceprint types.
Optionally, the identification unit is specifically configured to extract voiceprint features in the source intensity information according to a preset identification model, and identify at least one voiceprint type in the source intensity information according to the voiceprint features.
Optionally, the obtaining unit is further configured to obtain a vehicle type of the vehicle according to the video image and a preset vehicle identification model, where the preset vehicle identification model is obtained by training an image sample set, and the image sample set includes: vehicle images of different vehicle types;
the identification unit is further used for identifying the voiceprint type in the source intensity information according to a preset identification model and the vehicle type of the vehicle.
Optionally, the obtaining unit is further configured to perform FFT analysis, time-frequency analysis, and line-spectrum feature analysis on the source intensity information to obtain the time-domain and frequency-domain noise characteristics of the vehicle.
Optionally, the acquiring unit is further configured to acquire audio information of the monitored area at any time through sonar equipment;
and calculating and acquiring the position information of each sound source in the environment according to the audio information.
Optionally, the apparatus further comprises: a generating unit;
and the generating unit is used for generating the acoustic image of the monitoring area according to the position information of each sound source and the video image acquired by the video acquisition equipment.
Optionally, the extracting unit is specifically configured to extract the source strength information of the vehicle from the audio information through a beamforming algorithm.
A third aspect of the present application provides an electronic device comprising: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, and when the electronic device is operated, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of the method according to the first aspect.
A fourth aspect of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to the first aspect.
In the vehicle source intensity monitoring method and device, electronic device and storage medium provided by the embodiments of the application, the method comprises: acquiring, through a video capture device, the start time and the end time of a vehicle passing through a monitoring area, the start time marking the moment the vehicle completely enters the area and the end time the moment it completely leaves; acquiring, through sonar equipment, the audio information within that time range, the audio information comprising various types of ambient sounds; and extracting the source intensity information of the vehicle from the audio information through a filtering algorithm. By jointly analyzing the video information and the audio information through their correlation in time, the audio information corresponding to the vehicle's presence in the monitoring area is obtained, the source intensity information of the vehicle is extracted from it with a filtering algorithm, the main voiceprint characteristics of the vehicle's running noise can be identified, and the monitoring of noise data is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a schematic diagram of a vehicle source intensity monitoring system provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a vehicle source intensity monitoring method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a vehicle source intensity monitoring method according to another embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating a vehicle source intensity monitoring method according to another embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of a vehicle source intensity monitoring method according to another embodiment of the present application;
FIG. 6 is a schematic diagram of a vehicle source intensity monitoring apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a vehicle source intensity monitoring apparatus according to another embodiment of the present application;
FIG. 8 is a schematic view of a vehicle source intensity monitoring apparatus according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Furthermore, the terms "first," "second," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
Traffic noise monitoring is an important means of assessing the traffic noise pollution of urban roads and a prerequisite for drawing an urban noise map. Conventional audio-video linked road noise monitoring equipment is mainly law-enforcement equipment for illegal honking: sonar detects whistle signals in real time, and once a vehicle whistle is detected its position is calculated immediately, guiding a camera to capture a picture, superimposing an acoustic cloud-map marker at the whistle position in the picture, and extracting the license plate as a basis for law enforcement. Within traffic noise, however, whistle noise is only one component; with the existing noise monitoring methods the monitored data are of a single type and the effect is poor. Therefore, acquiring other types of noise information, for example tire-noise data and engine noise, is of great significance for the relevant departments to carry out corresponding noise-reduction operations using the noise information.
In order to solve the technical problems in the prior art, the present application provides an inventive concept: and performing combined analysis on the video information and the audio information through the relevance in time to obtain the audio information corresponding to the vehicle entering the monitoring area, and extracting the source strength information of the vehicle from the audio information by using a filtering algorithm.
The following describes a specific technical solution provided by the present application through possible implementation manners.
Fig. 1 is a schematic diagram of a vehicle source intensity monitoring system provided in an embodiment of the present application, and as shown in fig. 1, the vehicle source intensity monitoring system includes: video capture device 101, sonar device 102, and processing device 103. The video capture device 101 is mainly used for acquiring video information of a monitored area and sending the video information to the processing device 103 in a wireless or wired manner. Accordingly, the sonar equipment 102 is mainly used to acquire audio information of the monitored area and transmit the audio information to the processing equipment 103 in a wireless or wired manner. The processing device 103 is configured to analyze and process the audio information and the video information, and remotely view the processed audio information and the processed video information, and the processing device 103 may be a computer, a server, or other devices with processing and display functions.
Fig. 2 is a schematic flowchart of a vehicle source intensity monitoring method according to an embodiment of the present application; the method may be executed by a processing device such as an intelligent mobile device, a computer or a server. As shown in fig. 2, the method includes:
s201, acquiring the starting time and the ending time of the vehicle passing through the monitoring area through video acquisition equipment.
It should be noted that, in the embodiment of the present application, the monitoring area is used to indicate the monitoring range of the video capture device.
Optionally, the start time marks the moment when the vehicle has completely entered the monitoring area, and the end time marks the moment when the vehicle has completely left it. "Completely entered" means that the tail of the vehicle has entered the monitoring area; likewise, "completely left" means that the tail of the vehicle has exited the monitoring area, i.e. the vehicle is no longer present in it.
S202, acquiring, through sonar equipment, the audio information within the range from the start time to the end time.
In the embodiment of the application, sonar equipment, specifically a sonar microphone array, is used to capture the audio; the audio information between the start time at which the vehicle entered the monitoring area and the end time at which it left can then be extracted from the recording according to the times logged by the video capture device. The sonar microphone array may consist of a plurality of micro-electro-mechanical system (MEMS) microphones, can capture and process road traffic noise signals in real time, and synchronously records the capture time during acquisition.
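As a minimal sketch of this time alignment, assuming the array recording is held as a NumPy matrix with one row per MEMS microphone and that capture timestamps are logged in seconds, the audio between the video-reported start and end times could be sliced out as follows (all names and the timestamp convention are illustrative, not from the patent):

```python
import numpy as np

def extract_vehicle_audio(recording, sample_rate, record_start, start_time, end_time):
    """Cut the multi-channel recording down to the video-reported interval.

    recording: (n_mics, n_samples) array from the MEMS microphone array.
    record_start: timestamp (s) of the first sample, logged during capture.
    start_time / end_time: timestamps (s) when the vehicle fully entered
    and fully left the monitoring area, as reported by the video device.
    """
    i0 = int(round((start_time - record_start) * sample_rate))
    i1 = int(round((end_time - record_start) * sample_rate))
    # Clamp to the recording so slightly inconsistent clocks do not crash.
    i0 = max(i0, 0)
    i1 = min(i1, recording.shape[1])
    return recording[:, i0:i1]
```

A 3-second pass recorded at 100 Hz starting two seconds into the capture would yield a (n_mics, 300) segment.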
It should be noted that, in the embodiment of the present application, the audio information includes: various types of ambient sounds. Exemplarily, it may be: bird song, road construction sound, vehicle whistle, vehicle tire noise, vehicle engine sound, square dance music sound, etc.
And S203, extracting the source intensity information of the vehicle from the audio information through a filtering algorithm.
The audio information contains various types of ambient sounds. In order to separate the source intensity information of the vehicle from the multiple types of environmental sounds, a filtering algorithm may be used to extract the source intensity information of the vehicle, and the specific type of the filtering algorithm is not limited. In the embodiment of the present application, the source strength information of the vehicle may include inherent sound attribute information of the vehicle, including, for example: engine sound of the vehicle, tire noise of the vehicle, and the like. The source intensity information in the embodiment of the present application is, for example, engine sound and tire noise within one meter of a vehicle, but is not limited thereto.
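The patent leaves the filtering algorithm open (a beamforming algorithm is named later as one option). Purely as an illustrative sketch of that option, a basic delay-and-sum beamformer that steers the microphone array toward a known source position might look like this; the geometry, speed-of-sound constant and function names are assumptions, not the patent's implementation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def delay_and_sum(frames, sample_rate, mic_positions, source_position):
    """Steer the array toward source_position and average the channels.

    frames: (n_mics, n_samples) multi-channel audio.
    mic_positions: (n_mics, 3) microphone coordinates in metres.
    source_position: (3,) assumed source coordinates in metres.
    """
    distances = np.linalg.norm(mic_positions - source_position, axis=1)
    # Advance each channel by its extra propagation delay relative to the
    # nearest microphone so wavefronts from the source line up in time.
    delays = (distances - distances.min()) / SPEED_OF_SOUND
    shifts = np.round(delays * sample_rate).astype(int)
    n = frames.shape[1] - shifts.max()
    aligned = np.stack([ch[s:s + n] for ch, s in zip(frames, shifts)])
    # Coherent sounds from the steered direction add up; sounds from other
    # directions stay misaligned and are attenuated by the averaging.
    return aligned.mean(axis=0)
```

With the array steered at the vehicle's lane position, ambient sounds arriving from other directions are suppressed relative to the vehicle's own noise.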
In the embodiment of the application, the method comprises: acquiring, through a video capture device, the start time and the end time of a vehicle passing through a monitoring area, the start time marking the moment the vehicle completely enters the area and the end time the moment it completely leaves; acquiring, through sonar equipment, the audio information within that time range, the audio information comprising various types of ambient sounds; and extracting the source intensity information of the vehicle from the audio information through a filtering algorithm. By jointly analyzing the video information and the audio information through their correlation in time, the audio information corresponding to the vehicle's presence in the monitoring area is obtained, the source intensity information of the vehicle is extracted from it with a filtering algorithm, the main voiceprint characteristics of the vehicle's running noise can be identified, and the monitoring of noise data is improved.
Fig. 3 is a schematic flow chart of a vehicle source intensity monitoring method according to another embodiment of the present application, and as shown in fig. 3, step S201 may specifically include:
s301, video images of each monitoring area in the monitoring coverage range are respectively obtained through video acquisition equipment.
S302, acquiring the starting time and the ending time of the vehicle passing through the monitoring area.
In the embodiment of the present application, the monitoring region may be a road region divided by lanes. When the monitoring coverage contains four lanes, it can be divided into at least four monitoring areas. In addition, subdivision rules can be defined according to the vehicle size, the monitored length within each monitoring area, and the like, so that the initially divided monitoring areas are further subdivided.
For example, when the monitoring coverage is four lanes with a monitored length of 30 meters, the vehicle type (large truck, private car, etc.) may be determined from the video information captured while the vehicle is in the 20-30 m stretch. The 10-20 m stretch serves as the actual measurement area: when the video shows a private car, it is divided into two monitoring areas of 10-15 m and 15-20 m, and the start and end times of the vehicle passing through each are acquired accordingly; when the video shows a large truck, the whole 10-20 m stretch is used as a single monitoring area, and the start and end times of the vehicle passing those 10 meters are acquired. Further, when the 10-20 m stretch is divided into two monitoring areas, the source intensity data also come in two sets; to increase accuracy, the two sets of source intensity data can be averaged to give the final source intensity data of the vehicle.
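A small sketch of that final averaging step, under the assumption that the two sub-area measurements are per-band levels in decibels. The patent only says "averaged"; the energy average (log-mean of linear powers) used here is the usual convention for dB levels and is the author's illustrative choice:

```python
import numpy as np

def fuse_subarea_levels(levels_a_db, levels_b_db):
    """Combine per-band source-intensity levels (dB) from the 10-15 m and
    15-20 m sub-areas into one estimate for the vehicle.

    An arithmetic mean of dB values is biased low; averaging the linear
    powers and converting back is the standard way to average levels.
    """
    a = np.asarray(levels_a_db, dtype=float)
    b = np.asarray(levels_b_db, dtype=float)
    return 10.0 * np.log10((10.0 ** (a / 10.0) + 10.0 ** (b / 10.0)) / 2.0)
```

When both sub-areas report the same level, the fused result equals it; otherwise the result sits between the two, weighted toward the louder one.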
Fig. 4 is a schematic flowchart of a vehicle source intensity monitoring method according to another embodiment of the present application, and as shown in fig. 4, after the vehicle source intensity information is extracted from the audio information by a filtering algorithm, the method further includes:
S401, identifying the voiceprint type in the source intensity information according to a preset recognition model.
S402, marking the voiceprint types corresponding to different voiceprints in the source intensity information according to the recognized voiceprint types.
Optionally, in this embodiment of the present application, the preset recognition model is trained on voiceprint samples, where the voiceprint samples include: at least one of the following voiceprints for different vehicle types: whistle voiceprints, engine voiceprints, rumble voiceprints, and tire-noise voiceprints; and the voiceprint of at least one road sound.
Alternatively, typical traffic noises such as whistles, engine sounds, rumble, and tire noise differ widely in their spectral characteristics and can be identified separately using conventional statistical signal features.
Furthermore, a preset recognition model, for example a trained neural network model, can also be used to identify the voiceprint type in the source intensity information.
In addition, after each voiceprint type is obtained, the voiceprint types corresponding to the different voiceprints can be marked in the source intensity information.
It can be understood that, in this embodiment, by marking the voiceprint types corresponding to different voiceprints in the source intensity information, a relevant department using the source intensity data for noise-reduction analysis can determine whether sounds other than vehicle engine sound and tire noise are present. When voiceprint types other than the engine voiceprint and the tire-noise voiceprint are marked in the source intensity information, interference information may be present; the intensity of that interference can then be assessed to decide whether the source intensity data should be used for noise-reduction analysis.
Optionally, identifying the voiceprint type in the source intensity information according to a preset identification model includes:
and extracting the voiceprint characteristics in the source intensity information according to a preset identification model, and identifying at least one voiceprint type in the source intensity information according to the voiceprint characteristics.
Optionally, in this embodiment of the application, a preset recognition model may be used to first extract the voiceprint features in the source intensity information, and then to identify the voiceprint types in the source intensity information from those features. The preset recognition model may be a neural network model pre-trained on a large amount of voiceprint data.
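The feature-then-classify pipeline can be sketched as follows. This is a minimal illustration only: simple spectral statistics (centroid and bandwidth) stand in for the learned voiceprint features, and a nearest-centroid rule stands in for the trained neural network; neither choice comes from the application itself.

```python
import numpy as np

def voiceprint_features(signal, fs):
    """Illustrative 'voiceprint' features: spectral centroid and bandwidth."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = spectrum ** 2
    centroid = np.sum(freqs * power) / np.sum(power)
    bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * power) / np.sum(power))
    return np.array([centroid, bandwidth])

def classify_voiceprint(features, centroids):
    """Stand-in for the trained model: return the label of the nearest
    class centroid in feature space."""
    labels = list(centroids)
    dists = [np.linalg.norm(features - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]
```

A real system would replace both functions with the pre-trained neural network mentioned in the text; the structure (extract features, then map features to a voiceprint type) is what the sketch demonstrates.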
Optionally, before identifying the voiceprint type in the source intensity information according to a preset identification model, the method further includes:
and acquiring the vehicle type of the vehicle according to the video image and a preset vehicle identification model.
In the embodiment of the application, the vehicle type of the vehicle in the monitoring area can be obtained according to the video image obtained by the video acquisition equipment and the preset vehicle identification model. Optionally, the preset vehicle identification model is obtained by training an image sample set, where the image sample set includes: vehicle images of different vehicle types.
According to a preset identification model, identifying the voiceprint type in the source intensity information, comprising the following steps:
and identifying the voiceprint type in the source intensity information according to a preset identification model and the vehicle type of the vehicle.
Because vehicles differ, their whistle sound, engine sound, and tire noise may also differ. To identify the voiceprint types in the source intensity information accurately, in this embodiment, after the vehicle type is acquired through the preset vehicle recognition model, the voiceprint types are identified in the source intensity information according to both the preset recognition model and the vehicle type of the vehicle. The preset recognition model pre-stores whistle, engine, and tire-noise voiceprint information for different vehicle types.
Optionally, the method further comprises: performing FFT analysis, time-frequency analysis, and line spectrum feature analysis on the source intensity information to obtain the noise time-domain and frequency-domain features of the vehicle.
In this embodiment, after the source intensity information is obtained, FFT analysis, time-frequency analysis, and line spectrum feature analysis may be performed on it to obtain the noise time-domain and frequency-domain features of the vehicle.
It can be understood that, by acquiring the noise time-domain and frequency-domain features of the vehicle, a processing basis is provided for a relevant department performing noise reduction with the source intensity data. Specifically, when the acquired source intensity information is concentrated mainly in low-frequency noise, the acoustic treatment can be set to target that frequency band, so the low-frequency noise can be specifically eliminated.
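The FFT, time-frequency, and line-spectrum analysis steps can be sketched with NumPy as follows; the frame length and hop size are illustrative assumptions rather than values from the application.

```python
import numpy as np

def fft_spectrum(signal, fs):
    """One-sided amplitude spectrum and its frequency axis (FFT analysis)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum

def spectrogram(signal, fs, frame=256, hop=128):
    """Simple magnitude spectrogram (time-frequency analysis); frame/hop
    lengths are assumed values."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def dominant_frequency(signal, fs):
    """Frequency of the strongest line-spectrum component."""
    freqs, spectrum = fft_spectrum(signal, fs)
    return freqs[int(np.argmax(spectrum))]
```

For example, a strong line-spectrum component near a low frequency would indicate that the acoustic treatment should target the low-frequency band, as described above.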
Fig. 5 is a schematic flowchart of a vehicle source intensity monitoring method according to another embodiment of the present application, and as shown in fig. 5, the method further includes:
S501, acquiring audio information of the monitored area at any moment through sonar equipment.
S502, calculating the position information of each sound source in the environment according to the audio information.
In this embodiment, the audio information of the monitored area at any moment can also be acquired through the sonar equipment, and the position information of each strong sound source in the environment can be calculated from that audio information. Specifically, the position and sound pressure of a strong sound source can be obtained from the sound pressure of the audio information at each grid point in the monitored area. It should be noted that the coordinate origin can be set flexibly according to requirements, which is not limited in this embodiment.
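The grid-based localization step can be sketched as follows, assuming the per-grid-point sound pressures have already been produced by the sonar array processing; the grid spacing, origin, and detection threshold are illustrative assumptions.

```python
import numpy as np

def locate_strong_sources(pressure_grid, origin=(0.0, 0.0), spacing=1.0,
                          threshold=None):
    """Return (x, y, pressure) for every grid point whose sound pressure
    exceeds the threshold (default: 90% of the grid maximum). The origin
    can be set flexibly, as noted in the text."""
    grid = np.asarray(pressure_grid, dtype=float)
    if threshold is None:
        threshold = 0.9 * grid.max()
    rows, cols = np.where(grid >= threshold)
    ox, oy = origin
    return [(ox + c * spacing, oy + r * spacing, grid[r, c])
            for r, c in zip(rows, cols)]
```

The returned positions and pressures correspond to the "position and sound pressure of the strong sound source" obtained from the grid in the paragraph above.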
Optionally, after the obtaining of the position information of each sound source in the environment is calculated according to the audio information, the method further includes:
and generating an acoustic image of the monitoring area according to the position information of each sound source and the video image acquired by the video acquisition equipment.
In this embodiment, the acoustic image of the monitoring area can be generated by combining the position information with the video image acquired by the video acquisition equipment; the acoustic image is the result of fusing the video and audio information of the monitored area. The specific fusion process is as follows: the video acquisition equipment obtains the image of each sound source from the video, together with the image's coordinate information, while the sonar equipment obtains each sound source's coordinates from the sound pressure magnitudes; overlaying the sonar-derived coordinates of each sound source onto the image coordinates from the video acquisition equipment yields the acoustic image of the monitoring area. From the acoustic image, the image of each sound source in the monitored area and the audio information under the corresponding image can be obtained.
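The overlay step can be sketched as below, assuming a simple scaled, axis-aligned mapping from ground-plane coordinates to image pixels; a real deployment would use a calibrated camera model, and the scale and offset values here are purely illustrative.

```python
def world_to_pixel(x, y, scale=10.0, offset=(0, 0)):
    """Map a ground-plane position (meters) to image pixel coordinates,
    assuming a simple scaled, axis-aligned camera geometry (an assumption)."""
    return (int(round(x * scale + offset[0])),
            int(round(y * scale + offset[1])))

def build_acoustic_image(sources, scale=10.0, offset=(0, 0)):
    """Annotate each sonar-derived (x, y, pressure) source with its pixel
    location so it can be overlaid on the corresponding video frame."""
    return [{"pixel": world_to_pixel(x, y, scale, offset), "pressure": p}
            for (x, y, p) in sources]
```

Each annotated entry pairs a sound source's pressure with a pixel location, which is the overlay described in the fusion process above.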
It can be understood that, by generating the acoustic image of the monitored area, an acoustic panorama of the area is easily obtained, which is of significance for predicting the sound pressure levels at surrounding buildings.
Optionally, extracting the source strength information of the vehicle from the audio information by a filtering algorithm, including:
the source strength information of the vehicle is extracted from the audio information by a beamforming algorithm.
Beamforming algorithms such as the conventional (delay-and-sum) beamforming method and the CLEAN-SC method may both be used to extract the source intensity information of the vehicle. Specifically, the conventional beamforming method has a small computational load but a wide main lobe at low frequencies, so it is mainly used for sound source localization and source intensity estimation in the medium and high frequency bands; the CLEAN-SC method is a high-resolution beamforming method suitable for estimating vehicle source intensity in the medium-low frequency band or when several vehicles travel side by side. Therefore, over the whole analysis band, and especially for noise signals below 10 kHz, the beam spatial distribution can be set reasonably according to the main lobe widths of the two methods, and, combined with the vehicle profiles given by the video acquisition equipment, effective beam coverage of vehicles of different sizes can be achieved, yielding a more accurate source intensity extraction result.
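A minimal delay-and-sum (conventional) beamformer for a single narrowband frequency can be sketched as follows; the array geometry, analysis frequency, and steering grid are illustrative assumptions, and CLEAN-SC would further deconvolve the resulting power map rather than use it directly.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air

def steering_vector(mic_positions, focus_point, freq):
    """Narrowband steering vector for microphones at mic_positions (N x 2),
    focused on a candidate point in the plane."""
    dists = np.linalg.norm(mic_positions - focus_point, axis=1)
    delays = dists / SPEED_OF_SOUND
    return np.exp(-2j * np.pi * freq * delays) / len(mic_positions)

def beamform_map(mic_signals_fft, mic_positions, grid_points, freq):
    """Conventional (delay-and-sum) beamforming power at each candidate
    grid point, from one FFT bin per microphone."""
    powers = []
    for point in grid_points:
        w = steering_vector(mic_positions, point, freq)
        powers.append(np.abs(np.vdot(w, mic_signals_fft)) ** 2)
    return np.array(powers)
```

Steering the array over a grid and reading off the peak power is the source localization / source intensity estimation role that the text assigns to conventional beamforming in the medium and high frequency bands.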
Further, sound inevitably undergoes delay and attenuation during propagation. To make the acquired source intensity information more accurate, the data can be improved, for example: (1) compensating the source intensity information according to the spherical spreading law; (2) applying an appropriate sound intensity scaling to the beamformed spatial spectrum and integrating the sound power within the main lobe width to obtain the actual source intensity of the vehicle; (3) when the vehicle travels on a road flanked by tall buildings, correcting and compensating for the reflection of low-frequency radiated noise.
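Correction (1) can be sketched as follows: under free-field spherical spreading, sound pressure level falls by 20·log10(r2/r1) dB between distances r1 and r2, so a level measured at the array can be referred back to a reference distance. The 1-meter reference distance is an assumption for illustration.

```python
import numpy as np

def compensate_spherical_attenuation(measured_spl_db, distance_m, ref_m=1.0):
    """Refer a sound pressure level measured at distance_m back to ref_m,
    assuming free-field spherical spreading (20 dB per decade of distance)."""
    return measured_spl_db + 20.0 * np.log10(distance_m / ref_m)
```

For example, 60 dB measured at 10 m corresponds to 80 dB at the 1 m reference under this model; reflections from nearby buildings, as in correction (3), would require additional terms beyond this free-field assumption.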
The following describes a device, a storage medium, and the like corresponding to the vehicle source intensity monitoring method provided by the present application, and specific implementation processes and technical effects thereof are referred to above, and will not be described again below.
Fig. 6 is a schematic diagram of a vehicle source intensity monitoring device provided in an embodiment of the present application. As shown in Fig. 6, the device may include: an obtaining unit 601 and an extracting unit 602. The obtaining unit 601 is configured to obtain, through a video acquisition device, the start time and end time of a vehicle passing through a monitoring area, where the start time marks the moment the vehicle completely enters the monitoring area and the end time marks the moment it completely leaves; and to obtain, through sonar equipment, the audio information within the start-to-end time range, the audio information including various types of ambient sounds;
an extracting unit 602, configured to extract source strength information of the vehicle from the audio information through a filtering algorithm.
Optionally, the obtaining unit 601 is specifically configured to obtain video images of monitoring areas within a monitoring coverage range through a video collecting device, where the monitoring areas are road areas divided according to lanes; the starting time and the ending time of the vehicle passing through the monitoring area are obtained.
Fig. 7 is a schematic diagram of a vehicle source intensity monitoring device according to another embodiment of the present application, and as shown in fig. 7, the device further includes: a recognition unit 603 and a marking unit 604;
the recognition unit 603 is configured to recognize a voiceprint type in the source intensity information according to a preset recognition model, where the preset recognition model is obtained according to a voiceprint sample training, and the voiceprint sample includes: at least one of the following voiceprints for different vehicle types: whistling sound marks, engine sound marks, rumbling sound marks, and fetal noise marks; a voiceprint of at least one road sound;
a marking unit 604, configured to mark voiceprint types corresponding to different voiceprints in the source strength information according to the identified voiceprint types.
Optionally, the identifying unit 603 is specifically configured to extract a voiceprint feature in the source intensity information according to a preset identification model, and identify at least one voiceprint type in the source intensity information according to the voiceprint feature.
Optionally, the obtaining unit 601 is further configured to obtain a vehicle type of the vehicle according to the video image and a preset vehicle identification model, where the preset vehicle identification model is obtained by training an image sample set, and the image sample set includes: vehicle images of different vehicle types;
the identifying unit 603 is further configured to identify a voiceprint type in the source intensity information according to a preset identification model and a vehicle type of the vehicle.
Optionally, the obtaining unit 601 is further configured to perform FFT analysis, time-frequency analysis, and line spectrum feature analysis on the source intensity information, so as to obtain noise time domain and frequency domain features of the vehicle.
Optionally, the acquiring unit 601 is further configured to acquire audio information of the monitored area at any time through sonar equipment;
and according to the audio information, calculating and acquiring the position information of each sound source in the environment.
Fig. 8 is a schematic view of a vehicle source intensity monitoring device according to another embodiment of the present application, and as shown in fig. 8, the device further includes: a generation unit 605;
the generating unit 605 is configured to generate an acoustic image of the monitoring area according to the position information of each sound source and the video image acquired by the video acquisition device.
Optionally, the extracting unit 602 is specifically configured to extract the source strength information of the vehicle from the audio information through a beamforming algorithm.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application, including: a processor 710, a storage medium 720 and a bus 730, wherein the storage medium 720 stores machine-readable instructions executable by the processor 710, when the electronic device is operated, the processor 710 communicates with the storage medium 720 through the bus 730, and the processor 710 executes the machine-readable instructions to perform the steps of the above-mentioned method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
The embodiment of the application provides a storage medium, wherein a computer program is stored on the storage medium, and the computer program is executed by a processor to execute the method.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A vehicle source intensity monitoring method, comprising:
acquiring the starting time and the ending time of a vehicle passing through a monitoring area through video acquisition equipment; the start time is used for marking the time when the vehicle completely enters the monitoring area, and the end time is used for marking the time when the vehicle completely leaves the monitoring area;
obtaining the audio information in the starting time and the ending time range through sonar equipment, wherein the audio information comprises: various types of ambient sounds;
and extracting the source intensity information of the vehicle from the audio information through a filtering algorithm.
2. The method of claim 1, wherein the obtaining a start time and an end time for the vehicle to pass through the monitoring area via the video capture device comprises:
respectively acquiring video images of each monitoring area in a monitoring coverage range through the video acquisition equipment, wherein the monitoring areas are road areas divided according to lanes;
the method includes acquiring a starting time and an ending time of the vehicle passing through a monitoring area.
3. The method of claim 2, wherein after extracting the source strength information of the vehicle from the audio information by the filtering algorithm, the method further comprises:
according to a preset identification model, identifying the voiceprint type in the source intensity information, wherein the preset identification model is obtained according to voiceprint sample training, and the voiceprint sample comprises: at least one of the following voiceprints for different vehicle types: whistle voiceprints, engine voiceprints, rumble voiceprints, and tire-noise voiceprints; a voiceprint of at least one road sound;
and marking the voiceprint types corresponding to different voiceprints in the source intensity information according to the recognized voiceprint types.
4. The method according to claim 3, wherein the identifying the voiceprint type in the source intensity information according to a preset identification model comprises:
and extracting voiceprint characteristics in the source intensity information according to a preset identification model, and identifying at least one voiceprint type in the source intensity information according to the voiceprint characteristics.
5. The method according to claim 3 or 4, wherein before identifying the voiceprint type in the source intensity information according to a preset identification model, the method further comprises:
acquiring the vehicle type of the vehicle according to the video image and a preset vehicle identification model, wherein the preset vehicle identification model is acquired by training of an image sample set, and the image sample set comprises: vehicle images of different vehicle types;
the identifying the voiceprint type in the source intensity information according to the preset identification model comprises the following steps:
and identifying the voiceprint type in the source intensity information according to a preset identification model and the vehicle type of the vehicle.
6. The method of claim 1, further comprising:
acquiring audio information of a monitored area at any moment through sonar equipment;
and calculating and acquiring the position information of each sound source in the environment according to the audio information.
7. The method of claim 1, wherein the extracting the source strength information of the vehicle from the audio information by a filtering algorithm comprises:
and extracting the source intensity information of the vehicle from the audio information through a beam forming algorithm.
8. A vehicle source intensity monitoring device, comprising: an acquisition unit and an extraction unit;
the acquisition unit is used for acquiring the starting time and the ending time of the vehicle passing through the monitoring area through the video acquisition equipment; the start time is used for marking the time when the vehicle completely enters the monitoring area, and the end time is used for marking the time when the vehicle completely leaves the monitoring area;
obtaining the audio information in the starting time and the ending time range through sonar equipment, wherein the audio information comprises: various types of ambient sounds;
the extracting unit is used for extracting the source intensity information of the vehicle from the audio information through a filtering algorithm.
9. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the method according to any one of claims 1-7.
10. A storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011374536.1A 2020-11-30 2020-11-30 Vehicle source intensity monitoring method and device, electronic equipment and storage medium Pending CN112532941A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011374536.1A CN112532941A (en) 2020-11-30 2020-11-30 Vehicle source intensity monitoring method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112532941A true CN112532941A (en) 2021-03-19

Family

ID=74995219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011374536.1A Pending CN112532941A (en) 2020-11-30 2020-11-30 Vehicle source intensity monitoring method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112532941A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050100172A1 (en) * 2000-12-22 2005-05-12 Michael Schliep Method and arrangement for processing a noise signal from a noise source
CN101145280A (en) * 2007-10-31 2008-03-19 北京航空航天大学 Independent component analysis based automobile sound identification method
CN103065627A (en) * 2012-12-17 2013-04-24 中南大学 Identification method for horn of special vehicle based on dynamic time warping (DTW) and hidden markov model (HMM) evidence integration
CN104282147A (en) * 2014-09-27 2015-01-14 无锡市恒通智能交通设施有限公司 Intelligent vehicle monitor method
CN105989710A (en) * 2015-02-11 2016-10-05 中国科学院声学研究所 Vehicle monitoring device based on audio and method thereof
CN109345834A (en) * 2018-12-04 2019-02-15 北京中电慧声科技有限公司 The illegal whistle capture systems of motor vehicle


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114397010A (en) * 2021-12-29 2022-04-26 南京中科声势智能科技有限公司 Transient signal acoustic imaging method based on wavelet decomposition
CN114944152A (en) * 2022-07-20 2022-08-26 深圳市微纳感知计算技术有限公司 Vehicle whistling sound identification method
CN115116232A (en) * 2022-08-29 2022-09-27 深圳市微纳感知计算技术有限公司 Voiceprint comparison method, device and equipment for automobile whistling and storage medium
CN115116232B (en) * 2022-08-29 2022-12-09 深圳市微纳感知计算技术有限公司 Voiceprint comparison method, device and equipment for automobile whistling and storage medium
CN116540178A (en) * 2023-04-28 2023-08-04 广东顺德西安交通大学研究院 Noise source positioning method and system for audio and video fusion
CN116540178B (en) * 2023-04-28 2024-02-20 广东顺德西安交通大学研究院 Noise source positioning method and system for audio and video fusion

Similar Documents

Publication Publication Date Title
CN112532941A (en) Vehicle source intensity monitoring method and device, electronic equipment and storage medium
CN112560822B (en) Road sound signal classification method based on convolutional neural network
CN107985225B (en) Method for providing sound tracking information, sound tracking apparatus and vehicle having the same
US8339282B2 (en) Security systems
CN108226854B (en) Apparatus and method for providing visual information of rear vehicle
CN109816987B (en) Electronic police law enforcement snapshot system for automobile whistling and snapshot method thereof
Vij et al. Smartphone based traffic state detection using acoustic analysis and crowdsourcing
CN110398647B (en) Transformer state monitoring method
WO2009145310A1 (en) Sound source separation and display method, and system thereof
WO2006059806A1 (en) Voice recognition system
CN111261189B (en) Vehicle sound signal feature extraction method
CN105913059B (en) Automatic identification system for vehicle VIN code and control method thereof
CN112744174B (en) Vehicle collision monitoring method, device, equipment and computer readable storage medium
KR101821923B1 (en) Method for evaluating sound quality of electric vehicle warning sound considering masking effect and apparatus thereof
CN110765823A (en) Target identification method and device
CN115116232B (en) Voiceprint comparison method, device and equipment for automobile whistling and storage medium
CN113705412A (en) Deep learning-based multi-source data fusion track state detection method and device
CN108877814B (en) Inspection well cover theft and damage detection method, intelligent terminal and computer readable storage medium
KR101519255B1 (en) Notification System for Direction of Sound around a Vehicle and Method thereof
US20050004797A1 (en) Method for identifying specific sounds
CN107274912B (en) Method for identifying equipment source of mobile phone recording
JP5578986B2 (en) Weather radar observation information providing system and weather radar observation information providing method
Mato-Méndez et al. Blind separation to improve classification of traffic noise
CN105139852A (en) Engineering machinery recognition method and recognition device based on improved MFCC (Mel Frequency Cepstrum Coefficient) sound features
CN114417908A (en) Multi-mode fusion-based unmanned aerial vehicle detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210319