CN113823089A - Traffic volume detection method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN113823089A
CN113823089A (application number CN202111102368.5A)
Authority
CN
China
Prior art keywords
frame
cepstrum
frequency spectrum
mel
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111102368.5A
Other languages
Chinese (zh)
Inventor
蔡娜娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Danya Technology Co ltd
Original Assignee
Guangzhou Danya Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Danya Technology Co ltd filed Critical Guangzhou Danya Technology Co ltd
Priority to CN202111102368.5A priority Critical patent/CN113823089A/en
Publication of CN113823089A publication Critical patent/CN113823089A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/14 - Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F 17/141 - Discrete Fourier transforms
    • G06F 17/142 - Fast Fourier transforms, e.g. using a Cooley-Tukey type algorithm
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 - Traffic data processing
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/003 - Changing voice quality, e.g. pitch or formants
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 - Noise filtering
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L 25/24 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Analysis (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Evolutionary Biology (AREA)
  • Discrete Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to an intelligent traffic technology, and discloses a traffic volume detection method, which comprises the following steps: acquiring vehicle audio data of a target area, and preprocessing the vehicle audio data to obtain a processed frame signal; performing fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum; updating the frame frequency spectrum by using a preset Mel triangular filter group to obtain an updated frame frequency spectrum; performing discrete cosine transform processing on the updated frame spectrum to obtain a cepstrum coefficient; calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient; and extracting the characteristics of the processing frame signal by using the short-time energy, and carrying out fusion processing according to the short-time energy, the cepstrum distance and the characteristics to obtain traffic flow information in the target area. The invention also provides a traffic volume detection device, electronic equipment and a storage medium. The invention can improve the accuracy of traffic volume detection.

Description

Traffic volume detection method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of intelligent traffic, and in particular, to a traffic volume detection method, apparatus, electronic device, and readable storage medium.
Background
In driving detection, accurate traffic volume acquisition is helpful for traffic development planning and provides data support for intelligent management and control of the traffic system and for road engineering construction. In traffic monitoring systems, the main traffic detection methods are video detection and audio detection. Compared with a video detector, audio detection is not affected by occlusion, light intensity or weather conditions; it detects the passing of vehicles from the peaks of audio frames and has a low computational load.
However, conventional audio-based traffic detection methods suffer from the problem that the recognition rate of the number of vehicles is easily reduced when the audio frames of different vehicles overlap, so the accuracy of traffic volume detection based on driving sound is low.
Disclosure of Invention
The invention provides a traffic volume detection method, a traffic volume detection device, electronic equipment and a computer readable storage medium, and mainly aims to improve the accuracy of traffic volume detection.
In order to achieve the above object, the present invention provides a traffic volume detecting method, including:
acquiring vehicle audio data of a target area, and preprocessing the vehicle audio data to obtain a processed frame signal;
performing fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum;
updating the frame frequency spectrum by utilizing a preset Mel triangular filter group to obtain an updated frame frequency spectrum;
performing discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient;
calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient;
and extracting the characteristics of the processing frame signal by using short-time energy, and carrying out fusion processing according to the short-time energy, the cepstrum distance and the characteristics to obtain traffic flow information in the target area.
Optionally, the obtaining traffic flow information in the target area by performing fusion processing according to the short-time energy, the cepstrum distance, and the features includes:
calculating the power of the cepstrum distance to obtain an adjusted cepstrum distance;
multiplying the adjusted cepstrum distance by the short-time energy characteristic to obtain a target characteristic parameter;
extracting a characteristic value from the target characteristic parameter, and fusing the characteristic value and the characteristic to obtain a target characteristic parameter frequency spectrum;
and acquiring a highest peak from the target characteristic parameter frequency spectrum, and acquiring traffic flow information in the target area according to the position of the highest peak on the target characteristic parameter frequency spectrum.
Optionally, the calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient includes:
calculating a noise cepstral coefficient average of a plurality of the noise cepstral coefficients;
and acquiring the number of the cepstrum coefficients, and calculating cepstrum distances according to the number of the cepstrum coefficients and the average value of the noise cepstrum coefficients.
Optionally, the performing discrete cosine transform processing on the updated frame spectrum to obtain cepstrum coefficients includes:
and separating the frequency spectrum of the updated frame by using a preset discrete cosine transform formula, and extracting the frequency spectrum envelope contained in the separated frequency spectrum of the updated frame to obtain the cepstrum coefficient.
Optionally, the updating the frame spectrum by using a preset mel triangle filter set to obtain an updated frame spectrum includes:
inputting the frame frequency spectrum into at least two preset Mel triangular filter banks to obtain a plurality of Mel frequencies;
acquiring the highest Mel frequency and the lowest Mel frequency of at least two Mel frequencies;
determining the number of point positions between the highest Mel frequency and the lowest Mel frequency according to the number of the Mel triangular filter banks, and obtaining a plurality of additional Mel frequencies corresponding to the number of point positions by using a preset formula in the Mel triangular filter banks;
and converting the plurality of additional Mel frequencies into a plurality of updating frequencies, and carrying out drawing processing on the updating frequencies to obtain an updating frame frequency spectrum.
Optionally, the performing fast fourier transform on the processed frame signal to obtain a frame spectrum includes:
extracting the frame frequency of the processing frame signal, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values;
deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values;
and summing the plurality of target frame frequency values to obtain the frame frequency spectrum.
Optionally, the preprocessing the vehicle audio data to obtain a processed frame signal includes:
pre-emphasis processing is carried out on the vehicle audio data to obtain pre-emphasis audio signals;
performing framing processing on the pre-emphasis audio signal to obtain a framing signal;
and windowing the framing signals to obtain processed frame signals.
In order to solve the above problems, the present invention also provides a traffic volume detecting device, including:
the frame signal acquisition module is used for acquiring vehicle audio data of a target area and preprocessing the vehicle audio data to obtain a processed frame signal;
a cepstrum coefficient obtaining module, configured to perform fast fourier transform on the processed frame signal to obtain a frame frequency spectrum, update the frame frequency spectrum by using a preset mel triangular filter group to obtain an updated frame frequency spectrum, and perform discrete cosine transform on the updated frame frequency spectrum to obtain a cepstrum coefficient;
and the traffic volume information acquisition module is used for calculating a cepstrum distance by using the cepstrum coefficient and the acquired noise cepstrum coefficient, extracting the characteristics of the processing frame signal by using short-time energy, and performing fusion processing according to the short-time energy, the cepstrum distance and the characteristics to obtain traffic volume information in the target area.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
and a processor for executing the computer program stored in the memory to realize the traffic volume detection method.
In order to solve the above problem, the present invention also provides a computer-readable storage medium having at least one computer program stored therein, the at least one computer program being executed by a processor in an electronic device to implement the traffic volume detection method described above.
In the invention, vehicle audio data of a target area are acquired and preprocessed to obtain processed frame signals. A fast Fourier transform is then performed on the processed frame signals to obtain a frame spectrum, the frame spectrum is updated with a preset Mel triangular filter bank to obtain an updated frame spectrum, and a discrete cosine transform is applied to the updated frame spectrum to obtain cepstrum coefficients. The fast Fourier transform removes repeated frequency values from the processed frame signals; the preset Mel triangular filter bank simulates the human ear on the parts of the frame spectrum where vehicle sounds overlap and highlights the formants of the frame spectrum; and the discrete cosine transform yields the change curves of the formants corresponding to the cepstrum coefficients, which strengthens the formants. Finally, the features of the processed frame signals are extracted using the short-time energy, and the short-time energy, the calculated cepstrum distance and the extracted features are fused to obtain the traffic flow information in the target area, improving the accuracy of traffic flow detection. Therefore, the traffic volume detection method, device, electronic device and readable storage medium provided by the embodiments of the invention can improve the accuracy of traffic volume detection.
Drawings
Fig. 1 is a schematic flow chart illustrating a traffic volume detection method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a traffic volume detection device according to an embodiment of the present invention;
fig. 3 is a schematic internal structural diagram of an electronic device for implementing a traffic volume detection method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a traffic volume detection method. The execution subject of the traffic volume detection method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the traffic volume detection method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Referring to fig. 1, a flow diagram of a traffic volume detection method according to an embodiment of the present invention is shown, in the embodiment of the present invention, the traffic volume detection method includes:
and S1, acquiring vehicle audio data of the target area, and preprocessing the vehicle audio data to obtain a processed frame signal.
In the embodiment of the invention, the processed frame signal refers to a frame signal obtained by performing pre-emphasis, framing and windowing on the vehicle audio data.
In the embodiment of the invention, the vehicle audio data can be acquired through a website and a resource library. Alternatively, the vehicle audio data may be audio data related to road traffic for an area collected in real time.
In detail, the preprocessing the vehicle audio data to obtain a processed frame signal includes:
pre-emphasis processing is carried out on the vehicle audio data to obtain pre-emphasis audio signals;
performing framing processing on the pre-emphasis audio signal to obtain a framing signal;
and windowing the framing signals to obtain processed frame signals.
In the embodiment of the invention, the pre-emphasis is to enable the vehicle audio data to pass through a pre-constructed high-pass filter to enhance the high-frequency part in the audio signal, so as to highlight the formants of the high frequency to obtain the pre-emphasis audio signal.
Specifically, in the embodiment of the present invention, the pre-emphasis processing may be implemented by the following filter formula:
y(t)=x(t)-αx(t-1)
where α is the filter coefficient, x(t) is the audio sample value at time t, and y(t) is the audio signal after pre-emphasis processing.
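As an illustrative sketch (not taken from the embodiment itself), the pre-emphasis step can be written in Python/NumPy as follows; the filter coefficient value 0.97 is an assumed example, since the embodiment does not fix a specific value of α.

import numpy as np

def pre_emphasis(x: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    # y(t) = x(t) - alpha * x(t - 1); the first sample has no predecessor and is kept as-is.
    return np.append(x[0], x[1:] - alpha * x[:-1])

audio = np.random.randn(16000)          # stand-in for sampled vehicle audio
emphasized = pre_emphasis(audio, 0.97)  # high-frequency part is boosted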
In the embodiment of the present invention, framing refers to dividing the pre-emphasis audio signal into segments of a fixed time length, where each segment is one frame of the framing signal.
Specifically, in the embodiment of the present invention, the frame-by-frame framing signals are first converted into sampling points; adjacent frames contain an overlapping region, and if the audio signal cannot be divided into a whole number of frames, the signal is padded so that every frame contains the same number of sampling points and sampling is not interrupted.
For example, if the audio signal is divided into frames of 20-40 ms (25 ms is typical) and the sampling rate is 1 kHz, the frame length is 0.025 × 1000 = 25 sampling points, and a 10 ms frame shift corresponds to 0.01 × 1000 = 10 sampling points, so the first speech frame starts at sample 0, the second at sample 10 and the third at sample 20.
In the embodiment of the invention, windowing refers to multiplying each frame of the framing signal by a preset window function. Its main purpose is to increase the continuity between the left and right ends of the frame, preventing the spectral leakage in which the spectrum is smeared across the whole frequency band because the signal is truncated non-periodically, and making the signal more continuous overall.
Preferably, the preset window function may be a Hamming window function.
Specifically, in the embodiment of the present invention, it is assumed that the framed signal is S(n), n = 0, 1, 2, …, N-1, where S denotes the signal, n denotes the index of a frame point in S(n), and N denotes the size of the frame.
Specifically, windowing can be achieved by the following formula:
S′(n)=S(n)×W(n)
where S′(n) denotes the processed frame signal after windowing and W(n) is the Hamming window function, given by the following formula:
W(n) = 0.54 - 0.46 × cos(2πn / (a - 1)), 0 ≤ n ≤ a - 1
where a denotes the frame length to be windowed; different values of a produce different Hamming window functions.
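The framing and Hamming-windowing steps can be sketched as follows; the 25 ms frame length and 10 ms frame shift follow the example above, while the 1 kHz sampling rate and zero-padding of the final partial frame are illustrative assumptions.

import numpy as np

def frame_and_window(signal: np.ndarray, fs: int = 1000,
                     frame_ms: float = 25.0, shift_ms: float = 10.0) -> np.ndarray:
    # Frame length and frame shift in samples: 0.025 * 1000 = 25 and 0.01 * 1000 = 10.
    frame_len = int(round(frame_ms / 1000 * fs))
    frame_shift = int(round(shift_ms / 1000 * fs))
    n_frames = 1 + int(np.ceil((len(signal) - frame_len) / frame_shift))
    # Zero-pad so that every frame contains exactly frame_len sampling points.
    pad = n_frames * frame_shift + frame_len - len(signal)
    padded = np.append(signal, np.zeros(pad))
    idx = np.arange(frame_len)[None, :] + frame_shift * np.arange(n_frames)[:, None]
    frames = padded[idx]
    # Hamming window W(n) = 0.54 - 0.46 * cos(2*pi*n / (frame_len - 1)).
    return frames * np.hamming(frame_len)

windowed = frame_and_window(np.random.randn(1000))  # shape: (number of frames, 25)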
And S2, performing fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum.
In the embodiment of the invention, the frame spectrum is a frequency map representing vehicle audio data, and the spectrogram mainly comprises a vehicle frame spectrum and a background noise frame spectrum which are overlapped.
In detail, the performing fast fourier transform on the processed frame signal to obtain a frame spectrum includes:
extracting the frame frequency of the processing frame signal, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values;
deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values;
and summing the plurality of target frame frequency values to obtain the frame frequency spectrum.
Specifically, in the embodiment of the present invention, the preset fourier formula is as follows:
S(k) = Σ_{n=0}^{N-1} s(n) · e^(-j2πkn/N), k = 0, 1, …, N-1
where S(k) denotes the frame spectrum obtained by performing the fast Fourier transform on the frame signal s(n), k denotes the index of a frequency point in S(k), and N denotes the size of the frame.
Assume the sampling frequency is f_s. Then the framing signal can be expressed by the following formula:
s(n) = s(n · T_s), T_s = 1 / f_s
where T_s denotes the sampling period corresponding to the audio signal s(n), f_s denotes the sampling frequency, and n denotes the index of a frame point in the signal s(n). The frequency at point k can be obtained by the following equation:
f(k) = k · f_s / N
where the values f(k) are the plurality of initial frame frequency values, k denotes the frequency-point index and N denotes the size of the frame; the repeated frequency values among the f(k) are removed, and the remaining values are the target frame frequency values.
The frequencies corresponding to the remaining values of k are then summed by the following formula to obtain the frame spectrum:
F = Σ_k f(k)
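A sketch of this step is given below, using the real-valued FFT so that the mirrored (repeated) frequency values are not kept, under the assumption that the windowed frames are stored as rows of a matrix.

import numpy as np

def frame_spectrum(frames: np.ndarray, fs: float = 1000.0):
    # np.fft.rfft returns only bins k = 0 .. N/2, dropping the redundant mirrored half.
    n = frames.shape[1]
    magnitudes = np.abs(np.fft.rfft(frames, axis=1))  # |S(k)| for each frame
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)            # f(k) = k * fs / N
    return magnitudes, freqs

spectrum, bin_freqs = frame_spectrum(np.random.randn(99, 25))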
and S3, updating the frame frequency spectrum by utilizing a preset Mel triangular filter group to obtain an updated frame frequency spectrum.
In the embodiment of the invention, the main function of the Mel triangular filter bank is to smooth the frame spectrum, eliminate the effect of harmonics and highlight the formants of the frame spectrum.
In detail, the updating the frame spectrum by using a preset mel triangle filter set to obtain an updated frame spectrum includes:
inputting the frame frequency spectrum into at least two preset Mel triangular filter banks to obtain a plurality of Mel frequencies;
acquiring the highest Mel frequency and the lowest Mel frequency of at least two Mel frequencies;
determining the number of point positions between the highest Mel frequency and the lowest Mel frequency according to the number of the Mel triangular filter banks, and obtaining a plurality of additional Mel frequencies corresponding to the number of point positions by using a preset formula in the Mel triangular filter banks;
and converting the plurality of additional Mel frequencies into a plurality of updating frequencies, and carrying out drawing processing on the updating frequencies to obtain an updating frame frequency spectrum.
In the embodiment of the invention, the Mel frequency can describe the nonlinear characteristic of the human ear frequency and simulate the processing of the human ear auditory process.
Specifically, in the embodiment of the present invention, the frame spectrum is converted into a mel frequency by the following formula:
Mel(f) = 2595 × log10(1 + f / 700)
In an embodiment of the present invention, the Mel triangular filter bank may include 10 Mel triangular filters, the lowest Mel frequency may be 401.25 Mel and the highest Mel frequency may be 2834.99 Mel, the number of additional points is ten, consistent with the number of filters in the Mel triangular filter bank, and the plurality of additional Mel frequencies obtained by the formula included in the Mel triangular filter bank are: 622.50, 843.75, 1065.00, 1286.25, 1507.50, 1728.74, 1949.99, 2171.24, 2392.49 and 2613.74 Mel.
The plurality of additional mel frequencies may be converted to the plurality of update frequencies by the following formula:
f = 700 × (10^(Mel(f)/2595) - 1)
wherein, the plurality of update frequencies obtained by formula calculation are: 517.33, 781.90, 1103.97, 1496.04, 1973.32, 2554.33, 3261.62, 4122.63, 5170.76, 6446.70 Hz.
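The conversion between Hz and Mel used above can be illustrated with the following sketch; the ten filters and the boundary values 401.25 Mel and 2834.99 Mel are taken from the example, and the resulting Hz values come out close to (though, because of rounding in the source, not identical to) the update frequencies listed above.

import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)     # Mel(f) = 2595 * log10(1 + f / 700)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)   # inverse of the conversion above

low_mel, high_mel, n_filters = 401.25, 2834.99, 10
# Ten additional points spaced evenly on the Mel scale between the two boundaries.
additional_mels = np.linspace(low_mel, high_mel, n_filters + 2)[1:-1]
update_freqs_hz = mel_to_hz(additional_mels)      # roughly 516 Hz up to about 6420 Hz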
In the embodiment of the invention, the Mel-triangular filter bank can perform human ear simulation on the overlapped part of the vehicle frame frequency spectrum, and highlights the formants of the frame frequency spectrum.
And S4, performing discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient.
In the embodiment of the present invention, the cepstrum coefficients are obtained by performing a discrete cosine transform on each frame of the updated frame spectrum.
The discrete cosine transform has the advantages of concentrating the signal energy and of not requiring estimation of the signal phase, so a good signal enhancement effect is achieved at a low computational complexity.
In detail, the performing discrete cosine transform processing on the updated frame spectrum to obtain cepstrum coefficients includes:
and separating the frequency spectrum of the updated frame by using a preset discrete cosine transform formula, and extracting the frequency spectrum envelope contained in the separated frequency spectrum of the updated frame to obtain the cepstrum coefficient.
In an embodiment of the present invention, the update frame spectrum includes a spectral envelope and spectral details. The spectral envelope is a smooth curve connecting peaks of amplitudes of different frequencies, and represents a low-frequency signal of a frequency spectrum; the spectral details represent the high frequency signals of the spectrum.
Specifically, in the embodiment of the present invention, the calculation of the cepstrum coefficient may be implemented by the following discrete cosine transform formula:
C_mel(i, k) = Σ_{m=1}^{M} log(S_i(m)) × cos(πk(2m - 1) / (2M)), k = 1, 2, …, M
where C_mel(i, k) denotes the k-th cepstrum coefficient of the i-th frame, M denotes the number of Mel filters, which is typically an even number, and S_i(m) denotes the energy of the m-th Mel filter of the i-th frame signal.
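Assuming a standard type-II DCT of the log Mel filter-bank energies (the usual Mel-frequency cepstral coefficient construction; the exact normalization used in the embodiment is not specified), this step can be sketched as follows.

import numpy as np

def cepstral_coefficients(filter_energies: np.ndarray, n_ceps: int = 12) -> np.ndarray:
    # filter_energies holds S_i(m), m = 1..M, the energy of each Mel filter for one frame.
    M = len(filter_energies)
    log_e = np.log(filter_energies + 1e-12)        # log compression; avoid log(0)
    m = np.arange(1, M + 1)
    # C(i, k) = sum_m log(S_i(m)) * cos(pi * k * (2m - 1) / (2M)), k = 1..n_ceps
    return np.array([np.sum(log_e * np.cos(np.pi * k * (2 * m - 1) / (2 * M)))
                     for k in range(1, n_ceps + 1)])

coeffs = cepstral_coefficients(np.abs(np.random.randn(12)) + 1.0)  # 12 Mel filters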
And S5, calculating a cepstrum distance by using the cepstrum coefficient and the acquired noise cepstrum coefficient.
In the embodiment of the invention, the cepstrum distance represents the distance between cepstrum coefficients of each frame of the vehicle frame spectrum at the overlapping part.
In the embodiment of the present invention, similarly, the process and the method for acquiring the noise cepstrum coefficient are substantially the same as the process and the method for acquiring the cepstrum coefficient.
In detail, the calculating by using the cepstrum coefficient and the obtained noise cepstrum coefficient to obtain a cepstrum distance includes:
calculating a noise cepstral coefficient average of a plurality of the noise cepstral coefficients;
and acquiring the number of the cepstrum coefficients, and calculating cepstrum distances by using the cepstrum coefficients and the average value of the noise cepstrum coefficients according to the number of the cepstrum coefficients.
In the embodiment of the invention, the noise cepstrum coefficients are summed and divided by the number of the noise cepstrum coefficients to obtain the average value of the noise cepstrum coefficients.
Specifically, the cepstrum distance may be calculated by the following formula:
d_mel(i) = sqrt( Σ_{k=1}^{p} ( C_mel(i, k) - C̄_noise(k) )² )
where d_mel(i) denotes the cepstrum distance, C_mel(i, k) denotes the cepstrum coefficients, C̄_noise(k) denotes the mean value of the noise cepstrum coefficients, and p denotes the number of cepstrum coefficients.
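A sketch of the cepstrum-distance calculation, assuming a Euclidean distance between the current frame's cepstrum coefficients and the mean noise cepstrum coefficients over the first p coefficients:

import numpy as np

def cepstrum_distance(frame_ceps: np.ndarray, noise_ceps: np.ndarray) -> float:
    # noise_ceps has shape (number of noise frames, p); its column-wise mean is the
    # average noise cepstrum coefficient vector.
    noise_mean = noise_ceps.mean(axis=0)
    # d(i) = sqrt( sum_{k=1..p} (C(i, k) - mean_noise(k))^2 )
    return float(np.sqrt(np.sum((frame_ceps - noise_mean) ** 2)))

d = cepstrum_distance(np.random.randn(12), np.random.randn(30, 12))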
And S6, extracting the characteristics of the processed frame signal by using short-time energy, and carrying out fusion processing according to the short-time energy, the cepstrum distance and the characteristics to obtain traffic flow information in the target area.
In the embodiment of the invention, the short-time energy represents how the signal energy of each processed frame changes over time, and the short-time energy is used to extract the features of the windowed processed frame signal.
In detail, the extracting the features of the processed frame signal by using the short-time energy includes:
and extracting a formant of the processing frame signal according to the short-time energy, wherein the formant is the characteristic of the processing frame signal extracted by the short-time energy.
Specifically, in the embodiment of the present invention, the formants may be obtained by the following short-time energy formula:
E = Σ_{i=0}^{L-1} ( x(i) × h(i) )²
where L denotes the length of the windowing function, h(i) denotes the windowing function, and x(i) denotes the time-domain signal of the processed frame signal.
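A sketch of the short-time energy computation for one frame is given below; passing the Hamming window explicitly keeps it consistent with the windowing step and is an illustrative choice.

import numpy as np

def short_time_energy(frame: np.ndarray, window=None) -> float:
    # E = sum_{i=0..L-1} (x(i) * h(i))^2, with h(i) the windowing function of length L.
    if window is None:
        window = np.hamming(len(frame))
    return float(np.sum((frame * window) ** 2))

energies = np.array([short_time_energy(f) for f in np.random.randn(99, 25)])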
In the embodiment of the present invention, the traffic flow information in the target area may be obtained according to the number of the formants, and specifically, the traffic flow information includes information on the number of the running vehicles.
For example, if the number of formants is 5, the number of vehicles in the target region is 5.
The short-time energy function only roughly identifies whether a processed frame signal is present, so the short-time energy and the cepstrum distance are further fused; on the frame spectrum obtained after fusion, the spectral envelope of each peak is clearer and the peaks are more prominent, so that the number of vehicles in the sections where vehicle sounds overlap can be obtained from the number of peaks.
In detail, the obtaining of the traffic flow information in the target area by performing the fusion processing according to the short-time energy, the cepstrum distance, and the features includes:
calculating the power of the cepstrum distance to obtain an adjusted cepstrum distance;
multiplying the adjusted cepstrum distance by the short-time energy characteristic to obtain a target characteristic parameter;
extracting a characteristic value from the target characteristic parameter, and fusing the characteristic value and the characteristic to obtain a target characteristic parameter frequency spectrum;
and acquiring a highest peak from the target characteristic parameter frequency spectrum, and acquiring traffic flow information in the target area according to the position of the highest peak on the target characteristic parameter frequency spectrum.
In the embodiment of the invention, the cepstrum distance of the vehicle frame spectrum at the overlapping part is adjusted by using the adjustment coefficient, redundant wave crests are removed, and the highest peaks of the vehicle frame spectrum at the overlapping part are highlighted.
Specifically, in the embodiment of the present invention, the short-time energy, the cepstrum distance, and the feature may be fused by the following formulas:
P_i = E_i × ( d_mel(i) )^λ
where λ denotes the adjustment coefficient, that is, the power applied to the cepstrum distance; P_i denotes the characteristic value corresponding to the i-th frame; E_i denotes the short-time energy of the i-th frame; and d_mel(i) denotes the cepstrum distance.
In the embodiment of the invention, the traffic flow information in the target area can be obtained according to the position of the highest peak.
For example, if the number of highest peaks in the frame spectrum of the overlapping segments is 6, the final number of vehicles in the target area is 6.
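A sketch of the fusion and peak-counting step, under the assumptions that the adjusted cepstrum distance is the cepstrum distance raised to a fixed power, that fusion is its product with the short-time energy, and that a simple local-maximum test is enough to pick the peaks; the exponent value 2.0 and the peak test are illustrative only.

import numpy as np

def count_vehicles(energies: np.ndarray, cep_distances: np.ndarray,
                   exponent: float = 2.0) -> int:
    # Fused characteristic parameter per frame: P_i = E_i * d(i)^exponent.
    fused = energies * cep_distances ** exponent
    # A frame is counted as a peak when it is strictly larger than both neighbours.
    is_peak = (fused[1:-1] > fused[:-2]) & (fused[1:-1] > fused[2:])
    return int(np.count_nonzero(is_peak))  # one peak per passing vehicle

n_vehicles = count_vehicles(np.abs(np.random.randn(99)), np.abs(np.random.randn(99)))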
In the embodiment of the invention, vehicle audio data of a target area are acquired and preprocessed to obtain processed frame signals. A fast Fourier transform is then performed on the processed frame signals to obtain a frame spectrum, the frame spectrum is updated with a preset Mel triangular filter bank to obtain an updated frame spectrum, and a discrete cosine transform is applied to the updated frame spectrum to obtain cepstrum coefficients. The fast Fourier transform removes repeated frequency values from the processed frame signals; the preset Mel triangular filter bank simulates the human ear on the parts of the frame spectrum where vehicle sounds overlap and highlights the formants of the frame spectrum; and the discrete cosine transform yields the change curves of the formants corresponding to the cepstrum coefficients, which strengthens the formants. Finally, the features of the processed frame signals are extracted using the short-time energy, and the short-time energy, the calculated cepstrum distance and the extracted features are fused to obtain the traffic flow information in the target area, improving the accuracy of traffic flow detection. Therefore, the traffic volume detection method provided by the embodiment of the invention can improve the accuracy of traffic volume detection.
Fig. 2 is a functional block diagram of the traffic volume detecting device according to the present invention.
The traffic volume detecting device 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the traffic volume detecting device may include a frame signal acquisition module 101, a cepstrum coefficient obtaining module 102, and a traffic volume information obtaining module 103. The modules, which may also be referred to as units, are a series of computer program segments that can be executed by a processor of the electronic device, can perform fixed functions, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the frame signal acquisition module 101 is configured to acquire vehicle audio data of a target area, and preprocess the vehicle audio data to obtain a processed frame signal.
In the embodiment of the invention, the processed frame signal refers to a frame signal obtained by performing pre-emphasis, framing and windowing on the vehicle audio data.
In the embodiment of the invention, the vehicle audio data can be acquired through a website and a resource library. Alternatively, the vehicle audio data may be audio data related to road traffic for an area collected in real time.
In detail, the frame signal obtaining module 101 performs preprocessing on the vehicle audio data by performing the following operations to obtain a processed frame signal, including:
pre-emphasis processing is carried out on the vehicle audio data to obtain pre-emphasis audio signals;
performing framing processing on the pre-emphasis audio signal to obtain a framing signal;
and windowing the framing signals to obtain processed frame signals.
In the embodiment of the invention, the pre-emphasis is to enable the vehicle audio data to pass through a pre-constructed high-pass filter to enhance the high-frequency part in the audio signal, so as to highlight the formants of the high frequency to obtain the pre-emphasis audio signal.
Specifically, in the embodiment of the present invention, the pre-emphasis processing may be implemented by the following filter formula:
y(t)=x(t)-αx(t-1)
where α is the filter coefficient, x(t) is the audio sample value at time t, and y(t) is the audio signal after pre-emphasis processing.
In the embodiment of the present invention, framing refers to dividing the pre-emphasis audio signal into segments of a fixed time length, where each segment is one frame of the framing signal.
Specifically, in the embodiment of the present invention, the frame-by-frame framing signals are first converted into sampling points; adjacent frames contain an overlapping region, and if the audio signal cannot be divided into a whole number of frames, the signal is padded so that every frame contains the same number of sampling points and sampling is not interrupted.
For example, if the audio signal is divided into frames of 20-40 ms (25 ms is typical) and the sampling rate is 1 kHz, the frame length is 0.025 × 1000 = 25 sampling points, and a 10 ms frame shift corresponds to 0.01 × 1000 = 10 sampling points, so the first speech frame starts at sample 0, the second at sample 10 and the third at sample 20.
In the embodiment of the invention, windowing refers to multiplying each frame of the framing signal by a preset window function. Its main purpose is to increase the continuity between the left and right ends of the frame, preventing the spectral leakage in which the spectrum is smeared across the whole frequency band because the signal is truncated non-periodically, and making the signal more continuous overall.
Preferably, the preset window function may be a Hamming window function.
Specifically, in the embodiment of the present invention, it is assumed that the framed signal is S(n), n = 0, 1, 2, …, N-1, where S denotes the signal, n denotes the index of a frame point in S(n), and N denotes the size of the frame.
Specifically, windowing can be achieved by the following formula:
S′(n)=S(n)×W(n)
where S′(n) denotes the processed frame signal after windowing and W(n) is the Hamming window function, given by the following formula:
W(n) = 0.54 - 0.46 × cos(2πn / (a - 1)), 0 ≤ n ≤ a - 1
where a denotes the frame length to be windowed; different values of a produce different Hamming window functions.
The cepstrum coefficient obtaining module 102 is configured to perform fast fourier transform on the processed frame signal to obtain a frame frequency spectrum, update the frame frequency spectrum by using a preset mel triangular filter set to obtain an updated frame frequency spectrum, and perform discrete cosine transform on the updated frame frequency spectrum to obtain a cepstrum coefficient.
In the embodiment of the invention, the frame spectrum is a frequency map representing vehicle audio data, and the spectrogram mainly comprises a vehicle frame spectrum and a background noise frame spectrum which are overlapped.
In detail, the cepstrum coefficient obtaining module 102 performs fast fourier transform on the processed frame signal to obtain a frame spectrum by performing the following operations, including:
extracting the frame frequency of the processing frame signal, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values;
deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values;
and summing the plurality of target frame frequency values to obtain the frame frequency spectrum.
Specifically, in the embodiment of the present invention, the preset fourier formula is as follows:
S(k) = Σ_{n=0}^{N-1} s(n) · e^(-j2πkn/N), k = 0, 1, …, N-1
where S(k) denotes the frame spectrum obtained by performing the fast Fourier transform on the frame signal s(n), k denotes the index of a frequency point in S(k), and N denotes the size of the frame.
Assume the sampling frequency is f_s. Then the framing signal can be expressed by the following formula:
s(n) = s(n · T_s), T_s = 1 / f_s
where T_s denotes the sampling period corresponding to the audio signal s(n), f_s denotes the sampling frequency, and n denotes the index of a frame point in the signal s(n). The frequency at point k can be obtained by the following equation:
f(k) = k · f_s / N
where the values f(k) are the plurality of initial frame frequency values, k denotes the frequency-point index and N denotes the size of the frame; the repeated frequency values among the f(k) are removed, and the remaining values are the target frame frequency values.
The frequencies corresponding to the remaining values of k are then summed by the following formula to obtain the frame spectrum:
F = Σ_k f(k)
In the embodiment of the invention, the main function of the Mel triangular filter bank is to smooth the frame spectrum, eliminate the effect of harmonics and highlight the formants of the frame spectrum.
In detail, the cepstrum coefficient obtaining module 102 updates the frame spectrum by using a preset mel-triangle filter bank by performing the following operations, to obtain an updated frame spectrum, including:
inputting the frame frequency spectrum into at least two preset Mel triangular filter banks to obtain a plurality of Mel frequencies;
acquiring the highest Mel frequency and the lowest Mel frequency of at least two Mel frequencies;
determining the number of point positions between the highest Mel frequency and the lowest Mel frequency according to the number of the Mel triangular filter banks, and obtaining a plurality of additional Mel frequencies corresponding to the number of point positions by using a preset formula in the Mel triangular filter banks;
and converting the plurality of additional Mel frequencies into a plurality of updating frequencies, and carrying out drawing processing on the updating frequencies to obtain an updating frame frequency spectrum.
In the embodiment of the invention, the Mel frequency can describe the nonlinear characteristic of the human ear frequency and simulate the processing of the human ear auditory process.
Specifically, in the embodiment of the present invention, the frame spectrum is converted into a mel frequency by the following formula:
Mel(f) = 2595 × log10(1 + f / 700)
In an embodiment of the present invention, the Mel triangular filter bank may include 10 Mel triangular filters, the lowest Mel frequency may be 401.25 Mel and the highest Mel frequency may be 2834.99 Mel, the number of additional points is ten, consistent with the number of filters in the Mel triangular filter bank, and the plurality of additional Mel frequencies obtained by the formula included in the Mel triangular filter bank are: 622.50, 843.75, 1065.00, 1286.25, 1507.50, 1728.74, 1949.99, 2171.24, 2392.49 and 2613.74 Mel.
The plurality of additional mel frequencies may be converted to the plurality of update frequencies by the following formula:
f = 700 × (10^(Mel(f)/2595) - 1)
wherein, the plurality of update frequencies obtained by formula calculation are: 517.33, 781.90, 1103.97, 1496.04, 1973.32, 2554.33, 3261.62, 4122.63, 5170.76, 6446.70 Hz.
In the embodiment of the invention, the Mel-triangular filter bank can perform human ear simulation on the overlapped part of the vehicle frame frequency spectrum, and highlights the formants of the frame frequency spectrum.
In the embodiment of the present invention, the cepstrum coefficients are obtained by performing a discrete cosine transform on each frame of the updated frame spectrum.
The discrete cosine transform has the advantages of concentrating the signal energy and of not requiring estimation of the signal phase, so a good signal enhancement effect is achieved at a low computational complexity.
In detail, the cepstrum coefficient obtaining module 102 performs discrete cosine transform processing on the updated frame spectrum by performing the following operations to obtain cepstrum coefficients, including:
and separating the frequency spectrum of the updated frame by using a preset discrete cosine transform formula, and extracting the frequency spectrum envelope contained in the separated frequency spectrum of the updated frame to obtain the cepstrum coefficient.
In an embodiment of the present invention, the update frame spectrum includes a spectral envelope and spectral details. The spectral envelope is a smooth curve connecting peaks of amplitudes of different frequencies, and represents a low-frequency signal of a frequency spectrum; the spectral details represent the high frequency signals of the spectrum.
Specifically, in the embodiment of the present invention, the calculation of the cepstrum coefficient may be implemented by the following discrete cosine transform formula:
C_mel(i, k) = Σ_{m=1}^{M} log(S_i(m)) × cos(πk(2m - 1) / (2M)), k = 1, 2, …, M
where C_mel(i, k) denotes the k-th cepstrum coefficient of the i-th frame, M denotes the number of Mel filters, which is typically an even number, and S_i(m) denotes the energy of the m-th Mel filter of the i-th frame signal.
The traffic volume information obtaining module 103 is configured to calculate a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient, extract a feature of the processed frame signal by using short-time energy, and perform fusion processing according to the short-time energy, the cepstrum distance, and the feature to obtain traffic volume information in the target area.
In the embodiment of the invention, the cepstrum distance represents the distance between cepstrum coefficients of each frame of the vehicle frame spectrum at the overlapping part.
In the embodiment of the present invention, similarly, the process and the method for acquiring the noise cepstrum coefficient are substantially the same as the process and the method for acquiring the cepstrum coefficient.
In detail, the traffic volume information obtaining module 103 performs calculation by using the cepstrum coefficient and the obtained noise cepstrum coefficient by performing the following operations to obtain a cepstrum distance, including:
calculating a noise cepstral coefficient average of a plurality of the noise cepstral coefficients;
and acquiring the number of the cepstrum coefficients, and calculating cepstrum distances by using the cepstrum coefficients and the average value of the noise cepstrum coefficients according to the number of the cepstrum coefficients.
In the embodiment of the invention, the noise cepstrum coefficients are summed and divided by the number of the noise cepstrum coefficients to obtain the average value of the noise cepstrum coefficients.
Specifically, the cepstrum distance may be calculated by the following formula:
d_mel(i) = sqrt( Σ_{k=1}^{p} ( C_mel(i, k) - C̄_noise(k) )² )
where d_mel(i) denotes the cepstrum distance, C_mel(i, k) denotes the cepstrum coefficients, C̄_noise(k) denotes the mean value of the noise cepstrum coefficients, and p denotes the number of cepstrum coefficients.
In the embodiment of the invention, the short-time energy represents how the signal energy of each processed frame changes over time, and the short-time energy is used to extract the features of the windowed processed frame signal.
In detail, the extracting the feature of the processed frame signal by using the short-time energy includes:
and extracting a formant of the processing frame signal according to the short-time energy, wherein the formant is the characteristic of the processing frame signal extracted by the short-time energy. Specifically, in the embodiment of the present invention, the formants may be obtained by the following short-time energy formula:
E = Σ_{i=0}^{L-1} ( x(i) × h(i) )²
where L denotes the length of the windowing function, h(i) denotes the windowing function, and x(i) denotes the time-domain signal of the processed frame signal.
In the embodiment of the present invention, the traffic flow information in the target area may be obtained according to the number of the formants, and specifically, the traffic flow information includes information on the number of the running vehicles.
For example, if the number of formants is 5, the number of vehicles in the target region is 5.
The short-time energy function only roughly identifies whether a processed frame signal is present, so the short-time energy and the cepstrum distance are further fused; on the frame spectrum obtained after fusion, the spectral envelope of each peak is clearer and the peaks are more prominent, so that the number of vehicles in the sections where vehicle sounds overlap can be obtained from the number of peaks.
In detail, the traffic volume information obtaining module 103 performs fusion processing according to the short-time energy, the cepstrum distance, and the feature by performing the following operations to obtain traffic volume information in the target area, including:
calculating the power of the cepstrum distance to obtain an adjusted cepstrum distance;
multiplying the adjusted cepstrum distance by the short-time energy characteristic to obtain a target characteristic parameter;
extracting a characteristic value from the target characteristic parameter, and fusing the characteristic value and the characteristic to obtain a target characteristic parameter frequency spectrum;
and acquiring a highest peak from the target characteristic parameter frequency spectrum, and acquiring traffic flow information in the target area according to the position of the highest peak on the target characteristic parameter frequency spectrum.
In the embodiment of the invention, the cepstrum distance of the vehicle frame spectrum at the overlapping part is adjusted by using the adjustment coefficient, redundant wave crests are removed, and the highest peaks of the vehicle frame spectrum at the overlapping part are highlighted.
Specifically, in the embodiment of the present invention, the short-time energy, the cepstrum distance, and the feature may be fused by the following formulas:
P_i = E_i × ( d_mel(i) )^λ
where λ denotes the adjustment coefficient, that is, the power applied to the cepstrum distance; P_i denotes the characteristic value corresponding to the i-th frame; E_i denotes the short-time energy of the i-th frame; and d_mel(i) denotes the cepstrum distance.
In the embodiment of the invention, the traffic flow information in the target area can be obtained according to the position of the highest peak.
For example, if the number of highest peaks in the frame spectrum of the overlapping segments is 6, the final number of vehicles in the target area is 6.
In the embodiment of the invention, vehicle audio data of a target area are acquired and preprocessed to obtain processed frame signals. A fast Fourier transform is then performed on the processed frame signals to obtain a frame spectrum, the frame spectrum is updated with a preset Mel triangular filter bank to obtain an updated frame spectrum, and a discrete cosine transform is applied to the updated frame spectrum to obtain cepstrum coefficients. The fast Fourier transform removes repeated frequency values from the processed frame signals; the preset Mel triangular filter bank simulates the human ear on the parts of the frame spectrum where vehicle sounds overlap and highlights the formants of the frame spectrum; and the discrete cosine transform yields the change curves of the formants corresponding to the cepstrum coefficients, which strengthens the formants. Finally, the features of the processed frame signals are extracted using the short-time energy, and the short-time energy, the calculated cepstrum distance and the extracted features are fused to obtain the traffic flow information in the target area, improving the accuracy of traffic flow detection. Therefore, the traffic volume detection device provided by the embodiment of the invention can improve the accuracy of traffic volume detection.
Fig. 3 is a schematic structural diagram of an electronic device implementing the traffic volume detection method according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a traffic detection program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, local magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of a traffic volume detection program, etc., but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., a traffic volume detection program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The communication bus 12 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 3 shows only an electronic device with certain components; those skilled in the art will appreciate that the structure shown in Fig. 3 does not limit the electronic device, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component. Preferably, the power supply is logically connected to the at least one processor 10 through a power management device, so that charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include one or more DC or AC power sources, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a Bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., a Wi-Fi interface, a Bluetooth interface, etc.), which is generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a display, an input unit (such as a keyboard), and may optionally be a standard wired interface or a wireless interface. In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used to display information processed in the electronic device and to display a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The traffic volume detection program stored in the memory 11 of the electronic device is a combination of computer program instructions which, when run on the processor 10, can implement:
acquiring vehicle audio data of a target area, and preprocessing the vehicle audio data to obtain a processed frame signal;
performing fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum;
updating the frame frequency spectrum by utilizing a preset Mel triangular filter bank to obtain an updated frame frequency spectrum;
performing discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient;
calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient;
and extracting features of the processed frame signal by using short-time energy, and performing fusion processing according to the short-time energy, the cepstrum distance and the features to obtain traffic flow information in the target area.
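By way of illustration only, the following sketch shows how the above steps could be chained in code. It is a minimal Python example, not the claimed implementation: the sampling rate (8 kHz), frame length (25 ms with a 10 ms hop), pre-emphasis coefficient (0.97), number of Mel filters (26), number of cepstrum coefficients (13) and the peak-counting rule are illustrative assumptions that do not appear in the patent.

```python
# Minimal end-to-end sketch of the described pipeline (illustrative only).
import numpy as np
from scipy.fft import dct

SR, FRAME, HOP, N_FFT, N_MEL, N_CEP = 8000, 200, 80, 256, 26, 13

def hz_to_mel(f):                       # common Mel mapping (assumption)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mel=N_MEL, n_fft=N_FFT, sr=SR):
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mel + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mel, n_fft // 2 + 1))
    for i in range(1, n_mel + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    return fb

def frame_signal(x, frame=FRAME, hop=HOP):
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])                   # pre-emphasis
    n = 1 + max(0, len(x) - frame) // hop
    idx = np.arange(frame)[None, :] + hop * np.arange(n)[:, None]
    return x[idx] * np.hamming(frame)                            # framing + window

def cepstra_and_energy(x):
    frames = frame_signal(x)
    power = np.abs(np.fft.rfft(frames, N_FFT)) ** 2              # frame spectrum
    mel_spec = np.maximum(power @ mel_filterbank().T, 1e-10)     # updated frame spectrum
    cep = dct(np.log(mel_spec), type=2, norm="ortho", axis=1)[:, :N_CEP]
    energy = np.sum(frames ** 2, axis=1)                         # short-time energy
    return cep, energy

def traffic_count(x, noise_cep, k=2.0):
    # Hypothetical fusion rule: energy weighted by the cepstrum distance
    # to a noise template, with rising edges above a threshold counted.
    cep, energy = cepstra_and_energy(x)
    dist = np.sqrt(np.sum((cep - noise_cep) ** 2, axis=1))
    fused = (dist ** k) * energy
    above = fused > fused.mean() + 2 * fused.std()
    return int(np.sum(above[1:] & ~above[:-1]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.standard_normal(SR * 3) * 0.01                   # stand-in recording
    noise_cep = cepstra_and_energy(audio[:SR])[0].mean(axis=0)   # stand-in noise template
    print("estimated vehicle count:", traffic_count(audio, noise_cep))
```

In this sketch the fused per-frame value rises only when a frame is both energetic and far, in cepstrum terms, from the noise template, which reflects the intuition behind fusing short-time energy with the cepstrum distance.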
Specifically, for the implementation of the computer program by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 1, which is not repeated here.
Further, if the integrated modules/units of the electronic device are implemented in the form of software functional units and sold or used as a separate product, they may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile, and may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
acquiring vehicle audio data of a target area, and preprocessing the vehicle audio data to obtain a processed frame signal;
performing fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum;
updating the frame frequency spectrum by utilizing a preset Mel triangular filter bank to obtain an updated frame frequency spectrum;
performing discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient;
calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient;
and extracting features of the processed frame signal by using short-time energy, and performing fusion processing according to the short-time energy, the cepstrum distance and the features to obtain traffic flow information in the target area.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, in which each data block contains the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by a single unit or means in software or hardware. Terms such as "first" and "second" are used to denote names only and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A traffic volume detection method, characterized in that the method comprises:
acquiring vehicle audio data of a target area, and preprocessing the vehicle audio data to obtain a processed frame signal;
performing fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum;
updating the frame frequency spectrum by utilizing a preset Mel triangular filter bank to obtain an updated frame frequency spectrum;
performing discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient;
calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient;
and extracting features of the processed frame signal by using short-time energy, and performing fusion processing according to the short-time energy, the cepstrum distance and the features to obtain traffic flow information in the target area.
2. The traffic volume detection method according to claim 1, wherein the performing fusion processing according to the short-time energy, the cepstrum distance and the features to obtain traffic flow information in the target area comprises:
calculating the power of the cepstrum distance to obtain an adjusted cepstrum distance;
multiplying the adjusted cepstrum distance by the short-time energy characteristic to obtain a target characteristic parameter;
extracting a characteristic value from the target characteristic parameter, and fusing the characteristic value and the features to obtain a target characteristic parameter frequency spectrum;
and acquiring a highest peak from the target characteristic parameter frequency spectrum, and acquiring traffic flow information in the target area according to the position of the highest peak on the target characteristic parameter frequency spectrum.
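For illustration only (not part of the claims): a minimal Python sketch of the fusion described in claim 2, assuming the "power" of the cepstrum distance means raising it to an exponent and using a mean-plus-two-standard-deviations threshold as a stand-in peak criterion; the exponent, the threshold and the synthetic inputs are assumptions of this sketch.

```python
# Sketch of claim 2: fuse the adjusted cepstrum distance with the
# short-time energy and read off peaks of the fused curve.
import numpy as np

def fuse_and_detect(cep_dist, short_time_energy, power=2.0):
    fused = (cep_dist ** power) * short_time_energy        # target characteristic parameter
    threshold = fused.mean() + 2.0 * fused.std()            # illustrative peak criterion
    peaks = [i for i in range(1, len(fused) - 1)
             if fused[i] > threshold
             and fused[i] >= fused[i - 1] and fused[i] >= fused[i + 1]]
    highest = int(np.argmax(fused)) if len(fused) else None  # position of the highest peak
    return peaks, highest

# Example with synthetic per-frame values.
d = np.abs(np.random.default_rng(1).standard_normal(100))
e = np.abs(np.random.default_rng(2).standard_normal(100))
print(fuse_and_detect(d, e))
```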
3. The traffic volume detection method according to claim 1, wherein the calculating a cepstrum distance by using the cepstrum coefficient and the obtained noise cepstrum coefficient comprises:
calculating a noise cepstrum coefficient average value of a plurality of the noise cepstrum coefficients;
and acquiring the number of the cepstrum coefficients, and calculating the cepstrum distance by using the cepstrum coefficients and the noise cepstrum coefficient average value according to the number of the cepstrum coefficients.
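For illustration only (not part of the claims): a sketch of claim 3 under the assumption that the cepstrum distance is a Euclidean distance, over the available number of coefficients, between a frame's cepstrum coefficients and the averaged noise cepstrum coefficients; the exact distance formula of the patent is not reproduced.

```python
# Sketch of claim 3: average several noise cepstrum-coefficient vectors,
# then compute a distance over the p available coefficients.
import numpy as np

def cepstrum_distance(frame_cep, noise_cep_list):
    noise_mean = np.mean(np.asarray(noise_cep_list), axis=0)   # noise average
    p = len(frame_cep)                                          # number of coefficients
    diff = np.asarray(frame_cep)[:p] - noise_mean[:p]
    return np.sqrt(np.sum(diff ** 2))                           # illustrative Euclidean distance

print(cepstrum_distance([1.0, 0.2, -0.1], [[0.9, 0.1, 0.0], [1.1, 0.3, -0.2]]))
```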
4. The traffic volume detection method according to claim 1, wherein the performing discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient comprises:
and separating the updated frame frequency spectrum by using a preset discrete cosine transform formula, and extracting the frequency spectrum envelope contained in the separated updated frame frequency spectrum to obtain the cepstrum coefficient.
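For illustration only (not part of the claims): a sketch of claim 4, assuming the "separation" performed by the discrete cosine transform amounts to keeping the low-order DCT coefficients of the log Mel spectrum, which carry the slowly varying spectral envelope; the number of retained coefficients (13) and the stand-in spectrum are assumptions.

```python
# Sketch of claim 4: DCT of the log Mel spectrum; low-order coefficients
# approximate the spectral envelope and serve as cepstrum coefficients.
import numpy as np
from scipy.fft import dct

def cepstral_coefficients(log_mel_spectrum, n_cep=13):
    return dct(log_mel_spectrum, type=2, norm="ortho")[:n_cep]

log_mel = np.log(np.linspace(1.0, 2.0, 26))     # stand-in updated frame spectrum
print(cepstral_coefficients(log_mel))
```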
5. The traffic volume detection method according to claim 1, wherein the updating the frame frequency spectrum by utilizing a preset Mel triangular filter bank to obtain an updated frame frequency spectrum comprises:
inputting the frame frequency spectrum into at least two preset Mel triangular filter banks to obtain a plurality of Mel frequencies;
acquiring the highest Mel frequency and the lowest Mel frequency among the plurality of Mel frequencies;
determining the number of point positions between the highest Mel frequency and the lowest Mel frequency according to the number of the Mel triangular filter banks, and obtaining a plurality of additional Mel frequencies corresponding to the number of point positions by using a preset formula in the Mel triangular filter banks;
and converting the plurality of additional Mel frequencies into a plurality of update frequencies, and plotting the update frequencies to obtain the updated frame frequency spectrum.
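For illustration only (not part of the claims): a sketch of claim 5, assuming the "preset formula" is the common Mel mapping mel = 2595·log10(1 + f/700); points are spread evenly on the Mel axis between the lowest and highest Mel frequency according to the number of filters, then converted back to Hz. The frequency range and filter count in the example are assumptions.

```python
# Sketch of claim 5: additional Mel frequencies between the lowest and
# highest Mel frequency, spaced according to the number of filters.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)      # assumed Mel formula

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_points(f_low, f_high, n_filters):
    mels = np.linspace(hz_to_mel(f_low), hz_to_mel(f_high), n_filters + 2)
    return mel_to_hz(mels)                          # additional Mel frequencies, in Hz

print(mel_points(0.0, 4000.0, 26))
```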
6. The traffic volume detection method according to claim 1, wherein the performing a fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum comprises:
extracting the frame frequency of the processed frame signal, and calculating by using a preset Fourier formula and the frame frequency to obtain a plurality of initial frame frequency values;
deleting repeated values in the plurality of initial frame frequency values to obtain a plurality of target frame frequency values;
and summing the plurality of target frame frequency values to obtain the frame frequency spectrum.
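For illustration only (not part of the claims): a sketch of claim 6, reading the deletion of repeated frame frequency values as dropping the conjugate-symmetric upper half of the FFT of a real-valued frame; the final summation step of the claim is not reproduced, and the FFT size and test tone are assumptions.

```python
# Sketch of claim 6: the FFT of a real frame is conjugate symmetric, so the
# mirrored (repeated) upper half can be discarded; the remaining magnitudes
# form the frame frequency spectrum.
import numpy as np

def frame_spectrum(frame, n_fft=256):
    full = np.fft.fft(frame, n_fft)
    half = full[: n_fft // 2 + 1]        # repeated (mirrored) values removed
    return np.abs(half)

frame = np.hamming(200) * np.sin(2 * np.pi * 440 * np.arange(200) / 8000)
print(frame_spectrum(frame).shape)       # (129,)
```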
7. The traffic volume detection method according to claim 1, wherein the preprocessing the vehicle audio data to obtain a processed frame signal comprises:
performing pre-emphasis processing on the vehicle audio data to obtain a pre-emphasized audio signal;
performing framing processing on the pre-emphasized audio signal to obtain framed signals;
and windowing the framed signals to obtain the processed frame signal.
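For illustration only (not part of the claims): a sketch of claim 7 with pre-emphasis, framing and Hamming windowing; the pre-emphasis coefficient (0.97), frame length and hop are assumptions of the sketch.

```python
# Sketch of claim 7: pre-emphasis, framing and windowing of the audio data.
import numpy as np

def preprocess(audio, frame_len=200, hop=80, alpha=0.97):
    emphasized = np.append(audio[0], audio[1:] - alpha * audio[:-1])  # pre-emphasis
    n = 1 + max(0, len(emphasized) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    frames = emphasized[idx]                                          # framing
    return frames * np.hamming(frame_len)                             # windowing

print(preprocess(np.random.default_rng(3).standard_normal(8000)).shape)
```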
8. A traffic volume detection device, comprising:
the frame signal acquisition module is used for acquiring vehicle audio data of a target area and preprocessing the vehicle audio data to obtain a processed frame signal;
a cepstrum coefficient obtaining module, configured to perform fast Fourier transform on the processed frame signal to obtain a frame frequency spectrum, update the frame frequency spectrum by using a preset Mel triangular filter bank to obtain an updated frame frequency spectrum, and perform discrete cosine transform processing on the updated frame frequency spectrum to obtain a cepstrum coefficient;
and the traffic volume information acquisition module is used for calculating a cepstrum distance by using the cepstrum coefficient and the acquired noise cepstrum coefficient, extracting features of the processed frame signal by using short-time energy, and performing fusion processing according to the short-time energy, the cepstrum distance and the features to obtain traffic flow information in the target area.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the traffic volume detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements a traffic volume detection method according to any one of claims 1 to 7.
CN202111102368.5A 2021-09-19 2021-09-19 Traffic volume detection method and device, electronic equipment and readable storage medium Pending CN113823089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111102368.5A CN113823089A (en) 2021-09-19 2021-09-19 Traffic volume detection method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111102368.5A CN113823089A (en) 2021-09-19 2021-09-19 Traffic volume detection method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113823089A true CN113823089A (en) 2021-12-21

Family

ID=78922697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111102368.5A Pending CN113823089A (en) 2021-09-19 2021-09-19 Traffic volume detection method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113823089A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2945154A1 (en) * 2001-02-02 2015-11-18 Motorola Mobility LLC Method and apparatus for speech reconstruction in a distributed speech recognition system
CN107610715A (en) * 2017-10-10 2018-01-19 昆明理工大学 A kind of similarity calculating method based on muli-sounds feature
CN109147818A (en) * 2018-10-30 2019-01-04 Oppo广东移动通信有限公司 Acoustic feature extracting method, device, storage medium and terminal device
CN111261189A (en) * 2020-04-02 2020-06-09 中国科学院上海微系统与信息技术研究所 Vehicle sound signal feature extraction method
CN111625763A (en) * 2020-05-27 2020-09-04 郑州航空工业管理学院 Operation risk prediction method and prediction system based on mathematical model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马庆禄 等: "基于行车声音端点检测的交通量统计" (Traffic volume statistics based on endpoint detection of driving sounds), 《科学技术与工程》 (Science Technology and Engineering) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115116230A (en) * 2022-07-26 2022-09-27 浪潮卓数大数据产业发展有限公司 Traffic environment monitoring method, equipment and medium

Similar Documents

Publication Publication Date Title
CN109859772B (en) Emotion recognition method, emotion recognition device and computer-readable storage medium
WO2022116420A1 (en) Speech event detection method and apparatus, electronic device, and computer storage medium
CN112397047A (en) Speech synthesis method, device, electronic equipment and readable storage medium
CN108960520A (en) A kind of Methods of electric load forecasting, system, computer equipment, medium
CN112951203B (en) Speech synthesis method, device, electronic equipment and storage medium
CN104126200A (en) Acoustic processing unit
CN114333881B (en) Audio transmission noise reduction method, device and medium based on environment self-adaptation
CN112509554A (en) Speech synthesis method, speech synthesis device, electronic equipment and storage medium
CN113903363A (en) Violation detection method, device, equipment and medium based on artificial intelligence
CN113823089A (en) Traffic volume detection method and device, electronic equipment and readable storage medium
CN113421584B (en) Audio noise reduction method, device, computer equipment and storage medium
CN112489628B (en) Voice data selection method and device, electronic equipment and storage medium
CN114863945A (en) Text-based voice changing method and device, electronic equipment and storage medium
CN113793620A (en) Voice noise reduction method, device and equipment based on scene classification and storage medium
CN113869599A (en) Fish epidemic disease development prediction method, system, equipment and medium
CN116564322A (en) Voice conversion method, device, equipment and storage medium
CN113555026B (en) Voice conversion method, device, electronic equipment and medium
CN116543739A (en) EMD-based power equipment noise control method
CN116542783A (en) Risk assessment method, device, equipment and storage medium based on artificial intelligence
CN111933154A (en) Method and device for identifying counterfeit voice and computer readable storage medium
WO2023029960A1 (en) Voice noise reduction model training method, voice scoring method, apparatus, device, storage medium and program product
CN115171660A (en) Voiceprint information processing method and device, electronic equipment and storage medium
CN114842880A (en) Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium
CN113221990A (en) Information input method and device and related equipment
CN113808577A (en) Intelligent extraction method and device of voice abstract, electronic equipment and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211221)