CN113156939A - Unmanned sound wave and light sense coordinated detection method and system

Info

Publication number: CN113156939A (application CN202110222979.7A)
Granted publication: CN113156939B
Authority: CN (China)
Prior art keywords: flow, vehicle, sound wave, sound, signal
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventor: 张鹏 (Zhang Peng)
Current Assignee: Individual
Original Assignee: Individual
Application filed by: Individual

Classifications

    • G - PHYSICS
      • G01 - MEASURING; TESTING
        • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
          • G01S5/18 - Position-fixing by co-ordinating two or more direction or position line determinations, or two or more distance determinations, using ultrasonic, sonic, or infrasonic waves
            • G01S5/20 - Position of source determined by a plurality of spaced direction-finders
          • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
            • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
      • G05 - CONTROLLING; REGULATING
        • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
          • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
            • G05D1/02 - Control of position or course in two dimensions
              • G05D1/021 - specially adapted to land vehicles
                • G05D1/0212 - with means for defining a desired trajectory
                  • G05D1/0214 - in accordance with safety or protection criteria, e.g. avoiding hazardous areas
                  • G05D1/0221 - involving a learning process
                  • G05D1/0223 - involving speed control of the vehicle
                • G05D1/0231 - using optical position detecting means
                  • G05D1/0234 - using optical markers or beacons
                    • G05D1/0236 - in combination with a laser
                  • G05D1/0238 - using obstacle or wall sensors
                    • G05D1/024 - in combination with a laser
                  • G05D1/0246 - using a video camera in combination with image processing means
                    • G05D1/0253 - extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
                • G05D1/0255 - using acoustic signals, e.g. ultrasonic signals
                • G05D1/0259 - using magnetic or electromagnetic means
                • G05D1/0276 - using signals provided by a source external to the vehicle
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N20/00 - Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an unmanned sound wave and light sensation coordinated detection method and system, comprising the following steps: 1) classifying acoustic signals into multiple sine waves based on the Fourier transform, with the frequency and amplitude of the fundamental wave or harmonics of an acoustic signal serving as its identifier; 2) sampling, identifying and tracking acoustic signal characteristics; combining the Doppler effect, training the correspondence between a vehicle's acoustic signal and its light-sensing signal by machine learning; learning the acoustic characteristics of a vehicle with a specific light-sensing signal through a hidden Markov learning algorithm; calculating and verifying the sound source direction based on an acoustic coordinate calculation model; and, in special road scenes such as fog, predicting the vehicle type and distance with a logistic regression algorithm and the hidden Markov algorithm, respectively; 3) designing traffic ultrasonic signals, including intersection traffic-guidance ultrasonic signals synchronized with the traffic lights and vehicle-running ultrasonic signals synchronized with the vehicle's signal lamps; 4) designing a sound wave and light sensation coordination mechanism that guides operations such as going straight and overtaking by navigating on the acoustic characteristics of surrounding vehicles; 5) designing the deployment of vehicle sound pickups and an integrated sound wave and light sensation rear-view mirror (reflector).

Description

Unmanned sound wave and light sense coordinated detection method and system
Technical Field
The invention relates to an unmanned sound wave and light sensation coordinated detection method and system that can accurately guide an unmanned vehicle through a variety of complex scenes. It applies to the fields of intelligent transportation, unmanned driving and advanced driver assistance; to the design of intersection signal lamp signals; to the design of unmanned vehicles and their rear-view mirrors; and to the design of unmanned-driving computer software.
Background
Existing unmanned detection methods and systems mainly rely on lidar, electromagnetic-wave radar, ultrasonic radar and video cameras for active and passive detection of surrounding obstacles, and each detection mode is limited in its application scenes. Lidar is affected in poor-visibility conditions such as fog, sand and dust; camera video is likewise affected at night or in blurred light; the detection range of millimeter-wave (electromagnetic-wave) radar is small, and wide-range scanning lengthens the detection period and increases the processing load; ultrasonic radar, limited by its short detection distance, is currently applied mainly to parking detection.
Meanwhile, when an existing unmanned vehicle passes a traffic-light intersection, it depends mainly on recognizing the color of the signal lamp; the recognition accuracy is easily affected by lighting, the method adds computer image-processing load, and its reliability is poor.
In addition, in practical road use, unmanned vehicles lack perception of the noise and horn warnings of surrounding vehicles, and recognizing signal lamp warnings such as overtaking and braking of surrounding vehicles through camera video is also easily obstructed by line of sight.
Finally, existing vehicle rear-view mirrors still follow the design of human-driven vehicles; the function and structure of a single glass panel do not suit an unmanned vehicle, and the many sensors an unmanned vehicle requires are difficult to install in suitable positions.
In general, existing unmanned detection methods and systems lack passive perception of the sound waves of surrounding vehicles, lack broadcast communication from signal lamps to unmanned vehicles, and lack sound-wave broadcast communication between unmanned vehicles; they lack coordinated detection mechanisms between sound waves and lidar, between sound waves and video cameras, and between sound waves and electromagnetic-wave radar; and they lack a mirror design dedicated to unmanned vehicles.
Disclosure of Invention
To solve the problems of existing unmanned detection methods and systems, the invention provides an unmanned sound wave and light sensation coordinated detection method and system. Several sound pickups are arranged at the front, rear, top and bottom of the unmanned vehicle, grouped in threes, and a sound-wave direction recognition algorithm accurately determines the direction coordinates of a sound source. A machine learning algorithm trains the correspondence between vehicles' acoustic signals and light-sensing signals; when lidar or camera video is limited by the scene, the types and distances of surrounding vehicles are predicted from the frequency and intensity of their acoustic signals. When the light-sensing signal is clear, acoustic detection and light-sensing detection (including lidar, camera video and electromagnetic-wave radar) are coordinated according to a defined mechanism flow, and acoustic detection replaces part of the light-sensing detection tasks, reducing the light-sensing workload or compensating for its shortcomings. The intersection signal lamp system broadcasts intersection traffic-guidance ultrasonic signals, which unmanned vehicles at the intersection receive and interpret according to a decoding mechanism. Unmanned vehicles send and receive vehicle-running ultrasonic signals to one another, synchronized with the vehicles' signal lamps, and whistle ultrasonic signals replace horn sounds. An integrated sound wave and light sensation rear-view mirror is designed, consisting of a base, a mirror glass panel, a sound pickup and a light-sensing device. The unmanned sound wave and light sensation coordinated detection method and system specifically comprise the following steps:
First, classification of acoustic signals
Acoustic signals are processed and classified into multiple sine waves, with frequency and amplitude serving as each sine wave's identifier. One acoustic signal decomposes into a single fundamental and several harmonics; one single-sound-source signal decomposes into several acoustic signals; and one multi-source mixed signal decomposes into several single-source signals. Conversely, a single fundamental and several harmonics combine into one acoustic signal, several acoustic signals combine into one single-source signal, and several single-source signals combine into one multi-source mixed signal.
The sound wave signal is Fourier transformed, namely:
Figure BSA0000234708330000021
After the Fourier transform the acoustic signal consists of a fundamental and harmonics, and the fundamentals or harmonics of different acoustic signals have different frequencies and amplitudes; that is, they differ in frequency, in amplitude, or in both.
The acoustic signals are classified by time $t_i$ ($i = 1, 2, 3, \dots$). For example, the acoustic signal at time $t_1$ is denoted $B_{FA}H_{fa}$, where $F$ and $A$ index the fundamental's frequency and amplitude ($F = 1, 2, 3, \dots$; $A = 1, 2, 3, \dots$) and $f$ and $a$ index the harmonics' frequencies and amplitudes ($f = 1, 2, 3, \dots$; $a = 1, 2, 3, \dots$). The fundamentals are first classified by frequency and amplitude, e.g. $\{B_{11}H_{fa}, B_{12}H_{fa}, B_{21}H_{fa}, \dots\}$; the number of fundamental signals is the number of distinct $(F, A)$ pairs. On the basis of the classified fundamentals, the signals are further subdivided by harmonic frequency and amplitude, e.g. $\{B_{11}H_{11}, B_{11}H_{12}, B_{11}H_{21}, \dots\}$; the number of harmonics of the acoustic signal containing the $i$-th fundamental is $H_i$. The total number of acoustic signals with distinct fundamentals or harmonics is

$N = \sum_i H_i$

where $B_i$ is the $i$-th fundamental and $H_i$ is the number of harmonics of the acoustic signal containing $B_i$.
The acoustic frequency-domain and time-domain signals convert into each other, and identification and tracking must combine the time-domain signal. In the following section (Second, identification and tracking of acoustic signals), a single-sound-source signal must be sampled to identify the vehicle type: the running noise or horn sound of one vehicle is a single-sound-source signal, and sampling just one of its acoustic signals suffices to identify and track the direction of the source vehicle. In the following section (Third, coordinate calculation of the sound source), only one acoustic signal needs to be sampled to identify and track its direction.
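As an illustration of this first step, the sketch below decomposes a sampled acoustic signal into its dominant sinusoidal components (fundamental plus harmonics) with an FFT. It is a minimal numpy example under assumed parameters, not the patent's implementation; the peak count and the test frequencies are illustrative.

```python
import numpy as np

def dominant_components(signal, sample_rate, n_peaks=5):
    """Decompose a sampled acoustic signal into its strongest sinusoidal
    components (fundamental + harmonics) via an FFT.
    Returns (frequency_hz, amplitude) pairs sorted by frequency."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    amps = np.abs(spectrum) * 2.0 / len(signal)   # single-sided amplitude
    strongest = np.argsort(amps)[::-1][:n_peaks]  # indices of largest peaks
    return sorted(zip(freqs[strongest], amps[strongest]))

# Example: a 100 Hz fundamental with a 300 Hz harmonic, sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
x = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.4 * np.sin(2 * np.pi * 300 * t)
print(dominant_components(x, fs, n_peaks=2))  # approx. [(100.0, 1.0), (300.0, 0.4)]
```

The recovered (frequency, amplitude) pairs are exactly the $B_{FA}H_{fa}$ identifiers the classification above works with.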
Second, identification and tracking of acoustic signals
1. Machine learning method
While running, the unmanned vehicle collects the acoustic signals and light-sensing (video and lidar) signals of surrounding vehicles. Through the light-sensing signals it measures the distance (relative to itself), the shape (large truck, medium truck, light truck, passenger car, medium passenger vehicle, large passenger vehicle) and the use (conventional vehicle, fire truck, ambulance, police car) of a source vehicle; it analyzes the frequency and amplitude of the source vehicle's acoustic signal and the sound type (running noise, horn, alarm); and it stores the correspondences between the source vehicle's distance and shape and the acoustic signal's frequency and amplitude in a learning-type sample library. The acoustic characteristics of vehicle power modes are entered manually into an import-type sample library. The learning-type sample library is associated with the import-type sample library and classification correspondences are established. For example, the acoustic (noise) frequency ranges of the three power modes, diesel, gasoline and electric, differ markedly; at the same distance, the noise amplitudes of vehicles with different power modes and shapes differ markedly; the horn frequencies of vehicles with different shapes differ markedly, as do their horn amplitudes at the same distance; and the alarm sounds emitted by fire trucks, ambulances and police cars differ markedly. "Markedly" here is quantified as a probability. The correspondence between a vehicle's sound waves and light sensing is shown in Table 1.
TABLE 1. Vehicle acoustic signal and light-sensing signal correspondence
[Table provided as images in the source; its columns include power mode, appearance, frequency range, frequency probability, amplitude range, distance, and amplitude probability.]
In Table 1, the power mode subsumes the appearance index; power mode and appearance combine into entries such as electric large passenger vehicle, electric light truck, diesel large passenger vehicle and diesel light truck. The power-mode index is collected from the license-plate color in the light-sensing signal: a green plate indicates an "electric" vehicle, a blue plate a "diesel" or "gasoline" vehicle. Once a blue plate is identified, the vehicle's appearance index is further acquired from the light-sensing signal; large trucks, medium trucks, light trucks, large passenger vehicles and medium passenger vehicles are assigned the diesel power mode, and passenger cars the gasoline power mode.
In Table 1, "frequency range" corresponds to the combination of "appearance" and "power mode", and "frequency probability" is the probability of that correspondence, i.e. its accuracy; "amplitude probability" is the probability of the correspondence between "amplitude range" and "distance", i.e. its accuracy.
In Table 1, the distance index is obtained by light-sensing detection (e.g. lidar) during the machine learning that establishes the correspondence between a vehicle's light-sensing and acoustic signals; in sound-source prediction, the distance is predicted from the frequency and intensity of the acoustic signal.
2. Hidden Markov learning algorithm
A hidden Markov learning algorithm trains the correspondence between vehicle light-sensing signals and acoustic signals, learning the sound intensities of different vehicles' acoustic frequency signals at different distances. The training data comprise either observation sequences with their corresponding state sequences, or observation sequences alone, handled by supervised and unsupervised learning algorithms respectively. The observation sequence and the state sequence are the vehicle's light-sensing signal and acoustic signal, respectively.
Supervised learning algorithm:

Suppose the training data comprise $S$ observation sequences of the same length with their corresponding state sequences $\{(O_1, I_1), (O_2, I_2), \dots, (O_S, I_S)\}$. The parameters of the hidden Markov model can then be estimated by maximum likelihood estimation, as follows:

(1) Estimation of the transition probability $a_{ij}$

Let $A_{ij}$ be the frequency with which the state is $i$ at time $t$ and $j$ at time $t+1$ in the samples. The estimate of the state transition probability $a_{ij}$ is

$\hat{a}_{ij} = \dfrac{A_{ij}}{\sum_{j=1}^{N} A_{ij}} \quad (2.1)$

(2) Estimation of the observation probability $b_j(k)$

Let $B_{jk}$ be the frequency with which the state is $j$ and the observation is $k$ in the samples. The estimate of the probability $b_j(k)$ of observing $k$ in state $j$ is

$\hat{b}_j(k) = \dfrac{B_{jk}}{\sum_{k=1}^{M} B_{jk}} \quad (2.2)$

(3) Estimation of the initial-state probability $\pi_i$: the estimate $\hat{\pi}_i$ is the frequency of the initial state $q_i$ among the $S$ samples.
Supervised learning requires labeled training data; in the acoustic-detection application for unmanned vehicles, sufficient information on vehicles' acoustic frequency, amplitude, distance and vehicle type must be collected in combination with the light-sensing signals.
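A minimal sketch of the supervised counting estimates (2.1) and (2.2) follows. It assumes integer-coded state sequences (e.g. discretized distances obtained from the light-sensing signal) and observation sequences (e.g. quantized acoustic amplitude levels); the encoding scheme is an illustrative assumption, not the patent's.

```python
import numpy as np

def supervised_hmm_mle(state_seqs, obs_seqs, n_states, n_obs):
    """Maximum-likelihood estimates (2.1)-(2.2) by counting:
    A[i, j] - transition probability from state i to state j,
    B[j, k] - probability of observing symbol k in state j,
    pi[i]   - initial-state probability.
    Assumes every state occurs at least once in the training data."""
    A = np.zeros((n_states, n_states))
    B = np.zeros((n_states, n_obs))
    pi = np.zeros(n_states)
    for states, obs in zip(state_seqs, obs_seqs):
        pi[states[0]] += 1
        for t in range(len(states) - 1):
            A[states[t], states[t + 1]] += 1   # count transitions A_ij
        for s, o in zip(states, obs):
            B[s, o] += 1                       # count emissions B_jk
    A /= A.sum(axis=1, keepdims=True)          # eq. (2.1)
    B /= B.sum(axis=1, keepdims=True)          # eq. (2.2)
    pi /= pi.sum()                             # initial-state frequencies
    return A, B, pi
```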
Unsupervised learning, Baum-Welch algorithm:

Suppose the training data contain only $S$ observation sequences of length $T$, $\{O_1, O_2, \dots, O_S\}$, with no corresponding state sequences; the goal is to learn the parameters of the hidden Markov model $\lambda = (A, B, \pi)$. Treating the observation sequence data as observed data $O$ and the state sequence data as unobservable hidden data $I$, the hidden Markov model is in fact a probability model with hidden variables:

$P(O \mid \lambda) = \sum_I P(O \mid I, \lambda)\, P(I \mid \lambda) \quad (2.3)$

Its parameter learning can be realized by the EM algorithm.

(1) Determine the log-likelihood function of the complete data

The observed data are $O = (o_1, o_2, \dots, o_T)$, the hidden data are $I = (i_1, i_2, \dots, i_T)$, and the complete data are $(O, I) = (o_1, o_2, \dots, o_T, i_1, i_2, \dots, i_T)$. The log-likelihood function of the complete data is $\log P(O, I \mid \lambda)$.
(2) E step of the EM algorithm: the Q function $Q(\lambda, \lambda')$

$Q(\lambda, \lambda') = \sum_I \log P(O, I \mid \lambda)\, P(O, I \mid \lambda') \quad (2.4)$

where $\lambda'$ is the current estimate of the hidden Markov model parameters and $\lambda$ is the hidden Markov model parameter to be maximized. Since

$P(O, I \mid \lambda) = \pi_{i_1} b_{i_1}(o_1)\, a_{i_1 i_2} b_{i_2}(o_2) \cdots a_{i_{T-1} i_T} b_{i_T}(o_T)$

the function $Q(\lambda, \lambda')$ can be written as:

$Q(\lambda, \lambda') = \sum_I \log \pi_{i_1} P(O, I \mid \lambda') + \sum_I \left( \sum_{t=1}^{T-1} \log a_{i_t i_{t+1}} \right) P(O, I \mid \lambda') + \sum_I \left( \sum_{t=1}^{T} \log b_{i_t}(o_t) \right) P(O, I \mid \lambda') \quad (2.5)$

The summations are performed over the total sequence length $T$ of all the training data.
(3) M step of the EM algorithm: maximize the Q function $Q(\lambda, \lambda')$ to obtain the model parameters $A$, $B$, $\pi$

Since the parameters to be maximized appear separately in the three terms of equation (2.5), each term need only be maximized on its own.

① The first term of (2.5) can be written as:

$\sum_I \log \pi_{i_1} P(O, I \mid \lambda') = \sum_{i=1}^{N} \log \pi_i\, P(O, i_1 = i \mid \lambda') \quad (2.6)$

with $\pi_i$ satisfying the constraint $\sum_{i=1}^{N} \pi_i = 1$. Using the Lagrange multiplier method, write the Lagrangian:

$\sum_{i=1}^{N} \log \pi_i\, P(O, i_1 = i \mid \lambda') + \gamma \left( \sum_{i=1}^{N} \pi_i - 1 \right)$

Take the partial derivative and set it to 0:

$\frac{\partial}{\partial \pi_i} \left[ \sum_{i=1}^{N} \log \pi_i\, P(O, i_1 = i \mid \lambda') + \gamma \left( \sum_{i=1}^{N} \pi_i - 1 \right) \right] = 0$

which gives

$P(O, i_1 = i \mid \lambda') + \gamma \pi_i = 0$

Summing over $i$ gives $\gamma = -P(O \mid \lambda')$. Substituting back into (2.6):

$\pi_i = \dfrac{P(O, i_1 = i \mid \lambda')}{P(O \mid \lambda')} \quad (2.7)$
② The second term of (2.5) can be written as

$\sum_I \left( \sum_{t=1}^{T-1} \log a_{i_t i_{t+1}} \right) P(O, I \mid \lambda') = \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{t=1}^{T-1} \log a_{ij}\, P(O, i_t = i, i_{t+1} = j \mid \lambda')$

As with the first term, applying the constraint $\sum_{j=1}^{N} a_{ij} = 1$ and the Lagrange multiplier method yields

$a_{ij} = \dfrac{\sum_{t=1}^{T-1} P(O, i_t = i, i_{t+1} = j \mid \lambda')}{\sum_{t=1}^{T-1} P(O, i_t = i \mid \lambda')} \quad (2.8)$
③ The third term of (2.5) is

$\sum_I \left( \sum_{t=1}^{T} \log b_{i_t}(o_t) \right) P(O, I \mid \lambda') = \sum_{j=1}^{N} \sum_{t=1}^{T} \log b_j(o_t)\, P(O, i_t = j \mid \lambda')$

Again using the Lagrange multiplier method with the constraint $\sum_{k=1}^{M} b_j(k) = 1$, and noting that the partial derivative of $b_j(o_t)$ with respect to $b_j(k)$ is nonzero only when $o_t = v_k$, which is denoted by $I(o_t = v_k)$, we obtain

$b_j(k) = \dfrac{\sum_{t=1}^{T} P(O, i_t = j \mid \lambda')\, I(o_t = v_k)}{\sum_{t=1}^{T} P(O, i_t = j \mid \lambda')} \quad (2.9)$
Estimation formulas of the Baum-Welch model parameters:

Writing the probabilities in (2.7), (2.8) and (2.9) in terms of $\gamma_t(i)$ and $\xi_t(i, j)$, the corresponding formulas become:

$a_{ij} = \dfrac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)} \quad (2.10)$

$b_j(k) = \dfrac{\sum_{t=1,\, o_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)} \quad (2.11)$

$\pi_i = \gamma_1(i) \quad (2.12)$

where $\gamma_t(i)$ and $\xi_t(i, j)$ are given by

$\gamma_t(i) = \dfrac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j)} \quad (2.13)$

$\xi_t(i, j) = \dfrac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)} \quad (2.14)$

Here $\alpha_t(i)$ is the forward probability and $\beta_t(i)$ the backward probability, defined respectively as

$\alpha_t(i) = P(o_1, o_2, \dots, o_t, i_t = q_i \mid \lambda) \quad (2.15)$

$\beta_t(i) = P(o_{t+1}, o_{t+2}, \dots, o_T \mid i_t = q_i, \lambda) \quad (2.16)$

and $P(O \mid \lambda)$ is the observation sequence probability.

The input is the observed data $O = (o_1, o_2, \dots, o_T)$ and the output is the hidden Markov model parameters; initialization followed by recursion yields the model parameters $\lambda^{(n+1)} = (A^{(n+1)}, B^{(n+1)}, \pi^{(n+1)})$.
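The sketch below implements the forward probability (2.15), the backward probability (2.16) and the Baum-Welch re-estimation (2.10) to (2.12) in plain numpy for a single observation sequence. It is a didactic rendering of the textbook algorithm described above, not the patent's code, and it omits the log-space scaling a real implementation would need for long sequences.

```python
import numpy as np

def forward(A, B, pi, obs):
    """alpha[t, i] = P(o_1 .. o_t, i_t = q_i | lambda), eq. (2.15)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    """beta[t, i] = P(o_{t+1} .. o_T | i_t = q_i, lambda), eq. (2.16)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch(obs, n_states, n_obs, n_iter=50, seed=0):
    """EM re-estimation (2.10)-(2.12) for one integer-coded observation
    sequence. No scaling is applied, so this suits short sequences only."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_obs));    B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(n_iter):
        alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
        gamma = alpha * beta                                   # eq. (2.13)
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = (alpha[:-1, :, None] * A[None, :, :] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :])        # eq. (2.14)
        xi /= xi.sum(axis=(1, 2), keepdims=True)
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]   # eq. (2.10)
        for k in range(n_obs):                                 # eq. (2.11)
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
        pi = gamma[0]                                          # eq. (2.12)
    return A, B, pi
```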
3. Sound source prediction

In time order, and based on the vehicle's own acoustic signal, when different acoustic signals appear, the acoustic signals $b$ other than the vehicle's own sound $a$ are screened out of the many acoustic signals. Since $b$ is in most cases a set of several acoustic signals, it is represented by the array $b[i] = \{B_{11}H_{fa}, B_{12}H_{fa}, B_{21}H_{fa}, \dots;\; B_{11}H_{11}, B_{11}H_{12}, B_{11}H_{21}, \dots\}$. The sound $a$ is the noise the vehicle emits when operating normally; the fundamental and harmonic frequencies and amplitudes of $a$ are obtained by machine learning, e.g. a regression algorithm computes the function value $Y_a$ as a function of vehicle speed, acceleration, gear and load, where $Y_a$ consists of the fundamental frequency, fundamental amplitude, harmonic frequencies and harmonic amplitudes.

The sampling range of the acoustic signal $b$ covers vehicle horns, vehicle running noise, alarms and collision sounds. When an acoustic signal in $b[i]$ satisfies $B_{FA}H_{fa} > B_{FA}H_{fa}'$, the acoustic azimuth is calculated. The threshold $B_{FA}H_{fa}'$ is determined from pre-recorded acoustic samples and machine learning; since an acoustic signal's amplitude attenuates gradually with distance, the propagation distance and amplitude of the same acoustic signal in the same medium are in correspondence. A given vehicle type's running speed and engine speed jointly determine the frequency and amplitude range of its own noise. Based on machine learning, the acoustic frequency, distance to the vehicle, speed and acoustic amplitude of vehicles of different shapes are sampled as the training set; the vehicle shape (large truck, medium truck, light truck, passenger car, medium passenger vehicle, large passenger vehicle), power mode (diesel, gasoline, electric), use (conventional vehicle, fire truck, ambulance, police car), acoustic frequency, distance to the vehicle, speed and alarm are the input values and the acoustic amplitude is the output value. The Doppler effect is combined to capture how the received frequency and amplitude vary for vehicles approaching head-on or travelling in the same direction at different relative speeds $v$, where $v_0$ is the vehicle's own speed and $v_1$ the source vehicle's speed: head-on, $v = v_0 + v_1$; same direction, $v = v_0 - v_1$ or $v = v_1 - v_0$. The head-on or same-direction flag and the speed $v$ are input values and the frequency is the output value. Finally, the distance between the sound source and the vehicle, the source's vehicle type, and whether it is oncoming or travelling in the same direction are predicted from the acoustic signals collected in real time.
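A small helper showing the Doppler relationship the training set relies on. The classical moving-source/moving-observer formula is standard acoustics; the pairing with the text's closing-speed convention v = v0 ± v1 is our reading of the paragraph above, and the 340 m/s speed of sound is a nominal assumption.

```python
def relative_speed(v0, v1, same_direction):
    """Closing speed as defined in the text: v = v0 + v1 for opposing
    traffic, v = v0 - v1 (or v1 - v0) for same-direction traffic."""
    return abs(v0 - v1) if same_direction else v0 + v1

def observed_frequency(f_source, v_observer, v_source, c=340.0):
    """Classical acoustic Doppler shift: v_observer > 0 when the ego vehicle
    moves toward the source, v_source > 0 when the source moves toward the
    ego vehicle; c is the speed of sound in air (m/s)."""
    return f_source * (c + v_observer) / (c - v_source)

# Oncoming vehicle: ego at 15 m/s, source at 20 m/s, 120 Hz engine note.
print(relative_speed(15.0, 20.0, same_direction=False))   # 35.0 m/s closing
print(round(observed_frequency(120.0, 15.0, 20.0), 1))    # ~133.1 Hz heard
```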
When the vehicle itself produces a continuous abnormal noise $a'$, i.e. when the surrounding environment or the surrounding vehicles change, or there is no surrounding vehicle at all, and yet the noise remains constant, the acoustic direction calculation process is started: the front, rear, left, right, upper and lower sound pickups compute the sound direction, and the intersection point of the directions from the several pickups is the specific sound source point. If that intersection point lies at the vehicle's own position, the sound is judged to be a fault sound.
(1) Sound wave signal identification based on machine learning algorithm
First, the type (model) of vehicle is judged (predicted)

A multinomial logistic regression algorithm divides the vehicle type $Y$ by power mode (diesel, gasoline, electric), appearance (large truck, medium truck, light truck, passenger car, medium passenger vehicle, large passenger vehicle) and use (conventional vehicle, fire truck, ambulance, police car); any combination of power mode, appearance and use is a value of $Y$. $Y$ is the output value, with vehicle types represented by the Arabic numerals 1, 2, 3, ..., so the discrete random variable $Y$ takes values in $\{1, 2, 3, \dots, K\}$. $x$ is the input value representing the acoustic characteristics, i.e. the frequencies of the fundamental and the harmonics (a time-series combined waveform comprising several different frequencies). The multinomial regression model is:

$P(Y = k \mid x) = \dfrac{\exp(w_k \cdot x)}{1 + \sum_{k=1}^{K-1} \exp(w_k \cdot x)}, \quad k = 1, 2, \dots, K-1$

$P(Y = K \mid x) = \dfrac{1}{1 + \sum_{k=1}^{K-1} \exp(w_k \cdot x)}$

In the above, $x \in \mathbb{R}^{n+1}$ and $w_k \in \mathbb{R}^{n+1}$; $w_k$ is a parameter, also called the weight vector; $w_k \cdot x$ is the inner product of $w_k$ and $x$; and $P(Y = k \mid x)$ and $P(Y = K \mid x)$ are probability values.
Based on the (effectively unlimited) acoustic signals $B_{FA}H_{fa}$ in the training set, all sample sounds are classified into combinations of power mode (diesel, gasoline, electric), shape (large truck, medium truck, light truck, passenger car, medium passenger vehicle, large passenger vehicle), use (conventional vehicle, fire truck, ambulance, police car) and alarm, and the probability of each sound corresponding to each vehicle type is computed with the logistic regression model to obtain the maximum probability.

The model predicts from the real-time acoustic signal $B_{FA}H_{fa}$ received as input, and outputs the vehicle type with the maximum probability value from the previous step.
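A hedged sketch of this classification step using scikit-learn, whose LogisticRegression fits a multinomial model for multiclass labels; the feature choice (fundamental and strongest-harmonic frequency) and the toy labels are illustrative assumptions, not the patent's training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: each row is an acoustic feature vector x (here just
# fundamental and strongest-harmonic frequency in Hz); y is the integer
# vehicle-type label (1, 2, 3, ... as in the text).
X = np.array([[ 95, 285], [100, 300], [210, 630], [205, 615],
              [400, 800], [390, 780]], dtype=float)
y = np.array([1, 1, 2, 2, 3, 3])   # hypothetical type codes

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)

x_new = np.array([[98, 292]])
print(clf.predict(x_new))          # most probable vehicle type
print(clf.predict_proba(x_new))    # P(Y = k | x) for each class k
```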
Second, judging the relative distance between the sound source and the vehicle

On the basis of the vehicle-type judgment, the amplitude of the acoustic signal is taken as the input value and the distance as the output value. The value set of the distance $Y$ is $\{2, 5, 10, 20, 30, 50, 100, 200, \dots\}$, meaning a distance from the vehicle of 2, 5, 10, 20, 30, 50, 100 or 200 meters, and so on.
(2) sound wave signal distance identification based on hidden Markov model approximate algorithm
The hidden Markov approximation algorithm is used; the idea is to select, at each time $t$, the individually most likely state $i_t^*$ at that time, thereby obtaining a state sequence $I^* = (i_1^*, i_2^*, \dots, i_T^*)$, which is taken as the prediction result. Given the hidden Markov model $\lambda$ and the observation sequence $O$, the probability of being in state $q_i$ at time $t$ is

$\gamma_t(i) = \dfrac{\alpha_t(i)\, \beta_t(i)}{P(O \mid \lambda)} = \dfrac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j)}$

The most probable state at each time $t$ is

$i_t^* = \arg\max_{1 \le i \le N} \left[ \gamma_t(i) \right], \quad t = 1, 2, \dots, T$

yielding the state sequence $I^* = (i_1^*, i_2^*, \dots, i_T^*)$.

$I^*$ is the sequence of acoustic distances and $q_i$ is a particular distance value; the state sequence $I^*$ is the predicted distance, and the observation sequence $O$ is the amplitude sequence corresponding to the acoustic frequency.
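Continuing the Baum-Welch sketch above, the approximation algorithm reduces to an argmax over gamma_t(i). Here the states are taken to be the discretized distance values and the observations quantized amplitudes, per the text; the forward() and backward() functions are the ones defined in the earlier sketch.

```python
import numpy as np

DISTANCES = np.array([2, 5, 10, 20, 30, 50, 100, 200])  # states q_i, meters

def predict_distance_sequence(A, B, pi, obs):
    """Approximation algorithm: at every time t choose the individually most
    probable state i_t* = argmax_i gamma_t(i). States index DISTANCES and
    obs is the quantized amplitude sequence of one acoustic frequency."""
    alpha = forward(A, B, pi, obs)              # from the Baum-Welch sketch
    beta = backward(A, B, obs)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)   # gamma_t(i), eq. (2.13)
    return DISTANCES[gamma.argmax(axis=1)]      # predicted distance per t
```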
On this basis, vehicle types can be classified in more ways according to the characteristics of the acoustic signals: by diesel, gasoline and electric vehicles; further by vehicle use and size; even by vehicle brand and factory model based on the acoustic signal, the prediction method being similar. Alternatively, the vehicle type is not classified at all, and classification proceeds directly on the acoustic signal characteristics, with frequency and amplitude as input values and the distance (from the vehicle) as the output value.
Third, coordinate calculation of the sound source
1. Acoustic wave coordinate calculation model
The letter o denotes the target sound source point, i.e. point o; A is the point where sound pickup A is located; B the point where sound pickup B is located; C the point where sound pickup C is located; and D the point where sound pickup D is located.

Let the distance between points A and B be m, and the distance between points A and C be n. Let point A have coordinates (0, 0, 0), point B (m, 0, 0), point C (0, n, 0) and point o (x, y, z), where x, y and z are the horizontal (left-right), vertical (up-down) and distance (depth) coordinates of the sound source, respectively.
Let the distances from point o to points A, B and C be $|oA| = a$, $|oB| = b$ and $|oC| = c$. Then:

$a^2 = x^2 + y^2 + z^2 \quad (3.1)$

$b^2 = (x - m)^2 + y^2 + z^2 \quad (3.2)$

$c^2 = x^2 + (y - n)^2 + z^2 \quad (3.3)$

Subtracting (3.2) from (3.1) gives $a^2 - b^2 = 2mx - m^2$, i.e. $m^2 + a^2 - b^2 = 2mx$, so

$x = \dfrac{m^2 + a^2 - b^2}{2m} \quad (3.4)$

Subtracting (3.3) from (3.1) gives $a^2 - c^2 = 2ny - n^2$, i.e. $n^2 + a^2 - c^2 = 2ny$, so

$y = \dfrac{n^2 + a^2 - c^2}{2n} \quad (3.5)$
Assume a takes some definite value. Let the time difference between pickups A and B receiving the same sound wave be $T_{ab}$, and let the distance the wave propagates in time $T_{ab}$ be $L_{ab}$; then the difference between a and b is $a - b = L_{ab}$, i.e. $b = a - L_{ab}$. With the propagation speed of sound in air $V_a$ known, $L_{ab} = V_a \times T_{ab}$. Neglecting the influence of wind speed and temperature changes on the sound wave, the values of x and y in this coordinate system in a stable transmission medium are unaffected by the value of a; in practical application, however, the setting of the value of a is determined by regression.
Similarly, the x and y coordinates of point o are calculated from the coordinates of A, B and D and the distances from o to A, B and D. To distinguish the x, y values of point o obtained by the two calculations, one from A, B, C and one from A, B, D, the coordinates calculated from A, B, C are written o(x, y) and those calculated from A, B, D are written o'(x, y). The measurement accuracy is judged by the coincidence of o(x, y) and o'(x, y), i.e. the deviation between the two sets of x, y coordinates.
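A minimal sketch of this coordinate calculation: given an assumed source distance |oA| = a (set by regression per the text) and the measured arrival-time differences, it applies b = a - L_ab and equations (3.4) and (3.5); the 340 m/s speed of sound is a nominal assumption.

```python
import math

SPEED_OF_SOUND = 340.0   # V_a in m/s, nominal value for air

def source_xyz(a, t_ab, t_ac, m, n):
    """Solve (3.1)-(3.5): pickup A at the origin, B at (m, 0, 0), C at
    (0, n, 0); a = |oA| is the assumed source distance, and t_ab, t_ac are
    the arrival-time differences of the same wavefront at B and C
    relative to A."""
    b = a - SPEED_OF_SOUND * t_ab          # |oB| = a - L_ab, L_ab = V_a * T_ab
    c = a - SPEED_OF_SOUND * t_ac          # |oC| = a - L_ac
    x = (m**2 + a**2 - b**2) / (2 * m)     # eq. (3.4)
    y = (n**2 + a**2 - c**2) / (2 * n)     # eq. (3.5)
    z2 = a**2 - x**2 - y**2                # depth from eq. (3.1)
    z = math.sqrt(z2) if z2 > 0 else 0.0
    return x, y, z
```

Running the same function with pickup D in place of C yields o'(x, y), whose deviation from o(x, y) gauges the measurement accuracy as described above.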
2. Sound source orientation verification
Based on the direction-calculation principle of the two methods above, when the time differences of the sound wave reaching the three pickups a, b and c cannot be judged accurately, i.e. the direction cannot be judged accurately, or when one wants to check whether a vehicle exists in a given direction, the direction coordinates of a hypothesized sound-emitting (source) point o can be assumed to be (x, y, z). The coordinates of the three pickups a, b and c are known, so the distances from o to a, b and c can be calculated; with the propagation speed of sound known, the times $t_1$, $t_2$, $t_3$ for the wave to reach a, b and c are obtained. By analyzing whether the signals at times $t_1$, $t_2$, $t_3$ are consistent, one judges whether the hypothesized source point is correct, further judges which source vehicle type or vehicle sound-wave range it belongs to, and finally judges the distance from the sound intensity and frequency.
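A short forward-check sketch of this verification: hypothesize a source position, predict the arrival-time differences at the known pickup positions, and compare them with the measured ones. The tolerance value is an illustrative assumption.

```python
import math

def arrival_times(source, pickups, v_sound=340.0):
    """Times t_1, t_2, t_3 for the wavefront of a hypothesized source
    o = (x, y, z) to reach each known pickup position."""
    return [math.dist(source, p) / v_sound for p in pickups]

def hypothesis_consistent(source, pickups, measured_tdoas, tol=1e-4):
    """True if the hypothesized source reproduces the measured arrival-time
    differences (taken relative to the first pickup) within tol seconds."""
    t = arrival_times(source, pickups)
    predicted = [ti - t[0] for ti in t[1:]]
    return all(abs(p - m) < tol for p, m in zip(predicted, measured_tdoas))
```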
3. Unmanned vehicle sound source orientation detection
Sound pickups are installed at the head, tail, left side, right side and top of the vehicle; each pair keeps a certain distance apart, and they are not all in the same plane. The sound source direction is judged from the times at which each pickup detects the variation of the sound's frequency, amplitude and waveform over time, and from which pickup receives the same sound signal first. Whether two receptions are the same sound signal is determined from time, frequency, waveform and amplitude, and the reception time differences of the same sound signal are detected by several pickups grouped in pairs.

First the collection range of sound signals is determined, and the sound source direction relative to the vehicle is divided into directly ahead, directly behind, left front, right front, left side, right side, left rear and right rear. Assuming the body carries ten pickups, the first reception times of the same sound signal collected by the head, tail, left and right pickups are compared, and $\max\{a_1, a_2, b_1, b_2, c_1, c_2, d_1, d_2, e_1, e_2\}$ identifies the pickup that detects the same sound signal first. When the sensitivities of the ten pickups are consistent, the front, rear, left and right of the sound source can also be judged from the amplitude of the sound signal.

Here $a_1, a_2$; $b_1, b_2$; $c_1, c_2$; $d_1, d_2$; and $e_1, e_2$ are the pickups at the head, tail, left side, right side and top of the vehicle, respectively.

If $a_1$ or $a_2$ is first, the sound source is ahead. The time difference $T_{aa}$ (or phase difference) with which $a_1$ and $a_2$ collect the same sound signal $s_i$ is then calculated. With the propagation speed of sound in air $V_{sa}$, the distance travelled in time $T_{aa}$ is $L_a = T_{aa} \times V_{sa}$; if the distance from the source to $a_1$ is $L_{a1}$, then the distance to $a_2$ is $L_{a2} = L_{a1} + L_a$, and with the distance between $a_1$ and $a_2$ being $L_{aa}$, a triangle can be drawn from the proportions of the three side lengths and the horizontal orientation of the source determined. Likewise the time difference $T_{ae}$ between $a_1$ and $e_1$ collecting the same signal $s_i$ is calculated; the distance travelled in $T_{ae}$ is $L_e = T_{ae} \times V_{sa}$, and the distance from the source to $e_1$ is $L_{e1} = L_{a1} + L_e$.
Further, the distance of the sound source is judged from the amplitude of a waveform sound signal of given frequency, and the change of the source's distance relative to the vehicle is judged in combination with the Doppler effect of the sound wave. Specific sound signals at different distances, and their Doppler-induced changes, are referenced against a sound signal sample library comprising an import-type sample library and a learning-type sample library. The import-type sample library holds pre-entered sound samples with specific meanings; the learning-type sample library is learned by combining light sensing and sound during the unmanned vehicle's road driving, with sounds and images in correspondence.
The collection range of sound (acoustic) signals mainly comprises horn sounds, vehicle running noise and intersection traffic signal (traffic light) sounds; a judgment-reference sound sample library for horn sounds, vehicle running sounds and intersection traffic signal sounds; and a sound-signal paraphrase library for the meanings of specific sound signals.
Sound sample references are selected by scene; the references for open plain roads, mountain roads and tunnels differ. Road scenes are divided into urban streets, plain roads, mountain roads and tunnels. Sound sample libraries are established separately for the different scenes, with sound signals recorded by the pickups mounted on the vehicle body; a sound signal comprises the four attributes frequency, amplitude, time and waveform, i.e. the variation of frequency, amplitude and waveform over time. Sound signals are recorded by several pickups at different positions, or by pickups at different positions combined in pairs. By varying the throttle and gear to control engine speed and vehicle speed, the changes in frequency, amplitude and waveform of the vehicle's sampled sound are tracked; if the sampled sound stays synchronized with these changes, the signal is the vehicle's own noise. When the vehicle's own noise changes while the other sound signals recorded by the vehicle's pickups remain relatively stable, the vehicle's surroundings are judged to be abnormal.
When a sound signal changes markedly, the sound direction is judged; the light-sensing system then focuses on that direction and records the light-sensing attributes of the object in the sound-emitting direction, such as time, color, shape, volume, direction, relative displacement (relative to the vehicle) and distance (relative to the vehicle). If the shape and volume match pre-entered sample characteristics (e.g. those of an automobile) and the sound-detection direction stays synchronized with the light-sensing direction (as relative displacement and distance change over time, the two directions remain the same), the light-sensed object is determined to be the sound emitter, and the light-sensing and sound signals are stored in the sample library with a correspondence established between them.

A sound-signal sign library is established: sound-signal characteristics are defined according to the traffic sound-signal rules of different countries or regions, forming unique sign characteristics from the frequency, amplitude, waveform and duration attributes of the sound signals. If a sound signal detected by a pickup matches sound samples entered in advance into the sign library (e.g. the frequency, amplitude, duration and waveform of a horn), the driving state of the unmanned vehicle is controlled in combination with the detection results of the light-sensing system.
The direction, distance and vehicle type of the vehicles around the unmanned vehicle are predicted in combination with the sound source prediction of the previous section (Second, identification and tracking of acoustic signals).
Fourth, traffic ultrasonic signals

The traffic ultrasonic signals comprise intersection traffic-guidance ultrasonic signals and vehicle-running ultrasonic signals.
1. Intersection traffic-guidance ultrasonic signal

The intersection traffic-guidance ultrasonic signal, i.e. the intersection signal lamp ultrasonic signal, is synchronized with the intersection traffic lights. Ultrasonic transmission is directional; ultrasonic transmitters installed beside the signal lamps transmit traffic-guidance ultrasonic signals toward oncoming vehicles. The signal types are: straight-ahead green light; straight-ahead red light; straight-ahead yellow light; left-turn green light; left-turn red light; left-turn yellow light; right-turn green light; right-turn red light; right-turn yellow light.
(1) Intersection signal lamp ultrasonic signal coding and decoding mechanism 1:
Set two ultrasonic frequencies $f_1$ and $f_2$ (called here the ultrasonic signal component frequencies), with $20 \times 10^3 \le f_1 \le 30 \times 10^6$, $20 \times 10^3 \le f_2 \le 30 \times 10^6$ and $f_1 \ne f_2$, in hertz (Hz).

A signal lamp's ultrasonic signal is divided into three parts, wave head, content and wave tail, each composed of a number of $f_1$ and $f_2$ bursts. Frequency $f_1$ represents binary 0 and frequency $f_2$ binary 1; a single 0 and a single 1 have the same duration in head, content and tail. The representation range R of 16 bits of 1s and 0s is $2^{16}$, i.e. R = {0000000000000000, ..., 1111111111111111}, of which 4 bits represent the wave head, 9 bits the content and 3 bits the wave tail. The decoding mechanism interprets a signal only once a complete signal lamp ultrasonic signal has been received. The four regions of the intersection are a, b, c and d, and the wave head and wave tail of each region's ultrasonic signal are unique identifiers; that is, the wave heads of regions a, b, c and d all differ, as do their wave tails (a runnable coding sketch follows the complete-signal examples below), for example:
the wave head of the ultrasonic signal of the signal lamp in the area a is 0001, and the wave tail is 001
The wave head of the ultrasonic signal of the b area signal lamp is 0010, and the wave tail is 011
The wave head of the ultrasonic signal of the signal lamp in the c area is 0100, and the wave tail is 101
The wave head of the ultrasonic signal of the d-area signal lamp is 1000, and the wave tail is 111
The content of the signal lamp ultrasonic signal is the red, green or yellow guidance for straight-ahead, left turn and right turn. The three phases, straight-ahead, left turn and right turn, are encoded together, i.e. 9 binary bits simultaneously represent left turn, straight-ahead and right turn, and the content encoding is uniform across regions a, b, c and d, for example:
green light is 110; the red light is 100; yellow light is 010
The phase (straight-ahead, left-turn, right-turn) sound waves of regions a, b, c and d are ordered: 1. left-turn wave; 2. straight-ahead wave; 3. right-turn wave. That is:
the left turn green light, the straight red light and the right turn green light are 110100110
The left turn red light, the straight green light and the right turn green light are 100110110
The left turn green light, the straight red light and the right turn red light are 110100100
The left turn red light, the straight red light and the right turn green light are 100100110
The red light for left turn, the red light for straight going and the red light for right turn are 100100100
The left-turn green light, the straight-going green light and the right-turn green light are 110110110
010010010 for the yellow light of left turn, the yellow light of straight going and the yellow light of right turn
One complete signal lamp ultrasonic signal example:
the area a has a left-turn green light, a straight-going red light and a right-turn green light of 0001110100110001
The red light at left turn, the green light at straight going and the green light at right turn in the area b are 0010100110110011
The green light of left turn, the red light of straight going and the red light of right turn in the c area are 0100110100100101
The red light for left turn, the red light for straight going and the green light for right turn in the d area are 1000100100110111
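A minimal sketch of coding mechanism 1: each binary digit physically corresponds to an ultrasonic burst at f1 (for 0) or f2 (for 1), abstracted here as a bit string. The frame layout (4-bit head, 9-bit content, 3-bit tail) and the region and light codes are exactly those listed above.

```python
# Heads/tails per region and light-state codes from coding mechanism 1.
HEADS = {"a": "0001", "b": "0010", "c": "0100", "d": "1000"}
TAILS = {"a": "001",  "b": "011",  "c": "101",  "d": "111"}
LIGHT = {"green": "110", "red": "100", "yellow": "010"}

def encode(region, left, straight, right):
    """Build the 16-bit frame: 4-bit head | 9-bit content | 3-bit tail,
    content ordered left turn, straight-ahead, right turn."""
    return (HEADS[region] + LIGHT[left] + LIGHT[straight] + LIGHT[right]
            + TAILS[region])

def decode(frame):
    """Interpret only complete frames, as the decoding mechanism requires."""
    assert len(frame) == 16, "incomplete signal - wait for a full frame"
    region = next(r for r in HEADS
                  if frame.startswith(HEADS[r]) and frame.endswith(TAILS[r]))
    rev = {v: k for k, v in LIGHT.items()}
    content = frame[4:13]
    return region, [rev[content[i:i + 3]] for i in (0, 3, 6)]

# Region a: left-turn green, straight-ahead red, right-turn green.
frame = encode("a", "green", "red", "green")
print(frame)           # 0001110100110001, matching the example above
print(decode(frame))   # ('a', ['green', 'red', 'green'])
```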
(2) Intersection signal lamp ultrasonic signal coding and decoding mechanism 2:
Similar to the method in (1) above, except that $f_3$ and $f_4$ are used as the signal component frequencies of the wave head and wave tail; that is, both head and tail are composed of $f_3$ and $f_4$, while the content is still composed of $f_1$ and $f_2$.

The component frequencies satisfy $20 \times 10^3 \le f_3 \le 30 \times 10^6$, $20 \times 10^3 \le f_4 \le 30 \times 10^6$ and $f_3 \ne f_4 \ne f_1 \ne f_2$, in hertz (Hz).

Frequency $f_3$ represents binary 0 and frequency $f_4$ binary 1. The 4-bit head and 3-bit tail composed of $f_3$ and $f_4$, together with the 9-bit content composed of $f_1$ and $f_2$, form the 16-bit signal lamp ultrasonic signal.
(3) Intersection signal lamp ultrasonic signal coding and decoding mechanism 3:
Similar to methods (1) and (2): $f_3$ and $f_4$ are again the component frequencies of the head and tail, and $f_1$ and $f_2$ the component frequencies of the content. The difference is that the head and tail are each represented by an independent 8 bits of 1s and 0s, and the content by an independent 8 bits as well. A single 1 and a single 0 in the head and tail have the same duration, i.e. $T_{he} = t_{he1} = t_{he0}$, where $t_{he1}$ is the duration of a single 1 in the head and tail and $t_{he0}$ the duration of a single 0, denoted uniformly by $T_{he}$. The duration $T_c$ of a single 1 or 0 in the content is not necessarily the same as the duration $T_{he}$ of a single 1 or 0 in the head and tail. Head and tail coding examples for regions a, b, c and d:
the wave head of the ultrasonic wave signal of the a-region signal lamp is 00000001, and the wave tail is 00000001
The wave head and the wave tail of the ultrasonic signal of the b area signal lamp are 00000010 and 00000011 respectively
The wave head of the ultrasonic signal of the c area signal lamp is 00000100, and the wave tail is 00000101
The wave head of the ultrasonic wave signal of the d-area signal lamp is 00001000, and the wave tail is 00000111
Content encoding example:
a green light 00; a red light 11; yellow light 10
Content tag (also called the left-turn tag here, since left turn comes first in the content sequence): 01
The order left turn, straight-ahead, right turn is still adopted, i.e.:
The left turn green light, the straight red light and the right turn green light are 01001100
The left turn red light, the straight green light and the right turn green light are 01110000
01001111 for left turn green light, straight red light and right turn red light
The left turn red light, the straight red light and the right turn green light are 01111100
The red light for left turn, the red light for straight going and the red light for right turn are 01111111
The green light of left turn, the green light of straight going and the green light of right turn are 01000000
01101010 for the yellow light of left turn, the yellow light of straight going and the yellow light of right turn
One complete signal lamp ultrasonic signal example:
the area a has a left-turn green light, a straight-going red light and a right-turn green light of 000000010100110000000001
The red light at left turn, the green light at straight going and the green light at right turn in the area b are 000000100111000000000011
The green light of left turn, the red light of straight going and the red light of right turn in the c area are 000001000100111100000101
The red light for left turn, the red light for straight going and the green light for right turn in the d area are 000010000111110000000111
(4) Intersection signal lamp ultrasonic signal coding and decoding mechanism 4:
similar to method (3), except that the signal component frequencies fi used in regions a, b, c and d all differ from one another. The same binary coding scheme is used in all four regions; the decoding scheme distinguishes the four regions by the unique signal component frequency fi that each one uses as its identification.
(5) Intersection signal lamp ultrasonic signal coding and decoding mechanism 5:
similar to (1) above, but further simplified on the basis of (1): the coding mechanism no longer distinguishes regions a, b, c and d. Instead, the directional propagation characteristic of ultrasonic waves is used to transmit the ultrasonic signal only toward vehicles travelling in one of the regions a, b, c or d. Even if a vehicle in region a receives a signal reflected from region b, the two can be distinguished by the signal intensity attenuation characteristic, provided the original transmission intensities in regions a, b, c and d are the same.
Meanwhile, the coding mechanism is further simplified: the left-turn, straight and right-turn directions each adopt an independent coding identification, i.e., sequential coding is no longer used and a direction code is attached instead, for example: straight 00; left turn 01; right turn 10.
The signal transmission sequence still uses a combination of direction and signal lamp state, but no longer distinguishes the order of straight, right turn and left turn. The decoding mechanism is likewise simplified: since directions need not be decoded in a fixed order, the decoder need only receive one complete signal and can decode at any time in real time, for example decoding immediately upon receiving a "straight green light".
The wave head and wave tail are also further simplified, using fewer binary bits (e.g., 10 for the wave head and 01 for the wave tail). The wave head and wave tail can even be omitted entirely, so that one signal carries only the direction and the signal lamp state.
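To make mechanism 5's real-time decoding concrete, here is a minimal Python sketch. The direction codes come from the example above and the light-state codes reuse the content coding example in (3) (green 00, red 11, yellow 10); the 4-bit word framing (direction followed by state, no head or tail) and all names are assumptions.

    # Minimal sketch of mechanism 5's simplified decoding: each 4-bit word is a
    # direction code followed by a light state; no fixed order, no head/tail.
    DIRECTION = {"00": "straight", "01": "left_turn", "10": "right_turn"}
    LIGHT = {"00": "green", "11": "red", "10": "yellow"}

    def decode_stream(bits):
        """Decode direction+state words as they arrive, in any order."""
        for i in range(0, len(bits) - 3, 4):
            d, s = bits[i:i + 2], bits[i + 2:i + 4]
            if d in DIRECTION and s in LIGHT:
                yield DIRECTION[d], LIGHT[s]

    print(list(decode_stream("0000")))   # [('straight', 'green')] - decode on arrival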
2. Ultrasonic signal for vehicle running
The vehicle running ultrasonic signal is synchronized with the vehicle's signal lamps. It is transmitted and received by the vehicles themselves: every vehicle equipped with an ultrasonic transmitting and receiving device sends and receives vehicle running ultrasonic signals according to a unified coding and decoding rule. As with the ultrasonic coding method in 1 above, ultrasonic waves of different signal frequencies are combined into 1s and 0s. The differences are these. First, the signal component frequencies used differ from the frequency range of the intersection signal lamp components, i.e., fi (i = 1, 2, ..., N) ≠ fj (j = 1, 2, ..., N), where fi is the frequency range of the intersection signal lamp ultrasonic components and fj that of the vehicle running ultrasonic components. Second, to better distinguish vehicle running signals from intersection signal lamp signals, the durations of the 1s and 0s composing the vehicle running ultrasonic signal differ from those of the intersection signal lamp, as do the binary coding schemes of the two. Finally, to better distinguish the ultrasonic signal of each vehicle and avoid mixed sound, a practically unlimited set of ultrasonic component frequencies f is used, with f within the vehicle running range, i.e., f ∈ fj (j = 1, 2, ..., N), so that the component frequency used by each vehicle is as unique as possible; to distinguish vehicles further, a vehicle identification number coded from f-composed 1s and 0s can be added.
The vehicle running ultrasonic signal consists of a vehicle identification number and an indication classification number. The two can be distinguished in three ways: the signal component frequencies f used by the two differ; the durations of the binary single 1 and single 0 formed from f differ; or the binary 1/0 coding schemes of the two differ.
Vehicle identification number example:
a 4-bit or 8-bit coding mode is adopted; the representable range R of 4 bits of 1s and 0s is 2⁴, and that of 8 bits is 2⁸, which greatly reduces the possibility of duplicate vehicle numbers on the surrounding road:
vehicle V1: 1000; vehicle V2: 1100; vehicle V3: 1110; vehicle V4: 1111
Vehicle-running ultrasonic signal indication classification number (classification and coding) example:
a left turn signal 1010; the right turn signal 0101; a brake signal 0000; hazard warning signal 1100;
a reverse signal 1001; a slow travel signal 0110; the overtaking signal corresponds to the left turn signal; the homing (overtaking complete) signal corresponds to the right turn signal; the braking signal corresponds to the brake signal; the whistle (horn) signal corresponds to the hazard warning signal, or is given an independently designed signal code.
The decoding mechanism is as follows: only when a complete travel ultrasonic signal is received can it be interpreted. The vehicle identification number is sent separately and continuously to reflect the vehicle position signal, or a separate vehicle position signal is added. A plurality of classification numbers can be combined, for example, a complete driving ultrasonic signal comprises a danger alarm signal and a brake signal.
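As an illustration, the minimal Python sketch below composes and parses a vehicle running ultrasonic frame from the example codes above. The 4-bit IDs and classification codes come from the examples; the framing convention (ID followed by one or more classification numbers in a single frame) and all names are assumptions.

    # Minimal sketch: vehicle running ultrasonic frame = vehicle ID + one or
    # more indication classification numbers. Framing and names are assumed.

    VEHICLE_ID = {"V1": "1000", "V2": "1100", "V3": "1110", "V4": "1111"}
    CLASSIFICATION = {
        "left_turn": "1010", "right_turn": "0101", "brake": "0000",
        "hazard": "1100", "reverse": "1001", "slow": "0110",
        "overtake": "1010",   # overtaking corresponds to the left-turn signal
        "homing": "0101",     # homing (overtaking complete) corresponds to right turn
    }

    def compose(vehicle, *indications):
        """Concatenate the vehicle ID with one or more classification numbers,
        e.g. a hazard alarm combined with a brake signal in one frame."""
        return VEHICLE_ID[vehicle] + "".join(CLASSIFICATION[i] for i in indications)

    def parse(frame):
        """Interpret only a complete frame: a 4-bit ID then 4-bit classifications."""
        ids = {v: k for k, v in VEHICLE_ID.items()}
        codes = [frame[i:i + 4] for i in range(4, len(frame), 4)]
        return ids[frame[:4]], codes

    print(compose("V2", "hazard", "brake"))   # -> 110011000000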
Fifth, coordination mechanism of sound wave and light sensation
The judgment methods and processes used in the unmanned control process are called the sound wave and light sensation coordination mechanism, also called the control judgment mechanism (judgment mechanism for short). The judgment mechanism is divided into several sub-flows: the machine learning flow; the front light sensation flow (directly ahead, left front, right front); the rear light sensation flow (directly behind, left rear, right rear); the front sound wave flow (directly ahead, left front, right front); the rear sound wave flow (directly behind, left rear, right rear); the left light sensation flow; the right light sensation flow; the left sound wave flow; and the right sound wave flow. The judgment mechanism participates in the control flow, the control flow being the control measures of automatic driving or advanced driver assistance, comprising the processes of acceleration, deceleration, braking, steering, keeping, advancing and reversing. The judgment mechanism and the control flow work in coordination and jointly control the running of the unmanned vehicle.
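To make the sub-flow structure concrete, here is a minimal Python sketch of how the sub-flows and their relative priorities might be represented. The enum members mirror the flows listed above; the numeric priority values are illustrative assumptions, since the patent only fixes relative orderings (e.g., the front light sensation flow outranks all other sub-flows while an advance process runs).

    # Minimal sketch of the judgment mechanism's sub-flows. Numeric priorities
    # are assumed; only the relative ordering is stated in the text.
    from enum import Enum

    class Flow(Enum):
        MACHINE_LEARNING = "machine learning"
        FRONT_LIGHT = "front light sensation"
        REAR_LIGHT = "rear light sensation"
        LEFT_LIGHT = "left light sensation"
        RIGHT_LIGHT = "right light sensation"
        FRONT_SOUND = "front sound wave"
        REAR_SOUND = "rear sound wave"
        LEFT_SOUND = "left sound wave"
        RIGHT_SOUND = "right sound wave"

    # Illustrative priorities for an advance (forward-driving) control process:
    PRIORITY = {Flow.FRONT_LIGHT: 100}     # always running, always highest
    PRIORITY.update({f: 50 for f in Flow if f is not Flow.FRONT_LIGHT})

    def runnable(flow, active):
        """A lower-priority flow may start only while the front light sensation
        flow (which must stay active) is running."""
        return Flow.FRONT_LIGHT in active or flow is Flow.FRONT_LIGHT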
Front light sensation flow: comprises the directly-ahead, left-front and right-front light sensation flows. Where conditions allow, video or lidar is started to detect vehicles or obstacles directly ahead, at the left front and at the right front; where light signal conditions do not allow, meter-wave (electromagnetic wave) radar replaces the video or lidar. The detection distance is set according to the practical application and radar performance. The directly-ahead detection process of the front light sensation flow is called the directly-ahead light sensation flow, the left-front detection process the left-front light sensation flow, and the right-front detection process the right-front light sensation flow.
Rear light sensation flow: similar to the front light sensation flow, with the detection area divided into directly behind, left rear and right rear. The directly-behind detection process of the rear light sensation flow is called the directly-behind light sensation flow, the left-rear detection process the left-rear light sensation flow, and the right-rear detection process the right-rear light sensation flow.
Left light sensation flow: similar to the front light sensation flow, except that the detection area is the left side.
Right light sensation flow: similar to the front light sensation flow, except that the detection area is the right side.
Front sound wave flow: comprises the directly-ahead, left-front and right-front sound wave flows. Applying the methods described above in (second, sound source coordinate calculation) and (third, sound source direction verification), it mainly detects the sound waves of vehicles directly ahead, at the left front and at the right front using several sound pickups located at the head of the vehicle body, combined when necessary with pickups at other positions.
Rear sound wave flow: the same as the front sound wave flow, except that the detection directions are directly behind, left rear and right rear; it mainly uses several sound pickups located at the tail of the vehicle body, combined with pickups at other positions.
Left sound wave flow: the same as the front sound wave flow, except that the detection area is the left side; it mainly uses several sound pickups located toward the left at the head, tail and sides of the vehicle body, combined with pickups at other positions.
Right sound wave flow: the same as the front sound wave flow, except that the detection area is the right side; it mainly uses several sound pickups located toward the right at the head, tail and sides of the vehicle body, combined with pickups at other positions.
Machine learning flow: applies the machine learning method described above (identification and tracking of sound wave signals), using the correspondence between light sensation signals and sound wave signals to learn and predict the relevant indexes of a sound source.
(1) Overtaking process under clear light sensing signal
The road is divided into a passing lane, a travel lane and a deceleration lane; assume the passing lane is to the left of the travel lane and the deceleration lane to its right. With the control flow set to an advance process, the front light sensation flow is started when preparing to change lanes and overtake, detecting whether a vehicle is within the set distance range of the left-front passing lane. If a vehicle is present, the front light sensation flow is kept at top priority while the front sound wave flow and machine learning flow are started; the machine learns the correspondence among the sound wave frequency, amplitude, distance, vehicle type and speed of the vehicles ahead (directly ahead, left front, right front). The priority of the front sound wave flow and the machine learning flow is below that of the front light sensation flow: they may run only on the basis that the front light sensation flow keeps priority. The front light sensation flow remains running throughout the whole judgment mechanism, and the remaining sub-flows run or not as the situation requires;
if there is no vehicle at the left front, the left sound wave flow is started to check whether a vehicle is on the left side. If there is, the left light sensation flow is started to verify it, and once a vehicle is verified the machine learning flow is started, the machine learning the correspondence among the sound wave frequency, amplitude, distance, vehicle type and speed of the left-side vehicle. The priorities of the left sound wave flow, left light sensation flow and machine learning flow are below that of the front light sensation flow, and the left sound wave flow has priority over the left light sensation flow;
if the left sound wave flow detects no vehicle in the left passing lane, or the left sound wave flow detects a vehicle but the left light sensation flow verifies that the left passing lane is clear, the rear sound wave flow is started to judge whether a vehicle is in the left-rear passing lane. If there is, the left-rear light sensation flow is started to verify it, and if the light sensation flow also finds a vehicle, the machine learning flow is started. While the control flow is an advance process, the priorities of the rear light sensation flow, rear sound wave flow and machine learning flow are below that of the front light sensation flow, and the rear sound wave flow has priority over the rear light sensation flow;
if the rear sound wave flow detects no vehicle in the left-rear passing lane and the left-rear light sensation flow, once started, also detects none, then with the left-front, left and left-rear passing lane all determined clear, the control flow is started, the overtaking signal stated in (IV.2, vehicle running ultrasonic signals) is transmitted, and the lane-change overtaking process begins, the specific steering, acceleration or deceleration being determined by the control flow;
throughout the overtaking process, from preparing to move into the passing lane onward, the front light sensation flow, machine learning flow and front sound wave flow are kept running, with the front light sensation flow always at top priority. Once driving in the passing lane, the right sound wave flow and right light sensation flow are started; when the own vehicle passes the vehicle being overtaken, the rear sound wave flow is started, and once the own vehicle is beyond the set distance from the overtaken vehicle, the homing (overtaking complete) signal explained in (IV, traffic ultrasonic signals) is transmitted;
when preparing to return, the right-front light sensation flow detects whether a vehicle is within the set distance of the right-front lane. If there is none, the homing process is started, with steering and the rest determined by the control flow; if there is, the front sound wave flow and machine learning flow are started, the speed of the right-front vehicle is judged, and whether to continue overtaking or to decelerate and wait for the right-front vehicle to move away is determined by the control flow.
In the above flow, when the detected vehicle is ahead of or behind the own vehicle, the light sensation signal measures its width, height and angle (the relative angle between the detected vehicle and the light sensation device); when the detected vehicle is to one side, it measures its length, height and angle; from these the shape of the detected vehicle is calculated.
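The decision sequence in the steps above can be summarized in code form. The following Python sketch is a simplified rendering under the assumption that each flow object exposes a boolean vehicle_detected(zone) check; all function names and the emit_signal transmitter are hypothetical.

    # Simplified sketch of the overtake decision under clear light sensation
    # signals. Each *_flow object is assumed to expose vehicle_detected(zone);
    # emit_signal() stands for the vehicle running ultrasonic transmitter.

    def clear_to_overtake(front_light, left_sound, left_light, rear_sound, rear_light):
        """True when the left-front, left and left-rear passing lane segments are
        all verified clear (sound wave flow first, light sensation flow used to
        verify a positive sound detection, per the steps above)."""
        if front_light.vehicle_detected("left_front"):
            return False
        if left_sound.vehicle_detected("left") and left_light.vehicle_detected("left"):
            return False
        if rear_sound.vehicle_detected("left_rear") and rear_light.vehicle_detected("left_rear"):
            return False
        return True

    def try_overtake(flows, control, emit_signal):
        if clear_to_overtake(*flows):
            emit_signal("overtake")       # overtaking signal, per IV. traffic ultrasonic
            control.start_lane_change()   # steering/acceleration set by the control flow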
(2) Being-overtaken process under clear light sensation signals
Similar to the overtaking process, except that here the own vehicle is the one being overtaken. The rear sound wave flow, left or right sound wave flow, rear light sensation flow, left or right light sensation flow and machine learning flow are started to detect the type, sound wave frequency, sound wave amplitude, distance (relative to the own vehicle) and speed of the overtaking vehicle, and the relation between its sound wave signal and light sensation signal is established.
The priority of the front light sensation flow is higher than that of all other sub-flows in the judgment mechanism, and it is always running. When a sound wave signal appears at the left rear or right rear and the rear light sensation flow confirms a vehicle in the passing lane, the light sensation flow measures the relative distance, the sound wave flow detects and tracks the sound wave frequency and amplitude of the sound source vehicle, and the machine learning flow establishes the correspondence.
(3) When the light sensation signal is insensitive (e.g., on foggy days)
When the lidar or video functions of the light sensation flow are affected by weather or similar factors, the distance and type of a sound source vehicle are predicted from the machine learning result data gathered in (1) (the overtaking process under clear light sensation signals), confirmed when necessary by the meter-wave radar detection of the light sensation flow.
Straight-going process (a set of several judgment mechanism sub-flows and a control process): the slow-travel signal from (IV, traffic ultrasonic signals) is transmitted continuously, the front sound wave flow and rear sound wave flow are set to continuous working states, and the working states of the remaining sub-flows in the judgment mechanism are determined as the situation requires. Forward detection while going straight: when the front sound wave flow detects a vehicle within the set distance range ahead, the front light sensation flow is started to confirm whether a vehicle is present. If so, the forward emergency state level Le_i (i = 1, 2, ..., N) is set to Le_1 and a control process is initiated, where Le_1 > Le_2 > ... > Le_N, i.e., Le_i > Le_{i+1}; the control flow takes measures according to Le_i, the specific deceleration or overtaking measures being determined by the control flow. If the light sensation flow detects no vehicle, Le_i is set to Le_2 and the control flow takes measures according to Le_2, as determined by the control flow;
straight-going process, rear detection while going straight: when the rear sound wave flow detects a vehicle within the set distance range directly behind, the rear light sensation flow is started to confirm it. If a vehicle is confirmed, the rear emergency state level is set to BLe_i (i = 1, 2, ..., N) and passed to the control process, where BLe_1 > BLe_2 > ... > BLe_N, i.e., BLe_i > BLe_{i+1}; the control flow takes measures according to the magnitude of BLe_i, as determined by the control flow, and when a following vehicle closes to within the set collision distance range, the warning signal from (IV, traffic ultrasonic signals) is transmitted. If the rear light sensation flow detects no vehicle directly behind, the rear emergency level is set to BLe_{i+1} and the control flow takes measures according to BLe_{i+1}, as determined by the control flow;
overtaking process (a set of several judgment mechanism sub-flows and a control process), detection before entering the passing lane: assume the passing lane, travel lane and deceleration lane are arranged from left to right. When preparing to change lanes and overtake, the front sound wave flow detects whether a vehicle is within the set distance range of the left-front passing lane. If there is, the straight-going flow continues; if there is none, the front light sensation flow is started to confirm. If a vehicle is confirmed, driving continues under the straight-going flow while the left-front passing lane keeps being checked, until both the front sound wave flow and the front light sensation flow detect no vehicle. The left sound wave flow is then started to detect whether a vehicle is in the left passing lane; if there is, the straight-going flow continues and the left passing lane is re-checked at intervals of time t, until the left sound wave flow detects no vehicle there and the left light sensation flow likewise finds none. The rear sound wave flow is then started to detect whether a vehicle is within the set distance range of the left-rear passing lane; on the same principle, if a vehicle is present the own vehicle waits (re-detecting at intervals t) and keeps to the straight-going flow, until the rear sound wave flow detects no vehicle and the rear light sensation flow confirms none, at which point the lane-change overtaking process is started, the specific course determined by the control flow. If the rear sound wave flow detects a vehicle, or the rear light sensation flow does, the own vehicle continues to wait under the straight-going flow until both the sound wave and light sensation flows detect no vehicle within the set distance range of the passing lane; the overtaking signal stated in (IV, traffic ultrasonic signals) is then transmitted and the lane-change overtaking process begins;
overtaking process, detection after entering the passing lane: while executing the straight-going flow, the right sound wave flow continuously detects whether a vehicle is in the right travel lane. If there is, the straight-going flow continues (the overtaking speed determined by the control flow) and the right sound wave flow keeps checking the right travel lane until no vehicle is detected; the right light sensation flow is then started to confirm. If the light sensation flow detects a vehicle, the own vehicle keeps waiting (the straight-going flow continues and the right travel lane is re-checked) until no vehicle is confirmed. The rear sound wave flow is then started to detect whether a vehicle is within the set distance range of the right-rear travel lane; if there is, the own vehicle waits until both the rear sound wave flow and the rear light sensation flow detect none. The front sound wave flow is then started to detect whether a vehicle is within the set distance range of the right-front travel lane; if there is none, the front light sensation flow is started to confirm, and if the light sensation flow confirms a vehicle, the own vehicle keeps waiting. Once the front sound wave flow and front light sensation flow both detect no vehicle within the right-front set distance range, transmission of the overtaking signal stops, the vehicle running ultrasonic homing (overtaking complete) signal explained in (IV, traffic ultrasonic signals) is transmitted, and the homing process in the control flow is executed, with the specific course determined by the control flow.
The sound wave flows in (3) above mean the sound wave flow and the machine learning flow working cooperatively to predict whether a vehicle is present. When the light sensation flow's detection capability is limited, the control flow can act on the sound wave flow's detection result, i.e., taking the sound wave flow's judgment as authoritative, or raising the priority of the sound wave flow's detection result above that of the light sensation flow's.
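A minimal sketch of this priority inversion is given below; the predict_vehicle fusion of the sound wave flow with the machine learning flow's learned model is an assumed interface for illustration, not the patent's literal implementation.

    # Minimal sketch: in fog, the sound wave flow (backed by the machine
    # learning flow's learned frequency/amplitude-to-distance model) outranks
    # the light sensation flow. All interfaces are assumed.

    def predict_vehicle(sound_features, learned_model):
        """Predict (vehicle_present, distance, vehicle_type) from sound wave
        frequency/amplitude using the model learned under clear conditions."""
        return learned_model.predict(sound_features)

    def fused_detection(sound_result, light_result, light_impaired):
        # When light sensation is impaired, its (possibly missing) result is
        # ignored and the sound wave judgment is taken as authoritative.
        if light_impaired or light_result is None:
            return sound_result
        return light_result   # clear conditions: light sensation keeps priority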
Drawings
Fig. 1 is a side (left) view of a pickup layout for a truck, in which arabic numerals are given: 2, a sound pick-up b is positioned at the front lower part of the left side of the head of the truck; 3, a sound pick-up c is positioned at the front upper part of the left side of the head of the truck; and 6, a sound pick-up f is positioned at the rear lower part of the left side of the tail part of the truck.
Fig. 2 is a side (right) view of a pickup layout for a truck, where arabic numerals are: 1, a sound pick-up a which is positioned at the front lower part of the right side of the head of the truck; 4, a sound pick-up d is positioned at the front upper part of the right side of the head of the truck; and the 5 sound pickup e is positioned at the rear lower part of the right side of the tail part of the truck.
FIG. 3 is a schematic front view of a pickup layout of a truck, wherein Arabic numerals are given: 1 pickup a, as in fig. 2; 2, a sound pick-up b, the same as the sound pick-up in fig. 1; 3 pickup c, same as fig. 1; 4 pickup d, same as fig. 2.
Fig. 4 is a side (left) view of a passenger car pickup layout, where the Arabic numerals denote: 1, sound pickup a, at the rear lower part of the left side of the passenger car tail; 4, sound pickup d, at the rear upper part of the left side of the tail; 5, sound pickup e, at the front upper part of the left side of the head; 8, sound pickup h, on the sound wave and light sensation integrated mirror on the left side of the passenger car;
fig. 5 is a side (right) view of the passenger car pickup layout, where the Arabic numerals denote: 2, sound pickup b, at the rear lower part of the right side of the tail; 3, sound pickup c, at the rear upper part of the right side of the tail; 6, sound pickup f, at the front lower part of the right side of the head; 7, sound pickup g, on the sound wave and light sensation integrated mirror on the right side of the passenger car.
Fig. 6 is a front view of the sound wave and light sensation integrated mirror on the right side of the passenger car, where the numerals denote: 7, sound pickup g, as in fig. 5; 8, sound pickup h; 9, light sensation device L; 10, mirror glass panel R, in which small holes are provided through which the pickup g and the light sensation device L each detect external signals; 11, fixing device. The mirror glass panel R sits on the fixing device, forming a sealed space between the two; the circuit boards of the light sensation device L and pickup g are located inside it, and their power and signal lines pass out through one side of the fixing device (the side connected to the vehicle body).
Fig. 7 is a time-domain plot of sound wave signals, with time on the horizontal axis and sound wave amplitude on the vertical axis, where the Arabic numerals denote: 1, peak point p1 of signal Sw1; 2, peak point p2 of signal Sw2; 3, peak point p3 of signal Sw2; 4, peak point p4 of signal Sw1; 5, peak point p5 of signal Sw1; 6, peak point p6 of signal Sw2. The time difference between points 1 and 2 is tdp; the difference between 3 and 4 is likewise tdp, as is the difference between 5 and 6.
Fig. 8 is a schematic diagram of the acoustic wave azimuth calculation, in which arabic numerals are given: 0 sound wave emission point, namely a sound source point O; 1, a pickup A; 2, a sound pick-up B; 3, a sound pick-up C; 4 a sound pickup D.
FIG. 9 is a frequency chart of traffic sound wave signal components, with time on the horizontal axis and sound wave amplitude on the vertical axis, where: 1, traffic sound wave signal component frequency f1; 2, component frequency f2; 3, component frequency f3; 4, component frequency f4.
FIG. 10 is a binary schematic of traffic sound wave signals, with time on the horizontal axis and signal component frequency fi (the f1, f2, f3, f4 of FIG. 9) on the vertical axis, where: 0 is one of the fi, e.g., f1, and 1 is another of the fi, e.g., f2; the 0s and 1s compose two traffic sound wave signals Sb1 and Sb2, Sb1 in the lower part and Sb2 in the upper part.
FIG. 11 is the forward detection flowchart of the straight-going flow, where the symbols denote: 1, start; 2, front sound wave flow; 3 and 9, judgments; 4, 6, 10 and 12, emergency state values (given as formula images in the original, not reproduced here); 5, 7, 11 and 13, control flows; 8, front light sensation flow; 14, end; Y, yes; N, no.
FIG. 12 is the rear detection flowchart of the straight-going flow, where the symbols denote: 1, start; 2, rear sound wave flow; 3 and 9, judgments; 4, 6, 10 and 12, emergency state values (formula images); 5, 7, 11 and 13, control flows; 8, rear light sensation flow; 14, end; Y, yes; N, no.
FIG. 13 is the flowchart of the second forward detection scheme of the straight-going flow, where the symbols denote: 1, start; 2, front sound wave flow; 3 and 9, judgments; 4, 6, 10 and 12, emergency state values (formula images); 5, 7, 11 and 13, control flows; 8, front light sensation flow; 14, end; Y, yes; N, no.
FIG. 14 is the flowchart of the second rear detection scheme of the straight-going flow, where the symbols denote: 1, start; 2, rear sound wave flow; 3 and 9, judgments; 4, 6, 10 and 12, emergency state values (formula images); 5, 7, 11 and 13, control flows; 8, rear light sensation flow; 14, end; Y, yes; N, no.
FIG. 15 is the overtaking flowchart, where the symbols denote: 1, start; 2, left-front sound wave flow; 6, left-front light sensation flow; 10, left sound wave flow; 14, left light sensation flow; 18, rear sound wave flow; 22, rear light sensation flow; 26, control flow; 27, straight-going flow; 28, right-front sound wave flow; 32, right-front light sensation flow; 36, right sound wave flow; 40, right light sensation flow; 44, right-rear sound wave flow; 48, right-rear light sensation flow; 52, control flow; 53, straight-going flow; 54, end; 3, 7, 11, 15, 19, 23, 29, 33, 37, 41, 45 and 49, judgments; 4, 8, 12, 16, 20, 24, 30, 34, 38, 42, 46 and 50, state values (formula images not reproduced here); 5, 9, 13, 17, 21, 25, 31, 35, 39, 43, 47 and 51, control flows; Y, yes; N, no.
Fig. 16 is a schematic diagram of waiting in the travel lane to overtake, where the symbols denote: B1, outer road boundary line; B2, inner road boundary line; L1, boundary between the passing lane and the travel lane; L2, boundary between the travel lane and the deceleration lane; V1, unmanned vehicle; V2 and V3, surrounding vehicles; Ia, set distance boundary of the unmanned vehicle's left-front passing lane; Ib, front boundary of the set distance for the left passing lane; Ic, rear boundary of the set distance for the left passing lane; Id, set distance boundary of the left-rear passing lane.
Fig. 17 is a schematic diagram of the passing lane, where the symbols denote: B1, outer road boundary line; B2, inner road boundary line; L1, boundary between the passing lane and the travel lane; L2, boundary between the travel lane and the deceleration lane; V1, unmanned vehicle; V2 and V3, surrounding vehicles; Ia, set distance boundary of the unmanned vehicle's right-rear lane; Ib, rear boundary of the set distance for the right lane; Ic, front boundary of the set distance for the right lane; Id, set distance boundary of the right-front lane.
Detailed Description
Referring to fig. 1, a layout diagram of a pickup on the left side of a truck, fig. 2, a layout diagram of a pickup on the right side of a truck, and fig. 3, a layout diagram of a pickup on the front side of a head of a truck, wherein small circles beside a, b, c, d, e, and f respectively represent a pickup a, a pickup b, a pickup c, a pickup d, a pickup e, and a pickup f, which are respectively referred to as a, b, c, d, e, and f.
Referring to fig. 4, the layout of the sound pickups on the left side of a passenger car, fig. 5, the layout on the right side, and fig. 6, the front view of the sound wave and light sensation integrated mirror on the right side of the passenger car: a, b, c, d, e, f, g and h denote sound pickups at different positions on the vehicle body, with h located on the sound wave and light sensation integrated mirror on the left side of the passenger car.
Referring to fig. 6: R is the mirror glass panel, L the light sensation device (a camera or a lidar transmitting and receiving device), and g a sound pickup; R, L, g and the fixing device (base) together form the sound wave and light sensation integrated mirror. Small holes are drilled on the right side of the mirror glass panel R, and the light sensation device L and pickup g are each exposed through a hole. The circuit boards of L and g sit behind the mirror glass panel R, in the space between the glass panel and the fixing device; the glass panel R and the fixing device form a sealed space. The power and signal lines of L and g pass out through one side of the fixing device (beside its connection to the vehicle body) and connect to the control system and power supply in the cab. Heat generated by the circuit boards of L and g is dissipated into the outside air by the fixing device (base); when the automobile is moving, the base meets the oncoming air, generating airflow that carries the heat away.
As shown in fig. 7, the unmanned vehicle filters out sound signals of given characteristic frequencies, such as the running noise and horn waveforms of surrounding automobiles, through a filter, and calculates the time difference between the sound waves collected by different sound pickups. Sw1 and Sw2 are the same sound wave signal collected by different pickups; tdp is the time difference between peak point p2 and peak point p1, and also the time difference between peak point p4 and peak point p3; since Sw1 and Sw2 are the same sound wave signal, the time difference between peak point p6 and peak point p5 is likewise tdp.
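As an illustration of this peak-alignment step, the following Python sketch estimates the inter-pickup time difference by cross-correlating the two filtered recordings. The cross-correlation approach, sample rate and all names are assumptions; the patent itself only specifies comparing corresponding peak times.

    # Minimal sketch: estimate the arrival-time difference t_dp between the
    # same sound signal recorded by two pickups. Cross-correlation is an
    # assumed implementation choice for aligning the waveforms.
    import numpy as np

    def time_difference(sig_a, sig_b, sample_rate):
        """Return t_dp in seconds: positive when sig_b lags sig_a."""
        corr = np.correlate(sig_b, sig_a, mode="full")
        lag = np.argmax(corr) - (len(sig_a) - 1)   # samples of best alignment
        return lag / sample_rate

    # Synthetic check: a 2 kHz burst delayed by 0.5 ms between pickups.
    fs = 96_000
    t = np.arange(0, 0.01, 1 / fs)
    burst = np.sin(2 * np.pi * 2000 * t)
    delayed = np.concatenate([np.zeros(48), burst])[:len(burst)]  # 48/96000 s
    print(time_difference(burst, delayed, fs))     # ~0.0005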
As shown in the sound wave azimuth calculation schematic of fig. 8, points A, B, C and D represent sound pickups a, b, c and d, and point O the position of the sound source. The principle corresponds to the section (three, coordinate calculation of the sound source) above, and points A, B, C and D correspond to the pickups a, b, c and d in figs. 1, 2, 3, 4 and 5.
Corresponding to the part (second, identification and tracking of the sound wave signals) in the invention content, the unmanned vehicle acquires light sensing signals and sound wave signals of surrounding vehicles during running, and machine learning is performed to establish the corresponding relation between the vehicle sound waves and the light sensing signals.
Under the condition that light sensing signals such as special weather are influenced or the light sensing signal processing capacity is limited, the sound wave signals of surrounding vehicles are collected and the type and the distance of a sound source vehicle are predicted by applying a machine learning algorithm.
FIG. 9 is a frequency chart of the traffic sound wave signal components: f1, f2, f3 and f4 are constant-frequency sound wave components, the original signals from which a traffic sound wave signal is composed, called traffic sound wave signal components. FIG. 10 is a binary schematic of traffic sound wave signals: Sb1 and Sb2 are two traffic sound wave signals; of f1, f2, f3 and f4, one represents 1 and another represents 0, i.e., the f1, f2, f3 and f4 of FIG. 9 compose the Sb1 and Sb2 of FIG. 10. Corresponding to the section (four, traffic ultrasonic signals) above, unmanned vehicles send and receive vehicle running ultrasonic signals and achieve mutual identification and mutual avoidance according to (five, sound wave and light sensation coordination mechanism), while the intersection traffic light system sends traffic guidance ultrasonic signals to the unmanned vehicles.
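To make the component-frequency encoding of figs. 9 and 10 concrete, the sketch below synthesizes a bit string as a sequence of constant-frequency tone bursts. The specific frequencies, bit duration and sample rate are illustrative assumptions; the 24-bit frame is the region-a example from the coding mechanisms above.

    # Minimal sketch: compose a traffic sound wave signal (fig. 10 style) from
    # constant-frequency components (fig. 9 style). f_zero/f_one, bit duration
    # and sample rate are assumed values for illustration only.
    import numpy as np

    def synthesize(bits, f_zero=40_000.0, f_one=44_000.0,
                   bit_duration=0.002, sample_rate=192_000):
        """Each binary digit becomes a tone burst at its component frequency."""
        t = np.arange(0, bit_duration, 1 / sample_rate)
        tones = {"0": np.sin(2 * np.pi * f_zero * t),
                 "1": np.sin(2 * np.pi * f_one * t)}
        return np.concatenate([tones[b] for b in bits])

    # The complete region-a frame from the coding examples above:
    waveform = synthesize("000000010100110000000001")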
Under good light sensation conditions, such as fine weather, the light sensation signals of surrounding vehicles are placed in correspondence with their sound wave signals by one of two methods. First: vehicle types are classified from the light sensation signal as Vi (i = 1, 2, 3, ..., N), and a correspondence between each vehicle type and a sound wave frequency range (expressed in the original as a formula image: a range [A, B], i = 1, 2, ..., N) is established from the sound wave signals of that vehicle type, where A is the lowest frequency, B the highest, and the sampled values of the range should be the typical characteristic sound wave frequencies of the type; combined with the machine learning algorithm in (two, identification and tracking of sound wave signals) above, the sound wave frequency characteristics and range characteristics of different vehicle types are learned. Second: on the basis of the uniqueness of the sound wave signal, the light sensation signal of the sound source vehicle is collected and a correspondence is established; for example, the light sensation signal of the sound source vehicle (a formula image in the original) is made to correspond, via a sound wave of a given frequency, to vehicle types V1 and V2, and the machine learning algorithm learns the amplitude probability of that frequency's sound wave at different distances, as well as the probability that the sound wave belongs to a given vehicle type.
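The first method above amounts to a lookup from sound wave frequency to candidate vehicle types. A minimal sketch under assumed frequency ranges follows; the ranges and type names are invented placeholders, not values from the patent.

    # Minimal sketch of method one: map a detected sound wave frequency to the
    # vehicle types whose learned frequency ranges [A, B] contain it. The
    # ranges below are invented placeholders for illustration.
    FREQ_RANGE_HZ = {
        "V1_truck": (80.0, 400.0),
        "V2_passenger_car": (300.0, 900.0),
        "V3_motorcycle": (600.0, 1500.0),
    }

    def candidate_types(freq_hz):
        """Return every vehicle type Vi whose range [A, B] contains freq_hz;
        overlapping ranges yield several candidates for the learner to rank."""
        return [v for v, (a, b) in FREQ_RANGE_HZ.items() if a <= freq_hz <= b]

    print(candidate_types(350.0))   # ['V1_truck', 'V2_passenger_car']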
Referring to the forward detection flowchart of fig. 11: 1 starts the flow, and front sound wave flow 2 is started; judgment 3 decides, from the front sound wave flow, whether a vehicle is within the set distance ahead. If yes (result Y), an emergency state value (a formula image in the original, indexed by i = 1, 2, ..., N; likewise below) is sent to control flow 5, which sets the emergency state level according to the value of i; if the result of judgment 3 is N (no vehicle), the emergency state is set to the corresponding no-vehicle value and sent to control flow 7. When control flow 5 finishes, front light sensation flow 8 is executed; likewise, when control flow 7 finishes, front light sensation flow 8 is also executed. Judgment 9 decides, from the front light sensation flow, whether a vehicle is within the set distance ahead: if yes (Y), an emergency state value is sent to control flow 11; if the result is N, the emergency state is set to the corresponding no-vehicle value and sent to control flow 13. Control flows 13 and 11 each determine the specific control actions from the emergency state values they receive. After control flow 13 or 11 completes, execution either loops back to front sound wave flow 2 or proceeds to end 14; whether to end at 14 or return to front sound wave flow 2 is determined by control flow 13 or 11.
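For illustration, a compact Python rendering of this forward detection loop follows. The LEVEL_* constants stand in for the emergency-state values shown as formula images in the flowchart, and all interfaces are assumptions.

    # Minimal sketch of fig. 11's forward detection loop. LEVEL_* constants
    # stand in for the emergency-state formula images; names/values assumed.
    LEVEL_SOUND_VEHICLE, LEVEL_SOUND_CLEAR = 1, 2   # lower number = more urgent
    LEVEL_LIGHT_VEHICLE, LEVEL_LIGHT_CLEAR = 1, 2

    def forward_detection_step(front_sound, front_light, control):
        # Sound wave check first, light sensation confirmation second.
        if front_sound.vehicle_ahead():
            control.apply(LEVEL_SOUND_VEHICLE)
        else:
            control.apply(LEVEL_SOUND_CLEAR)
        if front_light.vehicle_ahead():
            control.apply(LEVEL_LIGHT_VEHICLE)
        else:
            control.apply(LEVEL_LIGHT_CLEAR)
        return control.should_continue()   # loop back to sound flow, or end

    def forward_detection(front_sound, front_light, control):
        while forward_detection_step(front_sound, front_light, control):
            pass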
Referring to the rear detection flowchart of fig. 12: 1 starts the flow, and rear sound wave flow 2 is started; judgment 3 decides, from the rear sound wave flow, whether a vehicle is within the set distance behind. If yes (result Y), an emergency state value (a formula image in the original; likewise below) is sent to control flow 5, which sets the emergency state level according to the value of i (i = 1, 2, ..., N); if the result of judgment 3 is N (no vehicle), the emergency state is set to the corresponding no-vehicle value and sent to control flow 7. When control flow 5 finishes, rear light sensation flow 8 is executed; likewise, when control flow 7 finishes, rear light sensation flow 8 is also executed. Judgment 9 decides, from the rear light sensation flow, whether a vehicle is within the set distance behind: if yes, an emergency state value is sent to control flow 11; if the result is N, the emergency state is set to the corresponding no-vehicle value and sent to control flow 13. Control flows 13 and 11 each determine the specific control actions from the values they receive. After control flow 13 or 11 completes, execution either loops back to rear sound wave flow 2 or proceeds to end 14; whether to end at 14 or return to rear sound wave flow 2 is determined by control flow 13 or 11.
Fig. 13 shows the second forward detection scheme of the straight-going flow. It is similar to fig. 11, except that when control flow 7 finishes it returns to front sound wave flow 2: when front sound wave flow 2 detects no vehicle (N), front light sensation flow 8 is not executed for verification for the moment, and front sound wave flow 2 simply keeps cycling.
Fig. 14 shows the second rear detection scheme of the straight-going flow. It is similar to fig. 12, except that when control flow 7 finishes it returns to rear sound wave flow 2: when rear sound wave flow 2 detects no vehicle (N), rear light sensation flow 8 is not executed for verification for the moment, and rear sound wave flow 2 simply keeps cycling.
As shown in the overtaking flowchart of fig. 15: 1 starts the flow, and left-front sound wave flow 2 detects vehicle conditions within the set distance range of the left-front passing lane. If judgment 3 gives Y (vehicle present), a state value (a formula image in the original; likewise below) is sent to control flow 5 and execution returns to left-front sound wave flow 2, until judgment 3 gives N (no vehicle); left-front light sensation flow 6 then verifies whether a vehicle is within the set distance range of the left-front passing lane. If judgment 7 gives Y, a state value is sent to control flow 9, which determines the specific control actions such as keeping straight, and execution returns to left-front sound wave flow 2 to continue detecting; once judgment 7 gives N, left sound wave flow 10 detects whether a vehicle is in the left passing lane. If judgment 11 gives Y, a state value is sent to control flow 13 and execution returns to left-front sound wave flow 2, cycling until judgment 11 gives N; left light sensation flow 14 is then executed, and if judgment 15 gives Y, a state value is sent to control flow 17 and execution returns to left-front sound wave flow 2, cycling until judgment 15 gives N. Left-rear sound wave flow 18 is then executed; if judgment 19 gives Y, a state value is sent to control flow 21 and execution returns to left-front sound wave flow 2, cycling until judgment 19 gives N; left-rear light sensation flow 22 is then executed, and if judgment 23 gives Y, a state value is sent to control flow 25 and execution returns to left-front sound wave flow 2, cycling until judgment 23 gives N. Control flow 26 then completes the left turn from the travel lane into the passing lane, after which straight-going flow 27 continues in the passing lane (as in fig. 11 or 13, and fig. 12 or 14), while right-front sound wave flow 28 detects vehicle conditions within the set distance range of the right-front travel lane. If judgment 29 gives Y, a state value is sent to control flow 31 and execution returns to right-front sound wave flow 28, until judgment 29 gives N; right-front light sensation flow 32 then verifies, and if judgment 33 gives Y, a state value is sent to control flow 35 and execution returns to right-front sound wave flow 28. Once judgment 33 gives N, right sound wave flow 36 detects whether a vehicle is in the right travel lane; if judgment 37 gives Y, a state value is sent to control flow 39 and execution returns to right-front sound wave flow 28, cycling until judgment 37 gives N; right light sensation flow 40 is then executed, and if judgment 41 gives Y, a state value is sent to control flow 43 and execution returns to right-front sound wave flow 28, cycling until judgment 41 gives N. Right-rear sound wave flow 44 then detects vehicle conditions within the right-rear set distance range; if judgment 45 gives Y, a state value is sent to control flow 47 and execution returns to right-front sound wave flow 28, cycling until judgment 45 gives N; right-rear light sensation flow 48 then verifies, and if judgment 49 gives Y, a state value is sent to control flow 51 and execution returns to right-front sound wave flow 28, cycling until judgment 49 gives N. Control flow 52 then completes the return from the passing lane into the travel lane; straight-going flow 53 continues in the travel lane (as in fig. 11 or 13, and fig. 12 or 14) and the overtaking flow ends (54).
As shown in fig. 16, the schematic of waiting in the travel lane to overtake corresponds to the overtaking flow of fig. 15. B1 is the outer road edge line, B2 the inner road edge line, L1 the boundary between the passing lane and the travel lane, and L2 the boundary between the travel lane and the deceleration lane. V1 is the unmanned vehicle (equipped with the sound wave and light sensation coordinated detection method and system); V2 and V3 are vehicles around V1. The short dashed lines Ia, Ib, Ic and Id are the edge lines of the distance ranges vehicle V1 sets for detecting the left-front, left and left-rear passing lane: Ia to Ib is V1's set distance range for the left-front passing lane, Ib to Ic its set range for the left passing lane, and Ic to Id its set range for the left-rear passing lane.
As shown in the overtaking schematic of fig. 17, corresponding to the overtaking flow of fig. 15: B1 is the outer road edge line, B2 the inner road edge line, L1 the boundary between the passing lane and the travel lane, and L2 the boundary between the travel lane and the deceleration lane. V1 is the unmanned vehicle (equipped with the sound wave and light sensation coordinated detection method and system); V2 and V3 are vehicles around V1. The short dashed lines Ia, Ib, Ic and Id are the edge lines of the distance ranges vehicle V1 sets for detecting the right-front, right and right-rear lanes: Ia to Ib is V1's set distance range for the right-rear lane, Ib to Ic its set range for the right lane, and Ic to Id its set range for the right-front lane.

Claims (6)

1. The unmanned sound wave and light sense coordinated detection method and system are characterized by comprising the following steps of:
1) Sound wave signal classification: Fourier transform is applied to process and classify sound wave signals into several sine waves, with frequency and amplitude as the identifiers of a sound wave signal; one sound wave signal is decomposed into a single fundamental wave and several harmonics, one single-sound-source signal into several sound wave signals, and one multi-sound-source mixed signal into several single-sound-source signals; conversely, a single fundamental wave and several harmonics are combined into one sound wave signal, several sound wave signals into one single-sound-source signal, and several single-sound-source signals into one multi-sound-source mixed signal;
2) Identification and tracking of sound wave signals: while the unmanned vehicle runs, the sound wave signals and light sensation signals of surrounding vehicles are collected; the distance, shape and purpose of a sound source vehicle are measured through the light sensation signals, and the frequency, amplitude and sound wave type of its sound wave signal are analyzed; correspondences between the distance and shape of the sound source vehicle and the frequency and amplitude of its sound wave signal are established and stored in a learning-type sample library; the sound wave signal characteristics of each vehicle power mode are entered manually into an import-type sample library; the learning-type and import-type sample libraries are associated and a classified correspondence is established; a hidden Markov learning algorithm is used to train and learn the correspondence between the light sensation and sound wave signals of a vehicle, and to learn and train the sound intensity of different vehicles' sound wave frequency signals at different distances; a multinomial logistic regression algorithm is applied, first judging the vehicle type and second the relative distance between the sound source and the own vehicle; classification is by vehicle type, purpose and size based on a hidden Markov model approximation algorithm, or vehicle brand and production model are classified from the sound wave signal, or frequency and amplitude are taken directly as input values and distance as the output value;
3) Sound source coordinate calculation, with the sound source coordinate calculation model (presented as a formula image in the original):
the coordinates of point o are x, y and z; the consistency of the coordinates of point o with those of point o', i.e., the deviation between the two in x and y, is calculated to judge measurement accuracy; sound source direction verification: whether the signals at times t1, t2 and t3 are consistent is analyzed, so as to judge whether the sound source point is correct; sound source azimuth detection: sound pickups are arranged at the head, tail, left side, right side and top of the automobile, each pair of pickups kept a certain distance apart and the pickups not all in one plane; each pickup records the change of the sound's frequency, amplitude and waveform over time, and the pickups are paired into groups that each detect the reception time difference of the same sound signal; the sound source direction relative to the vehicle is resolved into front, rear, left front, right front, left, right, left rear and right rear by the pickups a1, a2, b1, b2, c1, c2, d1, d2, e1 and e2 at the head, tail, left side, right side and top of the vehicle; if a1 or a2 receives the strongest signal, the sound source is in front, and the time difference Taa (or phase difference) of the sound wave at a1 and a2 is further calculated; taking the propagation speed of sound in air as Vsa, the distance travelled by the sound wave in time Taa is La = Taa × Vsa; denoting the distance from the sound source to a1 as La1, the distance from the sound source to a2 is La1 + La, and a triangle is drawn from the proportional relation of the three side lengths to determine the horizontal azimuth of the sound source; the time difference Tae between a1 and e1 collecting the same sound signal si is calculated, the distance travelled by the sound wave in time Tae being Le = Tae × Vsa and the distance from the sound source to e1 being Le1 = La1 + Le; the sound source distance is further judged from the amplitude of a sound signal of given frequency and waveform; the sound signal sample library comprises an import-type sample library and a learning-type sample library, the import-type library being sound samples entered in advance with specific meanings and the learning-type library being built by the unmanned vehicle in road driving practice on the basis of combined light sensation and sound, with sound and image in correspondence; the collection range of sound signals mainly comprises horn sounds, vehicle running noise and intersection traffic signal marking sounds; sound sample references are selected by scene, road scenes being divided into urban streets, plain roads, mountain roads and tunnels, with a sound sample library established for each scene; combined with the identification and tracking of sound wave signals in step 2), the azimuths, distances and vehicle types of surrounding vehicles are predicted;
4) ultrasonic traffic signals comprise intersection traffic guidance ultrasonic signals and vehicle running ultrasonic signals. Each intersection traffic guidance ultrasonic signal is composed of a number of frequencies f1 and f2, where frequency f1 represents binary 0 and frequency f2 represents binary 1, so the coding and decoding mechanism of the intersection traffic guidance ultrasonic signal is a 0/1 scheme. The vehicle running ultrasonic signals are coded similarly, combining ultrasonic waves of different signal frequencies into 1 and 0 (a sketch of such a two-frequency code follows below). The intersection signal lamp system sends the intersection traffic guidance ultrasonic signals in broadcast mode, and the unmanned vehicles at the intersection receive the ultrasonic signals and interpret them according to the decoding mechanism. Unmanned vehicles mutually send and receive vehicle running ultrasonic signals, stay synchronized with the traffic signal lamps, and replace horn sounds with whistle ultrasonic waves;
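As an illustration of the two-frequency 0/1 scheme, here is a minimal Python sketch. The concrete frequencies, sample rate and symbol duration are assumptions: the claim specifies only that f1 stands for binary 0 and f2 for binary 1.

```python
import numpy as np

F1, F2 = 40_000.0, 41_000.0   # assumed frequencies f1 (binary 0) and f2 (binary 1)
FS = 192_000                  # assumed sample rate, Hz
SYMBOL = 0.005                # assumed duration of a single 0 or 1, seconds

def encode(bits):
    """Concatenate tone bursts: f1 stands for 0, f2 for 1."""
    t = np.arange(int(FS * SYMBOL)) / FS
    tones = {0: np.sin(2 * np.pi * F1 * t), 1: np.sin(2 * np.pi * F2 * t)}
    return np.concatenate([tones[b] for b in bits])

def decode(signal):
    """Split into symbol windows and pick the stronger of the two
    component frequencies in each window."""
    n = int(FS * SYMBOL)
    t = np.arange(n) / FS
    ref1 = np.exp(-2j * np.pi * F1 * t)
    ref2 = np.exp(-2j * np.pi * F2 * t)
    bits = []
    for k in range(len(signal) // n):
        w = signal[k * n:(k + 1) * n]
        bits.append(0 if abs(w @ ref1) > abs(w @ ref2) else 1)
    return bits
```

Here decode(encode(bits)) recovers the original bit sequence; a real receiver would additionally synchronize on the wave head described in claim 3.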
5) when the laser radar or the camera is limited by the scene, the type and distance of surrounding vehicles can be predicted from the frequency and intensity of the sound wave signal. When the light sensing signal is clear, sound wave detection and light sensing detection are coordinated according to a defined mechanism flow, with sound wave detection taking over part of the light sensing detection tasks, thereby reducing the workload of light sensing detection or making up for its shortcomings. The judgment mechanism in the unmanned control process is divided into several sub-flows, namely: a machine learning flow, a front light sensing flow, a front left light sensing flow, a front right light sensing flow, a rear left light sensing flow, a rear right light sensing flow, a front sound wave flow, a front left sound wave flow, a front right sound wave flow, a rear left sound wave flow, a rear right sound wave flow, a left light sensing flow, a right light sensing flow, a left sound wave flow and a right sound wave flow. The judgment mechanism participates in the control flow; the control flow comprises the control measures of automatic driving or advanced assisted driving, including the acceleration, deceleration, braking, steering, holding, advancing and reversing processes. The judgment mechanism and the control flow work in coordination, and the sub-flows of the judgment mechanism combine with the control flow to control the driving of the unmanned vehicle (the priority rule among the sub-flows is sketched below);
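The priority relations among these sub-flows can be sketched as follows. The numeric priority values, class names and the start-up rule are illustrative assumptions; the claim fixes only that the front light sensing flow outranks the other sub-flows and stays running.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Priority(IntEnum):
    """Illustrative ordering only: the claim fixes the front light
    sensing flow as highest but leaves the other values open."""
    FRONT_LIGHT = 3
    SONIC = 2
    MACHINE_LEARNING = 1

@dataclass
class SubFlow:
    name: str
    priority: Priority
    running: bool = False

@dataclass
class JudgmentMechanism:
    flows: dict = field(default_factory=dict)

    def start(self, name, priority):
        # A lower-priority sub-flow may start only while the front
        # light sensing flow keeps running (its guaranteed priority).
        front = self.flows.get("front_light")
        if priority < Priority.FRONT_LIGHT and not (front and front.running):
            raise RuntimeError("front light sensing flow must be running first")
        self.flows[name] = SubFlow(name, priority, running=True)

mech = JudgmentMechanism()
mech.start("front_light", Priority.FRONT_LIGHT)   # always-on flow
mech.start("front_sonic", Priority.SONIC)         # allowed afterwards
```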
Overtaking flow under the condition of clear light sensing signals:
The lanes are divided into an overtaking lane, a driving lane and a deceleration lane; assume the overtaking lane is to the left of the driving lane and the deceleration lane to its right, and that the control flow is in the advancing process. When preparing to change into the overtaking lane, the front light sensing flow is started to detect whether a vehicle is present within the set distance range of the left front overtaking lane. If a vehicle is judged to be present, the front light sensing flow is kept at the highest priority while the front sound wave flow and the machine learning flow are started, the machine learning the correspondence between the sound wave frequency, amplitude, distance, vehicle type and speed of the front vehicles, namely the left front and right front vehicles. The front sound wave flow and the machine learning flow run at a priority lower than that of the front light sensing flow; that is, they are started only on the basis of ensuring the priority of the front light sensing flow. Throughout the whole judgment mechanism the front light sensing flow is always kept running, and the other sub-flows decide whether to run according to the situation;
if no vehicle exists to the left front, the left sound wave flow is started to check whether a vehicle is present on the left side; if so, the left light sensing flow is started, and once the vehicle is verified the machine learning flow is started, the machine learning the correspondence between the sound wave frequency, amplitude, distance, vehicle type and speed of the left vehicle. The priorities of the left sound wave flow, the left light sensing flow and the machine learning flow are all lower than that of the front light sensing flow, and the priority of the left sound wave flow is higher than that of the left light sensing flow;
if the left sound wave flow detects no vehicle in the left overtaking lane, or the left sound wave flow detects a vehicle but the left light sensing flow verifies that the left overtaking lane is clear, the rear sound wave flow is started to judge whether a vehicle is present in the left rear overtaking lane; if so, the left rear light sensing flow is started to verify it, and if the light sensing flow also detects a vehicle, the machine learning flow is started. When the control flow is in the advancing process, the priorities of the rear light sensing flow, the rear sound wave flow and the machine learning flow are lower than that of the front light sensing flow, and the priority of the rear sound wave flow is higher than that of the rear light sensing flow;
if the rear sound wave flow detects no vehicle in the left rear overtaking lane and the left rear light sensing flow confirms this, or the rear sound wave flow detects a vehicle but the left rear light sensing flow detects none, then once the left front, left and left rear overtaking lanes are all determined to be clear, the vehicle running ultrasonic signal among the traffic ultrasonic signals of step 4) is transmitted and the lane change into the overtaking lane is started, the specific steering, acceleration or deceleration process being determined by the control flow;
during the whole overtaking process of driving into the overtaking lane, the front light sensing flow, the machine learning flow and the front sound wave flow are always kept running, with the front light sensing flow always at the highest priority. While driving in the overtaking lane, the right sound wave flow and the right light sensing flow are started; when the overtaken vehicle has been passed, the rear sound wave flow is started, and once the set distance beyond the overtaken vehicle is exceeded, the vehicle running ultrasonic signal among the traffic ultrasonic signals of step 4) is transmitted;
when preparing to return to the original lane, the right front light sensing flow detects whether a vehicle is present within the set distance of the right front lane; if not, the return process is started, the steering and the like being determined by the control flow. If a vehicle is present, the front sound wave flow and the machine learning flow are started to judge the speed of the right front vehicle, and the vehicle either continues overtaking or decelerates to wait for the right front vehicle to move away, the specific measure being determined by the control flow;
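The left-front / left / left-rear clearance sequence above follows one pattern: a leading detector reports, a second modality verifies, and the check repeats until the zone is clear. A minimal sketch, assuming callable detectors and an illustrative re-check interval (the claim leaves both open):

```python
import time

ZONES = ("left_front", "left", "left_rear")  # checked in this order

def zone_clear(primary, secondary, zone, interval=0.1):
    """A zone counts as clear only when the leading detector reports
    no vehicle and the verifying detector confirms it; otherwise wait
    an interval t and re-check, as the claimed flows keep doing."""
    while True:
        if not primary(zone) and not secondary(zone):
            return True
        time.sleep(interval)

def overtaking_lane_clear(primary, secondary):
    """primary/secondary are callables zone -> bool (vehicle present).
    The lane change may start only when every zone reads clear."""
    return all(zone_clear(primary, secondary, z) for z in ZONES)
```

In the claimed flows the sound wave flow generally leads and the light sensing flow confirms; the front check under clear light is the exception, where light sensing leads.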
Overtaken flow (the vehicle being overtaken) under the condition of clear light sensing signals:
similar to the overtaking flow, the difference being that the own vehicle is the one being overtaken. By starting the rear sound wave flow, the left or right sound wave flow, the rear light sensing flow, the left or right light sensing flow and the machine learning flow, the type, sound wave frequency, sound wave amplitude, distance and speed of the vehicle preparing to overtake are detected, and the relation between the sound wave signal and the light sensing signal is established. The priority of the front light sensing flow is higher than that of all the remaining sub-flows in the judgment mechanism, and the front light sensing flow is always running. When a sound wave signal appears to the left rear or right rear, after the rear light sensing flow confirms the vehicle in the overtaking lane, the light sensing flow measures the relative distance, the sound wave flow detects and tracks the sound wave frequency and amplitude of the sound source vehicle, and the machine learning flow establishes the corresponding relation;
under the condition that the light sensing signals are insensitive: when the functions of the laser radar or the camera in the light sensing flow are limited by weather and the like, the distance and type of a sound source vehicle are predicted from the machine learning result data accumulated during overtaking under clear light sensing conditions, and if necessary are confirmed by the meter wave radar detection of the light sensing flow; in this section the light sensing flow mainly refers to meter wave radar detection:
① Straight-going flow, front detection while going straight: the front sound wave flow and the rear sound wave flow are in continuous operation, while the working states of the other sub-flows in the judgment mechanism are determined by the situation. When the front sound wave flow detects a vehicle within the front set distance range, the front light sensing flow is started for confirmation; if a vehicle is confirmed, the front emergency state level Le_i (i = 1, 2, ..., N) is set to Le_1 and the operation flow is started, where Le_1 > Le_2 > ... > Le_N, i.e., Le_i > Le_{i+1}, and the control flow takes measures according to Le_i, the specific deceleration or overtaking measures being determined by the control flow. If the light sensing flow detects no vehicle, Le_i is set to Le_2 and the control flow takes measures according to Le_2, the specifics being determined by the operation flow;
② Straight-going flow, rear detection while going straight: when the rear sound wave flow detects a vehicle within the set distance range directly behind, the rear light sensing flow is started for confirmation; if a vehicle is confirmed, the rear emergency state level is set to BLe_i (i = 1, 2, ..., N) and passed to the control flow, where BLe_1 > BLe_2 > ... > BLe_N, i.e., BLe_i > BLe_{i+1}, and the control flow takes measures according to BLe_i, the specifics being determined by the operation flow; when a rear vehicle closes to within the set distance range, the vehicle running ultrasonic signal among the traffic ultrasonic signals of step 4) is transmitted. If the rear light sensing flow detects no vehicle directly behind, the rear emergency level is set to BLe_{i+1} and the control flow takes measures according to BLe_{i+1}, the specifics being determined by the operation flow (the level grading is sketched after flow ④ below);
③ Overtaking flow, detection when entering the overtaking lane: assume the overtaking lane, driving lane and deceleration lane are arranged from left to right. When preparing to change lanes and overtake, the front sound wave flow detects whether a vehicle is present within the set distance range of the left front overtaking lane; if so, the straight-going flow continues. If not, the front light sensing flow is started for confirmation; if it finds a vehicle, the vehicle keeps running in the straight-going flow and keeps checking the left front overtaking lane until both the front sound wave flow and the front light sensing flow detect no vehicle. The left sound wave flow is then started to detect whether a vehicle is present in the left overtaking lane; if so, the straight-going flow continues and the left overtaking lane is re-checked at intervals of time t, until the left sound wave flow detects no vehicle and the left light sensing flow confirms none. The rear sound wave flow is then started to detect whether a vehicle is present within the set distance range of the left rear overtaking lane; by the same principle, if a vehicle is present the system waits and runs the straight-going flow. If the rear sound wave flow detects a vehicle, or the rear sound wave flow detects none but the rear light sensing flow detects one, the system keeps waiting and running the straight-going flow; only when both the sound wave flow and the light sensing flow detect no vehicle within the set distance range of the overtaking lane is the lane-changing overtaking process started, the specific process being determined by the operation flow, at which point the vehicle running ultrasonic signal among the traffic ultrasonic signals of step 4) is transmitted;
④ Overtaking flow, detection while in the overtaking lane and preparing to return: the straight-going flow is executed while the right sound wave flow continuously detects whether a vehicle is present in the right-side driving lane. If a vehicle is present, the straight-going flow continues, with the overtaking speed determined by the control flow, and the right sound wave flow keeps checking the right driving lane until no vehicle is detected; the right light sensing flow is then started for confirmation, and the system keeps waiting while a vehicle is present. The rear sound wave flow is then started to detect whether a vehicle is present within the set distance range of the right rear driving lane; the system waits while a vehicle is present, until both the rear sound wave flow and the rear light sensing flow detect none. The front sound wave flow is then started to detect whether a vehicle is present within the set distance range of the right front driving lane, followed by the front light sensing flow for confirmation; the system keeps waiting while the light sensing flow detects a vehicle. Once neither the front sound wave flow nor the front light sensing flow detects a vehicle within the right front set distance range, the overtaking signal stops being transmitted, the vehicle running ultrasonic signal among the traffic ultrasonic signals of step 4) is transmitted, and the control flow executes the return process, the specific process being determined by the control flow.
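A compact sketch of the graded emergency levels used by flows ① and ②. The three-level depth and the mapping of detection outcomes to levels are assumptions consistent with the passage (a sound detection confirmed by light sensing gives Le_1, an unconfirmed detection Le_2); the claim itself fixes only the ordering Le_i > Le_{i+1}.

```python
def emergency_level(sonic_hit, light_hit, levels=("Le1", "Le2", "Le3")):
    """Le1 > Le2 > ... > LeN: a sound-wave detection confirmed by the
    light sensing flow maps to the most urgent level Le1; a sound-wave
    detection that light sensing does not confirm maps to Le2; no
    detection leaves the lowest level. The control flow chooses its
    deceleration or overtaking measure from the returned level."""
    if sonic_hit and light_hit:
        return levels[0]
    if sonic_hit:
        return levels[1]
    return levels[-1]

# The rear flow is identical with levels ("BLe1", "BLe2", "BLe3").
```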
2. The unmanned sound wave and light sensation coordinated detection method and system according to claim 1, wherein a reflector glass panel, a light sensing device, a sound pickup and a fixing device together form a sound wave and light sensation integrated reflector. A small hole is drilled in one side of the reflector glass panel, through which the light sensing device and the sound pickup are respectively exposed. The circuit boards of the light sensing device and the sound pickup are located at the bottom of the reflector glass panel, namely in the space between the glass panel and the fixing device, which together form a sealed space. The power lines and signal lines of the light sensing device and the sound pickup pass out through one side of the fixing device, and the heat generated by the circuit boards of the light sensing device and the sound pickup is dissipated to the outside air by the fixing device.
3. The unmanned sound wave and light sensation coordinated detection method and system according to claim 1, wherein the intersection traffic guidance ultrasonic signal is as follows: 1) intersection signal lamp ultrasonic signal coding and decoding mechanism 1: the signal lamp ultrasonic signal is divided into a wave head, a content and a wave tail, each composed of a number of frequencies f1 and f2, where frequency f1 represents binary 0 and frequency f2 represents binary 1. The duration of a single 0 and a single 1 is the same in the wave head, the content and the wave tail; 4 bits represent the wave head, 9 bits the content and 3 bits the wave tail. The decoding mechanism is that decoding takes place only after a complete signal lamp ultrasonic signal has been received. The four areas of the crossroad are a, b, c and d, and the wave head and wave tail of each area's ultrasonic signal are uniquely identified, i.e., the wave heads of the four areas differ from one another and so do the wave tails. The signal lamp ultrasonic signal content carries the red, green and yellow lamp guidance for going straight, turning left and turning right, with the three states coded simultaneously, i.e., the 9-bit binary simultaneously represents left turn, straight and right turn, and the signal contents of areas a, b, c and d follow a uniform coding mechanism; 2) intersection signal lamp ultrasonic signal coding and decoding mechanism 2: similar to mechanism 1), except that frequencies f3 and f4 are used as the signal component frequencies of the wave head and wave tail, i.e., both the wave head and the wave tail are composed of f3 and f4 while the content is still composed of f1 and f2; frequency f3 represents binary 0 and frequency f4 represents binary 1, and the 4-bit wave head and 3-bit wave tail composed of f3 and f4, together with the 8-bit content composed of f1 and f2, form a 16-bit signal lamp ultrasonic signal; 3) intersection signal lamp ultrasonic signal coding and decoding mechanism 3: similar to mechanisms 1) and 2), likewise using f3 and f4 as the wave head and wave tail component frequencies and f1 and f2 as the content component frequencies; the difference is that the wave head and wave tail are each represented by an independent 8 bits of 1s and 0s, the content is also represented by an independent 8 bits, and the durations of a single 1 and a single 0 in the wave head and wave tail are the same (The = the1 = the0); 4) intersection signal lamp ultrasonic signal coding and decoding mechanism 4: similar to mechanism 3), except that the signal component frequencies used in areas a, b, c and d all differ; the areas use the same binary coding mechanism, but the decoding mechanisms of areas a, b, c and d use the unique signal component frequency fi as the identification; 5) intersection signal lamp ultrasonic signal coding and decoding mechanism 5: similar to mechanism 1), but further simplified on its basis. The coding mechanism no longer distinguishes areas a, b, c and d: using the directional propagation characteristic of ultrasonic waves, the ultrasonic signal is transmitted only toward the vehicles driving in one of the areas, so that even if a vehicle in area a receives a signal reflected from area b, the signals can still be distinguished by the attenuation characteristic of signal intensity, on the condition that the original transmission intensities in areas a, b, c and d are the same. The coding mechanism is further simplified in that the left-turn, straight and right-turn directions are identified by independent codes, i.e., sequential coding is no longer used and an azimuth code is attached instead; the signal transmission order still combines direction with signal lamp state but no longer distinguishes the order of straight, right turn and left turn. The decoding mechanism is further simplified in that directions are no longer distinguished by order, i.e., the mode of decoding only after receiving a complete signal is abandoned in favour of decoding randomly in real time. The wave head and wave tail are further simplified with fewer binary digits, or omitted altogether, i.e., one signal carries only the azimuth and the signal lamp state;
vehicle running ultrasonic signal: the vehicle running ultrasonic signal is transmitted and received by the vehicles themselves; a vehicle equipped with an ultrasonic transmitting and receiving device transmits and receives vehicle running ultrasonic signals according to unified coding and decoding rules. Similar to the intersection traffic guidance ultrasonic coding method, it also combines ultrasonic waves of different signal component frequencies into 1 and 0; the differences are that the signal component frequencies used lie outside the frequency range of the intersection traffic guidance ultrasonic signal components, the durations of 1 and 0 differ from those of the intersection traffic guidance ultrasonic signal, and the binary coding of 1 and 0 also differs from that of the intersection signal lamp ultrasonic signal; in addition a vehicle identification number is added, likewise coded from frequency components as 1s and 0s. The vehicle running ultrasonic signal consists of a vehicle identification number and an instruction classification number, and the instruction classification number is distinguished from the vehicle identification number in one of three ways: the signal component frequencies f used by the two differ; the durations of the binary single 1 and single 0 formed from the two sets of frequencies differ; or the binary codes of 1 and 0 differ. The decoding mechanism is that decoding takes place only when a complete running ultrasonic signal has been received. The vehicle identification number is sent separately and continuously to reflect the vehicle position signal, or a separate vehicle position signal is added.
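Coding mechanism 1 of the intersection signal (4-bit wave head, 9-bit content, 3-bit wave tail) can be sketched as follows. The concrete head and tail patterns per area and the 3-bit one-hot lamp coding inside the 9-bit content are assumptions; the claim fixes only the bit widths, the per-area uniqueness and the receive-complete-then-decode rule.

```python
HEADS = {"a": "0001", "b": "0010", "c": "0100", "d": "1000"}  # assumed unique heads
TAILS = {"a": "001", "b": "010", "c": "100", "d": "011"}      # assumed unique tails
LIGHT = {"red": "100", "green": "010", "yellow": "001"}        # assumed 3-bit one-hot

def encode_frame(region, left, straight, right):
    """4-bit head + 9-bit content (left, straight and right lamp
    states coded together) + 3-bit tail = one 16-bit frame."""
    content = LIGHT[left] + LIGHT[straight] + LIGHT[right]
    return HEADS[region] + content + TAILS[region]

def decode_frame(frame, region):
    """Decode only once a complete frame for this region is seen;
    a foreign-region or partial frame yields None (keep waiting)."""
    head, tail = HEADS[region], TAILS[region]
    if len(frame) != 16 or not frame.startswith(head) or not frame.endswith(tail):
        return None
    body = frame[4:13]
    rev = {v: k for k, v in LIGHT.items()}
    # Unknown 3-bit patterns decode to None rather than raising.
    return tuple(rev.get(body[i:i + 3]) for i in (0, 3, 6))
```

For example, encode_frame("a", "red", "green", "red") yields a frame that decode_frame accepts only with region "a", matching the per-area head and tail uniqueness.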
4. The unmanned sound wave and light sensation coordinated detection method and system according to claim 1, wherein under conditions with good light sensing signals, such as a sunny day, the light sensing signals and sound wave signals of surrounding vehicles are put into correspondence by one of two methods. Method one: the vehicle types are classified as Vi according to the light sensing signals, the correspondence between vehicle type and sound wave range is established from the sound wave signals of the corresponding vehicle type, and a machine learning algorithm is applied to learn the sound wave frequency and amplitude characteristics of different vehicle types at different distances. Method two: based on the unique characteristics of the sound wave signals, the light sensing signals of the sound source vehicles are collected and the correspondence is established; a machine learning algorithm learns the amplitude probability of a sound wave of a given frequency at different distances, and the probability that the sound wave belongs to a certain vehicle type.
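Both methods reduce to learning amplitude statistics conditioned on vehicle type and distance. A minimal sketch under assumed choices (per-distance binning and a Gaussian amplitude model), since the claim names only "a machine learning algorithm":

```python
import numpy as np
from collections import defaultdict

class AcousticProfile:
    """Method one, sketched: vehicle types Vi come from the light
    sensing classification; for each type, amplitude samples are
    accumulated per distance bin. The 10 m binning and the Gaussian
    model are assumptions, not prescribed by the claim; frequency
    characteristics could be accumulated the same way."""

    def __init__(self, bin_m=10.0):
        self.bin_m = bin_m
        self.samples = defaultdict(list)   # (type, bin) -> amplitudes

    def observe(self, vtype, distance_m, amplitude):
        self.samples[(vtype, int(distance_m // self.bin_m))].append(amplitude)

    def stats(self, vtype, distance_m):
        amps = self.samples.get((vtype, int(distance_m // self.bin_m)), [])
        if not amps:
            return None
        return float(np.mean(amps)), float(np.std(amps))

    def likelihood(self, vtype, distance_m, amplitude):
        """Method two's direction: how probable it is that an observed
        amplitude at this distance belongs to vehicle type Vi."""
        st = self.stats(vtype, distance_m)
        if st is None:
            return 0.0
        mu, sd = st
        sd = max(sd, 1e-6)  # guard against a degenerate single-sample bin
        return float(np.exp(-0.5 * ((amplitude - mu) / sd) ** 2)
                     / (sd * np.sqrt(2 * np.pi)))
```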
5. The unmanned sound wave and light sensation coordinated detection method and system according to claim 1, wherein:
1) Front detection flow of the straight-going flow (the emergency states in this flow are denoted in the original by equation images): step 1 starts the flow; step 2 starts the front sound wave flow; step 3 judges from the front sound wave flow of step 2 whether a vehicle is present within the front set distance. If yes, an emergency state (i = 1, 2, ..., N) is sent to the operation flow of step 5, which sets the level of the emergency state according to the value of i; if the judgment of step 3 is N, the emergency state is set to the corresponding lower level and handled by the control flow of step 7. When the operation flow of step 5 finishes, the front light sensing flow of step 8 is executed; likewise when the control flow of step 7 finishes, the front light sensing flow of step 8 is executed. Step 9 judges from the front light sensing flow of step 8 whether a vehicle is present within the front set distance; if yes, the corresponding state (i = 1, 2, ..., N) is sent to the control flow of step 11, which sets the level of the emergency state; if the judgment of step 9 is N, the emergency state is set to the lower level and sent to the control flow of step 13. The control flow of step 13 determines the specific operation process according to the state it receives, and the control flow of step 11 likewise determines the specific control process according to its received states. After the control flows of steps 13 and 11 have executed, the next step either loops back to the front sound wave flow of step 2 or executes step 14 to finish; whether step 14 finishes or the flow returns to step 2 is determined by the control flow of step 13 or 11;
2) Rear detection flow of the straight-going flow (the emergency states are likewise denoted in the original by equation images): step 1 starts the flow; step 2 starts the rear sound wave flow; step 3 judges from step 2 whether a vehicle is present within the rear set distance. If yes, an emergency state (i = 1, 2, ..., N) is sent to the operation flow of step 5, which sets the level of the emergency state according to the value of i; if the judgment of step 3 is N, the emergency state is set to the corresponding lower level and handled by the control flow of step 7. When the flow of step 5 finishes, the rear light sensing flow of step 8 is executed; likewise when the control flow of step 7 finishes, the rear light sensing flow of step 8 is executed. Step 9 judges from the rear light sensing flow of step 8 whether a vehicle is present within the rear set distance; if yes, the corresponding state (i = 1, 2, ..., N) is sent to the control flow of step 11, which sets the level of the emergency state; if the judgment of step 9 is N, the emergency state is set to the lower level and sent to the control flow of step 13. The control flow of step 13 determines the specific operation process according to the state it receives, and the control flow of step 11 likewise. After the control flows of steps 13 and 11 have executed, the next step either loops back to the rear sound wave flow of step 2 or executes step 14 to finish; whether step 14 finishes or the flow returns to step 2 is determined by the control flow of step 13 or 11;
3) The second scheme of front detection in the straight-going flow is similar to 1), the difference being that after the control flow of step 7 executes, the flow returns to the front sound wave flow of step 2; that is, when the front sound wave flow of step 2 detects no vehicle, the front light sensing verification of step 8 is not executed for the time being, and the front sound wave flow of step 2 keeps cycling;
4) The second scheme of rear detection in the straight-going flow is similar to 2), the difference being that after the control flow of step 7 executes, the flow returns to the rear sound wave flow of step 2; that is, when the rear sound wave flow of step 2 detects no vehicle, the rear light sensing verification of step 8 is not executed for the time being, and the rear sound wave flow of step 2 keeps cycling;
5) Overtaking flow (each state sent at a Y branch below is denoted in the original by an equation image): step 1 starts the flow; step 2, the left front sound wave flow, detects the vehicle situation within the set distance range of the left front overtaking lane. If the judgment of step 3 is Y, the corresponding state is sent to the operation flow of step 5 and the flow returns to the left front sound wave flow of step 2; once the judgment of step 3 is N, the left front light sensing flow of step 6 is executed to verify whether a vehicle is present within the set distance range of the left front overtaking lane. If the judgment of step 7 is Y, the corresponding state is sent to the operation flow of step 9 while the flow returns to the left front sound wave flow of step 2 to continue detecting; when the judgment of step 7 is N, the left sound wave flow of step 10 is executed to detect whether the left overtaking lane has a vehicle. If the judgment of step 11 is Y, the corresponding state is sent to the control flow of step 13 and the flow returns to step 2, continuing the cycle; once the judgment of step 11 is N, the left light sensing flow of step 14 is executed. If the judgment of step 15 is Y, the corresponding state is sent to the operation flow of step 17 and the flow returns to step 2, continuing the cycle; once the judgment of step 15 is N, the left rear sound wave flow of step 18 is executed. If the judgment of step 19 is Y, the corresponding state is sent to the control flow of step 21 and the flow returns to step 2, continuing the cycle; once the judgment of step 19 is N, the left rear light sensing flow of step 22 is executed. If the judgment of step 23 is Y, the corresponding state is sent to the operation flow of step 25 and the flow returns to step 2, continuing the cycle; once the judgment of step 23 is N, the operation flow of step 26 is executed, which completes the left turn from the straight lane into the overtaking lane, after which the straight-going flow of step 27 continues in the overtaking lane. The right front sound wave flow of step 28 then detects the vehicle situation within the set distance range of the right front straight lane. If the judgment of step 29 is Y, the corresponding state is sent to the control flow of step 31 and the flow returns to the right front sound wave flow of step 28; once the judgment of step 29 is N, the right front light sensing verification of step 32 is executed. If the judgment of step 33 is Y, the corresponding state is sent to the control flow of step 35 and the flow returns to step 28; once the judgment of step 33 is N, the right sound wave flow of step 36 is executed to detect whether the right straight lane has a vehicle. If the judgment of step 37 is Y, the corresponding state is sent to the control flow of step 39 and the flow returns to step 28, continuing the cycle; once the judgment of step 37 is N, the right light sensing flow of step 40 is executed. If the judgment of step 41 is Y, the corresponding state is sent to the operation flow of step 43 and the flow returns to step 28, continuing the cycle; once the judgment of step 41 is N, the right rear sound wave flow of step 44 is executed to detect the vehicle situation within the right rear set distance range. If the judgment of step 45 is Y, the corresponding state is sent to the control flow of step 47 and the flow returns to step 28, continuing the cycle; once the judgment of step 45 is N, the right rear light sensing verification of step 48 is executed. If the judgment of step 49 is Y, the corresponding state is sent to the control flow of step 51 and the flow returns to step 28, continuing the cycle; once the judgment of step 49 is N, the control flow of step 52 is executed, which completes the return from the overtaking lane to the driving lane, after which the straight-going flow of step 53 continues in the driving lane and the overtaking flow ends.
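The flow above always falls back to its first sound wave step whenever any detector reports a vehicle, advancing one stage only on a clear reading. That restart-from-the-top structure can be sketched generically; the detector names, the callable interface and the round limit are assumptions:

```python
def run_checks(stages, send_state, max_rounds=10_000):
    """stages: ordered list of (name, detector) pairs, e.g.
    [("left_front_sonic", d1), ("left_front_light", d2), ...], where
    a detector returns True when a vehicle is present. On any hit the
    corresponding state is reported to the operation/control flow and
    the whole sequence restarts from the first stage, mirroring the
    'return to step 2' arrows of the claimed flow; the sequence
    succeeds only when every stage in turn reads clear."""
    for _ in range(max_rounds):
        for name, detector in stages:
            if detector():
                send_state(name)   # handled by the operation/control flow
                break              # restart from the first stage
        else:
            return True            # all stages clear: lane change may start
    return False
```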
6. The unmanned sound wave and light sensation coordinated detection method and system according to claim 1, wherein:
1) the pickup layout of the truck is that the pickup b is positioned at the front lower part of the left side of the head of the truck; the sound pick-up c is positioned at the front upper part of the left side of the head of the truck; the sound pick-up f is positioned at the rear lower part of the left side of the tail part of the truck; the pickup a is positioned at the front lower part of the right side of the head of the truck; the sound pick-up d is positioned at the front upper part of the right side of the head of the truck; the sound pickup e is positioned at the rear lower part of the right side of the tail part of the truck;
2) the pickup layout of the passenger car is that the pickup a is positioned at the rear lower part of the left side of the tail of the passenger car; the pickup d is positioned at the rear upper part of the left side of the tail of the passenger car; the pickup e is positioned at the front upper part of the left side of the head of the passenger car; the pickup h is positioned on the sound wave and light sensation integrated reflector on the left side of the passenger car; the pickup b is positioned at the rear lower part of the right side of the tail of the passenger car; the pickup c is positioned at the rear upper part of the right side of the tail of the passenger car; the pickup f is positioned at the front lower part of the right side of the head of the passenger car; and the pickup g is positioned on the sound wave and light sensation integrated reflector on the right side of the passenger car.
CN202110222979.7A 2021-03-01 2021-03-01 Unmanned sound wave and light sense coordinated detection method and system Active CN113156939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110222979.7A CN113156939B (en) 2021-03-01 2021-03-01 Unmanned sound wave and light sense coordinated detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110222979.7A CN113156939B (en) 2021-03-01 2021-03-01 Unmanned sound wave and light sense coordinated detection method and system

Publications (2)

Publication Number Publication Date
CN113156939A true CN113156939A (en) 2021-07-23
CN113156939B CN113156939B (en) 2023-06-02

Family

ID=76883737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110222979.7A Active CN113156939B (en) 2021-03-01 2021-03-01 Unmanned sound wave and light sense coordinated detection method and system

Country Status (1)

Country Link
CN (1) CN113156939B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1777143A1 (en) * 2005-10-20 2007-04-25 Volkswagen Aktiengesellschaft Lane-change assistant
US20170045941A1 (en) * 2011-08-12 2017-02-16 Sony Interactive Entertainment Inc. Wireless Head Mounted Display with Differential Rendering and Sound Localization
CN108961788A (en) * 2018-07-20 2018-12-07 张鹏 Traffic lights wisdom transform method
CN109615887A (en) * 2018-12-24 2019-04-12 张鹏 Wisdom traffic network system signal guidance method
EP3477616A1 (en) * 2017-10-27 2019-05-01 Sigra Technologies GmbH Method for controlling a vehicle using a machine learning system
EP3511902A1 (en) * 2018-01-15 2019-07-17 Reliance Core Consulting LLC Systems for motion analysis in a field of interest
WO2020001891A1 (en) * 2018-06-27 2020-01-02 Zf Friedrichshafen Ag Sound channel and housing for acoustic sensors for a vehicle for detecting sound waves of an acoustic signal outside the vehicle
US20200334979A1 (en) * 2017-09-15 2020-10-22 Velsis Sistemas E Tecnologia Viaria S/A Predictive, integrated and intelligent system for control of times in traffic lights
CN112185335A (en) * 2020-09-27 2021-01-05 上海电气集团股份有限公司 Noise reduction method and device, electronic equipment and storage medium
US10907940B1 (en) * 2017-12-12 2021-02-02 Xidrone Systems, Inc. Deterrent for unmanned aerial systems using data mining and/or machine learning for improved target detection and classification



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant