CN114906190A - Vehicle-mounted sensing system and data processing method - Google Patents


Info

Publication number
CN114906190A
CN114906190A (application CN202110184659.7A)
Authority
CN
China
Prior art keywords
vehicle
sound source
acoustic
wind speed
sensing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110184659.7A
Other languages
Chinese (zh)
Inventor
齐海政
彭宇君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Volkswagen Automotive Co Ltd
Original Assignee
FAW Volkswagen Automotive Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Volkswagen Automotive Co Ltd filed Critical FAW Volkswagen Automotive Co Ltd
Priority to CN202110184659.7A priority Critical patent/CN114906190A/en
Publication of CN114906190A publication Critical patent/CN114906190A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62BHAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
    • B62B3/00Hand carts having more than one axis carrying transport wheels; Steering devices therefor; Equipment therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a vehicle-mounted sensing system comprising a first sensing device, a second sensing device and a central processing device. The first sensing device acquires environmental data around the vehicle and determines obstacle information (the size, coordinates and motion state of each obstacle) based on that data. The second sensing device acquires audio data around the vehicle and determines sound source information (the acoustic coordinates and acoustic size of each sound source) based on that data. The central processing device is electrically connected to the first sensing device and receives the obstacle information it determines, is electrically connected to the second sensing device and receives the sound source information it determines, and processes the received obstacle and sound source information to identify obstacles around the vehicle and/or determine the ambient wind speed at the vehicle.

Description

Vehicle-mounted sensing system and data processing method
Technical Field
The invention relates to the technical field of vehicle environment sensing equipment, in particular to a vehicle-mounted sensing system and a data processing method.
Background
In recent years, with the continuous development of automotive electronics such as sensors, intelligent driving has rapidly become an important direction for the automotive industry, as a way to improve driving safety and reduce traffic accidents and congestion. An intelligent driving system can generally be divided into perception, decision, prediction, planning and control modules. The perception module is responsible for detecting the environment relevant to the vehicle while it is driving, which includes the traffic participants around the vehicle, lane lines, road signs, traffic lights and so on.
Whether the perception module can detect this environment effectively, accurately and in time directly determines how safely the intelligent driving vehicle operates. An effective, accurate and timely perception module is therefore essential.
In the current intelligent driving field, sensing systems generally perceive the external environment with sensors such as cameras, lidar, millimeter wave radar and ultrasonic sensors. This traditional sensing approach has shortcomings in complex and changing traffic. For example, special vehicles such as fire trucks, ambulances and police cars sound a siren while driving so that they can pass quickly; the intelligent driving vehicle should plan and execute an avoidance maneuver in time based on the siren, but traditional sensing can neither recognize the siren nor spatially locate its source. A surrounding vehicle may emit sharp tire noise or loud engine noise because of emergency braking, rapid acceleration or a large lateral maneuver; the automated vehicle should identify the position and driving intention of that vehicle from the noise and avoid it appropriately, but traditional sensing cannot recognize such noise or infer position and intention from acoustic information. When targets around the vehicle are fully or partially occluded, the sensing frame obtained by traditional sensing disappears or its size is distorted; the engine and tire noise of vehicles on the road could be used to improve the recognition accuracy of these targets, but traditional sensing cannot recognize that noise either.
In short, sound signals on the road are of significant value for the safe driving of intelligent driving vehicles. However, conventional sensing systems in this field use only cameras, millimeter wave radar, lidar and similar sensors, with no sensor for detecting sound, so the intelligent driving vehicle cannot recognize the sound signals around it.
Disclosure of Invention
To solve at least one aspect of the above problems, the present invention provides a vehicle-mounted sensing system comprising: a first sensing device for acquiring environmental data around the vehicle and determining obstacle information (the size, coordinates and motion state of each obstacle) based on that data; a second sensing device for acquiring audio data around the vehicle and determining sound source information (the acoustic coordinates and acoustic size of each sound source) based on that data; and a central processing device electrically connected to the first sensing device to receive the obstacle information it determines and to the second sensing device to receive the sound source information it determines, which processes the received obstacle and sound source information to identify obstacles around the vehicle and/or determine the ambient wind speed at the vehicle.
Preferably, the second sensing device comprises: at least one microphone array fixedly mounted on the vehicle for acquiring audio data around it; and an acoustic electronic control unit arranged inside the vehicle and electrically connected to the at least one microphone array and to the central processing device, which receives and processes the audio data acquired by the at least one microphone array to determine the sound source information and sends that information to the central processing device.
Preferably, the at least one microphone array is arranged at any one position, or a combination of positions, among: the body sheet-metal part between an interior B-pillar and the roof; between a B-pillar and the floor; between an A-pillar and the roof; between an A-pillar and the floor; between a C-pillar and the roof; between a C-pillar and the floor; and below a front seat.
Preferably, the acoustic electronic control unit is fixedly arranged on an armrest sheet metal part in the vehicle.
Preferably, the first sensing device comprises: a millimeter wave radar module fixedly mounted on the vehicle for acquiring environmental data of preset areas outside the vehicle; a camera module fixedly mounted on the vehicle for acquiring image data around the vehicle; and a first processing unit electrically connected to the millimeter wave radar module and the camera module, which receives and processes the environmental data and the image data to determine obstacle information around the vehicle and sends that information to the central processing device.
Preferably, the millimeter wave radar module is arranged at any one or more of: the midpoint of the vehicle's front bumper, the two ends of the front bumper, the midpoint of the rear bumper, the vehicle's left side wall and the vehicle's right side wall.
Preferably, the millimeter wave radar module includes a millimeter wave radar control unit and at least one millimeter wave radar, the millimeter wave radar control unit is electrically connected with the at least one millimeter wave radar, and the millimeter wave radar control unit receives and processes signals of the millimeter wave radar.
Preferably, the camera module is arranged at any one or more of: the midpoint of the radiator grille, the right rear-view mirror housing, the left rear-view mirror housing and the midpoint of the rear bumper.
Preferably, the camera module comprises a camera control unit and at least one camera, wherein the camera control unit is electrically connected with the at least one camera, and the camera control unit receives and processes signals of the camera.
Preferably, the first sensing device further comprises a laser radar module, the laser radar module is electrically connected with the first processing unit, and the laser radar module is used for acquiring environmental data around the vehicle and sending the environmental data to the first processing unit.
In another aspect, the present invention provides a data processing method using the vehicle-mounted sensing system, comprising: acquiring environmental data around the vehicle with the first sensing device and determining obstacle information around the vehicle (the size, coordinates and motion state of each obstacle) based on that data; acquiring audio data around the vehicle with the second sensing device and determining sound source information around the vehicle (the acoustic coordinates and acoustic size of each sound source) based on that data; and electrically connecting the central processing device to the first sensing device to receive the obstacle information it determines, electrically connecting the central processing device to the second sensing device to receive the sound source information it determines, and processing the obstacle information and the sound source information with the central processing device to identify obstacles around the vehicle and/or determine the ambient wind speed at the vehicle.
Preferably, the step of acquiring audio data around the vehicle with the second sensing device and determining sound source information based on that data comprises: acquiring, with at least one microphone array, an environmental audio data set containing audio files of various vehicles and of sounds made under various driving conditions, road conditions and weather environments; determining, with the acoustic electronic control unit, a sound source feature identification algorithm from the environmental audio data set; collecting audio data around the vehicle with the at least one microphone array, receiving that audio data with the acoustic electronic control unit, and identifying sound source features based on the sound source feature identification algorithm, the features comprising the components, category and working condition of the emitted sound; and, with the acoustic electronic control unit, determining the acoustic size of the sound source from its features, and determining the acoustic coordinates of the sound source from the audio data around the vehicle based on a sound source localization algorithm.
Preferably, the components of the sound source features include any one or a combination of tire noise, engine noise, exhaust noise, rear-view mirror wind noise and vehicle loudspeaker audio, where vehicle loudspeaker audio includes the siren of an ambulance, fire truck or police car.
Preferably, the categories of the sound source features include passenger cars, commercial passenger vehicles, trucks, ambulances, fire trucks, police cars and unidentifiable sound sources.
Preferably, the working conditions of the sound source features include emergency braking, rapid acceleration and steady driving.
Preferably, the sound source information further includes an acoustic frame of the sound source, wherein the acoustic frame is a spatial distribution of acoustic sizes of the sound source.
Preferably, the step of the acoustic electronic control unit determining the acoustic frame of the sound source comprises: determining the current acoustic coordinate of the sound source and its acoustic coordinate at the previous moment; taking the direction of the line connecting these two coordinates as the length direction of the acoustic frame, taking the current acoustic coordinate as the center of the acoustic frame, and taking the length and width of the sound source's acoustic size as the length and width of the acoustic frame.
Preferably, when the sound source is a whole vehicle, the step of the acoustic electronic control unit determining the acoustic frame comprises: when the change in relative distance between the center coordinates of the acoustic frames of a plurality of sound sources is smaller than a set first threshold, judging that those sound sources belong to the same vehicle; and taking the smallest rectangle enclosing the acoustic frames of the sound sources belonging to the same vehicle as the acoustic frame of the whole vehicle.
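The patent gives no implementation; as a rough sketch of the two steps above (Python, with illustrative names and an axis-aligned bounding rectangle as a simplification), the frame construction and the whole-vehicle merge might look like:

```python
import math

def acoustic_frame(curr_xy, prev_xy, acoustic_size):
    """Acoustic frame as described: centred on the current acoustic
    coordinate, with its length axis along the line from the previous
    acoustic coordinate to the current one."""
    (cx, cy), (px, py) = curr_xy, prev_xy
    heading = math.atan2(cy - py, cx - px)   # length direction of the frame
    length, width = acoustic_size
    return {"center": (cx, cy), "heading": heading,
            "length": length, "width": width}

def whole_vehicle_frame(frames):
    """Smallest axis-aligned rectangle (xmin, ymin, xmax, ymax) enclosing
    several acoustic frames already judged, by the distance-change test,
    to belong to one vehicle.  Half the frame diagonal is used as a
    conservative extent regardless of heading."""
    xs, ys = [], []
    for f in frames:
        cx, cy = f["center"]
        half = 0.5 * math.hypot(f["length"], f["width"])
        xs += [cx - half, cx + half]
        ys += [cy - half, cy + half]
    return min(xs), min(ys), max(xs), max(ys)
```

In practice the "smallest rectangle" of the patent is presumably oriented along the vehicle's motion rather than axis-aligned; the sketch keeps the simpler variant.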
Preferably, the sound source information further includes a wind speed sound source center coordinate, a wind speed sound source frequency, and a wind speed sound source intensity.
Preferably, the step of determining the sound source information of a wind speed sound source comprises: collecting a wind noise sound data set by using at least one microphone array, wherein the wind noise sound data set comprises audio files of air vortex sounds when wind blows over different static obstacles at different wind speeds; determining a wind speed characteristic identification algorithm according to the wind noise sound data set by utilizing an acoustic electronic control unit; acquiring audio data around the vehicle by using the at least one microphone array, receiving the audio data around the vehicle by using the acoustic electronic control unit, and judging whether the audio data is a wind speed sound source or not based on a wind speed characteristic identification algorithm; when the audio data around the vehicle collected by the at least one microphone array is judged to be a wind speed sound source, the acoustic electronic control unit determines the acoustic information of the wind speed sound source according to the audio data around the vehicle based on a sound source positioning algorithm.
Preferably, the step of determining obstacle information around the vehicle using the first sensing device includes: acquiring environment data by using a millimeter wave radar module and a camera module, and identifying static obstacles and dynamic obstacles based on the environment data; and fusing the recognition results of the millimeter wave radar module and the camera module by using a first processing unit to determine the size, the coordinates and the motion state of the obstacles around the vehicle.
Preferably, the step of acquiring obstacle information around the vehicle using the first sensing device includes: acquiring environmental data by using a millimeter wave radar module, a camera module and a laser radar module, and identifying static obstacles and dynamic obstacles based on the environmental data; and fusing the identification results of the millimeter wave radar module, the camera module and the laser radar module by using a first processing unit so as to determine the size, the coordinates and the motion state of the obstacles around the vehicle.
Preferably, when the central processing device determines that the distance between the acoustic coordinate of a sound source and the center coordinate of an obstacle is smaller than a set second threshold, the obstacle and the sound source are judged to be the same obstacle, and the spatial distribution of the obstacle's geometric size is determined as the smallest rectangle enclosing the acoustic frame of the sound source and the obstacle sensing frame.
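A minimal sketch of this fusion rule (hypothetical Python; boxes are (xmin, ymin, xmax, ymax) tuples, and the function names are illustrative, not from the patent):

```python
import math

def fuse_obstacle_and_source(obstacle_box, source_coord, source_box,
                             second_threshold):
    """If the acoustic coordinate of the sound source lies closer to the
    obstacle's centre than the set second threshold, treat them as the
    same obstacle and return the smallest rectangle enclosing both the
    obstacle sensing frame and the acoustic frame; otherwise None."""
    ox = 0.5 * (obstacle_box[0] + obstacle_box[2])
    oy = 0.5 * (obstacle_box[1] + obstacle_box[3])
    sx, sy = source_coord
    if math.hypot(sx - ox, sy - oy) >= second_threshold:
        return None                      # distinct objects, keep separate
    return (min(obstacle_box[0], source_box[0]),
            min(obstacle_box[1], source_box[1]),
            max(obstacle_box[2], source_box[2]),
            max(obstacle_box[3], source_box[3]))
```

This is how an acoustic frame can restore the true extent of a partially occluded target whose camera/radar sensing frame has shrunk.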
Preferably, the step of the central processing device determining the ambient wind speed comprises: determining the center coordinate of a wind speed sound source using the second sensing device, determining the coordinate of a static obstacle using the first sensing device, taking the wind speed sound source center and the static obstacle as a wind speed measurement pair, and determining the distance of the wind speed measurement pair from the two coordinates:
D_NM = sqrt((x_NWM - x_SON)^2 + (y_NWM - y_SON)^2)
determining the included angle between the line connecting the wind speed measurement pair and the vehicle driving direction:
θ_NM = arctan((y_NWM - y_SON) / (x_NWM - x_SON)) - θ_Ego
where x_NWM and y_NWM are the coordinates of the wind speed sound source center in the geodetic coordinate system, x_SON and y_SON are the coordinates of the static obstacle in the geodetic coordinate system, and θ_Ego is the yaw angle of the vehicle; calculating the average included angle between a plurality of wind speed measurement pairs and the vehicle driving direction:
θ̄_WANM = (1/m) Σ_{j=1..m} θ_NMj
where m is the number of wind speed measurement pairs;
setting an included angle threshold Δθ_WANM and a distance threshold MaxD_NM, and taking as counting measurement points the pairs that satisfy D_NM < MaxD_NM and
|θ_NM - θ̄_WANM| < Δθ_WANM;
calculating the wind noise frequency of each counting measurement point:
f_NWMi = f_NWi · u / (u + v · cos α_i)
f NWi the frequency of the audio frequency sent out from the wind speed sound source center corresponding to the ith counting and measuring point is received, u is the sound velocity, v is the vehicle speed, and alpha i The included angle between the wind speed sound source center of the ith counting measurement point and a vehicle connecting line and the vehicle moving direction is formed; according to the determined wind noise frequencies of the plurality of counting measurement points, determining the ambient wind speed of the vehicle as follows:
v_WS = Σ_{i=1..n} K_i · v_WSi
where
K_i = dB_NWi / Σ_{j=1..n} dB_NWj
where i ranges over the total number n of counting measurement points, dB_NWi is the sound intensity corresponding to the ith counting measurement point, and v_WSi is the wind speed of the ith counting measurement point.
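The intensity-weighted averaging described here can be sketched as follows (Python; the function and parameter names are illustrative):

```python
def ambient_wind_speed(point_speeds, point_intensities):
    """v_WS = sum(K_i * v_WSi), where each weight K_i is the share of
    the i-th counting measurement point's sound intensity dB_NWi in the
    total intensity over all n counting measurement points."""
    total = sum(point_intensities)
    return sum(db / total * v
               for db, v in zip(point_intensities, point_speeds))
```

Louder wind noise sources, which are presumably measured more reliably, therefore contribute more to the estimate.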
Preferably, the direction of the ambient wind speed of the vehicle is:
θ_WA = Σ_{i=1..n} W_i · θ_WAi
where
W_i = D_Wi / Σ_{j=1..n} D_Wj
where D_Wi is the distance between the wind speed sound source center and the static obstacle corresponding to the ith counting measurement point, n is the total number of counting measurement points, and θ_WAi is the included angle between the ground and the line connecting the wind speed sound source center and the static obstacle at the ith counting measurement point.
Preferably, the component of the ambient wind speed in the direction of travel of the vehicle is: v_WSo = v_WS × cos(θ_WA - θ_Ego) - v_Ego, and the component in the perpendicular direction is: v_WSa = v_WS × sin(θ_WA - θ_Ego), where θ_Ego is the yaw angle of the vehicle and v_Ego is the speed of the vehicle.
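Reading the garbled subscript θ_WAEgo in the source as the difference θ_WA − θ_Ego (an assumption), the decomposition can be sketched as:

```python
import math

def wind_components(v_ws, theta_wa, theta_ego, v_ego):
    """Resolve the ambient wind into the component along the vehicle's
    direction of travel (relative to the moving vehicle) and the
    perpendicular component:
        v_WSo = v_WS * cos(theta_WA - theta_Ego) - v_Ego
        v_WSa = v_WS * sin(theta_WA - theta_Ego)
    Angles are in radians; theta_ego is the vehicle yaw angle."""
    rel = theta_wa - theta_ego   # wind direction relative to vehicle heading
    return v_ws * math.cos(rel) - v_ego, v_ws * math.sin(rel)
```

Subtracting v_Ego converts the along-track component into the airspeed-style wind felt by the moving vehicle.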
Preferably, the central processing device processes the obstacle information and the sound source information to determine a risk coefficient for the obstacle: K_N = K_NC × K_NT, where K_NC is the risk coefficient of the acoustic working condition and K_NT is the risk coefficient of the obstacle type.
Preferably, the acoustic working-condition risk coefficient includes a rapid acceleration coefficient, determined by: K_acc = K_accN × f_accN, where K_accN is a set rapid acceleration proportionality factor and f_accN is the rapid-acceleration tire noise frequency determined by the second sensing device.
Preferably, the acoustic working-condition risk coefficient includes an emergency braking coefficient, determined by: K_bra = K_braN × f_braN, where K_braN is a set emergency braking proportionality factor and f_braN is the emergency-braking tire noise frequency determined by the second sensing device.
Preferably, the acoustic working-condition risk coefficient includes a steady driving coefficient, which is equal to a set steady driving proportionality factor.
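A compact sketch of the risk-coefficient computation described in the last few clauses (Python; the proportionality factor values are illustrative placeholders, not values given in the patent):

```python
def condition_coefficient(state, tire_noise_freq=0.0,
                          k_acc_n=1.0, k_bra_n=1.0, k_steady=1.0):
    """Acoustic working-condition coefficient K_NC:
       K_acc = K_accN * f_accN under rapid acceleration,
       K_bra = K_braN * f_braN under emergency braking,
       a fixed proportionality factor under steady driving."""
    if state == "rapid_acceleration":
        return k_acc_n * tire_noise_freq
    if state == "emergency_braking":
        return k_bra_n * tire_noise_freq
    return k_steady

def obstacle_risk(k_condition, k_type):
    """Overall obstacle risk coefficient K_N = K_NC * K_NT."""
    return k_condition * k_type
```

A downstream planner can then prioritize avoidance of the obstacles with the highest K_N.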
The vehicle-mounted sensing system and the data processing method provided by the embodiment of the invention have the following beneficial effects:
(1) The vehicle-mounted sensing system acquires audio data around the vehicle with an acoustic sensing subsystem and environmental data with a traditional sensing subsystem, fuses the processing results of the two in the central processing device, and identifies obstacles around the vehicle, improving the accuracy with which the system perceives obstacle information around the vehicle.
(2) The vehicle-mounted sensing system identifies obstacles in the vehicle's surroundings and determines their risk coefficients, so that reasonable avoidance and adjustment can be performed on that basis, improving the safety of the vehicle's driving.
Drawings
For a better understanding of the above and other objects, features, advantages and functions of the present invention, reference should be made to the embodiments illustrated in the drawings. Like reference numerals in the drawings refer to like parts. It will be appreciated by those skilled in the art that the drawings are intended to illustrate preferred embodiments of the invention, without in any way limiting the scope of the invention, and that the various components in the drawings are not to scale.
FIG. 1 is a block diagram of an on-board sensory system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first sensing device of the vehicle-mounted sensing system according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an on-board sensory system according to an embodiment of the present invention;
FIG. 4 is a schematic distribution diagram of an on-board vehicle sensing system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an on-board sensory system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an acoustic block principle of a vehicle-mounted sensing system according to an embodiment of the present invention;
fig. 7 is a schematic diagram of obstacle information determined by a first sensing device of the vehicle-mounted sensing system according to the embodiment of the invention;
fig. 8 is a schematic diagram of an acoustic block of a sound source acquired by a second sensing device of the vehicle-mounted sensing system according to the embodiment of the invention;
fig. 9 is a schematic diagram of a central processing device of the vehicle-mounted sensing system fusing information of a first sensing device and a second sensing device according to an embodiment of the invention;
FIG. 10 is a schematic diagram of an on-board sensing system for determining the wind speed of the vehicle's surroundings according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of an applied ambient wind speed of an on-board sensing system according to an embodiment of the present invention;
FIG. 12 is a functional block diagram of an in-vehicle sensing system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
In order to at least partially solve one or more of the above problems and other potential problems, an embodiment of the present disclosure proposes a vehicle-mounted sensing system comprising a first sensing device, a second sensing device and a central processing device. The first sensing device is configured to acquire environmental data around the vehicle and determine obstacle information (the size, coordinates and motion state of each obstacle) based on that data. The second sensing device is configured to acquire audio data around the vehicle and determine sound source information (the acoustic coordinates and acoustic size of each sound source) based on that data. The central processing device is electrically connected to the first sensing device and receives the obstacle information it determines, is electrically connected to the second sensing device and receives the sound source information it determines, and processes the received obstacle and sound source information to identify obstacles around the vehicle and/or determine the ambient wind speed at the vehicle.
Specifically, the vehicle-mounted sensing system shown in fig. 1 includes a first sensing device 100, a second sensing device 200 and a central processing device 300. The first sensing device 100 and the second sensing device 200 are each electrically connected to the central processing device 300, and the central processing device 300 is electrically connected to the vehicle's main controller, enabling communication between the sensing system and the main controller: the obstacle information identified by the sensing system, or the ambient wind speed at the vehicle, is transmitted to the main controller, which adjusts the vehicle's speed and direction based on these results. The central processing device comprises a receiving module and a processing module. The receiving module is electrically connected to the first sensing device 100 and the second sensing device 200 and receives from them the obstacle information and sound source information around the vehicle; the processing module fuses the two so as to accurately identify the obstacles around the vehicle or determine the ambient wind speed at the vehicle's location.
In this embodiment, the second sensing device comprises an acoustic electronic control unit and at least one microphone array. The at least one microphone array is fixedly mounted on the vehicle and acquires audio data around it. The acoustic electronic control unit is arranged inside the vehicle, electrically connected to the at least one microphone array and to the central processing device, and receives and processes the audio data acquired by the microphone array(s) to determine the sound source information, which it sends to the central processing device.
As shown in figs. 1 to 3, the second sensing device 200 employs an acoustic module comprising an acoustic electronic control unit 230 and at least one microphone array. In this embodiment, the microphone array is based on MEMS (Micro-Electro-Mechanical System) microphones and comprises a controller power module, a plurality of MEMS microphones, a DSP (Digital Signal Processor) and an A2B (Automotive Audio Bus) transceiver. The MEMS microphones are of a type commonly used in digital audio; one example of a signal output interface is I2S (Inter-IC Sound). The DSP is a common digital-audio DSP with a standard acoustic interface; the model in one specific embodiment is TMS320C6748. Examples of common acoustic interfaces include I2S, McBSP (Multichannel Buffered Serial Port) and McASP (Multichannel Audio Serial Port). The output interface of each MEMS microphone is wired to the acoustic interface of the DSP; the A2B transceiver is wired to the acoustic interface of the DSP; and the output terminal of the controller power module is connected to, and supplies power to, the power inputs of the DSP, the A2B transceiver and the MEMS microphones.
In this embodiment, the acoustic electronic control unit 230 includes a controller power module, a DSP, a serial-port-to-CAN (Controller Area Network) conversion module, an A2B transceiver, and a CAN transceiver. The DSP is one with an acoustic interface commonly used in the digital audio field; the model of one specific embodiment is TMS320C6748, and examples of common acoustic interfaces include the I2S, McBSP, and McASP interfaces. The serial-port-to-CAN conversion module is a conversion module commonly used in the automotive electronics field. The internal connections of the acoustic electronic control unit 230 are as follows: the output terminal of the controller power module is connected to, and supplies power to, the power input terminals of the DSP, the serial-port-to-CAN conversion module, the A2B transceiver, and the CAN transceiver; the A2B transceiver is connected by wire to an acoustic interface of the DSP; the serial port of the DSP is connected by wire to the serial-port-to-CAN conversion module; and the serial-port-to-CAN conversion module is connected by wire to the interface of the CAN transceiver, the other end of which is connected to the central processing unit 300. One embodiment of the CAN transceiver is CAN-FD (CAN with Flexible Data-Rate).
Specifically, in the embodiment shown in fig. 1, the second sensing device 200 includes a first microphone array 210, a second microphone array 220, and an acoustic electronic control unit 230. The first microphone array 210 and the second microphone array 220 are respectively disposed on the vehicle body sheet metal parts in the vehicle interior between the vehicle right side B-pillar 8 and the roof 16 and between the vehicle left side B-pillar 9 and the roof 16. In other embodiments, the second sensing device 200 may further include a third microphone array, a fourth microphone array, or more microphone arrays. The first microphone array 210, the second microphone array 220, and any further microphone arrays are each fixedly disposed at any one position, or combination of positions, among: the vehicle body sheet metal part between the vehicle right side A-pillar 6 and the roof 16, the vehicle body sheet metal part between the vehicle left side A-pillar 7 and the roof 16, the vehicle body sheet metal part between the vehicle right side A-pillar 6 and the floor, the vehicle body sheet metal part between the vehicle right side C-pillar 12 and the roof 16, the vehicle body sheet metal part between the vehicle left side C-pillar 13 and the roof 16, the vehicle body sheet metal part between the vehicle right side C-pillar 12 and the floor, the vehicle body sheet metal part between the vehicle left side C-pillar 13 and the floor, and under the front row seats.
Specifically, the acoustic electronic control unit 230 is fixedly disposed on an armrest sheet metal member inside the vehicle to increase the stability of the second sensing device 200 and facilitate electrical connection of the acoustic electronic control unit 230 and the central processing device 300. The fixing mode of the acoustic electronic control unit 230 is clamping, bonding, screwing, bolting or welding which are commonly used in the vehicle field. In another embodiment, the acoustic electronic control unit 230 may be fixedly disposed on a sub-dashboard inside the vehicle, or the like, so that the fixing of the acoustic electronic control unit 230 and the electrical connection with the central processing device 300 may be achieved.
In this embodiment, the central processing unit 300 includes a controller power module, five CAN transceivers, and an embedded computer or on-chip computer system. The embedded computer or on-chip computer system is an SoC (System on Chip) with a CAN interface commonly used in the automotive electronics field; a preferred embodiment of the CAN interface is CAN-FD, and the model of one specific embodiment of the embedded computer or on-chip computer system is TDA4VM. The internal connections of the central processing unit 300 are such that the output terminal of the controller power module is connected to, and supplies power to, the power input terminals of the five CAN transceivers and the embedded computer or on-chip computer system, and the five CAN transceivers are respectively connected by wire to the CAN interfaces of the embedded computer or on-chip computer system. The central processing unit 300 is fixed under the center console or in the trunk of the vehicle, the preferred location being under the center console. The fixing mode is clamping, bonding, screw connection, bolt connection, or welding, as commonly used in the vehicle field.
In this embodiment, the first sensing device 100 includes a millimeter wave radar module, a camera module, and a laser radar module, each connected to the central processing device: the receiving module of the central processing device 300 is electrically connected to the millimeter wave radar control unit, the camera control unit, and the laser radar control unit, and the processing module of the central processing device 300 fuses the information the receiving module receives from the control units of the modules of the first sensing device 100 with the information from the acoustic electronic control unit 230 of the second sensing device 200 to identify the obstacles around the vehicle.
In another embodiment, the first sensing device 100 may instead include a millimeter wave radar module, a camera module, and a first processing unit, where the millimeter wave radar module is fixedly disposed on the vehicle and used to acquire environmental data of a preset area outside the vehicle; the camera module is fixedly disposed on the vehicle and used to acquire image data around the vehicle; and the first processing unit is electrically connected to the millimeter wave radar module and the camera module, receives and processes the recognition results of the millimeter wave radar module and the camera module to determine obstacle information around the vehicle, and transmits the obstacle information to the central processing device 300. In other embodiments, the first sensing device 100 may further include a millimeter wave radar module, a laser radar module, a camera module, and a first processing unit, where the laser radar module is fixedly disposed on the vehicle, used to acquire information such as the position, motion state, and shape of obstacles in a preset external area, and sends that information to the first processing unit, which determines the obstacle information around the vehicle by processing the recognition results of the millimeter wave radar module, the laser radar module, and the camera module.
The camera module is one commonly used in the intelligent driving automobile field and comprises a plurality of cameras and a surround-view controller; the output port of the camera control unit is a CAN interface, through which it is electrically connected to the central processing unit 300. In the present embodiment, the camera module includes a first camera 111, a second camera 112, and a camera control unit 101, wherein the first camera 111 is disposed at the midpoint of the radiator grille 2 and the second camera 112 is fixedly disposed at the midpoint of the rear bumper 14 at the rear end of the vehicle.
In other embodiments, the camera module may further include a first camera 111, a second camera 112, a third camera 113, a fourth camera 114, and a camera control unit 101, wherein the first camera 111 is disposed at a midpoint of the radiator grille 2, the second camera 112 is disposed at a midpoint of the rear bumper 14 at the rear end of the vehicle, the third camera 113 is disposed on the right rearview mirror housing 4, and the fourth camera 114 and the third camera 113 are symmetrically disposed on the left rearview mirror housing 5. The fixing modes of the cameras are clamping connection, bonding connection, screw connection, bolt connection or welding which are commonly used in the field of vehicles. As shown in fig. 5, the first camera 111 acquires image data in a first camera area 1110 at the front end of the vehicle, the second camera 112 acquires image data in a second camera area 1120 at the rear end of the vehicle, the third camera 113 acquires image data in a third camera area 1130 at the right side of the vehicle, the fourth camera 114 acquires image data in a fourth camera area 1140 at the left side of the vehicle, and the camera control unit 101 controls the cameras to be turned on and off, and receives and processes the image data acquired by the cameras to identify obstacles around the vehicle.
In another embodiment, the camera module may further include five, six, or more cameras and the camera control unit 101. The camera control unit 101 is electrically connected to the plurality of cameras and controls their operation, and the cameras are fixedly disposed at any one position, or combination of positions, among the midpoint of the radiator grille 2, the right rearview mirror housing 4, the left rearview mirror housing 5, and the midpoint of the rear bumper 14, so that the sum of the camera areas of the individual cameras can cover the space around the vehicle.
In some embodiments, the millimeter wave radar module is disposed at any one or a combination of more of a midpoint position of the vehicle front bumper 1, both end positions of the vehicle front bumper 1, a midpoint position of the vehicle rear bumper 14, a vehicle left side wall 11 position, and a vehicle right side wall 10 position.
Specifically, the millimeter wave radar module is one commonly used in the intelligent driving automobile field and comprises a plurality of millimeter wave radars and a millimeter wave radar control unit; the output port of the millimeter wave radar control unit is a CAN interface, through which it is electrically connected to the central processing unit 300.
In some embodiments, the millimeter wave radar module comprises a millimeter wave radar control unit and at least one millimeter wave radar, the millimeter wave radar control unit is electrically connected with the at least one millimeter wave radar, and the millimeter wave radar control unit receives and processes signals of the at least one millimeter wave radar.
As shown in fig. 1, the millimeter-wave radar module includes a first millimeter-wave radar 121, a second millimeter-wave radar 122, and a millimeter-wave radar control unit 102, the millimeter-wave radar control unit 102 being electrically connected to the first millimeter-wave radar 121 and the second millimeter-wave radar 122, controlling the on and off of each millimeter-wave radar, and receiving and processing the environmental data of the vehicle from each millimeter-wave radar.
In other embodiments, as shown in fig. 4 and 5, the millimeter-wave radar module may further include a first millimeter-wave radar 121, a second millimeter-wave radar 122, a third millimeter-wave radar 123, a fourth millimeter-wave radar 124, a fifth millimeter-wave radar 125, a sixth millimeter-wave radar 126, and the millimeter-wave radar control unit 102. The first millimeter-wave radar 121 is fixedly disposed at the midpoint of the vehicle front bumper 1, the second millimeter-wave radar 122 is fixedly disposed at the midpoint of the vehicle rear bumper 14, the third millimeter-wave radar 123 is disposed at the left fender at the front end of the vehicle, the fourth millimeter-wave radar 124 is disposed at the right fender at the front end of the vehicle, the fifth millimeter-wave radar 125 is disposed at the vehicle right side wall 10, and the sixth millimeter-wave radar 126 is disposed at the vehicle left side wall 11. The millimeter-wave radar control unit 102 receives and processes the environmental data of each area around the vehicle acquired by each millimeter-wave radar, the areas comprising a first millimeter-wave radar area 1210, a second millimeter-wave radar area 1220, a third millimeter-wave radar area 1230, a fourth millimeter-wave radar area 1240, a fifth millimeter-wave radar area 1250, and a sixth millimeter-wave radar area 1260.
In other embodiments, the millimeter-wave radar module may also include a plurality of millimeter-wave radars and the millimeter-wave radar control unit 102, the fixed locations of the millimeter-wave radars including, but not limited to: the vehicle front bumper 1, the radiator grille 2, the front windshield 3, the right rearview mirror housing 4, the left rearview mirror housing 5, the outer trim panel of the vehicle right side B-pillar 8, the outer trim panel of the vehicle left side B-pillar 9, the vehicle right side wall 10, the vehicle left side wall 11, the vehicle body sheet metal at the vehicle right side C-pillar 12, the vehicle body sheet metal at the vehicle left side C-pillar 13, the vehicle rear bumper 14, and the vehicle rear cover 15.
In this embodiment, the first sensing device further includes a laser radar module, the laser radar module is electrically connected to the first processing unit, and the laser radar module is configured to identify an obstacle based on environmental data around the vehicle, and send the identification to the first processing unit.
The laser radar module is one commonly used in the intelligent driving automobile field and comprises one or more laser radars and a laser radar control unit, the output port of which is a CAN interface. In this embodiment, the lidar module includes a first lidar 131 and a lidar control unit 103, with the first lidar 131 disposed within a trim panel in the vehicle roof 16.
In another embodiment, the first sensing device may further include a laser radar module, a millimeter wave radar module, and a camera module, each of which is connected to the central processing device 300, and the central processing device 300 fuses recognition results of each of the modules to determine the static obstacle and the dynamic obstacle.
In another aspect, the present invention provides a data processing method using the vehicle sensing system as described above, including:
step S1, acquiring environment data around the vehicle by using the first sensing device, and determining obstacle information around the vehicle based on the environment data, the obstacle information including a size, coordinates, and a motion state of the obstacle.
Step S2, acquiring audio data around the vehicle using the second sensing device, and determining sound source information around the vehicle based on the audio data, the sound source information including acoustic coordinates and acoustic dimensions of the sound source.
And step S3, electrically connecting the central processing unit with the first sensing device, receiving the obstacle information determined by the first sensing device, electrically connecting the central processing unit with the second sensing device, receiving the sound source information determined by the second sensing device, and processing the obstacle information and the sound source information by the central processing unit to identify the obstacle around the vehicle or determine the ambient wind speed of the vehicle.
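Steps S1 to S3 can be sketched as two data structures plus the association rule used in step S3. This is a minimal illustration only, not the patent's implementation; the class and function names are invented for the sketch, and the second threshold value is a placeholder.

```python
import math
from dataclasses import dataclass

@dataclass
class ObstacleInfo:
    """Output of step S1: size, coordinates, and motion state of an obstacle."""
    x: float       # sensing frame center, x
    y: float       # sensing frame center, y
    length: float
    width: float
    vx: float      # motion state, x-direction velocity
    vy: float      # motion state, y-direction velocity

@dataclass
class SoundSourceInfo:
    """Output of step S2: acoustic coordinates and acoustic size of a sound source."""
    x: float
    y: float
    length: float
    width: float

def same_target(obs: ObstacleInfo, src: SoundSourceInfo, second_threshold: float) -> bool:
    """Step S3 association rule: the obstacle and the sound source are treated
    as the same target when the distance between the obstacle center and the
    acoustic coordinates is below the set second threshold."""
    return math.hypot(obs.x - src.x, obs.y - src.y) < second_threshold
```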
In some embodiments, the step of determining the obstacle information around the vehicle using the first sensing device in step S1 includes:
step S1a, acquiring environmental data by using a millimeter wave radar module and a camera module; the first sensing device adopts a sensing algorithm commonly used in the field of intelligent driving to recognize the target of the millimeter wave radar, the camera and the laser radar.
Step S1b, fusing the recognition results of the millimeter wave radar module and the camera module by using a first processing unit to determine the size, the coordinates and the motion state of the obstacles around the vehicle; the multi-sensor fusion adopts a multi-sensor fusion algorithm commonly used in the field of intelligent driving to fuse the results recognized by the millimeter wave radar, the camera and the laser radar.
The obstacle information includes a static obstacle information output and a dynamic obstacle information output. As shown in fig. 7, the dynamic obstacle output provides the following information for each dynamic obstacle calculated in the multi-sensor fusion: (x_TBV, y_TBV), the center position of the sensing frame of the target identified by the traditional sensing algorithm; (L_TBV, W_TBV), the length and width of the sensing frame of the identified target; and (V_XTBV, V_YTBV), the x-direction and y-direction velocities of the identified sensing frame. The static obstacle output provides the following information for each static obstacle calculated in the multi-sensor fusion: (x_SOj, y_SOj), the center position of the static obstacle sensing frame identified by the traditional sensing algorithm. As shown in fig. 10, in a specific example there are static obstacles 1, 2, ..., N in total, and the center positions of the corresponding static obstacle sensing frames are (x_SO1, y_SO1), (x_SO2, y_SO2), ..., (x_SON, y_SON).
In some embodiments, the step of acquiring audio data around the vehicle using the second sensing device in step S2, and determining sound source information around the vehicle based on the audio data includes:
step S2a, an ambient audio data set is acquired with at least one microphone array. The respective MEMS-based microphone array modules of the second sensing device detect sound sources around the vehicle, respectively.
And S2b, acquiring an environmental audio data set acquired by at least one microphone array by using the acoustic electronic control unit, and determining a sound source characteristic identification algorithm based on a neural network algorithm.
Step S2c, collecting audio data around the vehicle by using at least one microphone array, receiving the audio data around the vehicle by using the acoustic electronic control unit and identifying sound source characteristics based on a sound source characteristic identification algorithm, wherein the sound source characteristics comprise components, types and conditions of emitted sound.
And step S2d, the acoustic electronic control unit determines the acoustic size of the sound source according to the characteristics of the sound source and determines the acoustic coordinates of the sound source based on a sound source positioning algorithm according to the audio data around the vehicle.
The acoustic electronic control unit takes the sound source signals (i.e., audio data) detected by each MEMS-based microphone array as input, and employs a sound source feature recognition algorithm and a sound source localization algorithm to acquire the features and position information of specific sound sources related to vehicles. The sound source feature recognition algorithm used in vehicle characteristic sound recognition is obtained by training an acoustic neural network model on a vehicle environment audio data set. The vehicle environment audio data set includes vehicle noise, i.e., audio files of the sounds made by various vehicles under various driving conditions, road conditions, and weather environments. The truth value of a sample in the vehicle noise audio data set is the sound feature of the source signal emitting the noise, specifically including the component from which the sound is emitted, the component category, and the vehicle condition. The acoustic neural network model is a neural network model commonly used in the intelligent algorithm field; specific embodiments are BP and RNN neural network models.
In some embodiments, the category of sound source characteristics determined by the second sensing device includes any one or a combination of tire noise, engine noise, exhaust noise, rearview mirror wind noise, and vehicle speaker audio, wherein the vehicle speaker audio includes alarms issued by ambulances, fire trucks, police cars. In some embodiments, the sound source characteristics further include a vehicle type to which the sound source corresponds. In some embodiments, the conditions of the acoustic source signature include emergency braking, rapid acceleration, and steady travel.
Specifically, taking the surrounding audio data acquired by the second sensing device as audio data sent by other vehicles around the vehicle as an example, the sound source information determined by the second sensing device is the sound source feature identification of the other vehicles, and the obtained identification result is the specific sound source feature NoiseVehID (Part, Type, Condition).
The sound source feature NoiseVehID (Part, Type, Condition) is composed of three components of a Part (Part) that emits sound, a category (Type), and a Condition (Condition). Said Part (Part) comprising: tires, engines, exhaust, rear view mirrors, speakers/alarms, etc. The categories (Type) include: passenger cars, commercial passenger cars, trucks, ambulances, fire engines, police cars, and the like. The Condition (Condition) includes: emergency braking, rapid acceleration, etc.
TABLE 1 Part values of sound sources
(Table 1 is reproduced only as images in the original publication; its Part values enumerate the sound-emitting components listed above, with Part = 1 denoting the tire.)
TABLE 2 class values of Sound sources
Type value Meaning
1 Passenger car
2 Commercial passenger car
3 Truck
4 Ambulance car
5 Fire engine
6 Police car
7 The sound source cannot be distinguished
TABLE 3 status values of sound sources
Condition value Meaning
1 Emergency brake
2 Fast acceleration
3 Stable driving
A specific example: tire noise from a truck under emergency braking is identified by the sound source feature identification algorithm as NoiseVehID(1,3,1). Based on the recognition result, the acoustic size of the corresponding component is further determined.
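The NoiseVehID coding can be illustrated with a small lookup. Only Part = 1 (tire) is confirmed by the worked example above; the remaining Part codes below are assumed orderings of the components named in the text, since Table 1 is published only as an image. The Type and Condition codes follow Tables 2 and 3.

```python
# Part codes: only 1 = tire is confirmed by NoiseVehID(1,3,1); codes 2-5 are
# assumed orderings of the other components named in the text.
PART = {1: "tire", 2: "engine", 3: "exhaust", 4: "rearview mirror", 5: "speaker/alarm"}
# Type codes per Table 2 and Condition codes per Table 3.
TYPE = {1: "passenger car", 2: "commercial passenger car", 3: "truck",
        4: "ambulance", 5: "fire engine", 6: "police car", 7: "indistinguishable"}
CONDITION = {1: "emergency braking", 2: "rapid acceleration", 3: "steady driving"}

def decode_noise_veh_id(part: int, type_: int, condition: int) -> tuple:
    """Decode a NoiseVehID(Part, Type, Condition) triple into readable labels."""
    return (PART[part], TYPE[type_], CONDITION[condition])
```

Here decode_noise_veh_id(1, 3, 1) reproduces the tire/truck/emergency-braking example.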
In some embodiments, the sound source information further comprises an acoustic box of the sound source, wherein the acoustic box is a spatial distribution of acoustic sizes of the sound source. In some embodiments, the step of the acoustic electronic control unit determining the acoustic frame of the acoustic source comprises: determining an acoustic coordinate of a sound source and an acoustic coordinate of a previous moment; and taking the direction determined by the connection line of the current acoustic coordinate of the sound source and the acoustic coordinate of the previous moment as the length direction of the acoustic frame, taking the current acoustic coordinate of the sound source as the center of the acoustic frame, and taking the length and width of the acoustic size of the sound source as the length and width of the acoustic frame. In some embodiments, when the sound source is a whole vehicle, the step of determining the acoustic frame by the acoustic electronic control unit comprises: when the relative distance change between the center coordinates of the acoustic frames of the plurality of sound sources is smaller than a set first threshold value, judging that the plurality of sound sources belong to the same vehicle; and taking the smallest rectangular frame of the acoustic frames of the plurality of sound sources belonging to the same vehicle as the acoustic frame of the whole vehicle.
The sound source information further includes the acoustic coordinates of the sound source, i.e., sound source characteristic sound localization. Taking vehicle characteristic sound localization as an example, it outputs the localization information of the vehicle corresponding to the sound source detected in the vehicle characteristic sound recognition. The localization information of a specific sound source includes the acoustic coordinates (x_NV, y_NV) of the vehicle corresponding to that sound source; the coordinate values of (x_NV, y_NV) are calculated using a sound source localization algorithm commonly used in the microphone array localization field.
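The patent leaves the localization algorithm to common microphone array practice. As one hedged example of such practice (not the patent's specific method), the direction of arrival of a far-field source can be estimated from the time difference of arrival between two microphones of an array:

```python
import math

def far_field_doa(tdoa_s: float, mic_spacing_m: float, c: float = 343.0) -> float:
    """Estimate the direction of arrival (radians from broadside) of a
    far-field sound source from the time difference of arrival (TDOA, in
    seconds) between two microphones spaced mic_spacing_m apart, with c the
    speed of sound in m/s. The asin argument is clamped to [-1, 1] to guard
    against TDOA measurement noise."""
    s = max(-1.0, min(1.0, c * tdoa_s / mic_spacing_m))
    return math.asin(s)
```

Crossing the bearings from two or more arrays at known mounting positions then yields 2D source coordinates.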
The principle and the steps of the vehicle acoustic target recognition are as follows:
From the NoiseVehID(Part, Type, Condition) obtained by sound source feature identification and the coordinates (x_NV, y_NV) of the specific sound source obtained by sound source localization, the acoustic frame of the component is drawn. The specific steps are as follows:
a) As shown in fig. 6, the moving direction line of the sound source is obtained from the sound source position (x_NVc, y_NVc) at the current time Tcur and the sound source position (x_NVp, y_NVp) at the previous time Tpre.
b) The sound-emitting component is determined from the Part in NoiseVehID(Part, Type, Condition), and the type of the sound-emitting component is determined from the Type. The length and width of the acoustic frame are then determined from the component and its type. For example, for NoiseVehID(1,3,1), Part = 1 indicates that the detected sound is tire noise, and Type = 3 indicates that the detected sound comes from a truck, i.e., the sound source is a truck tire. The component acoustic frame width W_NVBP and the component acoustic frame length L_NVBP are then determined from the average tire size of trucks commonly used in the automotive field.
c) With the sound source position (x_NVc, y_NVc) at the current time Tcur as the center, a rectangle of width W_NVBP and length L_NVBP is drawn along the sound source moving direction line obtained in a); this rectangle is the component acoustic frame.
d) The component acoustic frame obtained in c) is output. When Part = 1 and Condition = 2 in NoiseVehID(Part, Type, Condition), i.e., tire noise during rapid acceleration, the tire noise frequency f_accN identified during rapid vehicle acceleration is also output; when Part = 1 and Condition = 1, i.e., tire noise during emergency braking, the tire noise frequency f_braN identified during emergency braking of the vehicle is also output.
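Steps a) through c) reduce to simple plane geometry. A hypothetical helper (not from the patent) that returns the four corners of a component acoustic frame:

```python
import math

def component_acoustic_frame(curr, prev, length, width):
    """Four corners of the component acoustic frame: a length x width
    rectangle centered on the current source position (x_NVc, y_NVc), with
    its length axis along the moving direction line from the previous
    position (x_NVp, y_NVp) to the current one."""
    xc, yc = curr
    xp, yp = prev
    dx, dy = xc - xp, yc - yp
    n = math.hypot(dx, dy)            # assumes the source has moved (n > 0)
    ux, uy = dx / n, dy / n           # unit vector along the moving direction line
    px, py = -uy, ux                  # perpendicular unit vector (width axis)
    hl, hw = length / 2.0, width / 2.0
    return [(xc + sl * hl * ux + sw * hw * px,
             yc + sl * hl * uy + sw * hw * py)
            for sl in (1, -1) for sw in (1, -1)]
```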
The whole-vehicle acoustic frame is then drawn from the component acoustic frames obtained in the previous steps. The specific steps are as follows:
1) Cluster the component acoustic frames. Detect the changes in the relative distances between the component acoustic frames obtained in step d); when the change in relative distance between acoustic frames is smaller than the preset threshold NoiseVehDis, the component acoustic frames are judged to come from the same vehicle.
2) Determine the whole-vehicle acoustic frame. Enclose the component acoustic frames judged in 1) to come from the same vehicle with a minimum rectangle; this minimum rectangle is the whole-vehicle acoustic frame. Referring to fig. 8, the length of the whole-vehicle acoustic frame is L_NBV, its width is W_NBV, and its center is the center (x_NBV, y_NBV) of the minimum rectangle.
Specifically, as shown in fig. 8, when the target obstacle of the vehicle is a whole vehicle, the on-vehicle sensing system determines, through the second sensing device, an engine acoustic frame 201, an exhaust pipe acoustic frame 202, a first tire acoustic frame 203, a second tire acoustic frame 204, a first rearview mirror acoustic frame 205, a second rearview mirror acoustic frame 206, a third tire acoustic frame 207, and a fourth tire acoustic frame 208. In this embodiment, the distance change between acoustic frames is determined from the distance between their center coordinates; when the change in distance between the center coordinates of the acoustic frames is smaller than the set first threshold, the sound sources corresponding to those acoustic frames are determined to belong to the same vehicle, and the whole-vehicle acoustic frame is determined from the positions and spatial distribution of the component acoustic frames, so that the length and width of the whole-vehicle acoustic frame are the minimum values enclosing those acoustic frames.
In some embodiments, when the central processing device determines that the distance between the acoustic coordinate of the sound source and the center coordinate of the obstacle is smaller than a set second threshold, the obstacle and the object corresponding to the sound source are the same obstacle, and the spatial distribution of the geometrical size of the obstacle is determined according to a minimum rectangular frame including an acoustic frame of the obstacle and an obstacle sensing frame.
Specifically, the central processing device comprises a receiving module and a processing module; the receiving module receives the obstacle information sent by the first sensing device and the acoustic information sent by the second sensing device, and the processing module fuses the information obtained by the receiving module to identify obstacles around the vehicle and determine the ambient wind speed of the vehicle. Identifying obstacles around the vehicle includes correcting the obstacle information of the first sensing device with the acoustic information sent by the second sensing device. As shown in fig. 9, taking a target obstacle around the vehicle as an example, the first sensing device senses the target obstacle and outputs obstacle information for it, including the coordinates (x_TBV, y_TBV), the sensing frame size data length L_TBV and width W_TBV, and the X- and Y-direction velocities (v_xTBV, v_yTBV) of the sensing frame, where the sensing frame is the spatial distribution of the obstacle size determined by the first sensing device; the second sensing device senses the target obstacle and outputs the whole-vehicle acoustic coordinates and acoustic size for it, where the acoustic coordinates are (x_NBV, y_NBV) and the acoustic size data include the determined length L_NBV and width W_NBV.
The processing module of the central processing unit compares the coordinates (x_TBV, y_TBV) with the acoustic coordinates (x_NBV, y_NBV); when the distance between the two points is smaller than the set second threshold, the target obstacle measured by the first sensing device and that measured by the second sensing device are determined to be the same target, the spatial distribution of the target obstacle size is re-identified from the size distribution of the sensing frame determined by the first sensing device and the size distribution of the acoustic frame determined by the second sensing device, and a minimum rectangle is used to enclose both the acoustic frame determined by the second sensing device and the sensing frame determined by the first sensing device; this minimum rectangle is the final fused sensing frame. The length L_BV and width W_BV of the rectangle are the length and width of the final fused whole-vehicle sensing frame; the center coordinates (x_BV, y_BV) of the rectangle are the center coordinates of the final fused whole-vehicle sensing frame. The velocity of the final fused whole-vehicle sensing frame is (v_xBV, v_yBV), where v_xBV equals v_xTBV and v_yBV equals v_yTBV.
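This fusion rule can be sketched as follows, treating each frame as an axis-aligned (x_center, y_center, length, width) tuple for simplicity; the function name is illustrative, not from the patent:

```python
def fuse_frames(sensing_frame, acoustic_frame):
    """Minimum rectangle containing both the sensing frame from the first
    sensing device and the whole-vehicle acoustic frame from the second
    sensing device; returns (x_BV, y_BV, L_BV, W_BV). The velocity of the
    fused frame is taken unchanged from the sensing frame (v_xBV = v_xTBV,
    v_yBV = v_yTBV), so it is not recomputed here."""
    xs, ys = [], []
    for x, y, length, width in (sensing_frame, acoustic_frame):
        xs += [x - length / 2.0, x + length / 2.0]
        ys += [y - width / 2.0, y + width / 2.0]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0,
            x_max - x_min, y_max - y_min)
```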
In some embodiments, the sound source information further includes wind speed sound source center coordinates, wind speed sound source frequency, and wind speed sound source intensity.
An acoustic electronic control unit of the second sensing device of the vehicle-mounted sensing system takes the sound source signals (i.e., audio data) detected in the multi-microphone-array sound source detection as input, and obtains the characteristics and position information of a specific sound source using a wind speed sound source feature identification algorithm and a sound source localization algorithm. The method comprises the following steps:
Wind speed sound source feature identification: the result of the wind speed sound source feature identification is whether the sound source can be used as a wind speed sound source. The wind speed sound source feature identification algorithm is obtained by training an acoustic neural network model on a wind noise audio set. The wind noise audio comprises audio files of the air vortex sound produced when wind blows past different static obstacles at different wind speeds. The truth labels of the samples in the wind noise audio set are: a) can be used as a wind speed sound source; b) cannot be used as a wind speed sound source. The air vortex sound of wind blowing past columnar static obstacles such as telegraph poles, traffic sign poles, and tree trunks is labeled a) can be used as a wind speed sound source; all other sounds are labeled b) cannot be used as a wind speed sound source. The acoustic neural network model is a neural network model commonly used in the intelligent-algorithm field; in a specific embodiment it is a BP or RNN neural network model.
Wind speed sound source localization: the wind speed sound source localization outputs the positioning information of the wind speed sound source detected in the wind speed sound source feature identification. The positioning information of the wind speed sound source is the position information (x_NW, y_NW); the coordinate values of the position information (x_NW, y_NW) of the specific sound source are calculated using a sound source localization algorithm common in the microphone-array localization field.
Wind speed sound information output: the information output in the wind speed sound information output includes the sound frequency f_NW of the wind speed sound source detected in the wind speed sound source feature identification, the sound intensity dB_NW of the wind speed sound source detected in the wind speed sound source feature identification, and the wind speed sound source center coordinates (x_NW, y_NW) calculated in the wind speed sound source localization.
In some embodiments, the step of the central processing device determining the ambient wind speed comprises:
Step t1: determine the wind speed sound source center coordinates with the second sensing device, determine the static obstacle coordinates with the first sensing device, and take the wind speed sound source center and the static obstacle as a wind speed measurement pair.
Step t2: determine the distance of the wind speed measurement pair from the wind speed sound source center coordinates and the static obstacle coordinates:
D_NM = √((x_NWM − x_SON)² + (y_NWM − y_SON)²)
and determine the included angle between the line connecting the wind speed measurement pair and the vehicle running direction:
θ_WANM = arctan((y_NWM − y_SON)/(x_NWM − x_SON))
where (x_NWM, y_NWM) are the coordinates of the wind speed sound source center in the geodetic coordinate system and (x_SON, y_SON) are the coordinates of the static obstacle in the geodetic coordinate system.
Step t3: calculate the average value of the included angles between the wind speed measurement pairs and the vehicle running direction:
θ̄_WA = (Σ_NM θ_WANM) / N_p, where N_p is the number of wind speed measurement pairs.
Set the included-angle threshold Δθ_WANM and the distance threshold MaxD_NM; the wind speed measurement pairs satisfying D_NM < MaxD_NM and |θ_WANM − θ̄_WA| < Δθ_WANM are taken as counting measurement points.
Step t4: calculate the wind noise frequency of each counting measurement point:
f_NWOi = (f_NWi × u)/(u + v·cos α_i)
where f_NWi is the frequency at which the audio emitted from the wind speed sound source center corresponding to the i-th counting measurement point is received, u is the speed of sound, v is the vehicle speed, and α_i is the included angle between the line connecting the wind speed sound source center of the i-th counting measurement point with the vehicle and the vehicle's direction of motion.
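The Doppler correction of step t4 can be sketched as follows (a minimal illustration; function and variable names are ours):

```python
import math

def source_frequency(f_received: float, speed_of_sound: float,
                     vehicle_speed: float, alpha_rad: float) -> float:
    """Frequency emitted at the wind speed sound source, recovered from the
    frequency received on the moving vehicle. By the Doppler effect, a
    receiver approaching a stationary source at v*cos(alpha) hears
    f_received = f_source * (u + v*cos(alpha)) / u, so inverting gives the
    formula of step t4: f_source = f_received * u / (u + v*cos(alpha))."""
    return f_received * speed_of_sound / (
        speed_of_sound + vehicle_speed * math.cos(alpha_rad))
```

When the source lies abeam of the vehicle (alpha ≈ 90°), cos α ≈ 0 and the received frequency is already the emitted frequency.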
Step t5: from the determined wind noise frequencies of the plurality of counting measurement points, the ambient wind speed of the vehicle is determined as:
v_WS = Σ_{i=1..n} K_Si × v_WSi
where i ranges over the total number n of counting measurement points, K_Si is the ambient wind speed estimation weighting coefficient determined from the sound intensity dB_NWi corresponding to the i-th counting measurement point, and v_WSi is the wind speed at the i-th counting measurement point determined from the sound source model according to f_NWOi.
Specifically, as shown in fig. 12, the wind speed measurement point identification is divided into the following steps. The second sensing device performs wind speed sound source localization to obtain the position (x_NWm, y_NWm) of each wind speed sound source; in the scene of this embodiment, the positions of wind speed sound sources 1, 2, ..., M are obtained, with center coordinates (x_NW01, y_NW01), (x_NW02, y_NW02), ..., (x_NWM, y_NWM). The positions of the static obstacles are obtained through the first sensing device; in the scene of this embodiment, the positions of N static obstacles are obtained, with coordinates (x_SO01, y_SO01), (x_SO02, y_SO02), ..., (x_SON, y_SON). From the obtained wind speed sound source positions and static obstacle positions, the static obstacle closest to each wind speed sound source is found by calculation; each wind speed sound source and its closest static obstacle form a wind speed measurement pair.
For each potential wind speed measurement pair, calculate the distance D_NM between the static obstacle and the wind speed sound source center, and the angle θ_WANM of their connecting line. If static obstacle N and wind speed sound source M form a wind speed measurement pair, the pair is called wind speed measurement pair NM, and its distance D_NM and angle θ_WANM are expressed as:
D_NM = √((x_NWM − x_SON)² + (y_NWM − y_SON)²)
where D_NM represents the distance between the center of static obstacle N and the center of sound source M;
θ_WANM = arctan((y_NWM − y_SON)/(x_NWM − x_SON))
where θ_WANM represents the included angle between the line connecting the center of static obstacle N and the center of sound source M and the x axis of the geodetic coordinate system.
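The nearest-obstacle pairing can be sketched as follows (an illustrative sketch; function and variable names are ours, and atan2 is used in place of arctan to keep the quadrant information and avoid division by zero):

```python
import math

def pair_wind_sources(sources, obstacles):
    """Match each wind speed sound source to its nearest static obstacle.

    sources, obstacles: lists of (x, y) points in the geodetic frame.
    Returns a list of (source_index, obstacle_index, D_NM, theta_WANM),
    with theta_WANM measured from the geodetic x axis.
    """
    pairs = []
    for m, (xs, ys) in enumerate(sources):
        # nearest static obstacle by Euclidean distance
        n, (xo, yo) = min(
            enumerate(obstacles),
            key=lambda item: math.hypot(xs - item[1][0], ys - item[1][1]))
        d = math.hypot(xs - xo, ys - yo)
        theta = math.atan2(ys - yo, xs - xo)
        pairs.append((m, n, d, theta))
    return pairs
```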
From the distance D_NM and angle θ_WANM of each wind speed measurement pair, the final counting measurement points are identified as follows:
a) Calculate the average value of the angles θ_WANM of all wind speed measurement pairs:
θ̄_WA = (Σ_NM θ_WANM) / N_p, where N_p is the number of wind speed measurement pairs.
b) Preset the threshold values MaxD_NM and Δθ_WANM, where MaxD_NM represents the maximum distance between the sound source and the obstacle in a usable wind speed measurement pair, and Δθ_WANM is the maximum angular deviation of a usable wind speed measurement point.
c) When |θ_WANM − θ̄_WA| < Δθ_WANM and D_NM < MaxD_NM, the wind speed measurement pair is determined to be a counting measurement point.
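Steps a) through c) can be sketched as follows (illustrative only; names are ours, angles in radians):

```python
def select_counting_points(pairs, max_distance, max_angle_dev):
    """Filter wind speed measurement pairs into counting measurement points.

    pairs: list of (distance, angle) tuples. A pair is kept when its distance
    is below max_distance and its angle deviates from the mean angle of all
    pairs by less than max_angle_dev.
    """
    mean_theta = sum(theta for _, theta in pairs) / len(pairs)
    return [(d, theta) for d, theta in pairs
            if d < max_distance and abs(theta - mean_theta) < max_angle_dev]
```

Outlier pairs, either too far from their obstacle or pointing well away from the consensus direction, are dropped before the wind estimate.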
In some embodiments, the ambient wind speed direction of the vehicle is:
θ_WA = Σ_{i=1..n} K_Ai × θ_WAi
where K_Ai is the direction weighting coefficient determined from the distance D_Wi between the wind speed sound source center and the static obstacle corresponding to the i-th counting measurement point, n is the total number of counting measurement points, and θ_WAi is the included angle between the line connecting the wind speed sound source center and the static obstacle in the i-th counting measurement point and the geodetic x axis.
Specifically, the detection of the ambient wind speed of the vehicle is divided into the following steps.
Calculate the magnitude of the ambient wind speed. a) According to the Doppler effect principle, the frequency of the wind noise at the static obstacle is calculated as: f_NWOi = (f_NWi × u)/(u + v·cos α_i), where f_NWi is the frequency of the identified i-th wind speed measurement point as received by the second sensing device; f_NWOi is the frequency emitted at the i-th wind speed measurement sound source; u is the speed of sound; v is the vehicle movement speed; and α_i is the included angle between the line connecting the sound source and the vehicle in the i-th counting measurement pair and the vehicle's direction of motion. b) A neural network estimates the wind speed from the frequency and intensity of the sound source: a wind speed identification algorithm identifies the wind speed v_WS of the sound source at each wind speed measurement point, where the wind speed at the i-th counting measurement point is denoted v_WSi. The wind speed identification algorithm is obtained by training a wind speed identification neural network model on a wind speed and wind noise data set. The wind speed and wind noise data set comprises the sound source intensity dB_NW and the sound source frequency f_NWi at different wind speeds. The truth label of each sample in the wind speed and wind noise data set is the wind speed v_WS corresponding to that data. The wind speed identification neural network model is a neural network model commonly used in the neural network algorithm field; in a specific embodiment it is a BP or RNN neural network model. The wind speed identification algorithm yields the identification result v_WSi of the wind speed at the i-th counting measurement point. c) Estimation of the ambient wind speed. The ambient wind speed v_WS is estimated as follows:
v_WS = Σ_{i=1..n} K_Si × v_WSi
where n is the number of counting measurement points; K_Si is the ambient wind speed estimation weighting coefficient, determined from the sound intensity dB_NWi of the sound source at the i-th counting measurement point, which is obtained from the wind speed sound information output of the second sensing device.
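The weighted estimate can be sketched as follows (the normalized-intensity form of K_Si is our assumption; the text only states that K_Si is a weighting coefficient determined from dB_NWi):

```python
def estimate_ambient_wind_speed(wind_speeds, intensities):
    """Intensity-weighted average of the per-point wind speed estimates.

    wind_speeds: v_WSi from the wind speed identification network, one per
    counting measurement point; intensities: dB_NWi from the second sensing
    device. K_Si is taken here as dB_NWi normalized over all points, so that
    the weights sum to 1 (an assumed form).
    """
    total = sum(intensities)
    return sum(db / total * v for db, v in zip(intensities, wind_speeds))
```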
Calculate the direction of the ambient wind speed. The ambient wind speed direction θ_WA is calculated as follows:
θ_WA = Σ_{i=1..n} K_Ai × θ_WAi
where n is the number of counting measurement points; θ_WAi is the angle of the i-th counting measurement point (if the i-th counting measurement point consists of static obstacle N and sound source M, then θ_WAi = θ_WANM); K_Ai is the ambient wind speed direction weighting coefficient, determined from the distance D_Wi of the i-th counting measurement point (if the i-th wind speed measurement point consists of static obstacle N and sound source M, then D_Wi = D_NM).
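The direction estimate can be sketched as follows (the inverse-distance form of K_Ai is our assumption; the text only states that K_Ai is a weighting coefficient computed from D_Wi):

```python
def estimate_wind_direction(angles, distances):
    """Distance-weighted average of the per-point angles theta_WAi.

    Pairs whose sound source sits closer to its static obstacle (smaller
    D_Wi) are weighted more heavily, on the reasoning that the vortex sound
    should originate at the obstacle; the inverse-distance normalization of
    K_Ai is an assumed form. Angles in radians.
    """
    inv = [1.0 / d for d in distances]
    total = sum(inv)
    return sum(w / total * a for w, a in zip(inv, angles))
```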
Output the ambient wind speed. The outputs of the ambient wind speed detection are the wind speed magnitude v_WS and the direction θ_WA.
In some embodiments, as shown in fig. 11, the component of the ambient wind speed in the direction of vehicle travel is: v_WSo = v_WS × cos(θ_WA − θ_Ego) − v_Ego, and the component in the perpendicular direction is: v_WSa = v_WS × sin(θ_WA − θ_Ego), where θ_Ego is the yaw angle of the vehicle and v_Ego is the speed of the vehicle. In further embodiments, v_WSo is used by the vehicle control module to compensate the longitudinal control, and v_WSa is used by the vehicle control module to compensate the lateral control. That is, the central processing device transmits the ambient wind speed to the vehicle control module, and the vehicle control module adaptively adjusts the vehicle driving mode based on this result.
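The decomposition into longitudinal and lateral components can be sketched as follows (names are ours; angles in radians; the θ_WA − θ_Ego form follows the formulas above):

```python
import math

def wind_components(v_ws: float, theta_wa: float,
                    v_ego: float, theta_ego: float):
    """Relative wind components in the vehicle frame.

    Longitudinal: projection of the ambient wind on the travel direction,
    minus the vehicle's own speed; lateral: projection on the perpendicular
    direction.
    """
    longitudinal = v_ws * math.cos(theta_wa - theta_ego) - v_ego
    lateral = v_ws * math.sin(theta_wa - theta_ego)
    return longitudinal, lateral
```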
In some embodiments, the central processing device processes the obstacle information and the sound source information to determine the risk coefficient of an obstacle: K_Nt = K_NC × K_NT, where K_NC is the acoustic operating-condition risk coefficient and K_NT is the obstacle-type risk coefficient. In some embodiments, the acoustic operating-condition risk coefficient includes a rapid acceleration coefficient, determined by the following equation: K_acc = K_accN × f_accN, where K_accN is a set rapid acceleration proportionality coefficient and f_accN is the rapid-acceleration tire noise frequency determined by the second sensing device. In some embodiments, the acoustic operating-condition risk coefficient includes an emergency braking coefficient, determined by the following equation: K_bra = K_braN × f_braN, where K_braN is a set emergency braking proportionality coefficient and f_braN is the emergency-braking tire noise frequency determined by the second sensing device. In some embodiments, the acoustic operating-condition risk coefficient includes a steady driving coefficient, equal to the set steady driving proportionality coefficient.
Specifically, the acoustic target risk level is calculated as: K_Nt = K_NC × K_NT, where K_NC is the acoustic operating-condition risk coefficient determined from the vehicle's running condition, taking the value K_acc, K_bra, or K_sta respectively.
K_acc = K_accN × f_accN, where K_accN is a predetermined rapid acceleration proportionality coefficient and f_accN is the tire noise frequency during rapid acceleration of the vehicle, output in the component acoustic frame.
K_bra = K_braN × f_braN, where K_braN is a predetermined emergency braking proportionality coefficient and f_braN is the tire noise frequency during emergency braking of the vehicle, output in the component acoustic frame.
K_sta = 1, where K_sta is a preset steady driving proportionality coefficient.
TABLE 4. K_NC parameter value example

Condition value | Meaning of condition value | K_NC value
1               | Emergency braking          | K_bra
2               | Rapid acceleration         | K_acc
3               | Steady driving             | K_sta
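The risk-level calculation K_Nt = K_NC × K_NT with the condition table above can be sketched as follows (illustrative; the condition labels and default arguments are ours):

```python
def acoustic_risk(condition: str, k_nt: float,
                  f_accn: float = 0.0, f_bran: float = 0.0,
                  k_accn: float = 1.0, k_bran: float = 1.0) -> float:
    """Acoustic target risk level K_Nt = K_NC * K_NT.

    condition selects the operating-condition coefficient K_NC as in Table 4;
    k_nt is the vehicle-type coefficient K_NT (one of the preset K3..K8).
    """
    if condition == "rapid_acceleration":
        k_nc = k_accn * f_accn   # K_acc = K_accN * f_accN
    elif condition == "emergency_braking":
        k_nc = k_bran * f_bran   # K_bra = K_braN * f_braN
    elif condition == "steady_driving":
        k_nc = 1.0               # K_sta = 1
    else:
        raise ValueError(f"unknown condition: {condition}")
    return k_nc * k_nt
```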
The K_NT is the acoustic whole-vehicle type risk coefficient determined according to the vehicle type, taking preset fixed threshold values K3, K4, K5, K6, K7, and K8. The values of K3 through K8 are determined according to the risk level of the corresponding vehicle; in a particular embodiment their relative size relationship is: 1 = K3 < K4 < K5 < K6 < K7 < K8.
TABLE 5. K_NT parameter value example
(Table image not reproduced: K_NT takes one of the preset values K3 to K8 according to the identified vehicle type.)
As shown in fig. 12, the central processing device outputs a final sensing result according to the fused information determined from the first sensing device and the second sensing device, including the position of the sensed target object, the speed of the sensed target object, the size of the sensed target object, and the acoustic threat level of the sensed target, where the acoustic threat level can be used by the decision and planning algorithm to reasonably avoid the sensed target object.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements to the market, or to enable others of ordinary skill in the art to understand the disclosure.

Claims (30)

1. An on-board sensory system, comprising:
the first sensing device is used for acquiring environment data around the vehicle and determining obstacle information around the vehicle based on the environment data, wherein the obstacle information comprises the size, the coordinates and the motion state of an obstacle;
a second sensing device configured to acquire audio data around a vehicle and determine sound source information around the vehicle based on the audio data, the sound source information including acoustic coordinates and acoustic dimensions of a sound source;
the central processing device is electrically connected with the first sensing device and receives the obstacle information determined by the first sensing device, the central processing device is electrically connected with the second sensing device and receives the sound source information determined by the second sensing device, and the central processing device processes the received obstacle information and the sound source information and identifies obstacles around the vehicle and/or determines the ambient wind speed of the vehicle.
2. The vehicle mounted perception system of claim 1, wherein the second perception device includes:
the system comprises at least one microphone array, a control unit and a display unit, wherein the at least one microphone array is fixedly arranged on a vehicle and used for acquiring audio data around the vehicle;
the electronic acoustic control unit is arranged in the vehicle and is electrically connected with the at least one microphone array, the electronic acoustic control unit is electrically connected with the central processing device, and the electronic acoustic control unit is used for receiving and processing audio data acquired by the at least one microphone array to determine the sound source information and sending the sound source information to the central processing device.
3. The vehicle-mounted sensing system according to claim 2, wherein the at least one microphone array is arranged at any one position or a combination of a plurality of positions of a vehicle body sheet metal part between a B column and a ceiling in a vehicle, a vehicle body sheet metal part between a B column and a floor in the vehicle, a vehicle body sheet metal part between an A column and the ceiling in the vehicle, a vehicle body sheet metal part between an A column and the floor in the vehicle, a vehicle body sheet metal part between a C column and the ceiling in the vehicle, a vehicle body sheet metal part between a C column and the floor in the vehicle, and a position below a front row seat.
4. The vehicle-mounted sensing system of claim 2, wherein the acoustic electronic control unit is fixedly arranged on an armrest sheet metal part inside the vehicle.
5. The vehicle mounted perception system of claim 1, wherein the first perception device includes:
the millimeter wave radar module is fixedly arranged on the vehicle and used for acquiring environmental data around the vehicle;
the camera module is fixedly arranged on the vehicle and used for acquiring image data around the vehicle;
the first processing unit is electrically connected with the millimeter wave radar module and the camera module and is used for receiving and processing the environment data and the image data to determine obstacle information around the vehicle and sending the obstacle information to the central processing device.
6. The vehicle sensing system of claim 5, wherein the millimeter wave radar module is disposed at any one or more of a midpoint location of a front bumper of the vehicle, locations of both ends of the front bumper of the vehicle, a midpoint location of a rear bumper of the vehicle, a left side wall location of the vehicle, and a right side wall location of the vehicle.
7. The vehicle-mounted sensing system of claim 6, wherein the millimeter wave radar module comprises a millimeter wave radar control unit and at least one millimeter wave radar, the millimeter wave radar control unit is electrically connected with the at least one millimeter wave radar, and the millimeter wave radar control unit receives and processes signals of the millimeter wave radar.
8. The vehicle mounted perception system of claim 5, wherein the camera module is disposed at any one or a combination of a center point of a heat dissipation grill, a right rear view mirror housing, a left rear view mirror housing, and a center point of a rear bumper.
9. The vehicle-mounted perception system of claim 8, wherein the camera module includes a camera control unit and at least one camera, the camera control unit is electrically connected with the at least one camera, and the camera control unit receives and processes signals of the camera.
10. The vehicle-mounted sensing system of claim 5, wherein the first sensing device further comprises a lidar module electrically connected to the first processing unit, the lidar module configured to acquire environmental data around the vehicle and transmit the environmental data to the first processing unit.
11. A data processing method using the vehicle-mounted sensing system according to any one of claims 1 to 10, comprising:
acquiring environment data around the vehicle by using a first sensing device, and determining obstacle information around the vehicle based on the environment data, wherein the obstacle information comprises the size, the coordinates and the motion state of an obstacle;
acquiring audio data around the vehicle by using a second sensing device, and determining sound source information around the vehicle based on the audio data, wherein the sound source information comprises acoustic coordinates and acoustic sizes of a sound source;
and electrically connecting a central processing unit with the first sensing device, receiving the obstacle information determined by the first sensing device, electrically connecting the central processing unit with the second sensing device, receiving the sound source information determined by the second sensing device, and processing the obstacle information and the sound source information by the central processing unit, identifying obstacles around the vehicle and/or determining the ambient wind speed of the vehicle.
12. The method according to claim 11, wherein the step of acquiring audio data around the vehicle using the second sensing device and determining sound source information around the vehicle based on the audio data comprises:
acquiring an ambient audio data set including audio files of various vehicles, sounds made under various driving conditions, various road conditions, and various weather environments using at least one microphone array;
determining a sound source feature identification algorithm according to the environment audio data set by utilizing an acoustic electronic control unit;
collecting audio data around a vehicle by using the at least one microphone array, receiving the audio data around the vehicle by using the acoustic electronic control unit and identifying sound source characteristics based on a sound source characteristic identification algorithm, wherein the sound source characteristics comprise components, categories and conditions of emitted sound;
the acoustic electronic control unit determines the acoustic size of the sound source according to the sound source characteristics and determines the acoustic coordinates of the sound source based on a sound source localization algorithm according to the audio data around the vehicle.
13. The method of claim 12, wherein the component of the acoustic source signature comprises any one or a combination of tire noise, engine noise, exhaust noise, rearview mirror wind noise, and vehicle speaker audio including alarms from ambulances, fire trucks, police cars.
14. The method of claim 13, wherein the class of acoustic source features comprises passenger cars, commercial passenger cars, trucks, ambulances, fire trucks, police cars, and sources of sound that are not identifiable.
15. The method of claim 14, wherein the conditions of the acoustic source signature include emergency braking, rapid acceleration, and steady travel.
16. The method of claim 15, wherein the sound source information further comprises an acoustic frame of the sound source, wherein the acoustic frame is a spatial distribution of acoustic sizes of the sound source.
17. The method according to claim 16, characterized in that the step of the acoustic electronic control unit determining the acoustic frame of the acoustic source comprises:
determining the current acoustic coordinate of a sound source and the acoustic coordinate of the previous moment;
and taking the direction determined by the connection line of the current acoustic coordinate of the sound source and the acoustic coordinate of the previous moment as the length direction of the acoustic frame, taking the current acoustic coordinate of the sound source as the center of the acoustic frame, and taking the length and width of the acoustic size of the sound source as the length and width of the acoustic frame.
18. The method according to claim 17, wherein the step of determining the acoustic frame by the acoustic electronic control unit when the acoustic source is a whole vehicle comprises:
when the relative distance change between the center coordinates of the acoustic frames of the plurality of sound sources is smaller than a set first threshold value, judging that the plurality of sound sources belong to the same vehicle;
and taking the smallest rectangular frame of the acoustic frames of the plurality of sound sources belonging to the same vehicle as the acoustic frame of the whole vehicle.
19. The method of claim 18, wherein the sound source information further comprises wind speed sound source center coordinates, wind speed sound source frequency, wind speed sound source intensity.
20. The method of claim 19, wherein the step of determining the sound source information for a wind speed sound source comprises:
collecting a wind noise sound data set by using at least one microphone array, wherein the wind noise sound data set comprises audio files of air vortex sounds when wind blows over different static obstacles at different wind speeds;
determining a wind speed characteristic identification algorithm according to the wind noise sound data set by utilizing an acoustic electronic control unit;
collecting audio data around the vehicle by using the at least one microphone array, receiving the audio data around the vehicle by using the acoustic electronic control unit, and judging whether the audio data is a wind speed sound source or not based on a wind speed characteristic identification algorithm;
when the audio data around the vehicle collected by the at least one microphone array is judged to be a wind speed sound source, the acoustic electronic control unit determines the acoustic information of the wind speed sound source according to the audio data around the vehicle based on a sound source positioning algorithm.
21. The method of claim 20, wherein the step of determining obstacle information around the vehicle using the first sensing device comprises:
acquiring environment data by using a millimeter wave radar module and a camera module, and identifying static obstacles and dynamic obstacles based on the environment data;
and fusing the recognition results of the millimeter wave radar module and the camera module by using a first processing unit to determine the size, the coordinates and the motion state of the obstacles around the vehicle.
22. The method of claim 20, wherein the step of acquiring obstacle information around the vehicle using the first sensing device comprises:
acquiring environment data by using a millimeter wave radar module, a camera module and a laser radar module, and identifying static obstacles and dynamic obstacles based on the environment data;
and fusing the identification results of the millimeter wave radar module, the camera module and the laser radar module by using a first processing unit so as to determine the size, the coordinates and the motion state of the obstacles around the vehicle.
23. The method according to claim 21 or 22, wherein the central processing unit determines that the obstacle and the sound source are the same obstacle when the acoustic coordinates of the sound source and the center coordinates of the obstacle are smaller than a second threshold, and determines the spatial distribution of the geometrical size of the obstacle according to a minimum rectangular frame including the acoustic frame and the obstacle sensing frame of the sound source.
24. The method of claim 11, wherein the step of the central processing device determining the ambient wind speed comprises:
determining the center coordinates of a wind speed sound source with the second sensing device, determining the coordinates of a static obstacle with the first sensing device, taking the wind speed sound source center and the static obstacle as a wind speed measurement pair, and determining the distance of the wind speed measurement pair from the wind speed sound source center coordinates and the static obstacle coordinates:
D_NM = √((x_NWM − x_SON)² + (y_NWM − y_SON)²)
determining an included angle between a connecting line of the wind speed measurement pair and the vehicle running direction:
θ_WANM = arctan((y_NWM − y_SON)/(x_NWM − x_SON))
where x_NWM and y_NWM are the coordinates of the wind speed sound source center in the geodetic coordinate system, and x_SON and y_SON are the coordinates of the static obstacle in the geodetic coordinate system;
calculating the average value of the included angles between the multiple groups of wind speed measurement pairs and the vehicle running direction:
θ̄_WA = (Σ_NM θ_WANM) / N_p, where N_p is the number of wind speed measurement pairs;
setting an included-angle threshold Δθ_WANM and a distance threshold MaxD_NM, and taking the wind speed measurement pairs satisfying D_NM < MaxD_NM and |θ_WANM − θ̄_WA| < Δθ_WANM as counting measurement points;
Figure FDA0002942576570000054
f NWi correspond to the ith count measurement pointU is the speed of sound, v is the speed of the vehicle, α is the frequency at which the audio frequency emitted from the center of the wind speed sound source is received i The included angle between the wind speed sound source center of the ith counting measurement point and a vehicle connecting line and the vehicle moving direction is formed;
according to the determined wind noise frequencies of the plurality of counting measurement points, determining the ambient wind speed of the vehicle as:
v_WS = Σ_{i=1..n} K_Si × v_WSi
where i ranges over the total number n of counting measurement points, K_Si is the ambient wind speed estimation weighting coefficient determined from the sound intensity dB_NWi corresponding to the i-th counting measurement point, and v_WSi is the wind speed for the i-th counting measurement point.
25. The method of claim 24, wherein the ambient wind speed direction of the vehicle is:
θ_WA = Σ_{i=1..n} K_Ai × θ_WAi
where K_Ai is the direction weighting coefficient determined from D_Wi, the distance between the wind speed sound source center corresponding to the i-th counting measurement point and the static obstacle; n is the total number of counting measurement points; and θ_WAi is the included angle between the line connecting the wind speed sound source center and the static obstacle in the i-th counting measurement point and the geodetic x axis.
26. The method of claim 25, wherein the component of the ambient wind speed in the direction of travel of the vehicle is v_WSo = v_WS × cos(θ_WA - θ_Ego) - v_Ego, and the component in the perpendicular direction is v_WSa = v_WS × sin(θ_WA - θ_Ego), where θ_Ego is the yaw angle of the vehicle and v_Ego is the speed of the vehicle.
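Claim 26's decomposition can be illustrated with a short sketch (assuming angles in radians and speeds in consistent units; the function and parameter names are illustrative, not taken from the patent):

```python
import math

def wind_components(v_ws, theta_wa, theta_ego, v_ego):
    """Decompose the ambient wind speed per claim 26:
    v_WSo = v_WS * cos(theta_WA - theta_Ego) - v_Ego  (along the travel direction)
    v_WSa = v_WS * sin(theta_WA - theta_Ego)          (perpendicular to it)
    """
    rel = theta_wa - theta_ego            # wind direction relative to vehicle yaw
    v_wso = v_ws * math.cos(rel) - v_ego  # travel-direction component
    v_wsa = v_ws * math.sin(rel)          # perpendicular component
    return v_wso, v_wsa
```

With the wind aligned to the vehicle heading (θ_WA = θ_Ego), the perpendicular component vanishes and the travel-direction component reduces to v_WS - v_Ego.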
27. The method of claim 26, wherein the central processing device processes the obstacle information and the sound source information to determine a risk coefficient for an obstacle: K_Nt = K_NC × K_NT, where K_NC is the acoustic operating-condition risk coefficient and K_NT is the risk coefficient of the obstacle type.
28. The method of claim 27, wherein the acoustic operating-condition risk coefficient comprises a sudden-acceleration coefficient determined by the formula K_acc = K_accN × f_accN, where K_accN is a set sudden-acceleration proportionality coefficient and f_accN is the sudden-acceleration tire noise frequency determined by the second sensing device.
29. The method of claim 28, wherein the acoustic operating-condition risk coefficient comprises an emergency-braking coefficient determined by the formula K_bra = K_braN × f_braN, where K_braN is a set emergency-braking proportionality coefficient and f_braN is the emergency-braking tire noise frequency determined by the second sensing device.
30. The method of claim 29, wherein the acoustic operating-condition risk coefficient comprises a steady-driving coefficient equal to a set steady-driving proportionality coefficient.
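Claims 27 through 30 combine into a simple product of coefficients; the sketch below illustrates that structure (assuming scalar coefficients; the condition labels, names, and default values are illustrative, not taken from the patent):

```python
def acoustic_condition_coeff(condition, k_accn=1.0, f_accn=0.0,
                             k_bran=1.0, f_bran=0.0, k_steady=1.0):
    """Acoustic operating-condition risk coefficient K_NC (claims 28-30):
    sudden acceleration: K_acc = K_accN * f_accN
    emergency braking:   K_bra = K_braN * f_braN
    steady driving:      a set proportionality coefficient
    """
    if condition == "acceleration":
        return k_accn * f_accn
    if condition == "braking":
        return k_bran * f_bran
    return k_steady  # steady driving

def obstacle_risk(k_nc, k_nt_type):
    """Overall obstacle risk coefficient K_Nt = K_NC * K_NT (claim 27)."""
    return k_nc * k_nt_type
```

For example, obstacle_risk(acoustic_condition_coeff("braking", k_bran=0.5, f_bran=4.0), 1.5) yields 3.0 under these illustrative values.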
CN202110184659.7A 2021-02-10 2021-02-10 Vehicle-mounted sensing system and data processing method Pending CN114906190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110184659.7A CN114906190A (en) 2021-02-10 2021-02-10 Vehicle-mounted sensing system and data processing method


Publications (1)

Publication Number Publication Date
CN114906190A true CN114906190A (en) 2022-08-16

Family

ID=82761619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110184659.7A Pending CN114906190A (en) 2021-02-10 2021-02-10 Vehicle-mounted sensing system and data processing method

Country Status (1)

Country Link
CN (1) CN114906190A (en)

Similar Documents

Publication Publication Date Title
US9487212B1 (en) Method and system for controlling vehicle with automated driving system
CN208101973U (en) A kind of intelligent driving auxiliary anti-collision system
US10528850B2 (en) Object classification adjustment based on vehicle communication
US11435465B2 (en) Vehicle radar device and system thereof
US11958505B2 (en) Identifying the position of a horn honk or other acoustical information using multiple autonomous vehicles
JP2004537057A5 (en)
JP2004537057A (en) Method and apparatus for determining stationary and / or moving objects
JP2001001851A (en) Alarm device for vehicle
US11403950B2 (en) Moving body detection system
CN215793835U (en) Vehicle-mounted sensing system
CN105572675A (en) Automobile anti-collision early warning method
CN114906190A (en) Vehicle-mounted sensing system and data processing method
CN111409644A (en) Autonomous vehicle and sound feedback adjusting method thereof
WO2019131121A1 (en) Signal processing device and method, and program
US11904848B2 (en) Low-energy impact collision detection
TWI798646B (en) Warning device of vehicle and warning method thereof
CN114750754A (en) Intelligent driving automobile accident detection system
US11186223B2 (en) Large vehicle approach warning device and method for controlling the same
CN114312820A (en) Early warning method for assisting motorcycle driving and millimeter wave radar system
CN111231949A (en) Anti-collision system and method for side road vehicle in heavy rain and dense fog weather
CN219893452U (en) Vehicle-mounted wireless detection system
TWI838737B (en) Vehicle-mounted wireless detection system
KR102595574B1 (en) Methods for recognizing a stop line of an autonomous vehicle
KR20220115695A (en) Accident prediction system for self driving cars
CN112986915A (en) Method, device, system and vehicle for positioning acoustic wave signal source

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination