WO2024087216A1 - Control method, device and vehicle - Google Patents

Control method, device and vehicle

Info

Publication number
WO2024087216A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
vehicle
target
prompt
trend
Prior art date
Application number
PCT/CN2022/128404
Other languages
English (en)
French (fr)
Inventor
赵阳
邓家钰
马瑞
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2022/128404
Publication of WO2024087216A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60Q: ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q5/00: Arrangement or adaptation of acoustic signal devices
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G08G1/0962: Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • the embodiments of the present application relate to the field of intelligent driving, and more specifically, to a control method, device and vehicle.
  • the vehicle can proactively warn the user of surrounding targets while the user is driving. For example, for pedestrians or vehicles crossing in front, a warning can be given through images or text displayed on the vehicle display. For another example, for blind spot warning targets, warnings can be issued through the indicator lights on the rearview mirror.
  • these warning methods require the user's attention to be diverted to the vehicle display or rearview mirror, which is not conducive to the user's driving safety.
  • the embodiments of the present application provide a control method, device and vehicle, which help to improve the driving safety of users and also help to improve the intelligence level of the vehicle.
  • the vehicles in this application may include road vehicles, water vehicles, air vehicles, industrial equipment, agricultural equipment, or entertainment equipment, etc.
  • the vehicle may be a vehicle in the broad sense, for example a road vehicle (such as a commercial vehicle, a passenger car, a motorcycle, a flying car, a train, etc.), an industrial vehicle (such as a forklift, a trailer, a tractor, etc.), an engineering vehicle (such as an excavator, a bulldozer, a crane, etc.), agricultural equipment (such as a mower, a harvester, etc.), amusement equipment, a toy vehicle, etc.
  • the embodiments of this application do not specifically limit the type of vehicle.
  • alternatively, the vehicle may be an airplane, a ship, or the like.
  • a control method, comprising: detecting that a first target is within a warning range of a vehicle, the vehicle comprising a plurality of sound-emitting devices; and controlling at least two of the plurality of sound-emitting devices to emit a prompt sound, where the sound image (or soundstage) of the prompt sound drifts in a direction corresponding to a first motion trend, the first motion trend comprising a relative motion trend between the vehicle and the first target.
  • Sound image can be used to express one or more of the depth, height and width of the sound emitted by the sound-emitting device.
  • Sound image shift can be, for example, a change or movement of the sound image position of the sound in a certain direction within a certain time interval. In this way, a prompt sound with a shifting sound image gives the user the experience of a sound whose spatial position changes.
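  • As a rough illustration (not part of the application text), the idea of a sound image position drifting in a given direction over a time interval can be sketched as follows; the speaker coordinates, duration and step count are hypothetical values chosen only for this example.

```python
# Illustrative sketch: a sound image position that drifts linearly from a start
# position to an end position over a short interval (all values hypothetical).

def sound_image_positions(start, end, duration_s, steps):
    """Return (time, position) samples of a sound image drifting from start to end."""
    samples = []
    for i in range(steps + 1):
        t = duration_s * i / steps
        pos = tuple(s + (e - s) * i / steps for s, e in zip(start, end))
        samples.append((t, pos))
    return samples

# Drift from a passenger-side (east) speaker position toward a driver-side (west) position.
for t, (x, y) in sound_image_positions(start=(1.0, 0.5), end=(-1.0, 0.5), duration_s=0.03, steps=3):
    print(f"t = {t * 1000:.0f} ms: sound image at x = {x:+.2f} m, y = {y:+.2f} m")
```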
  • the relative motion trend of the vehicle and the target can be simulated or expressed by a prompt sound whose sound image drift direction corresponds to the first motion trend.
  • the user does not need to rely on other devices or equipment to work out the motion trend, which helps the user recognize the danger more quickly and safely, helps to improve the user's driving safety, and also helps to improve the intelligence level of the vehicle.
  • the prompt sound with a drifting sound image enables the user to intuitively identify, among multiple prompt sounds, the one that indicates the relative motion trend of the vehicle and the target. For example, taking a road vehicle as the vehicle, in a lane-change scenario the prompt sound whose sound image drift direction corresponds to the first motion trend, the front radar warning sound, and the turn signal sound may all sound at the same time. The driver can still accurately understand the functional semantics of the prompt sound corresponding to the first motion trend, thereby avoiding the driver's negative evaluation of, or confusion about, the vehicle's acoustic prompt sounds, which helps to improve the user's driving experience.
  • the statement that the sound image drift direction of the prompt sound corresponds to the first motion trend can be understood as meaning that the sound image drift direction of the prompt sound is consistent with the first motion trend.
  • the relative motion trend of the vehicle and the first target may be the motion trend of the vehicle relative to the first target, or the relative motion trend of the vehicle and the first target may also be the motion trend of the first target relative to the vehicle.
  • the first target may be a moving target or a stationary target.
  • the method further includes: controlling a sound image drift speed of the prompt sound.
  • an increasing sound image drift speed of the prompt sound may indicate that the distance between the target and the vehicle is getting smaller and smaller.
  • an increasing sound image drift speed of the prompt sound can also indicate that the time to collision (TTC) between the target and the vehicle is becoming shorter and shorter.
  • controlling the sound image drift speed of the prompt sound includes: controlling the sound image drift speed of the prompt sound according to information between the vehicle and the first target.
  • the sound image drift speed of the prompt sound can be controlled by the information between the vehicle and the first target.
  • the user can intuitively understand the information change between the vehicle and the target or the danger level of the target through the sound image drift speed of the prompt sound, thereby controlling the vehicle to avoid collision with the target, which helps to improve the safety of the vehicle and also helps to improve the user's driving experience.
  • the information between the vehicle and the first target includes at least one of the first motion trend, the alarm level, the distance between the vehicle and the first target, and the TTC between the vehicle and the first target.
  • the user can intuitively understand the first motion trend, the alarm level, the distance between the vehicle and the first target, or the TTC between the vehicle and the first target, thereby controlling the vehicle to avoid collision with the target, which helps to improve the safety of the vehicle and also helps to improve the user's driving experience.
  • the method further includes: determining a sound emitting device located at an end of the sound image drifting direction among the at least two sound emitting devices according to a location of the prompted user.
  • the sound device at the end of the sound image drift direction among the at least two sound devices is associated with the location of the user being prompted.
  • the user being prompted can quickly understand that there may be targets around that pose a danger to him/her based on the prompt sound of the sound image drifting in the direction of his/her location, which helps to improve the user's safety awareness and thus helps to avoid the occurrence of safety accidents.
  • the method further includes: determining the sound emitting device among the at least two sound emitting devices that is located at the starting end of the sound image drift direction based on the position of the first target relative to the vehicle when the first target enters the warning range.
  • the sound device at the beginning of the sound image drift direction among the at least two sound devices is associated with the position of the first target relative to the vehicle when it enters the warning range.
  • the user can clearly know the position of the first target relative to the vehicle when it enters the warning range through the sound device at the beginning, so that the user can observe the position of the first target in advance, which helps to avoid the occurrence of safety accidents.
  • the method also includes: predicting the first motion trend based on the state of the first target; wherein, controlling at least two of the multiple sound emitting devices to emit a prompt sound includes: controlling at least two of the multiple sound emitting devices to emit the prompt sound based on the predicted first motion trend.
  • the first motion trend can be predicted from the state of the first target, so that at least two sound-generating devices can be controlled to emit a prompt sound with a drifting sound image according to the predicted first motion trend.
  • the user can learn the movement trend of the first target in the future through the prompt sound whose sound image drift direction corresponds to the first movement trend, which helps the user make driving decisions in advance, helps avoid collision between the vehicle and the first target, and thus helps improve the driving safety of the user.
  • the method further includes: when it is detected that the first target enters a warning range of the vehicle, acquiring a state of the first target.
  • the state of the first target includes one or more of the speed, acceleration, speed direction, heading angle direction, or heading angular velocity of the first target.
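  • For illustration only, a constant-velocity prediction of the target's short-horizon motion from such a state could look like the sketch below; the field names, coordinate convention and one-second horizon are assumptions, not the application's algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class TargetState:
    # Hypothetical state fields mirroring the list above (position, speed, heading).
    x: float            # position east of the vehicle (m)
    y: float            # position north of the vehicle (m)
    speed: float        # speed (m/s)
    heading_rad: float  # heading angle, 0 = east, counter-clockwise positive

def predict_relative_trend(state: TargetState, horizon_s: float = 1.0) -> str:
    """Predict the dominant drift direction over a short horizon, assuming constant velocity."""
    dx = state.speed * math.cos(state.heading_rad) * horizon_s
    dy = state.speed * math.sin(state.heading_rad) * horizon_s
    if abs(dx) >= abs(dy):
        return "east-to-west" if dx < 0 else "west-to-east"
    return "south-to-north" if dy > 0 else "north-to-south"

# A target entering the warning range from the east and heading west at 5 m/s.
print(predict_relative_trend(TargetState(x=15.0, y=2.0, speed=5.0, heading_rad=math.pi)))
```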
  • the method also includes: when the predicted first motion trend is different from the actual relative motion trend of the first target and the vehicle, controlling at least two of the multiple sound emitting devices to emit a prompt sound whose sound image drift direction corresponds to the actual relative motion trend.
  • At least two of the multiple sound-emitting devices can be controlled to emit a prompt sound whose sound image drift direction corresponds to the actual relative motion trend.
  • the user can intuitively understand that the motion trend of the first target relative to the vehicle has changed.
  • the user can promptly learn the actual relative motion trend based on the prompt sound after the sound image drift direction is switched, thereby helping the user make driving decisions in advance. In this way, it helps to avoid a collision between the vehicle and the first target, thereby helping to improve the driving safety of the user.
  • the method also includes: determining the first motion trend based on first data collected by the sensor of the vehicle; or, obtaining second data sent by the cloud server and determining the first motion trend based on the second data.
  • the vehicle can determine the first motion trend based on the first data collected by the sensor or the second data sent by the server, thereby controlling at least two sound-emitting devices to emit a prompt sound, which helps to improve the driving safety of the user and also helps to improve the intelligence level of the vehicle.
  • the first motion trend determined based on first data collected by a sensor or the first motion trend determined through second data sent by a server may be the actual motion trend of the first target relative to the vehicle, or may be the actual motion trend of the vehicle relative to the first target.
  • the first motion trend is determined based on first data collected by the sensor of the vehicle, including: determining the position of the first target at multiple times based on the first data; determining the first motion trend based on the position of the first target at the multiple times.
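  • A minimal sketch of this step, assuming the positions are already expressed in a vehicle-centred east/north frame (the function name and the simple direction test are illustrative, not the application's method):

```python
def motion_trend_from_positions(positions):
    """positions: list of (t, x, y) samples of the first target relative to the vehicle.
    Returns the dominant direction of the displacement over the observed interval."""
    (_, x0, y0), (_, x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "east-to-west" if dx < 0 else "west-to-east"
    return "south-to-north" if dy > 0 else "north-to-south"

# A target observed at three times, moving from x = 20 m to x = 12 m (east to west).
print(motion_trend_from_positions([(0.0, 20.0, 0.0), (0.25, 16.0, 0.0), (0.5, 12.0, 0.0)]))
```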
  • the vehicle includes a mapping relationship between motion trends and sound-emitting devices
  • the method further includes: determining the at least two sound-emitting devices based on the mapping relationship and the first motion trend.
  • the at least two sound-generating devices can be determined by the mapping relationship between the motion trend and the sound-generating device stored in the vehicle and the first motion trend. In this way, the computational overhead of determining the at least two sound-generating devices from multiple sound-generating devices can be saved, which helps to save power consumption of the vehicle.
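  • Such a mapping could be stored as a simple lookup table, as in the sketch below; the east-to-west and south-to-north sequences echo the examples described later (speakers 241-243, and speakers 249, 246 and 243), while the reversed sequences and the dictionary itself are assumptions added only for illustration.

```python
# Hypothetical mapping from a relative motion trend to the ordered speakers used to
# render the drifting prompt sound (speaker numbers follow the examples of FIG. 2).
TREND_TO_SPEAKERS = {
    "east-to-west":   [241, 242, 243],
    "west-to-east":   [243, 242, 241],
    "south-to-north": [249, 246, 243],
    "north-to-south": [243, 246, 249],
}

def speakers_for_trend(trend: str):
    """Look up the at least two speakers for a trend instead of searching all speakers."""
    return TREND_TO_SPEAKERS.get(trend, [])

print(speakers_for_trend("east-to-west"))   # [241, 242, 243]
```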
  • the method also includes: controlling the direction of the ambient light, the direction of the ambient light corresponding to the first motion trend, and the vehicle includes the ambient light; and/or controlling the vibration direction of the steering wheel, the vibration direction corresponding to the first motion trend, and the vehicle includes the steering wheel; and/or controlling the display device to display prompt information, the prompt information is used to prompt the first motion trend, and the vehicle includes the display device.
  • At least one of the ambient light, steering wheel and display device can also be controlled.
  • the user can further clarify the first movement trend, which helps to improve the user's driving safety.
  • the detection of the first target being within the warning range of the vehicle includes: when the distance between the vehicle and the first target is less than or equal to a preset distance, and/or the TTC between the vehicle and the first target is less than or equal to a preset time length, detecting that the first target is within the warning range of the vehicle.
  • the distance and TTC can be used to determine whether the first target has entered the warning range of the vehicle.
  • the target within the warning range can be prompted by a prompt sound with a drifting sound image, which helps the user to intuitively understand the movement trend of the target within the warning range, helps to improve the user's driving safety, and also helps to improve the intelligence level of the vehicle.
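  • As an illustration, the detection condition could be sketched as below; the 20 m and 5 s presets echo the examples given later for the warning range of FIG. 4, and the helper name is hypothetical.

```python
def in_warning_range(distance_m, ttc_s, preset_distance_m=20.0, preset_ttc_s=5.0):
    """First target is within the warning range if the distance and/or the TTC is at or
    below its preset (20 m and 5 s are the illustrative presets mentioned later)."""
    return distance_m <= preset_distance_m or (ttc_s is not None and ttc_s <= preset_ttc_s)

# A target 15 m away and closing at 2 m/s already satisfies the distance criterion.
print(in_warning_range(distance_m=15.0, ttc_s=15.0 / 2.0))   # True
```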
  • controlling at least two of the multiple sound-emitting devices to emit a prompt sound includes: controlling the intensity of the sound emitted by the at least two sound-emitting devices, and/or controlling the time delay of the sound emitted by the at least two sound-emitting devices.
  • the plurality of sound emitting devices are located in a cabin of the vehicle.
  • the plurality of sound-emitting devices may also be located outside the cabin of the vehicle.
  • a control device which includes: a detection unit for detecting that a first target is within a warning range of a vehicle, the vehicle including multiple sound-emitting devices; a control unit for controlling at least two of the multiple sound-emitting devices to emit a prompt sound, the sound image drift direction of the prompt sound corresponds to a first motion trend, and the first motion trend includes the relative motion trend of the vehicle and the first target.
  • control unit is further used to: control the sound image drift speed of the prompt sound.
  • control unit is used to: control the sound image drift speed of the prompt sound according to the information between the vehicle and the first target.
  • the information between the vehicle and the first target includes at least one of the first motion trend, the alarm level, the distance between the vehicle and the first target, and the collision time TTC between the vehicle and the first target.
  • the device further includes: a first determination unit, configured to determine a sound emitting device located at an end of the sound image drift direction among the at least two sound emitting devices according to a location of the prompted user.
  • the device also includes: a second determination unit, used to determine the sound device among the at least two sound devices that is located at the starting end of the sound image drift direction based on the position of the first target relative to the vehicle when entering the warning range.
  • the device also includes: a prediction unit, used to predict the first motion trend according to the state of the first target; wherein the control unit is used to: control at least two of the multiple sound emitting devices to emit the prompt sound according to the predicted first motion trend.
  • control unit is also used to: when the predicted first motion trend is different from the actual relative motion trend of the first target and the vehicle, control at least two of the multiple sound emitting devices to emit a prompt sound whose sound image drift direction corresponds to the actual relative motion trend.
  • the device also includes: a third determination unit, used to determine the first motion trend based on first data collected by the sensor of the vehicle; or, the third determination unit, used to obtain second data sent by the cloud server and determine the first motion trend based on the second data.
  • the third determination unit is used to: determine the position of the first target at multiple times based on the first data; and determine the first motion trend based on the position of the first target at the multiple times.
  • the vehicle includes a mapping relationship between a motion trend and a sound-emitting device
  • the method further includes: determining the at least two sound-emitting devices based on the mapping relationship and the first motion trend.
  • control unit is further used to: control the direction in which the ambient light is lit, the direction in which the ambient light is lit corresponds to the first motion trend, and the vehicle includes the ambient light; and/or control the vibration direction of the steering wheel, the vibration direction corresponds to the first motion trend, and the vehicle includes the steering wheel; and/or control the display device to display prompt information, the prompt information is used to prompt the first motion trend, and the vehicle includes the display device.
  • the detection unit is used to detect that the first target is within the warning range of the vehicle when the distance between the vehicle and the first target is less than or equal to a preset distance and/or the TTC between the vehicle and the first target is less than or equal to a preset time duration.
  • control unit is used to: control the intensity of the sound emitted by the sound emitting device among the at least two sound emitting devices, and/or control the time delay of the sound emitted by the sound emitting device among the at least two sound emitting devices.
  • the plurality of sound emitting devices are located in a cabin of the vehicle.
  • a control method which includes: detecting that a first target is within a warning range of a vehicle, the vehicle including an ambient light; controlling a lighting direction of the ambient light, the lighting direction of the ambient light corresponding to a first motion trend, the first motion trend including a relative motion trend between the vehicle and the first target.
  • the method further includes: controlling a lighting speed of the ambient light.
  • controlling the lighting speed of the ambient light includes: controlling the lighting speed of the ambient light according to information between the vehicle and the first target.
  • the information between the vehicle and the first target includes at least one of the first motion trend, the alarm level, the distance between the vehicle and the first target, and the TTC between the vehicle and the first target.
  • the method also includes: determining the first motion trend based on first data collected by the sensor of the vehicle; or, obtaining second data sent by the cloud server and determining the first motion trend based on the second data.
  • the first motion trend is determined based on the first data collected by the sensor of the vehicle, including: determining the position of the first target at multiple times based on the first data; determining the first motion trend based on the position of the first target at the multiple times.
  • the detection of the first target being within the warning range of the vehicle includes: when the distance between the vehicle and the first target is less than or equal to a preset distance, and/or the TTC between the vehicle and the first target is less than or equal to a preset time length, detecting that the first target is within the warning range of the vehicle.
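  • A rough sketch of lighting an ambient-light strip segment by segment so that the lighting direction and speed follow the motion trend; the segment count, step interval and the set_segment_on interface are hypothetical, since no ambient-light API is given in the application.

```python
import time

def set_segment_on(segment: int):
    # Placeholder for the real ambient-light interface.
    print(f"ambient light segment {segment} on")

def light_ambient_strip(trend: str, num_segments: int = 5, step_interval_s: float = 0.02):
    """Light the segments one after another in the direction of the motion trend;
    a shorter step interval corresponds to a higher lighting speed."""
    order = range(num_segments) if trend == "west-to-east" else range(num_segments - 1, -1, -1)
    for segment in order:
        set_segment_on(segment)
        time.sleep(step_interval_s)

light_ambient_strip("east-to-west")
```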
  • a control device which includes: a detection unit for detecting that a first target is within a warning range of a vehicle, and the vehicle includes an ambient light; a control unit for controlling a lighting direction of the ambient light, and the lighting direction of the ambient light corresponds to a first motion trend, and the first motion trend includes a relative motion trend between the vehicle and the first target.
  • control unit is further used to control a lighting speed of the ambient light.
  • control unit is used to: control the lighting speed of the ambient light according to information between the vehicle and the first target.
  • the information between the vehicle and the first target includes at least one of the first motion trend, the alarm level, the distance between the vehicle and the first target, and the collision time TTC between the vehicle and the first target.
  • the device also includes: a determination unit, used to determine the first motion trend based on first data collected by the sensor of the vehicle; or, to obtain second data sent by the cloud server and determine the first motion trend based on the second data.
  • the determination unit is used to: determine the position of the first target at multiple times based on the first data; and determine the first motion trend based on the position of the first target at the multiple times.
  • the detection unit is used to detect that the first target is within the warning range of the vehicle when the distance between the vehicle and the first target is less than or equal to a preset distance and/or the TTC between the vehicle and the first target is less than or equal to a preset time duration.
  • a control device which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit to enable the control device to perform any possible method in the first aspect or the third aspect.
  • a control system comprising at least two sound-emitting devices and a computing platform, wherein the computing platform comprises any possible device in the second aspect or the fourth aspect, or the computing platform comprises the device described in the fifth aspect.
  • control system further includes one or more sensors.
  • a vehicle which includes any possible device in the second aspect, or includes the device described in the fourth aspect, or includes the device described in the fifth aspect, or includes the system described in the sixth aspect.
  • the vehicle is, for example, a car or another road vehicle.
  • a computer program product comprising computer program code which, when run on a computer, causes the computer to execute any possible method in the first aspect or the third aspect.
  • the above-mentioned computer program code can be stored in whole or in part on a first storage medium, where the first storage medium can be packaged together with the processor or packaged separately from the processor, and the embodiments of the present application do not specifically limit this.
  • a computer-readable medium stores program code, and when the computer program code runs on a computer, the computer is caused to execute any possible method in the first aspect or the third aspect.
  • an embodiment of the present application provides a chip system, which includes a processor for calling a computer program or computer instructions stored in a memory so that the processor executes any possible method in the first aspect or the third aspect mentioned above.
  • the processor is coupled to the memory through an interface.
  • the chip system also includes a memory, in which a computer program or computer instructions are stored.
  • the prompt sound whose sound image drift direction corresponds to the first motion trend can simulate or express the relative motion trend information between the vehicle and the target, so the user does not need to rely on other devices or equipment to work out the motion trend, which helps the user become aware of danger more quickly and safely, helps to improve the user's driving safety, and also helps to improve the intelligence of the vehicle.
  • the prompt sound with a drifting sound image enables the user to intuitively identify, among multiple prompt sounds, the one that indicates the relative motion trend between the vehicle and the target.
  • the user can intuitively understand the first motion trend, the alarm level, the distance between the vehicle and the first target, or the TTC between the vehicle and the first target, thereby controlling the vehicle to avoid collision with the target, which helps to improve the safety of the vehicle and the user's driving experience.
  • the user being prompted can quickly understand, from the prompt sound whose sound image drifts in the direction of his or her position, that there may be targets around that pose a danger, which helps to enhance the user's safety awareness and thus helps to avoid the occurrence of safety accidents.
  • the user can clearly know the position of the first target relative to the vehicle when it enters the warning range through the sound-emitting device at the beginning, so as to observe the position of the first target in advance, which helps to avoid the occurrence of safety accidents.
  • the first motion trend can be predicted from the state of the first target, so that the user can learn the future movement trend of the first target through the prompt sound whose sound image drift direction corresponds to the first motion trend, thereby helping the user to make driving decisions in advance.
  • the user can intuitively understand that the motion trend of the first target relative to the vehicle has changed.
  • the user can promptly learn the actual relative motion trend based on the prompt sound after the sound image drift direction is switched, thereby helping the user make driving decisions in advance.
  • the at least two sound generating devices are determined by the mapping relationship between the motion trend and the sound generating device stored in the vehicle and the first motion trend. In this way, the calculation cost of determining the at least two sound generating devices from multiple sound generating devices can be saved, which helps to save the power consumption of the vehicle.
  • the user can further clarify the first movement trend, which helps to improve the user's driving safety and also helps to improve the intelligence level of the vehicle.
  • FIG. 1 is a functional block diagram of a vehicle provided in an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a vehicle provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the warning range of a vehicle provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the correspondence between the loudspeaker and the position of the target provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a method for dividing a warning range provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a prompt tone with a drifting sound image emitted by at least two speakers provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of speaker distribution provided in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of prompting a user through a prompt sound with a drifting sound image and an ambient light in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of prompting a user through a prompt sound with a drifting sound image and a steering wheel vibration in an embodiment of the present application.
  • FIG. 15 is a schematic flow chart of a control method provided in an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of a control device provided in an embodiment of the present application.
  • FIG. 17 is a schematic block diagram of a control system provided in an embodiment of the present application.
  • prefixes such as “first” and “second” are used only to distinguish different description objects, and have no limiting effect on the position, order, priority, quantity or content of the described objects.
  • the use of prefixes such as ordinal numbers to distinguish description objects in the embodiments of the present application does not constitute a limitation on the described objects.
  • the meaning of "multiple" is two or more.
  • FIG1 is a functional block diagram of a vehicle 100 provided in an embodiment of the present application.
  • the vehicle 100 may include a perception system 120, a sound generating device 130, and a computing platform 150, wherein the perception system 120 may include one or more sensors for sensing information about the environment surrounding the vehicle 100.
  • the perception system 120 may include a positioning system, and the positioning system may be a global positioning system (GPS), a Beidou system, or other positioning systems.
  • the perception system 120 may also include one or more of an inertial measurement unit (IMU), a laser radar, a millimeter wave radar, an ultrasonic radar, and a camera device.
  • the computing platform 150 may include one or more processors, such as processors 151 to 15n (n is a positive integer).
  • the processor is a circuit with signal processing capability.
  • the processor may be a circuit with instruction reading and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU) (which can be understood as a microprocessor), or a digital signal processor (DSP); in another implementation, the processor may implement certain functions through the logical relationship of a hardware circuit, and the logical relationship of the hardware circuit is fixed or reconfigurable, such as a processor that is a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as a field programmable gate array (FPGA).
  • the process of the processor loading a configuration file to implement the hardware circuit configuration can be understood as the process of the processor loading instructions to implement the functions of some or all of the above units.
  • the processor can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), a deep learning processing unit (DPU), etc.
  • the computing platform 150 can also include a memory, the memory is used to store instructions, and some or all of the processors 151 to 15n can call the instructions in the memory and execute the instructions to implement the corresponding functions.
  • the vehicle can actively warn surrounding targets while the user is driving. For example, for pedestrians or vehicles crossing in front, images or texts can be displayed on the vehicle display screen for warning. For another example, for blind spot warning targets, warnings can be issued through the indicator lights on the rearview mirror.
  • these warning methods require the user's attention to be diverted to the vehicle display screen or rearview mirror, which is not conducive to the user's driving safety.
  • the embodiment of the present application provides a control method, device and vehicle, which controls at least two sound-generating devices in the vehicle to emit a prompt sound whose sound image drift direction corresponds to the relative motion trend of the vehicle and the target.
  • the user can learn the motion trend through the sound image drift direction of the prompt sound without affecting his attention, which helps to improve the driving safety of the user; at the same time, it also helps to improve the intelligence level of the vehicle.
  • FIG. 2 is a schematic structural diagram of a vehicle 200 provided in an embodiment of the present application.
  • the vehicle 200 includes a computing platform 210, cameras 221-222, radars 231-238, and speakers 241-249 located in the cockpit.
  • the computing platform 210 can determine whether the target enters the warning range of the vehicle 200 through the data collected by the cameras 221-222 and radars 231-238 outside the cockpit.
  • the computing platform 210 can control at least two of the speakers 241-249 to emit a prompt sound, and the sound image drift direction of the prompt sound corresponds to the relative motion trend of the target and the vehicle 200, or the sound image drift direction of the prompt sound corresponds to the motion trend of the target itself.
  • the relative movement trend between the target and the vehicle 200 may be the movement trend of the target relative to the vehicle 200, or may be the movement trend of the vehicle 200 relative to the target.
  • the above movement trend of the target itself may be the movement trend of the target not relative to any reference object.
  • seat 1 may be a seat in the driver's area
  • seat 2 may be a seat in the passenger's area
  • seat 3 may be a seat in the second row right area
  • seat 4 may be a seat in the second row left area.
  • the above computing platform 210 may be the computing platform 150 in FIG. 1 , the cameras 221 - 222 and the radars 231 - 238 may be located in the perception system 120 in FIG. 1 , and the speakers 241 - 249 may be located in the sound generating device 130 in FIG. 1 .
  • the vehicle 200 includes 2 cameras, 8 radars, and 9 speakers.
  • the number of cameras, radars, and speakers in the vehicle 200 is not specifically limited.
  • one or more speakers may be arranged at the position of the speaker 241.
  • the multiple speakers may form a speaker group.
  • FIG3 shows a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the application scenario is a front crossing traffic alert scenario.
  • Vehicle 1 can collect information about vehicle 2 through a camera and a radar.
  • the information about vehicle 2 includes information such as the distance between vehicle 2 and vehicle 1, the speed of vehicle 2, the speed direction of vehicle 2, and the heading angle of vehicle 2.
  • vehicle 1 can control at least two speakers in vehicle 1 to emit a prompt sound, and the sound image drift direction of the prompt sound corresponds to the relative motion trend of vehicle 2 and vehicle 1, or the sound image drift direction of the prompt sound corresponds to the motion trend of vehicle 2 itself.
  • the relative motion trend between the above vehicle 2 and vehicle 1 may be the motion trend of vehicle 2 relative to vehicle 1, or may be the motion trend of vehicle 1 relative to vehicle 2.
  • the motion trend of vehicle 2 relative to vehicle 1 is taken as an example for description.
  • the movement trend of vehicle 2 relative to vehicle 1 is from east to west.
  • vehicle 1 can control speakers 241-243 to emit a prompt sound, and the sound image drift direction of the prompt sound is from east to west, or the sound image drift direction of the prompt sound is from the front passenger area to the driver area, or the sound image drift direction of the prompt sound is from speaker 241 to speaker 243.
  • the vehicle 1 may also control the speakers 244 - 246 to emit a prompt sound, and the sound image of the prompt sound drifts from east to west.
  • the vehicle 1 may also control the speakers 247 - 249 to emit a prompt sound, and the sound image of the prompt sound drifts from east to west.
  • the driver can accurately learn the movement trend of vehicle 2 relative to vehicle 1 through the sound image drift direction of the prompt sound, and can thus control vehicle 1 according to the movement trend of vehicle 2 relative to vehicle 1, which helps to improve the user's driving safety; at the same time, it also helps to improve the intelligence level of the vehicle.
  • the prompt sound with a drifting sound image enables the user to accurately identify, among multiple prompt sounds, the one that prompts the relative movement trend of vehicle 2 and vehicle 1.
  • for example, the prompt sound whose sound image drift direction corresponds to the relative movement trend of vehicle 2 and vehicle 1, the front radar warning prompt sound, and the turn signal sound may all sound at the same time.
  • even so, the driver can accurately understand the functional semantics of the prompt sound whose sound image drift direction corresponds to the relative movement trend, thereby avoiding the driver's negative evaluation and confused understanding of the vehicle's acoustic prompt sounds, which helps to improve the user's driving experience.
  • the vehicle 1 can control at least two speakers in the vehicle 1 to emit a prompt sound, including: the vehicle 1 controls the playback intensity of the sound emitted by the at least two speakers.
  • for example, vehicle 1 can control the sounds emitted by speakers 241-243 to have no time delay between them while controlling the sound intensities of speakers 241-243 to be different over time.
  • at time T1, the sound intensity of speaker 241 can be controlled to be 40 dB, the sound intensity of speaker 242 can be controlled to be 20 dB, and the sound intensity of speaker 243 can be controlled to be 20 dB;
  • at time T2, the sound intensity of speaker 241 can be controlled to be 20 dB, the sound intensity of speaker 242 can be controlled to be 40 dB, and the sound intensity of speaker 243 can be controlled to be 20 dB;
  • at time T3, the sound intensity of speaker 241 can be controlled to be 20 dB, the sound intensity of speaker 242 can be controlled to be 20 dB, and the sound intensity of speaker 243 can be controlled to be 40 dB.
  • in this way, the speakers 241-243 can be controlled to emit a prompt sound whose sound image drift direction is from east to west, or from speaker 241 to speaker 243.
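  • The intensity schedule above can be written as a small per-step table; the sketch below simply prints that schedule, since no speaker API is given in the application (the output call is a placeholder).

```python
# Intensity panning across speakers 241-243 at times T1, T2 and T3, using the
# 40 dB / 20 dB values from the example above.
INTENSITY_SCHEDULE_DB = [
    {241: 40, 242: 20, 243: 20},   # T1: sound image near speaker 241 (east side)
    {241: 20, 242: 40, 243: 20},   # T2: sound image near speaker 242
    {241: 20, 242: 20, 243: 40},   # T3: sound image near speaker 243 (west side)
]

def apply_intensity_step(levels_db):
    for speaker, level in levels_db.items():
        print(f"speaker {speaker}: {level} dB")   # placeholder for the speaker interface

for step, levels in enumerate(INTENSITY_SCHEDULE_DB, start=1):
    print(f"-- time T{step} --")
    apply_intensity_step(levels)
```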
  • one or more speakers may be included at the position of the speaker 241, the speaker 242 or the speaker 244.
  • controlling the playing intensity of the sound emitted by the speaker 241 to be 40 dB includes: controlling the playing intensity of the sound emitted by at least some of the multiple speakers to be 40 dB.
  • the time interval between time T1 and time T2 , and the time interval between time T2 and time T3 may be equal, or may be unequal. For example, if the time interval between time T1 and time T2 , and the time interval between time T2 and time T3 are equal, the time interval may be 10 milliseconds (ms).
  • the vehicle 1 may broadcast the prompt tone for a preset duration.
  • the preset time duration includes a plurality of broadcast cycles, and in each broadcast cycle, the speakers 241-243 can be controlled to emit a prompt tone whose sound image drift direction is from east to west.
  • the speakers 241-243 may continue to be controlled to emit the prompt sound until the preset time period ends.
  • the broadcasting of the prompt sound may be stopped.
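  • Putting the pieces together, broadcasting in cycles for a preset duration might look like the sketch below; the 10 ms step interval follows the example above, while the preset-duration value and the loop structure are assumptions made only for illustration.

```python
import time

def broadcast_prompt(schedule, apply_step, step_interval_s=0.010, preset_duration_s=2.0):
    """Repeat one broadcast cycle (the T1 -> T2 -> T3 schedule) until the preset duration ends."""
    start = time.monotonic()
    while time.monotonic() - start < preset_duration_s:
        for levels in schedule:        # one broadcast cycle
            apply_step(levels)         # hand the per-speaker levels to the audio interface
            time.sleep(step_interval_s)

# Usage with the same east-to-west intensity schedule as the earlier sketch (repeated inline);
# the duration is kept short here so the example finishes quickly.
schedule = [{241: 40, 242: 20, 243: 20}, {241: 20, 242: 40, 243: 20}, {241: 20, 242: 20, 243: 40}]
broadcast_prompt(schedule, apply_step=print, preset_duration_s=0.1)
```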
  • the time T1 may be the time when the vehicle 2 is detected to enter the warning range of the vehicle 1 .
  • the vehicle 1 may control at least two speakers in the vehicle 1 to emit a prompt sound, including: the vehicle 1 controls the time delay of the at least two speakers emitting the sound.
  • vehicle 1 can control the sound playing intensity of speakers 241-243 to be the same and the time delay of the sound emitted by speakers 241-243 to be different.
  • for example, at a first moment, the playing intensity of the sound emitted by speaker 241 can be controlled to be 20 dB while speakers 242 and 243 are controlled not to emit sound;
  • at a second, later moment, the playing intensity of the sound emitted by speaker 242 can be controlled to be 20 dB while speakers 241 and 243 are controlled not to emit sound;
  • at a third, still later moment, the playing intensity of the sound emitted by speaker 243 can be controlled to be 20 dB while speakers 241 and 242 are controlled not to emit sound.
  • speakers 241-243 can be controlled to emit a prompt tone with the sound image drifting direction from east to west, or emit a prompt tone with the sound image drifting direction from speaker 241 to speaker 243.
  • the vehicle 1 can control at least two speakers in the vehicle 1 to emit a prompt sound, including: the vehicle 1 controls the time delay and playback intensity of the sound emitted by the at least two speakers.
  • vehicle 1 can control the time delay of the sound emitted by speakers 241-243 to be different and control the playing intensity of the sound emitted by speakers 241-243 to be different.
  • for example, at time T1, the playing intensity of the sound emitted by speaker 241 can be controlled to be 40 dB and speakers 242 and 243 can be controlled not to emit sound; at time T1+ΔT, the playing intensity of the sound emitted by speaker 242 can be controlled to be 20 dB and speakers 241 and 243 can be controlled not to emit sound; at time T1+2ΔT, the playing intensity of the sound emitted by speaker 243 can be controlled to be 20 dB and speakers 241 and 242 can be controlled not to emit sound.
  • at time T2, the intensity of the sound emitted by speaker 241 can be controlled to be 20 dB, and speakers 242 and 243 can be controlled not to emit any sound; at time T2+ΔT, the intensity of the sound emitted by speaker 242 can be controlled to be 40 dB, and speakers 241 and 243 can be controlled not to emit any sound; at time T2+2ΔT, the intensity of the sound emitted by speaker 243 can be controlled to be 20 dB, and speakers 241 and 242 can be controlled not to emit any sound.
  • at time T3, the intensity of the sound emitted by speaker 241 can be controlled to be 20 dB, and speakers 242 and 243 can be controlled not to emit sound; at time T3+ΔT, the intensity of the sound emitted by speaker 242 can be controlled to be 20 dB, and speakers 241 and 243 can be controlled not to emit sound; at time T3+2ΔT, the intensity of the sound emitted by speaker 243 can be controlled to be 40 dB, and speakers 241 and 242 can be controlled not to emit sound.
  • speakers 241-243 can be controlled to emit a prompt sound indicating that the sound image drifts from east to west, or to emit a prompt sound indicating that the sound image drifts from speaker 241 to speaker 243.
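  • A sketch combining the two controls (a per-speaker time delay within each phase and a 40 dB emphasis that moves across the phases), following the pattern above; the ΔT value is hypothetical.

```python
# Combined delay and intensity control: within each phase the speakers fire in the order
# 241 -> 242 -> 243, offset by DELTA_T_S, and the 40 dB emphasis moves across the phases.
DELTA_T_S = 0.003   # hypothetical delay between speakers within one phase

def phase_events(phase_start_s, emphasized_speaker):
    """Return (time, speaker, level_dB) events for one phase."""
    events = []
    for i, speaker in enumerate((241, 242, 243)):
        level = 40 if speaker == emphasized_speaker else 20
        events.append((phase_start_s + i * DELTA_T_S, speaker, level))
    return events

T1, T2, T3 = 0.0, 0.010, 0.020   # phases 10 ms apart, as in the earlier intensity example
for start, emphasized in ((T1, 241), (T2, 242), (T3, 243)):
    for t, speaker, level in phase_events(start, emphasized):
        print(f"t = {t * 1000:.1f} ms: speaker {speaker} at {level} dB")
```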
  • FIG4 shows a schematic diagram of the warning range of vehicle 1 provided in an embodiment of the present application.
  • the warning range of vehicle 1 can be determined by the distance between vehicle 1 and the target.
  • the warning range can be a circular area with a certain point on vehicle 1 as the center, and the radius of the circular area can be a preset distance L (for example, 20 meters (m)).
  • the distance between vehicle 1 and the target is used to determine whether the target enters the warning range of vehicle 1.
  • the determination of the warning range in the embodiment of the present application is not limited to this.
  • the TTC between the target and vehicle 1 can also be used to determine whether the target enters the warning range of vehicle 1; for example, when the TTC is less than or equal to a preset time length (for example, 5 seconds (s)), the target is determined to be within the warning range of vehicle 1.
  • the vehicle 1 may also determine the warning range of the vehicle 1 based on data obtained from the cloud server.
  • the vehicle 1 may determine the starting speaker of the at least two speakers according to the position of the vehicle 2 relative to the vehicle 1 .
  • Fig. 5 shows a schematic diagram of the correspondence between the speaker and the position (or area) of the target provided in an embodiment of the present application. It can be seen that vehicle 2 first enters area 1 in the warning range, and at this time, the starting speaker can be determined as speaker 241 according to the correspondence.
  • the above starting loudspeaker can also be understood as the loudspeaker at the beginning of the sound image drifting direction.
  • the vehicle 1 may determine the terminating speaker among the at least two speakers according to the location of the user being prompted.
  • the speaker 243 closest to the driver's position can be selected as the termination speaker. Therefore, when the movement trend of vehicle 2 relative to vehicle 1 is from east to west, the speakers 241-243 are controlled to emit a prompt sound with the sound image drifting direction from east to west.
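  • The choice of starting and terminating speakers might be sketched as follows; the area-to-speaker mapping follows the FIG. 5 example (vehicle 2 entering area 1 maps to speaker 241) and the driver-side terminating speaker 243, while the seat mapping and function name are hypothetical.

```python
# Hypothetical front-row layout ordered east (passenger side) to west (driver side).
FRONT_ROW = [241, 242, 243]
AREA_TO_SPEAKER = {1: 241, 2: 242, 3: 243}                  # entry area -> starting speaker
SEAT_TO_SPEAKER = {"driver": 243, "front_passenger": 241}   # prompted user's seat -> terminating speaker

def speaker_sequence(entry_area: int, prompted_seat: str):
    """Ordered speaker list from the starting speaker to the terminating speaker."""
    i = FRONT_ROW.index(AREA_TO_SPEAKER[entry_area])
    j = FRONT_ROW.index(SEAT_TO_SPEAKER[prompted_seat])
    return FRONT_ROW[i:j + 1] if i <= j else FRONT_ROW[j:i + 1][::-1]

# Vehicle 2 enters area 1 (east side) and the driver is prompted: drift runs 241 -> 242 -> 243.
print(speaker_sequence(entry_area=1, prompted_seat="driver"))
```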
  • FIG6 shows a schematic diagram of another application scenario provided by an embodiment of the present application.
  • as shown in (a) of FIG. 6, when the movement trend of vehicle 2 relative to vehicle 1 changes from east-to-west to south-to-north, control can be switched to speakers 249, 246 and 243, which emit a prompt sound with a drifting sound image, the sound image drift direction of the prompt sound being from south to north, or from speaker 249 to speaker 243.
  • or, control can be switched to speakers 247, 248, 249, 246 and 243, which emit a prompt sound with a drifting sound image, the sound image drift direction of the prompt sound corresponding to the movement trend of vehicle 2 relative to vehicle 1 within the warning range, that is, first from east to west and then from south to north.
  • the driver's perception of the relative motion trend between the target and the vehicle 1 can be improved, which helps the driver to make driving decisions quickly based on the relative motion trend between the target and the vehicle 1, thereby helping to improve the user's driving safety.
  • the above-mentioned termination speaker can also be understood as the speaker at the end of the sound image drifting direction.
  • the above-mentioned termination speaker may also be unrelated to the location of the user being prompted. For example, when the movement trend of vehicle 2 relative to vehicle 1 is adjusted from east to west to south to north, it is possible to switch to controlling speaker 248, speaker 245 and speaker 242 to emit a prompt sound of sound image drift, and the sound image drift direction of the prompt sound is from south to north, or the sound image drift direction of the prompt sound is from speaker 248 to speaker 242.
  • the vehicle 1 may predict the movement trend of the target relative to the vehicle 1 within a certain period of time in the future based on the data collected by the sensor.
  • the vehicle 1 may predict the movement trend of the target relative to the vehicle 1 within a certain period of time in the future according to the state of the target when it enters the warning range of the vehicle 1 .
  • the state of the target when entering the warning range of the vehicle 1 includes but is not limited to one or more of the speed, acceleration, speed direction, heading angle direction or heading angle velocity rate when the target enters the warning range of the vehicle 1.
  • for example, when the predicted movement trend of vehicle 2 relative to vehicle 1 is from east to west, vehicle 1 can determine, according to the predicted movement trend, that vehicle 2 will pass through area 1, area 2 and area 3 within the warning range in sequence. Since area 1 has a corresponding relationship with speaker 241, area 2 with speaker 242, and area 3 with speaker 243, vehicle 1 can control speakers 241-243 to emit a prompt tone whose sound image drift direction corresponds to the movement trend. In this way, when vehicle 2 has entered area 1 but has not yet entered area 2, vehicle 1 can already control speakers 241-243 to emit a prompt tone with a drifting sound image.
  • the prompt tone with a drifting sound image emitted by speakers 241-243 allows the user to know the movement trend of vehicle 2 relative to vehicle 1 in advance, helping the user to make driving decisions in advance according to the movement trend, which helps to improve the driving safety of the user.
  • At least two of the multiple speakers are controlled to emit a prompt sound with the sound image drift direction corresponding to the actual movement trend.
  • vehicle 1 may control at least two speakers to emit a prompt sound in a first sound image drift direction according to the predicted movement trend of vehicle 2 relative to vehicle 1, and the first sound image drift direction corresponds to the predicted movement trend of vehicle 2 relative to vehicle 1.
  • vehicle 1 may switch to controlling at least two speakers to emit a prompt sound in a second sound image drift direction, and the second sound image drift direction corresponds to the actual movement trend.
  • for example, when vehicle 2 is located in area 1 and has not yet entered area 2, vehicle 1 can control speaker 241 to emit a prompt sound, and the prompt sound emitted by speaker 241 is used to prompt that the target has entered the warning range of vehicle 1 and that the target is located in the direction corresponding to speaker 241 (or in area 1 corresponding to speaker 241).
  • vehicle 1 can control speakers 241 and 242 to emit a prompt sound, and the sound image drift direction of the prompt sound corresponds to the movement trend of vehicle 2 relative to vehicle 1, for example, the sound image drift direction of the prompt sound is from east to west, or the sound image drift direction of the prompt sound is from speaker 241 to speaker 242.
  • vehicle 1 can control speakers 241-243 to emit a prompt sound, and the sound image drift direction of the prompt sound corresponds to the movement trend of vehicle 2 relative to vehicle 1, for example, the sound image drift direction of the prompt sound is from east to west, and the sound image drift direction of the prompt sound is from speaker 241 to speaker 243.
  • vehicle 1 can determine the movement trend of vehicle 2 relative to vehicle 1 based on data collected by sensors outside the cabin, and thereby control the at least two sound-emitting devices to emit a prompt sound based on the movement trend, with the sound image drift direction of the prompt sound corresponding to the movement trend.
  • vehicle 1 can determine the movement trend of vehicle 2 relative to vehicle 1 based on sensors (e.g., one or more of a camera, a laser radar, and a millimeter-wave radar) outside the cabin of vehicle 1, thereby controlling at least two sound-generating devices to emit a sound image drift prompt sound. For example, if the movement trend of vehicle 2 relative to vehicle 1 in area 1 is from east to west, then vehicle 1 can control speakers 241-243 to emit a prompt sound, and the sound image drift direction of the prompt sound is from east to west.
  • vehicle 1 may also determine the movement trend of vehicle 2 relative to vehicle 1 based on data sent by the cloud server.
  • the vehicle 1 can control the sound image drift speed of the prompt sound.
  • the vehicle 1 may control the sound image drift speed of the prompt sound, including: the vehicle 1 controls the sound image drift speed of the prompt sound according to the information between the vehicle 1 and the target.
  • the information between the vehicle 1 and the target includes but is not limited to one or more of the movement trend of the target relative to the vehicle 1, the warning level, the distance between the vehicle 1 and the target, and the TTC between the vehicle 1 and the target.
  • the movement trend of the target relative to vehicle 1 includes the acceleration of the target relative to vehicle 1.
  • for example, as the acceleration of vehicle 2 relative to vehicle 1 becomes larger and larger, the sound image drift speed of the prompt sound emitted by speakers 241-243 can be controlled to become faster and faster.
  • Table 1 shows a corresponding relationship between the acceleration of a target relative to vehicle 1 and the sound image drift speed.
  • for example, when the acceleration of the target relative to vehicle 1 falls in the lowest range of Table 1, vehicle 1 may adopt a low sound image drift speed; in a middle range, vehicle 1 can adopt a medium sound image drift speed; and in the highest range, vehicle 1 can adopt a high sound image drift speed.
  • in each case the playing intensities of speakers 241-243 follow the same sequence: first the playing intensity of the sound emitted by speaker 241 is controlled to be 40 dB while the playing intensities of speakers 242 and 243 are controlled to be 20 dB; then the playing intensity of speaker 242 is controlled to be 40 dB while those of speakers 241 and 243 are controlled to be 20 dB; and finally the playing intensity of speaker 243 is controlled to be 40 dB while those of speakers 241 and 242 are controlled to be 20 dB. The lower the sound image drift speed, the more slowly this sequence is stepped through.
  • the corresponding relationship between the acceleration of the target relative to the vehicle 1 and the sound image drift speed shown in Table 1 above is only illustrative, and the embodiments of the present application are not limited thereto.
  • for example, when the acceleration of the target relative to vehicle 1 is less than 5 m/s², a low sound image drift speed is used; when the acceleration of the target relative to vehicle 1 is greater than or equal to 5 m/s², a high sound image drift speed is used.
  • the movement trend of the target relative to the vehicle 1 includes the movement direction of the target relative to the vehicle 1.
  • the target can be divided into a front lateral penetration target and a blind spot warning target according to the movement direction of the target relative to the vehicle 1.
  • the vehicle 2 shown in FIG3 is a front lateral penetration target
  • the vehicle 3 shown in FIG9 can be a blind spot warning target.
  • when the target is determined to be a front lateral penetration target, vehicle 1 can use a high sound image drift speed; or, when the target is determined to be a blind spot warning target, vehicle 1 can use a low sound image drift speed.
  • vehicle 1 can also control the sound image drift speed of the prompt sound according to the warning level.
  • the warning level can be output by the advanced driving assistance system (ADAS) of vehicle 1 based on the data collected by the sensors outside the cockpit. The higher the warning level, the more dangerous the target.
  • Table 2 shows the corresponding relationship between the warning level and the sound image drift speed.
  • when the warning level of the target is warning level 1, a low sound image drift speed may be used.
  • when the warning level of the target is warning level 2, a medium sound image drift speed may be used.
  • when the warning level of the target is warning level 3, a high sound image drift speed may be used.
  • vehicle 1 can also determine the warning level of the target based on the data sent by the cloud server.
  • the warning level can be determined by the cloud server, so that the cloud server can send the warning level of the target to vehicle 1.
  • vehicle 1 can also control the sound image drift speed of the prompt sound according to the distance between the target and vehicle 1.
  • FIG7 shows a schematic diagram of a warning range division method provided in an embodiment of the present application.
  • Table 3 shows the corresponding relationship between the distance between the target and vehicle 1 and the sound image drift speed provided in an embodiment of the present application.
  • for example, when the distance between the target and vehicle 1 falls in the largest distance interval of Table 3, a low sound image drift speed may be used; in an intermediate interval, a medium sound image drift speed may be adopted; and in the smallest interval, a high sound image drift speed may be used.
  • the corresponding relationship between the distance between the target and the vehicle 1 and the sound image drift speed in the above Table 3 is only illustrative, and the embodiments of the present application do not specifically limit this.
  • for example, when the distance between the target and vehicle 1 is greater than 5 m, a low sound image drift speed can be used; when the distance between the target and vehicle 1 is less than or equal to 5 m, a high sound image drift speed can be used.
  • the distance between the above target and the vehicle 1 may be determined by the vehicle 1 based on data collected by a sensor outside the cabin of the vehicle 1, or may be determined by the vehicle 1 based on data sent by a cloud server.
  • the vehicle 1 may also control the sound image drift speed of the prompt sound according to the TTC between the target and the vehicle 1.
  • Table 4 shows the corresponding relationship between the TTC between the target and the vehicle 1 and the sound image drift speed provided by the embodiment of the present application.
  • Table 4 (TTC vs. sound image drift speed): TTC in (5 s, 4 s): low; TTC in [4 s, 3 s): medium; TTC in [3 s, 2 s): high; and so on.
  • when the TTC is 5 s, a low sound image drift speed may be used.
  • when the TTC is 4 s, a medium sound image drift speed may be adopted.
  • when the TTC is 3 s, a high sound image drift speed may be used.
  • the above TTC can be determined by vehicle 1 based on data collected by sensors outside the cabin of vehicle 1, or it can be determined by vehicle 1 based on data sent by the cloud server.
  • the sound image drift speed of the prompt sound can also be controlled in combination with multiple of the movement trend of the target relative to vehicle 1, the warning level, the distance between vehicle 1 and the target, and the TTC between vehicle 1 and the target.
  • for example, the sound image drift speed of the prompt sound can be controlled in combination with the warning level and the TTC between vehicle 1 and the target.
  • Table 5 shows the corresponding relationship between the warning level, the TTC and the sound image drift speed provided in the embodiment of the present application.
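  • Table 5 itself is not reproduced in this excerpt. As one plausible way to combine the two signals, the sketch below maps the warning level (per Table 2) and the TTC (per Table 4) to drift-speed tiers and lets the more urgent tier win; the combination rule and the function names are assumptions, not the table from the original.

```python
# Illustrative sketch: combine warning level and TTC into one drift-speed tier.
# Thresholds follow Tables 2 and 4 above; the "more urgent tier wins" rule is assumed.
TIERS = ["low", "medium", "high"]

def tier_from_warning_level(level):
    # Warning level 1 -> low, 2 -> medium, 3 -> high (Table 2).
    return TIERS[min(max(level, 1), 3) - 1]

def tier_from_ttc(ttc_s):
    # TTC in (5 s, 4 s) -> low, [4 s, 3 s) -> medium, [3 s, 2 s) -> high (Table 4).
    if ttc_s >= 4.0:
        return "low"
    if ttc_s >= 3.0:
        return "medium"
    return "high"

def combined_drift_speed(warning_level, ttc_s):
    tiers = (tier_from_warning_level(warning_level), tier_from_ttc(ttc_s))
    return max(tiers, key=TIERS.index)  # the more urgent tier wins

print(combined_drift_speed(warning_level=1, ttc_s=2.5))  # -> "high"
```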
  • the sound image drift speed of the prompt sound can also be controlled according to the type of the target.
  • Table 6 shows the corresponding relationship between the type of the target and the sound image drift speed.
  • the at least two speakers may also be located at the headrest of the seat.
  • FIG8 shows a schematic diagram of emitting a prompt sound with a drifting sound image through at least two speakers according to an embodiment of the present application.
  • the headrest of the driver's seat includes a speaker 251 and a speaker 252.
  • the speakers 251 and 252 can be controlled to emit a prompt sound of sound image drifting, and the sound image drift direction of the prompt sound is from speaker 251 to speaker 252.
  • Driving blind spot monitoring and warning scenarios include but are not limited to blind spot monitoring and warning, lane change assist warning, rear cross traffic alert, rear cross traffic assist with braking, door open warning (DOW), and other scenarios.
  • FIG9 shows another schematic diagram of an application scenario provided by an embodiment of the present application.
  • This scenario is a rear traffic crossing warning scenario.
  • based on the data collected by the sensors outside the cockpit, it is detected that vehicle 3 accelerates and overtakes vehicle 1 from the right rear of vehicle 1.
  • vehicle 1 can control speakers 247, 245 and 243 to emit a prompt sound with a drifting sound image, and the sound image drift direction of the prompt sound corresponds to the movement trend of vehicle 3 relative to vehicle 1. This prompts the driver that the target is overtaking vehicle 1 from the right rear of vehicle 1.
  • the at least two speakers may be determined from a plurality of speakers in the vehicle 1 according to the area where the user is located.
  • at least two speakers can be determined from the multiple speakers in vehicle 1 based on the area where the user who opens the door is located, so as to control the at least two speakers to emit a prompt sound with a drifting sound image, and the sound image drift direction of the prompt sound corresponds to the movement trend of the target outside the door relative to vehicle 1.
  • FIG10 shows a schematic diagram of the speaker distribution provided in an embodiment of the present application.
  • vehicle 1 may include speakers 1001-1016.
  • the cabin of vehicle 1 may be divided into a main driving area, a co-pilot area, a second row left area, and a second row right area.
  • speakers 1001-1004 are speakers in the co-pilot area
  • speakers 1005-1008 are speakers in the main driving area
  • speakers 1009-1012 are speakers in the second row right area
  • speakers 1013-1016 are speakers in the second row left area.
  • the user in the main driving area may not notice a target moving outside the door (for example, a user riding a bicycle or a motorcycle).
  • the user who is about to get off the vehicle can be prompted by a prompt sound with a drifting sound image, so that the user knows that there is a risk of collision when opening the door.
  • the speaker 1005 and the speaker 1007 can be controlled to emit a prompt sound with a drifting sound image, and the sound image drift direction of the prompt sound is from the rear of vehicle 1 to the head of vehicle 1, or, from speaker 1007 to speaker 1005.
  • determining at least two speakers from the multiple speakers in vehicle 1, so as to control the at least two speakers to emit a prompt sound with a drifting sound image, includes: determining the at least two speakers from the multiple speakers in vehicle 1 according to the area where the user who opens the car door is located and the movement trend of the target.
  • the target within the warning range of the vehicle 1 can be detected.
  • the speaker 1001 and the speaker 1004 can be determined from the speakers 1001-1004 according to the movement trend of the target relative to the vehicle 1.
  • the speaker 1001 and the speaker 1004 are controlled to emit a prompt sound, and the sound image drift direction of the prompt sound is from the speaker 1001 to the speaker 1004.
  • the user in the co-pilot area can determine that there is a risk of collision after opening the door through the prompt sound.
  • the sound image drift direction of the prompt sound can also be from the speaker 1004 to the speaker 1001.
  • the user in the co-pilot area can determine that a target is approaching outside the door through the prompt sound.
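  • A minimal sketch of the door-open-warning speaker selection described above, assuming the area-to-speaker grouping of FIG. 10 (speakers 1001-1016). Which speaker identifiers sit at the front or rear of each area is an assumption here, as are the zone names and the function name; only the general idea (pick the exiting occupant's area, then order the pair according to the target's movement trend) follows the text.

```python
# Illustrative sketch: choose the speaker pair and drift direction for a door
# open warning (DOW) from the exiting occupant's area and the target's trend.
AREA_SPEAKERS = {                       # grouping per FIG. 10
    "co_pilot":         [1001, 1002, 1003, 1004],
    "main_driving":     [1005, 1006, 1007, 1008],
    "second_row_right": [1009, 1010, 1011, 1012],
    "second_row_left":  [1013, 1014, 1015, 1016],
}

def door_open_warning_speakers(area, target_from_rear=True):
    """Return (start_speaker, end_speaker) for the sound image drift.
    Assumes the first/last IDs of an area are its front-most/rear-most speakers."""
    speakers = AREA_SPEAKERS[area]
    front, rear = speakers[0], speakers[-1]
    # Target approaching from behind: drift rear -> front, as in the examples above.
    return (rear, front) if target_from_rear else (front, rear)

start, end = door_open_warning_speakers("co_pilot")
print(f"drift the sound image from speaker {start} to speaker {end}")
```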
  • Fig. 11 is a schematic diagram of another application scenario provided by an embodiment of the present application. As shown in Fig. 11, the scenario is a lane departure warning scenario.
  • Vehicle 1 can determine the information of the marking line of the lane where vehicle 1 is located through data collected by sensors outside the cockpit.
  • speakers 242 and 243 can be controlled to emit a prompt sound with a drifting sound image.
  • the sound image drift direction of the prompt sound can be directed from speaker 243 to speaker 242, and the prompt sound is used to prompt the user that the current vehicle is approaching the marking line on the right; or, the sound image drift direction of the prompt sound can be directed from speaker 242 to speaker 243, and the prompt sound is used to prompt the user to turn the steering wheel to the left, so that the vehicle is in the middle position of lane 1.
  • FIG12 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • a user drives vehicle 1 to park in parking space 1.
  • speakers 1201 and 1202 outside the cockpit can be controlled to emit a prompt sound with a drifting sound image, and the sound image drift direction of the prompt sound can be directed from speaker 1201 to speaker 1202, so that the user standing still can know that vehicle 1 is approaching him/her and can avoid vehicle 1 in time.
  • the vehicle 1 stores a mapping relationship between the motion trend and the sound-emitting device, and the vehicle 1 can determine the at least two speakers based on the mapping relationship and the motion trend of the target relative to the vehicle 1.
  • the target within the warning range of the vehicle 1 can be divided into a front lateral penetrating target and a blind spot target.
  • the motion trend of the front lateral penetrating target relative to the vehicle 1 may include moving from the left side of the front of the vehicle to the right side or from the right side of the front of the vehicle to the left side;
  • the motion trend of the blind spot target relative to the vehicle 1 may include accelerating from the left rear side of the vehicle 1 to overtake the vehicle 1 or accelerating from the right rear side of the vehicle 1 to overtake the vehicle 1.
  • Table 7 shows the relationship between the motion trend, the sound-emitting device, and the sound image drift direction.
  • the blind spot target may also include a blind spot target in a door opening warning scenario, for example, the movement trend of the blind spot target relative to the vehicle 1 is moving from behind the door of the main driver's area to the front, from behind the door of the co-driver's area to the front, from behind the door of the second row left area to the front, or from behind the door of the second row right area to the front.
  • Table 8 shows the relationship between the movement trend, the sound-emitting device, and the sound image drift direction.
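  • Tables 7 and 8 are referenced but not reproduced in this excerpt. The sketch below shows one possible in-memory shape for such a stored mapping, filled only with the speaker choices that appear in the FIG. 3 and FIG. 9 examples above; the key names and the lookup function are assumptions.

```python
# Illustrative sketch of a stored mapping from relative motion trend to the
# speakers used and the sound image drift direction (in the spirit of Tables 7/8).
MOTION_TREND_MAP = {
    "front_cross_east_to_west": {          # e.g. vehicle 2 in FIG. 3
        "speakers": [241, 242, 243],
        "drift": "from speaker 241 to speaker 243",
    },
    "overtake_from_right_rear": {          # e.g. vehicle 3 in FIG. 9
        "speakers": [247, 245, 243],
        "drift": "from speaker 247 to speaker 243",
    },
}

def speakers_for_trend(trend):
    entry = MOTION_TREND_MAP.get(trend)
    if entry is None:
        raise KeyError(f"no mapping stored for motion trend {trend!r}")
    return entry["speakers"], entry["drift"]

print(speakers_for_trend("front_cross_east_to_west"))
```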
  • one or more of the following prompting methods can also be combined: prompting through the lighting direction of the ambient light, prompting through the vibration direction of the steering wheel, and prompting through the vehicle display screen or head up display (HUD).
  • FIG13 shows a schematic diagram of prompting the user through a prompt sound with a drifting sound image and an ambient light in an embodiment of the present application.
  • the ambient light includes a light strip 1310 arranged at the door armrest.
  • the light strip 1310 includes a plurality of lamp beads.
  • the speaker 1013 and the speaker 1015 can be controlled to emit a prompt sound with a drifting sound image, and the direction of the light gradient in the light strip 1310 can be controlled to change from the lamp bead 1311 to the lamp bead 1312.
  • the sound image drift speed of the prompt sound and/or the light gradient speed can also be controlled according to the distance or the TTC between the target and the second row left door.
  • the direction of the light gradient in the light strip 1310 can also be controlled to change from lamp bead 1311 to lamp bead 1312.
  • FIG14 is a schematic diagram of prompting the user through a prompt sound with a drifting sound image and steering wheel vibration in an embodiment of the present application.
  • speakers 241-243 can be controlled to emit a prompt sound whose sound image drifts from speaker 241 to speaker 243, and the steering wheel can be controlled to vibrate counterclockwise.
  • FIG15 shows a schematic flow chart of a control method 1500 provided in an embodiment of the present application.
  • the method 1500 may be executed by a vehicle, or the method 1500 may be executed by the computing platform, or the method 1500 may be executed by a system consisting of the computing platform and at least two sound-emitting devices, or the method 1500 may be executed by a system-on-a-chip (SoC) in the computing platform, or the method 1500 may be executed by a processor in the computing platform.
  • the method 1500 includes: S1510, detecting that a first target is within the warning range of the vehicle, where the vehicle includes multiple sound-emitting devices.
  • the first target may be vehicle 2 or vehicle 3 in the above embodiment.
  • the vehicle may be the vehicle 1 in the above embodiment.
  • the detection of the first target being within the warning range of the vehicle includes: when the distance between the vehicle and the first target is less than or equal to a preset distance, and/or the TTC between the vehicle and the first target is less than or equal to a preset time duration, detecting that the first target is within the warning range of the vehicle.
  • the preset distance may be 20 meters.
  • the preset duration may be 5 seconds.
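  • The warning-range test described above reduces to a simple threshold check; the sketch below uses the 20 m and 5 s example values given in the text, with hypothetical function and constant names.

```python
# Illustrative sketch: the first target is within the warning range when the
# distance condition and/or the TTC condition is met (example values from the text).
PRESET_DISTANCE_M = 20.0
PRESET_TTC_S = 5.0

def in_warning_range(distance_m, ttc_s=None):
    if distance_m <= PRESET_DISTANCE_M:
        return True
    return ttc_s is not None and ttc_s <= PRESET_TTC_S

print(in_warning_range(distance_m=25.0, ttc_s=4.0))  # True: the TTC condition is met
```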
  • S1520 controlling at least two of the multiple sound-emitting devices to emit a prompt sound, wherein a sound image drift direction of the prompt sound corresponds to a first motion trend, and the first motion trend includes a relative motion trend between the vehicle and the first target.
  • the relative motion trend of the vehicle and the first target may include the relative motion trend of the vehicle relative to the first target, or the relative motion trend of the first target relative to the vehicle.
  • controlling at least two of the multiple sound-generating devices to emit a prompt sound includes: when the speed of the vehicle is greater than or equal to a preset speed threshold, controlling at least two of the multiple sound-generating devices to emit a prompt sound.
  • the method 1500 further includes: controlling the sound image drift speed of the prompt sound.
  • controlling the sound image drift speed of the prompt sound includes: controlling the sound image drift speed of the prompt sound according to information between the vehicle and the first target.
  • the information between the vehicle and the first target includes at least one of the first motion trend, the warning level, the distance between the vehicle and the first target, and the collision time TTC between the vehicle and the first target.
  • the method 1500 further includes: determining a sound emitting device located at an end of the sound image drifting direction among the at least two sound emitting devices according to a location of the prompted user.
  • vehicle 1 may determine to prompt the driver, thereby selecting the speaker 243 closest to the driver as the speaker at the end of the sound image drift direction.
  • the vehicle 1 can determine to prompt the user in the passenger seat.
  • a speaker can be selected from speakers 1001-1004 as the speaker at the end of the sound image drift direction.
  • the method 1500 further includes: determining a sound emitting device located at the starting end of the sound image drift direction among the at least two sound emitting devices according to the position of the first target relative to the vehicle when the first target enters the warning range.
  • vehicle 2 first enters area 1 in the warning range.
  • the starting speaker can be determined to be speaker 241 based on the corresponding relationship between the speaker and the position of the target.
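  • The start/end speaker selection described above can be read as two table lookups: the start speaker from the zone the target first enters (a FIG. 5 style correspondence) and the end speaker from the prompted user's seat. The sketch below is only an illustration; the zone labels and the seat-to-speaker entries are assumptions, except that speaker 241 corresponds to the target's entry zone and speaker 243 is nearest the driver, as in the example.

```python
# Illustrative sketch: determine the start and end speakers of the drift direction.
ZONE_TO_START_SPEAKER = {"zone1": 241, "zone2": 242, "zone3": 243}  # FIG. 5 style
SEAT_TO_END_SPEAKER = {"driver": 243}  # e.g. the speaker closest to the driver

def pick_speakers(entry_zone, prompted_seat):
    start = ZONE_TO_START_SPEAKER[entry_zone]
    end = SEAT_TO_END_SPEAKER[prompted_seat]
    return start, end

# Target enters zone 1 and the driver is to be prompted: drift from 241 to 243.
print(pick_speakers("zone1", "driver"))
```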
  • the method 1500 also includes: predicting the first motion trend according to the state of the first target; wherein, controlling at least two of the multiple sound emitting devices to emit a prompt sound includes: controlling at least two of the multiple sound emitting devices to emit the prompt sound according to the predicted first motion trend.
  • the state of the first target includes one or more of the speed, acceleration, velocity direction, heading angle direction or heading angle rate of the first target.
  • predicting the first motion trend according to the state of the first target includes: inputting the state of the first target into a trajectory prediction model to predict the first motion trend.
  • the state of the first target includes a state when the first target enters a warning range of the vehicle.
  • the state of the first target may also include a historical trajectory of the first target before it enters the warning range of the vehicle.
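  • The trajectory prediction model itself is not specified in this excerpt. Purely as a stand-in, the sketch below extrapolates the target's state (relative position, speed, heading and heading rate) with a constant-speed, constant-turn-rate step, which is a common baseline; it is not the model of the embodiments.

```python
import math

# Illustrative stand-in for a trajectory prediction model: extrapolate the target's
# relative position assuming constant speed and constant heading rate.
def predict_positions(x, y, speed, heading, heading_rate, dt=0.1, steps=20):
    """Return predicted (x, y) relative positions over the next steps * dt seconds."""
    positions = []
    for _ in range(steps):
        heading += heading_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        positions.append((x, y))
    return positions

# Target 15 m east of the ego vehicle, moving west at 5 m/s with no turn rate.
print(predict_positions(x=15.0, y=0.0, speed=5.0, heading=math.pi, heading_rate=0.0)[:3])
```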
  • the method 1500 also includes: when the predicted first motion trend is different from the actual relative motion trend of the first target and the vehicle, controlling at least two of the multiple sound emitting devices to emit a prompt sound whose sound image drift direction corresponds to the actual relative motion trend.
  • for example, when vehicle 2 enters the warning range of vehicle 1, vehicle 1 can control speakers 241-243 to emit a prompt sound whose sound image drifts from east to west according to the predicted movement trend of vehicle 2 relative to vehicle 1 (for example, from east to west).
  • when the actual movement trend of vehicle 2 relative to vehicle 1 changes, for example to driving from south to north, vehicle 1 can switch to controlling speakers 243, 246 and 249 to emit a prompt sound whose sound image drifts from south to north.
  • the method 1500 also includes: determining the first motion trend based on first data collected by the sensor of the vehicle; or, obtaining second data sent by the cloud server and determining the first motion trend based on the second data.
  • when detecting that vehicle 2 enters the warning range of vehicle 1, vehicle 1 can determine the movement trend of vehicle 2 relative to vehicle 1 through the sensor data collected by the sensors outside the cabin. For example, vehicle 2 enters the warning range of vehicle 1 at a first moment; vehicle 1 can determine the movement trend of vehicle 2 relative to vehicle 1 within 1 second based on the sensor data collected within 1 second from the first moment. Thus, based on the movement trend, at least two of the multiple speakers are controlled to emit a prompt sound corresponding to the movement trend.
  • the first motion trend is determined based on first data collected by the sensor of the vehicle, including: determining the position of the first target at multiple moments based on the first data; and determining the first motion trend based on the position of the first target at the multiple moments.
  • the position of vehicle 2 relative to vehicle 1 at 100 moments may be collected at 10 ms intervals.
  • the movement trend of vehicle 2 relative to vehicle 1 may be obtained through the position of vehicle 2 relative to vehicle 1 at these 100 moments.
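  • The sketch below shows one simple way to reduce the positions sampled at multiple moments (for example 100 samples at 10 ms intervals, as above) to a coarse relative motion trend; the coordinate convention and the trend labels are assumptions.

```python
# Illustrative sketch: derive a coarse relative motion trend from positions of the
# target sampled at multiple moments (x positive towards east, y towards north; assumed).
def motion_trend(positions):
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "east_to_west" if dx < 0 else "west_to_east"
    return "south_to_north" if dy > 0 else "north_to_south"

# 100 samples at 10 ms intervals of a target passing the ego vehicle westwards.
samples = [(15.0 - 0.05 * i, 8.0) for i in range(100)]
print(motion_trend(samples))  # -> "east_to_west"
```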
  • the vehicle includes a mapping relationship between a motion trend and a sound-emitting device
  • the method 1500 further includes: determining the at least two sound-emitting devices based on the mapping relationship and the first motion trend.
  • the mapping relationship may be as shown in Table 7 or Table 8 above.
  • the method 1500 also includes: controlling the direction of the ambient light, the direction of the ambient light corresponding to the first motion trend, and the vehicle includes the ambient light; and/or controlling the vibration direction of the steering wheel, the vibration direction corresponding to the first motion trend, and the vehicle includes the steering wheel; and/or controlling the display device to display prompt information, the prompt information is used to prompt the first motion trend, and the vehicle includes the display device.
  • for example, the speaker 1013 and the speaker 1015 can be controlled to emit a prompt sound with a drifting sound image and the direction of the light gradient in the light strip 1310 can be controlled to change from lamp bead 1311 to lamp bead 1312.
  • for example, speakers 241-243 can be controlled to emit a prompt sound whose sound image drifts from speaker 241 to speaker 243 and the steering wheel can be controlled to vibrate counterclockwise.
  • controlling at least two of the multiple sound emitting devices to emit a prompt sound includes: controlling the intensity of the sound emitted by the at least two sound emitting devices, and/or controlling the time delay of the sound emitted by the at least two sound emitting devices.
  • the above process of controlling the intensity and/or delay of the sound emitted by the sound-emitting device can refer to the description in the above embodiment, which will not be repeated here.
  • the plurality of sound emitting devices are located in a cabin of the vehicle.
  • the plurality of sound emitting devices may also be located outside the cabin of the vehicle.
  • speakers 1201 and 1202 outside the cabin can be controlled to emit a prompt sound with a drifting sound image, and the sound image drift direction of the prompt sound can be directed from the front of vehicle 1 towards the position of the user standing still, or, the sound image drift direction of the prompt sound can be directed from speaker 1201 to speaker 1202.
  • in this way, the user standing still can know that vehicle 1 is approaching him/her and can avoid vehicle 1 in time.
  • An embodiment of the present application also provides a control method, which includes: detecting that a vehicle deviates from a first marking line of the lane where the vehicle is located, where the vehicle includes multiple sound-emitting devices; and controlling at least two of the multiple sound-emitting devices to emit a prompt sound with a drifting sound image, where the sound image drift direction of the prompt sound is a direction approaching the first marking line, or the sound image drift direction of the prompt sound is a direction away from the first marking line.
  • detecting that the vehicle deviates from a first marking line of a lane where the vehicle is located includes: detecting that a distance between the vehicle and the first marking line gradually decreases.
  • the prompt sound can be used to prompt the user that the vehicle is approaching the first marking line.
  • the prompt sound can be used to prompt the user to drive the vehicle away from the first marking line, or to prompt the user to drive towards the center line of the lane.
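  • The two drift-direction options for the lane departure prompt can be captured in a few lines; the sketch below is illustrative only, and the "warn"/"guide" naming is an assumption.

```python
# Illustrative sketch: choose the sound image drift direction for a lane departure prompt.
# "warn" drifts towards the marking line being approached; "guide" drifts away from it,
# suggesting that the user steer back towards the lane centre.
def lane_departure_drift(side_approached, prompt_style="warn"):
    other = "left" if side_approached == "right" else "right"
    if prompt_style == "warn":
        return f"drift towards the {side_approached} marking line"
    return f"drift towards the {other} side, away from the {side_approached} marking line"

print(lane_departure_drift("right", "warn"))
print(lane_departure_drift("right", "guide"))
```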
  • Embodiments of the present application also provide an apparatus for implementing any of the above methods.
  • an apparatus includes units (or means) for implementing each step executed, in any of the above methods, by a vehicle, or by a computing platform in a vehicle, or by an SoC in the computing platform, or by a processor in the computing platform.
  • Fig. 16 shows a schematic block diagram of a control device 1600 provided in an embodiment of the present application.
  • the device 1600 includes: a detection unit 1610, which is used to detect that a first target is within the warning range of a vehicle, and the vehicle includes multiple sound-generating devices; a control unit 1620, which is used to control at least two of the multiple sound-generating devices to emit a prompt sound, and the sound image drift direction of the prompt sound corresponds to the first motion trend, and the first motion trend includes the relative motion trend of the vehicle and the first target.
  • control unit 1620 is further configured to control a sound image drift speed of the prompt sound.
  • the control unit 1620 is used to control the sound image drift speed of the prompt sound according to the information between the vehicle and the first target.
  • the information between the vehicle and the first target includes at least one of the first motion trend, the warning level, the distance between the vehicle and the first target, and the collision time TTC between the vehicle and the first target.
  • the device 1600 further includes: a first determining unit, configured to determine, according to a location of the prompted user, a sound emitting device located at an end of the sound image drifting direction among the at least two sound emitting devices.
  • the device 1600 further includes: a second determination unit, configured to determine a sound emitting device located at the starting end of the sound image drift direction among the at least two sound emitting devices according to the position of the first target relative to the vehicle when the first target enters the warning range.
  • the device 1600 further includes: a prediction unit, configured to predict the first motion trend according to the state of the first target; wherein the control unit is configured to control at least two of the multiple sound emitting devices to emit the prompt sound according to the predicted first motion trend.
  • the control unit 1620 is also used to: when the predicted first motion trend is different from the actual relative motion trend of the first target and the vehicle, control at least two of the multiple sound emitting devices to emit a prompt sound whose sound image drift direction corresponds to the actual relative motion trend.
  • the device 1600 also includes: a third determination unit, used to determine the first motion trend based on first data collected by the sensor of the vehicle; or, the third determination unit, used to obtain second data sent by the cloud server and determine the first motion trend based on the second data.
  • the third determination unit is used to: determine the position of the first target at multiple moments based on the first data; determine the first motion trend based on the position of the first target at the multiple moments.
  • the vehicle includes a mapping relationship between a motion trend and a sound-emitting device
  • the device 1600 is further configured to determine the at least two sound-emitting devices based on the mapping relationship and the first motion trend.
  • control unit 1620 is also used to: control the direction of the ambient light, the direction of the ambient light corresponding to the first motion trend, and the vehicle includes the ambient light; and/or control the vibration direction of the steering wheel, the vibration direction corresponding to the first motion trend, and the vehicle includes the steering wheel; and/or control the display device to display prompt information, the prompt information is used to prompt the first motion trend, and the vehicle includes the display device.
  • the detection unit 1610 is used to detect that the first target is within the warning range of the vehicle when the distance between the vehicle and the first target is less than or equal to a preset distance and/or the TTC between the vehicle and the first target is less than or equal to a preset time duration.
  • control unit 1620 is used to: control the intensity of the sound emitted by a sound emitting device among the at least two sound emitting devices, and/or control the time delay of the sound emitted by a sound emitting device among the at least two sound emitting devices.
  • the plurality of sound emitting devices are located in a cabin of the vehicle.
  • the detection unit 1610 may be the computing platform in FIG. 1 or a processing circuit, a processor, or a controller in the computing platform. Taking the detection unit 1610 as the processor 151 in the computing platform as an example, the processor 151 may detect whether the first target is in the warning range of the vehicle. For example, the processor 151 may obtain the sensor data collected by the sensor outside the cockpit and determine whether the first target enters the warning range of the vehicle based on the sensor data. For another example, the processor 151 may also determine the movement trend of the first target relative to the vehicle based on the sensor data.
  • control unit 1620 may be the computing platform in FIG. 1 or a processing circuit, a processor or a controller in the computing platform.
  • the processor 151 may send indication information and the movement trend of the first target to the processor 152, and the indication information is used to indicate that the first target enters the warning range of the vehicle.
  • the processor 152 may control the at least two sound-emitting devices to emit a prompt sound with a drifting sound image according to the indication information and the movement trend of the first target, and the sound image drift direction of the prompt sound corresponds to the first movement trend.
  • the processor 152 may also control the sound image drift speed of the prompt sound.
  • the above prediction unit may be the computing platform in Figure 1 or a processing circuit, processor or controller in the computing platform.
  • the processor 15n may determine the state of the first target based on the sensor data collected by the sensor outside the vehicle cabin, and predict the movement trend of the first target relative to the vehicle in the future based on the state of the first target.
  • the functions implemented by the above detection unit 1610 and the functions implemented by the control unit 1620 can be implemented by different processors, or can also be implemented by the same processor, which is not limited in the embodiments of the present application.
  • the division of the units in the above device is only a division of logical functions. In actual implementation, they can be fully or partially integrated into one physical entity, or they can be physically separated.
  • the units in the device can be implemented in the form of a processor calling software; for example, the device includes a processor, the processor is connected to a memory, and instructions are stored in the memory.
  • the processor calls the instructions stored in the memory to implement any of the above methods or realize the functions of the units of the device, wherein the processor is, for example, a general-purpose processor, such as a CPU or a microprocessor, and the memory is a memory in the device or a memory outside the device.
  • the units in the device can be implemented in the form of hardware circuits, and the functions of some or all of the units can be realized by designing the hardware circuits.
  • the hardware circuit can be understood as one or more processors; for example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the above units are realized by designing the logical relationship of the components in the circuit; for another example, in another implementation, the hardware circuit can be realized by PLD.
  • Taking an FPGA as an example, it can include a large number of logic gate circuits, and the connection relationships between the logic gate circuits are configured through a configuration file, so as to realize the functions of some or all of the above units. All units of the above device may be implemented entirely in the form of a processor calling software, or entirely in the form of a hardware circuit, or partially in the form of a processor calling software and the rest in the form of a hardware circuit.
  • a processor is a circuit with the ability to process signals.
  • the processor may be a circuit with the ability to read and run instructions, such as a CPU, a microprocessor, a GPU, or a DSP; in another implementation, the processor may implement certain functions through the logical relationship of a hardware circuit, and the logical relationship of the hardware circuit is fixed or reconfigurable, such as a hardware circuit implemented by an ASIC or PLD, such as an FPGA.
  • the process of the processor loading a configuration document to implement the configuration of the hardware circuit can be understood as the process of the processor loading instructions to implement the functions of some or all of the above units.
  • the processor can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as an NPU, TPU, DPU, etc.
  • each unit in the above device can be one or more processors (or processing circuits) configured to implement the above method, such as: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA, or a combination of at least two of these processor forms.
  • the SoC may include at least one processor for implementing any of the above methods or implementing the functions of each unit of the device.
  • the type of the at least one processor may be different, for example, including CPU and FPGA, CPU and artificial intelligence processor, CPU and GPU, etc.
  • An embodiment of the present application also provides a device, which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit so that the device executes the method or steps executed by the above embodiment.
  • the processing unit may be one of the processors 151-15n shown in FIG. 1.
  • Fig. 17 shows a schematic block diagram of a control system 1700 provided in an embodiment of the present application.
  • the control system 1700 includes at least two sound generating devices and a computing platform, wherein the computing platform may include the control device 1600 described above.
  • control system 1700 also includes one or more sensors.
  • An embodiment of the present application also provides a vehicle, which may include the above-mentioned control device 1600 or control system 1700.
  • the vehicle may be, for example, an automobile.
  • the embodiment of the present application further provides a computer program product, which includes: a computer program code, and when the computer program code is executed on a computer, the computer executes the above method.
  • the embodiment of the present application further provides a computer-readable medium, wherein the computer-readable medium stores a program code.
  • when the computer program code is executed on a computer, the computer executes the above method.
  • each step of the above method can be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • the method disclosed in conjunction with the embodiment of the present application can be directly embodied as a hardware processor for execution, or a combination of hardware and software modules in a processor for execution.
  • the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above method in conjunction with its hardware. To avoid repetition, it is not described in detail here.
  • the memory may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • the size of the serial numbers of the above-mentioned processes does not mean the order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the part of the technical solution of the present application that contributes to the prior art can essentially be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for a computer device (which can be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

A control method, comprising: detecting that a first target is within the warning range of a vehicle (100), the vehicle (100) comprising a plurality of sound-emitting devices (130); and controlling at least two of the plurality of sound-emitting devices (130) to emit a prompt sound, the sound image drift direction of the prompt sound corresponding to a first motion trend, the first motion trend comprising the relative motion trend of the vehicle and the first target. The control method can be applied to an intelligent vehicle or an electric vehicle, helps to improve the driving safety of users, and also helps to improve the intelligence level of the vehicle. Also provided are a control apparatus, a control system, a vehicle, a computer-readable storage medium, and a chip.

Description

一种控制方法、装置和运载工具 技术领域
本申请实施例涉及智能驾驶领域,并且更具体地,涉及一种控制方法、装置和运载工具。
背景技术
随着车辆的智能化,用户在驾驶车辆行驶的过程中,车辆可以主动对周围的目标进行预警。例如,对于前方穿插的行人或者车辆,可以通过车载显示屏显示的图像或者文字进行预警。又例如,对于盲区预警目标,可以通过后视镜上的指示灯进行预警。但是这些预警方式需要用户的注意力转移至车载显示屏或者后视镜上,不利于用户的驾乘安全。
发明内容
本申请实施例提供一种控制方法、装置和运载工具,有助于提升用户的驾乘安全,也有助于提升运载工具的智能化程度。
本申请中的运载工具可以包括路上交通工具、水上交通工具、空中交通工具、工业设备、农业设备、或娱乐设备等。例如运载工具可以为车辆,该车辆为广义概念上的车辆,可以是交通工具(如商用车、乘用车、摩托车、飞行车、火车等),工业车辆(如:叉车、挂车、牵引车等),工程车辆(如挖掘机、推土车、吊车等),农用设备(如割草机、收割机等),游乐设备,玩具车辆等,本申请实施例对车辆的类型不作具体限定。再如,运载工具可以为飞机、或轮船等交通工具。
第一方面,提供了一种控制方法,该方法包括:检测到第一目标处于运载工具的告警范围内,该运载工具包括多个发声装置;控制该多个发声装置中至少两个发声装置发出提示音,该提示音的声像(sound image,或者,soundstage)漂移方向与第一运动趋势相对应,该第一运动趋势包括该运载工具和该第一目标的相对运动趋势。
声像可以用来表现通过发声装置发出的声音的深度、高度和宽度中的一项或多项。声像漂移(sound image shift)例如可以是在某个时间区间内,声音的声像位置朝着某个方向发生变化或者移动。这样,声像漂移的提示音可以给用户带来声音空间位置变化的体验感。
本申请实施例中,通过声像漂移方向与第一运动趋势相对应的提示音可以模拟或者表达运载工具和目标的相对运动趋势,无需用户借助其他装置或者设备来明确该运动趋势,有助于用户更快且更安全地意识到危险,有助于提升用户的驾乘安全,也有助于提升运载工具的智能化程度。
此外,在运载工具内部多个提示音迸发时,通过声像漂移的提示音可以使得用户直观地明确多个提示音中对运载工具和目标的相对运动趋势进行提示的提示音。例如,以该运载工具是车辆为例,在车辆变道场景下,声像漂移方向与第一运动趋势相对应的提示音、前方雷达预警提示音、转向灯声音可能会迸发。驾驶员可以准确理解声像漂移方向与第一 运动趋势相对应的提示音对应的功能语义,从而避免了驾驶员对车辆的声学提示音产生消极评价和混乱理解,有助于提升用户的驾乘体验。
在一些可能的实现方式中,该提示音的声像漂移方向与第一运动趋势相对应可以理解为该提示音的声像漂移方向与第一运动趋势一致。
在一些可能的实现方式中,该运载工具和该第一目标的相对运动趋势可以为该运载工具相对于该第一目标的运动趋势,或者,该运载工具和该第一目标的相对运动趋势也可以为该第一目标相对于该运载工具的运动趋势。
在一些可能的实现方式中,该第一目标可以为运动目标或者静止目标。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:控制该提示音的声像漂移速度。
本申请实施例中,通过控制提示音的声像漂移速度,有助于提升对提示音控制的灵活性。
在一些可能的实现方式中,提示音的声像漂移速度越来越快可以表示目标与运载工具之间的距离越来越近。
在一些可能的实现方式中,提示音的声像漂移速度越来越快可以表示目标与运载工具之间的碰撞时间(time to collision,TTC)越来越短。
结合第一方面,在第一方面的某些实现方式中,该控制该提示音的声像漂移速度,包括:根据该运载工具和该第一目标之间的信息,控制该提示音的声像漂移速度。
本申请实施例中,通过运载工具和第一目标之间的信息,可以控制该提示音的声像漂移速度。这样,用户可以通过提示音的声像漂移速度直观地明确运载工具和目标之间的信息变化情况或者目标的危险程度,从而操控运载工具避免与目标发生碰撞,有助于提升运载工具的安全性,也有助于提升用户的驾乘体验。
结合第一方面,在第一方面的某些实现方式中,该运载工具和该第一目标之间的信息包括该第一运动趋势,告警等级,该运载工具和该第一目标之间的距离和该运载工具和该第一目标之间的TTC中的至少一个。
本申请实施例中,通过控制提示音的声像漂移速度,用户可以直观地明确该第一运动趋势、告警等级、该运载工具和该第一目标之间的距离或者该运载工具和该第一目标之间的TTC,从而操控运载工具避免与目标发生碰撞,有助于提升运载工具的安全性,也有助于提升用户的驾乘体验。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据被提示的用户所在的位置,确定该至少两个发声装置中位于该声像漂移方向末端的发声装置。
本申请实施例中,至少两个发声装置中位于该声像漂移方向末端的发声装置与被提示的用户所在的位置相关联。这样,被提示的用户根据朝自己所在位置的方向进行声像漂移的提示音,快速地理解周围可能存在对自己构成危险的目标,有助于提升用户的安全意识,从而有助于避免安全事故的发生。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据该第一目标进入该告警范围时相对于该运载工具的方位,确定该至少两个发声装置中位于该声像漂移方向始端的发声装置。
本申请实施例中,至少两个发声装置中位于该声像漂移方向始端的发声装置与该第一目标进入该告警范围时相对于该运载工具的方位相关联。这样,用户可以通过该始端的发 声装置明确第一目标进入告警范围时相对于该运载工具的方位,从而可以提前对该第一目标所在的方位进行观察,有助于避免安全事故的发生。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据该第一目标的状态,预测该第一运动趋势;其中,该控制该多个发声装置中至少两个发声装置发出提示音,包括:根据预测得到的该第一运动趋势,控制该多个发声装置中至少两个发声装置发出该提示音。
本申请实施例中,通过第一目标的状态可以预测该第一运动趋势,从而可以根据预测得到的第一运动趋势控制至少两个发声装置发出声像漂移的提示音。这样,用户可以通过该声像漂移方向与该第一运动趋势相对应的提示音获知第一目标未来一段时间的运动趋势,帮助用户提前做出驾驶决策,有助于避免运载工具与第一目标发生碰撞,从而有助于提升用户的驾乘安全。
在一些可能的实现方式中,该方法还包括:在检测到该第一目标进入该运载工具的告警范围时,获取该第一目标的状态。
在一些可能的实现方式中,该第一目标的状态包括该第一目标的速度、加速度、速度方向、航向角方向或者航向角速度率中的一个或者多个。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:在预测得到的该第一运动趋势与该第一目标和该运载工具的实际相对运动趋势不同时,控制该多个发声装置中的至少两个发声装置发出声像漂移方向与该实际相对运动趋势相对应的提示音。
本申请实施例中,在预测得到的该第一运动趋势与实际相对运动趋势不同时,可以控制多个发声装置中至少两个发声装置发出声像漂移方向与该实际相对运动趋势相对应的提示音。这样,通过不同的声像漂移方向的提示音之间的切换,可以使得用户直观地明确第一目标相对于运载工具的运动趋势发生了改变。用户可以根据声像漂移方向切换后的提示音及时获知该实际相对运动趋势,从而帮助用户提前做出驾驶决策。这样,有助于避免运载工具与第一目标发生碰撞,从而有助于提升用户的驾乘安全。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据该运载工具的传感器采集的第一数据,确定该第一运动趋势;或者,获取云端服务器发送的第二数据且根据该第二数据,确定该第一运动趋势。
本申请实施例中,该运载工具可以根据传感器采集的第一数据确定该第一运动趋势或者通过服务器发送的第二数据确定该第一运动趋势,从而控制至少两个发声装置发出提示音,有助于提升用户的驾乘安全,也有助于提升运载工具的智能化程度。
在一些可能的实现方式中,根据传感器采集的第一数据确定该第一运动趋势或者通过服务器发送的第二数据确定该第一运动趋势可以为该第一目标相对于该运载工具的实际运动趋势,或者,也可以为该运载工具相对于该第一目标的实际运动趋势。
结合第一方面,在第一方面的某些实现方式中,该根据该运载工具的传感器采集的第一数据,确定该第一运动趋势,包括:根据该第一数据,确定该第一目标在多个时刻的方位;根据该第一目标在该多个时刻的方位,确定该第一运动趋势。
结合第一方面,在第一方面的某些实现方式中,该运载工具中包括运动趋势与发声装置的映射关系,该方法还包括:根据该映射关系和该第一运动趋势,确定该至少两个发声装置。
本申请实施例中,可以通过运载工具中保存的运动趋势与发声装置的映射关系以及该 第一运动趋势,确定该至少两个发声装置。这样,可以节省从多个发声装置中确定该至少两个发声装置时的计算开销,有助于节省运载工具的功耗。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:控制氛围灯点亮的方向,该氛围灯点亮的方向与该第一运动趋势相对应,该运载工具包括该氛围灯;和/或,控制方向盘的震动方向,该震动方向与该第一运动趋势相对应,该运载工具包括该方向盘;和/或,控制显示装置显示提示信息,该提示信息用于提示该第一运动趋势,该运载工具包括该显示装置。
本申请实施例中,在控制至少两个发声装置发出提示音的同时还可以控制氛围灯、方向盘以及显示装置中的至少一个,通过多部件的联合提示方式,可以使得用户进一步明确该第一运动趋势,有助于提升用户的驾乘体安全。
结合第一方面,在第一方面的某些实现方式中,该检测到第一目标处于运载工具的告警范围内,包括:在该运载工具和该第一目标之间的距离小于或者等于预设距离,和/或,该运载工具和该第一目标之间的TTC小于或者等于预设时长时,检测到该第一目标处于该运载工具的告警范围内。
本申请实施例中,可以通过距离和TTC来判断第一目标是否进入了该运载工具的告警范围。这样,通过声像漂移的提示音可以对告警范围内的目标进行提示,有助于用户直观地明确告警范围内目标的运动趋势,有助于提升用户的驾乘安全,也有助于提升运载工具的智能化程度。
结合第一方面,在第一方面的某些实现方式中,该控制该多个发声装置中至少两个发声装置发出提示音,包括:控制该至少两个发声装置中发声装置发出声音的强度,和/或,控制该至少两个发声装置中发声装置发出声音的时延。
结合第一方面,在第一方面的某些实现方式中,该多个发声装置位于该运载工具的座舱内。
在一些可能的实现方式中,该多个发声装置也可以位于该运载工具的座舱外。
第二方面,提供了一种控制装置,该装置包括:检测单元,用于检测到第一目标处于运载工具的告警范围内,该运载工具包括多个发声装置;控制单元,用于控制该多个发声装置中至少两个发声装置发出提示音,该提示音的声像漂移方向与第一运动趋势相对应,该第一运动趋势包括该运载工具和该第一目标的相对运动趋势。
结合第二方面,在第二方面的某些实现方式中,该控制单元,还用于:控制该提示音的声像漂移速度。
结合第二方面,在第二方面的某些实现方式中,该控制单元,用于:根据该运载工具和该第一目标之间的信息,控制该提示音的声像漂移速度。
结合第二方面,在第二方面的某些实现方式中,该运载工具和该第一目标之间的信息包括该第一运动趋势,告警等级,该运载工具和该第一目标之间的距离和该运载工具和该第一目标之间的碰撞时间TTC中的至少一个。
结合第二方面,在第二方面的某些实现方式中,该装置还包括:第一确定单元,用于根据被提示的用户所在的位置,确定该至少两个发声装置中位于该声像漂移方向末端的发声装置。
结合第二方面,在第二方面的某些实现方式中,该装置还包括:第二确定单元,用于根据该第一目标进入该告警范围时相对于该运载工具的方位,确定该至少两个发声装置中 位于该声像漂移方向始端的发声装置。
结合第二方面,在第二方面的某些实现方式中,该装置还包括:预测单元,用于根据该第一目标的状态,预测该第一运动趋势;其中,该控制单元,用于:根据预测得到的该第一运动趋势,控制该多个发声装置中至少两个发声装置发出该提示音。
结合第二方面,在第二方面的某些实现方式中,该控制单元,还用于:在预测得到的该第一运动趋势与该第一目标和该运载工具的实际相对运动趋势不同时,控制该多个发声装置中的至少两个发声装置发出声像漂移方向与该实际相对运动趋势相对应的提示音。
结合第二方面,在第二方面的某些实现方式中,该装置还包括:第三确定单元,用于根据该运载工具的传感器采集的第一数据,确定该第一运动趋势;或者,该第三确定单元,用于获取云端服务器发送的第二数据且根据该第二数据,确定该第一运动趋势。
结合第二方面,在第二方面的某些实现方式中,该第三确定单元,用于:根据该第一数据,确定该第一目标在多个时刻的方位;根据该第一目标在该多个时刻的方位,确定该第一运动趋势。
结合第二方面,在第二方面的某些实现方式中,该运载工具中包括运动趋势与发声装置的映射关系,该方法还包括:根据该映射关系和该第一运动趋势,确定该至少两个发声装置。
结合第二方面,在第二方面的某些实现方式中,该控制单元,还用于:控制氛围灯点亮的方向,该氛围灯点亮的方向与该第一运动趋势相对应,该运载工具包括该氛围灯;和/或,控制方向盘的震动方向,该震动方向与该第一运动趋势相对应,该运载工具包括该方向盘;和/或,控制显示装置显示提示信息,该提示信息用于提示该第一运动趋势,该运载工具包括该显示装置。
结合第二方面,在第二方面的某些实现方式中,该检测单元,用于:在该运载工具和该第一目标之间的距离小于或者等于预设距离,和/或,该运载工具和该第一目标之间的TTC小于或者等于预设时长时,检测到该第一目标处于该运载工具的告警范围内。
结合第二方面,在第二方面的某些实现方式中,该控制单元,用于:控制该至少两个发声装置中发声装置发出声音的强度,和/或,控制该至少两个发声装置中发声装置发出声音的时延。
结合第二方面,在第二方面的某些实现方式中,该多个发声装置位于该运载工具的座舱内。
第三方面,提供了一种控制方法,该方法包括:检测到第一目标处于运载工具的告警范围内,该运载工具包括氛围灯;控制该氛围灯的点亮方向,该氛围灯的点亮方向与第一运动趋势相对应,该第一运动趋势包括该运载工具和该第一目标的相对运动趋势。
结合第三方面,在第三方面的某些实现方式中,该方法还包括:控制该氛围灯的点亮速度。
结合第三方面,在第三方面的某些实现方式中,该控制该氛围灯的点亮速度,包括:根据该运载工具和该第一目标之间的信息,控制该氛围灯的点亮速度。
结合第三方面,在第三方面的某些实现方式中,该运载工具和该第一目标之间的信息包括该第一运动趋势,告警等级,该运载工具和该第一目标之间的距离和该运载工具和该第一目标之间的TTC中的至少一个。
结合第三方面,在第三方面的某些实现方式中,该方法还包括:根据该运载工具的传 感器采集的第一数据,确定该第一运动趋势;或者,获取云端服务器发送的第二数据且根据该第二数据,确定该第一运动趋势。
结合第三方面,在第三方面的某些实现方式中,该根据该运载工具的传感器采集的第一数据,确定该第一运动趋势,包括:根据该第一数据,确定该第一目标在多个时刻的方位;根据该第一目标在该多个时刻的方位,确定该第一运动趋势。
结合第三方面,在第三方面的某些实现方式中,该检测到第一目标处于运载工具的告警范围内,包括:在该运载工具和该第一目标之间的距离小于或者等于预设距离,和/或,该运载工具和该第一目标之间的TTC小于或者等于预设时长时,检测到该第一目标处于该运载工具的告警范围内。
第四方面,提供了一种控制装置,该装置包括:检测单元,用于检测到第一目标处于运载工具的告警范围内,该运载工具包括氛围灯;控制单元,用于控制该氛围灯的点亮方向,该氛围灯的点亮方向与第一运动趋势相对应,该第一运动趋势包括该运载工具和该第一目标的相对运动趋势。
结合第四方面,在第四方面的某些实现方式中,该控制单元,还用于控制该氛围灯的点亮速度。
结合第四方面,在第四方面的某些实现方式中,该控制单元,用于:根据该运载工具和该第一目标之间的信息,控制该氛围灯的点亮速度。
结合第四方面,在第四方面的某些实现方式中,该运载工具和该第一目标之间的信息包括该第一运动趋势,告警等级,该运载工具和该第一目标之间的距离和该运载工具和该第一目标之间的碰撞时间TTC中的至少一个。
结合第四方面,在第四方面的某些实现方式中,该装置还包括:确定单元,用于根据该运载工具的传感器采集的第一数据,确定该第一运动趋势;或者,获取云端服务器发送的第二数据且根据该第二数据,确定该第一运动趋势。
结合第四方面,在第四方面的某些实现方式中,该确定单元,用于:根据该第一数据,确定该第一目标在多个时刻的方位;根据该第一目标在该多个时刻的方位,确定该第一运动趋势。
结合第四方面,在第四方面的某些实现方式中,该检测单元,用于:在该运载工具和该第一目标之间的距离小于或者等于预设距离,和/或,该运载工具和该第一目标之间的TTC小于或者等于预设时长时,检测到该第一目标处于该运载工具的告警范围内。
第五方面,提供了一种控制装置,该控制装置包括处理单元和存储单元,其中存储单元用于存储指令,处理单元执行存储单元所存储的指令,以使该控制装置执行第一方面或者第三方面中任一种可能的方法。
第六方面,提供了一种控制系统,该系统包括至少两个发声装置和计算平台,其中,该计算平台包括第二方面或者第四方面中任一种可能的装置,或者,该计算平台包括第五方面所述的装置。
在一些可能的实现方式中,该控制系统还包括一个或者多个传感器。
第七方面,提供了一种运载工具,该运载工具包括第二方面中任一种可能的装置,或者,包括第四方面所述的装置,或者,包括第五方面所述的装置,或者,包括第六方面所述的系统。
在一些可能的实现方式中,该运载工具为车辆。
第八方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面或者第三方面中任一种可能的方法。
需要说明的是,上述计算机程序代码可以全部或者部分存储在第一存储介质上,其中第一存储介质可以与处理器封装在一起的,也可以与处理器单独封装,本申请实施例对此不作具体限定。
第九方面,提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面或者第三方面中任一种可能的方法。
第十方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,用于调用存储器中存储的计算机程序或计算机指令,以使得该处理器执行上述第一方面或者第三方面中任一种可能的方法。
结合第十方面,在一种可能的实现方式中,该处理器通过接口与存储器耦合。
结合第十方面,在一种可能的实现方式中,该芯片系统还包括存储器,该存储器中存储有计算机程序或计算机指令。
本申请实施例中,通过声像漂移方向与第一运动趋势相对应的提示音可以模拟或者表达运载工具和目标的相对运动趋势信息,无需用户借助其他装置或者设备来明确该运动趋势,有助于用户更快且更安全地意识到危险,有助于提升用户的驾乘安全,也有助于提升运载工具的智能化程度。此外,在运载工具内部多个提示音迸发时,通过声像漂移的提示音可以使得用户直观地明确多个提示音中对运载工具和目标的相对运动趋势进行提示的提示音。
通过控制提示音的声像漂移速度,有助于提升对提示音的控制的灵活性。
通过控制提示音的声像漂移速度,用户可以直观地明确该第一运动趋势、告警等级、该运载工具和该第一目标之间的距离或者该运载工具和该第一目标之间的TTC,从而操控运载工具避免与目标发生碰撞,有助于提升运载工具的安全性,也有助于提升用户的驾乘体验。
通过将至少两个发声装置中位于该声像漂移方向末端的发声装置与被提示的用户所在的位置相关联,被提示的用户根据朝自己所在位置的方向进行声像漂移的提示音,快速地理解周围可能存在对自己构成危险的目标,有助于提升用户的安全意识,从而有助于避免安全事故的发生。
通过将至少两个发声装置中位于该声像漂移方向始端的发声装置与该第一目标进入该告警范围时相对于该运载工具的方位相关联。用户可以通过始端的发声装置明确第一目标进入告警范围时相对于该运载工具的方位,从而可以提前对该第一目标所在的方位进行观察,有助于避免安全事故的发生。
通过第一目标的状态可以预测该第一运动趋势,这样用户可以通过该声像漂移方向与该第一运动趋势相对应的提示音获知第一目标未来一段时间的运动趋势,帮助用户提前做出驾驶决策。
通过不同的声像漂移方向的提示音之间的切换,可以使得用户直观地明确第一目标相对于运载工具的相对运动趋势发生了改变。用户可以根据声像漂移方向切换后的提示音及时获知该实际相对运动趋势,从而帮助用户提前做出驾驶决策。
通过运载工具中保存的运动趋势与发声装置的映射关系以及该第一运动趋势,确定该至少两个发声装置。这样,可以节省从多个发声装置中确定该至少两个发声装置时的计算开销,有助于节省运载工具的功耗。
通过多部件的联合提示方式,可以使得用户进一步明确该第一运动趋势,有助于提升用户的驾乘安全,也有助于提升运载工具的智能化程度。
附图说明
图1是本申请实施例提供的运载工具的功能框图示意。
图2是本申请实施例提供的车辆的示意性结构图。
图3是本申请实施例提供的应用场景的示意图。
图4是本申请实施例提供的车辆的告警范围的示意图。
图5是本申请实施例提供的扬声器与目标所在方位的对应关系的示意图。
图6是本申请实施例提供的另一应用场景的示意图。
图7是本申请实施例提供的一种告警范围划分方式的示意图。
图8是本申请实施例提供的通过至少两个扬声器发出声像能漂移的提示音的示意图。
图9是本申请实施例提供的应用场景的另一示意图。
图10是本申请实施例提供的扬声器分布的示意图。
图11是本申请实施例提供的另一应用场景的示意图。
图12是本申请实施例提供的另一应用场景的示意图。
图13是本申请实施例中通过声像漂移的提示音以及氛围灯提示用户的示意图。
图14是本申请实施例中通过声像漂移的提示音以及方向盘震动提示用户的示意图。
图15是本申请实施例提供的控制方法的示意性流程图。
图16是本申请实施例提供的控制装置的示意性框图。
图17是本申请实施例提供的控制系统的示意性框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例中采用诸如“第一”、“第二”的前缀词,仅仅为了区分不同的描述对象,对被描述对象的位置、顺序、优先级、数量或内容等没有限定作用。本申请实施例中对序数词等用于区分描述对象的前缀词的使用不对所描述对象构成限制,对所描述对象的陈述参见权利要求或实施例中上下文的描述,不应因为使用这种前缀词而构成多余的限制。此外,在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
图1是本申请实施例提供的运载工具100的一个功能框图示意。运载工具100可以包括感知系统120、发声装置130和计算平台150,其中,感知系统120可以包括感测关于运载工具100周边的环境的信息的一种或多种传感器。例如,感知系统120可以包括定位系统,定位系统可以是全球定位系统(global positioning system,GPS),也可以是北斗系统或者其他定位系统。感知系统120还可以包括惯性测量单元(inertial measurement unit, IMU)、激光雷达、毫米波雷达、超声雷达以及摄像装置中的一种或者多种。
运载工具100的部分或所有功能可以由计算平台150控制。计算平台150可包括一个或多个处理器,例如处理器151至15n(n为正整数),处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如中央处理单元(central processing unit,CPU)、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的硬件电路,例如现场可编程门阵列(field programmable gate array,FPGA)。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,处理器还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(neural network processing unit,NPU)、张量处理单元(tensor processing unit,TPU)、深度学习处理单元(deep learning processing unit,DPU)等。此外,计算平台150还可以包括存储器,存储器用于存储指令,处理器151至15n中的部分或全部处理器可以调用存储器中的指令,执行指令,以实现相应的功能。
如前所述,随着车辆的智能化,用户在驾驶车辆行驶的过程中,车辆可以主动对周围的目标进行预警。例如,对于前方穿插的行人或者车辆,可以通过车载显示屏显示图像或者文字进行预警。又例如,对于盲区预警目标,可以通过后视镜上的指示灯进行预警。但是这些预警方式需要用户的注意力转移至车载显示屏或者后视镜上,不利于用户的驾乘安全。
本申请实施例提供了一种控制方法、装置和运载工具,通过控制运载工具中的至少两个发声装置发出声像漂移方向与该运载工具和目标的相对运动趋势相对应的提示音。这样,用户可以在注意力不受影响的情况下,通过该提示音的声像漂移方向获知该运动趋势,有助于提升用户的驾乘安全;同时,也有助于提升运载工具的智能化程度。
以运载工具100是车辆为例,图2是本申请实施例提供的车辆200的示意性结构图。如图2所示,该车辆200包括计算平台210、位于座舱外的摄像头221-222、雷达231-238以及位于座舱内的扬声器241-249。计算平台210可以通过座舱外的摄像头221-222以及雷达231-238采集的数据,可以确定目标是否进入车辆200的告警范围内。在目标进入车辆200的告警范围内时,计算平台210可以控制扬声器241-249中的至少两个扬声器发出提示音,该提示音的声像漂移方向与该目标和车辆200的相对运动趋势相对应,或者,该提示音的声像漂移方向与该目标本身的运动趋势相对应。
以上该目标和该车辆200的相对运动趋势可以为目标相对于车辆200的运动趋势,或者,也可以为车辆200相对于目标的运动趋势。
以上该目标本身的运动趋势可以为该目标不相对于任何参考物体的运动趋势。
以上图2中的座椅1可以为主驾区域的座椅,座椅2可以为副驾区域的座椅,座椅3可以为二排右侧区域的座椅,座椅4可以为二排左侧区域的座椅。
以上计算平台210可以为图1中的计算平台150,摄像头221-222以及雷达231-238可以位于图1中的感知系统120中,扬声器241-249可以位于图1中的发声装置130中。
以上图2中是以车辆200中包括2个摄像头、8个雷达以及9个扬声器为例进行说明的,本申请实施例中对车辆200中摄像头、雷达以及扬声器的数量并不作具体限定。
以扬声器241为例,扬声器241的位置可以包括一个或者多个扬声器。例如,当该扬声器241的位置上包括多个扬声器时,该多个扬声器可以组成扬声器群。
图3示出了本申请实施例提供的一种应用场景的示意图。该应用场景为前方侧向穿插碰撞预警(front crossing traffic alert)场景。车辆1可以通过摄像头和雷达采集车辆2的信息,例如,车辆2的信息包括车辆2与车辆1之间的距离、车辆2的速度、车辆2的速度方向、车辆2的航向角等信息。在检测到车辆2进入车辆1的告警范围时,车辆1可以控制车辆1中至少两个扬声器发出提示音,该提示音的声像漂移方向与车辆2和车辆1的相对运动趋势相对应,或者,该提示音的声像漂移方向与车辆2本身的运动趋势相对应。
以上车辆2和车辆1的相对运动趋势可以为车辆2相对于车辆1的运动趋势,或者,也可以为车辆1相对于车辆2的运动趋势。以下实施例中以车辆2相对于车辆1的运动趋势为例进行说明。
例如,如图3所示,车辆2相对于车辆1的运动趋势为由东向西行驶,此时车辆1可以控制扬声器241-243发出提示音,该提示音的声像漂移方向为由东向西,或者,该提示音的声像漂移方向由副驾区域指向主驾区域,或者,该提示音的声像漂移方向为由扬声器241指向扬声器243。
又例如,车辆1也可以控制扬声器244-246发出提示音,该提示音的声像漂移方向为由东向西。
又例如,车辆1也可以控制扬声器247-249发出提示音,该提示音的声像漂移方向为由东向西。
这样,驾驶员通过提示音的声像漂移方向可以准确获知车辆2相对于车辆1的运动趋势,从而可以根据车辆2相对于车辆1的运动趋势对车辆1进行操控,有助于提升用户的驾乘安全;同时,也有助于提升车辆的智能化程度。
同时,在车辆1内部多个提示音迸发时,通过声像漂移的提示音可以使得用户准确获知多个提示音中对车辆2和车辆1的相对运动趋势进行提示的提示音。例如,在车辆1变道场景下,声像漂移方向与车辆2和车辆1的相对运动趋势相对应的提示音、前方雷达预警提示音、转向灯声音可能会迸发。驾驶员可以准确理解声像漂移方向与该相对运动趋势相对应的提示音对应的功能语义,从而避免了驾驶员对车辆的声学提示音产生消极评价和混乱理解,有助于提升用户的驾乘体验。
一个实施例中,车辆1可以控制车辆1中至少两个扬声器发出提示音,包括:车辆1控制该至少两个扬声器发出声音的播放强度。
以控制扬声器241-243为例,车辆1可以控制扬声器241-243发出的声音之间没有时延且控制扬声器241-243发出的声音的播放强度不同。例如,在T 1时刻,可以控制扬声器241发出的声音的播放强度为40dB,控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为20dB;在T 1时刻之后的T 2时刻,可以控制扬声器241发出的声音的播放强度为20dB,控制扬声器242发出的声音的播放强度为40dB且控制扬声器243发出的声音的播放强度为20dB;在T 2时刻之后的T 3时刻,可以控制扬声器241发出的声音的播放强度为20dB,控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为40dB。从而可以控制扬声器241-243发出声像漂 移方向为由东向西的提示音,或者,发出声像漂移方向为由扬声器241指向扬声器243的提示音。
一个实施例中,该扬声器241、扬声器242或者扬声器244的位置上可以包括一个或者多个扬声器。以扬声器241的位置上包括多个扬声器为例,控制扬声器241发出的声音的播放强度为40dB,包括:控制该多个扬声器中的至少部分扬声器发出的声音的播放强度为40dB。
一个实施例中,T 1时刻和T 2时刻之间的时间间隔,T 2时刻和T 3时刻之间的时间间隔可以是相等的,或者,也可以是不相等的。例如,若T 1时刻和T 2时刻之间的时间间隔,T 2时刻和T 3时刻之间的时间间隔相等时,该时间间隔可以为10毫秒(millisecond,ms)。
一个实施例中,车辆1可以对该提示音进行预设时长的播报。
示例性的,该预设时长内包括多个播报周期,每个播报周期内可以均可以控制扬声器241-243发出声像漂移方向为由东向西的提示音。
一个实施例中,在该预设时长结束之前检测到车辆2驶出该车辆1的告警范围内,还可以继续控制扬声器241-243发出提示音,直到该预设时长结束。或者,在完成车辆2驶出车辆1的告警范围的时刻所在的播报周期内提示音的播报后,可以停止该提示音的播报。
一个实施例中,该T 1时刻可以为检测到车辆2进入车辆1的告警范围内的时刻。
一个实施例中,车辆1可以控制车辆1中至少两个扬声器发出提示音,包括:车辆1控制该至少两个扬声器发出声音的时延。
以控制扬声器241-243为例,车辆1可以控制扬声器241-243发出的声音的播放强度相同且扬声器241-243发出的声音的时延不同。例如,在T 1时刻,可以控制扬声器241发出的声音的播放强度为20dB且控制扬声器242和扬声器243不发出声音;在T 1+△T时刻,控制扬声器242发出的声音的播放强度为20dB且控制扬声器241和扬声器243不发出声音;在T 1+2△T时刻,控制扬声器243发出的声音的播放强度为20dB且控制扬声器241和扬声器242不发出声音。从而可以控制扬声器241-243发出声像漂移方向为由东向西的提示音,或者,发出声像漂移方向为由扬声器241指向扬声器243的提示音。
一个实施例中,车辆1可以控制车辆1中至少两个扬声器发出提示音,包括:车辆1控制该至少两个扬声器发出声音的时延和播放强度。
以控制扬声器241-243为例,车辆1可以控制扬声器241-243发出的声音的时延不同且控制扬声器241-243发出的声音的播放强度不同。例如,在T 1时刻,可以控制扬声器241发出的声音的播放强度为40dB且控制扬声器242和扬声器243不发出声音;在T 1+△T时刻,控制扬声器242发出的声音的播放强度为20dB且控制扬声器241和扬声器243不发出声音;在T 1+2△T时刻,控制扬声器243发出的声音的播放强度为20dB且控制扬声器241和扬声器242不发出声音。
在T 2时刻,可以控制扬声器241发出的声音的播放强度为20dB且控制扬声器242和扬声器243不发出声音;在T 2+△T时刻,控制扬声器242发出的声音的播放强度为40dB且控制扬声器241和扬声器243不发出声音;在T 2+2△T时刻,控制扬声器243发出的声音的播放强度为20dB且控制扬声器241和扬声器242不发出声音。
在T 3时刻,可以控制扬声器241发出的声音的播放强度为20dB且控制扬声器242和扬声器243不发出声音;在T 3+△T时刻,控制扬声器242发出的声音的播放强度为20dB 且控制扬声器241和扬声器243不发出声音;在T 3+2△T时刻,控制扬声器243发出的声音的播放强度为40dB且控制扬声器241和扬声器242不发出声音。从而可以控制扬声器241-243发出声像漂移方向为由东向西的提示音,或者,发出声像漂移方向为由扬声器241指向扬声器243的提示音。
图4示出了本申请实施例提供的车辆1的告警范围的示意图。如图4所示,可以通过车辆1与目标之间的距离确定车辆1的告警范围,例如,该告警范围可以为以车辆1上的某个点为圆心的圆形区域,该圆形区域的半径可以为预设距离L(例如,20米(meter,m))。在检测到车辆2进入该圆形区域时,可以确定车辆2进入车辆1的告警范围。
以上图4中是以车辆1与目标之间的距离来确定目标是否进入车辆1的告警范围,本申请实施例对于告警范围的确定并不限于此。例如,还可以通过目标与车辆1的TTC来确定目标是否进入车辆1的告警范围。例如,在检测到车辆1与目标之间的TTC小于或者等于预设时长(例如,5秒(second,s))时,可以确定目标进入车辆1的告警范围;或者,在检测到车辆1与目标之间的TTC大于该预设时长时,可以确定目标未进入车辆1的告警范围。
一个实施例中,车辆1还可以根据从云端服务器获取的数据确定车辆1的告警范围。
一个实施例中,车辆1可以通过车辆2相对于车辆1的方位确定该至少两个扬声器中的起始扬声器。
例如,图5示出了本申请实施例提供的扬声器与目标所在方位(或者区域)的对应关系的示意图。可以看出,车辆2首先进入该告警范围中的区域1,此时可以根据该对应关系确定起始扬声器为扬声器241。
以上起始扬声器也可以理解为声像漂移方向上始端的扬声器。
一个实施例中,车辆1可以根据被提示用户所在的位置确定该至少两个扬声器中的终止扬声器。
例如,以图3所示的应用场景为例,在车辆2进入车辆1的告警范围内时,可以确定此时需要对驾驶员进行提示,从而可以选择距离驾驶员所在位置最近的扬声器243为终止扬声器。从而在车辆2相对于车辆1的运动趋势为由东向西运动时,控制扬声器241-243发出声像漂移方向为由东至西的提示音。
又例如,图6示出了本申请实施例提供的另一应用场景的示意图。如图6中的(a)所示,当车辆2相对于车辆1的运动趋势从由东向西调整为由南向北行驶时,可以切换至控制扬声器249、扬声器246和扬声器243发出声像漂移的提示音,该提示音的声像漂移方向为由南向北,或者,该提示音的声像漂移方向为由扬声器249指向扬声器243。
又例如,如图6中的(b)所示,当车辆2相对于车辆1的运动趋势从由东向西调整为由南向北行驶时,可以切换至控制扬声器247、扬声器248、扬声器249、扬声器246和扬声器243发出声像漂移的提示音,该提示音的声像漂移方向与该车辆2在该告警范围内相对于车辆1的运动趋势相对应,或者,该提示音的声像漂移方向为先由东向西,再由南向北。
通过被提示用户所在的位置确定该至少两个扬声器中的终止扬声器的位置,这样可以提升驾驶员对于目标和车辆1的相对运动趋势的感知能力,有助于驾驶员根据目标和车辆1的相对运动趋势快速做出驾驶决策,从而有助于提升用户的驾乘安全。
以上终止扬声器也可以理解为声像漂移方向末端的扬声器。
以上终止扬声器也可以与被提示用户所在的位置不相关。例如,当车辆2相对于车辆1的运动趋势从由东向西调整为由南向北行驶时,可以切换至控制扬声器248、扬声器245和扬声器242发出声像漂移的提示音,该提示音的声像漂移方向为由南向北,或者,该提示音的声像漂移方向为由扬声器248指向扬声器242。
一个实施例中,车辆1可以根据传感器采集的数据预测未来一段时间内目标相对于车辆1的运动趋势。
一个实施例中,车辆1可以根据目标进入车辆1的告警范围时的状态,预测未来一段时间内目标相对于车辆1的运动趋势。
示例性的,目标进入车辆1的告警范围时的状态包括但不限于目标进入车辆1的告警范围内时的速度、加速度、速度方向、航向角方向或者航向角速度率中的一个或者多个。
以图3所示的应用场景为例,当预测得到的车辆2相对于车辆1的运动趋势为由东向西时,车辆1可以根据该预测得到的运动趋势,确定车辆2在未来一段时间内会依次经过告警范围中的区域1、区域2和区域3。由于区域1和扬声器241具有对应关系,区域2和扬声器242具有对应关系且区域3和扬声器243具有对应关系,车辆1可以控制扬声器241-243发出提示音,该提示音的声像漂移方向与该运动趋势相对应。这样,在车辆2进入区域1且未进入区域2时,车辆1就可以控制扬声器241-243发出声像能漂移的提示音。通过扬声器241-243发出的声像漂移的提示音,使得用户提前获知车辆2相对于车辆1的运动趋势,帮助用户根据该运动趋势提前做出驾驶决策,有助于提升用户的驾乘安全。
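作为示意，下面给出一段根据预测得到的运动趋势提前确定扬声器序列的Python示例代码。其中区域与扬声器的对应关系为假设，仅用于说明由预测的区域序列到扬声器序列的映射过程。

```python
# 区域与扬声器的对应关系为假设（与图5的示意对应）
REGION_TO_SPEAKER = {"区域1": 241, "区域2": 242, "区域3": 243}

def speakers_for_predicted_trend(predicted_regions):
    """根据预测得到的目标将依次经过的区域，提前确定发声扬声器序列，
    使得目标刚进入第一个区域、尚未进入后续区域时即可播放与该运动趋势对应的声像漂移提示音。"""
    return [REGION_TO_SPEAKER[region] for region in predicted_regions]

# 示例：预测车辆2将由东向西依次经过区域1、区域2和区域3
print(speakers_for_predicted_trend(["区域1", "区域2", "区域3"]))   # [241, 242, 243]
```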
一个实施例中,在预测得到的车辆2相对于车辆1的运动趋势与车辆2相对于车辆1的实际运动趋势不同时,控制该多个扬声器中的至少两个扬声器发出声像漂移方向与该实际运动趋势相对应的提示音。
例如,在车辆2进入车辆1的告警范围内时,车辆1可以根据预测得到的车辆2相对于车辆1的运动趋势,控制至少两个扬声器发出第一声像漂移方向的提示音,该第一声像漂移方向与预测得到的车辆2相对于车辆1的运动趋势相对应。在预测得到的车辆2相对于车辆1的运动趋势与车辆2相对于车辆1的实际运动趋势不一致时,车辆1可以切换至控制至少两个扬声器发出第二声像漂移方向的提示音,该第二声像漂移方向与该实际运动趋势相对应。
一个实施例中,当车辆2位于区域1且还未进入区域2时,车辆1可以控制扬声器241发出提示音,扬声器241发出的提示音用于提示目标已经进入了车辆1的告警范围且目标位于与扬声器241相对应的方位(或者,与扬声器241相对应的区域1)。当车辆2从区域1驶入区域2时,车辆1可以控制扬声器241和242发出提示音,该提示音的声像漂移方向与车辆2相对于车辆1的运动趋势相对应,例如,该提示音的声像漂移方向为由东向西,或者,该提示音的声像漂移方向为由扬声器241指向扬声器242。当车辆2从区域2驶入区域3时,车辆1可以控制扬声器241-243发出提示音,该提示音的声像漂移方向与车辆2相对于车辆1的运动趋势相对应,例如,该提示音的声像漂移方向为由东向西,该提示音的声像漂移方向为由扬声器241指向扬声器243。
一个实施例中,车辆1可以根据座舱外的传感器采集的数据确定车辆2相对于车辆1的运动趋势,从而根据该运动趋势控制该至少两个发声装置发出提示音,该提示音的声像漂移方向与该运动趋势相对应。
例如,如图5所示,当车辆2位于区域1且还未进入区域2时,车辆1可以根据车辆 1的座舱外的传感器(例如,摄像头、激光雷达以及毫米波雷达中的一种或者多种)确定车辆2相对于车辆1的运动趋势,从而控制至少两个发声装置发出声像漂移的提示音。例如,车辆2在区域1中相对于车辆1的运动趋势为由东向西行驶,那么车辆1可以控制扬声器241-243发出提示音,该提示音的声像漂移方向为由东向西。
一个实施例中,车辆1也可以根据云端服务器发送的数据确定车辆2相对于车辆1的运动趋势。
一个实施例中,车辆1可以控制提示音的声像漂移速度。
一个实施例中,车辆1可以控制提示音的声像漂移速度,包括:车辆1根据车辆1和目标之间的信息,控制提示音的声像漂移速度。例如,车辆1和目标之间的信息包括但不限于目标相对于车辆1的运动趋势、告警等级、车辆1和目标之间的距离以及车辆1和目标之间的TTC中的一个或者多个。
例如,该目标相对于车辆1的运动趋势中包括目标相对于车辆1的加速度。以图3所示的应用场景为例,当车辆2进入车辆1的告警范围后,若车辆2相对于车辆1的加速度越来越大,则可以控制扬声器241-243发出的提示音的声像漂移速度越来越快。表1示出了一种目标相对于车辆1的加速度与声像漂移速度之间的对应关系。
表1
目标相对于车辆1的加速度        声像漂移速度
(0m/s²,2m/s²)                  低
[2m/s²,5m/s²)                  中等
[5m/s²,10m/s²)                 高
例如，在车辆2相对于车辆1的加速度为1米每二次方秒(m/s²)时，车辆1可以采用低的声像漂移速度。示例性的，在T1时刻，可以控制扬声器241发出的声音的播放强度为40dB，控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为20dB；在T1时刻之后的T2时刻，可以控制扬声器241发出的声音的播放强度为20dB，控制扬声器242发出的声音的播放强度为40dB且控制扬声器243发出的声音的播放强度为20dB；在T2时刻之后的T3时刻，可以控制扬声器241发出的声音的播放强度为20dB，控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为40dB。其中，T1时刻和T2时刻之间的时间间隔以及T2时刻和T3时刻之间的时间间隔可以为10ms。
在车辆2相对于车辆1的加速度从1m/s²提高至4m/s²时，车辆1可以采用中等的声像漂移速度。示例性的，在T1时刻，可以控制扬声器241发出的声音的播放强度为40dB，控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为20dB；在T1时刻之后的T4时刻，可以控制扬声器241发出的声音的播放强度为20dB，控制扬声器242发出的声音的播放强度为40dB且控制扬声器243发出的声音的播放强度为20dB；在T4时刻之后的T5时刻，可以控制扬声器241发出的声音的播放强度为20dB，控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为40dB。其中，T1时刻和T4时刻之间的时间间隔以及T4时刻和T5时刻之间的时间间隔可以为6ms。
在车辆2相对于车辆1的加速度从4m/s²提高至8m/s²时，车辆1可以采用高的声像漂移速度。示例性的，在T1时刻，可以控制扬声器241发出的声音的播放强度为40dB，控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为20dB；在T1时刻之后的T6时刻，可以控制扬声器241发出的声音的播放强度为20dB，控制扬声器242发出的声音的播放强度为40dB且控制扬声器243发出的声音的播放强度为20dB；在T6时刻之后的T7时刻，可以控制扬声器241发出的声音的播放强度为20dB，控制扬声器242发出的声音的播放强度为20dB且控制扬声器243发出的声音的播放强度为40dB。其中，T1时刻和T6时刻之间的时间间隔以及T6时刻和T7时刻之间的时间间隔可以为2ms。
以上表1中所示的目标相对于车辆1的加速度与声像漂移速度的对应关系仅仅是示意性的，本申请实施例并不限于此。例如，目标相对于车辆1的加速度小于5m/s²时，采用低的声像漂移速度；目标相对于车辆1的加速度大于或者等于5m/s²时，采用高的声像漂移速度。
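作为示意，下面给出一段根据目标相对于车辆1的加速度确定声像漂移速度（以相邻时刻之间的时间间隔表示）的Python示例代码。其中的区间划分与间隔取值与上文示例一致，均可按需标定。

```python
def drift_interval_ms(rel_accel_mps2: float) -> float:
    """根据目标相对于车辆1的加速度选择声像漂移速度：加速度越大，
    相邻时刻之间的时间间隔越短，声像漂移越快。区间划分与间隔取值均为示例。"""
    if rel_accel_mps2 < 2.0:
        return 10.0   # 低漂移速度：相邻时刻间隔约10ms
    if rel_accel_mps2 < 5.0:
        return 6.0    # 中等漂移速度：间隔约6ms
    return 2.0        # 高漂移速度：间隔约2ms

# 示例：相对加速度为4m/s²时采用中等漂移速度
print(drift_interval_ms(4.0))   # 6.0
```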
一个实施例中,该目标相对于车辆1的运动趋势中包括目标相对于车辆1的运动方向。例如,根据目标相对于车辆1的运动方向可以将目标划分为前方侧向穿插目标和盲区预警目标。例如,图3所示的车辆2为前方侧向穿插目标,图9所示的车辆3可以为盲区预警目标。在确定目标为前方侧向穿插目标时,车辆1可以采用高的声像漂移速度;或者,在确定目标为盲区预警目标时,车辆1可以采用低的声像漂移速度。
又例如，车辆1还可以根据告警等级，控制提示音的声像漂移速度。示例性的，该告警等级可以是由车辆1中的高级驾驶辅助系统（advanced driving assistant system，ADAS）根据座舱外的传感器采集的数据输出的。告警等级越高，表示目标越危险。示例性的，表2示出了告警等级和声像漂移速度的对应关系。
表2
告警等级        声像漂移速度
告警等级1       低
告警等级2       中等
告警等级3       高
示例性的,在目标的告警等级为告警等级1时,可以采用低的声像漂移速度。
示例性的,在目标的告警等级为告警等级2时,可以采用中等的声像漂移速度。
示例性的,在目标的告警等级为告警等级3时,可以采用高的声像漂移速度。
以上车辆1根据不同的声像漂移速度控制扬声器工作的过程可以参考上述实施例中的描述,此处不再赘述。
以上表2中告警等级与声像漂移速度的对应关系仅仅是示意性的，本申请实施例对此并不作具体限定。例如，还可以划分为多于或者少于3个的告警等级。
以上是以告警等级由ADAS根据座舱外的传感器采集的数据输出为例进行说明,本申请实施例并不限于此。例如,车辆1还可以根据云端服务器发送的数据,确定目标的告警等级。例如,该告警等级可以是由云端服务器确定的,从而云端服务器可以将该目标的告警等级发送给车辆1。
又例如,车辆1还可以根据目标与车辆1之间的距离,控制提示音的声像漂移速度。示例性的,相比于图4所示的告警范围,图7示出了本申请实施例提供的一种告警范围划分方式的示意图。表3示出了本申请实施例提供的目标与车辆1之间的距离与声像漂移速度之间的对应关系。
表3
目标与车辆1之间的距离        声像漂移速度
(20m,12m)                    低
[12m,4m)                     中等
[4m,0m)                      高
示例性的,在目标与车辆1之间的距离为15m时,可以采用低的声像漂移速度。
示例性的,在目标与车辆1之间的距离为10m时,可以采用中等的声像漂移速度。
示例性的,在目标与车辆1之间的距离为4m时,可以采用高的声像漂移速度。
以上车辆1根据不同的声像漂移速度控制扬声器工作的过程可以参考上述实施例中的描述,此处不再赘述。
以上表3中目标与车辆1之间的距离与声像漂移速度之间的对应关系仅仅是示意性的,本申请实施例对此并不作具体限定。例如,当目标与车辆1之间的距离大于5m时,可以采用低的声像漂移速度;当目标与车辆1之间的距离小于或者等于5m时,可以采用高的声像漂移速度。
以上目标与车辆1之间的距离可以是车辆1根据车辆1座舱外的传感器采集的数据确定的,或者,也可以是车辆1根据云端服务器发送的数据确定的。
又例如,车辆1还可以根据目标与车辆1之间的TTC,控制提示音的声像漂移速度。表4示出了本申请实施例提供的目标与车辆1之间的TTC与声像漂移速度之间的对应关系。
表4
TTC             声像漂移速度
(5s,4s)         低
[4s,3s)         中等
[3s,2s)         高
示例性的,当TTC为5s时,可以采用低的声像漂移速度。
示例性的,当TTC为4s时,可以采用中等的声像漂移速度。
示例性的,当TTC为3s时,可以采用高的声像漂移速度。
以上车辆1根据不同的声像漂移速度控制扬声器工作的过程可以参考上述实施例中的描述,此处不再赘述。
以上表4中TTC与声像漂移速度之间的对应关系仅仅是示意性的,本申请实施例对此并不作具体限定。
以上TTC可以是车辆1根据车辆1座舱外的传感器采集的数据确定的,或者,也可以是车辆1根据云端服务器发送的数据确定的。又例如,还可以结合目标相对于车辆1的 运动趋势、告警等级、车辆1和目标之间的距离以及车辆1和目标之间的TTC中的多个,控制提示音的声像漂移速度。例如,可以结合告警等级以及车辆1和目标之间的TTC,控制提示音的声像漂移速度。表5示出了本申请实施例提供的告警等级、TTC与声像漂移速度之间的对应关系。
表5
Figure PCTCN2022128404-appb-000001
以上表5中的告警等级、TTC与声像漂移速度之间的对应关系仅仅是示意性的,本申请实施例中对此并不作具体限定。
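作为示意，下面给出一段结合告警等级与TTC共同确定声像漂移速度的Python示例代码。由于表5的具体内容以图片形式给出，示例中“取两个档位中较高者”的组合规则仅为说明思路而假设。

```python
def drift_speed_level(alarm_level: int, ttc_s: float) -> str:
    """结合告警等级与TTC共同确定声像漂移速度的一种假设性组合策略：
    分别按告警等级和TTC得到速度档位，取其中较高的档位。"""
    order = ["低", "中等", "高"]
    by_alarm = {1: "低", 2: "中等", 3: "高"}.get(alarm_level, "低")
    by_ttc = "高" if ttc_s < 3.0 else ("中等" if ttc_s < 4.0 else "低")
    return max(by_alarm, by_ttc, key=order.index)

# 示例：告警等级为2、TTC为2.5秒时，取两个档位中较高的“高”
print(drift_speed_level(2, 2.5))   # 高
```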
一个实施例中,还可以根据目标的类型,控制提示音的声像漂移速度。示例性的,表6示出了目标的类型与声像漂移速度之间的对应关系。
表6
Figure PCTCN2022128404-appb-000002
以上表6中目标的类型与声像漂移速度之间的对应关系仅仅是示意性的,本申请实施例中对此并不作具体限定。
一个实施例中,该至少两个扬声器还可以位于座椅的头枕处。
图8示出了本申请实施例提供的通过至少两个扬声器发出声像漂移的提示音的示意图。如图8所示,主驾座椅的头枕处包括扬声器251和扬声器252。以图3所示的应用场景为例,在车辆2进入车辆1的告警范围内且车辆2相对于车辆1的运动趋势为由东向西时,可以控制扬声器251和扬声器252发出声像漂移的提示音,该提示音的声像漂移方向由扬声器251指向扬声器252。
以上结合图3至图8介绍了前方侧向穿插碰撞预警场景下，车辆1控制至少两个扬声器发出声像漂移的提示音的过程。下面结合图9至图10介绍本申请实施例提供的其他应用场景。例如，本申请实施例还可以应用于驾驶盲区监测预警(blind spot detection)场景。驾驶盲区监测预警场景包括但不限于盲区监测预警、变道辅助预警、后方交通穿行预警(rear cross traffic alert)、后方交通穿行制动(rear cross traffic assist with braking)和开门预警(door open warning，DOW)等场景。
图9示出了本申请实施例提供的应用场景的另一示意图。该场景为后方交通穿行预警场景。如图9所示,车辆1在行驶的过程中,通过座舱外传感器采集的数据检测到车辆3从车辆1的右后方加速超越车辆1。在车辆3进入车辆1的告警范围内且车辆3相对于车辆1的运动趋势为由东南向西北运行时,车辆1可以控制扬声器247、扬声器245和扬声器243发出声像能漂移的提示音,该提示音的声像漂移方向与车辆3相对于车辆1的运动趋势相对应。从而可以提示驾驶员目标正在从车辆1的右后方超越车辆1。
一个实施例中,可以根据用户所在的区域,从车辆1中的多个扬声器中确定该至少两个扬声器。
例如，在开门预警场景下，可以根据打开车门的用户所在的区域，从车辆1中的多个扬声器中确定该至少两个扬声器，从而控制该至少两个扬声器发出声像漂移的提示音，该提示音的声像漂移方向与车门外的目标相对于车辆1的运动趋势相对应。
示例性的,图10示出了本申请实施例提供的扬声器分布的示意图。如图10所示,车辆1可以包括扬声器1001-1016。可以将车辆1的座舱内划分为主驾区域、副驾区域、二排左侧区域和二排右侧区域。其中,扬声器1001-1004为副驾区域内的扬声器,扬声器1005-1008为主驾区域的扬声器,扬声器1009-1012为二排右侧区域的扬声器,扬声器1013-1016为二排左侧区域的扬声器。
例如,当主驾区域的用户下车时,用户可能不会注意到主驾区域车门外移动的目标(例如,骑自行车或者摩托车的用户)。通过声像漂移的提示音可以对准备下车的用户进行提示,从而使得用户获知打开车门会存在碰撞风险。例如,当主驾区域车门外的目标从车门后方向前方行驶时,可以控制扬声器1005和扬声器1007发出声像能漂移的提示音,该提示音的声像漂移方向为从车辆1的尾部指向车辆1的头部,或者,该提示音的声像漂移方向为由扬声器1007指向扬声器1005。
一个实施例中，根据打开车门的用户所在的区域，从车辆1中的多个扬声器中确定该至少两个扬声器，从而控制该至少两个扬声器发出声像漂移的提示音，包括：根据打开车门的用户所在的区域和目标的运动趋势，从车辆1中的多个扬声器中确定该至少两个扬声器。
例如,在通过座舱内的摄像头检测到副驾区域的用户准备打开车门的操作时,可以对车辆1告警范围内的目标进行检测。当检测到有目标从副驾区域车门外的右后方向左前方靠近副驾区域车门时,可以根据目标相对于车辆1的运动趋势,从扬声器1001-1004中确定扬声器1001和扬声器1004。从而控制扬声器1001和扬声器1004发出提示音,该提示音的声像漂移方向由扬声器1001指向扬声器1004。从而副驾区域的用户通过该提示音可以确定打开车门后存在碰撞风险。或者,该提示音的声像漂移方向也可以是由扬声器1004指向扬声器1001。从而副驾区域的用户通过该提示音可以确定车门外有目标正在靠近。
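作为示意，下面给出一段开门预警场景下选取扬声器并确定声像漂移方向的Python示例代码。其中座舱分区与扬声器的对应关系参照图10的示意，而“列表首个扬声器靠近车头、末个扬声器靠近车尾”的几何假设仅用于演示，实际布置以具体车型为准。

```python
# 座舱分区与扬声器的对应关系（参照图10的示意）。此处假设每个列表中
# 首个扬声器靠近车头、最后一个扬声器靠近车尾，实际布置可能不同。
ZONE_SPEAKERS = {
    "主驾区域": [1005, 1006, 1007, 1008],
    "副驾区域": [1001, 1002, 1003, 1004],
    "二排右侧区域": [1009, 1010, 1011, 1012],
    "二排左侧区域": [1013, 1014, 1015, 1016],
}

def door_open_warning_speakers(zone: str, target_moving_forward: bool):
    """开门预警：先按准备开门的用户所在区域选出该区域内的扬声器，
    再按车门外目标的运动趋势（由后向前或由前向后）确定声像漂移方向。"""
    speakers = ZONE_SPEAKERS[zone]
    head_side, tail_side = speakers[0], speakers[-1]   # 假设：首个靠车头，末个靠车尾
    return (tail_side, head_side) if target_moving_forward else (head_side, tail_side)

# 示例：主驾区域的用户准备下车，车门外的目标从后方向前方移动，
# 声像由靠近车尾的扬声器漂移到靠近车头的扬声器
print(door_open_warning_speakers("主驾区域", target_moving_forward=True))
```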
图11是本申请实施例提供的另一应用场景的示意图。如图11所示,该场景为车道偏离预警(lane departure warning)场景。
车辆1可以通过座舱外的传感器采集的数据确定车辆1所在车道的标识线的信息。在车辆1从位于车道1中的中间位置向标识线1偏离时,可以控制扬声器242和扬声器243发出声像漂移的提示音。例如,该提示音的声像漂移方向可以由扬声器243指向扬声器 242,该提示音用于提示用户当前车辆正在靠近右侧的标识线;或者,该提示音的声像漂移方向可以由扬声器242指向扬声器243,该提示音用于提示用户向左打方向盘,从而使得车辆处于车道1的中间位置。
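作为示意，下面给出一段车道偏离预警场景下选择扬声器及声像漂移方向的Python示例代码。两种漂移方向的语义与上文描述一致，扬声器编号沿用上述示例中的扬声器242和扬声器243。

```python
def lane_departure_speakers(mode: str = "提示靠近"):
    """车道偏离预警：车辆向右侧标识线偏离时，控制扬声器242和扬声器243发出声像漂移提示音。
    “提示靠近”：声像由扬声器243指向扬声器242，提示车辆正在靠近右侧标识线；
    “提示回正”：声像由扬声器242指向扬声器243，提示向左打方向盘回到车道中间位置。"""
    return (243, 242) if mode == "提示靠近" else (242, 243)

# 示例：以“提示靠近”方式播报
print(lane_departure_speakers("提示靠近"))   # (243, 242)
```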
以上结合附图介绍了通过座舱内的至少两个扬声器发出声像漂移的提示音,从而使得座舱内的用户获知目标相对于车辆1的运动趋势的过程,本申请实施例并不限于此。例如,本申请实施例的技术方案还可以应用于座舱外,通过座舱外的至少两个扬声器发出声像漂移的提示音,从而对座舱外的用户进行提示。
图12示出了本申请实施例提供的另一应用场景的示意图。如图12所示,用户驾驶车辆1向车位1泊车。在通过座舱外的传感器采集的数据检测到车位1附近静止站立的用户时,可以控制座舱外的扬声器1201和扬声器1202发出声像漂移的提示音,该提示音的声像漂移方向可以由扬声器1201指向扬声器1202,这样可以使得静止站立的用户获知车辆1正在靠近自己,从而可以及时对车辆1进行避让。
一个实施例中，该车辆1中保存有运动趋势与发声装置之间的映射关系，该车辆1可以根据该映射关系以及目标相对于车辆1的运动趋势，确定该至少两个扬声器。示例性的，可以将处于车辆1的告警范围内的目标分为前方侧向穿插目标和盲区目标。前方侧向穿插目标相对于车辆1的运动趋势可以包括从车头的左侧向右侧运动或者从车头的右侧向左侧运动；盲区目标相对于车辆1的运动趋势可以包括从车辆1的左后侧加速超过车辆1或者从车辆1的右后侧加速超过车辆1。表7示出了运动趋势、发声装置与声像漂移方向之间的对应关系。
表7
Figure PCTCN2022128404-appb-000003
示例性的，盲区目标还可以包括开门预警场景下的盲区目标，例如，盲区目标相对于车辆1的运动趋势为从主驾区域车门后方向前方运动、从副驾区域车门后方向前方运动、从二排左侧区域车门后方向前方运动或者从二排右侧区域车门后方向前方运动。表8示出了运动趋势、发声装置与声像漂移方向之间的对应关系。
表8
Figure PCTCN2022128404-appb-000004
Figure PCTCN2022128404-appb-000005
以上表7和表8仅仅是示意性的,本申请实施例并不限于此。
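作为示意，下面给出一段以查表方式根据第一运动趋势确定发声装置与声像漂移方向的Python示例代码。由于表7和表8的具体内容以图片形式给出，示例中映射表的条目仅为假设（其中“盲区目标从右后方加速超越”对应扬声器247、245、243参照了图9的示例），重点在于说明该映射关系的数据结构与查询方式。

```python
# 运动趋势、发声装置与声像漂移方向之间的映射关系示例。
# 表7和表8的具体内容以图片形式给出，以下条目仅为说明数据结构而假设。
TREND_TO_SPEAKERS = {
    "前方目标从车头右侧向左侧穿插": {"speakers": [241, 242, 243], "drift": "由扬声器241指向扬声器243"},
    "盲区目标从右后方加速超越":     {"speakers": [247, 245, 243], "drift": "由扬声器247指向扬声器243"},
}

def lookup_speakers(first_trend: str):
    """根据运载工具中保存的映射关系和第一运动趋势，确定发出提示音的至少两个发声装置及声像漂移方向。"""
    entry = TREND_TO_SPEAKERS.get(first_trend)
    return (entry["speakers"], entry["drift"]) if entry else (None, None)

# 示例：查询盲区目标从右后方加速超越车辆时对应的扬声器序列与漂移方向
print(lookup_speakers("盲区目标从右后方加速超越"))
```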
一个实施例中,在控制该至少两个扬声器发出声像漂移的提示音的同时,还可以结合以下提示方式中的一种或者多种:通过氛围灯的点亮方向进行提示、通过方向盘的震动方向提示和通过车载显示屏或者抬头显示装置(head up display,HUD)进行提示。
图13示出了本申请实施例中通过声像漂移的提示音以及氛围灯提示用户的示意图。如图13所示，以运载工具为车辆为例，氛围灯包括设置在车门扶手处的灯带1310。其中，灯带1310包括多个灯珠。以位于二排左侧区域的用户下车为例，在检测到目标位于车辆的告警范围内且该目标从二排左侧车门后方向前方移动时，可以控制扬声器1013和扬声器1015发出声像漂移的提示音且控制灯带1310中灯光渐变的方向为从灯珠1311向灯珠1312的方向变化。可选地，还可以根据该目标与二排左侧车门之间的距离或者TTC，控制该提示音的声像漂移速度和/或该灯光渐变的速度。
一个实施例中，在检测到目标位于车辆的告警范围内且该目标从二排左侧车门后方向前方移动时，也可以仅控制灯带1310中灯光渐变的方向为从灯珠1311向灯珠1312的方向变化。
图14示出了本申请实施例中通过声像漂移的提示音以及方向盘震动提示用户的示意图。如图14所示,在检测到车辆2进入车辆1的告警区域内且车辆2从车辆1的车头右侧向左侧移动时,可以控制扬声器241-243发出声像漂移方向为由扬声器241指向扬声器243的提示音且控制方向盘的震动方向为逆时针震动。
图15示出了本申请实施例提供的控制方法1500的示意性流程图。该方法1500可以由运载工具(例如,车辆)执行,或者,该方法1500可以由上述计算平台执行,或者,该方法1500可以由计算平台和至少两个发声装置组成的系统执行,或者,该方法1500可以由上述计算平台中的片上系统(system-on-a-chip,SoC)执行,或者,该方法1500可以由计算平台中的处理器执行。该方法1500包括:
S1510,检测到第一目标处于运载工具的告警范围内,该运载工具包括多个发声装置。
示例性的,该第一目标可以为上述实施例中的车辆2或者车辆3。
示例性的,该运载工具可以为上述实施例中的车辆1。
可选地,该检测到第一目标处于运载工具的告警范围内,包括:在该运载工具和该第一目标之间的距离小于或者等于预设距离,和/或,该运载工具和该第一目标之间的TTC小于或者等于预设时长时,检测到该第一目标处于该运载工具的告警范围内。
示例性的，该预设距离可以为20米。
示例性的,该预设时长可以为5秒。
S1520，控制该多个发声装置中至少两个发声装置发出提示音，该提示音的声像漂移方向与第一运动趋势相对应，该第一运动趋势包括该运载工具和该第一目标的相对运动趋势。
以上该运载工具和该第一目标的相对运动趋势可以包括该运载工具相对于该第一目标的相对运动趋势，或者，该第一目标相对于该运载工具的相对运动趋势。
可选地，该控制该多个发声装置中至少两个发声装置发出提示音，包括：在该运载工具的速度大于或者等于预设速度阈值时，控制该多个发声装置中至少两个发声装置发出提示音。
可选地,该方法1500还包括:控制该提示音的声像漂移速度。
可选地,该控制该提示音的声像漂移速度,包括:根据该运载工具和该第一目标之间的信息,控制该提示音的声像漂移速度。
可选地,该运载工具和该第一目标之间的信息包括该第一运动趋势,告警等级,该运载工具和该第一目标之间的距离和该运载工具和该第一目标之间的碰撞时间TTC中的至少一个。
可选地,该方法1500还包括:根据被提示的用户所在的位置,确定该至少两个发声装置中位于该声像漂移方向末端的发声装置。
示例性的，如图6所示，在检测到车辆2进入车辆1的告警范围且车辆2为前方侧向穿插目标时，车辆1可以确定对驾驶员进行提示，从而可以选择距离驾驶员最近的扬声器243作为声像漂移方向末端的扬声器。
示例性的,如图10所示,在副驾区域的用户准备下车时,检测到副驾区域车门外有目标从后方向前方移动且车门打开时可能会与该目标发生碰撞,此时车辆1可以确定对副驾区域的用户进行提示。可以从扬声器1001-1004中选择一个扬声器作为末端的发声装置。
可选地,该方法1500还包括:根据该第一目标进入该告警范围时相对于该运载工具的方位,确定该至少两个发声装置中位于该声像漂移方向始端的发声装置。
示例性的,如图5所示,车辆2首先进入该告警范围中的区域1,此时可以根据扬声器与目标所在方位的对应关系确定起始扬声器为扬声器241。
可选地,该方法1500还包括:根据该第一目标的状态,预测该第一运动趋势;其中,该控制该多个发声装置中至少两个发声装置发出提示音,包括:根据预测得到的该第一运动趋势,控制该多个发声装置中至少两个发声装置发出该提示音。
可选地,该第一目标的状态包括该第一目标的速度、加速度、速度方向、航向角方向或者航向角速度率中的一个或者多个。
可选地,该根据该第一目标的状态,预测该第一运动趋势,包括:将该第一目标的状态输入轨迹预测模型中,预测得到该第一运动趋势。
示例性的,该第一目标的状态包括该第一目标进入该运载工具的告警范围时的状态。
示例性的,该第一目标的状态还可以包括该第一目标进入该运载工具的告警范围之前的历史轨迹。
可选地,该方法1500还包括:在预测得到的该第一运动趋势与该第一目标和该运载工具的实际相对运动趋势不同时,控制该多个发声装置中的至少两个发声装置发出声像漂移方向与该实际相对运动趋势相对应的提示音。
示例性的,以图3所示的应用场景为例,在车辆2进入车辆1的告警范围内时,车辆1可以根据预测得到的车辆2相对于车辆1的运动趋势(例如,由东向西),控制扬声器241-243发出声像漂移方向为由东向西的提示音。当预测得到的车辆2相对于车辆1的运动趋势与车辆2相对于车辆1的实际运动趋势(例如,由南向北)不一致时,车辆1可以 切换至控制扬声器243、扬声器246和扬声器249发出声像漂移方向为由南向北的提示音。
可选地,该方法1500还包括:根据该运载工具的传感器采集的第一数据,确定该第一运动趋势;或者,获取云端服务器发送的第二数据且根据该第二数据,确定该第一运动趋势。
示例性的,如图5所示,在检测到车辆2进入车辆1的告警范围内时,车辆1可以通过座舱外的传感器采集的传感数据确定该车辆2相对于车辆1的运动趋势。例如,车辆2在第一时刻进入车辆1的告警范围内。车辆1可以根据传感器采集的从第一时刻起1s内的传感数据,确定在1s内车辆2相对于车辆1的运动趋势。从而根据该运动趋势,控制多个发声扬声器中的至少两个扬声器发出与该运动趋势相对应的提示音。
可选地,该根据该运载工具的传感器采集的第一数据,确定该第一运动趋势,包括:根据该第一数据,确定该第一目标在多个时刻的方位;根据该第一目标在该多个时刻的方位,确定该第一运动趋势。
示例性的,在确定1s内车辆2相对于车辆1的运动趋势时,可以以10ms为时间间隔采集100个时刻下车辆2的相对于车辆1的方位。通过这100个时刻下车辆2相对于车辆1的方位,可以获得车辆2相对于车辆1的运动趋势。
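作为示意，下面给出一段根据多个时刻采集到的目标相对方位（此处以相对位置坐标表示）估计第一运动趋势的Python示例代码。其中坐标约定（x向东、y向北）以及仅取首末两点做估计的做法均为假设，实际实现可对全部采样点进行滤波或拟合。

```python
import math

def estimate_trend(positions):
    """根据传感数据得到的目标在多个时刻相对于车辆的位置（x向东、y向北，单位为米），
    用首末位置的位移方向粗略估计目标相对于车辆的运动趋势。"""
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    heading_deg = math.degrees(math.atan2(dy, dx))   # 相对运动方向，以正东为0度、逆时针为正
    east_west = "向西" if dx < 0 else "向东"
    north_south = "向北" if dy > 0 else "向南"
    trend = east_west if abs(dy) < 1e-6 else east_west + north_south
    return heading_deg, trend

# 示例：以10ms为间隔采样得到的部分位置点，x坐标逐渐减小，即目标相对车辆由东向西运动
print(estimate_trend([(20.0, 5.0), (19.6, 5.0), (19.1, 5.0), (18.5, 5.0)]))   # (180.0, '向西')
```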
可选地,该运载工具中包括运动趋势与发声装置的映射关系,该方法1500还包括:根据该映射关系和该第一运动趋势,确定该至少两个发声装置。
示例性的,该映射关系可以如上述表7或者表8所示。
可选地,该方法1500还包括:控制氛围灯点亮的方向,该氛围灯点亮的方向与该第一运动趋势相对应,该运载工具包括该氛围灯;和/或,控制方向盘的震动方向,该震动方向与该第一运动趋势相对应,该运载工具包括该方向盘;和/或,控制显示装置显示提示信息,该提示信息用于提示该第一运动趋势,该运载工具包括该显示装置。
示例性的,如图13所示,在检测到目标位于车辆的告警范围内且该目标从二排左侧车门外由后方向前方移动时,可以控制扬声器1013和扬声器1015发出声像漂移的提示音且控制灯带1310中灯光渐变的方向为从灯珠1311向灯珠1312的方向变化。
示例性的,如图14所示,在检测到车辆2进入车辆1的告警区域内且车辆2从车辆1的车头右侧向左侧移动时,可以控制扬声器241-243发出声像漂移方向为由扬声器241指向扬声器243的提示音且控制方向盘的震动方向为逆时针震动。
可选地,该控制该多个发声装置中至少两个发声装置发出提示音,包括:控制该至少两个发声装置中发声装置发出声音的强度,和/或,控制该至少两个发声装置中发声装置发出声音的时延。
以上控制发声装置发出声音的强度和/或时延的过程可以参考上述实施例中的描述,此处不再赘述。
可选地,该多个发声装置位于该运载工具的座舱内。
可选地,该多个发声装置也可以位于该运载工具的座舱外。
示例性的,如图12所示,在通过座舱外的传感器采集的数据检测到车位1附近静止站立的用户时,可以控制座舱外的扬声器1201和扬声器1202发出声像漂移的提示音,该提示音的声像漂移方向可以由车辆1的车头位置指向该静止站立的用户所在的位置,或者,该提示音的声像漂移方向可以由扬声器1201指向扬声器1202,这样可以使得静止站立的用户获知车辆1正在靠近自己,从而可以及时对车辆1进行避让。
本申请实施例还提供了一种控制方法,该方法包括:检测到车辆向该车辆所在车道的第一标识线偏离,该车辆包括多个发声装置;控制该多个发声装置中的至少两个发声装置发出声像漂移的提示音,该提示音的声像漂移方向为靠近该第一标识线的方向,或者,该提示音的声像漂移方向为远离该第一标识线的方向。
可选地,检测到车辆向该车辆所在车道的第一标识线偏离,包括:检测到车辆与该第一标识线的距离逐渐减小。
可选地,在该提示音的声像漂移方向为靠近该第一标识线的方向时,该提示音可以用于提示用户车辆正在靠近第一标识线。
可选地,在该提示音的声像漂移方向为远离该第一标识线的方向时,该提示音可以用于提示用户驾驶车辆向远离该第一标识线的方向行驶,或者,提示用户向车道的中心线位置行驶。
本申请实施例还提供用于实现以上任一种方法的装置,例如,提供一种装置包括用以实现以上任一种方法中运载工具(例如,车辆),或者,车辆中的计算平台,或者,计算平台中的SoC,或者,计算平台中的处理器所执行的各步骤的单元(或手段)。
图16示出了本申请实施例提供的控制装置1600的示意性框图。如图16所示,该装置1600包括:检测单元1610,用于检测到第一目标处于运载工具的告警范围内,该运载工具包括多个发声装置;控制单元1620,用于控制该多个发声装置中至少两个发声装置发出提示音,该提示音的声像漂移方向与第一运动趋势相对应,该第一运动趋势包括该运载工具和该第一目标的相对运动趋势。
可选地,该控制单元1620,还用于:控制该提示音的声像漂移速度。
可选地,该控制单元1620,用于:根据该运载工具和该第一目标之间的信息,控制该提示音的声像漂移速度。
可选地,该运载工具和该第一目标之间的信息包括该第一运动趋势,告警等级,该运载工具和该第一目标之间的距离和该运载工具和该第一目标之间的碰撞时间TTC中的至少一个。
可选地,该装置1600还包括:第一确定单元,用于根据被提示的用户所在的位置,确定该至少两个发声装置中位于该声像漂移方向末端的发声装置。
可选地,该装置1600还包括:第二确定单元,用于根据该第一目标进入该告警范围时相对于该运载工具的方位,确定该至少两个发声装置中位于该声像漂移方向始端的发声装置。
可选地,该装置1600还包括:预测单元,用于根据该第一目标的状态,预测该第一运动趋势;其中,该控制单元,用于:根据预测得到的该第一运动趋势,控制该多个发声装置中至少两个发声装置发出该提示音。
可选地,该控制单元1620,还用于:在预测得到的该第一运动趋势与该第一目标和该运载工具的实际相对运动趋势不同时,控制该多个发声装置中的至少两个发声装置发出声像漂移方向与该实际相对运动趋势相对应的提示音。
可选地,该装置1600还包括:第三确定单元,用于根据该运载工具的传感器采集的第一数据,确定该第一运动趋势;或者,该第三确定单元,用于获取云端服务器发送的第二数据且根据该第二数据,确定该第一运动趋势。
可选地，该第三确定单元，用于：根据该第一数据，确定该第一目标在多个时刻的方位；根据该第一目标在该多个时刻的方位，确定该第一运动趋势。
可选地，该运载工具中包括运动趋势与发声装置的映射关系，该装置1600还用于：根据该映射关系和该第一运动趋势，确定该至少两个发声装置。
可选地,该控制单元1620,还用于:控制氛围灯点亮的方向,该氛围灯点亮的方向与该第一运动趋势相对应,该运载工具包括该氛围灯;和/或,控制方向盘的震动方向,该震动方向与该第一运动趋势相对应,该运载工具包括该方向盘;和/或,控制显示装置显示提示信息,该提示信息用于提示该第一运动趋势,该运载工具包括该显示装置。
可选地,该检测单元1610,用于:在该运载工具和该第一目标之间的距离小于或者等于预设距离,和/或,该运载工具和该第一目标之间的TTC小于或者等于预设时长时,检测到该第一目标处于该运载工具的告警范围内。
可选地,该控制单元1620,用于:控制该至少两个发声装置中发声装置发出声音的强度,和/或,控制该至少两个发声装置中发声装置发出声音的时延。
可选地,该多个发声装置位于该运载工具的座舱内。
例如,该检测单元1610可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以检测单元1610为计算平台中的处理器151为例,处理器151可以检测第一目标是否处于运载工具的告警范围。例如,处理器151可以获取座舱外的传感器采集的传感数据并根据该传感数据,确定第一目标是否进入该运载工具的告警范围。又例如,处理器151还可以根据该传感数据,确定该第一目标相对于运载工具的运动趋势。
又例如,控制单元1620可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以控制单元1620为计算平台中的处理器152为例,在处理器151确定第一目标进入运载工具的告警范围内时,处理器151可以向处理器152发送指示信息以及该第一目标的运动趋势,该指示信息用于指示第一目标进入运载工具的告警范围内。处理器152可以根据指示信息以及该第一目标的运动趋势,控制该至少两个发声装置发出声像漂移的提示音,该提示音的声像漂移方向与该第一运动趋势相对应。
又例如,处理器152还可以控制该提示音的声像漂移速度。
又例如,以上预测单元可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以预测单元为计算平台中的处理器15n为例,处理器15n可以根据运载工具座舱外的传感器采集的传感数据确定第一目标的状态,并根据该第一目标的状态预测该第一目标在未来一段时间相对于运载工具的运动趋势。
以上检测单元1610所实现的功能和控制单元1620所实现的功能可以由不同的处理器实现,或者,也可以由相同的处理器实现,本申请实施例对此不作限定。
应理解以上装置中各单元的划分仅是一种逻辑功能的划分，实际实现时可以全部或部分集成到一个物理实体上，也可以物理上分开。此外，装置中的单元可以以处理器调用软件的形式实现；例如装置包括处理器，处理器与存储器连接，存储器中存储有指令，处理器调用存储器中存储的指令，以实现以上任一种方法或实现该装置各单元的功能，其中处理器例如为通用处理器，例如CPU或微处理器，存储器为装置内的存储器或装置外的存储器。或者，装置中的单元可以以硬件电路的形式实现，可以通过对硬件电路的设计实现部分或全部单元的功能，该硬件电路可以理解为一个或多个处理器；例如，在一种实现中，该硬件电路为ASIC，通过对电路内元件逻辑关系的设计，实现以上部分或全部单元的功能；再如，在另一种实现中，该硬件电路可以通过PLD实现，以FPGA为例，其可以包括大量逻辑门电路，通过配置文件来配置逻辑门电路之间的连接关系，从而实现以上部分或全部单元的功能。以上装置的所有单元可以全部通过处理器调用软件的形式实现，或全部通过硬件电路的形式实现，或部分通过处理器调用软件的形式实现，剩余部分通过硬件电路的形式实现。
在本申请实施例中,处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如CPU、微处理器、GPU、或DSP等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为ASIC或PLD实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如NPU、TPU、DPU等。
可见,以上装置中的各单元可以是被配置成实施以上方法的一个或多个处理器(或处理电路),例如:CPU、GPU、NPU、TPU、DPU、微处理器、DSP、ASIC、FPGA,或这些处理器形式中至少两种的组合。
此外,以上装置中的各单元可以全部或部分可以集成在一起,或者可以独立实现。在一种实现中,这些单元集成在一起,以SoC的形式实现。该SoC中可以包括至少一个处理器,用于实现以上任一种方法或实现该装置各单元的功能,该至少一个处理器的种类可以不同,例如包括CPU和FPGA,CPU和人工智能处理器,CPU和GPU等。
本申请实施例还提供了一种装置,该装置包括处理单元和存储单元,其中存储单元用于存储指令,处理单元执行存储单元所存储的指令,以使该装置执行上述实施例执行的方法或者步骤。
可选地,若该装置位于运载工具中,上述处理单元可以是图1所示的处理器151-15n。
图17示出了本申请实施例提供的控制系统1700的示意性框图。如图17所示,该控制系统1700中包括至少两个发声装置和计算平台,其中,该计算平台可以包括上述控制装置1600。
可选地,该控制系统1700中还包括一个或者多个传感器。
本申请实施例还提供了一种运载工具,该运载工具可以包括上述控制装置1600或者控制系统1700。
可选地,该运载工具可以为车辆。
本申请实施例还提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述方法。
本申请实施例还提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述方法。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者上电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应理解,本申请实施例中,该存储器可以包括只读存储器和随机存取存储器,并向处 理器提供指令和数据。
还应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。

Claims (36)

  1. 一种控制方法,其特征在于,包括:
    检测到第一目标处于运载工具的告警范围内,所述运载工具包括多个发声装置;
    控制所述多个发声装置中至少两个发声装置发出提示音,所述提示音的声像漂移方向与第一运动趋势相对应,所述第一运动趋势包括所述运载工具和所述第一目标的相对运动趋势。
  2. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    控制所述提示音的声像漂移速度。
  3. 如权利要求2所述的方法,其特征在于,所述控制所述提示音的声像漂移速度,包括:
    根据所述运载工具和所述第一目标之间的信息,控制所述提示音的声像漂移速度。
  4. 如权利要求3所述的方法,其特征在于,所述运载工具和所述第一目标之间的信息包括所述第一运动趋势,告警等级,所述运载工具和所述第一目标之间的距离以及所述运载工具和所述第一目标之间的碰撞时间TTC中的至少一个。
  5. 如权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    根据被提示的用户所在的位置,确定所述至少两个发声装置中位于所述声像漂移方向末端的发声装置。
  6. 如权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第一目标进入所述告警范围时相对于所述运载工具的方位,确定所述至少两个发声装置中位于所述声像漂移方向始端的发声装置。
  7. 如权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第一目标的状态,预测所述第一运动趋势;
    其中,所述控制所述多个发声装置中至少两个发声装置发出提示音,包括:
    根据预测得到的所述第一运动趋势,控制所述多个发声装置中至少两个发声装置发出所述提示音。
  8. 如权利要求7所述的方法,其特征在于,所述方法还包括:
    在预测得到的所述第一运动趋势与所述第一目标和所述运载工具的实际相对运动趋势不同时,控制所述多个发声装置中的至少两个发声装置发出声像漂移方向与所述实际相对运动趋势相对应的提示音。
  9. 如权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述运载工具的传感器采集的第一数据,确定所述第一运动趋势;或者,
    获取云端服务器发送的第二数据且根据所述第二数据,确定所述第一运动趋势。
  10. 如权利要求9所述的方法,其特征在于,所述根据所述运载工具的传感器采集的第一数据,确定所述第一运动趋势,包括:
    根据所述第一数据,确定所述第一目标在多个时刻的方位;
    根据所述第一目标在所述多个时刻的方位,确定所述第一运动趋势。
  11. 如权利要求1至10中任一项所述的方法，其特征在于，所述运载工具中包括运动趋势与发声装置的映射关系，所述方法还包括：
    根据所述映射关系和所述第一运动趋势,确定所述至少两个发声装置。
  12. 如权利要求1至11中任一项所述的方法,其特征在于,所述方法还包括:
    控制氛围灯点亮的方向,所述氛围灯点亮的方向与所述第一运动趋势相对应,所述运载工具包括所述氛围灯;和/或,
    控制方向盘的震动方向,所述震动方向与所述第一运动趋势相对应,所述运载工具包括所述方向盘;和/或,
    控制显示装置显示提示信息,所述提示信息用于提示所述第一运动趋势,所述运载工具包括所述显示装置。
  13. 如权利要求1至12中任一项所述的方法,其特征在于,所述检测到第一目标处于运载工具的告警范围内,包括:
    在所述运载工具和所述第一目标之间的距离小于或者等于预设距离,和/或,所述运载工具和所述第一目标之间的TTC小于或者等于预设时长时,检测到所述第一目标处于所述运载工具的告警范围内。
  14. 如权利要求1至13中任一项所述的方法,其特征在于,所述控制所述多个发声装置中至少两个发声装置发出提示音,包括:
    控制所述至少两个发声装置中发声装置发出声音的强度,和/或,控制所述至少两个发声装置中发声装置发出声音的时延。
  15. 如权利要求1至14中任一项所述的方法,其特征在于,所述多个发声装置位于所述运载工具的座舱内。
  16. 一种控制装置,其特征在于,包括:
    检测单元,用于检测到第一目标处于运载工具的告警范围内,所述运载工具包括多个发声装置;
    控制单元,用于控制所述多个发声装置中至少两个发声装置发出提示音,所述提示音的声像漂移方向与第一运动趋势相对应,所述第一运动趋势包括所述运载工具和所述第一目标的相对运动趋势。
  17. 如权利要求16所述的装置,其特征在于,所述控制单元,还用于:
    控制所述提示音的声像漂移速度。
  18. 如权利要求17所述的装置,其特征在于,所述控制单元,用于:
    根据所述运载工具和所述第一目标之间的信息,控制所述提示音的声像漂移速度。
  19. 如权利要求18所述的装置,其特征在于,所述运载工具和所述第一目标之间的信息包括所述第一运动趋势,告警等级,所述运载工具和所述第一目标之间的距离以及所述运载工具和所述第一目标之间的碰撞时间TTC中的至少一个。
  20. 如权利要求16至19中任一项所述的装置,其特征在于,所述装置还包括:
    第一确定单元,用于根据被提示的用户所在的位置,确定所述至少两个发声装置中位于所述声像漂移方向末端的发声装置。
  21. 如权利要求16至20中任一项所述的装置,其特征在于,所述装置还包括:
    第二确定单元,用于根据所述第一目标进入所述告警范围时相对于所述运载工具的方位,确定所述至少两个发声装置中位于所述声像漂移方向始端的发声装置。
  22. 如权利要求16至21中任一项所述的装置,其特征在于,所述装置还包括:
    预测单元,用于根据所述第一目标的状态,预测所述第一运动趋势;
    其中,所述控制单元,用于:根据预测得到的所述第一运动趋势,控制所述多个发声装置中至少两个发声装置发出所述提示音。
  23. 如权利要求22所述的装置,其特征在于,所述控制单元,还用于:
    在预测得到的所述第一运动趋势与所述第一目标和所述运载工具的实际相对运动趋势不同时,控制所述多个发声装置中的至少两个发声装置发出声像漂移方向与所述实际相对运动趋势相对应的提示音。
  24. 如权利要求16至21中任一项所述的装置,其特征在于,所述装置还包括:
    第三确定单元,用于根据所述运载工具的传感器采集的第一数据,确定所述第一运动趋势;或者,
    所述第三确定单元,用于获取云端服务器发送的第二数据且根据所述第二数据,确定所述第一运动趋势。
  25. 如权利要求24所述的装置,其特征在于,所述第三确定单元,用于:
    根据所述第一数据,确定所述第一目标在多个时刻的方位;
    根据所述第一目标在所述多个时刻的方位,确定所述第一运动趋势。
  26. 如权利要求16至25中任一项所述的装置，其特征在于，所述运载工具中包括运动趋势与发声装置的映射关系，所述装置还用于：
    根据所述映射关系和所述第一运动趋势,确定所述至少两个发声装置。
  27. 如权利要求16至26中任一项所述的装置,其特征在于,所述控制单元,还用于:
    控制氛围灯点亮的方向,所述氛围灯点亮的方向与所述第一运动趋势相对应,所述运载工具包括所述氛围灯;和/或,
    控制方向盘的震动方向,所述震动方向与所述第一运动趋势相对应,所述运载工具包括所述方向盘;和/或,
    控制显示装置显示提示信息,所述提示信息用于提示所述第一运动趋势,所述运载工具包括所述显示装置。
  28. 如权利要求16至27中任一项所述的装置,其特征在于,所述检测单元,用于:
    在所述运载工具和所述第一目标之间的距离小于或者等于预设距离,和/或,所述运载工具和所述第一目标之间的TTC小于或者等于预设时长时,检测到所述第一目标处于所述运载工具的告警范围内。
  29. 如权利要求16至28中任一项所述的装置,其特征在于,所述控制单元,用于:
    控制所述至少两个发声装置中发声装置发出声音的强度,和/或,控制所述至少两个发声装置中发声装置发出声音的时延。
  30. 如权利要求16至29中任一项所述的装置,其特征在于,所述多个发声装置位于所述运载工具的座舱内。
  31. 一种控制装置,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,以使得所述装置执行如权利要求1至15中任一项所述的方法。
  32. 一种控制系统,其特征在于,包括至少两个发声装置和计算平台,其中,所述计算平台包括如权利要求16至31中任一项所述的装置。
  33. 一种运载工具,其特征在于,包括如权利要求16至31中任一项的控制装置,或者,包括如权利要求32所述的控制系统。
  34. 根据权利要求33所述的运载工具,其特征在于,所述运载工具为车辆。
  35. 一种计算机可读存储介质，其特征在于，其上存储有计算机程序，所述计算机程序被计算机执行时，实现如权利要求1至15中任一项所述的方法。
  36. 一种芯片,其特征在于,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,以执行如权利要求1至15中任一项所述的方法。
PCT/CN2022/128404 2022-10-28 2022-10-28 一种控制方法、装置和运载工具 WO2024087216A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/128404 WO2024087216A1 (zh) 2022-10-28 2022-10-28 一种控制方法、装置和运载工具

Publications (1)

Publication Number Publication Date
WO2024087216A1 true WO2024087216A1 (zh) 2024-05-02

Family

ID=90829790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128404 WO2024087216A1 (zh) 2022-10-28 2022-10-28 一种控制方法、装置和运载工具

Country Status (1)

Country Link
WO (1) WO2024087216A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005329754A (ja) * 2004-05-18 2005-12-02 Nissan Motor Co Ltd 運転者知覚制御装置
US20180077492A1 (en) * 2016-09-09 2018-03-15 Toyota Jidosha Kabushiki Kaisha Vehicle information presentation device
CN108146367A (zh) * 2016-12-02 2018-06-12 王国超 汽车盲区声向定位示警系统
CN110431613A (zh) * 2017-03-29 2019-11-08 索尼公司 信息处理装置、信息处理方法、程序和移动物体
CN109795408A (zh) * 2019-01-17 2019-05-24 深圳市元征科技股份有限公司 一种预警方法及车辆
CN110718082A (zh) * 2019-09-29 2020-01-21 深圳市元征科技股份有限公司 一种基于声音的车辆报警方法及装置

Similar Documents

Publication Publication Date Title
US11231905B2 (en) Vehicle with external audio speaker and microphone
CN107539313B (zh) 车辆通信网络以及其使用和制造方法
JP7067067B2 (ja) 信号機認識装置、及び自動運転システム
US20190369391A1 (en) Three dimensional augmented reality involving a vehicle
CN111361552B (zh) 自动驾驶系统
US20190004513A1 (en) Driving assistance control apparatus
US20200398743A1 (en) Method and apparatus for learning how to notify pedestrians
US20130144490A1 (en) Presentation of shared threat information in a transportation-related context
CN111278702B (zh) 车辆控制装置、具有该车辆控制装置的车辆以及控制方法
EP2969660A1 (en) Integrated navigation and collision avoidance systems
JP7139902B2 (ja) 報知装置
CN114763190A (zh) 用于摩托车的障碍物检测和通知
CN106292432B (zh) 信息处理方法、装置和电子设备
JP2004259069A (ja) 車両危険度に応じた警報信号を出力する警報装置
WO2018163472A1 (ja) モード切替制御装置、モード切替制御システム、モード切替制御方法およびプログラム
CN108569282A (zh) 用于车辆的辅助驾驶设备和方法
JP2017107328A (ja) 運転支援装置
JP2019003278A (ja) 車外報知装置
US20220314982A1 (en) Control device
CN113548043B (zh) 用于自主车辆的安全操作员的碰撞告警系统和方法
CN115583250A (zh) 用于向车辆的用户界面提供建议的转向动作指示符的系统和方法
KR20220047532A (ko) 보행자에게 차량 정보 전달
CN111098864B (zh) 提示方法、装置、自动驾驶车辆及存储介质
CN107599965B (zh) 用于车辆的电子控制装置及方法
WO2024087216A1 (zh) 一种控制方法、装置和运载工具