EP4344255A1 - Sound-making apparatus control method, sound-making system, and vehicle - Google Patents

Sound-making apparatus control method, sound-making system, and vehicle

Info

Publication number
EP4344255A1
Authority
EP
European Patent Office
Prior art keywords
sound
vehicle
areas
making
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22832173.3A
Other languages
German (de)
English (en)
Inventor
Tianyu Huang
Chunhe DONG
Shangwei XIE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP4344255A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/13 Acoustic transducers and sound field adaptation in vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/13 Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/80 Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
    • Y02T10/84 Data processing systems or methods, management, administration

Definitions

  • Embodiments of this application relate to the field of intelligent vehicles, and more specifically, to a sound-making apparatus control method, a sound-making system, and a vehicle.
  • Embodiments of this application provide a sound-making apparatus control method, a sound-making system, and a vehicle. Position information of an area in which a user is located is obtained for adaptively adjusting a sound field optimization center, to help improve listening experience of the user.
  • a sound-making apparatus control method includes: A first device obtains position information of a plurality of areas in which a plurality of users are located. The first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.
  • the first device obtains the position information of the plurality of areas in which the plurality of users are located, and controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work, without a need for a user to manually adjust the sound-making apparatuses.
  • This helps reduce the user's learning costs and avoid complicated manual operations.
  • This also helps the plurality of users enjoy a good listening effect, and helps improve user experience.
  • the first device may be a sound-making system in a vehicle, a sound-making system in a home theater, or a sound-making system in a KTV.
  • before the first device obtains the position information of the areas in which the plurality of users are located, the method further includes: The first device detects a first operation of a user.
  • the first operation is an operation of the user controlling the first device to play audio content; or the first operation is an operation of the user connecting a second device to the first device and playing audio content on the second device by using the first device; or the first operation is an operation of the user enabling a sound field adaptation switch.
  • a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The first device determines, based on collected sensing information, the position information of the areas in which the plurality of users are located.
  • the sensing information may be one or more of image information, sound information, and pressure information.
  • the image information may be collected by an image sensor, for example, a camera apparatus or a radar.
  • the sound information may be collected by a sound sensor, for example, a microphone array.
  • the pressure information may be collected by a pressure sensor, for example, a pressure sensor mounted in a seat.
  • the sensing information may be data collected by a sensor, or may be information obtained based on data collected by a sensor.
  • a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The first device determines the position information of the plurality of areas based on data collected by the image sensor; or the first device determines the position information of the plurality of areas based on data collected by the pressure sensor; or the first device determines the position information of the plurality of areas based on data collected by the sound sensor.
  • The plurality of sound-making apparatuses are then controlled to work based on the position information determined from a single type of sensor data. In this way, a calculation process in which the first device controls the plurality of sound-making apparatuses to work can be simplified, and the first device can control the plurality of sound-making apparatuses more conveniently.
  • the image sensor may include a camera, a lidar, and the like.
  • the image sensor may determine whether there is a user in an area by collecting image information in the area and determining, based on the image information, whether the image information includes face contour information, human ear information, iris information, and the like.
  • the sound sensor may include a microphone array.
  • the sensor may be one sensor or may be a plurality of sensors, where the plurality of sensors may be sensors of a same type, for example, all image sensors.
  • sensing information from a plurality of types of sensors, for example, image information and sound information collected by the image sensor and the sound sensor, may be used to determine the user's position.
  • the position information of the plurality of areas in which the plurality of users are located may include a center point of each of the plurality of areas, or a preset point of each of the plurality of areas, or a point of each area that is obtained according to a preset rule.
  • the first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: The first device determines a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and the first device controls, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.
  • the first device may first determine the current sound field optimization center point, and control, based on information about the distances between the sound field optimization center point and the plurality of sound-making apparatuses, the sound-making apparatuses to work. This helps the plurality of users enjoy a good listening effect, and helps improve user experience.
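As a rough illustration of how such an equidistant point could be computed, consider the following Python sketch. It is not taken from the patent: the function names and the flat 2D seat geometry are assumptions, and it only covers the simple configurations discussed in the figures later in this document (midpoint for two occupied areas, circumcenter for three, rectangle center for four).

```python
# Hypothetical computation of a sound field optimization center point Q
# that is equidistant from the occupied-area center points.
import numpy as np

def circumcenter(p1, p2, p3):
    """Circumcenter of a (non-degenerate) triangle: equidistant from all three vertices."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

def optimization_center(area_centers):
    pts = [np.asarray(p, dtype=float) for p in area_centers]
    if len(pts) == 1:
        return pts[0]                    # one user: optimize at that seat
    if len(pts) == 2:
        return (pts[0] + pts[1]) / 2     # midpoint of the connection line
    if len(pts) == 3:
        return circumcenter(*pts)        # equidistant from all three centers
    # Four seats arranged as a rectangle: the mean is the rectangle's
    # center point, which is equidistant from all four corners.
    return np.mean(pts, axis=0)
```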
  • that the first device controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: controlling, based on the position information of the plurality of areas and a mapping relationship, the plurality of sound-making apparatuses to work, where the mapping relationship is a mapping relationship between positions of the plurality of areas and play intensities of the plurality of sound-making apparatuses.
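If the mapping relationship above is instead precomputed, it might be a simple lookup from the set of occupied areas to per-speaker play intensities. The sketch below is purely illustrative: the area names and intensity values are invented, not taken from the patent.

```python
# Hypothetical mapping from occupied-area combinations to play intensities
# for four speakers (values are illustrative placeholders).
PLAY_INTENSITY_MAP = {
    ("driver",): (1.0, 0.8, 0.6, 0.8),
    ("driver", "front_passenger"): (0.9, 0.9, 0.7, 0.7),
    ("driver", "front_passenger",
     "second_row_left", "second_row_right"): (1.0, 1.0, 1.0, 1.0),
}

def intensities_for(occupied_areas):
    # Fall back to equal intensities for combinations that are not tabulated.
    return PLAY_INTENSITY_MAP.get(tuple(sorted(occupied_areas)), (1.0,) * 4)
```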
  • the method further includes: The first device notifies position information of the sound field optimization center point.
  • the position information of the sound field optimization center point is notified to the user. This helps improve the listening effect of the plurality of users and also helps the user determine the current sound field optimization center point.
  • that the first device notifies position information of the sound field optimization center point includes: The first device notifies the position information of the sound field optimization center point by using a human-computer interaction interface HMI or a sound.
  • the first device may be a vehicle, and that the first device notifies position information of the sound field optimization center point includes: The vehicle notifies the position information of the sound field optimization center point by using an atmosphere light.
  • the plurality of areas are areas in a vehicle cockpit.
  • the plurality of areas include a front-row area and a rear-row area.
  • the plurality of areas may include a driver area and a front passenger area.
  • the plurality of areas include a driver area, a front passenger area, a second-row left area, and a second-row right area.
  • the first device may be a vehicle, and that a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The vehicle obtains, by using pressure sensors under seats in the areas, the position information of the plurality of areas in which the plurality of users are located.
  • the first device includes a microphone array
  • a first device obtains position information of a plurality of users includes: The first device obtains a voice signal in an environment by using the microphone array; and determines, based on the voice signal, the position information of the plurality of areas in which the plurality of users in the environment are located.
  • the method further includes: The first device notifies the position information of the plurality of areas in which the plurality of users are located.
  • that the first device notifies the position information of the plurality of areas in which the plurality of users are located includes: The first device notifies, by using the human-computer interaction interface HMI or the sound, the position information of the plurality of areas in which the plurality of users are located.
  • the first device may be a vehicle, and that the first device notifies the position information of the plurality of areas in which the plurality of users are located includes: The vehicle notifies, by using an atmosphere light, the position information of the plurality of areas in which the plurality of users are located.
  • the controlling the plurality of sound-making apparatuses to work includes: adjusting a play intensity of each of the plurality of sound-making apparatuses.
  • the play intensity of each of the plurality of sound-making apparatuses is directly proportional to a distance between each sound-making apparatus and the user.
  • the plurality of sound-making apparatuses include a first sound-making apparatus, and that the first device adjusts a play intensity of each of the sound-making apparatuses includes: The first device controls a play intensity of the first sound-making apparatus to be a first play intensity.
  • the method further includes: The first device obtains an instruction of a user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity; and the first device adjusts the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the instruction.
  • the first device may adjust the play intensity of the first sound-making apparatus to the second play intensity. In this way, the user can quickly adjust the play intensity of the first sound-making apparatus, so that the first sound-making apparatus better meets listening effect of the user.
  • a sound-making system including a sensor, a controller, and a plurality of sound-making apparatuses.
  • the sensor is configured to collect data and send the data to the controller.
  • the controller is configured to obtain, based on the data, position information of a plurality of areas in which a plurality of users are located; and control, based on the position information of the plurality of areas and position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.
  • the controller is specifically configured to: obtain the position information of the plurality of areas based on data collected by an image sensor; obtain the position information of the plurality of areas based on data collected by a pressure sensor; or obtain the position information of the plurality of areas based on data collected by a sound sensor.
  • the controller is specifically configured to: determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.
  • the controller is further configured to send a first instruction to a first prompt apparatus, where the first instruction instructs the first prompt apparatus to notify the position information of the sound field optimization center point.
  • the plurality of areas are areas in a vehicle cockpit.
  • the plurality of areas include a front-row area and a rear-row area.
  • the front-row area includes a driver area and a front passenger area.
  • the controller is further configured to send a second instruction to a second prompt apparatus, where the second instruction instructs the second prompt apparatus to notify the position information of the plurality of areas in which the plurality of users are located.
  • the controller is specifically configured to adjust a play intensity of each of the plurality of sound-making apparatuses.
  • the plurality of sound-making apparatuses include a first sound-making apparatus.
  • the controller is specifically configured to control a play intensity of the first sound-making apparatus to be a first play intensity.
  • the controller is further configured to: obtain a third instruction of the user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity, and adjust the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the third instruction.
  • an electronic apparatus includes: a transceiver unit, configured to receive sensing information; and a processing unit, configured to obtain, based on the sensing information, position information of a plurality of areas in which a plurality of users are located.
  • the processing unit is further configured to control, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.
  • that the processing unit is further configured to control, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: The processing unit is configured to: determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.
  • the transceiver unit is further configured to send a first instruction to a first prompt unit, where the first instruction instructs the first prompt unit to notify position information of the sound field optimization center point.
  • the plurality of areas are areas in a vehicle cockpit.
  • the plurality of areas include a front-row area and a rear-row area.
  • the plurality of areas include a driver area and a front passenger area.
  • the transceiver unit is further configured to send a second instruction to a second prompt unit, where the second instruction instructs the second prompt unit to notify the position information of the plurality of areas in which the plurality of users are located.
  • the processing unit is specifically configured to adjust a play intensity of each of the plurality of sound-making apparatuses.
  • the plurality of sound-making apparatuses include a first sound-making apparatus.
  • the processing unit is specifically configured to control a play intensity of the first sound-making apparatus to be a first play intensity.
  • the transceiver unit is further configured to receive a third instruction, where the third instruction is an instruction instructing to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity.
  • the processing unit is further configured to adjust the play intensity of the first sound-making apparatus to the second play intensity.
  • the sensing information includes one or more of image information, pressure information, and sound information.
  • the electronic apparatus may be a chip or an in-vehicle apparatus (for example, a controller).
  • the transceiver unit may be an interface circuit.
  • the processing unit may be a processor, a processing apparatus, or the like.
  • an apparatus is configured to perform the method in any implementation of the first aspect.
  • an apparatus includes a processing unit and a storage unit.
  • the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the method in any possible implementation of the first aspect.
  • the processing unit may be a processor, and the storage unit may be a memory.
  • the memory may be a storage unit (for example, a register or a cache) in a chip, or may be a storage unit (for example, a read-only memory, or a random access memory) located outside the chip in a vehicle.
  • a system includes a sensor and an electronic apparatus.
  • the electronic apparatus may be the electronic apparatus according to any possible implementation of the third aspect.
  • the system further includes a plurality of sound-making apparatuses.
  • a system includes a plurality of sound-making apparatuses and an electronic apparatus, where the electronic apparatus may be the electronic apparatus according to any possible implementation of the third aspect.
  • the system further includes a sensor.
  • a vehicle includes the sound-making system according to any one of the possible implementations of the second aspect, or the vehicle includes the electronic apparatus according to any one of the possible implementations of the third aspect, or the vehicle includes the apparatus according to any possible implementation of the fourth aspect, or the vehicle includes the apparatus according to any possible implementation of the fifth aspect, or the vehicle includes the system according to any possible implementation of the sixth aspect, or the vehicle includes the system according to any possible implementation of the seventh aspect.
  • a computer program product includes computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the method according to the first aspect.
  • the computer program code may be stored in a first storage medium.
  • the first storage medium may be encapsulated together with a processor, or may be encapsulated separately from a processor. This is not specifically limited in this embodiment of this application.
  • a computer-readable medium stores program code, and when the computer program code is run on a computer, the computer is enabled to perform the method according to the first aspect.
  • FIG. 1 is a schematic functional block diagram of a vehicle 100 according to an embodiment of this application.
  • the vehicle 100 may be configured to be in a full or partial automatic driving mode.
  • the vehicle 100 may obtain environment information around the vehicle 100 by using a sensing system 120, and obtain an autonomous driving policy based on analysis of the ambient environment information, to implement fully autonomous driving, or present an analysis result to a user, to implement partially autonomous driving.
  • the vehicle 100 may include various subsystems, such as an infotainment system 110, a sensing system 120, a decision control system 130, a propulsion system 140, and a computing platform 150.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include a plurality of components.
  • each subsystem and component of the vehicle 100 may be interconnected in a wired or wireless manner.
  • the infotainment system 110 may include a communication system 111, an entertainment system 112, and a navigation system 113.
  • the communication system 111 may include a wireless communication system, and the wireless communication system may communicate with one or more devices in a wireless manner, directly or by using a communication network.
  • the wireless communication system may use a third generation (3rd generation, 3G) cellular communication, for example, code division multiple access (code division multiple access, CDMA), evolution data optimized (evolution data optimized, EVDO), a global system for mobile communication (global system for mobile communication, GSM), or a general packet radio service (general packet radio service, GPRS); or a fourth generation (4th generation, 4G) cellular communication, for example, long term evolution (long term evolution, LTE); or a fifth generation (5th generation, 5G) cellular communication.
  • the wireless communication system may communicate with a wireless local area network (wireless local area network, WLAN) through Wi-Fi.
  • the wireless communication system may directly communicate with a device by using an infrared link, Bluetooth, or ZigBee.
  • the wireless communication system may include one or more dedicated short range communications (dedicated short range communications, DSRC) devices, and these devices may provide public and/or private data communication between vehicles and/or roadside stations.
  • the entertainment system 112 may include a central control screen, a microphone, and a sound box.
  • a user may listen to radio and play music in a vehicle through the entertainment system.
  • a mobile phone is connected to a vehicle, to realize screen projection of the mobile phone on the central control screen.
  • the central control screen may be a touchscreen, and the user may perform an operation by touching the screen.
  • a voice signal of the user may be obtained by using the microphone, and some control of the vehicle 100 by the user, for example, adjusting a temperature inside the vehicle, may be implemented based on analysis of the voice signal.
  • music may be played for the user by using the sound box.
  • the navigation system 113 may include a map service provided by a map supplier, to provide navigation of a driving route for the vehicle 100.
  • the navigation system 113 may be used together with a global positioning system 121 and an inertial measurement unit 122 of the vehicle.
  • the map service provided by the map provider may be a two-dimensional map or a high-precision map.
  • the sensing system 120 may include several types of sensors that sense the ambient environment information of the vehicle 100.
  • the sensing system 120 may include the global positioning system 121 (the global positioning system may be a GPS system, or may be a BeiDou system or another positioning system), the inertial measurement unit (inertial measurement unit, IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, and a camera apparatus 126.
  • the sensing system 120 may further include sensors that monitor internal systems of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, and an oil temperature gauge). Sensor data from one or more of these sensors can be used to detect an object and corresponding features (a position, a shape, a direction, a speed, and the like) of the object. Such detection and recognition are key functions of a safe operation of the vehicle 100.
  • the global positioning system 121 may be configured to estimate a geographical position of the vehicle 100.
  • the inertial measurement unit 122 is configured to sense a position and an orientation change of the vehicle 100 based on an inertial acceleration.
  • the inertial measurement unit 122 may be a combination of an accelerometer and a gyroscope.
  • the lidar 123 may sense, by using a laser, an object in an environment in which the vehicle 100 is located.
  • the lidar 123 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
  • the millimeter-wave radar 124 may sense an object in an ambient environment of the vehicle 100 by using a radio signal.
  • the millimeter-wave radar 124 may further be configured to sense a speed and/or a moving direction of the object.
  • the ultrasonic radar 125 may sense an object around the vehicle 100 by using an ultrasonic signal.
  • the camera apparatus 126 may be configured to capture image information of the ambient environment of the vehicle 100.
  • the camera apparatus 126 may include a monocular camera device, a binocular camera device, a structured light camera device, a panorama camera device, and the like.
  • the image information obtained by using the camera apparatus 126 may include a static image, or may include video stream information.
  • the decision control system 130 includes a computing system 131 that performs analysis and decision-making based on information obtained by the sensing system 120.
  • the decision control system 130 further includes a vehicle control unit 132 that controls a power system of the vehicle 100, and a steering system 133, a throttle 134, and a braking system 135 that are configured to control the vehicle 100.
  • the computing system 131 may operate to process and analyze various information obtained by the sensing system 120 to identify a target, an object, and/or a feature in the ambient environment of the vehicle 100.
  • the target may include a pedestrian or an animal, and the object and/or the feature may include a traffic signal, a road boundary, and an obstacle.
  • the computing system 131 may use technologies such as an object recognition algorithm, a structure from motion (structure from motion, SFM) algorithm, and video tracking.
  • the computing system 131 may be configured to: map an environment, track an object, estimate a speed of an object, and the like.
  • the computing system 131 may analyze the obtained various information and obtain a control policy for the vehicle.
  • the vehicle control unit 132 may be configured to coordinate and control a power battery and an engine 141 of the vehicle, to improve power performance of the vehicle 100.
  • the steering system 133 may be operated to adjust a moving direction of the vehicle 100.
  • the steering system 133 may be a steering wheel system.
  • the throttle 134 is configured to control an operating speed of the engine 141 and control a speed of the vehicle 100.
  • the braking system 135 is configured to control the vehicle 100 to decelerate.
  • the braking system 135 may slow down a wheel 144 by using a friction force.
  • the braking system 135 may convert kinetic energy of the wheel 144 into a current.
  • the braking system 135 may also slow down a rotation speed of the wheel 144 by using other forms, to control the speed of the vehicle 100.
  • the propulsion system 140 may include a component that provides power for the vehicle 100 to move.
  • the propulsion system 140 may include the engine 141, an energy source 142, a drive apparatus 143, and the wheel 144.
  • the engine 141 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, for example, a hybrid engine formed by a gasoline engine and an electric motor, or a hybrid engine formed by an internal combustion engine and an air compression engine.
  • the engine 141 converts the energy source 142 into mechanical energy.
  • Examples of the energy source 142 include gasoline, diesel, other oil-based fuels, propane, other compressed gas-based fuels, ethyl alcohol, solar panels, batteries, and other power sources.
  • the energy source 142 may also provide energy for another system of the vehicle 100.
  • the drive apparatus 143 may transmit mechanical power from the engine 141 to the wheel 144.
  • the drive apparatus 143 may include a gearbox, a differential, and a drive shaft.
  • the drive apparatus 143 may further include another component, for example, a clutch.
  • the drive shaft may include one or more shafts that may be coupled to one or more wheels 144.
  • the computing platform 150 may include at least one processor 151, and the processor 151 may execute instructions 153 stored in a non-transitory computer-readable medium such as a memory 152.
  • the computing platform 150 may alternatively be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 151 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 151 may further include a graphics processing unit (graphics processing unit, GPU), a field programmable gate array (field programmable gate array, FPGA), a system on chip (system on chip, SOC), an application-specific integrated circuit (application-specific integrated circuit, ASIC), or a combination thereof.
  • Although FIG. 1 functionally illustrates a processor, a memory, and other components of a computer 110 in a same block, a person of ordinary skill in the art should understand that the processor, the computer, or the memory may actually include a plurality of processors, computers, or memories that may or may not be stored in a same physical housing.
  • the memory may be a hard disk drive, or another storage medium located in a housing different from that of the computer 110.
  • a reference to the processor or the computer includes a reference to a set of processors or computers or memories that may or may not operate in parallel.
  • some components such as a steering component and a deceleration component may include respective processors.
  • the processor performs only computation related to a component-specific function.
  • the processor may be located far away from the vehicle and wirelessly communicate with the vehicle.
  • some processes described herein are performed on a processor disposed inside the vehicle, while others are performed by a remote processor, including performing steps necessary for a single manipulation.
  • the memory 152 may include the instructions 153 (for example, program logics), and the instructions 153 may be executed by the processor 151 to perform various functions of the vehicle 100.
  • the memory 152 may also include additional instructions, including instructions used to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110, the sensing system 120, the decision control system 130, and the propulsion system 140.
  • the memory 152 may further store data, such as a road map, route information, a position, a direction, a speed, and other vehicle data of the vehicle, and other information. This information may be used by the vehicle 100 and the computing platform 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the computing platform 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the propulsion system 140, the sensing system 120, and the decision control system 130). For example, the computing platform 150 may utilize an input from the decision control system 130 to control the steering system 133 to avoid obstacles detected by the sensing system 120. In some embodiments, the computing platform 150 may operate to provide control over many aspects of the vehicle 100 and the subsystems of the vehicle 100.
  • one or more of the foregoing components may be installed separately from or associated with the vehicle 100.
  • the memory 152 may be partially or completely separated from the vehicle 100.
  • the foregoing components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation on this embodiment of this application.
  • An autonomous driving vehicle traveling on a road may identify an object in its ambient environment, to determine whether to adjust its current speed.
  • the object may be another vehicle, a traffic control device, or another type of object.
  • each identified object may be considered independently, and a feature of each object, such as a current speed of the object, an acceleration of the object, and an interval between the object and the vehicle, may be used to determine the speed to be adjusted by the autonomous driving vehicle.
  • the vehicle 100 or a sensing and computing device (for example, the computing system 131 and the computing platform 150) associated with the vehicle 100 may predict behavior of the identified object based on the features of the identified object and a state of the ambient environment (for example, traffic, rain, and ice on the road).
  • the behavior of each identified object may depend on the behavior of the others, and therefore all the identified objects may be considered together to predict the behavior of a single identified object.
  • the vehicle 100 can adjust the speed of the vehicle 100 based on the predicted behavior of the identified object.
  • the autonomous driving vehicle can determine, based on the predicted behavior of the object, a stable state to which the vehicle needs to be adjusted (for example, acceleration, deceleration, or stop).
  • another factor may also be considered to determine the speed of the vehicle 100, for example, a horizontal position of the vehicle 100 on a road on which the vehicle drives, curvature of the road, and proximity between a static object and a dynamic object.
  • the computing device may further provide an instruction for modifying a steering angle of the vehicle 100, so that the autonomous driving vehicle follows a given track and/or maintains safe lateral and longitudinal distances between the autonomous driving vehicle and an object (for example, a car in an adjacent lane on the road) near the autonomous driving vehicle.
  • the vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, a construction device, a trolley, a golf cart, a train, or the like. This is not specifically limited in embodiments of this application.
  • Embodiments of this application provide a sound-making apparatus control method, a sound-making system, and a vehicle. Position information of an area in which a user is located is identified, and a sound field optimization center is automatically adjusted, so that each user can achieve good listening effect.
  • FIG. 2 is a schematic diagram of a structure of a sound-making system according to an embodiment of this application.
  • the sound-making system may be a controller area network (controller area network, CAN) control system.
  • the CAN control system may include a plurality of sensors (such as a sensor 1 and a sensor 2), a plurality of electronic control units (electronic control unit, ECU), an in-vehicle entertainment host, a speaker controller, and a speaker.
  • the sensor includes but is not limited to a camera, a microphone, an ultrasonic radar, a millimeter-wave radar, a lidar, a vehicle speed sensor, a motor power sensor, and an engine speed sensor.
  • the ECU is configured to receive data collected by the sensor, execute a corresponding command, and obtain a periodic signal or an event signal after executing the corresponding command. Then, the ECU may send these signals to a public CAN network, where the ECU includes but is not limited to a complete vehicle controller, a hybrid controller, an automatic transmission controller, and an automatic driving controller.
  • the in-vehicle entertainment host is configured to capture a periodic signal or an event signal sent by each ECU on the public CAN network, and perform a corresponding operation or forward the signal to the speaker controller when the corresponding signal is recognized.
  • the speaker controller is configured to receive, over the private CAN network, a command signal from the in-vehicle entertainment host, to adjust the speaker.
  • the in-vehicle entertainment host may capture, from the CAN bus, image information collected by the camera.
  • the in-vehicle entertainment host may determine, based on image information, whether there are users in a plurality of areas in the vehicle, and send position information of the user to the speaker controller.
  • the speaker controller may control a play intensity of each speaker based on the position information of the user.
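As a sketch of this signal flow, the snippet below uses the python-can library to forward an occupancy flag captured on the public CAN network to the speaker controller on the private CAN network. The arbitration IDs, payload layout, and channel names are assumptions made for illustration, not values from the patent.

```python
# Hypothetical in-vehicle entertainment host: capture an occupancy frame
# from the public CAN network and forward it on the private CAN network.
import can

OCCUPANCY_FRAME_ID = 0x120  # assumed ID of the user-position signal
SPEAKER_CMD_ID = 0x240      # assumed ID the speaker controller listens on

public_bus = can.interface.Bus(channel="can0", bustype="socketcan")
private_bus = can.interface.Bus(channel="can1", bustype="socketcan")

while True:
    msg = public_bus.recv(timeout=1.0)          # signals published by the ECUs
    if msg is not None and msg.arbitration_id == OCCUPANCY_FRAME_ID:
        flags = msg.data[0]                     # e.g. 0b1100: driver + front passenger
        cmd = can.Message(arbitration_id=SPEAKER_CMD_ID,
                          data=[flags], is_extended_id=False)
        private_bus.send(cmd)                   # forward to the speaker controller
```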
  • FIG. 3 is a schematic diagram of another structure of an in-vehicle sound-making system according to an embodiment of this application.
  • the sound-making system may be a ring network communication architecture. All sensors and actuators (components that obtain and execute commands, such as a speaker, an atmosphere light, an air conditioner, and a motor) may be connected to a nearby vehicle integration unit (vehicle integration unit, VIU).
  • the VIU may be deployed at a position in which sensors and actuators of a vehicle are dense, so that the sensors and the actuators of the vehicle can perform nearby connection.
  • the VIU may have specific computing and driving capabilities (for example, the VIU may absorb driving computing functions of some actuators).
  • the sensor includes but is not limited to a camera, a microphone, an ultrasonic radar, a millimeter-wave radar, a lidar, a vehicle speed sensor, a motor power sensor, an engine rotation speed sensor, and the like.
  • VIUs communicate with each other through networking.
  • An intelligent driving computing platform/mobile data center (mobile data center, MDC), a vehicle domain controller (vehicle domain controller, VDC), and an intelligent cockpit domain controller (cockpit domain controller, CDC) are separately and redundantly connected to the ring communication network formed by the VIUs.
  • the sensor may send the collected data to the VIU.
  • the VIU can publish the data to the ring network.
  • the MDC, VDC, and CDC collect the related data on the ring network, calculate the data, convert the data into a signal including position information of a user, and publish the signal to the ring network.
  • a play intensity of a speaker is controlled through a corresponding computing capability and driving capability in the VIU.
  • a VIU 1 is configured to drive a speaker 1
  • a VIU 2 is configured to drive a speaker 2
  • a VIU 3 is configured to drive a speaker 3
  • a VIU 4 is configured to drive a speaker 4.
  • The physical arrangement of a VIU may be independent of the speaker it drives.
  • the VIU 1 may be arranged at the left rear of the vehicle, but the speaker 1 may be arranged near a door on a driver side.
  • the sensor or actuator can be connected to the nearby VIU, thereby reducing cable bundles. Because the MDC, VDC, and CDC have a limited quantity of interfaces, the VIU can be connected to a plurality of sensors and a plurality of actuators to implement interface and communication functions.
  • the VIU to which a sensor or an actuator is connected, and the controller that controls it, may be set before delivery of the sound-making system or may be defined by the user, and hardware of the sound-making system may be replaced and upgraded.
  • the VIU may absorb driving computing functions of some sensors and actuators. In this way, when some controllers (for example, a CDC or a VDC) are faulty, the VIU may directly process the data collected by the sensor, to further control the actuators.
  • the communication architecture shown in FIG. 3 may be an intelligent digital vehicle platform (intelligent digital vehicle platform, IDVP) ring network communication architecture.
  • FIG. 4 is a top view of a vehicle.
  • a position 1 is a driver seat
  • a position 2 is a front passenger seat
  • positions 3 to 5 are rear-row areas
  • positions 6a to 6d are positions of four speakers in the vehicle
  • a position 7 is a position of an in-vehicle camera
  • a position 8 is a position where a CDC and an in-vehicle central control screen are located.
  • the speaker may be configured to play a media sound in the vehicle.
  • the in-vehicle camera may be used to detect a position of a passenger in the vehicle.
  • the in-vehicle central control screen may be used to display image information and an interface of an application.
  • the CDC is used to connect peripherals and provide data analysis and processing capabilities.
  • positions of the speakers are not specifically limited.
  • the speakers can alternatively be located near the vehicle doors, near the large central control screen, on the ceiling, on the floor, or in the seats (for example, in the seat headrests).
  • FIG. 5 is a schematic flowchart of a sound-making apparatus control method 500 according to an embodiment of this application.
  • the method 500 may be applied to a vehicle, the vehicle includes a plurality of sound-making apparatuses (for example, speakers), and the method 500 includes the following steps.
  • S501: The vehicle obtains position information of a user.
  • the vehicle may obtain image information of each area (for example, a driver seat, a front passenger seat, and a rear-row area) in the vehicle by starting an in-vehicle camera, and determine, based on the image information of each area, whether there is a user in the area. For example, the vehicle may analyze, based on the image information collected by the camera, whether the image information includes an outline of a human face, so that the vehicle may determine whether there is a user in the area. For another example, the vehicle may analyze, based on the image information collected by the camera, whether the image information includes iris information of a human eye, so that the vehicle may determine that there is a user in the area.
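A minimal sketch of such a camera-based occupancy check, using OpenCV's stock frontal-face Haar cascade; the per-seat crop rectangles are assumptions for illustration, not coordinates from the patent.

```python
# Detect per-area occupancy from an in-vehicle camera frame (OpenCV).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

AREA_ROIS = {  # (x, y, w, h) crops in the camera frame - hypothetical values
    "driver": (0, 0, 320, 240),
    "front_passenger": (320, 0, 320, 240),
}

def occupied_areas(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    occupied = set()
    for area, (x, y, w, h) in AREA_ROIS.items():
        faces = face_cascade.detectMultiScale(
            gray[y:y + h, x:x + w], scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            occupied.add(area)  # a face outline was found in this area
    return occupied
```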
  • when the vehicle detects an operation of the user turning on a sound field adaptation switch, the vehicle may start the camera to obtain the image information of each area in the vehicle.
  • the user may select a setting option on a large central control screen to enter a sound effect function interface, and may choose to enable the sound field adaptation switch on the sound effect function interface.
  • the vehicle may alternatively detect, by using a pressure sensor under a seat, whether there is a user in a current area. For example, when a pressure value detected by a pressure sensor under a seat in an area is greater than or equal to a preset value, it may be determined that there is a user in the area.
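The pressure check itself reduces to a comparison with the preset value; a two-line sketch with an assumed threshold:

```python
# Hypothetical seat-occupancy check from a seat pressure sensor reading.
PRESET_PRESSURE = 200.0  # assumed threshold, in the sensor's units

def seat_occupied(pressure_value: float) -> bool:
    return pressure_value >= PRESET_PRESSURE
```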
  • the vehicle may alternatively determine position information of a sound source by using audio information obtained by a microphone array, to determine specific areas in which there are users.
  • the vehicle may alternatively obtain the position information of the user in the vehicle by using one or a combination of the in-vehicle camera, the pressure sensor, and the microphone array.
  • data collected by the sensor can be transmitted to a CDC, and the CDC can process the data to determine specific areas in which there are users.
  • the CDC may convert the data into a flag bit.
  • the CDC may output 1000 when there is a user in only the driver seat.
  • the CDC may output 0100 when there is a user in only the front passenger seat.
  • the CDC may output 0010 when there is a user in only a second-row left area.
  • the CDC may output 1100 when there are users in both the driver seat and the front passenger seat.
  • the CDC may output 1110 when there are users in the driver seat, the front passenger seat, and the second-row left area.
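The flag-bit outputs listed above follow a one-bit-per-area encoding, which the following sketch reproduces (the area names are illustrative):

```python
# Encode occupied areas into the CDC's 4-bit flag, ordered driver /
# front passenger / second-row left / second-row right.
AREAS = ("driver", "front_passenger", "second_row_left", "second_row_right")

def occupancy_flags(occupied):
    return "".join("1" if area in occupied else "0" for area in AREAS)

assert occupancy_flags({"driver"}) == "1000"
assert occupancy_flags({"driver", "front_passenger"}) == "1100"
assert occupancy_flags({"driver", "front_passenger", "second_row_left"}) == "1110"
```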
  • areas in the vehicle may alternatively be divided into a driver seat, a front passenger seat, a second-row left area, a second-row middle area, and a second-row right area.
  • areas in the vehicle may alternatively be divided into a driver seat, a front passenger seat, a second-row left area, a second-row right area, a third-row left area, and a third-row right area.
  • areas in the vehicle may be divided into a front-row area and a rear-row area.
  • areas in the vehicle may be divided into a driving area, a passenger area, and the like.
  • S502: The vehicle adjusts a sound-making apparatus based on the position information of the user.
  • FIG. 6 shows positions of the four speakers.
  • the figure formed by connecting the points at which the four speakers are located is a rectangle ABCD.
  • a speaker 1 is disposed at a point A on the rectangle ABCD
  • a speaker 2 is disposed at a point B
  • a speaker 3 is disposed at a point C
  • a speaker 4 is disposed at a point D.
  • a point O is a center point of the rectangle ABCD (distances between the point O and the four points A, B, C, and D are equal).
  • a specific adjustment manner may be designed based on a model of an automobile or a setting of a speaker in an automobile. This is not limited in this application.
  • FIG. 7 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat, a front passenger seat, a second-row left area, and a second-row right area according to an embodiment of this application.
  • Center points of all the areas may form a rectangle EFGH, and a center point of the rectangle EFGH may be a point Q.
  • the point Q may be a current sound field optimization center point in the vehicle.
  • the point Q may coincide with the point O. Since the distances between the center point Q of the rectangle EFGH and the four speakers are then equal, the vehicle can control play intensities of the four speakers to be the same (for example, the play intensities of the four speakers are all p).
  • the vehicle may control the play intensities of the four speakers based on the distances between the point Q and the four speakers.
  • the vehicle may control a play intensity of the speaker 1 to be (AQ/AO)·p, where AO = BO = CO = DO.
  • the vehicle may control a play intensity of the speaker 2 to be (BQ/AO)·p.
  • the vehicle may control a play intensity of the speaker 3 to be (CQ/AO)·p.
  • the vehicle may control a play intensity of the speaker 4 to be (DQ/AO)·p.
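In code, this distance-proportional rule (baseline intensity p at the center O, with AO = BO = CO = DO for the rectangle ABCD) might look like the sketch below; the corner coordinates in the example are placeholders.

```python
# Play intensities proportional to the distance from the optimization
# center Q to each speaker, normalized by the corner-to-center distance AO.
import numpy as np

def speaker_intensities(corners, Q, p):
    corners = [np.asarray(c, dtype=float) for c in corners]
    O = np.mean(corners, axis=0)          # center of the rectangle ABCD
    AO = np.linalg.norm(corners[0] - O)   # equal to BO, CO, and DO
    Q = np.asarray(Q, dtype=float)
    return [float(np.linalg.norm(c - Q) / AO * p) for c in corners]

# With Q at the center O, every speaker plays at the baseline intensity p.
print(speaker_intensities([(0, 0), (4, 0), (4, 2), (0, 2)], (2, 1), 1.0))
```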
  • FIG. 8 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user "It is detected that there are persons in the driver seat, the front passenger seat, the second-row left area, and the second-row right area", and notify the user that the current sound field optimization center point may be a point equidistant from the center point of the area in which the driver seat is located, the center point of the area in which the front passenger seat is located, the center point of the second-row left area, and the center point of the second-row right area.
  • FIG. 9 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat, a front passenger seat, and a second-row left area according to an embodiment of this application.
  • Center points of the driver seat, the front passenger seat, and the second-row left area may form a triangle EFG, where a circumcenter of the triangle EFG may be a point Q.
  • the point Q may be a current sound field optimization center point in the vehicle.
  • the point Q may coincide with the point O. Since the distances between the point Q and the four speakers are equal in that case, the vehicle can control play intensities of the four speakers to be the same (for example, the play intensities of the four speakers are all p).
  • the point Q and the point O may not overlap.
  • For a manner in which the vehicle controls the play intensities of the four speakers, refer to the description in the foregoing embodiment. Details are not described herein again.
  • FIG. 10 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user "It is detected that there are persons in the driver seat, the front passenger seat, and the second-row left area", and notify the user that the current sound field optimization center point may be a point equidistant from the center point of the area in which the driver seat is located, the center point of the area in which the front passenger seat is located, and the center point of the second-row left area.
  • FIG. 11 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user "It is detected that there are persons in the driver seat, the front passenger seat, and the second-row right area", and notify the user that the current sound field optimization center point may be a point equidistant from the center point of the area in which the driver seat is located, the center point of the area in which the front passenger seat is located, and the center point of the second-row right area.
  • FIG. 12 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user "It is detected that there are persons in the driver seat, the second-row left area, and the second-row right area", and notify the user that the current sound field optimization center point may be a point equidistant from the center point of the area in which the driver seat is located, the center point of the second-row left area, and the center point of the second-row right area.
  • FIG. 13 is a schematic diagram of a sound field optimization center of speakers in a vehicle when there are users in a driver seat and a second-row left area according to an embodiment of this application.
  • a connection line between a center point of the driver seat and a center point of the second-row left area is EG, where a midpoint of EG may be a point Q.
  • the point Q may be a current sound field optimization center point in the vehicle.
  • the point Q may coincide with the point O. Since the distances between the midpoint of the line segment EG and the four speakers are then equal, the vehicle can control play intensities of the four speakers to be the same (for example, the play intensities of the four speakers are all p).
  • the vehicle may control the play intensities of the four speakers based on the distances between the point Q and the four speakers.
  • FIG. 14 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user "It is detected that there are persons in the driver seat and the second-row right area", and notify the user that the current sound field optimization center point may be a point equidistant from the center point of the area in which the driver seat is located and the center point of the second-row right area.
  • FIG. 15 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify a user that "Detect that there are persons in the front passenger seat and the second-row left area", and notify the user that a current sound field optimization center point may be a point having equal distances with a center point of an area in which the front passenger seat is located and a center point of the second-row left area.
  • FIG. 16 is a schematic diagram of a sound field optimization center in a vehicle when there are users in a driver seat and a front passenger seat according to an embodiment of this application.
  • a connection line between center points of areas in which the driver seat and the front passenger seat are located is EF, where a midpoint of EF may be a point P.
  • the point P may be a current sound field optimization center in the vehicle.
  • FIG. 17 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the central control screen may notify a user that "Detect that there are persons in the driver seat and the front passenger seat", and notify the user that a current sound field optimization center point may be a point having equal distances with a center point of an area in which the driver seat is located and a center point of an area in which the front passenger seat is located.
  • the vehicle may control play intensities of four speakers based on distances between the point P and the four speakers.
  • the vehicle may control a play intensity of the speaker 1 to be (AP/AO)·p.
  • the vehicle may control a play intensity of the speaker 2 to be (BP/AO)·p.
  • the vehicle may control a play intensity of the speaker 3 to be (CP/AO)·p.
  • the vehicle may control a play intensity of the speaker 4 to be (DP/AO)·p.
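  • written compactly, each speaker plays at (its distance to the sound field optimization center)/AO · p; the same ratio applies to the points R and E in the cases below. The following is a minimal sketch under assumed cabin coordinates, speaker names, and base intensity p — an illustration, not the implementation of this application:

```python
import math

def play_intensities(speakers, center, ref_distance, p):
    """Scale each speaker's play intensity by the ratio of its distance
    to the sound field optimization center over the reference distance
    AO, matching the (XP/AO)*p formulas above."""
    return {name: math.dist(pos, center) / ref_distance * p
            for name, pos in speakers.items()}

# Hypothetical layout: speaker 1 at A (front left), 2 at B (front right),
# 3 at C (rear right), 4 at D (rear left); O is their circumcenter.
speakers = {"speaker1": (0.0, 0.0), "speaker2": (2.0, 0.0),
            "speaker3": (2.0, 3.0), "speaker4": (0.0, 3.0)}
O = (1.0, 1.5)
AO = math.dist(speakers["speaker1"], O)

P = (1.0, 0.75)  # e.g. midpoint of EF (driver seat / front passenger seat)
print(play_intensities(speakers, P, AO, p=1.0))
```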
  • FIG. 18 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify a user that "Detect that there are persons in the second-row left area and the second-row right area", and notify the user that a current sound field optimization center point may be a point having equal distances with a center point of the second-row left area and a center point of the second-row right area.
  • FIG. 19 is a schematic diagram of a sound field optimization center in a vehicle when there are users in a driver seat and a second-row left area according to an embodiment of this application.
  • a connection line between a center point of the driver seat and a center point of the second-row left area is EH, where a midpoint of EH may be a point R.
  • the point R may be a current sound field optimization center in the vehicle.
  • FIG. 20 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify a user that "Detect that there are persons in the driver seat and the second-row left area", and notify the user that a current sound field optimization center point may be a point having equal distances with a center point of an area in which the driver seat is located and a center point of the second-row left area.
  • the vehicle may control play intensities of four speakers based on distances between the point R and the four speakers.
  • the vehicle may control a play intensity of the speaker 1 to be (AR/AO)·p.
  • the vehicle may control a play intensity of the speaker 2 to be (BR/AO)·p.
  • the vehicle may control a play intensity of the speaker 3 to be (CR/AO)·p.
  • the vehicle may control a play intensity of the speaker 4 to be (DR/AO)·p.
  • FIG. 21 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify a user that "Detect that there are persons in the front passenger seat and the second-row right area", and notify the user that a current sound field optimization center point may be a point having equal distances with a center point of an area in which the front passenger seat is located and a center point of the second-row right area.
  • FIG. 22 is a schematic diagram of a sound field optimization center in a vehicle when there is a user in a driver seat according to an embodiment of this application.
  • a center point of an area in which the driver seat is located is a point E, where the point E may be a current sound field optimization center point in the vehicle.
  • FIG. 23 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify a user that "Detect that there is a person in the driver seat", and notify the user that a current sound field optimization center point may be a center point of an area in which the driver seat is located.
  • the vehicle may control play intensities of four speakers based on distances between the point E and the four speakers.
  • the vehicle may control a play intensity of the speaker 1 to be (AE/AO)·p.
  • the vehicle may control a play intensity of the speaker 2 to be (BE/AO)·p.
  • the vehicle may control a play intensity of the speaker 3 to be (CE/AO)·p.
  • the vehicle may control a play intensity of the speaker 4 to be (DE/AO)·p.
  • FIG. 24 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify a user that "Detect that there is a person in the front passenger seat", and notify the user that a current sound field optimization center point may be a center point of an area in which the front passenger seat is located.
  • FIG. 25 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user that "Detect that there is a person in the second-row left area", and notify the user that a current sound field optimization center point may be a center point of the second-row left area.
  • FIG. 26 is a schematic diagram of displaying a sound field optimization center on a large central control screen of a vehicle according to an embodiment of this application.
  • the large central control screen may notify the user that "Detect that there is a person in the second-row right area" and notify the user that a current sound field optimization center point may be a center point of the second-row right area.
  • the foregoing uses, for description, an example in which the position information of the user in S501 is a center point of the area in which the user is located.
  • This embodiment of this application is not limited thereto.
  • the position information of the user may alternatively be another preset point of the area in which the user is located, or the position information of the user may alternatively be a point of an area that is obtained through calculation according to a preset rule (for example, a preset algorithm).
  • the position information of the user may alternatively be determined based on position information of a human ear of the user.
  • the position information of the human ear of the user may be determined based on image information collected by a camera apparatus.
  • the position information of the human ear of the user is a midpoint of a connection line between a first point and a second point, where the first point is a point on a left ear of the user, and the second point is a point on a right ear of the user.
  • position information of a pinna of the human ear of the user may be determined based on image information collected by a camera apparatus.
  • Position information of an area may be determined based on the position information of the human ear of the user or the position information of the pinna of the human ear.
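  • as a sketch of the ear-based variant, the user's position can be taken as the midpoint of the line connecting the two ear points. The keypoint format and coordinate values below are assumptions; the ear points would come from an upstream camera apparatus and keypoint model:

```python
def position_from_ears(left_ear, right_ear):
    """Midpoint of the connection line between a point on the left ear
    (first point) and a point on the right ear (second point), used as
    the position information of the user, as described above."""
    return tuple((l + r) / 2.0 for l, r in zip(left_ear, right_ear))

# Hypothetical 3D keypoints in cabin coordinates (metres).
print(position_from_ears((0.42, 0.55, 1.10), (0.58, 0.55, 1.10)))
# -> (0.5, 0.55, 1.1)
```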
  • with reference to FIG. 27 and FIG. 28, the following describes a process in which a user manually adjusts a play intensity of a specific speaker after the vehicle adjusts play intensities of a plurality of sound-making apparatuses by using the position information of the user.
  • FIG. 27 shows a group of graphical user interfaces (graphical user interface, GUI) according to an embodiment of this application.
  • a vehicle may notify a user that "Detect that there are persons in the driver seat, the front passenger seat, the second-row left area and the second-row right area" on an HMI, and notify the user of a current sound field optimization center.
  • smiley faces in the driver seat, the front passenger seat, the second-row left area, and the second-row right area indicate that there are users in the areas.
  • the vehicle may display an icon 2701 (for example, a garbage bin icon) on the HMI.
  • the vehicle may display, on the HMI, a GUI shown in (b) in FIG. 27 .
  • in response to the detected operation of the user dragging the smiley face in the second-row left area to the icon 2701, the vehicle may notify the user on the HMI that "The volume of the speaker in the second-row left area has been reduced to 0".
  • the current play intensities of the four speakers may all be p.
  • the vehicle may reduce the play intensity of the speaker in the second-row left area to 0, or decrease the play intensity of the speaker in the second-row left area from p to 0.1p. This is not limited in this embodiment of this application.
  • FIG. 28 shows a group of GUIs according to an embodiment of this application.
  • a vehicle may notify a user that "Detect that there are persons in the driver seat, the front passenger seat, the second-row left area, and the second-row right area" on an HMI, and notify the user of a current sound field optimization center.
  • a scroll bar 2801 of a play intensity may be displayed.
  • the scroll bar 2801 of the play intensity may include a scroll block 2802.
  • in response to the detected operation of the user sliding a finger upward in the second-row left area, the vehicle may increase a play intensity of a speaker near the second-row left area and move the scroll block 2802 upward on the HMI.
  • the play intensity of the speaker near the second-row left area may be increased from p to 1.5p.
  • the vehicle can notify the user that "The volume of the speaker in the second-row left area has been increased" on the HMI.
  • the vehicle may adjust the play intensity of the speaker to the second play intensity.
  • in this way, the user can quickly adjust the play intensity of the speaker in the area, so that the playing of the speaker in the area better matches the listening effect expected by the user.
  • the vehicle may further determine a status of a user in an area based on image information collected by a camera, so as to adjust a play intensity of a speaker near the area with reference to position information of the area and the status of the user. For example, when the vehicle detects that there is a user in the second-row left area and the user is resting, the vehicle may control the play intensity of the speaker near the second-row left area to be 0 or another value.
  • the second play intensity may alternatively be a default play intensity (for example, the second play intensity is 0).
  • the vehicle may adjust a play intensity of a speaker in the area from the first play intensity to the default play intensity.
  • the preset operation includes but is not limited to a touch and hold operation detected in the area (for example, touching and holding a seat in the second-row left area), and a sliding or tapping operation in the area.
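  • a minimal dispatch for these manual adjustments might look as follows. The gesture names, the 0.5p step, and the mute value are assumptions for illustration, not the HMI interface of this application:

```python
def handle_gesture(gesture, area, intensities, p=1.0, step=0.5):
    """Map HMI gestures to speaker play-intensity changes: dragging an
    area's smiley face to the bin icon mutes that area's speaker;
    sliding up or down nudges the intensity by a fixed step."""
    if gesture == "drag_to_bin":
        intensities[area] = 0.0          # or e.g. 0.1 * p, as noted above
    elif gesture == "slide_up":
        intensities[area] += step * p    # e.g. p -> 1.5p
    elif gesture == "slide_down":
        intensities[area] = max(0.0, intensities[area] - step * p)
    return intensities

intensities = {"second_row_left": 1.0}
print(handle_gesture("slide_up", "second_row_left", intensities))
```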
  • FIG. 29 is a schematic diagram of a sound-making apparatus control method when applied to a home theater according to an embodiment of this application.
  • the home theater may include a sound box 1, a sound box 2, and a sound box 3.
  • a sound-making system in the home theater can adjust the three sound boxes by detecting a position relationship between a user and the three sound boxes.
  • FIG. 30 is a schematic diagram of a sound field optimization center in a home theater according to an embodiment of this application.
  • a graph including connection lines of points at which three sound boxes are located is a triangle ABC, where a sound box 1 is disposed at a point A on the triangle ABC, a sound box 2 is disposed at a point B, and a sound box 3 is disposed at a point C.
  • a point O is a circumcenter of the triangle ABC.
  • a sound-making system may control the sound box 1, the sound box 2, and the sound box 3 to have a same play intensity (for example, the play intensities of the three sound boxes are all p).
  • the sound-making system may adjust the play intensities of the three sound boxes based on a position relationship between the area in which the user is located and the three sound boxes.
  • the center point of the area in which the user is located is a point Q.
  • the sound-making system may control a play intensity of the sound box 1 to be (AQ/AO)·p.
  • the sound-making system may control a play intensity of the sound box 2 to be (BQ/AO)·p.
  • the sound-making system may control a play intensity of the sound box 3 to be (CQ/AO)·p.
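  • the home-theater case only changes the geometry: the default optimization center is the circumcenter O of the triangle ABC, and the same distance-ratio scaling applies. A sketch under assumed sound box coordinates and user position:

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle ABC: the point equidistant from the
    three sound boxes, used as the default optimization center."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (2.0, 3.0)  # assumed box positions
O = circumcenter(A, B, C)
AO = math.dist(A, O)
Q = (2.0, 1.0)  # center of the area in which the user is located (assumed)
for name, pos in {"box1": A, "box2": B, "box3": C}.items():
    print(name, round(math.dist(Q, pos) / AO, 3), "x p")
```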
  • FIG. 31 is a schematic flowchart of a sound-making apparatus control method 3100 according to an embodiment of this application.
  • the method 3100 may be applied to a first device. As shown in FIG. 31 , the method 3100 includes the following steps.
  • S3101: A first device obtains position information of a plurality of areas in which a plurality of users are located.
  • that a first device obtains position information of a plurality of areas in which a plurality of users are located includes: obtaining sensing information; and determining the position information of the plurality of areas based on the sensing information, where the sensing information includes one or more of image information, pressure information, and sound information.
  • the sensing information may include image information.
  • the first device may obtain the image information by using an image sensor.
  • the first device is a vehicle.
  • the vehicle may determine, based on image information collected by an image shooting apparatus, whether the image information includes face contour information, human ear information, iris information, or the like.
  • the vehicle may obtain image information of the driver area that is collected by a driver camera. If the vehicle determines that the image information includes one or more of face contour information, human ear information, or iris information, the vehicle may determine that there is a user in the driver area.
  • the vehicle may input the image information into a neural network, to obtain a classification result indicating that the area includes a face of the user.
  • the vehicle may further establish a coordinate system for the driver area.
  • the vehicle may collect image information of a plurality of coordinate points in the coordinate system by using the driver camera, and further analyze whether there is feature information of a person at the plurality of coordinate points. If there is the feature information of the person, the vehicle may determine that there is the user in the driver area.
  • the first device is a vehicle, and the sensing information may be pressure information.
  • a pressure sensor is included under each seat in the vehicle, and that a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The first device obtains, based on the pressure information (for example, a pressure value) collected by the pressure sensor, the position information of the plurality of areas in which the plurality of users are located.
  • when a pressure value collected by a pressure sensor is greater than or equal to a preset pressure value, the vehicle determines that there is a user in the area corresponding to the pressure sensor. For example, when a pressure value detected by a pressure sensor under a seat in the driver area is greater than or equal to the preset pressure value, the vehicle may determine that there is a user in the driver area.
  • the sensing information may be sound information. That a first device obtains position information of a plurality of areas in which a plurality of users are located includes: The first device obtains, by using sound signals collected by a microphone array, the position information of the plurality of areas in which the plurality of users are located. For example, the first device may locate a user based on a sound signal collected by the microphone array. If the first device locates, based on the sound signal, that a user is located in an area, the first device may determine that there is the user in the area.
  • the first device may further determine, with reference to at least two of image information, pressure information, and sound information, whether there is a user in an area.
  • the first device is a vehicle.
  • the vehicle may obtain image information collected by the driver camera and pressure information collected by the pressure sensor in the driver seat. If the vehicle determines, based on the image information collected by the driver camera, that the image information includes face information, and the pressure value collected by the pressure sensor in the driver seat is greater than or equal to a first threshold, the vehicle may determine that there is a user in the driver area.
  • when the first device needs to determine whether there is a user in an area, the first device may obtain image information of the area that is collected by the camera, and pick up sound information in the environment by using the microphone array. If the first device determines, based on the image information of the area that is collected by the camera, that the image information includes face information, and determines, based on the sound information collected by the microphone array, that a sound comes from the area, the first device may determine that there is a user in the area.
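  • in code, such multi-sensor fusion can be as simple as a boolean combination. The sketch below assumes a face-detection flag, a seat pressure value, and a sound-source flag are already available; the specific AND/OR structure and the threshold value are illustrative choices, since the embodiments above describe several possible combinations:

```python
def user_in_area(face_detected, pressure_value, sound_from_area,
                 pressure_threshold=200.0):
    """Declare a user present when the image cue agrees with at least
    one of the pressure or sound cues -- one possible combination of
    the 'at least two' types of sensing information described above."""
    return face_detected and (pressure_value >= pressure_threshold
                              or sound_from_area)

print(user_in_area(face_detected=True, pressure_value=350.0,
                   sound_from_area=False))  # -> True
```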
  • S3102: The first device controls, based on the position information of the plurality of areas in which the plurality of users are located and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.
  • the position information of the plurality of areas in which the plurality of users are located may include a center point of each of the plurality of areas, or a preset point of each of the plurality of areas, or a point of each area that is obtained according to a preset rule (for example, a preset algorithm).
  • the first device controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: controlling, based on the position information of the plurality of areas and a mapping relationship, the plurality of sound-making apparatuses to work, where the mapping relationship is a mapping relationship between positions of the plurality of areas and play intensities of the plurality of sound-making apparatuses.
  • Table 1 shows a mapping relationship between positions of a plurality of areas and play intensities of a plurality of sound-making apparatuses.
  • Table 1 Mapping relationship between positions of a plurality of areas and play intensities of a plurality of sound-making apparatuses

Driver seat | Front passenger seat | Second-row left area | Second-row right area | Speaker 1 | Speaker 2 | Speaker 3 | Speaker 4
Person | Person | Person | Person | p | p | p | p
Person | Person | Person | No person | p | p | p | p
Person | Person | No person | Person | p | p | p | p
Person | No person | Person | Person | p | p | p | p
No person | Person | Person | Person | p | p | p | p
Person | No person | No person | Person | p | p | p | p
No person | Person | Person | No person | p | p | p | p
Person | Person | No person | No person | 0.6p | 0.6p | 1.8p | 1.8p
No person | No person | Person | Person | 1.8p | 1.8p | 0.6p | 0.6p
Person | No person | Person | No person | 0.8p | 1.4p | 1.4p | 0.8p
No person | Person | No person | Person | 1.4p | 0.8p | 0.8p | 1.4p
  • the mapping relationship between the positions of the plurality of areas and the play intensities of the plurality of sound-making apparatuses shown in Table 1 is merely an example.
  • An area division manner and a play intensity of a speaker are not limited in this embodiment of this application.
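  • implemented directly, such a mapping relationship is a lookup table from an occupancy pattern to per-speaker intensity factors. The rows below mirror part of Table 1; the tuple order (driver, front passenger, second-row left, second-row right) and the factor-times-p representation are assumptions for illustration:

```python
# Occupancy of (driver, front passenger, second-row left, second-row right)
# mapped to play-intensity factors of speakers 1-4 (multiples of p).
OCCUPANCY_TO_FACTORS = {
    (True, True, True, True):    (1.0, 1.0, 1.0, 1.0),  # all areas occupied
    (True, True, False, False):  (0.6, 0.6, 1.8, 1.8),  # front row only
    (False, False, True, True):  (1.8, 1.8, 0.6, 0.6),  # second row only
    (True, False, True, False):  (0.8, 1.4, 1.4, 0.8),  # left side only
}

def intensities_for(occupancy, p=1.0):
    """Return the play intensity of each speaker for an occupancy pattern."""
    return [f * p for f in OCCUPANCY_TO_FACTORS[occupancy]]

print(intensities_for((True, True, False, False)))  # -> [0.6, 0.6, 1.8, 1.8]
```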
  • that the first device controls, based on the position information of the plurality of areas and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: The first device determines a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and the first device controls, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.
  • the method further includes: The first device notifies position information of the sound field optimization center point.
  • the first device notifies the position information of the current sound field optimization center by using an HMI, a sound, or an atmosphere light.
  • the plurality of areas are areas in a vehicle cockpit.
  • the plurality of areas include a front-row area and a rear-row area.
  • the plurality of areas may include a driver area and a front passenger area.
  • the method further includes: The first device notifies the position information of the plurality of areas in which the plurality of users are located.
  • that the first device controls the plurality of sound-making apparatuses to work includes: The first device adjusts a play intensity of each of the plurality of sound-making apparatuses.
  • the plurality of sound-making apparatuses include a first sound-making apparatus, and that the first device adjusts a play intensity of each of the sound-making apparatuses includes: The first device controls a play intensity of the first sound-making apparatus to be a first play intensity.
  • the method further includes: The first device obtains an instruction of a user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity; and the first device adjusts the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the instruction.
  • the vehicle may control play intensities of four speakers to be p.
  • the vehicle may adjust a play intensity of a speaker near the second-row left area from p to 0.
  • the vehicle may control the play intensities of the four speakers to be p.
  • the vehicle may adjust the play intensity of the speaker near the second-row left area from p to 1.5p.
  • the first device obtains the position information of the plurality of areas in which the plurality of users are located, and controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work, without a need for a user to manually adjust the sound-making apparatuses.
  • This helps to reduce learning costs of the user and reduce complicated operations of the user.
  • this also helps the plurality of users enjoy good listening effect, and helps to improve user experience.
  • FIG. 32 is a schematic diagram of a structure of a sound-making system 3200 according to an embodiment of this application.
  • the sound-making system may include a sensor 3201, a controller 3202, and a plurality of sound-making apparatuses 3203.
  • the sensor 3201 is configured to collect data and send the data to the controller.
  • the controller 3202 is configured to obtain, based on the data, position information of a plurality of areas in which a plurality of users are located; and control, based on the position information of the plurality of areas and position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses 3203 to work.
  • the data includes at least one of image information, pressure information, and sound information.
  • the controller 3202 is specifically configured to: determine a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and control, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.
  • the controller 3202 is further configured to send a first instruction to a first prompt apparatus, where the first instruction instructs the first prompt apparatus to notify the position information of the sound field optimization center point.
  • the plurality of areas are areas in a vehicle cockpit.
  • the plurality of areas include a front-row area and a rear-row area.
  • the plurality of areas may include a driver area and a front passenger area.
  • the controller 3202 is further configured to send a second instruction to a second prompt apparatus, where the second instruction instructs the second prompt apparatus to notify the position information of the plurality of areas in which the plurality of users are located.
  • the controller 3202 is specifically configured to adjust a play intensity of each of the plurality of sound-making apparatuses 3203.
  • the plurality of sound-making apparatuses 3203 include a first sound-making apparatus.
  • the controller 3202 is specifically configured to control a play intensity of the first sound-making apparatus to be a first play intensity.
  • the controller 3202 is further configured to: obtain a third instruction of the user to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity, and adjust the play intensity of the first sound-making apparatus to the second play intensity in response to obtaining the third instruction.
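  • putting the controller's responsibilities together, a skeleton might look as follows. The centroid used as a stand-in for the equidistant center point, the coordinates, and the method names are assumptions; this is a sketch of the role of the controller 3202, not its implementation:

```python
import math

class SoundFieldController:
    """Derive an optimization center from the occupied areas, scale every
    sound-making apparatus by its distance ratio, and honour a user
    override (the third instruction described above)."""

    def __init__(self, apparatuses, ref_distance, p=1.0):
        self.apparatuses = apparatuses   # name -> (x, y) position
        self.ref = ref_distance          # reference distance, e.g. AO
        self.p = p
        self.intensities = {}

    def update(self, area_centers):
        # Centroid of the occupied-area centers stands in for the
        # equidistant sound field optimization center point in this sketch.
        n = len(area_centers)
        cx = sum(x for x, _ in area_centers) / n
        cy = sum(y for _, y in area_centers) / n
        for name, pos in self.apparatuses.items():
            self.intensities[name] = math.dist(pos, (cx, cy)) / self.ref * self.p

    def override(self, name, second_play_intensity):
        # Third instruction: set one apparatus to a user-chosen intensity.
        self.intensities[name] = second_play_intensity
```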
  • FIG. 33 is a schematic block diagram of an apparatus 3300 according to an embodiment of this application.
  • the apparatus 3300 includes a transceiver unit 3301 and a processing unit 3302.
  • the transceiver unit 3301 is configured to receive sensing information.
  • the processing unit 3302 is configured to obtain, based on the sensing information, position information of a plurality of areas in which a plurality of users are located.
  • the processing unit 3302 is further configured to control, based on the position information of the plurality of areas in which the plurality of users are located and position information of a plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work.
  • that the processing unit 3302 controls, based on the position information of the plurality of areas and the position information of the plurality of sound-making apparatuses, the plurality of sound-making apparatuses to work includes: determining a sound field optimization center point, where distances between the sound field optimization center point and center points of all of the plurality of areas are equal; and controlling, based on a distance between the sound field optimization center point and each of the plurality of sound-making apparatuses, each of the plurality of sound-making apparatuses to work.
  • the transceiver unit 3301 is further configured to send a first instruction to a first prompt unit, where the first instruction instructs the first prompt unit to notify the position information of the sound field optimization center point.
  • the plurality of areas are areas in a vehicle cockpit.
  • the plurality of areas include a front-row area and a rear-row area.
  • the plurality of areas include a driver area and a front passenger area.
  • the transceiver unit 3301 is further configured to send a second instruction to a second prompt unit, where the second instruction instructs the second prompt unit to notify the position information of the plurality of areas in which the plurality of users are located.
  • the processing unit 3302 is specifically configured to adjust a play intensity of each of the plurality of sound-making apparatuses.
  • the plurality of sound-making apparatuses include a first sound-making apparatus.
  • the processing unit 3302 is specifically configured to control a play intensity of the first sound-making apparatus to be a first play intensity.
  • the transceiver unit 3301 is further configured to receive a third instruction, where the third instruction is an instruction instructing to adjust the play intensity of the first sound-making apparatus from the first play intensity to a second play intensity.
  • the processing unit 3302 is further configured to adjust the play intensity of the first sound-making apparatus to the second play intensity.
  • the sensing information includes one or more of image information, pressure information, and sound information.
  • An embodiment of this application further provides an apparatus.
  • the apparatus includes a processing unit and a storage unit.
  • the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the sound-making apparatus control method.
  • the processing unit may be the processor 151 shown in FIG. 1.
  • the storage unit may be the memory 152 shown in FIG. 1.
  • the memory 152 may be a storage unit (for example, a register or a cache) in a chip, or may be a storage unit located outside the chip in a vehicle (for example, a read-only memory, or a random access memory).
  • An embodiment of this application further provides a vehicle including the sound-making system 3200 or the apparatus 3300.
  • An embodiment of this application further provides a computer program product.
  • the computer program product includes computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the method.
  • An embodiment of this application further provides a computer-readable medium.
  • the computer-readable medium stores computer program code, and when the computer program code is run on a computer, the computer is enabled to perform the foregoing method.
  • steps in the method can be implemented by using an integrated logic circuit of hardware in the processor 151, or by using instructions in a form of software.
  • the method disclosed with reference to embodiments of this application may be directly performed by a hardware processor, or may be performed by using a combination of hardware in the processor 151 and a software module.
  • a software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 152, and the processor 151 reads information in the memory 152 and completes the steps in the method in combination with hardware of the processor 151. To avoid repetition, details are not described herein again.
  • the processor 151 in embodiments of this application may be a central processing unit (central processing unit, CPU), or may be another general purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 152 may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • "first", "second", and various numeric numbers are merely used for distinguishing for ease of description and are not intended to limit the scope of embodiments of this application.
  • "first", "second", and various numeric numbers are used for distinguishing between different pipes, through holes, and the like.
  • sequence numbers of the foregoing processes do not mean execution sequences in various embodiments of this application.
  • the execution sequences of the processes should be determined according to functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of embodiments of this application.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely an example.
  • division into the units is merely logical function division and may be other division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
  • When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a software product.
  • the computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)
EP22832173.3A 2021-06-30 2022-06-30 Procédé de commande d'appareils de production de son, et système de production de son et véhicule Pending EP4344255A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110744208.4A CN113596705B (zh) 2021-06-30 2021-06-30 一种发声装置的控制方法、发声系统以及车辆
PCT/CN2022/102818 WO2023274361A1 (fr) 2021-06-30 2022-06-30 Procédé de commande d'appareils de production de son, et système de production de son et véhicule

Publications (1)

Publication Number Publication Date
EP4344255A1 true EP4344255A1 (fr) 2024-03-27

Family

ID=78245719

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22832173.3A Pending EP4344255A1 (fr) 2021-06-30 2022-06-30 Procédé de commande d'appareils de production de son, et système de production de son et véhicule

Country Status (3)

Country Link
EP (1) EP4344255A1 (fr)
CN (1) CN113596705B (fr)
WO (1) WO2023274361A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596705B (zh) * 2021-06-30 2023-05-16 华为技术有限公司 一种发声装置的控制方法、发声系统以及车辆
CN114038240B (zh) * 2021-11-30 2023-05-05 东风商用车有限公司 一种商用车声场控制方法、装置及设备
CN117985035A (zh) * 2022-10-28 2024-05-07 华为技术有限公司 一种控制方法、装置和运载工具

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9510126B2 (en) * 2012-01-11 2016-11-29 Sony Corporation Sound field control device, sound field control method, program, sound control system and server
CN103220594A (zh) * 2012-01-20 2013-07-24 新昌有限公司 用于车辆的音效调控系统
CN103220597B (zh) * 2013-03-29 2014-07-23 苏州上声电子有限公司 车内声场均衡装置
CN204316717U (zh) * 2014-09-01 2015-05-06 歌尔声学股份有限公司 一种自动调整车内声场分布的系统
CN104270695B (zh) * 2014-09-01 2018-07-31 歌尔股份有限公司 一种自动调整车内声场分布的方法和系统
US9509820B2 (en) * 2014-12-03 2016-11-29 Harman International Industries, Incorporated Methods and systems for controlling in-vehicle speakers
DK179663B1 (en) * 2015-10-27 2019-03-13 Bang & Olufsen A/S Loudspeaker with controlled sound fields
KR101791843B1 (ko) * 2016-04-29 2017-10-31 주식회사 에스큐그리고 차량 내 음향 공간 보정 시스템
CN107592588B (zh) * 2017-07-18 2020-07-10 科大讯飞股份有限公司 声场调节方法及装置、存储介质、电子设备
KR101927819B1 (ko) * 2017-12-06 2018-12-11 주식회사 피티지 차량용 지향성 음향시스템
CN108551623A (zh) * 2018-05-15 2018-09-18 上海博泰悦臻网络技术服务有限公司 车辆及其基于声音识别的音频播放调节方法
CN108834030A (zh) * 2018-09-28 2018-11-16 广州小鹏汽车科技有限公司 一种车内声场调节方法及音频系统
CN109922411A (zh) * 2019-01-29 2019-06-21 惠州市华智航科技有限公司 声场控制方法及声场控制系统
CN110149586A (zh) * 2019-05-23 2019-08-20 贵安新区新特电动汽车工业有限公司 声音调整方法及装置
CN112312280B (zh) * 2019-07-31 2022-03-01 北京地平线机器人技术研发有限公司 一种车内声音播放方法及装置
US20210152939A1 (en) * 2019-11-19 2021-05-20 Analog Devices, Inc. Audio system speaker virtualization
CN113055810A (zh) * 2021-03-05 2021-06-29 广州小鹏汽车科技有限公司 音效控制方法、装置、系统、车辆以及存储介质
CN113596705B (zh) * 2021-06-30 2023-05-16 华为技术有限公司 一种发声装置的控制方法、发声系统以及车辆

Also Published As

Publication number Publication date
WO2023274361A1 (fr) 2023-01-05
CN113596705A (zh) 2021-11-02
CN113596705B (zh) 2023-05-16
US20240137721A1 (en) 2024-04-25

Similar Documents

Publication Publication Date Title
WO2021052213A1 (fr) Procédé et dispositif de réglage de caractéristique de pédale d'accélérateur
EP4344255A1 (fr) Procédé de commande d'appareils de production de son, et système de production de son et véhicule
EP3132987B1 (fr) Systèmes et procédés d'assistance au conducteur
US10234859B2 (en) Systems and methods for driver assistance
WO2022000448A1 (fr) Procédé d'interaction de geste d'air dans un véhicule, dispositif électronique et système
WO2022204925A1 (fr) Procédé d'obtention d'image et équipement associé
WO2022205243A1 (fr) Procédé et appareil pour obtenir une zone de changement de voie
WO2021217570A1 (fr) Procédé et appareil de commande basée sur un geste dans l'air, et système
EP3892960A1 (fr) Systèmes et procédés de réalité augmentée dans un véhicule
WO2021217575A1 (fr) Procédé d'identification et dispositif d'identification pour un objet d'intérêt d'un utilisateur
WO2024131698A1 (fr) Procédé de réglage de siège dans un véhicule, procédé de stationnement et dispositif associé
WO2024093768A1 (fr) Procédé d'alarme de véhicule et dispositif associé
EP4180297A1 (fr) Procédé et appareil de commande de conduite automatique
CN115056784B (zh) 车辆控制方法、装置、车辆、存储介质及芯片
US20240236599A9 (en) Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle
EP4400367A1 (fr) Procédé et appareil de commande d'angle de vision d'une caméra montée sur un véhicule et véhicule
CN115042813B (zh) 车辆控制方法、装置、存储介质及车辆
CN114572219B (zh) 自动超车方法、装置、车辆、存储介质及芯片
CN114802435B (zh) 车辆控制方法、装置、车辆、存储介质及芯片
EP4292896A1 (fr) Procédé de commande de déplacement de véhicule, dispositif électronique, support d'informations, puce et véhicule
US20240236494A1 (en) Method and Apparatus for Controlling Angle of View of Vehicle-Mounted Camera, and Vehicle
CN115221260B (zh) 数据处理方法、装置、车辆及存储介质
EP4365052A1 (fr) Procédé d'évitement de collision et appareil de commande
CN115221261A (zh) 地图数据融合方法、装置、车辆及存储介质
CN115447506A (zh) 设备控制方法、装置、车辆、介质及芯片

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231221

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR