EP4307722A1 - Road vehicle and method and system for controlling an acoustic output device in a road vehicle

Road vehicle and method and system for controlling an acoustic output device in a road vehicle

Info

Publication number
EP4307722A1
Authority
EP
European Patent Office
Prior art keywords
acoustic output
output device
vehicle
location
road
Prior art date
Legal status
Pending
Application number
EP22185123.1A
Other languages
German (de)
English (en)
Inventor
Tarek Zaki
Current Assignee
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG
Priority to EP22185123.1A
Publication of EP4307722A1
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H04S7/304 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 Reduction of ambient noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00 Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/022 Plurality of transducers corresponding to a plurality of sound channels in each earpiece of headphones or in a single enclosure
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/13 Acoustic transducers and sound field adaptation in vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • the present invention relates to a road-based vehicle and to a method and system for controlling an acoustic output device in a road-based vehicle.
  • the invention further relates to a computer program for carrying out the method.
  • the present invention can, in principle, be used in connection with any type of road-based vehicle, in particular a motorized vehicle, such as a (passenger) car, lorry / truck, bus or coach.
  • Since the invention finds particular application in (passenger) cars, the invention will be described primarily in connection with cars, without being limited to this.
  • one skilled in the art will, guided by the present disclosure, have no difficulty in implementing the invention in the context of road-based vehicles other than cars.
  • Some acoustic output devices available today have a surround sound function, which can provide a user with a particularly rich listening experience.
  • Such surround sound systems not only exist in a public cinema or home cinema setting; some portable devices such as certain headphones or headsets also have a surround sound function.
  • the present invention provides a method of controlling an acoustic output device in a road-based vehicle, the method comprising: determining an orientation of the acoustic output device with respect to the road-based vehicle; and controlling an acoustic output function of the acoustic output device as a function of the orientation of the acoustic output device with respect to the road-based vehicle.
  • the orientation of the acoustic output device is taken into account so that, when the acoustic output device is in a first orientation with respect to the road-based vehicle, the acoustic output perceived by a user of the acoustic output device will, as a rule, be different when compared with a situation where the acoustic output device is in a second orientation with respect to the road-based vehicle, the second orientation being different from the first orientation.
  • the headphones might be controlled in such a way that, in the first orientation, a first subset of the speakers is activated (to output an audible sound), and in the second orientation, a second subset of the speakers different from the first subset of the speakers is activated.
  • orientation information does not necessarily refer to a value or set of values which (directly) provides a correct measurement of the orientation (of the acoustic output device, in particular with respect to the road-based vehicle) as expressed in (correct) physical units (e.g. polar coordinates or similar).
  • orientation and “orientation information” can, in principle, mean any information which (at least approximately) characterises the orientation (of the acoustic output device, in particular with respect to the road-based vehicle), in particular (at least approximately) uniquely characterises the orientation (of the acoustic output device, in particular with respect to the road-based vehicle), in particular a value or set of values from which physically correct values of the orientation (of the acoustic output device, in particular with respect to the road-based vehicle) can (at least approximately) be obtained, in particular without having to resort to other information or measurements.
  • the orientation or orientation information would be such that a computing device can process it.
  • the term "surround sound” may refer to a two-dimensional surround sound (e.g. a plurality of speakers distributed substantially in a single plane) or to a three-dimensional surround sound (e.g. a plurality of speakers, not all distributed in a single plane).
  • the present invention provides a system for controlling an acoustic output device in a road-based vehicle, the system comprising: a processing unit configured to receive orientation information regarding an orientation of the acoustic output device with respect to the road-based vehicle, wherein the processing unit is further configured to generate and output a control signal for causing the acoustic output device to output audible sound, wherein the processing unit is configured to generate and output the control signal as a function of the orientation information.
  • a "processing unit” is preferably intended to be understood to mean any electrical component or device which can receive orientation information and generate a control signal based thereon which can be used by the acoustic output device.
  • the processing unit can either (substantially) transparently pass the orientation information to the acoustic output device if the acoustic output device can be controlled directly by the orientation information, or it may process the orientation information and generate a control signal which is (substantially) different from the orientation information.
  • the processing unit may in particular comprise a microprocessor.
  • the processing unit of the second aspect of the invention may, for example, comprise an onboard computer of the vehicle, or form part thereof - and may accordingly form part of the vehicle. The processing unit may however also form part of the acoustic output device.
  • the system may further comprise a detector configured to detect said orientation of the acoustic output device with respect to the road-based vehicle in order to provide the orientation information, or the processing unit may comprise an interface for receiving the orientation information.
  • Suitable detectors configured to detect the orientation of the acoustic output device with respect to the road-based vehicle are known in the art and include, for example, accelerometers, gyroscope sensors and magnetometer sensors, or combinations of these, in particular integrated into the acoustic output device.
  • Other types of sensors may detect the orientation with respect to a local or global reference, for example using a satellite-based positioning system such as GPS or the like.
  • the orientation of the vehicle with respect to the local or global reference might also need to be known or determined, and the relative orientation of the acoustic output device with respect to the vehicle may then be calculated or derived therefrom.
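By way of illustration, the derivation described above can be sketched in a few lines of Python (assuming, for simplicity, that only the yaw angle matters and that both headings are available in degrees against the same global reference, e.g. north; the function names are illustrative and not taken from the patent):

```python
def wrap_degrees(angle: float) -> float:
    """Wrap an angle in degrees to the range (-180, 180]."""
    angle = angle % 360.0
    return angle - 360.0 if angle > 180.0 else angle


def relative_yaw(device_heading_deg: float, vehicle_heading_deg: float) -> float:
    """Yaw of the acoustic output device relative to the vehicle.

    Both headings are measured against the same global reference (e.g. north,
    obtained from a magnetometer, gyroscope integration or a GPS track).
    0 means the device faces the same way as the vehicle; positive values
    mean the device is turned to the right.
    """
    return wrap_degrees(device_heading_deg - vehicle_heading_deg)


# Vehicle heading east (90 deg), user looking north: the device is turned 90 deg to the left.
print(relative_yaw(0.0, 90.0))  # -90.0
```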
  • An interface for receiving the orientation information of the acoustic output device with respect to the vehicle may be wired or wireless.
  • a wired interface may, for example, be an electrical connector for establishing a connection between the processing unit and other devices that may provide the orientation information, e.g. the acoustic output device itself. If such other devices, for example the acoustic output device, are hard-wired to the processing unit, the interface may be regarded as a point along the connection between the processing unit and such other devices.
  • the system can be built into the vehicle or may form part of the vehicle.
  • the system can also be provided on its own and, for example, be supplied to vehicle manufacturers so that the system may be built into vehicles.
  • the present invention provides a road-based vehicle comprising the system according to the second aspect, or any embodiment thereof.
  • the present invention provides a computer program product comprising a program code which is stored on a computer readable medium, for carrying out the method in accordance with the first aspect, or any of its steps or combination of steps, or any embodiments thereof.
  • the computer program may in particular be stored on a non-volatile data carrier.
  • this is a data carrier in the form of an optical data carrier or a flash memory module.
  • the computer program may be provided as a file or a group of files on one or more data processing units, in particular on a server, and can be downloaded via a data connection, for example the Internet, or a dedicated data connection, such as for example a proprietary or a local network.
  • the computer program may comprise a plurality of interacting, individual program modules.
  • the computer program may be updatable and configurable, in particular in a wireless manner, for example by a user or manufacturer.
  • the method further comprises: determining a location of the acoustic output device with respect to the road-based vehicle or a specific point thereof; and controlling the acoustic output function of the acoustic output device as a function of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof.
  • the system comprises a detector configured to detect the location of the acoustic output device with respect to the road-based vehicle or a specific point thereof in order to provide location information, or the processing unit comprises an interface for receiving the location information.
  • the location of the acoustic output device is taken into account so that, when the acoustic output device is in a first location with respect to the road-based vehicle or the specific point thereof, the acoustic output perceived by a user of the acoustic output device will, as a rule, be different when compared with a situation where the acoustic output device is in a second location with respect to the road-based vehicle or the specific point thereof, the second location being different from the first location.
  • the headphones might be controlled in such a way that, when the acoustic output device is in the first location, a first subset of the speakers is activated (to output an audible sound), and when the acoustic output device is in the second location, a second subset of the speakers different from the first subset of the speakers is activated.
  • the "specific point" of the vehicle does not necessarily need to be a point within the vehicle or be a point on a component of the vehicle but can be a point whose location remains in a fixed (spatial) relationship with respect to the vehicle as the vehicle moves.
  • controlling the acoustic output function of the acoustic output device as a function of the orientation of the acoustic output device with respect to road-based vehicle and/or controlling the acoustic output function of the acoustic output device as a function of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof comprises causing the acoustic output device to output audible sound in such a way that a user of the acoustic output device perceives the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output audible sound in such a way that a user of the acoustic output device perceives the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the central location at or towards the front of the vehicle may be considered to correspond to a direction of 11 o'clock (12 o'clock being straight ahead).
  • the headphones are controlled in such a way, in particular by a control signal from the processing unit, that the user does indeed perceive the audible sound as coming from the central location at or towards the front of the vehicle.
  • a speaker towards the front left-hand side of the headphones might be controlled to output a greater volume than a speaker on the right-hand side of the headphones.
  • the user now turns their head towards the left (but remains in the same location, i.e. the passenger seat at the front, right).
  • the orientation of the head and of the headphones has now changed to a second orientation with respect to the vehicle, i.e. a "left" orientation.
  • the orientation (now: "left") and location (passenger seat) of the headphones with respect to the vehicle can again be determined and, given the determined orientation and/or location of the headphones with respect to the vehicle, the headphones are controlled in such a way, in particular by a control signal from the processing unit, that the user still perceives the audible sound as coming from the central location at or towards the front of the vehicle - despite the head of the user/headphones having changed their orientation.
  • a speaker on the right-hand side of the headphones and slightly towards the front (with respect to the head of the user) might be controlled to output a greater volume than a speaker on the left-hand side of the headphones.
  • the headphones would be controlled in a corresponding way (mutatis mutandis), in particular by a control signal from the processing unit, if the user (additionally) changed their location within the vehicle, for example by moving to the driver's seat (front, left).
  • the user can perceive the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
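To make the above walkthrough concrete: the central location at or towards the front of the vehicle sits at 11 o'clock, i.e. -30° in the vehicle frame, and turning the head 90° to the left moves it to +60°, i.e. 2 o'clock, in the head frame. A minimal Python sketch of this transformation and of a deliberately crude four-speaker panning law (the speaker angles, function names and gain law are illustrative assumptions, not the exact activation pattern used by the headphones):

```python
import math


def wrap_degrees(angle: float) -> float:
    """Wrap an angle in degrees to the range (-180, 180]."""
    angle = angle % 360.0
    return angle - 360.0 if angle > 180.0 else angle


def bearing_in_head_frame(target_bearing_vehicle_deg: float, head_yaw_vehicle_deg: float) -> float:
    """Direction of a vehicle-fixed target as seen from the listener's head.

    0 = straight ahead, positive = towards the right, all in degrees.
    """
    return wrap_degrees(target_bearing_vehicle_deg - head_yaw_vehicle_deg)


def clock_position(bearing_deg: float) -> int:
    """Map a bearing to the nearest o'clock direction (12 = straight ahead)."""
    hour = round(bearing_deg / 30.0) % 12
    return 12 if hour == 0 else hour


def four_speaker_gains(bearing_deg: float) -> dict:
    """Toy panning over four head-mounted speakers at -45, +45, +135 and -135 degrees.

    The gain falls off with angular distance from the virtual source; speakers
    more than 90 degrees away from it are muted.
    """
    speakers = {"front_left": -45.0, "front_right": 45.0,
                "rear_right": 135.0, "rear_left": -135.0}
    gains = {}
    for name, angle in speakers.items():
        offset = abs(wrap_degrees(bearing_deg - angle))
        gains[name] = math.cos(math.radians(offset)) if offset < 90.0 else 0.0
    return gains


# Head facing forward: the target is perceived at 11 o'clock, left speakers dominate.
print(clock_position(bearing_in_head_frame(-30.0, 0.0)))    # 11
# Head turned 90 deg to the left: the same target is perceived at 2 o'clock.
print(clock_position(bearing_in_head_frame(-30.0, -90.0)))  # 2
print(four_speaker_gains(bearing_in_head_frame(-30.0, -90.0)))
```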
  • consistent direction and/or location does not necessarily mean that the audible sound only ever comes from the same location/direction with respect to the vehicle, such as the central location towards the front of the vehicle. Instead, the direction and/or location from which the audible sound is intended to be perceived to come may (intentionally) change over time.
  • the audible sound initially is perceived to come from the front right-hand corner of the vehicle, and, over time, the location from which the audible sound is intended to be perceived to come moves to the front left-hand corner of the vehicle.
  • the headphones would be controlled accordingly, in particular by a control signal from the processing unit, and during the course of this, the orientation and/or location of the headphones with respect to the vehicle or the specific point thereof will be taken into account.
  • the audible sound mentioned above does not need to be a sound originating from, or originally generated by, the vehicle itself, or generated in order to mimic or resemble a function of the vehicle.
  • the vehicle itself in particular when considering only its primary function of transporting persons, objects or animals, or functions associated with (or mimicking or resembling) this primary function, does not need to be the source of the audible sound.
  • the source of the audible sound may be an entertainment or information device, in particular an entertainment or information device that is integrated into the vehicle, for example a rear seat entertainment (RSE) device or a co-driver entertainment (CDE) device.
  • the acoustic output device would be controlled, in particular by a control signal from the processing unit, in such a way that a user of the acoustic output device perceives the audible sound as coming from the substantially consistent direction and/or location (e.g. the entertainment or information device) relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • a control signal from the processing unit in such a way that a user of the acoustic output device perceives the audible sound as coming from the substantially consistent direction and/or location (e.g. the entertainment or information device) relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the audible sound is perceived to come from a particular position of an entertainment or information device, e.g. from a particular location on a screen of the entertainment or information device.
  • This position or location may also (intentionally) change over time, for example if the audible sound is supposed to be the voice of a voice assistant, whose position on the entertainment or information device might change.
  • the location from which the audible sound is intended to be perceived to come might even switch from one (entertainment or information) device to another.
  • the acoustic output device would be controlled, in particular by a control signal from the processing unit, in such a way that a user of the acoustic output device perceives the audible sound as coming from the appropriate (substantially consistent) direction and/or location (in particular corresponding to the direction and/or specific location of the entertainment or information device), irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • The same applies, mutatis mutandis, if the device from which the audible sound is intended to be perceived to come is movable with respect to the vehicle, e.g. if such a device is a smart phone, laptop or tablet computer connected to the vehicle (by a wired or wireless connection).
  • Consistent direction and/or location is preferably to be understood to refer to a direction, for example expressed in terms of one or more angles (with respect to a reference) and/or a distance (with respect to a reference).
  • Ensuring that the audible sound is perceived as coming from a substantially consistent direction and/or location may also address problems associated with motion sickness.
  • a large number of people experience motion sickness (also referred to as kinetosis, among other names) when travelling in a vehicle.
  • this problem is exacerbated if there is a disconnect between what a person perceives (in terms of hearing and/or seeing) and what that person feels (in terms of movements of the vehicle), in particular acceleration in any direction, including when the vehicle is passing through a curve. Ensuring that the audible sound is perceived as coming from a substantially consistent direction and/or location may reduce or eliminate this disconnect.
  • the processing unit provides to the acoustic output device, via the control signal, information regarding the direction/position (with respect to the vehicle or specific location thereof) which the audible sound is supposed to be perceived to originate from.
  • This information is provided to the portable device without the processing unit necessarily being aware of the orientation and/or location of the acoustic output device relative to the vehicle.
  • the acoustic output device can then determine, on the basis of its own knowledge of its own orientation and/or location, how to generate and output the audible sound so that the audible sound is able to be perceived by the user as coming from the "consistent" direction/location.
  • the expression "substantially consistent direction relative to the vehicle” and similar is preferably intended to be understood in such a way that an angular difference between, on the one hand, the direction which the audible sound is supposed to be perceived to originate from, and, on the other hand, the direction from which the audible sound will be perceived, by a user, to be coming from, is at most, or less than, one of: 90°, 80°, 70°, 60°, 50°, 45°, 40°, 30°, 20° or 10°.
  • the expression "substantially consistent location relative to the vehicle or the specific point thereof” and similar is preferably intended to be understood in such a way that a distance between, on the one hand, the location which the audible sound is supposed to be perceived to originate from, and, on the other hand, the location from which the audible sound will be perceived, by a user, to be coming from, is at most, or less than, one of: 100 cm, 90 cm, 80 cm, 70 cm, 60 cm, 50 cm, 40 cm, 30 cm, 20 cm or 10 cm.
  • the audible sound is part of audible content forming part of infotainment content, the infotainment content further comprising visual content associated with the audible content, in particular synchronised with the audible content, the visual content being (intended to be) displayed on a display device.
  • infotainment content is preferably intended to be understood to mean any audio-visual (media) content, in particular content for work, information, entertainment or social purposes. This encompasses in particular news, video clips, video calls etc.
  • “associated” in the expression “visual content associated with the audible content” is intended to be understood to mean that the visual and audible content together form audio-visual content (in a coherent manner).
  • audible content associated with visual content may be the soundtrack of a film or video clip, or speech of a video call etc.
  • the display device may, for example, be a central information display (CID), a head-up display or a rear seat entertainment device (RSE).
  • the audible content comprises a first audible portion and a second audible portion, wherein the first audible portion comprises the audible sound
  • the infotainment content of which the audible content forms part may be a film, for example a scene in a busy restaurant, in which many voices can be heard as ambient noise.
  • the film may focus on a particular person speaking.
  • the present embodiment envisages that the sounds uttered by the particular person speaking (first audible portion) should be treated differently from the ambient noise (second audible portion).
  • the present embodiment envisages that the first audible portion should be perceived by a user as coming from a particular direction/location, in particular from a direction (with respect to a user) corresponding to the location of the particular person on the display device, whereas the ambient noise (second audible portion) should be perceived by a user as coming from a variety of directions - as is typical for ambient noise - in particular in an omnidirectional manner (i.e. with substantially equal intensity from all directions).
  • when outputting the first audible portion, the orientation and/or location of the acoustic output device is taken into account;
  • when outputting the second audible portion, the orientation and/or location of the acoustic output device is not taken into account (or the second audible portion is not output as a function of the orientation/location of the acoustic output device with respect to the road-based vehicle).
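A minimal sketch of such a two-portion mix (assuming NumPy, a four-speaker headset and illustrative gain values; the first portion is panned with orientation-dependent gains, the second is fed to all speakers at the same level, independently of the headset orientation):

```python
import numpy as np


def render_two_portions(dialogue: np.ndarray, ambience: np.ndarray, dialogue_gains: dict,
                        speakers=("front_left", "front_right", "rear_left", "rear_right"),
                        ambience_level: float = 0.3) -> dict:
    """Mix a two-portion soundtrack for a multi-speaker headset.

    The first audible portion (dialogue) gets head-tracking-dependent per-speaker
    gains; the second portion (ambience) is reproduced omnidirectionally.
    """
    return {name: dialogue_gains.get(name, 0.0) * dialogue + ambience_level * ambience
            for name in speakers}


# One second of dummy mono audio at 48 kHz, with gains favouring the front-left speaker.
sr = 48_000
dialogue = np.random.randn(sr).astype(np.float32)
ambience = np.random.randn(sr).astype(np.float32)
mix = render_two_portions(dialogue, ambience, {"front_left": 0.9, "front_right": 0.4})
print({name: channel.shape for name, channel in mix.items()})
```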
  • the method further comprises detecting sounds generated by an occupant of the road-based vehicle other than a user of the acoustic output device; and causing the detected sounds or processed versions thereof to be output as said audible sound, in particular in such a way that a user of the acoustic output device perceives the audible sound as coming from a direction and/or location relative to said user which corresponds to a direction and/or location of said occupant relative to said user, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the system is arranged to detect sounds generated by an occupant of the road-based vehicle other than a user of the acoustic output device. These sounds may be detected by a detector forming part of the system. Alternatively, information regarding the sounds may be obtained via an interface of the system, in particular an interface of the processing unit.
  • the control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output the detected sounds or processed versions thereof as said audible sound, in particular in such a way that a user of the acoustic output device perceives the audible sound as coming from a direction and/or location relative to said user which corresponds to a direction and/or location of said occupant relative to said user, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the acoustic output device used by the user has a noise cancellation function.
  • spoken words uttered by an occupant other than the user would then normally (i.e. without the method or system of the present invention) not be heard by the user, or at least would be difficult to hear.
  • the present embodiments can ensure firstly that the user can hear the other occupant (via the acoustic output device) and secondly that the user can hear the words uttered by the other occupant as coming from a direction and/or location relative to the user which corresponds to a direction and/or location of the occupant relative to the user.
  • Such a function might be useful for safety reasons or might even be required for safety reasons.
  • Such a “transparent” mode might be active permanently, or might, in a further development, be activated through a particular action/event, for example if the other occupant utters a particular sound, for example a specific word or combination of words, such as “Hi, <name of the user>” (e.g. “Hi, John” if the user of the headphones is called “John”), or “Hi, this is <name of other occupant> speaking” (e.g. “Hi, this is Michael speaking” if the other occupant is called “Michael”), or some other “wake-up” expression.
  • a particular event could be the detection of a siren of an emergency vehicle.
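The triggering logic just described can be sketched as follows (the event dictionary, its field names and the wake phrases are illustrative assumptions; the keyword spotting or siren classification itself is assumed to happen upstream and is not shown):

```python
def should_enable_transparency(event: dict, user_name: str) -> bool:
    """Decide whether detected cabin audio should be passed through to the headset."""
    if event.get("type") == "siren":
        # A detected emergency-vehicle siren always activates the transparent mode.
        return True
    if event.get("type") == "speech":
        text = event.get("text", "").lower()
        # Simplified wake-up check: the utterance starts with one of the expected phrases.
        wake_phrases = (f"hi, {user_name.lower()}", "hi, this is")
        return any(text.startswith(phrase) for phrase in wake_phrases)
    return False


print(should_enable_transparency({"type": "speech", "text": "Hi, John, can you hear me?"}, "John"))  # True
print(should_enable_transparency({"type": "siren"}, "John"))                                         # True
print(should_enable_transparency({"type": "speech", "text": "nice weather today"}, "John"))          # False
```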
  • the method further comprises detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user.
  • the system further comprises a detector for detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user.
  • the system may comprise an interface for receiving information regarding a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to be able to derive therefrom the direction and/or location of said occupant relative to said user.
  • detecting the direction and/or location of said occupant with respect to the vehicle or the specific point thereof comprises detecting the direction and/or location of said occupant with respect to the vehicle or the specific point thereof using one or more microphones and/or cameras, in particular one or more microphones and/or cameras installed in the vehicle.
  • the detector for detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof comprises one or more microphones and/or cameras, in particular one or more microphones and/or cameras installed in the vehicle.
  • the one or more microphones and/or cameras can detect which occupant or occupants are currently e.g. speaking, or where any sounds generated within the vehicle are coming from.
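As a crude illustration of locating the talker from the cabin microphones (a real system would more likely use time-difference-of-arrival or beamforming; the microphone-to-seat mapping and the sample values are illustrative assumptions):

```python
import math


def rms(samples) -> float:
    """Root-mean-square level of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / max(len(samples), 1))


def loudest_seat(mic_frames: dict, mic_to_seat: dict) -> str:
    """Very crude talker localisation: the seat assigned to the loudest microphone."""
    loudest_mic = max(mic_frames, key=lambda name: rms(mic_frames[name]))
    return mic_to_seat[loudest_mic]


frames = {"mic_rear_left": [0.4, -0.5, 0.6], "mic_front_left": [0.01, 0.02, -0.01]}
print(loudest_seat(frames, {"mic_rear_left": "seat 13", "mic_front_left": "seat 11"}))  # seat 13
```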
  • the audible sound is an audible sound arranged to indicate a condition of the vehicle, for example that the vehicle is operational and/or that the vehicle is moving.
  • the audible sound may therefore be a (subtle) sound that is (artificially) generated or added to give a person or user in the vehicle an indication of the condition of the vehicle (operational/moving etc.).
  • an audible sound arranged to indicate that the vehicle is operational may be generated for example after a user of the vehicle has put the vehicle into an "activated state", e.g. by inserting a key, key card or similar into an appropriate receptacle of the vehicle or has pressed a start button or similar and while the vehicle remains stationary.
  • An audible sound of the type described above may however also be generated and output in connection with a vehicle which is equipped with an internal combustion engine.
  • the audible sound is an audible sound arranged to resemble the sound of an internal combustion engine
  • the audible sound may be arranged to be generated in such a way that it is perceived as coming from a central location towards the front of the vehicle, where, in a typical vehicle with a combustion engine, the engine would be located.
  • the acoustic output device comprises a portable device, in particular a personal device or a proprietary device of the vehicle, in particular headphones or a headset, in particular with a surround sound function and/or a noise cancellation function.
  • the acoustic output device may be removable (from the vehicle).
  • the vehicle may include a holder or cradle or similar for carrying the portable device.
  • a cradle may, for example, be provided on the back of a headrest (of a front seat) for use by a person sitting on a seat behind that headrest.
  • other locations are also possible.
  • When the portable device is placed in the holder or cradle, it is substantially fixed with respect to the vehicle. However, it can be removed from the holder or cradle (and may or may not still be attached to the vehicle via a cable or similar).
  • the vehicle may have proprietary connectors which (typically) only fit a corresponding connector of the holder or cradle, or the communication protocols of the holder or cradle (of the vehicle) need to match those of the portable device, so that a generic device (such as a personal headset or personal headphones) might not be able to communicate with the vehicle via the holder or cradle.
  • the invention encompasses embodiments in which no holder or cradle is provided in connection with a proprietary device.
  • the device may be proprietary to the vehicle in the sense that the communication protocols of the portable device need to match those of the vehicle in order to be able to communicate with the vehicle, whereas a generic device (such as a personal headset or personal headphones) might not be able to communicate with the vehicle.
  • the acoustic output device may be connected to the vehicle in a wired or wireless manner, for example via Bluetooth ® or similar.
  • a “wired” connection encompasses not only connections established by, or using, a (metallic) wire, but also optical connections, and preferably also any other type of physical connection allowing for the transmission of information.
  • Fig. 1 schematically shows a plan view of a vehicle 15 according to an embodiment of the present invention.
  • the vehicle 15 is shown, by way of example, as a left-hand drive car, with two front seats 11, 12, two rear seats 13, 14 and a steering wheel (not labelled).
  • the front of the vehicle is at the top of Fig. 1 .
  • Vehicle 15 is equipped with a processing unit 1.
  • Processing unit 1 may comprise, or form part of, an onboard computer of vehicle 15.
  • Processing unit 1 is connected to one or more detectors 2, such as microphones 2, for detecting sounds within vehicle 15.
  • Microphones 2 are distributed at locations in the vehicle 15 and may be substantially permanently installed in vehicle 15. In the example shown in Fig. 1 , there are ten such microphones 2. Only one (wired) connection between processing unit 1 and one of the microphones 2 (right-hand side, towards the front of the vehicle 15) is shown. The other microphones 2 may be connected in like manner. The connection(s) may be wired or wireless. More, or fewer, than ten microphones 2 may be provided. Microphones 2 may all be positioned at the same height within the vehicle 15, or at different heights, in which case it may be possible to determine (more accurately) the location where sounds which are detected by microphones 2 originate from, in particular in a three-dimensional space.
  • processing unit 1 is connected to one or more cameras 16, for capturing (part of) the interior of vehicle 15, in particular the seats 11-14 and any occupants thereof.
  • Cameras 16 are distributed at locations in the vehicle 15 and may be substantially permanently installed in vehicle 15. In the example shown in Fig. 1 , there are two such cameras 16. Only one (wired) connection between processing unit 1 and one of the cameras 16 (towards the front of the vehicle 15) is shown. The other camera(s) 16 may be connected in like manner. The connection(s) may be wired or wireless. More, or fewer, than two cameras 16 may be provided. Cameras 16 may for example be mounted to the ceiling of vehicle 15. Using cameras 16, the location of a person speaking within vehicle 15 may be able to be determined, for example by processing unit 1 analysing any mouth movements as detected by cameras 16.
  • Processing unit 1 is also connected to one or more acoustic output devices.
  • Fig. 1 illustrates three such acoustic output devices 3, 4, 5. These will now be explained.
  • acoustic output devices 3, 4, 5 are movable (or portable), at least within a certain range.
  • Acoustic output device 3 is a personal portable device, such as headphones or a headset, which may be connected to vehicle 15, in particular to processing unit 1, in a wired or wireless manner, for example via Bluetooth ® or similar. Acoustic output device 3 may have a surround sound function.
  • Acoustic output device 4 is proprietary to vehicle 15. It is held by, placed upon, or (removably) attached to a cradle or holder 6, which may be (substantially permanently) installed in vehicle 15, for example attached to the back of seat 11 or the headrest of seat 11, and may form part of a rear seat entertainment (RSE) system.
  • Cradle 6 is connected to processing unit 1 by a wired connection.
  • Acoustic output device 4 is also connected to cradle 6 by a wired connection.
  • Acoustic output device 4 may comprise headphones or a headset and may have a surround sound function.
  • the location of cradle 6 with respect to the vehicle 15 (and therefore potentially also an at least approximate location of acoustic output device 4) may be known to processing unit 1. A possible use of this information will become clear later.
  • Acoustic output device 5 and cradle/holder 7 may substantially correspond to acoustic output device 4 and cradle/holder 6, except that these are not connected by a wired connection to processing unit 1, i.e. there is neither a wired connection between processing unit 1 and cradle 7, nor between cradle 7 and acoustic output device 5.
  • Acoustic output device 5 may communicate with cradle 7, or directly with processing unit 1, in a wireless manner.
  • Cradle 7 may also communicate with processing unit 1 in a wireless manner, or may not communicate with processing unit 1 at all and may simply be a holder for (mechanically) accommodating portable device 5 and/or serve as a charging station for acoustic output device 5.
  • acoustic output devices 4, 5 and cradles/holders 6, 7 are also possible, for example such that the acoustic output device 4, 5 is connected to a respective cradle 6, 7 in a wireless manner whilst the respective cradle 6, 7 is connected to processing unit 1 by a wired connection.
  • portable device 4, 5 may be connected to the respective cradle 6, 7 by a wired connection whilst the respective cradle 6, 7 is connected to processing unit 1 in a wireless manner.
  • cradles 6 and/or 7 may be omitted.
  • Processing unit 1 may have one or more wireless interfaces 8 for communicating with any one, some or all of the microphones 2, acoustic output devices 3, 4, 5 and/or cameras 16. Signals may be sent between processing unit 1 and microphones 2, acoustic output devices 3, 4, 5 and/or cameras 16 via their respective connections, if applicable via cradles 6, 7.
  • processing unit 1 may comprise a wired interface, for example an electrical connector for a cable-based connection with any, some or all of the other devices mentioned above. If these devices are hard-wired to processing unit 1, then the interface may be regarded as a point along the connection between these devices and processing unit 1.
  • a wired connection can also be an indirect wired connection, for example such that cradles 6, 7 are connected to processing unit 1 via a first wired connection (hard-wired or using a removable cable with one or more connectors), and acoustic output devices 4, 5 are connected to their respective cradle 6, 7 via a second wired connection (hard-wired or using a removable cable with one or more connectors, for example plugged into an AUX socket of the respective cradle).
  • it is also possible that not all of the devices are present or connected, i.e. the system of certain embodiments of the invention may comprise only some of the devices illustrated in Fig. 1.
  • the system comprises only the processing unit 1 and one acoustic output device such as one of the acoustic output devices 3-5, or only the processing unit 1 (for controlling one or more acoustic output devices).
  • Fig. 2 schematically shows a top view of the head 22 of a person wearing headphones (or a headset), in a first orientation
  • Fig. 3 schematically shows a top view of the head 22 of the person from Fig. 2 , in a second orientation.
  • the reference number 22 is respectively placed towards the direction in which head 22 is facing. In other words, in the orientation of Fig. 2 , the head 22 is turned towards the top of the figure, whereas in Fig. 3 , the head 22 is turned towards the left.
  • Figs. 2 and 3 schematically show headphones worn by the person 22, whereby reference number 20 indicates a right-hand portion of the headphones and reference number 21 indicates a left-hand portion of the headphones.
  • the headphones 20, 21 are shown as examples of the acoustic output devices 3, 4, 5 of Fig. 1 .
  • the headphones 20, 21 are headphones with surround sound function.
  • the headphones are further equipped with one or more (built-in) devices 23 for determining the orientation/position of the headphones, in particular with respect to a local or global reference.
  • Such devices 23 can, for example, comprise a gyroscope and/or an accelerometer, in particular a 3-axis accelerometer, for example one gyroscope/accelerometer per headset or, as shown in Fig. 3 , one gyroscope/accelerometer 23 each for the right-hand portion 20 and the left-hand portion 21 of the headphones.
  • Such devices 23 can, for example, (also) determine the orientation/position of the headphones with respect to a satellite-based signal, such as a GPS signal.
  • Two speakers 24 (front) and 27 (rear) are integrated into right-hand portion 20.
  • two speakers 25 (front) and 26 (rear) are integrated into left-hand portion 21.
  • More than two speakers may be provided in each of the right-hand portion 20 and the left-hand portion 21. It may also be possible to provide only one speaker for each of these portions, whereby this single speaker should be such that, depending on how it is activated, a user of the headphones 20, 21 will perceive audible sounds generated and output by this single speaker to come from different directions - so that the right-hand and left-hand portions 20, 21 together can provide a surround sound experience.
  • arrow 17 indicates the (forward) direction of travel of vehicle 15
  • arrow 18 indicates the direction in which the user 22 is facing.
  • Arrow 18 therefore also indicates the orientation of the acoustic output device (headphones 20, 21).
  • the user 22 is supposed to perceive an audible sound as coming approximately from a central location at the front of vehicle 15, where, at least in most conventional vehicles with an internal combustion engine, the engine would typically be located. This location is indicated as a square 10 in Figs. 1 , 2 and 3 .
  • the audible sound may be intended to resemble the sound of a typical internal combustion engine. Given the (perceived) location from which the audible sound is intended to originate from, the user 22 will be likely to be provided with a particularly realistic impression of a sound coming from an internal combustion engine - even if vehicle 15 is an electric vehicle, for example. In some cases, this may reduce the effects of motion sickness.
  • the location 10 may be considered to correspond to a direction of 11 o'clock (12 o'clock being straight ahead).
  • the system determines, using gyroscopes and/or accelerometers 23 or similar, the orientation and/or position of the headphones 20, 21 in relation to vehicle 15.
  • the system causes the headphones 20, 21 to output the audible sound in such a way that the person will perceive the audible sound as coming (approximately) from the central location 10 at the front of vehicle 15, i.e. the direction 11 o'clock with respect to the head 22 of the person.
  • the sound output of the speakers is indicated by circles, whereby a black filled-in circle indicates an activated speaker (i.e. outputting sound), and a white circle with a black outline indicates a speaker that is not activated (i.e. not outputting sound).
  • the speakers 24, 25 and 26 are activated, whereas speaker 27 is not activated.
  • the size of the circles of the activated speakers indicates the sound volume. As shown in Fig. 2 , the sound volume of speaker 25 is greatest, whereas the sound volume of speaker 26 is smallest.
  • the activation of the speakers as shown in Fig. 2 can give the user 22 the impression that the sound is coming from the 11 o'clock direction, as also indicated by arrow 19.
  • the person has turned their head 22 by 90° towards the left.
  • the central location 10 at the front of vehicle 15 now corresponds to 2 o'clock with respect to the head 22 of the person.
  • the system can cause the headphones 20, 21 to output the audible sound in such a way that the person will perceive the audible sound as coming (approximately) from the central location 10 at the front of vehicle 15, i.e. the direction 2 o'clock with respect to the head 22 of the person. Accordingly, in Fig. 3, the speakers 24, 25 and 27 are activated, whereas speaker 26 is not activated, whereby the sound volume of speaker 24 is now greatest and the sound volume of speaker 25 is smallest.
  • the activation of the speakers as shown in Fig. 3 can give the user 22 the impression that the sound is now coming from the 2 o'clock direction, as also indicated by arrow 19.
  • the user 22 can perceive the sound as coming from the (consistent) direction indicated by arrow 19, which, in both cases, has the same orientation with respect to the vehicle 15.
  • the system can ensure that the sound is perceived as coming from a consistent direction with respect to vehicle 15, irrespective of the orientation of the acoustic output device.
  • the audible sound is perceived by the person not only as coming from a consistent direction with respect to vehicle 15 but also as coming from an appropriate/consistent distance.
  • Consider, for example, the audible sound imitating an engine noise coming from a central location 10 at the front of vehicle 15.
  • a first user is sitting on (front) seat 12, and a second user is sitting on (rear) seat 14, behind seat 12.
  • An embodiment of the invention envisages that the first and second users not only perceive the audible sound as coming from slightly different directions, but also that the second user perceives the audible sound as coming from a location at a greater distance than is the case for the first user.
  • processing unit 1 can take this information into account to generate appropriate control signals respectively for the acoustic output devices used by the first and second users to ensure that the first user will perceive the audible sound as coming from a closer distance than the second user. It is envisaged that in most cases this will mean (inter alia) that the audible sound which the first user will perceive is at a greater volume than the audible sound that the second user will perceive.
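A minimal sketch of such distance-dependent level control (a simple inverse-distance law with an assumed reference distance of 1 m; the distances themselves are illustrative):

```python
def distance_gain(distance_m: float, reference_m: float = 1.0) -> float:
    """Inverse-distance attenuation relative to a reference distance.

    Listeners further away from the virtual source location receive a lower
    level; the gain is clamped so that very close listeners are not boosted.
    """
    return reference_m / max(distance_m, reference_m)


# First user ~1.2 m from the virtual engine location, second user ~2.4 m away.
print(round(distance_gain(1.2), 2))  # 0.83 -> louder for the first user
print(round(distance_gain(2.4), 2))  # 0.42 -> quieter for the second user
```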
  • Ensuring that the "correct” portion of headphones 20, 21, in particular the “correct” (subset of) speakers, is/are activated and at an appropriate volume so as to enable the person to perceive the audible sound as coming from a consistent direction/location can be implemented in at least two ways, which will be explained in the following.
  • the gyroscopes and/or accelerometers 23 or similar provide information regarding the orientation (and potentially the position) of the headphones 20, 21 to the processing unit 1. This may be supplemented with information regarding the location of the headphones with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15.
  • If a portable device such as headphones 20, 21 is connected to processing unit 1 or a cradle 6, 7 or some other reference point of vehicle 15 via Bluetooth ® or similar, signal parameters such as the angle of arrival, potentially detected at multiple, spaced apart locations, can be used to determine the relative location of the headphones 20, 21.
  • the processing unit 1 then generates a control signal taking this information into account, i.e. the control signal then includes information as to which portion/speakers of headphones 20, 21 to activate and at what volume (based on the distance between headphones 20, 21 and the consistent location 10).
  • the headphones 20, 21 can then use the received control signal to output the audible sound via the respective portion/speakers of headphones 20, 21.
  • the headphones 20, 21 then do not need to calculate themselves which portion/speakers of headphones 20, 21 to activate and at what volume (taking into account the direction from which the audible sound is supposed to be perceived as coming from, as well as the orientation/position of the headphones 20, 21, and in particular the distance between headphones 20, 21 and the consistent location 10) - since the control signal received from processing unit 1 already contains this information.
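The angle-of-arrival based location estimate mentioned in this first variant can be illustrated geometrically (two anchors with known positions in a flat, noise-free 2-D vehicle frame; the anchor positions and angle convention are illustrative assumptions):

```python
import math


def intersect_bearings(p1, theta1_deg, p2, theta2_deg):
    """Estimate a 2-D position from two angle-of-arrival measurements.

    p1 and p2 are known anchor positions (e.g. two antennas in the vehicle frame);
    theta1 and theta2 are the measured bearings towards the headphones, in degrees
    relative to the vehicle's forward (x) axis, positive towards the left (y).
    Returns None if the two bearings are (nearly) parallel.
    """
    d1 = (math.cos(math.radians(theta1_deg)), math.sin(math.radians(theta1_deg)))
    d2 = (math.cos(math.radians(theta2_deg)), math.sin(math.radians(theta2_deg)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule on a 2x2 system).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])


# Anchor at (0, 0.5) sees the headphones at -90 deg, anchor at (1, 0) sees them at 180 deg:
print(intersect_bearings((0.0, 0.5), -90.0, (1.0, 0.0), 180.0))  # approximately (0.0, 0.0)
```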
  • the gyroscopes and/or accelerometers 23 or similar again provide information regarding the orientation (and potentially the position) of the headphones 20, 21.
  • this information is not transmitted to the processing unit 1 so that the processing unit then generates the control signal without being aware of the orientation/position of headphones 20, 21.
  • the control signal will therefore also not include information as to which portion/speakers of headphones 20, 21 to activate and at what volume but instead informs the headphones 20, 21 of the position (with respect to vehicle 15) from which the audible sound is supposed to be perceived as coming from.
  • the headphones 20, 21 can then use the received control signal and the information provided by its own gyroscope(s) and/or accelerometer(s) 23 or similar to calculate which portion/speakers of headphones 20, 21 to activate and at what volume so that the person will perceive the audible sound as coming from the "correct"/consistent direction with respect to vehicle 15 and "correct"/consistent distance.
  • information regarding the location of headphones 20, 21 with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15 can be determined (e.g. using signal parameters such as the angle of arrival, potentially detected at multiple, spaced apart locations, see above) and then used in order to control the headphones 20, 21.
  • If this location information is determined by the headphones 20, 21 themselves, there is no need to transmit this location information to the processing unit 1. If, on the other hand, this location information is determined by the processing unit 1, this location information should be taken into account by the processing unit 1 when generating the control signal.
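The difference between the two variants is essentially what the control signal carries. A minimal sketch (the JSON encoding and the field names are illustrative assumptions, not part of the patent):

```python
import json


def control_signal_variant1(per_speaker_gains: dict) -> str:
    """Variant 1: the processing unit knows the headphone orientation/location and
    sends ready-made per-speaker gains; the headphones simply apply them."""
    return json.dumps({"type": "speaker_gains", "gains": per_speaker_gains})


def control_signal_variant2(target_position_vehicle_m) -> str:
    """Variant 2: the processing unit only names the vehicle-fixed position the sound
    should appear to come from; the headphones combine this with their own sensor
    data to work out which speakers to drive and at what volume."""
    return json.dumps({"type": "virtual_source", "position_m": list(target_position_vehicle_m)})


print(control_signal_variant1({"front_left": 0.1, "front_right": 0.6, "rear_right": 0.2, "rear_left": 0.0}))
print(control_signal_variant2((2.0, 0.3)))
```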
  • the orientation/position of the headphones 20, 21 is detected by one or more cameras or other sensors, such as cameras 16 shown in Fig. 1 , and the corresponding information is then provided to processing unit 1 and/or the headphones 20, 21.
  • Cameras 16 may be substantially permanently installed in vehicle 15. The position of cameras 16 in Fig. 1 is indicative only; other locations are possible.
  • information regarding the location of the headphones with respect to a reference is initially determined on the basis of any of the techniques described above.
  • Information from the gyroscope(s) and/or accelerometer(s) 23 of the headphones 20, 21 is then used to update this location information.
  • Fig. 4 again illustrates an acoustic output device (headphones 20, 21) worn by a user 22, whereby the functions of the acoustic output device may correspond to those explained with reference to Figs. 2 and 3 and will therefore not be explained again.
  • Fig. 4 also shows a display device 9, which can for example be a central information display (CID) or rear seat entertainment (RSE) device of vehicle 15.
  • the display device 9 is located in front of user 22.
  • a person/actor 28 located towards the left-hand side of display device 9 is speaking. Accordingly, an audible sound to be output via headphones 20, 21 should be perceived by user 22 as coming from a forward direction and slightly to the left.
  • this is indicated by activated (front) speakers 24 and 25, whereby the volume to be output by (left-hand) speaker 25 is greater than that to be output by (right-hand) speaker 24.
  • Speakers 26 and 27 towards the rear of headphones 20, 21 are not activated.
  • the activation of speakers 24-27 can be regarded as being driven by the soundtrack of the film associated with the visual content to be displayed on display device 9.
  • the soundtrack of the film may have two portions, a first audible portion and a second audible portion (or a first audio portion and a second audio portion).
  • the sound associated with particular persons/actors (such as person/actor 28) or objects etc. might be represented by the first audible portion, whereas other sounds as part of the media content might be represented by the second audible portion.
  • the film shown on display device 9 may include a scene in a busy restaurant, in which (main) actor 28 is speaking, while ambient noise, for example conversations by other restaurant guests 29 (or extras in the film) should also be heard.
  • the activation of speakers 24-27 associated with person 28 speaking has already been explained above with reference to Fig. 4a ).
  • the activation of speakers 24-27 associated with the ambient noise from extras 29 is illustrated, by way of example, in Fig. 4b ).
  • all speakers 24 to 27 are activated at the same sound volume, albeit at a smaller sound volume than speaker 25 in Fig. 4a ), to provide a realistic impression of ambient noise.
  • a user 22 is again travelling in vehicle 15, for example sitting on (front right-hand) passenger seat 12. Another occupant of vehicle 15 is sitting, by way of example, on (rear left-hand) seat 13.
  • User 22 may be wearing headphones (such as those illustrated in, and explained with reference to, Figs. 2 and 3 ), in particular headphones with a noise cancellation function.
  • User 22 may therefore not normally be able to hear speech uttered by the other occupant.
  • this embodiment envisages that the system detects the speech uttered by the other occupant, using one or more microphones 2.
  • the system detects the (approximate) location of the other occupant (unless this is already known).
  • the information regarding the location of the other occupant, as well as the orientation and/or location of headphones 20, 21 of user 22, are then used by the system, in particular the processing unit 1, to generate a control signal to ensure that headphones 20, 21 output audible sound, representing the speech uttered by the other occupant, which can be perceived by user 22 as coming from the direction/location of the other occupant with respect to the user 22 (or their headphones 20, 21), in this example from a direction of somewhere between 7 o'clock and 8 o'clock, in particular from a direction corresponding to half past 7 (in relation to the forward direction of vehicle 15), in particular irrespective of the orientation of headphones 20, 21.
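For this example, the clock-face direction of the other occupant as seen from the user can be sketched as follows (the seat coordinates and the 2-D vehicle frame, x forward and y to the left, are illustrative assumptions):

```python
import math


def occupant_direction_oclock(user_xy, occupant_xy) -> int:
    """Clock-face direction of another occupant as seen from the user's seat,
    relative to the forward direction of the vehicle (12 = straight ahead)."""
    dx, dy = occupant_xy[0] - user_xy[0], occupant_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(-dy, dx))  # positive = to the right
    hour = round(bearing / 30.0) % 12
    return 12 if hour == 0 else hour


front_right_passenger = (0.8, -0.4)  # roughly seat 12
rear_left_occupant = (-0.6, 0.4)     # roughly seat 13
print(occupant_direction_oclock(front_right_passenger, rear_left_occupant))  # 7, i.e. behind and to the left
```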
  • Fig. 5 shows a flow chart illustrating a method according to an embodiment of the present invention.
  • an orientation of an acoustic output device (such as any of acoustic output devices 3-5) with respect to the road-based vehicle 15 is determined (31).
  • An acoustic output function of the acoustic output device is then controlled (32) as a function of the orientation of the acoustic output device 3-5 with respect to the road-based vehicle 15.
  • the processing unit 1 may send a corresponding control signal or control signals to the acoustic output device(s) 3-5.
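The two steps of Fig. 5 can be sketched as a simple control loop (the sensor access and the rendering are stubbed out; the names and the update period are illustrative assumptions):

```python
import time


def run_control_loop(get_orientation, render, period_s: float = 0.05, steps: int = 3) -> None:
    """Repeatedly determine the orientation of the acoustic output device with respect
    to the vehicle (step 31) and derive a control signal for its output (step 32)."""
    for _ in range(steps):
        yaw_deg = get_orientation()       # step 31: e.g. read the IMU / head tracker
        control_signal = render(yaw_deg)  # step 32: e.g. compute per-speaker gains
        print(control_signal)
        time.sleep(period_s)


# Dummy sensor and renderer, for illustration only.
run_control_loop(lambda: -90.0, lambda yaw: f"render virtual source for head yaw {yaw:.0f} deg")
```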
EP22185123.1A 2022-07-15 2022-07-15 Road vehicle and method and system for controlling an acoustic output device in a road vehicle Pending EP4307722A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22185123.1A EP4307722A1 (fr) 2022-07-15 2022-07-15 Road vehicle and method and system for controlling an acoustic output device in a road vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22185123.1A EP4307722A1 (fr) 2022-07-15 2022-07-15 Road vehicle and method and system for controlling an acoustic output device in a road vehicle

Publications (1)

Publication Number Publication Date
EP4307722A1 (fr) 2024-01-17

Family

ID=83049969

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22185123.1A Pending EP4307722A1 (fr) 2022-07-15 2022-07-15 Véhicule routier et procédé et système de commande d'un dispositif de sortie acoustique dans un véhicule routier

Country Status (1)

Country Link
EP (1) EP4307722A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999039546A1 (fr) 1998-02-02 1999-08-05 Christopher Glenn Wass Systeme de sortie haut-parleur pour casque a son surround virtuel
US20080205662A1 (en) * 2007-02-23 2008-08-28 John Lloyd Matejczyk Vehicle sound (s) enhancing accessory and method
US20130191068A1 (en) * 2012-01-25 2013-07-25 Harman Becker Automotive Systems Gmbh Head tracking system
EP3985482A1 (fr) * 2020-10-13 2022-04-20 Koninklijke Philips N.V. Appareil de rendu audiovisuel et son procédé de fonctionnement
US20220210556A1 (en) * 2020-12-31 2022-06-30 Hyundai Motor Company Driver's vehicle sound perception method during autonomous traveling and autonomous vehicle thereof


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR