EP4307722A1 - Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle - Google Patents

Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle

Info

Publication number
EP4307722A1
Authority
EP
European Patent Office
Prior art keywords
acoustic output
output device
vehicle
location
road
Prior art date
Legal status
Pending
Application number
EP22185123.1A
Other languages
German (de)
French (fr)
Inventor
Tarek Zaki
Current Assignee
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG
Priority to EP22185123.1A
Publication of EP4307722A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H04S7/304 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 Reduction of ambient noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00 Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/022 Plurality of transducers corresponding to a plurality of sound channels in each earpiece of headphones or in a single enclosure
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/13 Acoustic transducers and sound field adaptation in vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • the present invention relates to a road-based vehicle and to a method and system for controlling an acoustic output device in a road-based vehicle.
  • the invention further relates to a computer program for carrying out the method.
  • the present invention can, in principle, be used in connection with any type of road-based vehicle, in particular a motorized vehicle, such as a (passenger) car, lorry / truck, bus or coach.
  • Given that the invention finds particular application in (passenger) cars, the invention will be described primarily in connection with cars, without being limited to this.
  • one skilled in the art, guided by the present disclosure, will have no difficulty implementing the invention in the context of road-based vehicles other than cars.
  • Some acoustic output devices available today have a surround sound function, which can provide a user with a particularly rich listening experience.
  • Such surround sound systems not only exist in a public cinema or home cinema setting; some portable devices such as certain headphones or headsets also have a surround sound function.
  • the present invention provides a method of controlling an acoustic output device in a road-based vehicle comprising: determining an orientation of the acoustic output device with respect to the road-based vehicle; and controlling an acoustic output function of the acoustic output device as a function of the orientation of the acoustic output device with respect to the road-based vehicle.
  • the orientation of the acoustic output device is taken into account so that, when the acoustic output device is in a first orientation with respect to the road-based vehicle, the acoustic output perceived by a user of the acoustic output device will, as a rule, be different when compared with a situation where the acoustic output device is in a second orientation with respect to the road-based vehicle, the second orientation being different from the first orientation.
  • the headphones might be controlled in such a way that, in the first orientation, a first subset of the speakers is activated (to output an audible sound), and in the second orientation, a second subset of the speakers different from the first subset of the speakers is activated.
  • orientation information does not necessarily refer to a value or set of values which (directly) provides a correct measurement of the orientation (of the acoustic output device, in particular with respect to the road-based vehicle) as expressed in (correct) physical units (e.g. polar coordinates or similar).
  • orientation and “orientation information” can, in principle, mean any information which (at least approximately) characterises the orientation (of the acoustic output device, in particular with respect to the road-based vehicle), in particular (at least approximately) uniquely characterises the orientation (of the acoustic output device, in particular with respect to the road-based vehicle), in particular a value or set of values from which physically correct values of the orientation (of the acoustic output device, in particular with respect to the road-based vehicle) can (at least approximately) be obtained, in particular without having to resort to other information or measurements.
  • the orientation or orientation information would be such that a computing device can process it.
  • the term "surround sound” may refer to a two-dimensional surround sound (e.g. a plurality of speakers distributed substantially in a single plane) or to a three-dimensional surround sound (e.g. a plurality of speakers, not all distributed in a single plane).
  • In a second aspect, the present invention provides a system for controlling an acoustic output device in a road-based vehicle, the system comprising: a processing unit configured to receive orientation information regarding an orientation of the acoustic output device with respect to the road-based vehicle, wherein the processing unit is further configured to generate and output a control signal for causing the acoustic output device to output audible sound, wherein the processing unit is configured to generate and output the control signal as a function of the orientation information.
  • a "processing unit” is preferably intended to be understood to mean any electrical component or device which can receive orientation information and generate a control signal based thereon which can be used by the acoustic output device.
  • the processing unit can either (substantially) transparently pass the orientation information to the acoustic output device if the acoustic output device can be controlled directly by the orientation information, or it may process the orientation information and generate a control signal which is (substantially) different from the orientation information.
  • the processing unit may in particular comprise a microprocessor.
  • the processing unit of the second aspect of the invention may, for example, comprise an onboard computer of the vehicle, or form part thereof - and may accordingly form part of the vehicle. The processing unit may however also form part of the acoustic output device.
  • system further comprises a detector configured to detect said orientation of the acoustic output device with respect to the road-based vehicle in order to provide the orientation information, or the processing unit comprises an interface for receiving the orientation information.
  • Suitable detectors configured to detect the orientation of the acoustic output device with respect to the road-based vehicle are known in the art and include, for example, accelerometers, gyroscope sensors and magnetometer sensors, or combinations of these, in particular integrated into the acoustic output device.
  • Other types of sensors may detect the orientation with respect to a local or global reference, for example using a satellite-based positioning system such as GPS or the like.
  • the orientation of the vehicle with respect to the local or global reference might also need to be known or determined, and the relative orientation of the acoustic output device with respect to the vehicle may then be calculated or derived therefrom.
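As a minimal illustration of this relative-orientation calculation, the following sketch assumes that both the acoustic output device and the vehicle report a globally referenced yaw angle in degrees; the single-axis simplification and the function names are assumptions, not part of the disclosure.

```python
# Minimal sketch: deriving the relative yaw of the acoustic output device with
# respect to the vehicle from two globally referenced headings (e.g. compass- or
# GPS-derived yaw angles, in degrees).

def wrap_degrees(angle: float) -> float:
    """Wrap an angle to the range [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def relative_yaw(device_yaw_global: float, vehicle_yaw_global: float) -> float:
    """Yaw of the device relative to the vehicle's forward direction."""
    return wrap_degrees(device_yaw_global - vehicle_yaw_global)

# Example: vehicle heading 90° (east), headphones heading 135°
# -> the user is looking 45° to the right of the vehicle's forward direction.
print(relative_yaw(135.0, 90.0))  # 45.0
```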
  • An interface for receiving the orientation information of the acoustic output device with respect to the vehicle may be wired or wireless.
  • a wired interface may, for example, be an electrical connector for establishing a connection between the processing unit and other devices that may provide the orientation information, e.g. the acoustic output device itself. If such other devices, for example the acoustic output device, are hard-wired to the processing unit, the interface may be regarded as a point along the connection between the processing unit and such other devices.
  • the system can be built into the vehicle or may form part of the vehicle.
  • the system can also be provided on its own and, for example, be supplied to vehicle manufacturers so that the system may be built into vehicles.
  • the present invention provides a road-based vehicle comprising the system according to the second aspect, or any embodiment thereof.
  • the present invention provides a computer program product comprising a program code which is stored on a computer readable medium, for carrying out the method in accordance with the first aspect, or any of its steps or combination of steps, or any embodiments thereof.
  • the computer program may in particular be stored on a non-volatile data carrier.
  • this is a data carrier in the form of an optical data carrier or a flash memory module.
  • the computer program may be provided as a file or a group of files on one or more data processing units, in particular on a server, and can be downloaded via a data connection, for example the Internet, or a dedicated data connection, such as for example a proprietary or a local network.
  • the computer program may comprise a plurality of interacting, individual program modules.
  • the computer program may be updatable and configurable, in particular in a wireless manner, for example by a user or manufacturer.
  • the method further comprises: determining a location of the acoustic output device with respect to the road-based vehicle or a specific point thereof; and controlling the acoustic output function of the acoustic output device as a function of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof.
  • the system comprises a detector configured to detect the location of the acoustic output device with respect to the road-based vehicle or a specific point thereof in order to provide location information, or the processing unit comprises an interface for receiving the location information.
  • the location of the acoustic output device is taken into account so that, when the acoustic output device is in a first location with respect to the road-based vehicle or the specific point thereof, the acoustic output perceived by a user of the acoustic output device will, as a rule, be different when compared with a situation where the acoustic output device is in a second location with respect to the road-based vehicle or the specific point thereof, the second location being different from the first location.
  • the headphones might be controlled in such a way that, when the acoustic output device is in the first location, a first subset of the speakers is activated (to output an audible sound), and when the acoustic output device is in the second location, a second subset of the speakers different from the first subset of the speakers is activated.
  • the "specific point" of the vehicle does not necessarily need to be a point within the vehicle or be a point on a component of the vehicle but can be a point whose location remains in a fixed (spatial) relationship with respect to the vehicle as the vehicle moves.
  • controlling the acoustic output function of the acoustic output device as a function of the orientation of the acoustic output device with respect to the road-based vehicle and/or controlling the acoustic output function of the acoustic output device as a function of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof comprises causing the acoustic output device to output audible sound in such a way that a user of the acoustic output device perceives the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output audible sound in such a way that a user of the acoustic output device perceives the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the central location at or towards the front of the vehicle may be considered to correspond to a direction of 11 o'clock (12 o'clock being straight ahead).
  • the headphones are controlled in such a way, in particular by a control signal from the processing unit, that the user does indeed perceive the audible sound as coming from the central location at or towards the front of the vehicle.
  • a speaker towards the front left-hand side of the headphones might be controlled to output a greater volume than a speaker on the right-hand side of the headphones.
  • the user now turns their head towards the left (but remains in the same location, i.e. the passenger seat at the front, right).
  • the orientation of the head and of the headphones has now changed to a second orientation with respect to the vehicle, i.e. a "left" orientation.
  • the orientation (now: "left") and location (passenger seat) of the headphones with respect to the vehicle can again be determined and, given the determined orientation and/or location of the headphones with respect to the vehicle, the headphones are controlled in such a way, in particular by a control signal from the processing unit, that the user still perceives the audible sound as coming from the central location at or towards the front of the vehicle - despite the head of the user/headphones having changed their orientation.
  • a speaker on the right-hand side of the headphones and slightly towards the front (with respect to the head of the user) might be controlled to output a greater volume than a speaker on the left-hand side of the headphones.
  • the headphones would be controlled in a corresponding way (mutatis mutandis), in particular by a control signal from the processing unit, if the user (additionally) changed their location within the vehicle, for example by moving to the driver's seat (front, left).
  • the user can perceive the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
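The following sketch illustrates one possible way of achieving this: simple cosine panning over four assumed head-relative speaker positions, so that the perceived direction stays fixed in the vehicle frame while the head turns. The speaker azimuths and the panning law are illustrative assumptions, not the specific scheme of the disclosure.

```python
import math

# Hedged sketch: distribute a signal over the four headphone speakers so that the
# sound is perceived as coming from a fixed vehicle-frame direction, whatever the
# current head orientation.

# Assumed head-relative speaker azimuths in degrees (0° = nose, positive = clockwise):
SPEAKER_AZIMUTHS = {
    "front_left": -45.0,   # speaker 25
    "front_right": 45.0,   # speaker 24
    "rear_left": -135.0,   # speaker 26
    "rear_right": 135.0,   # speaker 27
}

def wrap_degrees(angle: float) -> float:
    return (angle + 180.0) % 360.0 - 180.0

def speaker_gains(target_azimuth_vehicle: float, head_yaw_vehicle: float) -> dict:
    """Cosine-panning gains for a target direction given in the vehicle frame.

    target_azimuth_vehicle: direction the sound should appear to come from,
        relative to the vehicle's forward direction (e.g. -30° for "11 o'clock").
    head_yaw_vehicle: current yaw of the headphones relative to the vehicle.
    """
    target_relative_to_head = wrap_degrees(target_azimuth_vehicle - head_yaw_vehicle)
    gains = {}
    for name, azimuth in SPEAKER_AZIMUTHS.items():
        offset = wrap_degrees(target_relative_to_head - azimuth)
        gains[name] = max(0.0, math.cos(math.radians(offset)))
    return gains

# Head facing forward: the "11 o'clock" source falls mostly on the front-left speaker.
print(speaker_gains(-30.0, 0.0))
# Head turned 90° to the left: the same source now falls mostly on the front-right speaker.
print(speaker_gains(-30.0, -90.0))
```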
  • consistent direction and/or location does not necessarily mean that the audible sound only ever comes from the same location/direction with respect to the vehicle, such as the central location towards the front of the vehicle. Instead, the direction and/or location from which the audible sound is intended to be perceived to come may (intentionally) change over time.
  • the audible sound initially is perceived to come from the front right-hand corner of the vehicle, and, over time, the location from which the audible sound is intended to be perceived to come moves to the front left-hand corner of the vehicle.
  • the headphones would be controlled accordingly, in particular by a control signal from the processing unit, and during the course of this, the orientation and/or location of the headphones with respect to the vehicle or the specific point thereof will be taken into account.
  • the audible sound mentioned above does not need to be a sound originating from, or originally generated by, the vehicle itself, or generated in order to mimic or resemble a function of the vehicle.
  • the vehicle itself, in particular when considering only its primary function of transporting persons, objects or animals, or functions associated with (or mimicking or resembling) this primary function, does not need to be the source of the audible sound.
  • the source of the audible sound may be an entertainment or information device, in particular an entertainment or information device that is integrated into the vehicle, for example a rear seat entertainment (RSE) device or a co-driver entertainment (CDE) device.
  • the acoustic output device would be controlled, in particular by a control signal from the processing unit, in such a way that a user of the acoustic output device perceives the audible sound as coming from the substantially consistent direction and/or location (e.g. the entertainment or information device) relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the audible sound is perceived to come from a particular position of an entertainment or information device, e.g. from a particular location on a screen of the entertainment or information device.
  • This position or location may also (intentionally) change over time, for example if the audible sound is supposed to be the voice of a voice assistant, whose position on the entertainment or information device might change.
  • the location from which the audible sound is intended to be perceived to come from might even switch from one (entertainment or information) device to another.
  • the acoustic output device would be controlled, in particular by a control signal from the processing unit, in such a way that a user of the acoustic output device perceives the audible sound as coming from the appropriate (substantially consistent) direction and/or location (in particular corresponding to the direction and/or specific location of the entertainment or information device), irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • mutatis mutandis if the device from which the audible sound is intended to be perceived to come from is movable with respect to the vehicle, e.g. if such a device is a smart phone, laptop or tablet computer connected to the vehicle (by a wired or wireless connection).
  • Consistent direction and/or location is preferably to be understood to refer to a direction, for example expressed in terms of one or more angles (with respect to a reference) and/or a distance (with respect to a reference).
  • Ensuring that the audible sound is perceived as coming from a substantially consistent direction and/or location may also address problems associated with motion sickness.
  • a large number of people experience motion sickness (also referred to as kinetosis, among other names) when travelling in a vehicle.
  • this problem is exacerbated if there is a disconnect between what a person perceives (in terms of hearing and/or seeing) and what that person feels (in terms of movements of the vehicle), in particular acceleration in any direction, including when the vehicle is passing through a curve. Ensuring that the audible sound is perceived as coming from a substantially consistent direction and/or location may reduce or eliminate this disconnect.
  • the processing unit provides to the acoustic output device, via the control signal, information regarding the direction/position (with respect to the vehicle or specific location thereof) which the audible sound is supposed to be perceived to originate from.
  • This information is provided to the portable device without the processing unit necessarily being aware of the orientation and/or location of the acoustic output device relative to the vehicle.
  • the acoustic output device can then determine, on the basis of its own knowledge of its own orientation and/or location, how to generate and output the audible sound so that the audible sound is able to be perceived by the user as coming from the "consistent" direction/location.
  • the expression "substantially consistent direction relative to the vehicle” and similar is preferably intended to be understood in such a way that an angular difference between, on the one hand, the direction which the audible sound is supposed to be perceived to originate from, and, on the other hand, the direction from which the audible sound will be perceived, by a user, to be coming from, is at most, or less than, one of: 90°, 80°, 70°, 60°, 50°, 45°, 40°, 30°, 20° or 10°.
  • the expression "substantially consistent location relative to the vehicle or the specific point thereof” and similar is preferably intended to be understood in such a way that a distance between, on the one hand, the location which the audible sound is supposed to be perceived to originate from, and, on the other hand, the location from which the audible sound will be perceived, by a user, to be coming from, is at most, or less than, one of: 100 cm, 90 cm, 80 cm, 70 cm, 60 cm, 50 cm, 40 cm, 30 cm, 20 cm or 10 cm.
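As a small illustration, a tolerance check of this kind could be sketched as follows; the 30° and 50 cm thresholds are simply two of the values listed above, not preferred values singled out by the disclosure.

```python
# Illustrative check of whether a rendered direction/location still counts as
# "substantially consistent" within chosen angular and distance tolerances.

def is_substantially_consistent(angle_error_deg: float,
                                distance_error_cm: float,
                                max_angle_deg: float = 30.0,
                                max_distance_cm: float = 50.0) -> bool:
    return abs(angle_error_deg) <= max_angle_deg and abs(distance_error_cm) <= max_distance_cm

print(is_substantially_consistent(12.0, 25.0))  # True
print(is_substantially_consistent(70.0, 25.0))  # False with a 30° tolerance
```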
  • the audible sound is part of audible content forming part of infotainment content, the infotainment content further comprising visual content associated with the audible content, in particular synchronised with the audible content, the visual content being (intended to be) displayed on a display device.
  • infotainment content is preferably intended to be understood to mean any audio-visual (media) content, in particular content for work, information, entertainment or social purposes. This encompasses in particular news, video clips, video calls etc.
  • “associated” in the expression “visual content associated with the audible content” is intended to be understood to mean that the visual and audible content together form audio-visual content (in a coherent manner).
  • audible content associated with visual content may be the soundtrack of a film or video clip, or speech of a video call etc.
  • the display device may, for example, be a central information display (CID), a head-up display or a rear seat entertainment device (RSE).
  • the audible content comprises a first audible portion and a second audible portion, wherein the first audible portion comprises the audible sound
  • the infotainment content of which the audible content forms part may be a film, for example a scene in a busy restaurant, in which many voices can be heard as ambient noise.
  • the film may focus on a particular person speaking.
  • the present embodiment envisages that the sounds uttered by the particular person speaking (first audible portion) should be treated differently from the ambient noise (second audible portion).
  • the present embodiment envisages that the first audible portion should be perceived by a user as coming from a particular direction/location, in particular from a direction (with respect to a user) corresponding to the location of the particular person on the display device, whereas the ambient noise (second audible portion) should be perceived by a user as coming from a variety of directions - as is typical for ambient noise - in particular in an omnidirectional manner (i.e. with substantially equal intensity from all directions).
  • for the first audible portion, the orientation and/or location of the acoustic output device is taken into account.
  • for the second audible portion, the orientation and/or location of the acoustic output device is not taken into account (or the second audible portion is not output as a function of the orientation/location of the acoustic output device with respect to the road-based vehicle).
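A minimal sketch of such a two-portion mix, assuming four headphone speaker channels and mono sample blocks; the channel names and gain values are illustrative assumptions.

```python
# Sketch: the first audible portion (e.g. dialogue) is panned according to the
# head orientation (gains computed elsewhere), while the second portion (ambient
# noise) is fed to all speakers at equal, lower gain, i.e. without regard to the
# orientation of the headphones.

SPEAKERS = ("front_left", "front_right", "rear_left", "rear_right")

def mix_portions(dialogue: list, ambient: list, dialogue_gains: dict,
                 ambient_gain: float = 0.3) -> dict:
    """Return one output block per speaker channel."""
    out = {}
    for name in SPEAKERS:
        g = dialogue_gains.get(name, 0.0)
        out[name] = [g * d + ambient_gain * a for d, a in zip(dialogue, ambient)]
    return out

# Example: dialogue mostly on the front-left speaker, ambient equally everywhere.
gains = {"front_left": 0.9, "front_right": 0.2, "rear_left": 0.0, "rear_right": 0.0}
block = mix_portions([0.5, 0.4, 0.3], [0.1, 0.1, 0.1], gains)
print({k: [round(x, 2) for x in v] for k, v in block.items()})
```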
  • the method further comprises detecting sounds generated by an occupant of the road-based vehicle other than a user of the acoustic output device; and causing the detected sounds or processed versions thereof to be output as said audible sound, in particular in such a way that a user of the acoustic output device perceives the audible sound as coming from a direction and/or location relative to said user which corresponds to a direction and/or location of said occupant relative to said user, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the system is arranged to detect sounds generated by an occupant of the road-based vehicle other than a user of the acoustic output device. These sounds may be detected by a detector forming part of the system. Alternatively, information regarding the sounds may be obtained via an interface of the system, in particular an interface of the processing unit.
  • the control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output the detected sounds or processed versions thereof as said audible sound, in particular in such a way that a user of the acoustic output device perceives the audible sound as coming from a direction and/or location relative to said user which corresponds to a direction and/or location of said occupant relative to said user, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • the acoustic output device used by the user has a noise cancellation function.
  • spoken words uttered by an occupant other than the user would then normally (i.e. without the method or system of the present invention) not be heard by the user, or at least would be difficult to hear.
  • the present embodiments can ensure firstly that the user can hear the other occupant (via the acoustic output device) and secondly that the user can hear the words uttered by the other occupant as coming from a direction and/or location relative to the user which corresponds to a direction and/or location of the occupant relative to the user.
  • Such a function might be useful for safety reasons or might even be required for safety reasons.
  • Such a “transparent” mode might be active permanently, or might, in a further development, be activated through a particular action/event, for example if the other occupant utters a particular sound, for example a specific word or combination of words, such as “Hi, <name of the user>” (e.g. “Hi, John” if the user of the headphones is called “John”), or “Hi, this is <name of other occupant> speaking” (e.g. “Hi, this is Michael speaking” if the other occupant is called “Michael”), or some other “wake-up” expression.
  • a particular event could be the detection of a siren of an emergency vehicle.
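A simple sketch of such a trigger, assuming a speech transcript and a set of detected acoustic events are available; the wake phrases and the "siren" event label are illustrative assumptions, not taken from the disclosure.

```python
# Hedged sketch: deciding when to route another occupant's speech through the
# headphones ("transparent" mode).

def should_pass_through(transcript: str, user_name: str, detected_events: set) -> bool:
    text = transcript.lower()
    wake_phrases = (f"hi, {user_name.lower()}", "hi, this is")
    if any(phrase in text for phrase in wake_phrases):
        return True
    return "siren" in detected_events  # e.g. an approaching emergency vehicle

print(should_pass_through("Hi, John, are you hungry?", "John", set()))   # True
print(should_pass_through("nice weather today", "John", {"siren"}))      # True
print(should_pass_through("nice weather today", "John", set()))          # False
```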
  • the method further comprises detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user.
  • the system further comprises a detector for detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user.
  • the system may comprise an interface for receiving information regarding a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to be able to derive therefrom the direction and/or location of said occupant relative to said user.
  • detecting the direction and/or location of said occupant with respect to the vehicle or the specific point thereof comprises detecting the direction and/or location of said occupant with respect to the vehicle or the specific point thereof using one or more microphones and/or cameras, in particular one or more microphones and/or cameras installed in the vehicle.
  • the detector for detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof comprises one or more microphones and/or cameras, in particular one or more microphones and/or cameras installed in the vehicle.
  • the one or more microphones and/or cameras can detect which occupant or occupants are currently e.g. speaking, or where any sounds generated within the vehicle are coming from.
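As an illustration of how microphones can provide such direction information, the following far-field sketch estimates the direction of a sound source from the arrival-time difference at a pair of microphones a known distance apart. This is a simplification; the disclosure does not prescribe a particular localisation algorithm.

```python
import math

# Simplified far-field sketch: estimate the direction of a speaking occupant from
# the arrival-time difference at a pair of microphones with known spacing.

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def direction_from_tdoa(delta_t_s: float, mic_spacing_m: float) -> float:
    """Angle in degrees from the perpendicular bisector of the microphone pair."""
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * delta_t_s / mic_spacing_m))
    return math.degrees(math.asin(s))

# Example: sound reaches one microphone 0.3 ms earlier than the other,
# microphones 0.2 m apart -> source roughly 31° off the bisector on that side.
print(round(direction_from_tdoa(0.0003, 0.2), 1))
```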
  • the audible sound may be an audible sound arranged to indicate a condition of the vehicle, in particular that the vehicle is operational and/or that the vehicle is moving.
  • the audible sound may therefore be a (subtle) sound that is (artificially) generated or added to give a person or user in the vehicle an indication of the condition of the vehicle (operational/moving etc.).
  • an audible sound arranged to indicate that the vehicle is operational may be generated for example after a user of the vehicle has put the vehicle into an "activated state", e.g. by inserting a key, key card or similar into an appropriate receptacle of the vehicle or has pressed a start button or similar and while the vehicle remains stationary.
  • An audible sound of the type described above may however also be generated and output in connection with a vehicle which is equipped with an internal combustion engine.
  • the audible sound is an audible sound arranged to resemble the sound of an internal combustion engine
  • the audible sound may be arranged to be generated in such a way that it is perceived as coming from a central location towards the front of the vehicle, where, in a typical vehicle with a combustion engine, the engine would be located.
  • the acoustic output device comprises a portable device, in particular a personal device or a proprietary device of the vehicle, in particular headphones or a headset, in particular with a surround sound function and/or a noise cancellation function.
  • the acoustic output device may be removable (from the vehicle).
  • the vehicle may include a holder or cradle or similar for carrying the portable device.
  • a cradle may, for example, be provided on the back of a headrest (of a front seat) for use by a person sitting on a seat behind that headrest.
  • other locations are also possible.
  • When the portable device is placed in the holder or cradle, it is substantially fixed with respect to the vehicle. However, it can be removed from the holder or cradle (and may or may not still be attached to the vehicle via a cable or similar).
  • the vehicle may have proprietary connectors which (typically) only fit a corresponding connector of the holder or cradle, or the communication protocols of the holder or cradle (of the vehicle) need to match those of the portable device, so that a generic device (such as a personal headset or personal headphones) might not be able to communicate with the vehicle via the holder or cradle.
  • the invention encompasses embodiments in which no holder or cradle is provided in connection with a proprietary device.
  • the device may be proprietary to the vehicle in the sense that the communication protocols of the portable device need to match those of the vehicle in order to be able to communicate with the vehicle, whereas a generic device (such as a personal headset or personal headphones) might not be able to communicate with the vehicle.
  • the acoustic output device may be connected to the vehicle in a wired or wireless manner, for example via Bluetooth ® or similar.
  • a “wired” connection encompasses not only connections established by, or using, a (metallic) wire, but also optical connections, and preferably also any other type of physical connection allowing for the transmission of information.
  • Fig. 1 schematically shows a plan view of a vehicle 15 according to an embodiment of the present invention.
  • the vehicle 15 is shown, by way of example, as a left-hand drive car, with two front seats 11, 12, two rear seats 13, 14 and a steering wheel (not labelled).
  • the front of the vehicle is at the top of Fig. 1 .
  • Vehicle 15 is equipped with a processing unit 1.
  • Processing unit 1 may comprise, or form part of, an onboard computer of vehicle 15.
  • Processing unit 1 is connected to one or more detectors 2, such as microphones 2, for detecting sounds within vehicle 15.
  • Microphones 2 are distributed at locations in the vehicle 15 and may be substantially permanently installed in vehicle 15. In the example shown in Fig. 1 , there are ten such microphones 2. Only one (wired) connection between processing unit 1 and one of the microphones 2 (right-hand side, towards the front of the vehicle 15) is shown. The other microphones 2 may be connected in like manner. The connection(s) may be wired or wireless. More, or fewer, than ten microphones 2 may be provided. Microphones 2 may all be positioned at the same height within the vehicle 15, or at different heights, in which case it may be possible to determine (more accurately) the location where sounds which are detected by microphones 2 originate from, in particular in a three-dimensional space.
  • processing unit 1 is connected to one or more cameras 16, for capturing (part of) the interior of vehicle 15, in particular the seats 11-14 and any occupants thereof.
  • Cameras 16 are distributed at locations in the vehicle 15 and may be substantially permanently installed in vehicle 15. In the example shown in Fig. 1 , there are two such cameras 16. Only one (wired) connection between processing unit 1 and one of the cameras 16 (towards the front of the vehicle 15) is shown. The other camera(s) 16 may be connected in like manner. The connection(s) may be wired or wireless. More, or fewer, than two cameras 16 may be provided. Cameras 16 may for example be mounted to the ceiling of vehicle 15. Using cameras 16, the location of a person speaking within vehicle 15 may be able to be determined, for example by processing unit 1 analysing any mouth movements as detected by cameras 16.
  • Processing unit 1 is also connected to one or more acoustic output devices.
  • Fig. 1 illustrates three such acoustic output devices 3, 4, 5. These will now be explained.
  • acoustic output devices 3, 4, 5 are movable (or portable), at least within a certain range.
  • Acoustic output device 3 is a personal portable device, such as headphones or a headset, which may be connected to vehicle 15, in particular to processing unit 1, in a wired or wireless manner, for example via Bluetooth ® or similar. Acoustic output device 3 may have a surround sound function.
  • Acoustic output device 4 is proprietary to vehicle 15. It is held by, placed upon, or (removably) attached to a cradle or holder 6, which may be (substantially permanently) installed in vehicle 15, for example attached to the back of seat 11 or the headrest of seat 11, and may form part of a rear seat entertainment (RSE) system.
  • Cradle 6 is connected to processing unit 1 by a wired connection.
  • Acoustic output device 4 is also connected to cradle 6 by a wired connection.
  • Acoustic output device 4 may comprise headphones or a headset and may have a surround sound function.
  • the location of cradle 6 with respect to the vehicle 15 (and therefore potentially also an at least approximate location of acoustic output device 4) may be known to processing unit 1. A possible use of this information will become clear later.
  • Acoustic output device 5 and cradle/holder 7 may substantially correspond to acoustic output device 4 and cradle/holder 6, except that these are not connected by a wired connection to processing unit 1, i.e. there is neither a wired connection between processing unit 1 and cradle 7, nor between cradle 7 and acoustic output device 5.
  • Acoustic output device 5 may communicate with cradle 7, or directly with processing unit 1, in a wireless manner.
  • Cradle 7 may also communicate with processing unit 1 in a wireless manner, or may not communicate with processing unit 1 at all and may simply be a holder for (mechanically) accommodating portable device 5 and/or serve as a charging station for acoustic output device 5.
  • acoustic output devices 4, 5 and cradles/holders 6, 7 are also possible, for example such that the acoustic output device 4, 5 is connected to a respective cradle 6, 7 in a wireless manner whilst the respective cradle 6, 7 is connected to processing unit 1 by a wired connection.
  • portable device 4, 5 may be connected to the respective cradle 6, 7 by a wired connection whilst the respective cradle 6, 7 is connected to processing unit 1 in a wireless manner.
  • cradles 6 and/or 7 may be omitted.
  • Processing unit 1 may have one or more wireless interfaces 8 for communicating with any one, some or all of the microphones 2, acoustic output devices 3, 4, 5 and/or cameras 16. Signals may be sent between processing unit 1 and microphones 2, acoustic output devices 3, 4, 5 and/or cameras 16 via their respective connections, if applicable via cradles 6, 7.
  • processing unit 1 may comprise a wired interface, for example an electrical connector for a cable-based connection with any, some or all of the other devices mentioned above. If these devices are hard-wired to processing unit 1, then the interface may be regarded as a point along the connection between these devices and processing unit 1.
  • a wired connection can also be an indirect wired connection, for example such that cradles 6, 7 are connected to processing unit 1 via a first wired connection (hard-wired or using a removable cable with one or more connectors), and acoustic output devices 4, 5 are connected to their respective cradle 6, 7 via a second wired connection (hard-wired or using a removable cable with one or more connectors, for example plugged into an AUX socket of the respective cradle).
  • not all of the devices are present or connected, or the system of certain embodiments of the invention may comprise only some of the devices illustrated in Fig. 1 .
  • the system comprises only the processing unit 1 and one acoustic output device such as one of the acoustic output devices 3-5, or only the processing unit 1 (for controlling one or more acoustic output devices).
  • Fig. 2 schematically shows a top view of the head 22 of a person wearing headphones (or a headset), in a first orientation
  • Fig. 3 schematically shows a top view of the head 22 of the person from Fig. 2 , in a second orientation.
  • the reference number 22 is respectively placed towards the direction in which head 22 is facing. In other words, in the orientation of Fig. 2 , the head 22 is turned towards the top of the figure, whereas in Fig. 3 , the head 22 is turned towards the left.
  • Figs. 2 and 3 schematically show headphones worn by the person 22, whereby reference number 20 indicates a right-hand portion of the headphones and reference number 21 indicates a left-hand portion of the headphones.
  • the headphones 20, 21 are shown as examples of the acoustic output devices 3, 4, 5 of Fig. 1 .
  • the headphones 20, 21 are headphones with surround sound function.
  • the headphones are further equipped with one or more (built-in) devices 23 for determining the orientation/position of the headphones, in particular with respect to a local or global reference.
  • Such devices 23 can, for example, comprise a gyroscope and/or an accelerometer, in particular a 3-axis accelerometer, for example one gyroscope/accelerometer per headset or, as shown in Fig. 3 , one gyroscope/accelerometer 23 each for the right-hand portion 20 and the left-hand portion 21 of the headphones.
  • Such devices 23 can, for example, (also) determine the orientation/position of the headphones with respect to a satellite-based signal, such as a GPS signal.
  • Two speakers 24 (front) and 27 (rear) are integrated into right-hand portion 20.
  • two speakers 25 (front) and 26 (rear) are integrated into left-hand portion 21.
  • More than two speakers may be provided in each of the right-hand portion 20 and the left-hand portion 21. It may also be possible to provide only one speaker for each of these portions, whereby this single speaker should be such that, depending on how it is activated, a user of the headphones 20, 21 will perceive audible sounds generated and output by this single speaker to come from different directions - so that the right-hand and left-hand portions 20, 21 together can provide a surround sound experience.
  • arrow 17 indicates the (forward) direction of travel of vehicle 15
  • arrow 18 indicates the direction in which the user 22 is facing.
  • Arrow 18 therefore also indicates the orientation of the acoustic output device (headphones 20, 21).
  • the user 22 is supposed to perceive an audible sound as coming approximately from a central location at the front of vehicle 15, where, at least in most conventional vehicles with an internal combustion engine, the engine would typically be located. This location is indicated as a square 10 in Figs. 1 , 2 and 3 .
  • the audible sound may be intended to resemble the sound of a typical internal combustion engine. Given the (perceived) location from which the audible sound is intended to originate from, the user 22 will be likely to be provided with a particularly realistic impression of a sound coming from an internal combustion engine - even if vehicle 15 is an electric vehicle, for example. In some cases, this may reduce the effects of motion sickness.
  • the location 10 may be considered to correspond to a direction of 11 o'clock (12 o'clock being straight ahead).
  • the system determines, using gyroscopes and/or accelerometers 23 or similar, the orientation and/or position of the headphones 20, 21 in relation to vehicle 15.
  • the system causes the headphones 20, 21 to output the audible sound in such a way that the person will perceive the audible sound as coming (approximately) from the central location 10 at the front of vehicle 15, i.e. from the direction of 11 o'clock with respect to the head 22 of the person.
  • the sound output of the speakers is indicated by circles, whereby a black filled-in circle indicates an activated speaker (i.e. outputting sound), and a white circle with a black outline indicates a speaker that is not activated (i.e. not outputting sound).
  • the speakers 24, 25 and 26 are activated, whereas speaker 27 is not activated.
  • the size of the circles of the activated speakers indicates the sound volume. As shown in Fig. 2 , the sound volume of speaker 25 is greatest, whereas the sound volume of speaker 26 is smallest.
  • the activation of the speakers as shown in Fig. 2 can give the user 22 the impression that the sound is coming from the 11 o'clock direction, as also indicated by arrow 19.
  • the person has turned their head 22 by 90° towards the left.
  • the central location 10 at the front of vehicle 15 now corresponds to 2 o'clock with respect to the head 22 of the person.
  • the system can cause the headphones 20, 21 to output the audible sound in such a way that the person will perceive the audible sound as coming (approximately) from the central location 10 at the front of vehicle 15, i.e. from the direction of 2 o'clock with respect to the head 22 of the person. Accordingly, in Fig. 3, the speakers 24, 25 and 27 are activated, whereas speaker 26 is not activated, whereby the sound volume of speaker 24 is now greatest and the sound volume of speaker 25 is smallest.
  • the activation of the speakers as shown in Fig. 3 can give the user 22 the impression that the sound is now coming from the 2 o'clock direction, as also indicated by arrow 19.
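The 11 o'clock / 2 o'clock relationship of Figs. 2 and 3 can be reproduced with a small piece of arithmetic, assuming 12 o'clock is straight ahead, each hour corresponds to 30°, and angles are measured clockwise.

```python
# Worked example: the same vehicle-frame direction of the engine location 10,
# about "11 o'clock" (-30°), maps to different head-relative clock directions as
# the head turns.

def head_relative_clock(bearing_vehicle_deg: float, head_yaw_deg: float) -> int:
    relative = (bearing_vehicle_deg - head_yaw_deg) % 360.0
    hour = round(relative / 30.0) % 12
    return 12 if hour == 0 else hour

print(head_relative_clock(-30.0, 0.0))    # 11 -> Fig. 2, head facing forward
print(head_relative_clock(-30.0, -90.0))  # 2  -> Fig. 3, head turned 90° to the left
```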
  • the user 22 can perceive the sound as coming from the (consistent) direction indicated by arrow 19, which, in both cases, has the same orientation with respect to the vehicle 15.
  • the system can ensure that the sound is perceived as coming from a consistent direction with respect to vehicle 15, irrespective of the orientation of the acoustic output device.
  • the audible sound is perceived by the person not only as coming from a consistent direction with respect to vehicle 15 but also as coming from an appropriate/consistent distance.
  • the audible sound imitating an engine noise coming from a central location 10 at the front of vehicle 15.
  • a first user is sitting on (front) seat 12, and a second user is sitting on (rear) seat 14, behind seat 12.
  • An embodiment of the invention envisages that the first and second users not only perceive the audible sound as coming from slightly different directions, but also that the second user perceives the audible sound as coming from a location at a greater distance than is the case for the first user.
  • processing unit 1 can take this information into account to generate appropriate control signals respectively for the acoustic output devices used by the first and second users to ensure that the first user will perceive the audible sound as coming from a closer distance than the second user. It is envisaged that in most cases this will mean (inter alia) that the audible sound which the first user will perceive is at a greater volume than the audible sound that the second user will perceive.
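A minimal sketch of such distance-dependent control, assuming a simple inverse-distance law and a 1 m reference distance; both are assumptions, not values from the disclosure.

```python
# Hedged sketch: inverse-distance attenuation so that a user sitting further from
# the (virtual) source location 10 perceives a quieter sound than a user closer to it.

def distance_gain(distance_m: float, reference_distance_m: float = 1.0) -> float:
    """Gain relative to the level at the reference distance (clamped to <= 1)."""
    return min(1.0, reference_distance_m / max(distance_m, reference_distance_m))

print(round(distance_gain(1.5), 2))  # first user on seat 12, e.g. 1.5 m from location 10
print(round(distance_gain(3.0), 2))  # second user on seat 14, roughly twice as far -> about half the gain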
  • Ensuring that the "correct” portion of headphones 20, 21, in particular the “correct” (subset of) speakers, is/are activated and at an appropriate volume so as to enable the person to perceive the audible sound as coming from a consistent direction/location can be implemented in at least two ways, which will be explained in the following.
  • the gyroscopes and/or accelerometers 23 or similar provide information regarding the orientation (and potentially the position) of the headphones 20, 21 to the processing unit 1. This may be supplemented with information regarding the location of the headphones with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15.
  • if a portable device such as headphones 20, 21 is connected to processing unit 1 or a cradle 6, 7 or some other reference point of vehicle 15 via Bluetooth® or similar, signal parameters such as the angle of arrival, potentially detected at multiple, spaced apart locations, can be used to determine the relative location of the headphones 20, 21.
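One way such angle-of-arrival measurements could be combined is sketched below for two antennas with known positions in the vehicle frame; this is an illustrative triangulation, not the specific method of the disclosure.

```python
import math

# Hedged sketch: estimate the 2-D position of the headphones in the vehicle frame
# from angle-of-arrival measurements at two fixed, spaced-apart antennas.

def intersect_bearings(p1, angle1_deg, p2, angle2_deg):
    """Intersect two rays given by anchor points and bearing angles (degrees,
    measured from the vehicle's x-axis). Returns (x, y) or None if parallel."""
    d1 = (math.cos(math.radians(angle1_deg)), math.sin(math.radians(angle1_deg)))
    d2 = (math.cos(math.radians(angle2_deg)), math.sin(math.radians(angle2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # bearings are (nearly) parallel, no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two antennas one metre apart; bearings of 45° and 135° place the device at about (0.5, 0.5).
print(intersect_bearings((0.0, 0.0), 45.0, (1.0, 0.0), 135.0))
```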
  • the processing unit 1 then generates a control signal taking this information into account, i.e. the control signal then includes information as to which portion/speakers of headphones 20, 21 to activate and at what volume (based on the distance between headphones 20, 21 and the consistent location 10).
  • the headphones 20, 21 can then use the received control signal to output the audible sound via the respective portion/speakers of headphones 20, 21.
  • the headphones 20, 21 then do not need to calculate themselves which portion/speakers of headphones 20, 21 to activate and at what volume (taking into account the direction from which the audible sound is supposed to be perceived as coming from, as well as the orientation/position of the headphones 20, 21, and in particular the distance between headphones 20, 21 and the consistent location 10) - since the control signal received from processing unit 1 already contains this information.
  • the gyroscopes and/or accelerometers 23 or similar again provide information regarding the orientation (and potentially the position) of the headphones 20, 21.
  • this information is not transmitted to the processing unit 1 so that the processing unit then generates the control signal without being aware of the orientation/position of headphones 20, 21.
  • the control signal will therefore also not include information as to which portion/speakers of headphones 20, 21 to activate and at what volume but instead informs the headphones 20, 21 of the position (with respect to vehicle 15) from which the audible sound is supposed to be perceived as coming from.
  • the headphones 20, 21 can then use the received control signal and the information provided by its own gyroscope(s) and/or accelerometer(s) 23 or similar to calculate which portion/speakers of headphones 20, 21 to activate and at what volume so that the person will perceive the audible sound as coming from the "correct"/consistent direction with respect to vehicle 15 and "correct"/consistent distance.
  • information regarding the location of headphones 20, 21 with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15 can be determined (e.g. using signal parameters such as the angle of arrival, potentially detected at multiple, spaced apart locations, see above) and then used in order to control the headphones 20, 21.
  • this location information is determined by the headphones 20, 21 themselves, there is no need to transmit this location information to the processing unit 1. If, on the other hand, this location information is determined by the processing unit 1, this location information should be taken into account by the processing unit 1 when generating the control signal.
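The two variants can be contrasted by the information carried in the control signal. The following sketch shows possible message structures; the field names and types are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Hedged sketch of the two control-signal variants described above: in the first
# variant the processing unit already resolves the per-speaker levels; in the
# second it only communicates the vehicle-frame position of the virtual source
# and the headphones do the spatial rendering themselves.

@dataclass
class SpeakerLevelControlSignal:
    """Variant 1: the processing unit knows the headphone orientation/location."""
    speaker_gains: Dict[str, float] = field(default_factory=dict)  # e.g. {"front_left": 0.9, ...}

@dataclass
class SourcePositionControlSignal:
    """Variant 2: the headphones resolve the rendering from their own sensors."""
    source_position_vehicle_frame: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # metres

# Example instances for the "engine sound from location 10" use case.
sig_v1 = SpeakerLevelControlSignal({"front_left": 0.9, "front_right": 0.3, "rear_left": 0.1, "rear_right": 0.0})
sig_v2 = SourcePositionControlSignal((1.8, -0.4, 0.3))
print(sig_v1, sig_v2, sep="\n")
```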
  • the orientation/position of the headphones 20, 21 is detected by one or more cameras or other sensors, such as cameras 16 shown in Fig. 1 , and the corresponding information is then provided to processing unit 1 and/or the headphones 20, 21.
  • Cameras 16 may be substantially permanently installed in vehicle 15. The position of cameras 16 in Fig. 1 is indicative only; other locations are possible.
  • information regarding the location of the headphones with respect to a reference is initially determined on the basis of any of the techniques described above.
  • Information from the gyroscope(s) and/or accelerometer(s) 23 of the headphones 20, 21 is then used to update this location information.
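A minimal sketch of such an update scheme: integrate the gyroscope rate between absolute fixes and snap to an absolute measurement (e.g. camera-based) when one arrives. The numbers are purely illustrative.

```python
from typing import Optional

# Hedged sketch: keep the headphone yaw up to date between absolute fixes by
# integrating the gyroscope rate, and reset whenever a fresh absolute measurement
# becomes available.

def update_yaw(yaw_deg: float, gyro_rate_deg_s: float, dt_s: float,
               absolute_yaw_deg: Optional[float] = None) -> float:
    if absolute_yaw_deg is not None:
        return absolute_yaw_deg  # trust the fresh absolute measurement
    return (yaw_deg + gyro_rate_deg_s * dt_s + 180.0) % 360.0 - 180.0

yaw = 0.0
for _ in range(5):
    yaw = update_yaw(yaw, gyro_rate_deg_s=-45.0, dt_s=0.5)  # user turning left
print(round(yaw, 1))                                        # about -112.5° after 2.5 s
yaw = update_yaw(yaw, 0.0, 0.5, absolute_yaw_deg=-90.0)     # camera-based correction
print(yaw)
```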
  • Fig. 4 again illustrates an acoustic output device (headphones 20, 21) worn by a user 22, whereby the functions of the acoustic output device may correspond to those explained with reference to Figs. 2 and 3 and will therefore not be explained again.
  • Display device 9 can for example be a central information display (CID) or rear seat entertainment (RSE) device of vehicle 15.
  • the display device 9 is located in front of user 22.
  • a person/actor 28 located towards the left-hand side of display device 9 is speaking. Accordingly, an audible sound to be output via headphones 20, 21 should be perceived by user 22 as coming from a forward direction and slightly to the left.
  • this is indicated by activated (front) speakers 24 and 25, whereby the volume to be output by (left-hand) speaker 25 is greater than that to be output by (right-hand) speaker 24.
  • Speakers 26 and 27 towards the rear of headphones 20, 21 are not activated.
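A hedged geometric sketch of how the actor's on-screen position could be mapped to a vehicle-frame source point and then to a bearing relative to user 22; the coordinates, the frame convention and the display geometry are all assumptions made for illustration.

```python
import math

# Sketch: map the horizontal position of the speaking actor 28 on display device 9
# to a vehicle-frame source point and a bearing from the listener's position.
# Vehicle frame: x forward, y positive towards the vehicle's left.

def actor_source_point(display_centre, display_width_m, actor_x_norm):
    """actor_x_norm in [0, 1]: 0 = left edge, 1 = right edge as seen by the viewer.
    The display is assumed to face rearwards, spanning the vehicle's y-axis."""
    cx, cy = display_centre
    return (cx, cy + (0.5 - actor_x_norm) * display_width_m)

def bearing_from_listener(listener_pos, source_pos):
    """Bearing in degrees, 0° = straight ahead, positive towards the left."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    return math.degrees(math.atan2(dy, dx))

source = actor_source_point(display_centre=(2.0, 0.0), display_width_m=0.4, actor_x_norm=0.2)
print(round(bearing_from_listener((1.0, 0.0), source), 1))  # ~6.8°, i.e. slightly to the left
```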
  • the activation of speakers 24-27 can be regarded as being driven by the soundtrack of the film associated with the visual content to be displayed on display device 9.
  • the soundtrack of the film may have two portions, a first audible portion and a second audible portion (or a first audio portion and a second audio portion).
  • the sound associated with particular persons/actors (such as person/actor 28) or objects etc. might be represented by the first audible portion, whereas other sounds as part of the media content might be represented by the second audible portion.
  • the film shown on display device 9 may include a scene in a busy restaurant, in which (main) actor 28 is speaking, while ambient noise, for example conversations by other restaurant guests 29 (or extras in the film) should also be heard.
  • the activation of speakers 24-27 associated with person 28 speaking has already been explained above with reference to Fig. 4a).
  • the activation of speakers 24-27 associated with the ambient noise from extras 29 is illustrated, by way of example, in Fig. 4b).
  • all speakers 24 to 27 are activated at the same sound volume, albeit at a smaller sound volume than speaker 25 in Fig. 4a), to provide a realistic impression of ambient noise.
  • a user 22 is again travelling in vehicle 15, for example sitting on (front right-hand) passenger seat 12. Another occupant of vehicle 15 is sitting, by way of example, on (rear left-hand) seat 13.
  • User 22 may be wearing headphones (such as those illustrated in, and explained with reference to, Figs. 2 and 3 ), in particular headphones with a noise cancellation function.
  • User 22 may therefore not normally be able to hear speech uttered by the other occupant.
  • this embodiment envisages that the system detects the speech uttered by the other occupant, using one or more microphones 2.
  • the system detects the (approximate) location of the other occupant (unless this is already known).
  • the information regarding the location of the other occupant, as well as the orientation and/or location of headphones 20, 21 of user 22, are then used by the system, in particular the processing unit 1, to generate a control signal to ensure that headphones 20, 21 output audible sound, representing the speech uttered by the other occupant, which can be perceived by user 22 as coming from the direction/location of the other occupant with respect to the user 22 (or their headphones 20, 21), in this example from a direction of somewhere between 7 o'clock and 8 o'clock, in particular from a direction corresponding to half past 7 (in relation to the forward direction of vehicle 15), in particular irrespective of the orientation of headphones 20, 21.
  • Fig. 5 shows a flow chart illustrating a method according to an embodiment of the present invention.
  • In a first step, an orientation of an acoustic output device (such as any of acoustic output devices 3-5) with respect to the road-based vehicle 15 is determined (31).
  • An acoustic output function of the acoustic output device is then controlled (32) as a function of the orientation of the acoustic output device 3-5 with respect to the road-based vehicle 15.
  • For example, the processing unit 1 may send a corresponding control signal or control signals to the acoustic output device(s) 3-5.

Abstract

The invention relates to a method of controlling an acoustic output device (3-5) in a road-based vehicle (15) comprising: determining an orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15); and controlling an acoustic output function of the acoustic output device (3-5) as a function of the orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15).

Description

  • The present invention relates to a road-based vehicle and to a method and system for controlling an acoustic output device in a road-based vehicle. The invention further relates to a computer program for carrying out the method.
  • The present invention can, in principle, be used in connection with any type of road-based vehicle, in particular a motorized vehicle, such as a (passenger) car, lorry / truck, bus or coach. Given that the invention finds particular application in (passenger) cars, the invention will be described primarily in connection with cars, without being limited to this. In particular, one skilled in the art, guided by the present disclosure, will have no difficulty implementing the invention in the context of road-based vehicles other than cars.
  • Some acoustic output devices available today have a surround sound function, which can provide a user with a particularly rich listening experience. Such surround sound systems not only exist in a public cinema or home cinema setting; some portable devices such as certain headphones or headsets also have a surround sound function.
  • An example of a headphone loudspeaker system with surround sound is disclosed in WO 99/39546.
  • It is an object of at least some embodiments of the present invention to provide an alternative technique of controlling an acoustic output device, in particular in a road-based vehicle, in particular an improved technique of controlling an acoustic output device in a road-based vehicle.
  • In a first aspect, the present invention provides a method of controlling an acoustic output device in a road-based vehicle comprising:
    • determining an orientation of the acoustic output device with respect to the road-based vehicle; and
    • controlling an acoustic output function of the acoustic output device as a function of the orientation of the acoustic output device with respect to the road-based vehicle.
  • By determining the orientation of the acoustic output device with respect to the road-based vehicle and then controlling an acoustic output function of the acoustic output device as a function of the determined orientation, the orientation of the acoustic output device is taken into account so that, when the acoustic output device is in a first orientation with respect to the road-based vehicle, the acoustic output perceived by a user of the acoustic output device will, as a rule, be different when compared with a situation where the acoustic output device is in a second orientation with respect to the road-based vehicle, the second orientation being different from the first orientation. Using headphones with a plurality of (internal) speakers as an example of an acoustic output device, the headphones might be controlled in such a way that, in the first orientation, a first subset of the speakers is activated (to output an audible sound), and in the second orientation, a second subset of the speakers different from the first subset of the speakers is activated.
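  • To illustrate this, a minimal Python sketch is given below; it assumes, hypothetically, four internal speakers and a coarse quantisation of the orientation into four sectors, which is merely one conceivable way of selecting different speaker subsets for different orientations:

    def speaker_subset(head_yaw_deg):
        """Pick which of four hypothetical headphone speakers (front-left 'FL',
        front-right 'FR', rear-left 'RL', rear-right 'RR') to activate so that a
        sound anchored straight ahead of the vehicle keeps appearing to come from
        that direction.  head_yaw_deg: headphone orientation relative to the
        vehicle's forward direction (0 = facing forwards, clockwise positive)."""
        yaw = head_yaw_deg % 360.0
        if yaw < 45 or yaw >= 315:       # facing forwards -> use the front speakers
            return {"FL", "FR"}
        if yaw < 135:                    # facing right -> the sound now lies to the left
            return {"FL", "RL"}
        if yaw < 225:                    # facing backwards -> use the rear speakers
            return {"RL", "RR"}
        return {"FR", "RR"}              # facing left -> the sound now lies to the right

    print(speaker_subset(0))    # first orientation  -> front speakers
    print(speaker_subset(270))  # second orientation -> right-hand speakers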
  • In the sense of the invention, the term "orientation (of the acoustic output device, in particular with respect to the road-based vehicle)" and the similar term "orientation information" used below does not necessarily refer to a value or set of values which (directly) provides a correct measurement of the orientation (of the acoustic output device, in particular with respect to the road-based vehicle) as expressed in (correct) physical units (e.g. polar coordinates or similar). Instead, the terms "orientation" and "orientation information" can, in principle, mean any information which (at least approximately) characterises the orientation (of the acoustic output device, in particular with respect to the road-based vehicle), in particular (at least approximately) uniquely characterises the orientation (of the acoustic output device, in particular with respect to the road-based vehicle), in particular a value or set of values from which physically correct values of the orientation (of the acoustic output device, in particular with respect to the road-based vehicle) can (at least approximately) be obtained, in particular without having to resort to other information or measurements. In particular, the orientation or orientation information would be such that a computing device can process it.
  • In the sense of the invention, the term "surround sound" may refer to a two-dimensional surround sound (e.g. a plurality of speakers distributed substantially in a single plane) or to a three-dimensional surround sound (e.g. a plurality of speakers, not all distributed in a single plane).
  • In a second aspect, the present invention provides a system for controlling an acoustic output device in a road-based vehicle, the system comprising:
    a processing unit configured to receive orientation information regarding an orientation of the acoustic output device with respect to the road-based vehicle, wherein the processing unit is further configured to generate and output a control signal for causing the acoustic output device to output audible sound, wherein the processing unit is configured to generate and output the control signal as a function of the orientation information.
  • In the sense of the invention, a "processing unit" is preferably intended to be understood to mean any electrical component or device which can receive orientation information and generate a control signal based thereon which can be used by the acoustic output device. The processing unit can either (substantially) transparently pass the orientation information to the acoustic output device if the acoustic output device can be controlled directly by the orientation information, or it may process the orientation information and generate a control signal which is (substantially) different from the orientation information. The processing unit may in particular comprise a microprocessor. The processing unit of the second aspect of the invention may, for example, comprise an onboard computer of the vehicle, or form part thereof - and may accordingly form part of the vehicle. The processing unit may however also form part of the acoustic output device.
  • In some embodiments of the second aspect, the system further comprises a detector configured to detect said orientation of the acoustic output device with respect to the road-based vehicle in order to provide the orientation information, or the processing unit comprises an interface for receiving the orientation information.
  • Suitable detectors configured to detect the orientation of the acoustic output device with respect to the road-based vehicle are known in the art and include, for example, accelerometers, gyroscope sensors and magnetometer sensors, or combinations of these, in particular integrated into the acoustic output device. Other types of sensors may detect the orientation with respect to a local or global reference, for example using a satellite-based positioning system such as GPS or the like. Depending on the specific implementation, the orientation of the vehicle with respect to the local or global reference might also need to be known or determined, and the relative orientation of the acoustic output device with respect to the vehicle may then be calculated or derived therefrom.
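  • By way of a hedged illustration, the following short Python sketch shows one conceivable way of deriving the relative orientation, assuming that both the acoustic output device and the vehicle can report a yaw/heading with respect to the same global reference (e.g. compass headings in degrees); the function name and sign conventions are illustrative assumptions:

    def relative_yaw(device_heading_deg, vehicle_heading_deg):
        """Orientation of the acoustic output device with respect to the vehicle,
        derived from two headings measured against the same global reference
        (e.g. compass headings in degrees, clockwise from north).
        Result is in (-180, 180]: 0 = device facing the vehicle's forward
        direction, positive = turned to the right."""
        diff = (device_heading_deg - vehicle_heading_deg) % 360.0
        return diff - 360.0 if diff > 180.0 else diff

    # Vehicle heading north-east (45 deg), headphones facing east (90 deg):
    # the device is turned 45 deg to the right with respect to the vehicle.
    print(relative_yaw(90.0, 45.0))   # -> 45.0
    print(relative_yaw(10.0, 350.0))  # wrap-around case -> 20.0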
  • An interface for receiving the orientation information of the acoustic output device with respect to the vehicle may be wired or wireless. A wired interface may, for example, be an electrical connector for establishing a connection between the processing unit and other devices that may provide the orientation information, e.g. the acoustic output device itself. If such other devices, for example the acoustic output device, are hard-wired to the processing unit, the interface may be regarded as a point along the connection between the processing unit and such other devices.
  • The system, or at least parts thereof, can be built into the vehicle or may form part of the vehicle. However, the system can also be provided on its own and, for example, be supplied to vehicle manufacturers so that the system may be built into vehicles.
  • In a third aspect, the present invention provides a road-based vehicle comprising the system according to the second aspect, or any embodiment thereof.
  • In a fourth aspect, the present invention provides a computer program product comprising a program code which is stored on a computer readable medium, for carrying out the method in accordance with the first aspect, or any of its steps or combination of steps, or any embodiments thereof.
  • The computer program may in particular be stored on a non-volatile data carrier. Preferably, this is a data carrier in the form of an optical data carrier or a flash memory module. This may be advantageous if the computer program as such is to be traded independently of a processor platform on which the one or more programs are to be executed. In a different implementation, the computer program may be provided as a file or a group of files on one or more data processing units, in particular on a server, and can be downloaded via a data connection, for example the Internet, or a dedicated data connection, such as for example a proprietary or a local network. In addition, the computer program may comprise a plurality of interacting, individual program modules.
  • In some embodiments, the computer program may be updatable and configurable, in particular in a wireless manner, for example by a user or manufacturer.
  • Some (further) embodiments of the first to fourth aspects of the present invention will now be described.
  • In some embodiments of the first aspect, the method further comprises: determining a location of the acoustic output device with respect to the road-based vehicle or a specific point thereof; and
    controlling the acoustic output function of the acoustic output device as a function of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof.
  • Similarly, in some embodiments of the second aspect, the system comprises a detector configured to detect the location of the acoustic output device with respect to the road-based vehicle or a specific point thereof in order to provide location information, or the processing unit comprises an interface for receiving the location information.
  • By determining the location of the acoustic output device with respect to the road-based vehicle or a specific point thereof and then controlling the acoustic output function of the acoustic output device as a function of the determined location, the location of the acoustic output device is taken into account so that, when the acoustic output device is in a first location with respect to the road-based vehicle or the specific point thereof, the acoustic output perceived by a user of the acoustic output device will, as a rule, be different when compared with a situation where the acoustic output device is in a second location with respect to the road-based vehicle or the specific point thereof, the second location being different from the first location. Again using headphones with a plurality of (internal) speakers as an example of an acoustic output device, the headphones might be controlled in such a way that, when the acoustic output device is in the first location, a first subset of the speakers is activated (to output an audible sound), and when the acoustic output device is in the second location, a second subset of the speakers different from the first subset of the speakers is activated.
  • The "specific point" of the vehicle does not necessarily need to be a point within the vehicle or be a point on a component of the vehicle but can be a point whose location remains in a fixed (spatial) relationship with respect to the vehicle as the vehicle moves.
  • The above explanations regarding the orientation (information) and how it may be determined or obtained may also apply to the location (information), mutatis mutandis.
  • In some embodiments of the first aspect, controlling the acoustic output function of the acoustic output device as a function of the orientation of the acoustic output device with respect to the road-based vehicle and/or controlling the acoustic output function of the acoustic output device as a function of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof comprises causing the acoustic output device to output audible sound in such a way that a user of the acoustic output device perceives the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • Similarly, in some embodiments of the second aspect, the control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output audible sound in such a way that a user of the acoustic output device perceives the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • To illustrate these embodiments, we again use headphones worn by a user as an example of an acoustic output device. We further make the following assumptions for the purpose of this example:
    • the headphones are headphones with surround sound function
    • the user is initially facing forward (in the direction of travel of the vehicle), i.e. the (initial or first) orientation of the headphones is also a "forward" orientation
    • the user is initially sitting on the passenger seat (front, right) of a left-hand drive car/vehicle (first location)
    • the user is supposed to perceive the audible sound as coming from a central location at or towards the front of the vehicle
  • In relation to the head of the user (or in relation to the headphones worn by the user), the central location at or towards the front of the vehicle may be considered to correspond to a direction of 11 o'clock (12 o'clock being straight ahead). Given the determined orientation and/or location of the headphones with respect to the vehicle, the headphones are controlled in such a way, in particular by a control signal from the processing unit, that the user does indeed perceive the audible sound as coming from the central location at or towards the front of the vehicle. For example, a speaker towards the front left-hand side of the headphones might be controlled to output a greater volume than a speaker on the right-hand side of the headphones.
  • Still considering the above example, the user now turns their head towards the left (but remains in the same location, i.e. the passenger seat at the front, right). The orientation of the head and of the headphones has now changed to a second orientation with respect to the vehicle, i.e. a "left" orientation. The orientation (now: "left") and location (passenger seat) of the headphones with respect to the vehicle can again be determined and, given the determined orientation and/or location of the headphones with respect to the vehicle, the headphones are controlled in such a way, in particular by a control signal from the processing unit, that the user still perceives the audible sound as coming from the central location at or towards the front of the vehicle - despite the head of the user/headphones having changed their orientation. For example, in this case, a speaker on the right-hand side of the headphones and slightly towards the front (with respect to the head of the user) might be controlled to output a greater volume than a speaker on the left-hand side of the headphones.
  • The headphones would be controlled in a corresponding way (mutatis mutandis), in particular by a control signal from the processing unit, if the user (additionally) changed their location within the vehicle, for example by moving to the driver's seat (front, left).
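  • A minimal Python sketch of this compensation is given below, purely for illustration; it assumes a two-dimensional vehicle frame with hypothetical coordinates for the central location at the front of the vehicle and for the passenger seat, and a headphone yaw angle of 0° when the user faces forwards:

    import math

    def head_relative_bearing(target_xy, headphones_xy, head_yaw_deg):
        """Direction (0 deg = straight ahead of the user, clockwise positive) from
        which the sound should be perceived, so that it stays anchored to a target
        location fixed in the vehicle frame (x to the right, y forwards, metres)."""
        dx = target_xy[0] - headphones_xy[0]
        dy = target_xy[1] - headphones_xy[1]
        bearing_vehicle = math.degrees(math.atan2(dx, dy))   # w.r.t. vehicle forward
        return (bearing_vehicle - head_yaw_deg) % 360.0

    front_centre = (0.0, 1.6)     # assumed central location at the front of the vehicle
    passenger_seat = (0.9, 0.0)   # assumed front right-hand seat position

    # User facing forwards: sound should appear from roughly the 11 o'clock direction.
    print(round(head_relative_bearing(front_centre, passenger_seat, 0.0)))    # ~331 deg

    # Same seat, head turned 90 deg to the left: now roughly the 2 o'clock direction.
    print(round(head_relative_bearing(front_centre, passenger_seat, -90.0)))  # ~61 deg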
  • Accordingly, the user can perceive the audible sound as coming from a substantially consistent direction and/or location relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof. As used herein, the expression "consistent direction and/or location" does not necessarily mean that the audible sound only ever comes from the same location/direction with respect to the vehicle, such as the central location towards the front of the vehicle. Instead, the direction and/or location from which the audible sound is intended to be perceived to come may (intentionally) change over time. For example, it may be intended that the audible sound initially is perceived to come from the front right-hand corner of the vehicle, and, over time, the location from which the audible sound is intended to be perceived to come moves to the front left-hand corner of the vehicle. The headphones would be controlled accordingly, in particular by a control signal from the processing unit, and during the course of this, the orientation and/or location of the headphones with respect to the vehicle or the specific point thereof will be taken into account.
  • In this context it is also worth noting that the audible sound mentioned above does not need to be a sound originating from, or originally generated by, the vehicle itself, or generated in order to mimic or resemble a function of the vehicle. Or in other words, the vehicle itself, in particular when considering only its primary function of transporting persons, objects or animals, or functions associated with (or mimicking or resembling) this primary function, does not need to be the source of the audible sound. Instead, or in addition, the source of the audible sound (or the reason why the audible sound is generated) may be an entertainment or information device, in particular an entertainment or information device that is integrated into the vehicle, for example a rear seat entertainment (RSE) device or a co-driver entertainment (CDE) device. In any of these (or similar) examples, the acoustic output device would be controlled, in particular by a control signal from the processing unit, in such a way that a user of the acoustic output device perceives the audible sound as coming from the substantially consistent direction and/or location (e.g. the entertainment or information device) relative to the vehicle or the specific point thereof, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • In a further development or another example of application, it may be intended that the audible sound is perceived to come from a particular position of an entertainment or information device, e.g. from a particular location on a screen of the entertainment or information device. This position or location may also (intentionally) change over time, for example if the audible sound is supposed to be the voice of a voice assistant, whose position on the entertainment or information device might change. The location from which the audible sound is intended to be perceived to come might even switch from one (entertainment or information) device to another. In any of these (or similar) examples, the acoustic output device would be controlled, in particular by a control signal from the processing unit, in such a way that a user of the acoustic output device perceives the audible sound as coming from the appropriate (substantially consistent) direction and/or location (in particular corresponding to the direction and/or specific location of the entertainment or information device), irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof. The same applies (mutatis mutandis) if the device from which the audible sound is intended to be perceived to come is movable with respect to the vehicle, e.g. if such a device is a smart phone, laptop or tablet computer connected to the vehicle (by a wired or wireless connection).
  • The expression "consistent direction and/or location" is preferably to be understood to refer to a direction, for example expressed in terms of one or more angles (with respect to a reference) and/or a distance (with respect to a reference).
  • Ensuring that the audible sound is perceived as coming from a substantially consistent direction and/or location may also address problems associated with motion sickness. A large number of people experience motion sickness (also referred to as kinetosis, among other names) when travelling in a vehicle. Typically, this problem is exacerbated if there is a disconnect between what a person perceives (in terms of hearing and/or seeing) and what that person feels (in terms of movements of the vehicle), in particular acceleration in any direction, including when the vehicle is passing through a curve. Ensuring that the audible sound is perceived as coming from a substantially consistent direction and/or location may reduce or eliminate this disconnect.
  • In some embodiments, the processing unit provides to the acoustic output device, via the control signal, information regarding the direction/position (with respect to the vehicle or specific location thereof) which the audible sound is supposed to be perceived to originate from. This information is provided to the portable device without the processing unit necessarily being aware of the orientation and/or location of the acoustic output device relative to the vehicle. Using this information, the acoustic output device can then determine, on the basis of its own knowledge of its own orientation and/or location, how to generate and output the audible sound so that the audible sound is able to be perceived by the user as coming from the "consistent" direction/location.
  • In the sense of the invention, the expression "substantially consistent direction relative to the vehicle" and similar is preferably intended to be understood in such a way that an angular difference between, on the one hand, the direction which the audible sound is supposed to be perceived to originate from, and, on the other hand, the direction from which the audible sound will be perceived, by a user, to be coming from, is at most, or less than, one of: 90°, 80°, 70°, 60°, 50°, 45°, 40°, 30°, 20° or 10°. Similarly, the expression "substantially consistent location relative to the vehicle or the specific point thereof" and similar is preferably intended to be understood in such a way that a distance between, on the one hand, the location which the audible sound is supposed to be perceived to originate from, and, on the other hand, the location from which the audible sound will be perceived, by a user, to be coming from, is at most, or less than, one of: 100 cm, 90 cm, 80 cm, 70 cm, 60 cm, 50 cm, 40 cm, 30 cm, 20 cm or 10 cm.
  • In some embodiments of the first and second aspects, the audible sound is part of audible content forming part of infotainment content, the infotainment content further comprising visual content associated with the audible content, in particular synchronised with the audible content, the visual content being (intended to be) displayed on a display device.
  • In the sense of the invention, the expression "infotainment content" is preferably intended to be understood to mean any audio-visual (media) content, in particular content for work, information, entertainment or social purposes. This encompasses in particular news, video clips, video calls etc. Further, the term "associated" in the expression "visual content associated with the audible content" is intended to be understood to mean that the visual and audible content together form audio-visual content (in a coherent manner). For example, audible content associated with visual content may be the soundtrack of a film or video clip, or speech of a video call etc.
  • The display device may, for example, be a central information display (CID), a head-up display or a rear seat entertainment device (RSE).
  • In some embodiments of the first aspect, the audible content comprises a first audible portion and a second audible portion, wherein the first audible portion comprises the audible sound, and
    • wherein controlling the acoustic output function of the acoustic output device comprises causing the acoustic output device to output the second audible portion irrespective of the orientation of the acoustic output device with respect to the road-based vehicle and irrespective of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof,
    • in particular wherein controlling the acoustic output function of the acoustic output device comprises causing the acoustic output device to output the second audible portion in an omnidirectional manner.
  • Similarly, in some embodiments of the second aspect, the audible content comprises a first audible portion and a second audible portion, wherein the first audible portion comprises the audible sound, and
    • the control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output the second audible portion irrespective of the orientation of the acoustic output device with respect to the road-based vehicle and irrespective of the location of the acoustic output device with respect to the road-based vehicle or the specific point thereof,
    • in particular wherein the control signal to be generated and output by the processing unit is such that the control signal causes the acoustic output device to output the second audible portion in an omnidirectional manner.
  • For example, the infotainment content of which the audible content forms part may be a film, for example a scene in a busy restaurant, in which many voices can be heard as ambient noise. In addition, the film may focus on a particular person speaking. The present embodiment envisages that the sounds uttered by the particular person speaking (first audible portion) should be treated differently from the ambient noise (second audible portion). For example, the present embodiment envisages that the first audible portion should be perceived by a user as coming from a particular direction/location, in particular from a direction (with respect to a user) corresponding to the location of the particular person on the display device, whereas the ambient noise (second audible portion) should be perceived by a user as coming from a variety of directions - as is typical for ambient noise - in particular in an omnidirectional manner (i.e. with substantially equal intensity from all directions). In other words, for the purpose of outputting the first audible portion, the orientation and/or location of the acoustic output device is taken into account, whereas, for the purpose of outputting the second audible portion, the orientation and/or location of the acoustic output device is not taken into account (or the second audible portion is not output as a function of the orientation/location of the acoustic output device with respect to the road-based vehicle).
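  • The following Python sketch illustrates, under assumptions, how the two portions could be weighted differently across the speakers of headphones 20, 21: the first audible portion is panned towards the direction of person/actor 28 using a simple cosine law, while the second audible portion is fed to all speakers equally; the speaker angles and the gain law are illustrative assumptions, not prescribed by the embodiments:

    import math

    # Assumed angular positions of four headphone speakers relative to the user's
    # head (0 deg = straight ahead, clockwise positive), cf. speakers 24-27.
    SPEAKERS = {"front_right": 45.0, "rear_right": 135.0,
                "rear_left": 225.0, "front_left": 315.0}

    def mix_gains(dialogue_bearing_deg, dialogue_level=1.0, ambience_level=0.3):
        """Per-speaker gains: the first audible portion (e.g. actor 28 speaking) is
        weighted towards speakers close to its perceived direction, the second
        audible portion (ambient noise) is distributed equally to all speakers."""
        gains = {}
        for name, angle in SPEAKERS.items():
            diff = abs((dialogue_bearing_deg - angle + 180.0) % 360.0 - 180.0)
            directional = max(0.0, math.cos(math.radians(diff)))  # simple cosine pan law
            gains[name] = {"dialogue": round(dialogue_level * directional, 2),
                           "ambience": ambience_level}
        return gains

    # Actor towards the front and slightly to the left (about 11 o'clock, 330 deg):
    # front-left is loudest, front-right quieter, rear speakers carry ambience only.
    for speaker, g in mix_gains(330.0).items():
        print(speaker, g)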
  • In some embodiments of the first aspect, the method further comprises detecting sounds generated by an occupant of the road-based vehicle other than a user of the acoustic output device; and
    causing the detected sounds or processed versions thereof to be output as said audible sound, in particular in such a way that a user of the acoustic output device perceives the audible sound as coming from a direction and/or location relative to said user which corresponds to a direction and/or location of said occupant relative to said user, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • Similarly, in some embodiments of the second aspect, the system is arranged to detect sounds generated by an occupant of the road-based vehicle other than a user of the acoustic output device. These sounds may be detected by a detector forming part of the system. Alternatively, information regarding the sounds may be obtained via an interface of the system, in particular an interface of the processing unit. The control signal to be generated and output by the processing unit (taking into account the information regarding the detected sounds) is such that the control signal causes the acoustic output device to output the detected sounds or processed versions thereof as said audible sound, in particular in such a way that a user of the acoustic output device perceives the audible sound as coming from a direction and/or location relative to said user which corresponds to a direction and/or location of said occupant relative to said user, irrespective of the orientation and/or location of the acoustic output device relative to the vehicle or the specific point thereof.
  • This may be particularly useful if the acoustic output device used by the user has a noise cancellation function. For example, spoken words uttered by an occupant other than the user would then normally (i.e. without the method or system of the present invention) not be heard by the user, or at least would be difficult to hear. The present embodiments can ensure firstly that the user can hear the other occupant (via the acoustic output device) and secondly that the user can hear the words uttered by the other occupant as coming from a direction and/or location relative to the user which corresponds to a direction and/or location of the occupant relative to the user.
  • Such a function might be useful for safety reasons or might even be required for safety reasons.
  • Such a "transparent" mode might be active permanently, or might, in a further development, be activated through a particular action/event, for example if the other occupant utters a particular sound, for example a specific word or combination of words, such as "Hi, {name of the user}" (e.g. "Hi, John" if the user of the headphones is called "John"), or "Hi, this is {name of other occupant} speaking" (e.g. "Hi, this is Michael speaking" if the other occupant is called "Michael"), or some other "wake-up" expression. Another example of such a particular event could be the detection of a siren of an emergency vehicle.
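  • Purely as an illustrative sketch of the activation logic (and not of the underlying speech or siren detection, which is assumed to be provided elsewhere, e.g. by a recogniser operating on microphones 2), such a trigger could look as follows in Python; the mode names and example phrases are assumptions:

    WAKE_PHRASES = ("hi, john", "hi, this is michael speaking")  # assumed examples

    def transparency_active(mode, detected_phrase=None, siren_detected=False):
        """Decide whether detected occupant speech should be passed through to the
        noise-cancelling headphones.  'mode' is 'always', 'on_trigger' or 'off'."""
        if mode == "always":
            return True
        if mode == "on_trigger":
            phrase_hit = (detected_phrase is not None
                          and detected_phrase.lower().strip() in WAKE_PHRASES)
            return phrase_hit or siren_detected
        return False

    print(transparency_active("on_trigger", detected_phrase="Hi, John"))        # True
    print(transparency_active("on_trigger", siren_detected=True))               # True
    print(transparency_active("on_trigger", detected_phrase="unrelated words")) # False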
  • In some embodiments of the first aspect, the method further comprises detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user.
  • Similarly, in some embodiments of the second aspect, the system further comprises a detector for detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user. Alternatively, the system may comprise an interface for receiving information regarding a direction and/or location of said occupant with respect to the vehicle or the specific point thereof, in order to be able to derive therefrom the direction and/or location of said occupant relative to said user.
  • In some embodiments of the first aspect, detecting the direction and/or location of said occupant with respect to the vehicle or the specific point thereof comprises detecting the direction and/or location of said occupant with respect to the vehicle or the specific point thereof using one or more microphones and/or cameras, in particular one or more microphones and/or cameras installed in the vehicle.
  • Similarly, in some embodiments of the second aspect, the detector for detecting a direction and/or location of said occupant with respect to the vehicle or the specific point thereof comprises one or more microphones and/or cameras, in particular one or more microphones and/or cameras installed in the vehicle.
  • In a manner known per se, the one or more microphones and/or cameras can detect which occupant or occupants are currently e.g. speaking, or where any sounds generated within the vehicle are coming from.
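  • As a deliberately simplistic, hedged illustration, the Python sketch below estimates where in the cabin a sound originates by weighting assumed microphone positions with the signal energy picked up at each microphone; a practical system would typically rely on more sophisticated techniques (e.g. time-difference-of-arrival processing or the camera-based detection mentioned above):

    # Assumed positions of a few of the microphones 2 in the vehicle frame
    # (x to the right, y forwards, metres).
    MIC_POSITIONS = {"front_left": (-0.6, 1.0), "front_right": (0.6, 1.0),
                     "rear_left": (-0.6, -0.8), "rear_right": (0.6, -0.8)}

    def rough_sound_location(energies):
        """Very rough estimate of where a sound originates: an energy-weighted
        average of the microphone positions.  'energies' maps microphone name to
        a non-negative signal energy for the current time window."""
        total = sum(energies.values())
        if total == 0:
            return None
        x = sum(MIC_POSITIONS[m][0] * e for m, e in energies.items()) / total
        y = sum(MIC_POSITIONS[m][1] * e for m, e in energies.items()) / total
        return (round(x, 2), round(y, 2))

    # Speech from the rear left-hand seat is picked up most strongly at the rear
    # left-hand microphone, so the estimate lands towards the rear left.
    print(rough_sound_location({"front_left": 0.2, "front_right": 0.1,
                                "rear_left": 1.0, "rear_right": 0.3}))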
  • In some embodiments of the first and second aspects, the audible sound is an audible sound arranged
    • to indicate that the vehicle is operational and/or
    • to indicate that the vehicle is moving and/or
    • to resemble the sound of an internal combustion engine.
  • This may be particularly useful in connection with electric vehicles since they tend to be quieter than vehicles with an internal combustion engine, in particular when they are stationary. The audible sound may therefore be a (subtle) sound that is (artificially) generated or added to give a person or user in the vehicle an indication of the condition of the vehicle (operational/moving etc.). In this context, an audible sound arranged to indicate that the vehicle is operational may be generated for example after a user of the vehicle has put the vehicle into an "activated state", e.g. by inserting a key, key card or similar into an appropriate receptacle of the vehicle or has pressed a start button or similar and while the vehicle remains stationary.
  • An audible sound of the type described above may however also be generated and output in connection with a vehicle which is equipped with an internal combustion engine.
  • In particular if the audible sound is an audible sound arranged to resemble the sound of an internal combustion engine, the audible sound may be arranged to be generated in such a way that it is perceived as coming from a central location towards the front of the vehicle, where, in a typical vehicle with a combustion engine, the engine would be located.
  • In some embodiments of the first and second aspects, the acoustic output device comprises a portable device, in particular a personal device or a proprietary device of the vehicle, in particular headphones or a headset, in particular with a surround sound function and/or a noise cancellation function.
  • Whether the acoustic output device is proprietary to the vehicle or not, it (or portions thereof) may be removable (from the vehicle). For example, in the case of a device proprietary to the vehicle, the vehicle may include a holder or cradle or similar for carrying the portable device. Such a cradle may, for example, be provided on the back of a headrest (of a front seat) for use by a person sitting on a seat behind that headrest. However, other locations are also possible. When the portable device is placed in the holder or cradle, it is substantially fixed with respect to the vehicle. However, it can be removed from the holder or cradle (and may or may not still be attached to the vehicle via a cable or similar). It may be proprietary to the vehicle in the sense that it may have proprietary connectors which (typically) only fit a corresponding connector of the holder or cradle, or the communication protocols of the holder or cradle (of the vehicle) need to match those of the portable device, so that a generic device (such as a personal headset or personal headphones) might not be able to communicate with the vehicle via the holder or cradle.
  • The invention encompasses embodiments in which no holder or cradle is provided in connection with a proprietary device. Again, the device may be proprietary to the vehicle in the sense that the communication protocols of the portable device need to match those of the vehicle in order to be able to communicate with the vehicle, whereas a generic device (such as a personal headset or personal headphones) might not be able to communicate with the vehicle.
  • Whether the acoustic output device is proprietary to the vehicle or is a (generic) personal device, it may be connected to the vehicle in a wired or wireless manner, for example via Bluetooth® or similar.
  • Further, while some explanations are being made herein with reference to a two-dimensional space, these explanations apply, mutatis mutandis, also to a three-dimensional space, and vice versa.
  • The terms "first", "second", "third" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
  • Where the term "comprising" or "including" is used in the present description and claims, this does not exclude other elements or steps. Where an indefinite or definite article is used when referring to a singular noun, e.g. "a", "an" or "the", this includes a plural of that noun unless something else is specifically stated.
  • Further, unless expressly stated to the contrary, "or" refers to an inclusive "or" and not to an exclusive "or". For example, a condition "A or B" is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
  • All connections mentioned in the present specification may be wired or wireless unless this is explicitly excluded or technically impossible. The term "wired connection" encompasses not only connections established by, or using, a (metallic) wire, but also optical connections, preferably also any other type of physical connection allowing for the transmission of information.
  • Aspects and embodiments of the invention described or claimed herein can be combined with each other, and embodiments described in connection with one aspect of the invention can be combined with other aspects of the invention, unless such a combination is explicitly excluded or technically impossible.
  • Further advantages, features and applications of the present invention are provided in the following detailed description and the appended figures, wherein:
  • Fig. 1 schematically shows a plan view of a vehicle according to an embodiment of the present invention.
  • Fig. 2 schematically shows a top view of the head of a person wearing headphones, in a first orientation.
  • Fig. 3 schematically shows a top view of the head of the person from Fig. 2, in a second orientation.
  • Fig. 4 schematically shows a top view of the head of a person wearing headphones and watching visual media content.
  • Fig. 5 shows a flow chart illustrating a method according to an embodiment of the present invention.
  • Fig. 1 schematically shows a plan view of a vehicle 15 according to an embodiment of the present invention. The vehicle 15 is shown, by way of example, as a left-hand drive car, with two front seats 11, 12, two rear seats 13, 14 and a steering wheel (not labelled). The front of the vehicle is at the top of Fig. 1.
  • Vehicle 15 is equipped with a processing unit 1. Processing unit 1 may comprise, or form part of, an onboard computer of vehicle 15.
  • Processing unit 1 is connected to one or more detectors 2, such as microphones 2, for detecting sounds within vehicle 15. Microphones 2 are distributed at locations in the vehicle 15 and may be substantially permanently installed in vehicle 15. In the example shown in Fig. 1, there are ten such microphones 2. Only one (wired) connection between processing unit 1 and one of the microphones 2 (right-hand side, towards the front of the vehicle 15) is shown. The other microphones 2 may be connected in like manner. The connection(s) may be wired or wireless. More, or fewer, than ten microphones 2 may be provided. Microphones 2 may all be positioned at the same height within the vehicle 15, or at different heights, in which case it may be possible to determine (more accurately) the location where sounds which are detected by microphones 2 originate from, in particular in a three-dimensional space.
  • Similarly, processing unit 1 is connected to one or more cameras 16, for capturing (part of) the interior of vehicle 15, in particular the seats 11-14 and any occupants thereof. Cameras 16 are distributed at locations in the vehicle 15 and may be substantially permanently installed in vehicle 15. In the example shown in Fig. 1, there are two such cameras 16. Only one (wired) connection between processing unit 1 and one of the cameras 16 (towards the front of the vehicle 15) is shown. The other camera(s) 16 may be connected in like manner. The connection(s) may be wired or wireless. More, or fewer, than two cameras 16 may be provided. Cameras 16 may for example be mounted to the ceiling of vehicle 15. Using cameras 16, the location of a person speaking within vehicle 15 may be able to be determined, for example by processing unit 1 analysing any mouth movements as detected by cameras 16.
  • Processing unit 1 is also connected to one or more acoustic output devices. Fig. 1 illustrates three such acoustic output devices 3, 4, 5. These will now be explained.
  • Common to all of the acoustic output devices 3, 4, 5 is that their orientation is not fixed with respect to vehicle 15. In other words, they are movable (or portable), at least within a certain range.
  • Acoustic output device 3 is a personal portable device, such as headphones or a headset, which may be connected to vehicle 15, in particular to processing unit 1, in a wired or wireless manner, for example via Bluetooth® or similar. Acoustic output device 3 may have a surround sound function.
  • Acoustic output device 4 is proprietary to vehicle 15. It is held by, placed upon, or (removably) attached to a cradle or holder 6, which may be (substantially permanently) installed in vehicle 15, for example attached to the back of seat 11 or the headrest of seat 11, and may form part of a rear seat entertainment (RSE) system. Cradle 6 is connected to processing unit 1 by a wired connection. Acoustic output device 4 is also connected to cradle 6 by a wired connection. Acoustic output device 4 may comprise headphones or a headset and may have a surround sound function. The location of cradle 6 with respect to the vehicle 15 (and therefore potentially also an at least approximate location of acoustic output device 4) may be known to processing unit 1. A possible use of this information will become clear later.
  • Acoustic output device 5 and cradle/holder 7 may substantially correspond to acoustic output device 4 and cradle/holder 6, except that these are not connected by a wired connection to processing unit 1, i.e. there is neither a wired connection between processing unit 1 and cradle 7, nor between cradle 7 and acoustic output device 5. Acoustic output device 5 may communicate with cradle 7, or directly with processing unit 1, in a wireless manner. Cradle 7 may also communicate with processing unit 1 in a wireless manner, or may not communicate with processing unit 1 at all and may simply be a holder for (mechanically) accommodating portable device 5 and/or serve as a charging station for acoustic output device 5.
  • Variations of acoustic output devices 4, 5 and cradles/ holders 6, 7 are also possible, for example such that the acoustic output device 4, 5 is connected to a respective cradle 6, 7 in a wireless manner whilst the respective cradle 6, 7 is connected to processing unit 1 by a wired connection. Alternatively, portable device 4, 5 may be connected to the respective cradle 6, 7 by a wired connection whilst the respective cradle 6, 7 is connected to processing unit 1 in a wireless manner. Further, cradles 6 and/or 7 may be omitted.
  • Processing unit 1 may have one or more wireless interfaces 8 for communicating with any one, some or all of the microphones 2, acoustic output devices 3, 4, 5 and/or cameras 16. Signals may be sent between processing unit 1 and microphones 2, acoustic output devices 3, 4, 5 and/or cameras 16 via their respective connections, if applicable via cradles 6, 7.
  • In addition or as an alternative, processing unit 1 may comprise a wired interface, for example an electrical connector for a cable-based connection with any, some or all of the other devices mentioned above. If these devices are hard-wired to processing unit 1, then the interface may be regarded as a point along the connection between these devices and processing unit 1. A wired connection can also be an indirect wired connection, for example such that cradles 6, 7 are connected to processing unit 1 via a first wired connection (hard-wired or using a removable cable with one or more connectors), and acoustic output devices 4, 5 are connected to their respective cradle 6, 7 via a second wired connection (hard-wired or using a removable cable with one or more connectors, for example plugged into an AUX socket of the respective cradle).
  • In some embodiments, not all of the devices are present or connected, or the system of certain embodiments of the invention may comprise only some of the devices illustrated in Fig. 1. For example, embodiments are possible where the system comprises only the processing unit 1 and one acoustic output device such as one of the acoustic output devices 3-5, or only the processing unit 1 (for controlling one or more acoustic output devices).
  • Fig. 2 schematically shows a top view of the head 22 of a person wearing headphones (or a headset), in a first orientation, and Fig. 3 schematically shows a top view of the head 22 of the person from Fig. 2, in a second orientation. The reference number 22 is respectively placed towards the direction in which head 22 is facing. In other words, in the orientation of Fig. 2, the head 22 is turned towards the top of the figure, whereas in Fig. 3, the head 22 is turned towards the left.
  • Figs. 2 and 3 schematically show headphones worn by the person 22, whereby reference number 20 indicates a right-hand portion of the headphones and reference number 21 indicates a left-hand portion of the headphones. The headphones 20, 21 are shown as examples of the acoustic output devices 3, 4, 5 of Fig. 1. In the embodiment shown in Figs. 2 and 3, the headphones 20, 21 are headphones with surround sound function. The headphones are further equipped with one or more (built-in) devices 23 for determining the orientation/position of the headphones, in particular with respect to a local or global reference. Such devices 23 can, for example, comprise a gyroscope and/or an accelerometer, in particular a 3-axis accelerometer, for example one gyroscope/accelerometer per headset or, as shown in Fig. 3, one gyroscope/accelerometer 23 each for the right-hand portion 20 and the left-hand portion 21 of the headphones. Such devices 23 can, for example, (also) determine the orientation/position of the headphones with respect to a satellite-based signal, such as a GPS signal.
  • Two speakers 24 (front) and 27 (rear) are integrated into right-hand portion 20. Similarly, two speakers 25 (front) and 26 (rear) are integrated into left-hand portion 21. More than two speakers may be provided in each of the right-hand portion 20 and the left-hand portion 21. It may also be possible to provide only one speaker for each of these portions, whereby this single speaker should be such that, depending on how it is activated, a user of the headphones 20, 21 will perceive audible sounds generated and output by this single speaker to come from different directions - so that the right-hand and left-hand portions 20, 21 together can provide a surround sound experience.
  • We now assume that the person (whose head 22 is shown in Figs. 2 and 3) is sitting on the front right-hand seat 12 of vehicle 15. The person 22 (or user of headphones 20, 21) shown in Fig. 2 is initially facing in the direction of travel of vehicle 15, i.e. towards the front of vehicle 15. This is also indicated in Fig. 2 by arrows 17 and 18, whereby arrow 17 indicates the (forward) direction of travel of vehicle 15 and arrow 18 indicates the direction in which the user 22 is facing. Arrow 18 therefore also indicates the orientation of the acoustic output device (headphones 20, 21).
  • We further assume that the user 22 is supposed to perceive an audible sound as coming approximately from a central location at the front of vehicle 15, where, at least in most conventional vehicles with an internal combustion engine, the engine would typically be located. This location is indicated as a square 10 in Figs. 1, 2 and 3. In accordance with this embodiment, the audible sound may be intended to resemble the sound of a typical internal combustion engine. Given the (perceived) location from which the audible sound is intended to originate, the user 22 is likely to be provided with a particularly realistic impression of a sound coming from an internal combustion engine - even if vehicle 15 is an electric vehicle, for example. In some cases, this may reduce the effects of motion sickness.
  • In relation to the head 22 of the person, the location 10 may be considered to correspond to a direction of 11 o'clock (12 o'clock being straight ahead). In order for the system to generate the audible sound in such a way that the person 22 will perceive it as coming from the direction 11 o'clock, the system determines, using gyroscopes and/or accelerometers 23 or similar, the orientation and/or position of the headphones 20, 21 in relation to vehicle 15. On determining that the person is facing straight ahead, the system causes the headphones 20, 21 to output the audible sound in such a way that the person will perceive the audible sound as coming (approximately) from the central location 10 at the front of vehicle 15, i.e. the direction 11 o'clock with respect to the head 22 of the person. In Fig. 2, the sound output of the speakers is indicated by circles, whereby a black filled-in circle indicates an activated speaker (i.e. outputting sound), and a white circle with a black outline indicates a speaker that is not activated (i.e. not outputting sound). In Fig. 2, the speakers 24, 25 and 26 are activated, whereas speaker 27 is not activated. In addition, the size of the circles of the activated speakers indicates the sound volume. As shown in Fig. 2, the sound volume of speaker 25 is greatest, whereas the sound volume of speaker 26 is smallest. The activation of the speakers as shown in Fig. 2 can give the user 22 the impression that the sound is coming from the 11 o'clock direction, as also indicated by arrow 19.
  • Referring now to Fig. 3, the person has turned their head 22 by 90° towards the left. The central location 10 at the front of vehicle 15 now corresponds to 2 o'clock with respect to the head 22 of the person. Again using information (provided by gyroscopes and/or accelerometers 23 or similar) regarding the (changed) orientation/position of the headphones 20, 21, the system can cause the headphones 20, 21 to output the audible sound in such a way that the person will perceive the audible sound as coming (approximately) from the central location 10 at the front of vehicle 15, i.e. the direction 2 o'clock with respect to the head 22 of the person. Accordingly, in Fig. 3, the speakers 24, 25 and 27 are activated, whereas speaker 26 is not activated, whereby the sound volume of speaker 24 is now greatest and the sound volume of speaker 25 is smallest. The activation of the speakers as shown in Fig. 3 can give the user 22 the impression that the sound is now coming from the 2 o'clock direction, as also indicated by arrow 19.
  • As can be appreciated from Figs. 2 and 3, in both cases the user 22 can perceive the sound as coming from the (consistent) direction indicated by arrow 19, which, in both cases, has the same orientation with respect to the vehicle 15. In other words, by taking the orientation of the acoustic output device 20, 21 into account, the system can ensure that the sound is perceived as coming from a consistent direction with respect to vehicle 15, irrespective of the orientation of the acoustic output device.
  • It will be appreciated that the technique illustrated with reference to Figs. 2 and 3 can be used with other locations/directions. In each case, it is preferred that the audible sound is perceived by the person as coming from a consistent direction with respect to vehicle 15, regardless of the orientation/position of headphones 20, 21.
  • It is preferred that the audible sound is perceived by the person not only as coming from a consistent direction with respect to vehicle 15 but also as coming from an appropriate/consistent distance. We consider again the example of the audible sound imitating an engine noise coming from a central location 10 at the front of vehicle 15. Further, in this example, we assume that a first user is sitting on (front) seat 12, and a second user is sitting on (rear) seat 14, behind seat 12. An embodiment of the invention envisages that the first and second users not only perceive the audible sound as coming from slightly different directions, but also that the second user perceives the audible sound as coming from a location at a greater distance than is the case for the first user. Accordingly, assuming the (approximate) location of the acoustic output devices respectively used by the first and second users has been determined or is already known (to processing unit 1), processing unit 1 can take this information into account to generate appropriate control signals respectively for the acoustic output devices used by the first and second users to ensure that the first user will perceive the audible sound as coming from a closer distance than the second user. It is envisaged that in most cases this will mean (inter alia) that the audible sound which the first user will perceive is at a greater volume than the audible sound that the second user will perceive.
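  • A minimal Python sketch of such a distance-dependent scaling is given below; the seat coordinates, the position of location 10 and the simple inverse-distance law are illustrative assumptions only:

    import math

    def distance_gain(listener_xy, source_xy=(0.0, 1.6), reference_m=1.0):
        """Overall gain for one listener's acoustic output device so that a user
        further away from the virtual source location (e.g. location 10 at the
        front of the vehicle) perceives the sound more quietly.  A simple
        1/distance law, clamped at the reference distance, is assumed."""
        d = math.dist(listener_xy, source_xy)
        return min(1.0, reference_m / d)

    front_seat_user = (0.9, 0.0)    # assumed position of seat 12
    rear_seat_user = (0.9, -1.1)    # assumed position of seat 14, behind seat 12

    print(round(distance_gain(front_seat_user), 2))  # closer  -> larger gain
    print(round(distance_gain(rear_seat_user), 2))   # further -> smaller gain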
  • Ensuring that the "correct" portion of headphones 20, 21, in particular the "correct" (subset of) speakers, is/are activated and at an appropriate volume so as to enable the person to perceive the audible sound as coming from a consistent direction/location can be implemented in at least two ways, which will be explained in the following.
  • In the first of these implementations, the gyroscopes and/or accelerometers 23 or similar provide information regarding the orientation (and potentially the position) of the headphones 20, 21 to the processing unit 1. This may be supplemented with information regarding the location of the headphones with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15. Techniques for determining the location of a portable device such as headphones 20, 21 with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15, are known per se. For example, if a portable device such as headphones 20, 21 is connected to processing unit 1 or a cradle 6, 7 or some other reference point of vehicle 15 via Bluetooth® or similar, signal parameters such as the angle of arrival, potentially detected at multiple, spaced apart locations, can be used to determine the relative location of the headphones 20, 21. The processing unit 1 then generates a control signal taking this information into account, i.e. the control signal then includes information as to which portion/speakers of headphones 20, 21 to activate and at what volume (based on the distance between headphones 20, 21 and the consistent location 10). The headphones 20, 21 can then use the received control signal to output the audible sound via the respective portion/speakers of headphones 20, 21. The headphones 20, 21 then do not need to calculate for themselves which portion/speakers of headphones 20, 21 to activate and at what volume (taking into account the direction from which the audible sound is supposed to be perceived as coming, as well as the orientation/position of the headphones 20, 21, and in particular the distance between headphones 20, 21 and the consistent location 10) - since the control signal received from processing unit 1 already contains this information.
  • In the second of these implementations, the gyroscopes and/or accelerometers 23 or similar again provide information regarding the orientation (and potentially the position) of the headphones 20, 21. However, this information is not transmitted to the processing unit 1, so the processing unit generates the control signal without being aware of the orientation/position of headphones 20, 21. The control signal therefore also does not include information as to which portion/speakers of headphones 20, 21 to activate and at what volume; instead, it informs the headphones 20, 21 of the position (with respect to vehicle 15) from which the audible sound is supposed to be perceived as coming. The headphones 20, 21 can then use the received control signal and the information provided by their own gyroscope(s) and/or accelerometer(s) 23 or similar to calculate which portion/speakers of headphones 20, 21 to activate and at what volume, so that the person will perceive the audible sound as coming from the "correct"/consistent direction with respect to vehicle 15 and from the "correct"/consistent distance. Again, as in the first implementation described in the preceding paragraph, information regarding the location of headphones 20, 21 with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15, can be determined (e.g. using signal parameters such as the angle of arrival, potentially detected at multiple, spaced-apart locations, see above) and then used to control the headphones 20, 21. If this location information is determined by the headphones 20, 21 themselves, there is no need to transmit it to the processing unit 1. If, on the other hand, this location information is determined by the processing unit 1, it should be taken into account by the processing unit 1 when generating the control signal.
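  • Correspondingly, a minimal sketch of the on-device part of this second implementation is given below. The control signal is assumed to carry only the vehicle-frame position from which the sound should be perceived; the message format, coordinate convention and function name are illustrative and not defined by the description above. The resulting local bearing and distance could then be mapped onto speaker volumes as in the previous sketch.

```python
import math

def source_in_headphone_frame(control_msg, headphone_xy, headphone_yaw_deg):
    """On-device step of the second implementation: the control signal only carries the
    vehicle-frame position the sound should appear to come from; the headphones combine
    it with their own (locally known) position and IMU-derived yaw.

    control_msg: e.g. {"source_xy": (x, y)} in the vehicle frame, metres (illustrative format).
    """
    sx, sy = control_msg["source_xy"]
    dx, dy = sx - headphone_xy[0], sy - headphone_xy[1]
    bearing_vehicle = math.degrees(math.atan2(dy, dx))     # bearing in the vehicle frame
    bearing_local = bearing_vehicle - headphone_yaw_deg    # bearing in the headphone frame
    distance = math.hypot(dx, dy)
    return bearing_local, distance                         # feed into a per-speaker gain map

# Wearer near the front passenger seat, head turned 30 deg to the left (illustrative values):
print(source_in_headphone_frame({"source_xy": (2.0, 0.0)},
                                headphone_xy=(0.5, -0.5), headphone_yaw_deg=30.0))
```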
  • In a variant, the orientation/position of the headphones 20, 21 is detected by one or more cameras or other sensors, such as cameras 16 shown in Fig. 1, and the corresponding information is then provided to processing unit 1 and/or the headphones 20, 21. Cameras 16 may be substantially permanently installed in vehicle 15. The position of cameras 16 in Fig. 1 is indicative only; other locations are possible.
  • In yet another variant, information regarding the location of the headphones with respect to a reference, such as vehicle 15 or a particular reference location within the vehicle 15, is initially determined on the basis of any of the techniques described above. Information from the gyroscope(s) and/or accelerometer(s) 23 of the headphones 20, 21 is then used to update this location information.
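  • A minimal sketch of this variant, assuming a single-axis (yaw) gyroscope sampled at a fixed rate, is shown below. Drift compensation and the handling of accelerometer data are omitted; the function name and sample values are illustrative only.

```python
def update_orientation(initial_yaw_deg, gyro_rates_dps, dt_s):
    """Dead-reckon the headphone yaw from gyroscope samples after an initial fix.

    initial_yaw_deg: yaw obtained e.g. from the angle-of-arrival or camera-based determination.
    gyro_rates_dps: z-axis angular rates in degrees per second (one per sample).
    dt_s: sample period in seconds.
    A real implementation would also correct drift (e.g. with a complementary filter);
    this only sketches the "initial fix, then IMU update" idea.
    """
    yaw = initial_yaw_deg
    for rate in gyro_rates_dps:
        yaw = (yaw + rate * dt_s) % 360.0
    return yaw

# Wearer slowly turns their head to the left for one second (100 Hz samples, illustrative):
print(update_orientation(0.0, [30.0] * 100, 0.01))   # -> 30.0 degrees
```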
  • A further embodiment will now be described with continued reference to Figs. 1-3 and further with reference to Fig. 4.
  • Fig. 4 again illustrates an acoustic output device (headphones 20, 21) worn by a user 22, whereby the functions of the acoustic output device may correspond to those explained with reference to Figs. 2 and 3 and will therefore not be explained again.
  • However, in the embodiment of Fig. 4, the user 22 is watching audio-visual content such as a film/movie on a display device 9. Display device 9 can for example be a central information display (CID) or rear seat entertainment (RSE) device of vehicle 15.
  • In the example shown in Fig. 4, the display device 9 is located in front of user 22. We now assume, by way of example, that a person/actor 28 located towards the left-hand side of display device 9 is speaking. Accordingly, an audible sound to be output via headphones 20, 21 should be perceived by user 22 as coming from a forward direction and slightly to the left. In Fig. 4a), this is indicated by activated (front) speakers 24 and 25, whereby the volume to be output by (left-hand) speaker 25 is greater than that to be output by (right-hand) speaker 24. Speakers 26 and 27 towards the rear of headphones 20, 21 are not activated. As a result of this pattern of activated and non-activated speakers, user 22 can perceive the sound as coming from a direction corresponding to person/actor 28 (in the film). The activation of speakers 24-27 would be adapted in line with any change in the orientation/location of user 22 (and headphones 20, 21), as explained with reference to Figs. 2 and 3 (this is not explicitly illustrated in Fig. 4). Similarly, if person/actor 28 moved to a different location on the screen of display device 9, the activation of speakers 24-27 would be adapted accordingly.
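  • Purely for illustration, the following sketch maps a horizontal on-screen position of person/actor 28 onto a virtual bearing and a front left/right volume pair, leaving the rear outputs 26, 27 silent. The normalised screen coordinate, the angular spread and the constant-power panning law are illustrative assumptions rather than features of the described embodiment.

```python
import math

def front_panning(actor_screen_x, display_bearing_deg=0.0, spread_deg=20.0):
    """Map an on-screen position to a virtual bearing and a front left/right gain pair.

    actor_screen_x: horizontal position on the display, -1.0 (far left) .. +1.0 (far right).
    display_bearing_deg: bearing of the display in the vehicle frame (0 = straight ahead).
    spread_deg: how far the virtual sound may deviate from the display centre (illustrative).
    Rear outputs (26, 27) stay at zero for this purely frontal source.
    """
    bearing = display_bearing_deg - actor_screen_x * spread_deg   # left of screen -> bearing to the left
    pan = (actor_screen_x + 1.0) / 2.0                            # 0 = hard left, 1 = hard right
    gain_left = math.cos(pan * math.pi / 2.0)                     # constant-power pan law
    gain_right = math.sin(pan * math.pi / 2.0)
    return bearing, {"front_left": gain_left, "front_right": gain_right,
                     "rear_left": 0.0, "rear_right": 0.0}

# Actor 28 standing towards the left-hand side of the screen:
print(front_panning(actor_screen_x=-0.6))
```

As in Fig. 4a), the left-hand front output receives a higher gain than the right-hand front output, while the rear outputs remain inactive.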
  • The activation of speakers 24-27 can be regarded as being driven by the soundtrack of the film associated with the visual content to be displayed on display device 9.
  • However, as a refinement, the soundtrack of the film may have two portions, a first audible portion and a second audible portion (or a first audio portion and a second audio portion). The sound associated with particular persons/actors (such as person/actor 28) or objects etc. might be represented by the first audible portion, whereas other sounds forming part of the media content might be represented by the second audible portion. In particular, it may be desirable to ensure that the first audible portion is perceived as coming from a specific, consistent direction/location and that the second audible portion is perceived as coming from a plurality of directions/locations, in particular substantially uniformly in an omnidirectional manner. For example, as illustrated in Fig. 4, the film shown on display device 9 may include a scene in a busy restaurant, in which (main) actor 28 is speaking, while ambient noise, for example conversations by other restaurant guests 29 (or extras in the film), should also be heard. The activation of speakers 24-27 associated with person 28 speaking has already been explained above with reference to Fig. 4a). The activation of speakers 24-27 associated with the ambient noise from extras 29 is illustrated, by way of example, in Fig. 4b). As can be seen in Fig. 4b), all speakers 24 to 27 are activated at the same sound volume, albeit at a lower sound volume than speaker 25 in Fig. 4a), to provide a realistic impression of ambient noise. When the user 22 moves their head so that the orientation of headphones 20, 21 with respect to vehicle 15 changes, this change of orientation is not taken into account for the purpose of outputting the second audible portion.
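  • One possible way of combining the two portions is sketched below, with the first audible portion compensated for the headphone orientation and the second audible portion applied equally to all outputs regardless of orientation. The speaker angles, the gain law and the level values are illustrative assumptions, not features taken from the description above.

```python
import math

# Illustrative speaker directions in the headphone frame, corresponding roughly to outputs 24-27.
SPEAKERS = {"front_right": -45.0, "front_left": 45.0, "rear_left": 135.0, "rear_right": -135.0}

def mix_portions(headphone_yaw_deg, dialogue_bearing_deg, dialogue_level, ambience_level):
    """Combine a head-tracked first audible portion (e.g. actor 28's speech) with a
    second audible portion (e.g. restaurant ambience) that ignores head orientation.
    """
    gains = {name: ambience_level for name in SPEAKERS}    # same on all outputs, yaw not used
    relative = dialogue_bearing_deg - headphone_yaw_deg     # dialogue is compensated for yaw
    for name, direction in SPEAKERS.items():
        diff = math.radians(relative - direction)
        gains[name] += max(math.cos(diff), 0.0) * dialogue_level
    return gains

# Dialogue from straight ahead of the vehicle, quiet ambience everywhere, head turned 45 deg left:
print(mix_portions(headphone_yaw_deg=45.0, dialogue_bearing_deg=0.0,
                   dialogue_level=0.8, ambience_level=0.2))
```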
  • A further example of an application of embodiments of the present invention, not explicitly illustrated in the drawings, will now be explained. In this example, a user 22 is again travelling in vehicle 15, for example sitting on (front right-hand) passenger seat 12. Another occupant of vehicle 15 is sitting, by way of example, on (rear left-hand) seat 13. User 22 may be wearing headphones (such as those illustrated in, and explained with reference to, Figs. 2 and 3), in particular headphones with a noise cancellation function. User 22 may therefore not normally be able to hear speech uttered by the other occupant. In order for user 22 to be able to hear the other occupant, this embodiment envisages that the system detects the speech uttered by the other occupant using one or more microphones 2. In addition, using one or more microphones 2 and/or one or more cameras 16, the system detects the (approximate) location of the other occupant (unless this is already known). The information regarding the location of the other occupant, as well as the orientation and/or location of headphones 20, 21 of user 22, is then used by the system, in particular the processing unit 1, to generate a control signal ensuring that headphones 20, 21 output audible sound, representing the speech uttered by the other occupant, which user 22 perceives as coming from the direction/location of the other occupant with respect to user 22 (or their headphones 20, 21), in this example from a direction somewhere between 7 o'clock and 8 o'clock, in particular from a direction corresponding to half past 7 (in relation to the forward direction of vehicle 15), in particular irrespective of the orientation of headphones 20, 21.
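  • The geometric part of this example can be sketched as follows, assuming illustrative seat coordinates in a vehicle frame with x pointing forward and y pointing left; the conversion into a clock direction merely reproduces the "between 7 and 8 o'clock" figure mentioned above and is not a required feature of the embodiment.

```python
import math

def occupant_bearing(user_xy, occupant_xy):
    """Bearing of another occupant relative to the user, in the vehicle frame.

    Positions are (x forward, y left) in metres; the values below are illustrative only.
    Returns the bearing in degrees (positive = to the left) and the rough clock direction.
    """
    dx, dy = occupant_xy[0] - user_xy[0], occupant_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    clock = (-bearing) % 360.0 / 30.0              # 12 o'clock = straight ahead, clockwise
    return bearing, clock

# User 22 on the front right-hand seat, the talking occupant on the rear left-hand seat:
bearing, clock = occupant_bearing(user_xy=(0.5, -0.5), occupant_xy=(-0.6, 0.5))
print(round(bearing, 1), round(clock, 1))          # roughly "half past 7", between 7 and 8 o'clock
```

The resulting bearing (relative to the forward direction of vehicle 15) could then be rendered with a per-speaker gain map as in the earlier sketches, compensated for the current headphone orientation.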
  • A method of operation of a system in accordance with the present invention will now be described with continued reference to Figs. 1-4 and further with reference to Fig. 5.
  • Fig. 5 shows a flow chart illustrating a method according to an embodiment of the present invention.
  • After the start (30) of the method, an orientation of an acoustic output device (such as any of acoustic output devices 3-5) with respect to the road-based vehicle 15 is determined (31). An acoustic output function of the acoustic output device is then controlled (32) as a function of the orientation of the acoustic output device 3-5 with respect to the road-based vehicle 15. To this end, the processing unit 1 may send a corresponding control signal or control signals to the acoustic output device(s) 3-5. Once the acoustic output function of the acoustic output device has been controlled in the way described, the method may end (33), or alternatively may be repeated.
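  • A minimal sketch of this control loop is given below. The callables for determining the orientation (step 31) and for sending the control signal (step 32) are placeholders for whatever detector and interface a concrete system provides; their names and the update period are illustrative assumptions.

```python
import time

def run_control_loop(get_orientation, send_control_signal, period_s=0.05, repeat=True):
    """Sketch of the method of Fig. 5: determine the orientation of the acoustic output
    device with respect to the vehicle (step 31), then control its acoustic output as a
    function of that orientation (step 32), optionally repeating before ending (step 33).

    get_orientation / send_control_signal are placeholders, not defined by the description.
    """
    while True:
        yaw_deg = get_orientation()                            # step 31
        send_control_signal({"headphone_yaw_deg": yaw_deg})    # step 32
        if not repeat:                                         # step 33: end, or repeat
            break
        time.sleep(period_s)

# Example with stand-in callables:
run_control_loop(get_orientation=lambda: 12.5,
                 send_control_signal=print,
                 repeat=False)
```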
  • While at least one example embodiment of the present invention has been described above, it has to be noted that a great number of variations thereto exist. Furthermore, it is appreciated that any described example embodiments only illustrate non-limiting examples of how the present invention can be implemented and that it is not intended to limit the scope, the application or the configuration of the apparatuses and methods described herein. Rather, the preceding description will provide the person skilled in the art with instructions for implementing at least one example embodiment of the invention, whereby it has to be understood that various changes of functionality and the arrangement of the elements of the example embodiment can be made without deviating from the subject-matter defined by the appended claims, as well as their legal equivalents.
  • List of Reference Signs
    1   processing unit
    2   detectors / microphones
    3   acoustic output device / (personal) headphones / (personal) headset
    4   wired acoustic output device / (proprietary) headphones
    5   wireless acoustic output device / (proprietary) headphones
    6   wired cradle / holder
    7   wireless cradle / holder
    8   (wireless) interface
    9   display device / CID / RSE
    10  central position at front of vehicle
    11 - 14  seat
    15  vehicle
    16  camera / detector
    17  orientation of vehicle
    18  orientation of acoustic output device
    19  (perceived) direction of audible sound
    20  headphones (right-hand portion)
    21  headphones (left-hand portion)
    22  (head of) user / person
    23  position and/or orientation determination devices / accelerometers / gyroscope sensors / magnetometer sensors
    24  output front right
    25  output front left
    26  output rear left
    27  output rear right
    28  person (in film), (main) actor
    29  extras (in film)
    30 - 33  method steps

Claims (14)

  1. A method of controlling an acoustic output device (3-5) in a road-based vehicle (15), comprising:
    determining an orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15); and
    controlling an acoustic output function of the acoustic output device (3-5) as a function of the orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15).
  2. The method according to claim 1, further comprising:
    determining a location of the acoustic output device (3-5) with respect to the road-based vehicle (15) or a specific point thereof; and
    controlling the acoustic output function of the acoustic output device (3-5) as a function of the location of the acoustic output device (3-5) with respect to the road-based vehicle (15) or the specific point thereof.
  3. The method according to claim 1 or 2, wherein controlling the acoustic output function of the acoustic output device (3-5) as a function of the orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15) and/or controlling the acoustic output function of the acoustic output device (3-5) as a function of the location of the acoustic output device (3-5) with respect to the road-based vehicle (15) or the specific point thereof comprises causing the acoustic output device (3-5) to output audible sound in such a way that a user (22) of the acoustic output device (3-5) perceives the audible sound as coming from a substantially consistent direction (19) and/or location (10) relative to the vehicle (15) or the specific point thereof, irrespective of the orientation (18) and/or location of the acoustic output device (3-5) relative to the vehicle (15) or the specific point thereof.
  4. The method according to claim 3, wherein the audible sound is part of audible content forming part of infotainment content, the infotainment content further comprising visual content associated with the audible content, in particular synchronised with the audible content, the visual content being displayed on a display device (9).
  5. The method according to claim 4, wherein the audible content comprises a first audible portion and a second audible portion, wherein the first audible portion comprises the audible sound, and
    wherein controlling the acoustic output function of the acoustic output device (3-5) comprises causing the acoustic output device (3-5) to output the second audible portion irrespective of the orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15) and irrespective of the location of the acoustic output device (3-5) with respect to the road-based vehicle (15) or the specific point thereof,
    in particular wherein controlling the acoustic output function of the acoustic output device (3-5) comprises causing the acoustic output device (3-5) to output the second audible portion in an omnidirectional manner.
  6. The method according to claim 3, further comprising detecting sounds generated by an occupant of the road-based vehicle (15) other than a user (22) of the acoustic output device (3-5); and
    causing the detected sounds or processed versions thereof to be output as said audible sound, in particular in such a way that a user (22) of the acoustic output device (3-5) perceives the audible sound as coming from a direction and/or location relative to said user (22) which corresponds to a direction and/or location of said occupant relative to said user (22), irrespective of the orientation (18) and/or location of the acoustic output device (3-5) relative to the vehicle (15) or the specific point thereof.
  7. The method according to claim 6, further comprising detecting a direction and/or location of said occupant with respect to the vehicle (15) or the specific point thereof, in order to derive therefrom the direction and/or location of said occupant relative to said user (22).
  8. The method according to claim 7, wherein detecting the direction and/or location of said occupant with respect to the vehicle (15) or the specific point thereof comprises detecting the direction and/or location of said occupant with respect to the vehicle (15) or the specific point thereof using one or more microphones (2) and/or cameras (16), in particular one or more microphones (2) and/or cameras (16) installed in the vehicle (15).
  9. The method according to claim 3, wherein the audible sound is an audible sound arranged
    - to indicate that the vehicle (15) is operational and/or
    - to indicate that the vehicle (15) is moving and/or
    - to resemble the sound of an internal combustion engine.
  10. The method according to any preceding claim, wherein the acoustic output device (3-5) comprises a portable device (3-5), in particular a personal device (3) or a proprietary device (4, 5) of the vehicle (15), in particular headphones (20, 21) or a headset (20, 21), in particular with a surround sound function and/or a noise cancellation function.
  11. A system for controlling an acoustic output device (3-5) in a road-based vehicle (15), the system comprising:
    a processing unit (1) configured to receive orientation information regarding an orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15), wherein the processing unit (1) is further configured to generate and output a control signal for causing the acoustic output device (3-5) to output audible sound, wherein the processing unit (1) is configured to generate and output the control signal as a function of the orientation information.
  12. The system according to claim 11, wherein
    the system further comprises a detector (16, 23) configured to detect said orientation (18) of the acoustic output device (3-5) with respect to the road-based vehicle (15) in order to provide the orientation information, or
    the processing unit (1) comprises an interface (8) for receiving the orientation information.
  13. A road-based vehicle (15) comprising the system according to claim 11 or 12.
  14. A computer program product comprising a program code which is stored on a computer readable medium, for carrying out a method in accordance with any one of claims 1 to 10, or any of its steps or combination of steps.
EP22185123.1A 2022-07-15 2022-07-15 Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle Pending EP4307722A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22185123.1A EP4307722A1 (en) 2022-07-15 2022-07-15 Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22185123.1A EP4307722A1 (en) 2022-07-15 2022-07-15 Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle

Publications (1)

Publication Number Publication Date
EP4307722A1 (en) 2024-01-17

Family

ID=83049969

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22185123.1A Pending EP4307722A1 (en) 2022-07-15 2022-07-15 Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle

Country Status (1)

Country Link
EP (1) EP4307722A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999039546A1 (en) 1998-02-02 1999-08-05 Christopher Glenn Wass Virtual surround sound headphone loudspeaker output system
US20080205662A1 (en) * 2007-02-23 2008-08-28 John Lloyd Matejczyk Vehicle sound (s) enhancing accessory and method
US20130191068A1 (en) * 2012-01-25 2013-07-25 Harman Becker Automotive Systems Gmbh Head tracking system
EP3985482A1 (en) * 2020-10-13 2022-04-20 Koninklijke Philips N.V. Audiovisual rendering apparatus and method of operation therefor
US20220210556A1 (en) * 2020-12-31 2022-06-30 Hyundai Motor Company Driver's vehicle sound perception method during autonomous traveling and autonomous vehicle thereof

Similar Documents

Publication Publication Date Title
EP3424229B1 (en) Systems and methods for spatial audio adjustment
US10032453B2 (en) System for providing occupant-specific acoustic functions in a vehicle of transportation
JP6965783B2 (en) Voice provision method and voice provision system
JP6284331B2 (en) Conversation support device, conversation support method, and conversation support program
CN108058663A (en) Vehicle sounds processing system
CN111016824B (en) Communication support system, communication support method, and storage medium
US10812906B2 (en) System and method for providing a shared audio experience
EP3495942B1 (en) Head-mounted display and control method thereof
CN111007968A (en) Agent device, agent presentation method, and storage medium
JP2019086805A (en) In-vehicle system
JP2015128915A (en) Rear seat occupant monitor system, and rear seat occupant monitor method
EP4307722A1 (en) Road-based vehicle and method and system for controlling an acoustic output device in a road-based vehicle
CN111717083B (en) Vehicle interaction method and vehicle
US20220095045A1 (en) In-car headphone acoustical augmented reality system
WO2019058496A1 (en) Expression recording system
CN113939858A (en) Automatic driving assistance device, automatic driving assistance system, and automatic driving assistance method
CN115431911A (en) Interaction control method and device, electronic equipment, storage medium and vehicle
JP7065353B2 (en) Head-mounted display and its control method
EP4286861A1 (en) A system, a method and a computer program product for providing an indication of an acceleration or a change of orientation of a vehicle and such a vehicle
EP4286860A1 (en) A system, a method and a computer program product for providing an indication of an acceleration of a vehicle for road use and such a vehicle for road use
JP2021150835A (en) Sound data processing device and sound data processing method
JP7443877B2 (en) Audio output control device, audio output system, audio output control method and program
WO2018173112A1 (en) Sound output control device, sound output control system, and sound output control method
JP2019057869A (en) Communication device in vehicle
WO2020090456A1 (en) Signal processing device, signal processing method, and program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR