EP4082226A1 - Audio method and system for a seat headrest - Google Patents
Audio method and system for a seat headrest
Info
- Publication number
- EP4082226A1 (application EP20829947A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- user
- ear
- audio
- calibration
- headrest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 23
- 238000012545 processing Methods 0.000 claims abstract description 77
- 210000005069 ears Anatomy 0.000 claims abstract description 35
- 238000003672 processing method Methods 0.000 claims abstract description 14
- 230000009467 reduction Effects 0.000 claims description 18
- 230000000877 morphologic effect Effects 0.000 claims description 16
- 239000003550 marker Substances 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 5
- 238000000605 extraction Methods 0.000 claims description 4
- 230000006870 function Effects 0.000 description 29
- 238000012546 transfer Methods 0.000 description 20
- 230000005236 sound signal Effects 0.000 description 10
- 238000012360 testing method Methods 0.000 description 6
- 238000001914 filtration Methods 0.000 description 3
- 230000006872 improvement Effects 0.000 description 3
- 238000005259 measurement Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000002329 infrared spectrum Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000003321 amplification Effects 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 238000001429 visible spectrum Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1781—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
- G10K11/17821—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions characterised by the analysis of the input signals only
- G10K11/17823—Reference signals, e.g. ambient acoustic environment
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1785—Methods, e.g. algorithms; Devices
- G10K11/17857—Geometric disposition, e.g. placement of microphones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/175—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
- G10K11/178—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
- G10K11/1787—General system configurations
- G10K11/17873—General system configurations using a reference signal without an error signal, e.g. pure feedforward
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K2210/00—Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
- G10K2210/10—Applications
- G10K2210/128—Vehicles
- G10K2210/1282—Automobiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/13—Acoustic transducers and sound field adaptation in vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/02—Spatial or constructional arrangements of loudspeakers
- H04R5/023—Spatial or constructional arrangements of loudspeakers in a chair, pillow
Definitions
- TITLE Seat headrest audio system and method
- The present invention relates to an audio processing method for a seat headrest audio system, and to an associated seat headrest audio system, the audio processing being intended to improve the sound quality of the seat headrest audio system.
- The present invention also relates to a passenger transport vehicle, in particular a motor vehicle, comprising one or more seats, at least one seat being equipped with a headrest and such an audio system.
- The invention lies in the field of audio systems for vehicles, and in particular for passenger transport vehicles.
- Audio systems for vehicles generally include one or more loudspeakers, adapted to emit sound signals, from a source, for example a car radio, in the passenger compartment of the vehicle.
- An important issue is to improve the listening quality of sound signals for vehicle users.
- Audio systems are known which integrate one or more loudspeakers in the seat headrest, for each or for some of the seats of a vehicle, in order to improve the audio experience of a user installed in a seat equipped with such a headrest.
- Noise reduction systems are also known, and in particular active noise reduction or active noise control systems.
- Active noise reduction, or active noise control, consists in applying filtering in an electronic transmission chain connected to the loudspeaker, this filtering having the aim of cancelling a picked-up noise, so that the loudspeaker emits a sound that is denoised at a predetermined position (or area).
- The object of the invention is to provide a system for improving headrest audio systems regardless of the position of the user's head.
- To this end, the invention provides, according to one aspect, an audio processing method for a seat headrest audio system, the audio system comprising at least two loudspeakers positioned on either side of the headrest, and an audio processing module adapted to apply at least one audio processing operation.
- This process comprises steps of:
- The audio processing method for a headrest audio system makes it possible to optimize the audio processing operation according to the calculated positions of the user's ears in a predetermined spatial frame of reference.
- The audio processing method for a headrest audio system according to the invention may also have one or more of the characteristics below, taken independently or in any technically conceivable combination.
- The processing of the acquired images comprises an extraction, from at least one acquired image, of markers associated with morphological characteristics of the user, and a determination, in said three-dimensional spatial frame of reference, of a spatial position associated with each extracted marker.
- The method further comprises generating a three-dimensional model representative of the user's head based on said extracted markers, and determining the spatial position of each of the user's ears in said spatial frame of reference from said three-dimensional model.
- The extracted markers comprise an ear marker relating to only one of the user's ears, among the left ear and the right ear, the method comprising a calculation of the spatial position of the user's other ear as a function of the generated three-dimensional model representative of the user's head.
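- By way of illustration only (the coordinates and function names below are hypothetical, not taken from the patent), once a rigid transform mapping a standard head model into the camera frame has been fitted to the extracted markers, both ear positions, including an occluded one, can be read off the transformed model:

```python
import numpy as np

# Canonical ear positions in the standard head model's own frame
# (hypothetical values, in metres; a real model would supply these).
EAR_RIGHT_MODEL = np.array([0.075, 0.0, 0.0])
EAR_LEFT_MODEL = np.array([-0.075, 0.0, 0.0])

def ear_positions(R, t):
    """Map both canonical ear positions into the camera's 3D frame
    using the rigid transform (R, t) fitted to the extracted markers."""
    return R @ EAR_RIGHT_MODEL + t, R @ EAR_LEFT_MODEL + t

# Example: head rotated 30 degrees about the vertical (Y) axis,
# centred 0.6 m in front of the camera.
theta = np.radians(30.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.0, 0.0, 0.6])
right_ear, left_ear = ear_positions(R, t)
```

Even when one ear marker is missing from the image, the fitted transform still places both canonical ear points in the camera frame.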
- The method further comprises a preliminary step of determining calibration information in connection with said audio processing operation for each point of a plurality of points of a calibration grid, each point of the calibration grid being associated with an index and corresponding to a calibration position for the right ear and a calibration position for the left ear, and a recording of said calibration information in association with the calibration index associated with the calibration point.
- The determination of calibration information as a function of the determined spatial positions of the user's right and left ears comprises a neighborhood search to select the point of the calibration grid corresponding to a calibration position of the right ear and a calibration position of the left ear which are closest, according to a predetermined distance, respectively to the determined spatial position of the user's right ear and to the determined spatial position of the user's left ear.
- The audio processing operation is active noise reduction or active noise control, and/or sound spatialization, and/or sound equalization.
- According to another aspect, the invention relates to a seat headrest audio system comprising at least two loudspeakers positioned on either side of the headrest, and an audio processing module suitable for applying at least one audio processing operation.
- The system comprises an image acquisition device, an image processing device and a device for determining calibration information, and:
- the image acquisition device is configured to acquire images of the head of a user of said audio system,
- the image processing device is configured to process the acquired images to determine, in a predetermined three-dimensional spatial frame of reference, a spatial position of each ear, respectively the right ear and the left ear, of the user,
- the device for determining calibration information is configured, as a function of said determined spatial positions of the user's ears, and on the basis of calibration information previously recorded in connection with an audio processing operation, to determine calibration information making it possible to adapt said audio processing operation to the determined spatial positions of the user's ears.
- The headrest audio system according to the invention is suitable for implementing all of the characteristics of the audio processing method as briefly described above.
- The seat headrest audio system further comprises at least one microphone, preferably at least two microphones.
- According to another aspect, the invention relates to a passenger transport vehicle comprising one or more seats, at least one seat being equipped with a headrest comprising a headrest audio system, said headrest audio system being configured to implement an audio processing method as briefly described above.
- Figure 1 is a schematic example of a headrest audio system according to one embodiment of the invention.
- Figure 2 schematically illustrates a movement of the head of a user in a predetermined frame of reference.
- Figure 3 is a schematic view of an audio system according to the invention.
- Figure 4 is a flowchart of the main steps of an audio processing method in one embodiment of the invention.
- Figure 5 is a flowchart of the main steps of a preliminary phase of calculating calibration information according to one embodiment.
- In Figure 1 is schematically illustrated a vehicle 2 for transporting passengers, for example a motor vehicle.
- The vehicle 2 comprises a passenger compartment 4 in which a plurality of seats, not shown, are placed, and at least one seat comprises a headrest 6, coupled to a seat back, generally intended to support the head of the user seated in the seat.
- For example, the vehicle 2 has several seats whose back is provided with a headrest.
- For example, a motor vehicle 2 has a front row of seats and a row of rear seats, and the two front seats are both fitted with a headrest 6.
- The motor vehicle can also have one or more rows of intermediate seats, located between the front row of seats and the rear row of seats.
- A headrest 6 comprises a central body 8, for example of concave shape, forming a bearing zone 10 for the head 12 of a user 14.
- Optionally, the headrest 6 has two side flaps 16L, 16R, positioned on either side of the central body 8.
- In one embodiment, the side flaps 16L, 16R are fixed.
- Alternatively, the side flaps 16L, 16R are articulated relative to the central body 8, for example movable in rotation relative to an axis.
- The headrest 6 is provided with an audio system 20 for a seat headrest, comprising in particular a number N of loudspeakers 22, integrated in a housing of the headrest.
- The loudspeakers are housed on either side of a central axis A of the body of the headrest, for example in the side flaps 16R, 16L when such side flaps are present.
- In the example shown, the audio system includes two loudspeakers, denoted respectively 22L and 22R.
- The audio system 20 comprises P microphones 24, each microphone being housed in a corresponding housing of the headrest.
- For example, the audio system 20 has two microphones 24R, 24L, placed on either side of the headrest.
- These microphones 24L, 24R are particularly suitable for picking up the pressure level of the sound signals.
- The audio system 20 further comprises an audio processing module 30, connected to the N loudspeakers and to the P microphones via a link 32, preferably a wired link.
- The audio processing module 30 receives sound signals from a source 34, for example a car radio, and implements various audio processing operations on the received sound signal.
- The headrest audio system 20 comprises an audio processing improvement system 36, implementing a determination of the position of the ears of the user installed in the seat, in order to improve the sound reproduction of the seat headrest audio system 20.
- The improvement in sound reproduction is achieved by an audio processing operation calibrated according to the determined positions of the user's ears.
- For example, the audio processing operation is noise reduction, in particular active noise reduction, or sound spatialization, or cross-talk reduction between seats, or sound equalization, or any sound operation allowing the quality of music listening to be improved.
- In some embodiments, multiple audio processing operations are implemented.
- The audio system 20 includes an image acquisition device 38 associated with the headrest 6.
- The image acquisition device 38 is for example an optical camera, suitable for capturing images in a given range of the electromagnetic spectrum, for example in the visible spectrum, the infrared spectrum or the near-infrared spectrum.
- Alternatively, the image acquisition device 38 is a radar device.
- In one embodiment, the image acquisition device 38 is adapted to capture two-dimensional images.
- Alternatively, the image acquisition device 38 is adapted to capture three-dimensional images.
- The image acquisition device 38 is positioned in the passenger compartment 4 of the vehicle, in a spatial position chosen so that the headrest 6 is in the shooting field 40 of the image acquisition device 38.
- For example, the image acquisition device 38 is placed on or integrated into a housing of an element (not shown) of the passenger compartment 4, located in front of the seat on which the headrest 6 is mounted.
- In one embodiment, the image acquisition device 38 is mounted in a fixed position, and its field of view is also fixed.
- Alternatively, the image acquisition device 38 is mounted in a movable position, for example on a movable part of the passenger compartment 4 of the vehicle, such as a seat or the dashboard, whose movement is known to an on-board computer of the vehicle.
- The mounting position of the image acquisition device 38 is chosen so that the movements of the head 12 of the user 14, in translation and/or in rotation relative to the centered position shown in Figure 1, remain in the field of view 40 of the image acquisition device 38.
- Such a displacement of the head 12 of the user 14 is schematically illustrated in Figure 2.
- A rotational and/or translational movement of the head 12 of the user 14 notably modifies the distances between each ear 26R, 26L of the user 14 and the corresponding loudspeaker 22R, 22L, and consequently the transfer functions, represented by the arrows F'1, F'2, F'3 and F'4 in Figure 2, are modified with respect to the transfer functions corresponding to the centered position, represented by the arrows F1, F2, F3 and F4 in Figure 1.
- A three-dimensional (3D) spatial frame of reference with center O and axes (X, Y, Z), orthogonal in pairs, is associated with the image acquisition device 38.
- This 3D frame of reference is, in one embodiment, chosen as the spatial reference frame in the audio processing method implemented by the audio system 20 of the headrest 6.
- The system 36 further comprises an image processing device 42, connected to the image acquisition device 38, for example by a wired link, and configured to determine the position of each ear 26R, 26L of the user in the 3D reference frame, from images acquired by the image acquisition device 38.
- Alternatively, the image processing device 42 is integrated into the image acquisition device 38.
- The image processing device 42 is connected to a device 44 for determining calibration information as a function of the positions of each ear 26R, 26L of the user, intended to adapt an audio processing operation of the headrest audio system 20 in order to optimize the sound reproduction of this audio system.
- For example, the device 44 is a signal processing processor, for example a DSP (for "digital signal processor") integrated into the audio processing module 30.
- Figure 3 is a schematic representation of an audio system for a seat headrest, in which the image processing device 42 and the device 44 for determining calibration information are shown in more detail.
- The image processing device 42 comprises a processor 46, for example a processor of GPU (for "Graphics Processing Unit") type, specialized in image processing, and an electronic memory unit 48, for example an electronic memory of RAM or DRAM type.
- The processor 46 is suitable for implementing, when the image processing device 42 is powered on, an image acquisition module 50, a module 52 for extracting markers representative of morphological characteristics of the head of a user, a module 54 for generating a 3D model representative of the user's head, and a module 56 for calculating the position of each of the user's ears in a 3D reference frame.
- These modules 50, 52, 54 and 56 are for example in the form of software.
- Each of these software modules is suitable for being recorded on a non-volatile, computer-readable medium, such as for example an optical disc or card, a magneto-optical disc or card, a ROM or RAM memory, or any type of non-volatile memory (EPROM, EEPROM, FLASH, NVRAM).
- Alternatively, these modules 50, 52, 54 and 56 are each made in the form of a programmable logic component, such as an FPGA, or an integrated circuit.
- Spatial coordinates in the 3D frame of reference, for each of the user's ears, are transmitted to the device 44 for determining calibration information.
- This device 44 is a programmable device comprising a processor 58 and an electronic memory unit 60, for example an electronic memory of RAM or DRAM type.
- For example, the processor 58 is a DSP, adapted to implement a module 62 configured to determine a calibration index of a calibration grid from the spatial coordinates of the ears received from the image processing device 42, and a module 64 for extracting calibration information 68 associated with the calibration index from a prior recording.
- The modules 62, 64 are for example in the form of software.
- Each of these software modules is suitable for being recorded on a non-volatile, computer-readable medium, such as for example an optical disc or card, a magneto-optical disc or card, a ROM or RAM memory, or any type of non-volatile memory (EPROM, EEPROM, FLASH, NVRAM).
- Alternatively, these modules 62, 64 are each made in the form of a programmable logic component, such as an FPGA, or an integrated circuit.
- The calibration information 68 is supplied to the audio processing module 30.
- The audio processing module 30 implements an audio processing operation using the calibration information 68.
- The audio processing module 30 also implements other known audio filtering, which is not described in detail here, as well as digital-to-analog conversion and amplification, to provide a processed sound signal to the loudspeakers 22L, 22R.
- Figure 4 is a flowchart of the main steps of an audio processing method for a seat headrest audio system according to one embodiment of the invention, implemented in a processing system as described above.
- This method comprises a step 70 of acquiring images by the image acquisition device.
- The images are acquired at a given acquisition frequency (the frame rate), and are stored successively as digital images.
- In a known manner, a digital image is formed of one or more matrices of digital samples, also called pixels, each having an associated value.
- The image acquisition device is adapted to acquire two-dimensional images, so each acquired digital image is formed of one or more two-dimensional (2D) matrices.
- The images acquired in the field of view of the image acquisition device are images in which at least part of a user's head appears when the user is in the seat to which the headrest is attached.
- The morphological characteristics extracted in step 72 include in particular the eyes, mouth, nose, ears and chin, but can also include the upper part of the body, such as the torso and shoulders.
- Each morphological characteristic is, for example, detected in the image by segmentation and represented by a set of pixels of an acquired image.
- A marker is associated with each morphological characteristic, as is a position of the marker in the 3D frame of reference associated with the image acquisition device.
- For example, the method described in the article "Monocular vision measurement system for the position and orientation of remote object", by Tao Zhou et al., published in "International Symposium on Photoelectronic Detection and Imaging", vol. 6623, is used for calculating the spatial positions of the markers in the 3D frame of reference associated with the image acquisition device.
- The method then comprises a generation 74 of a three-dimensional (3D) model representative of the user's head as a function of said extracted markers, by matching the markers of morphological characteristics extracted in step 72 onto a standard 3D model of a human head.
- For example, a previously trained neural network is used in step 74.
- Any other method of matching the markers of morphological characteristics extracted in step 72 onto a standard 3D model of a human head can be used.
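- As one illustration of such a matching step (not the patent's own method), a rigid rotation-plus-translation alignment of the extracted marker positions onto the corresponding points of a standard head model can be computed in closed form with the Kabsch algorithm, assuming the marker correspondences are already established:

```python
import numpy as np

def fit_rigid_transform(model_pts, observed_pts):
    """Kabsch algorithm: find R, t minimising ||R @ model + t - observed||
    over paired 3D points (rows of the two (N, 3) arrays)."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Synthetic check: transform some model points and recover the transform.
rng = np.random.default_rng(0)
model = rng.normal(size=(6, 3))
theta = np.radians(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
observed = model @ R_true.T + t_true
R_est, t_est = fit_rigid_transform(model, observed)
```

With noiseless correspondences the transform is recovered exactly; a deformable-model or neural-network fit, as evoked in step 74, additionally handles per-user head shape variation.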
- Thus, a complete 3D model of the user's head is obtained, which makes it possible in particular to calculate the position of the markers of morphological characteristics not detected in step 72.
- For example, when the user's head is turned in profile, only part of the morphological characteristics can be detected on an acquired 2D image, but it is possible to determine by calculation, from the 3D model, the position of the missing characteristics.
- The position of each of the user's two ears, i.e. the position of the right ear and the position of the left ear in the 3D frame of reference, is calculated in the following step 76 for determining the spatial positions of the ears from the 3D model representative of the user's head.
- The position of the user's two ears is thus obtained in all cases.
- In particular, the position of an ear, right or left, that is not visible on an acquired image is determined by calculation, using the 3D model representative of the user's head.
- Step 76 thus provides the position of each ear, represented by a triplet of spatial coordinates in the 3D frame of reference: (x, y, z)R and (x, y, z)L.
- The method then comprises a step 78 of determining an index associated with a point of a predefined calibration grid, called the calibration index, as a function of the coordinates (x, y, z)R and (x, y, z)L.
- For example, the determination of an index consists in applying a neighborhood search to select the previously recorded points corresponding to calibration positions of the ears, of coordinates (xk, yk, zk)R (right ear calibration position) and (xm, ym, zm)L (left ear calibration position), which are closest to the coordinate points (x, y, z)R (position of the user's right ear, determined in step 76) and (x, y, z)L (position of the user's left ear, determined in step 76).
- The proximity is evaluated by a predetermined distance, for example the Euclidean distance.
- Alternatively, a 2D calibration grid is used, and the operation of determining a calibration index associated with a point of the calibration grid is implemented in a similar manner for the coordinates (x, y)R and (x, y)L of each ear and the coordinates (xk, yk) and (xm, ym) of the calibration positions.
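- The neighborhood search of step 78 can be sketched as a joint nearest-neighbor lookup minimising the sum of the Euclidean distances to the recorded right- and left-ear calibration positions (illustrative code with hypothetical grid values, not the patent's implementation):

```python
import numpy as np

def find_calibration_index(grid_right, grid_left, ear_right, ear_left):
    """Return the index k of the calibration point whose recorded ear
    positions (grid_right[k], grid_left[k]) are jointly closest, in
    Euclidean distance, to the measured ear positions."""
    d = (np.linalg.norm(grid_right - ear_right, axis=1)
         + np.linalg.norm(grid_left - ear_left, axis=1))
    return int(np.argmin(d))

# Tiny illustrative grid of two calibration points (coordinates in metres).
grid_right = np.array([[0.07, 0.0, 0.60], [0.10, 0.05, 0.55]])
grid_left = np.array([[-0.07, 0.0, 0.60], [-0.04, 0.05, 0.55]])
k = find_calibration_index(grid_right, grid_left,
                           np.array([0.071, 0.0, 0.61]),
                           np.array([-0.069, 0.0, 0.59]))
```

Summing the two distances selects a single grid point that is a good compromise for both ears; other combinations (for example the maximum of the two distances) would also fit the claim wording.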
- The calibration index is used in the next extraction step 80 to extract calibration information associated with an audio processing operation, from calibration information previously stored in a memory of the computing device 44.
- Calibration information is understood to mean a set of digital filters determined beforehand by measurements, this set of digital filters making it possible to correct or calibrate the audio processing operation in order to improve, as a function of the distance and/or of the acoustic field between the ear and the loudspeaker, the sound reproduction by the loudspeakers when the two ears are not facing the loudspeakers or are not at an equal distance on each side of the headrest.
- For example, the audio processing operation is active noise reduction, and includes the application of FxLMS (filtered-x least mean squares) adaptive filters.
- The calibration information comprises, in this embodiment, for each of the ears, a primary transfer function and a secondary transfer function.
- The primary transfer function IRpL (resp. IRpR) is the transfer function between the microphone 24L (resp. 24R) and the ear 26L (resp. 26R), when the acoustic field is formed by the sound signal to be processed.
- The secondary transfer function IRsL (resp. IRsR) is the transfer function between the microphone 24L (resp. 24R) and the ear 26L (resp. 26R), when the acoustic field is formed by the sound signal emitted from the corresponding loudspeaker 22L (resp. 22R).
- Optionally, the cross transfer functions, primary and secondary, between the right microphone 24R (respectively the right loudspeaker 22R) and the left ear 26L, and between the left microphone 24L (respectively the left loudspeaker 22L) and the right ear 26R, are also used.
- Thus, the calibration information extracted in step 80 comprises four transfer functions, previously measured and recorded.
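- By way of illustration only (the patent gives no implementation), a single-channel FxLMS update can be sketched as follows; `sec_path` stands for a measured secondary transfer function, the reference signal is assumed to be the noise picked up by a headrest microphone, and the paths used here are hypothetical toy values:

```python
import numpy as np

def fxlms(ref, disturbance, sec_path, n_taps=16, mu=0.01):
    """Single-channel FxLMS: adapt an n_taps FIR control filter w so that
    the anti-noise, after passing through the secondary path, cancels the
    disturbance at the (virtual) ear position. Returns the residual error."""
    w = np.zeros(n_taps)
    # Reference filtered through the (assumed known) secondary path model,
    # used for the gradient estimate: the "filtered-x" signal.
    fref = np.convolve(ref, sec_path)[:len(ref)]
    x_buf = np.zeros(n_taps)
    fx_buf = np.zeros(n_taps)
    y_buf = np.zeros(len(sec_path))
    err = np.zeros(len(ref))
    for n in range(len(ref)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = ref[n]
        y = w @ x_buf                          # anti-noise sample
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e = disturbance[n] + sec_path @ y_buf  # residual at the ear
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fref[n]
        w -= mu * e * fx_buf                   # LMS weight update
        err[n] = e
    return err

# Toy demonstration: a tonal disturbance correlated with the reference.
n = np.arange(4000)
ref = np.sin(2 * np.pi * 0.05 * n)
primary_path = np.array([0.0, 0.0, 0.0, 0.9])  # hypothetical primary path
sec_path = np.array([0.0, 0.8, 0.3])           # hypothetical secondary path
disturbance = np.convolve(ref, primary_path)[:len(ref)]
err = fxlms(ref, disturbance, sec_path)
```

The residual error decays as the control filter converges; in the patent's setting, selecting the pre-measured primary and secondary transfer functions that match the detected ear positions is what keeps this adaptation well conditioned when the head moves.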
- The method ends with the application 82 of the audio processing operation, for example the application of active noise reduction, adapted by using the calibration information obtained in step 80.
- The audio processing operation is thus optimized based on the calculated positions of the user's ears, which improves the quality of the sound perceived by the user.
- The audio processing method according to the invention has been described above in the case where the audio processing operation is active noise reduction.
- Alternatively, the audio processing operation is an improvement in the quality of audio perception, and includes the application of spatialization filters such as binauralization filters. Thanks to the detected positions of the two ears, the spatialization filters are chosen appropriately as a function of these positions.
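- For the spatialization case, one illustrative realisation (hypothetical filter bank and values, not the patent's own filters) is to select, from a pre-measured set of binaural filter pairs indexed by head position, the pair closest to the detected position and convolve the signal with it:

```python
import numpy as np

def binauralize(mono, head_pos, filter_positions, filters_l, filters_r):
    """Pick the filter pair measured closest to the detected head position
    and apply it to produce the left and right loudspeaker signals."""
    k = int(np.argmin(np.linalg.norm(filter_positions - head_pos, axis=1)))
    left = np.convolve(mono, filters_l[k])[:len(mono)]
    right = np.convolve(mono, filters_r[k])[:len(mono)]
    return left, right

# Toy filter bank: two measured head positions, trivial gain/delay filters.
positions = np.array([[0.0, 0.0, 0.6], [0.1, 0.0, 0.6]])
f_l = np.array([[1.0, 0.0], [0.7, 0.2]])
f_r = np.array([[0.8, 0.1], [1.0, 0.0]])
sig = np.array([1.0, 0.0, 0.0, 0.0])
out_l, out_r = binauralize(sig, np.array([0.02, 0.0, 0.61]),
                           positions, f_l, f_r)
```

Real binauralization filters would be measured impulse responses of hundreds of taps; the selection step is the part that depends on the ear positions determined by the method.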
- the preliminary phase includes a step 90 for defining a calibration grid comprising a plurality of points, each point being identified by a calibration index.
- to define the calibration grid, several spatial positions of the seat to which the headrest is attached are considered when the seat is movable in translation along one or more axes, which is for example the case in a motor vehicle.
- the seat is fixed in a given spatial position.
- for each point of the calibration grid, spatial coordinates representative of the calibration position of each ear of the test manikin, in the frame of reference of the image acquisition device, are measured and recorded in step 92, in association with the corresponding calibration index.
- the calibration information, for example the primary and secondary transfer functions for each of the ears in the case of active noise reduction, is calculated in the calibration information calculation step 94.
- the measurement of a transfer function, primary or secondary, in a test environment using a test manikin is well known to those skilled in the art.
- a test sound signal, for example pink noise, is used to calculate these transfer functions for each ear of the test manikin.
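One standard way to estimate such an impulse response from a broadband test signal and the recorded response is a least-squares fit; this is a generic sketch of that technique, not the patent's measurement procedure:

```python
import numpy as np

def estimate_ir(x, y, n_taps=64):
    """Least-squares estimate of an impulse response h such that
    y ~= h * x (convolution), from an excitation x (e.g. pink noise)
    and the measured response y at the ear position."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    N = len(x) - n_taps + 1
    # design matrix: row i holds x[n], x[n-1], ..., x[n-n_taps+1]
    # for time index n = i + n_taps - 1
    X = np.column_stack(
        [x[n_taps - 1 - k : n_taps - 1 - k + N] for k in range(n_taps)]
    )
    h, *_ = np.linalg.lstsq(X, y[n_taps - 1 : n_taps - 1 + N], rcond=None)
    return h
```

On noiseless data the fit recovers the true taps exactly; with measurement noise, longer excitations average it out.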
- the calibration information is recorded in the RAM memory of the processor of the processing module used. Steps 92 to 96 are repeated for all the points of the grid, which makes it possible to obtain a database of calibration information, associated with the calibration grid, for the chosen audio processing operation.
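At runtime, such a database can be queried by finding the calibration grid point closest to the measured ear position; the sketch below shows this lookup under the assumption of a simple Euclidean nearest-neighbour match (the patent does not specify the selection rule):

```python
import numpy as np

def nearest_calibration_index(ear_position, grid_points):
    """Return the calibration index of the grid point closest to the
    measured ear position (coordinates in the camera frame), so the
    matching pre-computed filters can be fetched from the database."""
    grid = np.asarray(grid_points, float)
    d = np.linalg.norm(grid - np.asarray(ear_position, float), axis=1)
    return int(np.argmin(d))
```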
- the invention has been described above for a single seat headrest audio system. It is clear that the invention applies in the same way to several seat headrest audio systems, an image acquisition device being installed for each seat headrest audio system.
- the use of a mapping of the spatial positions of markers representative of the user's morphological characteristics with a three-dimensional model representative of the user's head makes it possible to obtain the spatial coordinates of the user's ears in all cases, including when the ears are not detected on the images acquired by the image acquisition device.
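One common way to realize such a mapping is to fit the rigid transform aligning the head model's marker points to the detected markers (Kabsch/SVD method), then push the model's ear points through that transform; this is an illustrative sketch, as the patent does not name a specific algorithm:

```python
import numpy as np

def ears_from_markers(markers_cam, markers_model, ears_model):
    """Infer ear positions even when the ears are not visible:
    estimate the rotation R and translation t mapping the 3D head
    model's marker points onto the detected markers (camera frame),
    then apply (R, t) to the model's ear points."""
    P = np.asarray(markers_model, float)   # marker coords in the model frame
    Q = np.asarray(markers_cam, float)     # detected markers, camera frame
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # force a proper rotation (determinant +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return (np.asarray(ears_model, float) @ R.T) + t
```

At least three non-collinear markers are needed for the rotation to be well determined.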
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
- Stereophonic System (AREA)
- Chair Legs, Seat Parts, And Backrests (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1915546A FR3105549B1 (en) | 2019-12-24 | 2019-12-24 | Seat headrest audio method and system |
PCT/EP2020/087612 WO2021130217A1 (en) | 2019-12-24 | 2020-12-22 | Audio method and system for a seat headrest |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4082226A1 true EP4082226A1 (en) | 2022-11-02 |
Family
ID=70228188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20829947.9A Pending EP4082226A1 (en) | 2019-12-24 | 2020-12-22 | Audio method and system for a seat headrest |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230027663A1 (en) |
EP (1) | EP4082226A1 (en) |
FR (1) | FR3105549B1 (en) |
WO (1) | WO2021130217A1 (en) |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7243945B2 (en) * | 1992-05-05 | 2007-07-17 | Automotive Technologies International, Inc. | Weight measuring systems and methods for vehicles |
US7738678B2 (en) * | 1995-06-07 | 2010-06-15 | Automotive Technologies International, Inc. | Light modulation techniques for imaging objects in or around a vehicle |
JP5327049B2 (en) * | 2007-12-14 | 2013-10-30 | パナソニック株式会社 | Noise reduction device |
JP5092974B2 (en) * | 2008-07-30 | 2012-12-05 | 富士通株式会社 | Transfer characteristic estimating apparatus, noise suppressing apparatus, transfer characteristic estimating method, and computer program |
US9529431B2 (en) * | 2012-09-06 | 2016-12-27 | Thales Avionics, Inc. | Directional sound systems including eye tracking capabilities and related methods |
US20150382129A1 (en) * | 2014-06-30 | 2015-12-31 | Microsoft Corporation | Driving parametric speakers as a function of tracked user location |
US9544679B2 (en) * | 2014-12-08 | 2017-01-10 | Harman International Industries, Inc. | Adjusting speakers using facial recognition |
US9595251B2 (en) * | 2015-05-08 | 2017-03-14 | Honda Motor Co., Ltd. | Sound placement of comfort zones |
WO2018127901A1 (en) * | 2017-01-05 | 2018-07-12 | Noveto Systems Ltd. | An audio communication system and method |
KR20210068409A (en) * | 2018-10-10 | 2021-06-09 | 소니그룹주식회사 | Information processing devices, information processing methods and information processing programs |
-
2019
- 2019-12-24 FR FR1915546A patent/FR3105549B1/en active Active
-
2020
- 2020-12-22 EP EP20829947.9A patent/EP4082226A1/en active Pending
- 2020-12-22 WO PCT/EP2020/087612 patent/WO2021130217A1/en unknown
- 2020-12-22 US US17/789,076 patent/US20230027663A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20230027663A1 (en) | 2023-01-26 |
FR3105549B1 (en) | 2022-01-07 |
FR3105549A1 (en) | 2021-06-25 |
WO2021130217A1 (en) | 2021-07-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2258119B1 (en) | Method and device for determining transfer functions of the hrtf type | |
US8948414B2 (en) | Providing audible signals to a driver | |
FR2988654A1 (en) | Method for adjustment of seat of car, involves providing set of measurements for user, where set of measurements is given using rangefinder, and measurements are taken when user is in sitting position | |
EP2766872A1 (en) | Method of calibrating a computer-based vision system onboard a craft | |
CN111854620B (en) | Monocular camera-based actual pupil distance measuring method, device and equipment | |
CA3046312A1 (en) | Object recognition system based on an adaptive 3d generic model | |
EP3579191B1 (en) | Dynamic estimation of instantaneous pitch and roll of a video camera embedded in a motor vehicle | |
FR3116934A1 (en) | Audio processing method and system for a seat headrest audio system | |
FR3099610A1 (en) | EMOTION EVALUATION SYSTEM | |
FR3067999B1 (en) | METHOD FOR AIDING THE DRIVING OF A MOTOR VEHICLE | |
EP4082226A1 (en) | Audio method and system for a seat headrest | |
EP2445759A2 (en) | Obstacle detection device comprising a sound reproduction system | |
FR3097711A1 (en) | Autonomous audio system for seat headrest, seat headrest and associated vehicle | |
FR3105949A1 (en) | Device for detecting the position of a headrest of a seat, for example a vehicle seat | |
FR3079652A1 (en) | METHOD FOR EVALUATING A DISTANCE, ASSOCIATED EVALUATION SYSTEM AND SYSTEM FOR MANAGING AN INFLATABLE CUSHION | |
FR3085927A1 (en) | DEVICE, SYSTEM AND METHOD FOR DETECTING DISTRACTION OF A CONDUCTOR | |
WO2018206331A1 (en) | Method for calibrating a device for monitoring a driver in a vehicle | |
FR3113993A1 (en) | Sound spatialization process | |
WO2019086314A1 (en) | Method of processing data for system for aiding the driving of a vehicle and associated system for aiding driving | |
EP4211002A1 (en) | Method and system for locating a speaker in a reference frame linked to a vehicle | |
WO2023280745A1 (en) | Method for labelling an epipolar-projected 3d image | |
FR3111464A1 (en) | Method of calibrating a camera and associated device | |
FR3146523A1 (en) | Method of estimating the speed of a vehicle | |
FR3088465A1 (en) | ESTIMATING A DISPARITY MAP FROM A MONOSCOPIC IMAGE BY DEEP LEARNING | |
FR2899341A1 (en) | DEVICE FOR ACOUSTIC LOCALIZATION AND MEASUREMENT OF THEIR INTENSITY |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed |
Effective date: 20220623 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |