EP2194734A1 - Method and system for sound spatialisation by dynamic movement of the source - Google Patents

Method and system for sound spatialisation by dynamic movement of the source

Info

Publication number
EP2194734A1
Authority
EP
European Patent Office
Prior art keywords
sound
signal
movement
information
spatialized
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP09174579A
Other languages
German (de)
French (fr)
Inventor
Vincent Clot
Jean-Noël Perbet
Pierre-Albert Breton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thales SA
Original Assignee
Thales SA
Application filed by Thales SA
Publication of EP2194734A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303: Tracking of listener position or orientation
    • H04S7/304: For headphones
    • H04S2400/00: Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11: Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H04S2420/00: Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01: Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • The field of the invention relates to a method of algorithmic signal processing for sound spatialization that improves the localization of the original virtual positions of sound signals. In the following, the terms sound spatialization and 3D sound are used interchangeably.
  • The invention applies in particular to spatialization systems compatible with IMA-type modular avionics information processing equipment (IMA: Integrated Modular Avionics), also called EMTI (Equipement Modulaire de Traitement de l'Information).
  • 3D sound follows the same approach as the helmet-mounted display: it lets pilots acquire spatial situation information in their own frame of reference through a communication channel other than vision, using a natural modality that is less loaded than sight.
  • Combat aircraft typically include threat detection systems, for example detecting a radar lock-on by an enemy aircraft or a collision risk, associated with display systems inside the cockpit. These systems alert the pilot to threats in his environment by presenting a display combined with a sound signal. 3D sound techniques provide an indication of the threat's location through the auditory input channel, which is intuitive and not overloaded. The pilot is thus informed of the threat by means of a sound spatialized in the direction corresponding to the information.
  • A sound spatialization system consists of a computing system performing algorithmic processing on the sound signals.
  • The limited space in aircraft cockpits restricts the integration of audio equipment; consequently, networks of multiple and/or mobile loudspeakers, which would allow spatialization without algorithmic processing, are rarely used for 3D sound reproduction.
  • Patent application WO 2004/006624 A1 is also known; it describes an avionic 3D sound system that uses an HRTF database to improve detection of the position of the sound.
  • The present invention aims to avoid ambiguities in the localization of a sound source.
  • The invention relates to an algorithmic signal processing method for sound spatialization that associates sound signals with information to be localized by a listener.
  • The spatialized sound signals are defined by an original virtual position corresponding to the position of the information; the method is characterized in that, by algorithmic processing, a movement describing a series of virtual positions of the signal around the original virtual position of the information is applied to the spatialized sound signal.
  • The movement around the original virtual position of the information is of the oscillatory type.
  • Alternatively, the movement around the original virtual position of the information is of the random type.
  • This solution for improving the localization of 3D sounds is particularly intended for listeners subject to movement constraints and heavy workloads.
  • Naturally, a listener moves his auditory receivers to localize a sound better.
  • The invention allows the listener to remain motionless. Indeed, since the original virtual position is the spatialized position of the sound, the movement of the sound source around this position provides better localization information than a continuous monodirectional movement would.
  • The spatialization system can also be coupled with a device detecting the position of the pilot's helmet.
  • The movement is then correlated with the deviation angle between the listener's listening direction and the original virtual position of the sound signal.
  • The movement then varies according to the pilot's orientation relative to the information to be detected, which is associated with the sound signal.
  • The amplitude of the movement is correlated with the value of the deviation angle, and the orientation of the movement is likewise correlated with the orientation of the plane of the deviation angle.
  • The pilot thus receives information indicating whether he is turning towards the information to be acquired.
  • This sound spatialization effect simulates a movement of the sound signal converging towards the original virtual position of the information. This dynamic effect improves sound detection.
  • The invention also relates to an algorithmic signal processing system for sound spatialization comprising a first calculation means, for sound spatialization, that associates sound signals with information to be localized by a listener.
  • The system is characterized in that it comprises a second calculation means, for trajectories, providing data that enable the first, spatialization calculation means to apply a movement to a spatialized sound signal around its original virtual position. This movement of the sound signal is preferably oscillatory.
  • The system also comprises a third calculation means implementing at least one law of variation of the intensity of a sound signal, to modify the intensity of the spatialized sound signal during the oscillatory movement.
  • The second, trajectory calculation means computes the distance deviation between the original virtual position of the sound source and the position supplied by a position data receiving means, and computes a movement correlated with that deviation.
  • In a first embodiment, the position data receiving means is connected to a helmet position detector worn by a listener.
  • In a second embodiment, the position data receiving means is connected to a camera detecting the position of the listener. This camera is not worn by the listener.
  • In an aeronautical application, the sound signals come from a sound database of the aircraft, and these sound signals are associated with information from at least one avionic device.
  • A first avionic device is a display device.
  • A second avionic device is a navigation device.
  • A third avionic device is an alert device.
  • The invention relates to sound spatialization systems and to a method for improving the localization of a sound in a listener's environment.
  • Results obtained empirically show that an individual detects the origin of a sound more easily when the sound is moving.
  • The works cited above show better results in localization tests with a continuously moving sound.
  • The essential characteristic of the spatialization method is to impart an oscillatory movement to a sound around its original virtual position.
  • Within an aircraft cockpit, a sound spatialization system lies at the border between the avionics systems and the man-machine interface.
  • Figure 3 shows a spatialization system in an aircraft cockpit, and in particular in a combat aircraft in which the pilot wears a helmet incorporating a device 6 for detecting the position of the helmet.
  • This type of aircraft includes a plurality of avionics systems 7, including navigation-related alert systems 71 for avoiding collisions, and systems dedicated to military operations such as target detection devices 72 and target attack devices. The avionics systems may also include a meteorological device 73. These systems are most often coupled to display devices.
  • Figure 3 does not show all the avionics systems that can be associated with the spatialization system. The person skilled in the art knows avionics architectures and is able to implement a sound spatialization system with any avionics device transmitting information to the pilot.
  • Figure 4 shows a particular situation illustrating the benefit of a spatialized system combined with an anti-collision system.
  • The field of vision 24 of the pilot flying the aircraft is oriented to the left at a given instant.
  • This pilot has a loudspeaker system 21 positioned inside the helmet at ear level.
  • The aircraft cockpit comprises several displays 21-23, and the pilot's field of vision is oriented towards display 23.
  • An event such as a risk of collision with the ground, detected by the anti-collision system, alerts the pilot by showing on display 21 the risk situation, with the navigation data to be monitored and the flight instructions to be applied.
  • The system also issues audible alerts associated with the information on the screen.
  • The spatialized sound associated with the alerts indicates to the pilot the location of the information to be taken into account, and thus reduces his mental workload thanks to the audio stimulus given by the spatialization system.
  • The pilot's reaction time is thus reduced.
  • A sound spatialization system 1 is generally associated with a sound reception device and with a sound database system 81 storing prerecorded sounds, such as synthesized alert messages, signaling sounds, software application sounds, or sounds from communication systems internal and external to the aircraft. For example, the spatialization of audio communications gives additional information about the party with whom the pilot is communicating.
  • The sound reproduction system 5 includes the earphones inside the pilot's helmet and also the cockpit loudspeaker system.
  • For the use of binaural sounds in the spatialization system, the sound reproduction system must be stereophonic, so that time-difference effects can be applied between the loudspeakers.
  • The output of the spatialization module also comprises a signal processing device for adding extra effects to the spatialized signals, such as tremolo or Doppler effects.
  • The sound spatialization calculation means 2 performs the algorithmic processing of the sounds to produce the monaural signals (modifying the sound intensity) and the binaural signals (modifying the phase of the signals to simulate a time shift), and applies the anatomical transfer functions (HRTF).
  • Binaural signals are used for localizing sound sources in azimuth and require a stereophonic reproduction system.
  • The calculation means implement algorithmic processing that simulates the distance of sound sources by modifying the sound level (ILD) and the time offset between the sounds (ITD).
  • Monaural signals are used for localization in elevation and for distinguishing a sound source positioned in front of the listener from one behind. Monaural signals do not require a stereophonic reproduction system.
  • The spatialization system is connected to an HRTF database 82 storing the anatomical transfer functions of the known pilots. These transfer functions can be customized for each pilot by an individual measurement.
  • The database may also include several standard anatomical profiles, to be matched against a pilot on first use of the system in order to select the best-fitting profile. This procedure is faster than an individual measurement.
  • The trajectory calculation means 3 has the function of calculating the trajectory of the oscillatory movement to be imparted to the spatialized sound.
  • The oscillatory movement has a trajectory that can vary in elevation and in azimuth with respect to the listener.
  • The oscillatory trajectory lies within an angular sector whose apex is centered on the listener.
  • The calculation means 3 determines a deviation angle between the orientation of the pilot's gaze, obtained indirectly from the position of the helmet, and the original virtual position of the sound signal.
  • Figure 5 illustrates the application of the invention to the calculation of the trajectory of the oscillatory movement.
  • The left-hand drawing represents the case where the pilot's orientation is such that the direction of his field of vision 42 is strongly decorrelated from the direction 43 that his field of vision would have if it were oriented towards the original position of the sound signal.
  • The deviation angle 31 is calculated by the calculation means 3. This deviation angle can lie in a plane varying in azimuth and in elevation as a function of the orientation 42 of the pilot's field of vision.
  • The calculation means 3 also calculates the trajectory of the oscillatory movement 32 as a function of this deviation angle 31.
  • The trajectory of the oscillatory movement, 32 (or 41 in the right-hand drawing of figure 5), is a function of the deviation angle, 31 (or 36).
  • The coordinates of the original virtual position 33 are defined by an azimuth coordinate a1 and an elevation coordinate e1.
  • The trajectory of the oscillatory movement 32 is bidirectional and continuous, producing a back-and-forth oscillation along an arc of a circle bounded by the angular offsets in azimuth 44 and in elevation 45.
  • The trajectory calculation means can also define a trajectory of oval or other shape.
  • The sweep speed of this trajectory is also configurable; it is preferably greater than the speed of head movement, with a latency of less than 70 ms, to preserve a natural sound. (A sketch of such a trajectory generator is given at the end of this list.)
  • The spatialization method calculates, via the calculation means 2, the trajectory of the oscillatory movement 32 around the virtual position 33.
  • The calculated angles are used by the monaural and binaural signal calculation functions to determine the trajectory 32 around the original virtual position 33.
  • This trajectory 32 comprises a series of several virtual positions delimited by two extreme positions 34 and 35. These extreme positions are located at the coordinates of the original position 33 augmented by the angular offsets in azimuth 44 and in elevation 45.
  • The ITD and ILD signals depend on the azimuth and elevation angles.
  • A sound signal is defined as a vibration perceived by the human ear, described in the form of a sound wave, and representable in the time and frequency domains (spectrum of the wave).
  • The ITD (interaural time difference) signals include a phase change to simulate a different azimuth position through a time shift of the signal between the listener's ears.
  • Figure 6 shows the timing diagram of the sound at each ear for different positions of the sound source.
  • The sound source A21 represents a first position, on the side of the listener's left ear, and the sound source A22 represents a second position, on the side of the listener's right ear.
  • The sound source A21 is closer to the listener than the source A22.
  • The angle values evolving within the angular offset range are used to calculate the various virtual positions composing the trajectory 32 of the sound signal. These values are injected into formula 2.
  • The sounds should ideally have a periodicity of less than 70 ms, to avoid artificial dragging effects when the head is turned. Likewise, the overall sound footprint should ideally last at least 250 ms.
  • The loudness differs between a listener's left ear and right ear.
  • The difference in sound level between the two ears varies with the angle of the source.
  • The difference in sound level between the two ears varies between the ILD curve associated with the original virtual position 33 of the sound signal and the ILD curve associated with the extreme position of the sound signal on the movement trajectory.
  • The sound offset varies according to the angular offsets 44 and 45 (azimuth and elevation).
  • Figure 1 illustrates an example of a loudness law that can be applied to the sounds.
  • The value of the angular offsets 44 and 45 varies depending on the orientation of the pilot.
  • The law governing these offsets is such that the more the listener turns towards the original position of the sound signal, the smaller the angular offsets become, until they cancel out when the listening direction substantially coincides with the direction of the original virtual position.
  • The position detection device 6 transmits the position coordinates to the calculation means 3, and according to these coordinates the angular offsets 44 and 45 decrease or increase.
  • The right-hand diagram of figure 5 corresponds to a situation where the pilot's orientation approaches the original virtual position 33 of the sound signal.
  • The deviation angle 36 is reduced, and the new trajectory 41 calculated by the calculation module 2 therefore has a smaller amplitude, delimited by the two positions 36 and 37.
  • The invention also includes, for each subject, a calculation means 4 for processing the sound signal at the output of the sound spatialization calculation means 2.
  • This module 4 varies the sound intensity of a sound signal according to the position of the sound along the oscillatory movement.
  • A linear sound regulation law is applied to a spatialized sound performing the oscillatory movement, such that the intensity 39 of the sound signal is reduced by a predefined number of dB when the sound signal is at an extreme position of the oscillatory movement (positions 34 and 35 of the oscillatory movement 32 in figure 5), and the intensity 40 of the sound signal is maximal when the sound signal is at the original virtual position 33.
  • A linear regression law can, for example, determine the intermediate positions, whose sound intensity lies between the maximum intensity and the reduced intensity.
  • The intensity of the sound signal is indeed weaker for a signal far from its actual position.
  • This variation in intensity simulates a large distance 51 between the listener and an extreme position of the oscillatory movement, whereas for the original virtual position the simulated distance 52 is small.
  • The variation in intensity thus simulates a spatial convergence of the oscillatory movement towards the original position.
  • The law of variation of the sound intensity can be independent of the angular deviation between the pilot's gaze and the direction of the sound source.
  • The variation can also be random, i.e. not continuous between the original position and the extreme positions.
  • The duration of a sound is greater than 250 ms. Ideally, it should even exceed 5 s, to take full advantage of the associated dynamic cues.
  • Any type of sound reproduction system may be used: a system comprising one loudspeaker, a system comprising several loudspeakers, a system of cartilage-conduction transducers, a wired or wireless earbud system, and so on.
  • The sound spatialization method applies to any type of application that requires localizing a sound. It is particularly intended for applications associating sounds with information to be taken into account by a listener. It applies to aeronautics, for the human-machine interaction of avionics systems with the pilot; to simulator applications immersing an individual (virtual reality systems or flight simulators, for example); and also to the automotive field, for systems that alert the driver to a hazard and indicate the hazard's origin.
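
As announced above, here is a minimal Python sketch of an oscillatory trajectory generator of the kind described in this section: it sweeps back and forth between two extreme positions obtained by adding angular offsets in azimuth and elevation to the original virtual position, and reduces the level towards the extremes. The triangular sweep shape, the names and the default values are illustrative assumptions only; the patent prescribes no specific waveform or implementation.

```python
def oscillation(t, a1, e1, off_az, off_el, period=0.06, dip_db=6.0):
    """One frame of the oscillatory movement around the original
    virtual position (a1, e1), in degrees of azimuth and elevation.

    off_az, off_el -- angular offsets (items 44 and 45) giving the
                      extreme positions (items 34 and 35) of the arc
    period         -- full back-and-forth sweep time in seconds,
                      kept under the ~70 ms suggested above
    dip_db         -- level reduction at the extreme positions
    Returns (azimuth, elevation, gain_db) at time t.
    """
    # Triangular phase in [-1, 1]: +/-1 at the extreme positions,
    # 0 at the original virtual position 33.
    phase = 4.0 * abs((t / period) % 1.0 - 0.5) - 1.0
    az = a1 + phase * off_az
    el = e1 + phase * off_el
    # Linear intensity law: maximum (0 dB) at the origin, reduced
    # by dip_db at the extremes.
    gain_db = -dip_db * abs(phase)
    return az, el, gain_db

# Positions sampled over one 60 ms sweep around (a1, e1) = (30, 10):
for k in range(7):
    print(oscillation(k * 0.01, 30.0, 10.0, off_az=8.0, off_el=4.0))
```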

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

The method involves defining spatialized sound signals by an original virtual position (33) corresponding to the position of the information, and applying to the spatialized sound signal, by algorithmic processing, an oscillatory movement (32) describing a series of virtual positions (33-35) of the signal around that original position. A sound intensity variation law is applied to the spatialized sound signal during its movement, the sound intensity ranging between a maximum level (40) and a minimum level (39). An independent claim is also included for a system for algorithmic processing of signals for sound spatialization.

Description

The field of the invention relates to a method of algorithmic signal processing for sound spatialization that improves the localization of the original virtual positions of sound signals. In the following, the terms sound spatialization and 3D sound are used interchangeably. The invention applies in particular to spatialization systems compatible with IMA-type modular avionics information processing equipment (IMA: Integrated Modular Avionics), also called EMTI (Equipement Modulaire de Traitement de l'Information).

In the field of airborne avionics, and in particular in the military field, most current thinking points to the need for a head-up display, possibly a helmet-mounted display, combined with a very large format head-down display. This combination should improve situation awareness while reducing the pilot's workload, thanks to the presentation of a real-time synthesis of information from multiple sources (sensors, databases).

3D sound follows the same approach as the helmet-mounted display: it lets pilots acquire spatial situation information in their own frame of reference through a communication channel other than vision, using a natural modality that is less loaded than sight.

Typically, in military aeronautical applications, combat aircraft include threat detection systems, for example detecting a radar lock-on by an enemy aircraft or a collision risk, associated with display systems inside the cockpit. These systems alert the pilot to threats in his environment by presenting a display combined with a sound signal. 3D sound techniques provide an indication of the threat's location through the auditory input channel, which is intuitive and not overloaded. The pilot is thus informed of the threat by means of a sound spatialized in the direction corresponding to the information.

For embedded aeronautical applications, a sound spatialization system consists of a computing system performing algorithmic processing on the sound signals. The limited space in aircraft cockpits restricts the integration of audio equipment; consequently, networks of multiple and/or mobile loudspeakers, which would allow spatialization without algorithmic processing, are rarely used for 3D sound reproduction.

Various algorithmic sound spatialization treatments are used today to simulate the positioning of sounds in an individual's environment (an illustrative sketch of the binaural cues follows this list):

  • Binaural signal generation techniques are based on the difference in sound level between an individual's auditory receivers (ILD, Interaural Level Difference) and on the difference in time of reception of the sound signals between those same receivers (ITD, Interaural Time Difference). Figure 1 illustrates a series of curves representing the differences in sound level as a function of frequency for a listener, according to the position of the sound sources. For a sound A1 in front of the listener, curve c1 represents the sound level as a function of frequency; curve c2 corresponds to sound A2 and curve c3 to sound A3.
  • Complementary monaural signal generation techniques vary the spectrum of the sound wave according to its position by applying the individual's anatomical transfer function (HRTF, Head Related Transfer Functions). The anatomical transfer function incorporates secondary scattering effects from the outer ears, the shoulders, the shape of the skull, and so on. Taking the HRTF into account increases sensitivity to the elevation of a sound as well as front-back discrimination. As an example, figure 2 shows a series of HRTF curves for different positions of the sound source: curve c11 represents the HRTF for a sound located at A11, curve c12 for a sound located at A12, and curve c13 for a sound located at A13.

Those skilled in the art are familiar with these sound spatialization techniques, which are not the subject of the invention. For reference, one may cite the books "Adaptive 3D Sound Systems" by John Garas (Kluwer Academic Publishers) and "Signals, Sound, and Sensation" by Bill Hartmann (AIP Press), which describe these techniques.

Patent application WO 2004/006624 A1 is also known; it describes an avionic 3D sound system that uses an HRTF database to improve detection of the position of the sound.

Current sound spatialization systems have limited localization performance and often suffer from ambiguous localization of the sound source. In particular, performance in distinguishing a sound played in front of a listener from one played behind, and likewise in localizing elevation, remains variable from one individual to another and generally insufficient.

Scientific studies have shown the contribution of dynamic signals to localization in elevation and front-back. A dynamic signal is a sound signal whose localization relative to the individual is not constant. This work was inspired by animals known for their hearing abilities, notably felines, which move their auditory receivers to localize sound sources. Studies of dynamic signals in humans include H. Wallach, "The role of head movements and vestibular and visual cues in sound localization", J. Exp. Psychol., Vol. 27, 1940, pp. 339-368; W.R. Thurlow and P.S. Runge, "Effects of induced head movements on localization of direct sound", The Journal of the Acoustical Society of America, Vol. 42, 1967, pp. 480-487 and 489-493; and S. Perrett and W. Noble, "The effect of head rotations on vertical plane sound localization", The Journal of the Acoustical Society of America, Vol. 102, 1997, pp. 2325-2332.

L'article, " Resolution of front-back ambiguity in spatial hearing by listener and source movement", de F.L. Wightman et D.J. Kistler, paru dans The Journal of the Acoustical Society of America, Volume 105, Issue 5, pages. 2841-2853 en mai 1999 , synthétise les travaux menés depuis cinquante ans sur l'apport des signaux dynamiques dans la localisation des sons. Cette étude montre de façon empirique que le déplacement du sujet, mouvement de tête par exemple, diminue la confusion en localisation avant et arrière, que le déplacement multidirectionnel de la source à l'initiative d'un sujet contraint à rester immobile diminue la confusion en localisation avant et arrière et le déplacement continu monodirectionnel de la source par une action extérieure au sujet, non commandable par le sujet, ne diminue pas significativement la confusion en localisation avant-arrière.The article, " Resolution of front-back ambiguity in spatial hearing by listener and source movement, by FL Wightman and DJ Kistler, published in The Journal of the Acoustical Society of America, Volume 105, Issue 5, pages. 2841-2853 in May 1999 , summarizes the work done over the last fifty years on the contribution of dynamic signals in the localization of sounds. This study shows empirically that the movement of the subject, head movement for example, reduces the confusion in forward and backward localization, that the multidirectional displacement of the source at the initiative of a subject forced to remain motionless reduces the confusion. forward and backward localization and the monodirectional continuous movement of the source by an action outside the subject, not controllable by the subject, does not significantly reduce the confusion in front-rear location.

However, in aeronautics, and particularly for pilots, it is not always possible to move the head sufficiently, because of the restricted freedom of movement in cockpits and the electronic systems integrated into the helmet. The pilot's task, which requires full concentration on the systems and on the field of vision, also constrains movement. High load factors under acceleration likewise limit the pilot's movements, in particular those of the head.

The present invention aims to avoid ambiguities in the localization of a sound source.

More precisely, the invention relates to an algorithmic signal processing method for sound spatialization that associates sound signals with information to be localized by a listener. The spatialized sound signals are defined by an original virtual position corresponding to the position of the information; the method is characterized in that, by algorithmic processing, a movement describing a series of virtual positions of the signal around the original virtual position of the information is applied to the spatialized sound signal.

Preferably, the movement around the original virtual position of the information is of the oscillatory type.

In a second calculation mode, the movement around the original virtual position of the information is of the random type.

This solution for improving the localization of 3D sounds is particularly intended for listeners subject to movement constraints and heavy workloads. Naturally, a listener moves his auditory receivers to localize a sound better. The invention allows the listener to remain motionless: since the original virtual position is the spatialized position of the sound, the movement of the sound source around this position provides better localization information than a continuous monodirectional movement would.

For aeronautical applications, the spatialization system can also be coupled with a device detecting the position of the pilot's helmet. Advantageously, the movement is then correlated with the deviation angle between the listener's listening direction and the original virtual position of the sound signal. The movement then varies according to the pilot's orientation relative to the information to be detected, which is associated with the sound signal.

Preferably, the amplitude of the movement is correlated with the value of the deviation angle, and the orientation of the movement is likewise correlated with the orientation of the plane of the deviation angle. The pilot thus receives information indicating whether he is turning towards the information to be acquired.
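
As a sketch of this correlation, the code below computes the deviation angle between the listening direction reported by the head tracker and the direction of the original virtual position, then scales the oscillation amplitude with it. The vector representation and the linear scaling law are assumptions; the patent only states that amplitude and orientation are correlated with the deviation angle.

```python
import math

def unit(v):
    """Normalize a 3-D direction vector."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def deviation_angle(listen_dir, source_dir):
    """Angle in radians between the listening direction and the
    direction of the original virtual position (angle 31 or 36)."""
    u, w = unit(listen_dir), unit(source_dir)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, w))))
    return math.acos(dot)

def oscillation_amplitude(angle_rad, max_offset_deg=10.0):
    """Assumed linear law: the angular offsets shrink as the
    listener turns towards the source, and vanish when the
    listening direction coincides with the source direction."""
    return max_offset_deg * angle_rad / math.pi

# Pilot looking left (-y) while the alert is straight ahead (+x):
a = deviation_angle((0.0, -1.0, 0.0), (1.0, 0.0, 0.0))
print(math.degrees(a), oscillation_amplitude(a))  # 90.0 5.0
```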

Moreover, during the movement of the spatialized signal, a law of variation of the sound intensity is also applied to the spatialized signal (a sketch of such a law follows this list), where:

  • the sound intensity lies between a maximum level and a minimum level;
  • the level is maximum when the sound signal corresponds to the original virtual position;
  • the level is minimum at the extreme positions of the oscillatory movement.

This sound spatialization effect simulates a movement of the sound signal converging towards the original virtual position of the information. This dynamic effect improves sound detection.

The invention also relates to an algorithmic signal processing system for sound spatialization comprising a first calculation means, for sound spatialization, that associates sound signals with information to be localized by a listener. The system is characterized in that it comprises a second calculation means, for trajectories, providing data that enable the first, spatialization calculation means to apply a movement to a spatialized sound signal around its original virtual position. This movement of the sound signal is preferably oscillatory.

The system also comprises a third calculation means implementing at least one law of variation of the intensity of a sound signal, to modify the intensity of the spatialized sound signal during the oscillatory movement.

Preferably, the system also comprises a position data receiving means; the second, trajectory calculation means then computes the distance deviation between the original virtual position of the sound source and the position supplied by the receiving means, and computes a movement correlated with that deviation.

In a first embodiment, the position data receiving means is connected to a helmet position detector worn by a listener.

In a second embodiment, the position data receiving means is connected to a camera detecting the position of the listener. This camera is not worn by the listener.

In an aeronautical application mode, the sound signals come from a sound database of the aircraft, and these sound signals are associated with information from at least one avionic device.

A first avionic device is a display device.

A second avionic device is a navigation device.

A third avionic device is an alert device.

The invention will be better understood, and other advantages will become apparent, on reading the following description, given by way of non-limiting example, with reference to the appended figures, in which:

  • Figure 3 shows the spatialization system for a computer system. The example applies in particular to an avionics system.
  • Figure 4 illustrates an alert situation in an aircraft cockpit and an application of the sound spatialization system.
  • Figure 5 shows an aeronautical application of the spatialization system, in particular the oscillation of a sound signal around an original virtual position. The diagram illustrates the variation of the oscillatory movement as a function of the pilot's position relative to the original virtual position of the sound signal, and the variation of the signal intensity as a function of the virtual position of the sound signal within the oscillatory movement.
  • Figure 6 illustrates the time difference of arrival of sounds at a listener's ears according to the position of the sounds.
  • Figure 7 shows the effect simulated by the variation of the sound intensity on the oscillatory movement.

The invention relates to sound spatialization systems and to a method for improving the localization of a sound in a listener's environment. Results obtained empirically show that an individual detects the origin of a sound more easily when the sound is moving. The works cited above show better results in localization tests with a continuously moving sound. The essential characteristic of the spatialization method is to impart an oscillatory movement to a sound around its original virtual position.

The needs of aeronautics, in particular for man-machine interfaces in the cockpit, strongly favour sound spatialization techniques as a way of improving the interaction of the piloting systems with the crew. The complexity of these systems and the multiple functions for navigation, safety management and manoeuvres overwhelm the pilot with information. This information may come from display systems, warning lights, interaction systems, and also from co-pilots and flight crews in communications. 3D sound techniques provide an indication of the position of a piece of information, so the pilot can better perceive its origin, its priority and the nature of the action to be taken as a result.

Within an aircraft cockpit, a sound spatialization system lies at the border between the avionics systems and the man-machine interface. Figure 3 shows a spatialization system in an aircraft cockpit, and in particular in a combat aircraft in which the pilot wears a helmet incorporating a device 6 for detecting the position of the helmet. This type of aircraft includes a plurality of avionics systems 7, including navigation-related alert systems 71 for avoiding collisions, and systems dedicated to military operations such as target detection devices 72 and target attack devices. The avionics systems may also include a meteorological device 73. These systems are most often coupled to display devices. Figure 3 does not show all the avionics systems that can be associated with the spatialization system. The person skilled in the art knows avionics architectures and is able to implement a sound spatialization system with any avionics device transmitting information to the pilot.

Figure 4 shows a particular situation illustrating the benefit of a spatialized system combined with an anti-collision system. The field of vision 24 of the pilot flying the aircraft is oriented to the left at a given instant. This pilot has a loudspeaker system 21 positioned inside the helmet at ear level. The aircraft cockpit comprises several displays 21-23, and the pilot's field of vision is oriented towards display 23. For example, an event such as a risk of collision with the ground, detected by the anti-collision system, alerts the pilot by showing on display 21 the risk situation, with the navigation data to be monitored and the flight instructions to be applied. The system also issues audible alerts associated with the information on the screen. The spatialized sound associated with the alerts indicates to the pilot the location of the information to be taken into account, and thus reduces his mental workload thanks to the audio stimulus given by the spatialization system. The pilot's reaction time is thus reduced.

A sound spatialization system 1 is generally associated with a sound reception device and with a sound database system 81 storing prerecorded sounds, such as synthesized alert messages, signalling sounds, sounds from software applications, or sounds coming from communication systems internal or external to the aircraft. As an example, the spatialization of audio communications gives additional information about the party with whom the pilot is communicating.

The sound reproduction system 5 comprises the earphones inside the pilot's helmet and also comprises the loudspeaker system of the cockpit. For the use of binaural sounds in the spatialization system, the sound reproduction system must be of the stereophonic type so that the time-difference effects can be applied to the signals between the loudspeakers.

The output of the spatialization module also comprises a signal processing device for adding further effects to the spatialized signals, such as tremolo or Doppler effects for example.

The sound spatialization calculation means 2 performs the algorithmic processing of the sounds to produce the monaural signals (modification of the sound intensity) and the binaural signals (modification of the phase of the signals to simulate a time shift), and implements the anatomical transfer functions (HRTF).

The binaural signals are used for locating sound sources in azimuth and require a stereophonic reproduction system. For the binaural-type signals, the calculation means performs algorithmic processing that simulates the distance of the sound sources by modifying the sound level (ILD) and the time offset between the sounds (ITD).

The monaural signals are used for localization in elevation and for distinguishing a sound source positioned in front of or behind the listener. Monaural signals do not require a stereophonic reproduction system. The spatialization system is connected to an HRTF database 82 storing the anatomical transfer functions of the known pilots. These transfer functions can be tailored to each pilot by an individual measurement. The database may also contain several standard anatomical profiles intended to be correlated with a pilot on first use of the system, in order to select the best-matching profile. This procedure is faster than an individual measurement.

The person skilled in the art is familiar with the various signal processing techniques and algorithms carried out by the calculation means 2 for sound spatialization.

For the implementation of the invention, two functional means 3 and 4 complement the spatialization calculation means. The function of the first means 3 is to calculate the trajectory of the oscillatory movement to be imparted to the spatialized sound. The oscillatory movement follows a trajectory that can vary in elevation and in azimuth with respect to the listener. The oscillatory trajectory lies within an angular range whose apex is centred on the listener. For an aircraft cockpit application comprising a position detector 6 on the pilot's helmet, the calculation means 3 determines a deviation angle between the orientation of the pilot's gaze, obtained indirectly from the position of the helmet, and the original virtual position of the sound signal. Figure 5 shows the application of the invention to the calculation of the trajectory of the oscillatory movement. The left-hand drawing represents the case where the pilot's orientation is such that the direction of his field of vision 42 is strongly decorrelated from the direction 43 his field of vision would have if it were oriented towards the original position of the sound signal. The deviation angle 31 is calculated by the calculation means 3. This deviation angle may lie in a plane varying in azimuth and in elevation depending on the orientation 42 of the pilot's field of vision. The calculation means 3 also derives the trajectory of the oscillatory movement 32 as a function of this deviation angle 31.

The trajectory of the oscillatory movement 32 (or 41 on the right-hand drawing of figure 5) is a function of the deviation angle 31 (or 36). The coordinates of the original virtual position 33 are defined by an azimuth coordinate a1 and an elevation coordinate e1. Preferably, the trajectory of the oscillatory movement 32 is bidirectional and continuous, producing a back-and-forth oscillatory movement along an arc of a circle tied to the angular steps 44 and 45. However, the trajectory calculation means may define a trajectory that is oval or of another shape. The sweep speed along this trajectory is also configurable. It is preferably greater than the speed of movement of the head, with a latency of less than 70 ms so as to preserve the naturalness of the sound.

The law defining the displacement trajectory depends on angular steps which can be defined, by way of non-limiting example, as follows (a code sketch of this law is given after the list):

  • If the angular deviation 31 is greater than 45°, the angular position relative to the pilot's gaze varies by 15°.
  • If the angular deviation 31 is between 45° and 20°, the angular position relative to the pilot's gaze varies by 10°.
  • If the angular deviation 31 is between 20° and 10°, the angular position relative to the pilot's gaze varies by 5°.
  • If the angular deviation 31 is between 10° and 0°, the angular position relative to the pilot's gaze varies by 2°.
  • When the angular deviation 31 is equal to 0°, the sound source no longer moves.
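As a minimal sketch, the step law just listed can be written as a simple lookup function. The function name, the handling of the exact boundary values (which the text leaves open) and the single-value return convention are assumptions of this sketch, since the patent gives the ranges only as a non-limiting example.

```python
def angular_step(deviation_deg: float) -> float:
    """Oscillation amplitude (degrees) applied to the spatialized source
    for a given gaze/source deviation angle 31, per the example law above."""
    deviation_deg = abs(deviation_deg)
    if deviation_deg > 45.0:
        return 15.0
    if deviation_deg > 20.0:   # between 45 deg and 20 deg
        return 10.0
    if deviation_deg > 10.0:   # between 20 deg and 10 deg
        return 5.0
    if deviation_deg > 0.0:    # between 10 deg and 0 deg
        return 2.0
    return 0.0                 # deviation of 0 deg: the source no longer moves
```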

Once the trajectory of the oscillatory movement is determined by the angular steps, the spatialization process calculates, via the calculation means 2, the trajectory of the oscillatory movement 32 around the virtual position 33. The calculated angles are used by the monaural and binaural signal calculation functions to determine the trajectory 32 around the original virtual position 33. This trajectory 32 comprises a series of several virtual positions delimited by two extreme positions 34 and 35. These two extreme positions are located at the coordinates of the original position 33 plus the angular steps in azimuth 44 and in elevation 45. The ITD and ILD signals depend on the azimuth and elevation angles.
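A hedged sketch of this trajectory generation, assuming a symmetric back-and-forth sweep around the origin (the intensity law described further on is maximal at the origin and minimal at both extremes 34 and 35, which suggests the sweep crosses the origin); the sampling density and all names are assumptions of the sketch, not the patent's.

```python
import numpy as np

def oscillation_positions(a1_deg, e1_deg, step_az_deg, step_el_deg, n=16):
    """Virtual positions (azimuth, elevation) along trajectory 32, swept
    back and forth between the two extreme positions around (a1, e1)."""
    phase = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    az = a1_deg + step_az_deg * np.sin(phase)   # extremes at a1 +/- azimuth step 44
    el = e1_deg + step_el_deg * np.sin(phase)   # extremes at e1 +/- elevation step 45
    return list(zip(az, el))
```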

To understand the operation of the spatialization module 3, the sound signals must first be defined. In an aeronautical sound spatialization application, a sound signal is defined as a vibration perceived by the human ear, described in the form of a sound wave that can be represented in the time domain and in the frequency domain (spectrum of the wave). Mathematically, a sound signal is defined by formula (1):

$$S(t) = \sum_i a_i \cos\left(2\pi f_i t + \Phi_i\right) = f(t) \qquad (1)$$

where $a_i$ is the amplitude of the i-th harmonic, $f_i$ its frequency and $\Phi_i$ its phase at the origin.
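Formula (1) can be rendered numerically as a plain sum of harmonics. In the sketch below, the sample rate and the component amplitudes, frequencies and phases are illustrative values, not taken from the patent.

```python
import numpy as np

def sound_signal(t, amplitudes, freqs, phases):
    """Formula (1): sum over i of a_i * cos(2*pi*f_i*t + phi_i)."""
    t = np.asarray(t)
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for a, f, p in zip(amplitudes, freqs, phases))

fs = 48_000                      # assumed sample rate
t = np.arange(0, 0.3, 1 / fs)    # 300 ms, consistent with the 250 ms guideline below
s = sound_signal(t, [1.0, 0.5], [440.0, 880.0], [0.0, np.pi / 4])
```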

Calculation of the interaural time delay (ITD):

The ITD signals involve a phase modification in order to simulate a different azimuth position by means of a time shift of the signal between the ears of a listener. A phase difference $\Delta\Phi$ corresponds to an interaural time delay (ITD) of $\Delta t = \Delta\Phi / (2\pi f)$ for a sound of frequency f.

If the head is treated as a sphere and the waveforms considered are sufficiently long, the interaural time delay is equal to

$$\Delta t = \frac{3a}{c}\,\sin\theta$$

where $\theta$ is the azimuth angle, a the radius of the head (about 8.75 cm) and c the speed of sound (344 m/s). Thus 3a/c is approximately 763 µs.

Figure 6 shows the timing diagram of the sound at each ear for different positions of the sound source. Sound source A21 represents a first position where the source is on the listener's left-ear side, and sound source A22 represents a second position where the source is on the listener's right-ear side. Source A21 is closer to the listener than source A22.

Thus, if $S_1(t) = f(t)$, where f is the function defined in formula (1), then $S_2(t) = f(t + \Delta t)$ (formula (2)), with S1 the signal received at the left ear and S2 the signal received at the right ear.

On the time line of S1(t) are represented both the sound SA21 of the source when it is in position A21 and the sound SA22 when it is in position A22; the same holds for the line S2(t).

The sound SA21 reaches the left ear earlier than the right ear, because the source is positioned on the left-ear side. The sound SA22 reaches the right ear earlier than the left ear, because source A22 is positioned on the right-ear side. On the same time scale, the sound SA21 arrives before the sound SA22, because source A21 is closer to the listener than source A22.
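A minimal sketch of this spherical-head ITD model and of formula (2). Rounding the delay to whole samples, the circular shift, and the sign convention (which ear leads for a positive azimuth) are simplifications of the sketch, not part of the patent text.

```python
import numpy as np

HEAD_RADIUS = 0.0875   # a, metres (about 8.75 cm)
SOUND_SPEED = 344.0    # c, metres per second; 3a/c is then about 763 microseconds

def itd_seconds(azimuth_deg: float) -> float:
    """Spherical-head model used above: delta_t = (3a/c) * sin(theta)."""
    return (3 * HEAD_RADIUS / SOUND_SPEED) * np.sin(np.radians(azimuth_deg))

def binaural_pair(signal: np.ndarray, azimuth_deg: float, fs: int):
    """Formula (2): S1(t) = f(t) at one ear, S2(t) = f(t + delta_t) at the other."""
    delay = int(round(itd_seconds(azimuth_deg) * fs))
    return signal, np.roll(signal, -delay)   # np.roll wraps; real code would zero-pad
```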

For the calculation of the trajectory 32 of the oscillatory movement around the original virtual position 33, in the case where the head is treated as a sphere and the waveforms considered are sufficiently long, the interaural time delay used in the ITD calculation for a given azimuth $\theta$ varies between

$$\Delta t = \frac{3a}{c}\,\sin\theta$$

and

$$\Delta T = \frac{3a}{c}\,\sin(\theta + \text{angular step})$$

where $\theta$ is the azimuth angle a1 of figure 5. The angle values lying within the angular step range are used to calculate the various virtual positions making up the trajectory 32; these values are injected into formula (2). The different angle values within the angular step range thus define the different virtual positions composing the trajectory of the sound signal. The sounds should ideally have a periodicity of less than 70 ms, to avoid artificial drag effects when the head is turned. Likewise, the overall sound footprint should ideally last at least 250 ms.
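The injection of the step-range angles into the ITD model can be sketched as follows; the sampling density n of the range is an assumption of the sketch.

```python
import numpy as np

def itd_seconds(azimuth_deg, a=0.0875, c=344.0):
    """delta_t = (3a/c) * sin(theta), as above."""
    return (3 * a / c) * np.sin(np.radians(azimuth_deg))

def trajectory_itds(theta_deg, step_deg, n=8):
    """ITDs spanning [(3a/c)sin(theta), (3a/c)sin(theta + step)], one per
    virtual position of trajectory 32; each is then used in formula (2)."""
    angles = np.linspace(theta_deg, theta_deg + step_deg, n)
    return itd_seconds(angles)
```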

Calculation of the ILD signals:

The sound intensity differs between the left ear and the right ear of a listener. The difference in sound level varies according to the angle between the two ears.

The level offset between the two ears varies between the ILD curve associated with the original virtual position 33 of the sound signal and the ILD curve associated with the extreme position of the sound signal on the displacement trajectory. The level offset varies as a function of the angular steps 44 and 45 (azimuth and elevation). Figure 1, for example, illustrates a loudness law that can be applied to the sounds.
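The ILD curves themselves (figure 1) are not reproduced in the text, so the sketch below substitutes a purely hypothetical sinusoidal level-difference law, only to show how a per-position ILD value could be turned into left/right gains; the law, the 10 dB ceiling and the gain split are all assumptions.

```python
import numpy as np

def ild_gains(azimuth_deg: float, max_ild_db: float = 10.0):
    """Hypothetical stand-in for an ILD curve: split a level difference
    (in dB, positive towards the right ear) across the two ears."""
    ild_db = max_ild_db * np.sin(np.radians(azimuth_deg))  # placeholder law
    g = 10.0 ** (ild_db / 40.0)   # so that 20*log10(right/left) = ild_db
    return 1.0 / g, g             # (left_gain, right_gain)
```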

The value of the angular steps 44 and 45 varies according to the pilot's orientation. The step regulation law cited above shows that the more the listener turns towards the original position of the sound signal, the smaller the value of the angular steps becomes, until the steps vanish for a direction substantially equal to the direction of the original virtual position. The position detection device 6 transmits the position coordinates to the calculation means 3, and according to these coordinates the angular steps 44 and 45 decrease or increase. The right-hand diagram of figure 5 corresponds to a situation where the pilot's orientation approaches the original virtual position 33 of the sound signal. The deviation angle 36 is reduced, and consequently the new trajectory 41 calculated by the calculation module 2 is of smaller amplitude, delimited by the two positions 36 and 37.

The HRTF values used are still those predetermined for each subject, as described above.

In addition to the ILDs, the invention also comprises, for each subject, a calculation means 4 that processes the sound signal at the output of the sound spatialization calculation means 2. This module 4 varies the sound intensity of a sound signal as a function of the positions of the sound along the oscillatory movement.

Preferably, a linear sound regulation law is applied to a spatialized sound performing the oscillatory movement, such that the intensity 39 of the sound signal is reduced by a predefined number of dB when the sound signal is located at an extreme position of the oscillatory movement (these positions corresponding, on figure 5, to the positions 34 and 35 of the oscillatory movement 32), and the intensity 40 of the sound signal is maximal when the sound signal is located at the original virtual position 33 of the sound signal. A linear regression law can, for example, determine the intermediate positions having an intensity level between the maximum intensity and the reduced intensity.
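A minimal sketch of this linear regulation law, assuming the position along the movement is expressed as a normalised offset (0 at the original virtual position 33, 1 at an extreme position 34 or 35) and taking 6 dB as the predefined attenuation, which the patent leaves unspecified.

```python
def intensity_gain(offset: float, attenuation_db: float = 6.0) -> float:
    """Linear gain: maximal at the origin position (offset 0), reduced by
    attenuation_db at an extreme position (offset 1), linear in between."""
    offset = min(abs(offset), 1.0)
    return 10.0 ** (-attenuation_db * offset / 20.0)
```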

As shown in figure 7, the sound processing module makes it possible:

  • to simulate a moving away from the position of the sound signal when the positions of the signal during the oscillatory movement approach an extreme position of the oscillatory movement;
  • to simulate an approach of the sound signal when the positions of the signal during the oscillatory movement approach the original virtual position of the spatialized sound.

The intensity of the sound signal is indeed lower for a signal far from its actual position. For the listener, this variation of intensity simulates a large distance 51 between the listener and an extreme position of the oscillatory movement, whereas for the original virtual position the simulated distance 52 is small. The modification of the intensity simulates a spatial convergence of the oscillatory movement towards the original position. The law of variation of the sound intensity may be independent of the angular deviation between the pilot's gaze and the direction of the sound source. The variation may also be random, that is to say not continuous between the original position and the extremes.

Preferably, the duration of a sound is greater than 250 ms. Ideally, the duration should even exceed 5 s, in order to take full advantage of the associated dynamic cues.

Any type of sound reproduction system may be used: a system comprising one loudspeaker, a system comprising several loudspeakers, a system of cartilage-conduction transducers, an earplug system with or without wires, etc.

The sound spatialization method applies to any type of application that requires the localization of a sound. It is particularly intended for applications associating a sound with an item of information to be taken into account by a listener. It applies to the field of aeronautics, for the human-machine interaction between avionics systems and the pilot; to simulator applications immersing an individual (for example a virtual reality system or a flight simulator); and also to the automotive field, for systems that must alert the driver to a danger and give an indication of the origin of the danger.

Claims (14)

1. A method of algorithmic signal processing for sound spatialization, making it possible to associate sound signals with items of information to be located by a listener, the spatialized sound signals being defined by an original virtual position (33) corresponding to the position of the information, characterized in that, by algorithmic processing, a movement (32) describing a succession of virtual positions (33-35) of said signal around the original virtual position (33) of the information is applied to a spatialized sound signal, and in that, during the movement of the spatialized signal, a law of variation of the sound intensity is applied to the spatialized signal, the sound intensity lying between a maximum level (40) and a minimum level (39), the level being maximal when the sound signal corresponds to the original virtual position (33) and minimal for the extreme positions (34 and 35) of the movement (32).

2. The method as claimed in claim 1, characterized in that the movement around the original virtual position of the information is of the oscillatory type.

3. The method as claimed in claim 1, characterized in that the movement around the original virtual position of the information is of the random type.

4. The method as claimed in any one of the preceding claims, characterized in that the movement is correlated with the deviation angle (31) between the direction (42) of the listener's gaze and the original virtual position (33) of said sound signal.

5. The method as claimed in claim 4, characterized in that the amplitude of the oscillatory movement is correlated with the value of said deviation angle (31).
6. The method as claimed in claim 5, characterized in that the orientation of the oscillatory movement (32) is correlated with the orientation of the plane of said deviation angle.

7. An algorithmic signal processing system for sound spatialization, comprising a first, sound spatialization calculation means (2) making it possible to associate sound signals with items of information to be located by a listener, the spatialized sound signals being defined by an original virtual position corresponding to the position of an item of information, said system being characterized in that it comprises a second, trajectory calculation means (3) supplying data enabling the first spatialization calculation means (2) to apply a movement to a spatialized sound signal around its original virtual position, and in that it comprises a third calculation means (4) of at least one law of variation of the intensity of a sound signal, for modifying the intensity of the spatialized sound signal during the oscillatory movement.

8. The system as claimed in claim 7, characterized in that it comprises a means for receiving position data, and in that the second, trajectory calculation means (3) calculates the deviation angle between the original virtual position of the sound source and the position supplied by the receiving means, and calculates an oscillatory movement correlated with said distance deviation.

9. The system as claimed in claim 8, characterized in that the position data receiving means is connected to a helmet position detector (6) worn by a listener.

10. The system as claimed in claim 9, characterized in that the position data receiving means is connected to a camera, not worn by the listener, detecting the listener's position.
11. The system as claimed in claim 10, characterized in that the sound signals come from a sound database of an aircraft, and in that said sound signals are associated with information from at least one avionics device (71-73).

12. The system as claimed in claim 11, characterized in that a first avionics device is a display device.

13. The system as claimed in claim 12, characterized in that a second avionics device is a navigation device.

14. The system as claimed in claim 13, characterized in that a third avionics device is an alert device.
EP09174579A 2008-11-07 2009-10-30 Method and system for sound spatialisation by dynamic movement of the source Withdrawn EP2194734A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR0806229A FR2938396A1 (en) 2008-11-07 2008-11-07 METHOD AND SYSTEM FOR SPATIALIZING SOUND BY DYNAMIC SOURCE MOTION

Publications (1)

Publication Number Publication Date
EP2194734A1 true EP2194734A1 (en) 2010-06-09

Family

ID=40763225

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09174579A Withdrawn EP2194734A1 (en) 2008-11-07 2009-10-30 Method and system for sound spatialisation by dynamic movement of the source

Country Status (3)

Country Link
US (1) US20100183159A1 (en)
EP (1) EP2194734A1 (en)
FR (1) FR2938396A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3061150A1 (en) * 2016-12-22 2018-06-29 Thales INTERACTIVE DESIGNATION SYSTEM FOR A VEHICLE, IN PARTICULAR FOR AN AIRCRAFT, COMPRISING A DATA SERVER
FR3137810A1 (en) * 2022-07-06 2024-01-12 Psa Automobiles Sa Control of a sound environment in a vehicle

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130010967A1 (en) * 2011-07-06 2013-01-10 The Monroe Institute Spatial angle modulation binaural sound system
US8183997B1 (en) * 2011-11-14 2012-05-22 Google Inc. Displaying sound indications on a wearable computing system
US9591427B1 (en) * 2016-02-20 2017-03-07 Philip Scott Lyren Capturing audio impulse responses of a person with a smartphone
CN105929367A (en) * 2016-04-28 2016-09-07 乐视控股(北京)有限公司 Handle positioning method, device and system
DE102016115449B4 (en) 2016-08-19 2020-02-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for generating a spatial sound from an audio signal, use of the method and computer program product
EP4085660A4 (en) 2019-12-30 2024-05-22 Comhear Inc. Method for providing a spatialized soundfield
FR3110762B1 (en) 2020-05-20 2022-06-24 Thales Sa Device for customizing an audio signal automatically generated by at least one avionic hardware item of an aircraft

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030059070A1 (en) * 2001-09-26 2003-03-27 Ballas James A. Method and apparatus for producing spatialized audio signals
FR2842064A1 (en) * 2002-07-02 2004-01-09 Thales Sa SYSTEM FOR SPATIALIZING SOUND SOURCES WITH IMPROVED PERFORMANCE
US20060018497A1 (en) * 2004-07-20 2006-01-26 Siemens Audiologische Technik Gmbh Hearing aid system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2696258B1 (en) * 1992-09-25 1994-10-28 Sextant Avionique Device for managing a human-machine interaction system.
FR2696574B1 (en) * 1992-10-06 1994-11-18 Sextant Avionique Method and device for analyzing a message supplied by means of interaction with a human-machine dialogue system.
FR2786308B1 (en) * 1998-11-20 2001-02-09 Sextant Avionique METHOD FOR VOICE RECOGNITION IN A NOISE ACOUSTIC SIGNAL AND SYSTEM USING THE SAME
FR2808917B1 (en) * 2000-05-09 2003-12-12 Thomson Csf METHOD AND DEVICE FOR VOICE RECOGNITION IN FLUATING NOISE LEVEL ENVIRONMENTS

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030059070A1 (en) * 2001-09-26 2003-03-27 Ballas James A. Method and apparatus for producing spatialized audio signals
FR2842064A1 (en) * 2002-07-02 2004-01-09 Thales Sa SYSTEM FOR SPATIALIZING SOUND SOURCES WITH IMPROVED PERFORMANCE
WO2004006624A1 (en) 2002-07-02 2004-01-15 Thales Sound source spatialization system
US20060018497A1 (en) * 2004-07-20 2006-01-26 Siemens Audiologische Technik Gmbh Hearing aid system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DURAND R BEGAULT: "3-D Sound for Virtual Reality and Multimedia", NASA/TM-2000-000000, XX, XX, 1 January 2000 (2000-01-01), pages 1 - 246, XP002199910 *
F.L. WIGHTMAN; D.J. KISTLER: "Resolution of front-back ambiguity in spatial hearing by listener and source movement", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 105, no. 5, May 1999 (1999-05-01), pages 2841 - 2853
H. WALLACH: "The role of head movements and vestibular and visual cues in sound localization", J. EXP. PSYCHOL., vol. 27, 1940, pages 339 - 368
S. PERRETT; W. NOBLE: "The effect of head rotations on vertical plane sound localization", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 102, 1997, pages 2325 - 2332
W.R. THURLOW; P.S. RUNGE: "Effects of induced head movements on localisation of direct sound", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 42, 1967, pages 480 - 487
WIGHTMAN FREDERIC L ET AL: "Resolution of front-back ambiguity in spatial hearing by listener and source movement", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, AIP / ACOUSTICAL SOCIETY OF AMERICA, MELVILLE, NY, US, vol. 105, no. 5, 1 May 1999 (1999-05-01), pages 2841 - 2853, XP012000962, ISSN: 0001-4966 *

Also Published As

Publication number Publication date
US20100183159A1 (en) 2010-07-22
FR2938396A1 (en) 2010-05-14

Similar Documents

Publication Publication Date Title
EP2194734A1 (en) Method and system for sound spatialisation by dynamic movement of the source
EP3622386B1 (en) Spatial audio for three-dimensional data sets
EP0813688B1 (en) Personal direction finding apparatus
US10721581B1 (en) Head-related transfer function (HRTF) personalization based on captured images of user
US11112389B1 (en) Room acoustic characterization using sensors
EP1658755B1 (en) Sound source spatialization system
US11364844B2 (en) Systems and methods for verifying whether vehicle operators are paying attention
FR2874371A1 (en) DISPLAY SYSTEM FOR AN AIRCRAFT
US11638110B1 (en) Determination of composite acoustic parameter value for presentation of audio content
US10798515B2 (en) Compensating for effects of headset on head related transfer functions
US11979803B2 (en) Responding to a signal indicating that an autonomous driving feature has been overridden by alerting plural vehicles
US10795038B2 (en) Information presentation system, moving vehicle, information presentation method, and non-transitory storage medium
US11356788B2 (en) Spatialized audio rendering for head worn audio device in a vehicle
Carlander et al. Uni-and bimodal threat cueing with vibrotactile and 3D audio technologies in a combat vehicle
JPH10230899A (en) Man-machine interface of aerospace aircraft
FR3038101A1 (en) METHOD FOR GUIDING AN INDIVIDUAL AND NAVIGATION SYSTEM
WO2020115896A1 (en) Information processing device and information processing method
FR2999759A1 (en) DISPLAY METHOD AND NAVIGATION ASSISTANCE SYSTEM
FR2855932A1 (en) Sound warning system for avoiding aircraft collision, has electronic unit combining spatial signal and non spatial signal generated by respective electronic units to produce two monophonic channels
EP3607430B1 (en) Method and system for determining periodicity of data
Daniels et al. Improved performance from integrated audio video displays
CN118365832A (en) Voice warning situation awareness augmented reality method and system for pilot
Dunai et al. Perception of the sound source position
KR101490821B1 (en) Method for applying stereophonic sound technique and ground surveillance radar system using the same

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

AX Request for extension of the european patent

Extension state: AL BA RS

17P Request for examination filed

Effective date: 20100630

17Q First examination report despatched

Effective date: 20100722

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20101202