WO2018087570A1 - Improved communication device - Google Patents

Improved communication device

Info

Publication number
WO2018087570A1
WO2018087570A1 (PCT/GB2017/053407)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
communication device
user
ear
module
Prior art date
Application number
PCT/GB2017/053407
Other languages
French (fr)
Inventor
David Greenberg
Clive Taylor
Forrest RADFORD
Original Assignee
Eartex Limited
Priority date
Filing date
Publication date
Priority claimed from GB1619160.3A external-priority patent/GB2555842A/en
Priority claimed from GB1619163.7A external-priority patent/GB2555843A/en
Priority claimed from GB1619162.9A external-priority patent/GB2556045A/en
Application filed by Eartex Limited filed Critical Eartex Limited
Publication of WO2018087570A1 publication Critical patent/WO2018087570A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083 Reduction of ambient noise
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/12 Audiometering
    • A61B5/121 Audiometering evaluating hearing capacity
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F11/00 Methods or devices for treatment of the ears or hearing sense; Non-electric hearing aids; Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than hearing sense; Protective devices for the ears, carried on the body or in the hand
    • A61F11/06 Protective devices for the ears
    • A61F11/14 Protective devices for the ears external, e.g. earcaps or earmuffs
    • A61F11/145 Protective devices for the ears external, e.g. earcaps or earmuffs electric, e.g. for active noise reduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00 Measuring characteristics of vibrations by using a detector in a fluid
    • G01H3/10 Amplitude; Power
    • G01H3/12 Amplitude; Power by electric means
    • G01H3/125 Amplitude; Power by electric means for representing acoustic field distribution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/764 Media network packet handling at the destination
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/26 Devices for calling a subscriber
    • H04M1/27 Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271 Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72457 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/74 Details of telephonic subscriber devices with voice recognition means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10 Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
    • H04R2201/107 Monophonic and stereophonic headphones with microphone for two-way hands free communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41 Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/07 Use of position data from wide-area or local-area positioning systems in hearing devices, e.g. program or information selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils

Definitions

  • the present application relates to a communication device, a communication system comprising a plurality of communication devices and a method of operation.
  • Ear defenders may be used to protect persons from sound and noise exposure in a noisy environment by blocking sound energy from reaching their ears.
  • ear defenders generally block all sound indiscriminately. Accordingly, the use of ear defenders can make it difficult to communicate with persons using them. This can lead to reduced efficiency in the workplace, and potentially to reduced safety, because it may be difficult for persons using ear defenders to hear warnings or requests for assistance.
  • the present disclosure provides a communication device comprising: a communication means providing audio communication over a peer-to-peer network; an ear defender for reducing noise exposure; a noise measuring means to determine noise level; and a positioning means arranged to determine a position, the device associating the determined position with the corresponding noise level.
  • the present disclosure provides a communication device comprising: a peer-to-peer communication interface arranged to establish a connection between the communication device and at least one other communication device via a peer-to-peer network; an audio input device for receiving audio from a user; a communication module arranged to transmit to one of the at least one other communication devices, via the peer-to- peer communication interface, audio data based on audio received from the user via the audio input device; and arranged to receive audio data from the one of the at least one other communication devices, via the peer-to-peer networking interface; an audio output device for outputting audio based on the received audio data; at least one ear defender for reducing noise exposure; an audio input device for receiving environmental audio; a noise level module arranged to determine a noise level based on the received environmental audio; and a positioning module arranged to determine a position of the audio input device corresponding with the determined noise level; and arranged to associate the position of the audio input device with the corresponding noise level.
  • the present disclosure provides a combined noise dosimeter and communication device comprising: an audio input device for receiving audio; and a noise level module arranged to determine a noise level based on the received audio; wherein the audio input device is associated with a positioning module arranged to: determine a position of the audio input device corresponding with the determined noise level; and arranged to associate the position of the audio input device with the corresponding noise level; an audio input device for receiving audio from a user; an audio output device for outputting audio to the user; a communication interface for transmitting and receiving over a network; and a head-mount or ear-mount comprising ear defenders for reducing noise level exposure of a user.
  • the present disclosure provides a communication system comprising a plurality of communication devices according to the first aspect or the second aspect connected to one another via a peer-to-peer network.
  • the present disclosure provides a method of monitoring noise exposure using a communication device according to the first aspect or the second aspect, the method comprising: receiving audio at the audio input device for receiving environmental audio; determining a noise level based on the received audio; determining a position of the audio input device corresponding with the determined noise level; and associating the position of the audio input device with the corresponding noise level.
  • the present disclosure provides a computer program comprising code portions which, when executed on a processor of a computer, cause the computer to carry out a method according to the fourth aspect.
  • a communication device comprising: a peer-to-peer communication interface arranged to establish a connection between the communication device and at least one other communication device via a peer- to-peer network; an audio input device for receiving audio from a user; a communication module arranged to transmit to one of the at least one other communication devices, via the peer-to-peer communication interface, audio data based on audio received from the user via the audio input device; and arranged to receive audio data from the one of the at least one other communication devices, via the peer-to-peer networking interface; an audio output device for outputting audio based on the received audio data; at least one ear defender for reducing noise exposure; an input device for receiving an environmental parameter other than audio; a level module arranged to determine an environmental parameter level based on the received environmental
  • the methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium.
  • tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc. and do not include propagated signals.
  • the software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
  • Figure 1 is a schematic diagram showing the general architecture of a communication system according to a first embodiment
  • Figure 2 is a schematic diagram showing the general architecture of a communication device useable in the communication system of figure 1;
  • Figure 3 shows a flow chart illustrating a method of operation of the system
  • Figure 4 illustrates a look up table that can be used to determine a user's allowable exposure to noise in percentage terms
  • Figure 5 shows a flow chart illustrating a method of operation of the system
  • Figure 6 shows an example of a map display generated by the system
  • Figure 7 shows a flow chart illustrating a method of activating different modes at the communication device
  • Figure 8 shows a flow chart illustrating a method of using the communication device in a 'connection enabled' mode
  • Figure 9 shows a flow chart illustrating a method of using the communication device in a 'voice recognition' mode
  • Figure 10 is a schematic diagram showing the general architecture of a
  • Figure 11 is a schematic diagram showing the general architecture of a computing device useable in the communication system of figure 10;
  • Figure 12 is a schematic diagram showing the general architecture of a
  • Figure 13 shows a flow chart illustrating a method of adjusting an audio output based on an ear characteristic of a user's ear
  • Figure 14 shows a flow chart illustrating an example of a method of determining an ear characteristic
  • Figure 15 shows a flow chart illustrating another example of a method of determining an ear characteristic
  • Figure 16 shows a flow chart illustrating another example of a method of determining an ear characteristic
  • Figure 17A illustrates a graph of a user's ear response to distortion product otoacoustic emissions (DPOAEs)
  • Figure 17B illustrates a graph of a real ear aided response (REAR) and a real ear unaided response (REUR) for a device matched to a user's ear;
  • Figure 18 illustrates an example of a display in an audio output mode of the communication system of the third embodiment.
  • Figures 19A and 19B illustrate examples of a user's hearing profile.
  • Figure 1 illustrates a communication system 1 according to a first embodiment, comprising a plurality of communication devices 2 and a server 6. Each of the communication devices 2 is worn by a different user 3. Each communication device 2 comprises a pair of ear defenders 4. The ear defenders 4 are sound reducing, and so reduce the noise level exposure of the respective user's ears, and so protect the ears and hearing of the respective user 3 from damage by excessive sound and noise exposure.
  • Figure 2 illustrates a single communication device 2 in more detail.
  • the communication system 1 is arranged to provide communications between the different users 3 using their respective communication devices 2, and also to gather noise data and operate as a noise dosimeter. Accordingly, the communication system 1 is able to provide a combined communications system and noise dosimeter.
  • the communication device 2 comprises a pair of ear defenders 4 physically connected by a linking arm 5 to form a headset mounted on and supported by the head of a user 3, and covering and protecting both ears of the respective user 3.
  • Each communication device 2 can be connected to another communication device 2, and to the server 6, using a respective network interface or communication interface 10 at each communication device 2.
  • the server 6 also has a communication interface 63.
  • the communication devices 2 may be connected to one another, and to the server 6, directly or indirectly via another communication device 2.
  • the communication interfaces 10 and 63 are arranged to support peer-to-peer networking.
  • a MESH network is one type of peer-to-peer network that may be used to connect the plurality of communication devices to one another, and to the server 6.
  • a wireless MESH network (IEEE 802.15.4) is an ad-hoc network formed by devices which are in range of one another. It is a peer-to-peer cooperative communication infrastructure in which wireless access points (APs) and nearby devices act as repeaters that transmit data from node to node. In some cases, many of the APs are not physically connected to a wired network. The APs and other devices create a mesh with each other that can route data back to a wired network via a gateway.
  • APs and other devices acting as repeaters may be included in the system 1 to support the peer-to-peer network. Such APs and other devices are not shown in figure 1 to improve clarity. In some examples these repeaters may form fixed nodes of the communication system 1. In some examples the server 6 and/or any gateway may be fixed nodes of the communication system 1, or be connected to fixed nodes of the communication system 1.
  • a wireless mesh network becomes more efficient with each additional network connection.
  • Wireless mesh networks feature a "multi-hop" topology in which data packets "hop" short distances from one node to another until they reach their final destination. The greater the number of available nodes, the greater the distance the data packet may be required to travel. Increasing capacity or extending the coverage area can be achieved by adding more nodes, which can be fixed or mobile.
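The multi-hop behaviour described above can be sketched as a shortest-hop search over the current link graph. The node names, the `route` helper and the idea of routing toward a gateway are illustrative assumptions, not details taken from the patent:

```python
from collections import deque

def route(links, src, dst):
    """Return a shortest hop-by-hop path from src to dst, or None.

    links maps each node id to the set of nodes currently in radio range;
    a packet "hops" from node to node until it reaches its destination.
    """
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in links.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Example: device A can only reach the gateway via B and C acting as repeaters.
links = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "gateway"},
    "gateway": {"C"},
}
```

Adding a node simply adds entries to `links`, which mirrors the point that capacity and coverage grow as fixed or mobile nodes are added.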
  • the communication system 1 comprises a peer-to-peer network of communication devices 2 which enables communication over the network using short range low power wireless links. This can require considerably less computing and signal transmission power than in other communication devices. In addition, this can allow the communication devices 2 to consume less power and to have a simpler and smaller design.
  • the peer-to-peer network may comprise the communication devices 2 and server 6 only. However, in another example, the peer-to-peer network may comprise the communication devices 2 and server 6 as well as other devices such as the APs described above.
  • In the workplace environment, an employee equipped with one of the communication devices 2 described herein can be reachable at all times.
  • the communication device 2 can avoid the need for an employee to carry around a conventional mobile telephone.
  • the first embodiment provides a compact, simple and inexpensive communication device, providing a solution to the problems associated with known communication devices, which are often bulky, complex and expensive. This can be a problem, in particular, in a workplace environment where there are a number of employees each requiring their own communication device in order to communicate with one another. A bulky communication device may hinder an employee's ability to go about their work, whilst a complex communication device may be difficult for an employee to use.
  • If each individual communication device is expensive, then it will become very costly for an employer to equip their entire workforce with communication devices.
  • the communication device 2 comprises a networking interface or communication interface 10 and an antenna 11.
  • the communication interface 10 is arranged to establish a connection between the communication device 2 and another similar communication device via a peer-to-peer network, in which the other similar communication devices also include a peer-to-peer networking capable communication interface.
  • the communication device 2 comprises a voice audio input device 12 which is arranged to receive audio from a user 3 using the communication device 2, that is, the user 3 wearing the headset.
  • the communication device 2 also comprises an environmental audio input device 16, such as an external microphone.
  • the communication device 2 is able to receive voice input from the user 3, and able to receive audio from the environment in which the communication device 2 and the user 3 are located.
  • Each of the voice audio input device 12 and the environmental audio input device 16 may be a microphone, or any other suitable audio input device.
  • the voice audio input device 12 is arranged on an arm external to an ear defender 4, so that the voice audio input device 12 can be arranged proximate to the user's mouth.
  • the voice audio input device 12 comprises an in-ear microphone which receives amplitude modified user speech signals conducted into the ear canal via bone material, which is referred to as the occlusion effect.
  • It is the user speech signals received through this occlusion effect which are used for user voice recognition.
  • the frequency spectrum of speech is modified by the occlusion effect, causing an elevation of the lower tones. This technique may enable easy transfer between users, unlike conventional voice recognition systems, which require stored voice samples.
  • the communication device 2 comprises an audio output device 13, such as a speaker, which is arranged to output audio to the user 3.
  • In the illustrated example of the communication device 2, only a single audio output device 13 is shown, but preferably the communication device 2 is provided with a pair of audio output devices 13, one for each ear of the user 3. In some examples a separate communication device may be associated with each ear of the user 3.
  • the audio output device 13 is shown schematically. However, the audio output device 13 may be any form of listening device such as a headphone, an earphone or an earbud.
  • the communication device 2 comprises a communication module 14 which is arranged to transmit, via the communication interface 10 to another communication device 2 of the communication system 1, audio received from the user 3 via the voice audio input device 12.
  • the communication module 14 is arranged to receive audio data from other communication devices 2, via the communication interface 10, and provide this to the user 3 via the audio output device 13.
  • the communication devices 2 can conduct two-way communication between one another. However, a communication device 2 may engage in one-way communication with one or many other communication devices.
  • the communication module 14 is arranged to send noise level data regarding audio from the environment received via the voice audio input device 12 to the server 6, via the communication interface 10.
  • the communication devices 2 of the communications system 1 provide two-way and one-way audio communication between the different users 3 of the system 1 using the voice audio input devices 12, the communication modules 14, the communication interfaces 10 and the audio output devices 13 of the different communication devices 2.
  • the communication device 2 comprises a voice recognition module 15 which is arranged to receive voice inputs from a user 3 via the voice audio input device 12.
  • the voice recognition module 15 is arranged to store a number of pre-defined voice commands each associated with an action.
  • the voice recognition module 15 is arranged to detect a match between voice input and one of the pre-defined voice commands, and is arranged to perform the action associated with the matching voice command.
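The command table behaviour of the voice recognition module 15 can be sketched as a mapping from pre-defined voice commands to actions, where a recognised utterance triggers the matching action. The command phrases and actions below are hypothetical examples, not taken from the patent:

```python
class VoiceCommandTable:
    """Stores pre-defined voice commands, each associated with an action."""

    def __init__(self):
        self._commands = {}

    def register(self, phrase, action):
        # Commands are matched case-insensitively.
        self._commands[phrase.lower()] = action

    def handle(self, utterance):
        """Run the action for a matching command; return True on a match."""
        action = self._commands.get(utterance.strip().lower())
        if action is None:
            return False
        action()
        return True

# Hypothetical commands: initiate a call or end one.
log = []
table = VoiceCommandTable()
table.register("call supervisor", lambda: log.append("connect:supervisor"))
table.register("end call", lambda: log.append("disconnect"))
```

In practice the utterance would come from a speech recogniser fed by the voice audio input device 12; the dictionary lookup stands in for that matching step.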
  • the voice recognition module 15 is arranged to control the communication interface 10 and the communication module 14.
  • the voice recognition module 15 is arranged to cause the communication interface 10 to initiate establishing a connection between the communication device 2 and another communication device 2 based on audio commands received from the user 3 via the voice audio input device 12.
  • the voice recognition module 15 may be arranged to cause the communication module 14 to communicate with another communication device 2.
  • the communication device 2 further comprises a user-interface switch 17 and a control module 18.
  • the user-interface switch 17 is a pressure sensitive switch 17.
  • any other suitable type of switch, control or contact sensor may be used.
  • the user-interface switch 17 and the control module 18 are arranged to activate different modes at the communication module 14.
  • the communication device 2 comprises only one user- interface switch 17.
  • the control module 18 is arranged to store a number of pre-defined user interactions with the user-interface switch 17. In addition, each pre-defined user interaction is associated with a different action to be performed at the control module 18.
  • the control module 18 is arranged to detect a user interaction with the user-interface switch 17 and a match between the detected user interaction and one of the pre-defined user interactions. Then, the control module 18 is arranged to perform the action associated with the matching detected user interaction.
  • the communication module 14 is configured to be able to operate in a plurality of different modes, and the control module 18 is arranged to detect whether one of a plurality of pre-defined user interactions with the switch has occurred. The control module 18 is arranged to activate the mode associated with the detected user interaction.
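Since the device has only one switch, the pre-defined interactions are plausibly distinguished by press patterns. The sketch below counts presses within a short time window and maps the count to a mode; the mode names, press patterns and window length are invented for illustration (the patent names a 'connection enabled' and a 'voice recognition' mode in its figures):

```python
# Hypothetical mapping from press count to communication-module mode.
PREDEFINED_INTERACTIONS = {
    1: "connection_enabled",   # single press
    2: "voice_recognition",    # double press
    3: "standby",              # triple press
}

def detect_interaction(press_times, window=0.6):
    """Count consecutive presses whose gaps are each within `window` seconds."""
    count = 1
    for earlier, later in zip(press_times, press_times[1:]):
        if later - earlier <= window:
            count += 1
        else:
            count = 1  # a long gap starts a new interaction
    return count

def activate_mode(press_times):
    return PREDEFINED_INTERACTIONS.get(detect_interaction(press_times))
```

For example, two presses 0.3 s apart select the second mode, while presses separated by a long pause are treated as separate single presses.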
  • the environmental audio input device 16 can be used to detect environmental noise in order to provide noise cancelling via the audio output device 13, for example under the control of the communication module 14 or the control module 18.
  • the communication device 2 may provide noise cancelling during communication between communication devices 2.
  • the communication device 2 may decide not to provide noise cancelling when there is no communication between devices 2.
  • the communication device 2 further comprises a storage module 19, which is arranged to store data.
  • the storage module 19 may store an identification parameter for the communication device 2.
  • the identification parameter is indicative of a unique identifier for the communication device 2.
  • the unique identifier may be a number for the communication device 2, a title for the user 3 of the communication device 2 and/or the user's name.
  • This unique identifier may be used so that other communication devices 2 can establish a connection with the communication device 2. It will be understood that it is only necessary for the unique identifier to be unique among all communication devices 2 which are in, or may be connectable to, the peer-to-peer network. It is not necessary that the unique identifier is unique among all communication devices 2 in existence, although this may be the case.
  • the storage module 19 may store a database comprising a list of unique identifiers for other communication devices 2 in the peer-to-peer network, where each unique identifier corresponds with a speech label stored at the storage module 19.
  • Each speech label may be indicative of a name, or a label, for the user 3 of the communication device 2 to which the speech label's associated unique identifier corresponds.
  • Each individual user 3 can be stored in association with a number. For instance, the lowest number, such as 'one', may refer to the most senior user 3.
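The directory described above can be sketched as a lookup from speech labels (a name or a seniority number) to the unique identifier used to establish a connection. The names, numbers and identifier format below are made up for illustration:

```python
class DeviceDirectory:
    """Maps spoken labels to unique communication-device identifiers."""

    def __init__(self):
        self._by_label = {}

    def add(self, unique_id, speech_labels):
        # Several labels (name, title, seniority number) may resolve
        # to the same device identifier.
        for label in speech_labels:
            self._by_label[label.lower()] = unique_id

    def resolve(self, spoken_label):
        return self._by_label.get(spoken_label.lower())

directory = DeviceDirectory()
directory.add("device-0001", ["alice", "one"])  # "one": the most senior user
directory.add("device-0002", ["bob", "two"])
```

The identifier only needs to be unique within the peer-to-peer network, so a short device number suffices here.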
  • each of the communication devices 2 receives audio from its surrounding environment through the environmental audio input device 16.
  • each one of the communication devices 2 receives audio from its surrounding environment and a noise level is determined.
  • Each noise level determined is associated with a position at which the audio was received, from which the noise level was determined.
  • each position is associated with a corresponding noise level.
  • the noise level will indicate the amplitude of the audio received.
  • the noise level may be the peak amplitude of the audio in decibels
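A minimal sketch of the noise level module follows, taking the level as the peak amplitude of a block of microphone samples expressed in decibels. The decibel reference here is full scale (dBFS, samples normalised to [-1.0, 1.0]); a real dosimeter would calibrate this to dB SPL, so the reference handling is an assumption:

```python
import math

def peak_level_dbfs(samples):
    """Peak amplitude of a sample block, in dB relative to full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # silence
    return 20.0 * math.log10(peak)

def tag_with_position(samples, position):
    """Associate the determined noise level with where it was measured."""
    return {"position": position, "level_dbfs": peak_level_dbfs(samples)}
```

Tagging each measurement with the position at which the audio was received is what lets the system build up position-to-noise-level records.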
  • the system 1 is able to generate an indication of the noise level to which a user 3 has been exposed along with positional information associated with the noise level.
  • a particular location can be associated with a particular noise level. For example, it may be possible to determine that a particular location within a factory is associated with a particularly high noise level. Therefore, a user 3 can decide to avoid that location in order to limit their exposure to potentially harmful noise levels.
  • the system 1 may store a plurality of the positions each in association with a corresponding noise level. Thus, it is possible to generate information describing locations with associated noise levels. This helps to build a more complete indication of the noise levels throughout an environment. This can help someone to make better decisions about which areas to avoid, in order to limit their exposure to potentially harmful noise levels.
  • the information generated can be used to output map data which can be presented to a user in combination with a map of an environment in which the audio was received. This may present, to the user, at least some of the positions each in association with their corresponding noise level.
  • This map may be regarded as a noise intensity map.
  • the map can also present at least one high noise level area indicative of a position associated with a noise level above a high noise level threshold. Furthermore, the map may present at least one boundary defining the perimeter of a high noise level area. Thus, a user 3 can easily determine which areas to avoid, in order to limit their exposure to noise.
  • the system 1 may use map data and the noise levels with the associated position data in order to determine a navigation path from one place to another.
  • the navigation path may be associated with a reduced level of noise exposure.
  • the system may determine a navigation path from one place to another, avoiding at least one high noise level area.
  • the system 1 determines noise levels associated with a plurality of different navigation paths from one place to another, and presents a user 3 with the navigation path associated with the lowest noise level.
  • a user 3 can limit their exposure to noise by following the navigation path.
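The selection of a lowest-noise navigation path, as described in the preceding bullets, can be sketched as a shortest-path search in which the cost of entering each location is its recorded noise level. The following is a minimal illustration; the grid layout, noise values and function name are hypothetical rather than taken from the patent:

```python
import heapq

def quietest_path(noise, start, goal):
    """Dijkstra's search over a grid of per-cell noise levels (dB).

    Returns the path from start to goal that minimises cumulative
    noise exposure, treating each cell's noise level as the cost of
    entering that cell.
    """
    rows, cols = len(noise), len(noise[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + noise[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return list(reversed(path))

# Hypothetical 3x3 grid: the centre cell is very noisy (95 dB),
# so the quietest route detours around it.
grid = [[60, 62, 61],
        [63, 95, 60],
        [61, 62, 60]]
route = quietest_path(grid, (0, 0), (2, 2))
```

Here the search detours around the noisy centre cell, trading distance for lower cumulative exposure.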
  • the system 1 may notify a user 3 when they have been exposed to a noise level at or over a particular noise threshold.
  • the user 3 can be alerted when they have been exposed to an unacceptable level of noise. Then, the user 3 may decide to move to a quieter environment, so that they can attempt to avoid damage to their hearing. Preferably, the user 3 is notified in advance of reaching the noise threshold. In this way, the user 3 can be alerted before they have been exposed to an unacceptable level of noise.
  • the noise threshold may be user-defined. Thus, since some people have higher and lower tolerances to noise, this enables the system to be optimised for individual people.
  • the communication device 2 comprises a positioning module 20, a noise level module 21, a calculation module 22 and a notification module 23.
  • the positioning module 20 uses the MESH Networks Position System (MPS™) to determine the position.
  • MPS™ does not rely on satellites, so it can operate in both exterior and interior locations where GPS will not.
  • MPS™ determines position by utilising time-of-flight and triangulation information, using other devices in the network as reference points.
  • GPS is used; however, it will be appreciated that any other suitable positioning system may be used, instead of or in combination with GPS and/or MPS™.
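The time-of-flight approach described above can be illustrated with a minimal two-dimensional trilateration sketch: each measured flight time, multiplied by the propagation speed, gives a distance to a reference device, and three such distances fix the position. The node positions and device position below are hypothetical:

```python
import math

def trilaterate(anchors, distances):
    """Estimate a 2D position from distances to three fixed reference
    points, as in time-of-flight positioning: each distance is the
    measured flight time multiplied by the propagation speed.

    Solves the linear system obtained by subtracting the first circle
    equation from the other two.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Linear system A @ [x, y] = b derived from the circle equations.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Hypothetical reference nodes at known positions; the device is at (3, 4).
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(true_pos, n) for n in nodes]
est = trilaterate(nodes, ranges)
```

With noisy real-world range measurements, a least-squares fit over more than three reference points would be used instead of this exact solve.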
  • Figure 3 illustrates a flow chart of the operation of a communication device 2 acting as a noise dosimeter.
  • the audio input device 11 receives audio from the environment in which it is located.
  • Since the environmental audio input device 16 is mounted on the headset of the communication device 2, the environmental audio input device 16 can be used in proximity to the user's ears. Therefore, the system 1 may be able to obtain a more accurate reading of the actual noise level to which the user is exposed.
  • the environmental audio input device 16 may be located externally of an ear defender 4, so that it senses the environmental noise directly, or may be located internally of an ear defender 4, so that it directly senses the level of noise to which the user's ears are subjected after the environmental noise has been attenuated by the ear defender 4. In some examples multiple environmental audio input devices 16, mounted both externally and internally of an ear defender, may be used. In some examples where the environmental audio input device 16 is located internally of an ear defender 4, the environmental audio input device 16 may be located in the ear canal of a user.
  • the noise level module 21 determines a noise level based on the audio received at the environmental audio input device 16. Generally, the noise level module 21 will measure the noise level in decibels (dB). However, any other measure of sound/noise level, amplitude or intensity may be used. Noise levels may include sound pressure levels (SPL) and continuous sound exposure levels (SEL), including peak values and specified periods of time. Once a noise level has been determined, the noise level module 21 may output the noise level to the storage module 19 of the communication device 2.
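The conversion from received audio to a decibel reading can be sketched as follows. This assumes the samples have been calibrated to sound pressure in pascals against the standard 20 µPa reference; a real noise level module would also apply frequency weighting and time integration:

```python
import math

def spl_db(samples, p_ref=20e-6):
    """Sound pressure level in dB from calibrated audio samples.

    Assumes the samples represent sound pressure in pascals;
    p_ref is the standard 20 micropascal reference pressure.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / p_ref)

def peak_db(samples, p_ref=20e-6):
    """Peak amplitude of the audio, expressed in decibels."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak / p_ref)

# A sine wave with 1 Pa RMS pressure corresponds to about 94 dB SPL.
amplitude = math.sqrt(2)  # peak amplitude giving 1 Pa RMS
sine = [amplitude * math.sin(2 * math.pi * i / 48) for i in range(480)]
level = spl_db(sine)
```

For a sine wave the peak level sits about 3 dB above the RMS level, which is why impulsive noise requires the separate peak measurement discussed below.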
  • the noise level module 21 can be arranged to estimate the external environmental noise based on the noise level sensed by the environmental audio input device 16 and a known sound reduction effect of the ear defender 4.
  • the noise level module 21 can be arranged to estimate the noise level to which the user's ears are subjected based on the external environmental noise sensed by the environmental audio input device 16 and a known sound reduction effect of the ear defender 4.
  • noise levels of these environmental and in-ear sounds determined by the noise level module 21 may be stored separately at the storage module 19.
  • the difference between the measured internal and external noise levels provided by the noise level module 21 may be calculated and compared to a threshold value by the calculation module 22 to determine the sound reduction effect being provided by the ear defender 4.
  • The threshold value may be a predetermined noise difference threshold stored in the storage module 19, which the calculation module 22 can access.
  • If the determined sound reduction effect is below the threshold value, the notification module 23 may issue an alert to the user via the audio output device 13 to warn the user of improper operation of the ear defender 4, and that the user's hearing is not being fully protected. A reduced sound reduction effect may indicate that the ear defender is defective or incorrectly fitted, and the alert may prompt the user to check the fitting of their ear defenders and, if necessary, to exit, or avoid entering, a noisy environment until the functioning of their ear defenders can be checked.
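The comparison described in the preceding bullets can be sketched as follows; the 20 dB minimum attenuation is a hypothetical stand-in for the predetermined noise difference threshold held in the storage module 19:

```python
def check_ear_defender(external_db, internal_db, min_reduction_db=20.0):
    """Compare the noise level outside the ear defender with the level
    measured inside it; if the attenuation falls below the expected
    threshold, the defender may be defective or incorrectly fitted.

    min_reduction_db is a hypothetical threshold; the device would use
    the predetermined value held in its storage module.
    """
    reduction = external_db - internal_db
    if reduction < min_reduction_db:
        return ("alert", reduction)  # prompt the user to check the fit
    return ("ok", reduction)

# Only 13 dB of attenuation measured: the defender is likely mis-fitted.
status, attenuation = check_ear_defender(external_db=98.0, internal_db=85.0)
```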
  • Noise level data may be time stamped, for instance with the time at which the audio was received from which the noise level data were generated. Further, noise level data may be tagged with the sensed sound reduction effect of the headset in examples where this is available.
  • the noise level is received by the positioning module 20.
  • the positioning module 20 determines the position of the user 3.
  • the positioning module 20 can determine the position of the environmental audio input device 16 corresponding with the determined noise level.
  • In step 306, once the positioning module 20 has determined an estimate of the position of the environmental audio input device 16, the positioning module 20 associates the position with the corresponding noise level. For instance, the positioning module 20 may link the co-ordinates of the position with the decibel reading of the noise level.
  • step 308 the noise level and position data from the communication device 2 are communicated to the server 6 through the peer-to-peer network by the communication module 14 and the communication interface 10.
  • a calculation module 22 of the communication device 2 is used to calculate a calculated noise level.
  • the noise level may be calculated based on time data and noise levels determined by the noise level module 21.
  • the time data may be associated with the noise levels.
  • the calculated noise level may include a calculation of peak (impulse) noise, equivalent continuous (average) 'A'-weighted noise, which is a UK standard, or a time-weighted average (TWA) noise, which is a USA standard.
  • the peak noise, the equivalent continuous noise and the TWA noise are calculated over a predefined period of time, such as over an eight hour period.
  • Peak noise can be calculated by detecting peak amplitudes of noise. Continuous noise can be sampled over a predefined period of time.
  • Equivalent continuous noise can be calculated by averaging all noise level samples to which a subject is exposed during a period of time, for example, during an eight-hour workday. An average can be calculated through the addition of the magnitude of these samples divided by the number of samples collected during the time period.
  • TWA noise dose is calculated by dividing the actual number of hours over which samples are recorded by the permissible hours at each sound level, summing these ratios, and multiplying by one hundred to give a percentage dose for an eight hour shift.
  • the equivalent continuous noise level calculation used in the UK uses the "A-weighting standard" for measuring harmful sound pressure level (SPL) values. These weightings take into account subjects' varying susceptibility to noise related hearing damage at different frequencies.
  • Noise level (Lp) is a logarithmic measure of the root mean square (RMS) sound pressure relative to a reference (ambient) level, expressed in decibels (dB).
  • the A-weighted equivalent continuous noise level, often referred to as the energy-averaged exposure level (LAeq), is calculated by dividing the measured dB values by 10, converting to antilog values, applying the A-weighting curve to them, summing these scaled values, dividing by the number of samples taken and then taking the logarithm to arrive at A-weighted decibels of power (dBA). This is illustrated in Equation 1:
  • LAeq = 10 log10[(1/n) Σ 10^(LAi/10)] dBA (Equation 1), where n is the number of samples and LAi is the A-weighted level of the i-th sample.
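Equation 1 can be evaluated numerically as sketched below, assuming the A-weighting has already been applied to each sample level:

```python
import math

def l_aeq(sample_levels_dba):
    """Equivalent continuous (energy-averaged) noise level per Equation 1.

    Each input is an A-weighted sample level in dBA; the levels are
    converted back to linear energy, averaged, and re-expressed in dBA.
    """
    n = len(sample_levels_dba)
    mean_energy = sum(10 ** (level / 10) for level in sample_levels_dba) / n
    return 10 * math.log10(mean_energy)

# Energy averaging weights loud samples heavily: one hour at 94 dBA
# dominates seven quiet hours at 60 dBA.
levels = [94] + [60] * 7
result = l_aeq(levels)
```

Because the averaging is performed on energy (antilog) values rather than on the dB figures themselves, a single loud sample dominates many quiet ones, as the example shows.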
  • Short Leq is a method of recording and storing sound levels for displaying the true time history of noise events and all sound levels during any specified period of time.
  • the resulting 'time histories', typically measured in 1/8-second intervals, may then be used to calculate the 'overall' levels for any sub-period of the overall measurement.
  • the time interval can be varied according to the amount of change recorded between intervals.
  • To measure true peak values of impulsive sound levels, a meter must be equipped with a peak detector. Accordingly, in order to measure true peak values of impulsive sound levels, in this case the environmental audio input device 16 and the noise level module 21 will need to be able to act as a peak detector. Alternatively, in some examples the communication device 2 may be equipped with a separate peak detector. A peak detector responds in less than 100 µs according to the sound level meter standards. A typical response time is 40 µs.
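In software, such a peak detector reduces to a peak-hold over the raw sample stream, as sketched below with hypothetical sample values:

```python
class PeakDetector:
    """Hold the largest instantaneous amplitude seen in the input.

    A hardware peak detector must respond within 100 microseconds per
    the sound level meter standards; in software this corresponds to
    examining every raw sample rather than block averages.
    """

    def __init__(self):
        self.peak = 0.0

    def feed(self, samples):
        """Update the held peak with a new block of samples."""
        for s in samples:
            magnitude = abs(s)
            if magnitude > self.peak:
                self.peak = magnitude
        return self.peak

    def reset(self):
        """Clear the held peak, e.g. at the start of a new period."""
        self.peak = 0.0

detector = PeakDetector()
detector.feed([0.1, -0.8, 0.3])
detector.feed([0.5, -0.2])
held_peak = detector.peak
```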
  • a noise dose is a descriptor of noise exposure expressed in percentage terms. For example a noise dose of 160% (87dBA for 8 hours) exceeds the permissible 100% dose (85dBA for 8 hours) by 60%.
  • the dose value is derived from Equation 2 as follows:
  • Dose (%) = 100 × Σ (C/T) (Equation 2), where C is the actual exposure time at a given sound level and T is the permissible exposure time at that level.
  • the noise exposure level (LEX) is the measured LAeq of the user's exposure (in decibels), linearly adjusted to a fixed 8 hour period. This is illustrated in Equation 3:
  • LEX = 10 log10(Dose/100) + 85 dBA (Equation 3)
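Equations 2 and 3 can be sketched numerically as follows. The permissible time at each level is derived here with the common 3 dB exchange rate about the 85 dBA, 8 hour criterion; this is an assumption standing in for the Figure 4 table, and it reproduces the worked figures in the text to within rounding (87 dBA for 8 hours gives a dose of approximately 160%):

```python
import math

def permissible_hours(level_dba, criterion_dba=85.0, exchange_rate_db=3.0):
    """Permissible exposure time at a given level, assuming the common
    3 dB exchange rate about an 85 dBA, 8 hour criterion."""
    return 8.0 / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)

def noise_dose(exposures):
    """Equation 2: percentage dose from (level dBA, hours) pairs --
    actual hours divided by permissible hours, summed, times 100."""
    return 100.0 * sum(hours / permissible_hours(level)
                       for level, hours in exposures)

def exposure_level(dose_percent):
    """Equation 3: L_EX = 10 log10(Dose/100) + 85 dBA."""
    return 10 * math.log10(dose_percent / 100.0) + 85.0

# The worked example from the text: 87 dBA for 8 hours is roughly a
# 160% dose, which Equation 3 maps back to an L_EX of about 87 dBA.
dose = noise_dose([(87.0, 8.0)])
l_ex = exposure_level(dose)
```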
  • This noise threshold may be a recommended average noise threshold, such as the Occupational H&S threshold of 85dBA.
  • step 312 the calculation module 22 determines whether the noise threshold has been reached. If this threshold has been reached, the method proceeds to step 314 in which the notification module 23 outputs a notification sound through the audio output device 13 to notify the user that they have reached their noise exposure threshold.
  • the system may determine a percentage value of the permissible dose (see Equation 2).
  • There are other possible calculations of noise level in percentage terms. For example, a continuous measure of how well the user is doing at managing his/her exposure to noise could also be provided, where the permissible noise dose for an 8 hour shift is adjusted during the shift, as illustrated in the example below using the table in Figure 4.
  • Figure 4 illustrates a look up table that can be used to determine a user's allowable exposure to noise in percentage terms.
  • Leq dBA is equivalent to LAeq dB.
  • the noise dose exposure level calculated from the table for an LAeq reading of 88dBA is 49.9%.
  • the forecast may be calculated by the calculation module 22 and provided to the user by the notification module 23 using the audio output device 13 at set times, for example periodically, during a work shift. Such forecasts may alternatively, or additionally, be provided by the server 6.
  • An alternate method may be to start a real time clock at the start of each working day and calculate the number of hours left of permissible noise exposure at current noise levels. For example, if the current equivalent continuous noise level LAeq over the first hour is 88dBA, the above table calculates that 3 hours remain at current noise levels. This may be useful for diverse working environments.
  • In another example, a time-weighted average (TWA) percentage is output. This would be particularly useful for the North American market.
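The real time clock forecast described above can be sketched as follows; as with the dose calculation, the common 3 dB exchange rate is assumed in place of the Figure 4 lookup table:

```python
def hours_remaining(current_l_aeq_dba, hours_elapsed,
                    criterion_dba=85.0, shift_hours=8.0):
    """Forecast how many more hours of exposure are permissible if the
    measured equivalent continuous level persists.

    Uses the common 3 dB exchange rate in place of the Figure 4 lookup
    table: each 3 dBA above the 85 dBA criterion halves the allowed time.
    """
    allowed = shift_hours / 2 ** ((current_l_aeq_dba - criterion_dba) / 3.0)
    return max(0.0, allowed - hours_elapsed)

# The text's example: 88 dBA over the first hour of the shift.
remaining = hours_remaining(88.0, hours_elapsed=1.0)
```

For the text's example of 88 dBA over the first hour, the sketch yields 3 remaining hours, matching the figure derived from the table.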
  • each new day or another defined period should be preceded by an automatic re-setting of the noise exposure data stored in the communication device 2 to zero for monitoring exposure levels over this period.
  • the pre-defined threshold is also stored in the communication device 2, which should be the permissible exposure limit (PEL) or a user-defined lower (inset) threshold value.
  • the notification module 23 may also be arranged to output a notification when the noise level at a particular instant reaches or exceeds a predetermined peak noise threshold.
  • the calculation module 22 may determine whether the calculated noise level has reached a pre-determined level below the noise threshold. For example, the calculation module 22 may determine that the calculated noise level is 10% below the noise threshold. In this case, the notification module 23 may output a notification.
  • the notification module 23 may cause the audio output device 13 to output a notification sound to notify the user that the threshold is about to be reached and may recommend action for limiting exposure to noise. Therefore, the user can be alerted before they have been exposed to an unacceptable level of noise. Thus, the user can move to a quieter environment, so that they can pre-emptively attempt to avoid damage to their hearing.
  • Steps 300-308 are repeated by the different communication devices making up the system 1 in order to obtain a plurality of noise level measurements, each associated with a respective position, which are all sent through the peer-to-peer network to the server 6. This helps to build a more complete indication of the noise levels throughout a particular environment.
  • the noise level and position data are stored at the server 6. In some examples the data may also be stored at the respective communication devices 2, or the data may be stored at the server 7 instead of at the computing device.
  • FIG. 5 shows a flow diagram of a method carried out by the server 6.
  • the server 6 receives the noise level and position data from a communication device 2 of the plurality of communication devices 2 in the system 1 through the peer-to-peer network.
  • a mapping module 61 of the server 6 generates map data based on the plurality of the positions and associated noise levels.
  • a map of the environment is generated in combination with the map data. This map shows at least some of the determined positions each in association with their corresponding noise level.
  • the map along with the map data is displayed at a display/user interface associated with the server 6.
  • the display/user interface may be a remote device connected to the server 6 through a communication network, such as an intranet, or the Internet.
  • the display/user interface may be used by the users 3 using the communication devices 2, or other personnel, to identify noise levels and plan how to reduce or limit noise exposure.
  • the display/user interface may be, for instance, a touch-screen display.
  • An example of the map and corresponding noise data is illustrated in Figure 6.
  • the display/user interface presents the user with the map 31 of the environment in which various noise levels were recorded.
  • the map 31 shows a number of rooms 32A-C, with passages between them.
  • the map 31 shows a plurality of areas 33A-C in which noise has been detected.
  • the map 31 shows a plurality of areas 33A-C in which noise has been detected above a particular threshold
  • the magnitude of the noise levels detected in these areas 33A-C is indicated to the user via shading.
  • a darker shade indicates an area of higher noise level, whilst a lighter shade indicates an area of lower noise level. If there is no shading in an area of the map 31, the user may assume that no noise has been detected in that area, or that any noise detected is below a threshold.
  • each area 33A-C may have a numerical value (e.g. between 1 and 10) associated with it.
  • the user is presented with a noise intensity map comprising contour lines, where the width of the spacing between the contour lines indicates a rise or fall in noise level.
  • narrower spacing between contour lines indicates a steep rise in noise level, and wider spacing between contour lines indicates a shallow rise in noise level.
  • Noise level data from a plurality of user communication devices 2 are stored in a central database 62 of the server 6 together with the associated positioning data coordinates.
  • Each of the positioning coordinates relates to a grid reference of the location.
  • the resolution of the square grid reference, or in other words the area of each square in the grid, may be preset depending on the accuracy of the positioning apparatus being used.
  • Noise level data points can be tagged with a grid reference based on the position data. Then, an average of the noise level data can be determined for each square within the grid reference.
  • noise levels for each square in the grid are continuously updated by each user 3 who enters the environment. This is useful for constructing a reliable representation of noise levels per unit area.
  • Noise intensity values are derived from the noise level data accorded to each grid reference divided by the assigned area of the grid.
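The tagging and averaging of noise readings per grid square can be sketched as follows. The coordinates, readings and cell size are hypothetical, and the levels are averaged arithmetically as the text describes; an energy (antilog) average could be substituted:

```python
from collections import defaultdict

def grid_average(readings, cell_size=1.0):
    """Tag each (x, y, dB) reading with a square grid reference and
    average the noise levels falling in each cell, as used to build
    the noise intensity map. cell_size sets the grid resolution,
    which would be preset from the accuracy of the positioning system.
    """
    cells = defaultdict(list)
    for x, y, level in readings:
        ref = (int(x // cell_size), int(y // cell_size))
        cells[ref].append(level)
    return {ref: sum(levels) / len(levels) for ref, levels in cells.items()}

# Hypothetical readings from users traversing two grid cells.
data = [(0.2, 0.3, 80.0), (0.8, 0.6, 84.0),   # both fall in cell (0, 0)
        (1.5, 0.4, 92.0)]                     # cell (1, 0)
intensity = grid_average(data)
```

As each user 3 traverses the environment, new readings would simply be merged into the per-cell lists, continuously updating the averages.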
  • the integration of new with old data for each grid map reference may use time related weighting factors.
  • additional sensor nodes located at fixed known positions may also be connected to the mesh network.
  • the additional sensor nodes may act as repeaters to support the peer-to-peer network.
  • the additional sensor nodes may provide fixed reference points for use by the positioning modules 20 of the communication devices 2 to improve the accuracy of position determination.
  • the additional sensor nodes may each comprise one or more audio input devices to determine noise levels and provide noise data for particular positions or grid references where they are located, for example at positions where high noise levels are expected.
  • the additional sensor modules may each comprise a storage module arranged to store noise data associated with their fixed position for use in producing the noise intensity map. This may remove the need for a user to traverse these expected high noise positions in order to build up the noise information, for example to complete the noise intensity map.
  • each area may have a particular colour (e.g. green, orange or red) associated with it.
  • the indicator scheme used should have a legend so that the user can understand the data presented to them.
  • each noise level area 33A-C has a boundary 35A-C around it, defining the perimeter of each area.
  • FIG. 7 shows a flow chart illustrating a method of activating different modes at the communication device 2.
  • step 400 the communication device 2 is activated, or 'powered-on'.
  • the communication device 2, more specifically the communication module 14, is configured to operate in a "connection-enabled" mode initially.
  • the communication module 14 of the communication device 2 is configured to permit transmitting or receiving of audio to or from another communication device 2 of the system 1.
  • the voice recognition module 15 may be deactivated when the communication module 14 is initially in the connection-enabled mode, and the voice recognition module 15 may be configured to be activated only in response to a user interaction with the switch 17. When activated, the voice recognition module 15 is arranged to perform at least one action in response to at least one voice command of a stored first instruction set.
  • the first instruction set may, for example, be stored at the voice recognition module 15 or the storage module 19.
  • the control module 18 detects a user-interaction with the switch 17. In this case, the user 3 wishes to instruct the communication device 2 to enter a "connection-disabled" mode.
  • the user maintains contact with the switch 17, or 'holds' the switch down, for a first time period.
  • the user 3 holds the switch 17 for over five seconds until the audio output device 13 outputs an audio notification, such as a single 'beep'.
  • the user 3 disengages contact with the switch 17, or 'releases' the switch 17.
  • the control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to enter the "connection-disabled" mode.
  • step 405 the communication module 14 enters the connection-disabled mode.
  • In the connection-disabled mode the communication module 14 is not permitted to transmit or receive audio to or from another communication device 2.
  • In the connection-disabled mode, the communication interface 10 may not be permitted to establish a connection between the communication device 2 and another communication device 2 via the peer-to-peer network.
  • In the connection-disabled mode the voice recognition module 15 may be deactivated.
  • the control module 18 detects another user-interaction with the switch 17. In this case, the user 3 wishes to instruct the communication device 2 to re-enter the connection-enabled mode.
  • In order to do this, the user 3 performs a different user-interaction with the switch 17 compared with the user-interaction in step 403. Here, the user 3 maintains contact with the switch 17 for a second time period, for instance two seconds longer than the first period of time.
  • the user 3 holds the switch 17 until the audio output device 13 outputs an audio notification, such as two 'beeps'.
  • Upon hearing the second 'beep', the user 3 knows that they have reached the second time period threshold and can disengage contact with the switch 17, or 'release' the switch 17.
  • the control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to re-enter the "connection-enabled" mode. Thus, the method returns to step 400.
  • the user 3 holds the switch 17 for the first time period until the first single beep is output in step 403. Then, the user 3 continues to hold the switch 17 until the second time period has elapsed, at which point the audio output device 13 outputs a second beep.
  • the second time period is seven seconds, which is two seconds longer than the first period.
  • the second time period may be any length of time, so long as the user 3 is given sufficient time to respond to the first beep before the second beep occurs.
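The hold-duration scheme of steps 403 to 409 can be summarised as a small classifier; the five and seven second defaults mirror the examples in the text, and the function itself is illustrative rather than part of the described device:

```python
def classify_press(hold_seconds=None, tap_count=1,
                   first_period=5.0, second_period=7.0):
    """Map a switch interaction onto a mode command following the
    hold-duration scheme described above.

    A hold past the second time period (two beeps) selects the
    connection-enabled mode, a hold past only the first time period
    (one beep) selects the connection-disabled mode, and a double tap
    selects the voice-control mode. The 5 s and 7 s defaults mirror
    the examples in the text.
    """
    if tap_count >= 2:
        return "voice-control"
    if hold_seconds is not None:
        if hold_seconds >= second_period:
            return "connection-enabled"
        if hold_seconds >= first_period:
            return "connection-disabled"
    return "no-op"

# Releasing after the first beep but before the second disables connections.
mode = classify_press(hold_seconds=5.5)
```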
  • step 409 the control module 18 detects another user-interaction with the switch 17.
  • the user 3 wishes to instruct the communication device 2 to enter a "voice-control" mode.
  • the user performs a different user-interaction with the switch 17 compared with the user-interactions in steps 403 and 407.
  • the user 3 contacts the switch 17 multiple times within a time period. For instance, the user 3 may activate the switch 17 twice within a time period of under five seconds.
  • the control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to enter the "voice control" mode.
  • step 411 the communication module 14 enters the voice control mode, in which the communication module 14 is permitted to transmit or receive audio to or from another communication device 2.
  • the voice recognition module 15 is activated when the communication module 14 is in the voice-control mode.
  • the voice recognition module 15 may be arranged to perform a plurality of actions each in response to at least one voice command of a second instruction set.
  • the second instruction set of the voice control mode may comprise a greater number of voice commands than the first instruction set used in the connection-enabled mode.
  • the second instruction set may, for example, be stored at the voice recognition module 15 or the storage module 19.
  • step 413 the control module 18 detects a user-interaction with the switch 17 where the user 3 maintains contact with the switch 17 for over five seconds until the audio output device 13 outputs a 'beep', at which point the user 3 disengages contact with the switch 17.
  • the control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to re-enter the "connection-disabled" mode.
  • the method returns to step 405.
  • step 415 the control module 18 detects a user-interaction with the switch 17 where the user 3 maintains contact with the switch 17 for the second time period, until the audio output device 13 outputs two 'beeps', at which point the user 3 disengages contact with the switch 17.
  • the control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to re-enter the "connection-enabled" mode.
  • the method returns to step 400.
  • Figure 8 shows a flow chart illustrating a method of using the communication device 2 in the 'connection-enabled' mode.
  • step 500 the communication interface 10 is in a waiting state where it checks to determine whether or not there is an incoming call from another communication device 2, or in other words a request for a connection to be made between the communication device 2 and another communication device 2.
  • the control module 18 checks to determine whether or not there is a user-interaction with the switch 17 whilst there is not an incoming call. If there is a user-interaction with the switch 17 whilst there is not an incoming call, the method proceeds to step 502.
  • control module 18 detects an interaction with the switch 17.
  • the user 3 wishes to provide a command to the voice recognition module 15.
  • the user 3 maintains contact with the switch 17, for a time period, for instance less than five seconds.
  • the control module 18 detects this interaction with the switch 17 and, in response, activates the voice recognition module 15.
  • voice recognition module 15 detects a voice command provided by the user 3.
  • the voice recognition module 15 identifies voice commands by detecting reserved words.
  • the voice commands are verified by a pause preceding and following the command. For instance, the pause preceding and following the command may be a few seconds.
  • the user may say "CALL SUPERVISOR".
  • the voice recognition module 15 determines the action associated with the voice command.
  • the voice recognition module 15 outputs a confirmation request, via the audio output device 13.
  • the confirmation request comprises outputting audio indicative of the determined action.
  • the output may comprise repeating the voice command "CALL SUPERVISOR".
  • the "SUPERVISOR" voice command may be described as a label associated with another communication device 2.
  • the label may comprise a name for a user 3 associated with the other communication device 2.
  • each user's contact name, title or number is associated with his/her communication device 2.
  • a user 3 initiates a call, a message is broadcast to the peer-to-peer network for identifying the requested communication device 2.
  • the requested communication device 2 responds and a connection is established between the calling and the receiving communication devices 2.
  • the voice recognition module 15 waits for the user 3 to provide a confirmation.
  • the user 3 may provide the confirmation by saying an affirmative voice command, for instance by saying "yes". In this case, the method proceeds to step 508.
  • the user 3 may decline the confirmation by saying a negative voice command, for instance by saying "no". In this case, the method returns to step 500.
  • In step 506, if the voice recognition module 15 fails to recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful, the method returns to step 500. If the repeat command is successful, the method proceeds to step 508.
  • In step 508 the voice recognition module 15 causes the action associated with the voice command, input at step 504, to be performed. In this case, the voice recognition module 15 causes the communication interface 10 to initiate the process of establishing a connection with a communication device 2 associated with the supervisor.
  • step 500 the communication interface 10 checks to determine whether or not there is an incoming call from another communication device 2. If there is an incoming call, the method proceeds to step 510 in which a notification is output, preferably at the audio output device 13, indicating to the user that there is an incoming call.
  • step 512 the control module 18 checks to determine whether or not the user 3 engages the switch 17. If the user 3 engages the switch 17 for less than one second in response to the incoming call, the method proceeds to step 514, in which a connection is established between the communication device 2 and another communication device 2 in the peer-to-peer network.
  • step 516 the control module 18 determines that the user 3 has engaged the switch 17 for less than five seconds, indicating that the user 3 wishes to terminate the call.
  • step 518 in response to this user interaction, the control module 18 instructs the communication interface 10 to disconnect the communication device 2 from the other communication device 2.
  • step 520 the control module 18 checks to determine whether the switch 17 has been engaged within ten seconds of outputting the incoming call notification. If the user 3 has not provided an interaction with the switch 17 within this ten second time period, the method proceeds to step 524 in which the incoming call request is cancelled.
  • step 522 if the control module 18 determines that the switch 17 has been engaged for a time period in excess of five seconds during the ten second time period, then the incoming call request is cancelled also.
  • Figure 9 shows a flow chart illustrating a method of using the communication device 2 in the 'voice-recognition' mode.
  • the purpose of the voice-recognition mode is that the user can perform all required functions using voice commands rather than interacting with the switch 17.
  • the voice recognition module 15 remains active whilst in the voice recognition mode.
  • step 600 the user 3 provides a voice command.
  • step 602 the voice recognition module 15 detects that the user 3 has provided the voice command and determines an action associated with the voice command.
  • step 604 the voice recognition module 15 outputs a confirmation request, via the audio output device 13.
  • the confirmation comprises outputting audio indicative of the determined action.
  • the voice recognition module 15 waits for the user 3 to provide a confirmation.
  • the user 3 may accept the confirmation by saying an affirmative voice command, for instance by saying "YES”. In this case the method proceeds to step 605 in which the action associated with the voice command is performed.
  • the user 3 may decline the confirmation request by saying a negative voice command, for instance by saying "NO". In this case, the method returns to step 600.
  • step 602 if the voice recognition module 15 fails to recognise the voice command, for instance if the voice recognition module 15 cannot recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the voice recognition module 15 simply waits for another voice command at step 600.
  • if the voice recognition module 15 detects that the user 3 has said "HANG-UP" whilst a call is in session between the communication device 2 and another communication device 2, the voice recognition module 15 instructs the communication interface 10 to disconnect the communication device 2 from the other connected communication device 2.
  • if the voice recognition module 15 detects that the user 3 has said "PICK-UP" in response to an incoming call request, the voice recognition module 15 instructs the communication interface 10 to connect the communication device 2 with the other connected communication device 2 requesting the call.
  • if the voice recognition module 15 detects that the user 3 has said "DECLINE" in response to an incoming call request, the voice recognition module 15 instructs the communication interface 10 to refuse a request to connect the communication device 2 with the other connected communication device 2 requesting the call.
  • if the voice recognition module 15 detects that the user 3 has said "CALL" followed by the name of a contact, the voice recognition module 15 instructs the communication interface 10 to initiate a request to connect the communication device 2 with another connected communication device 2 associated with the contact.
  • if the voice recognition module 15 detects that the user 3 has said "EXIT", the voice recognition module 15 instructs the communication module 14 to enter the connection-enabled mode.
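The voice commands listed above amount to a small command-to-action table. The following Python sketch shows one way such a dispatcher might look; the `(action, argument)` return format and the action strings are illustrative assumptions:

```python
def interpret_command(utterance):
    """Map a recognised voice command to an action, per the commands
    described above: "HANG-UP", "PICK-UP", "DECLINE", "CALL <name>", "EXIT"."""
    words = utterance.strip().upper().split()
    if not words:
        return None
    actions = {
        "HANG-UP": "disconnect",  # disconnect from the other device
        "PICK-UP": "connect",     # accept an incoming call request
        "DECLINE": "refuse",      # refuse an incoming call request
        "EXIT": "enter_connection_enabled_mode",
    }
    if words[0] in actions:
        return (actions[words[0]], None)
    if words[0] == "CALL" and len(words) > 1:
        # "CALL" followed by a contact name initiates a connection request.
        return ("call", " ".join(words[1:]).title())
    return None  # unrecognised: prompt the user to repeat (step 602)
```

An unrecognised utterance returns `None`, corresponding to the repeat-command prompt in step 602.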
  • user settings for each communication device 2 can be controlled by the server 6, or by another device connected to the mesh network, such as a computer or a MESH network enabled smartphone.
  • One of the user setting options could include a sound pressure threshold above which the user's speech is detected and processed into instructions for execution by the voice recognition module 15. Otherwise, settings would normally reflect user preferences for an optimum listening experience.
  • Cloud computing applications, such as private clouds for company infrastructure services, may be accessed by communication devices 2 via a gateway connected to the peer-to-peer network. This can include communication links to other sites for secure inter-site calls including conference calls.
  • the peer-to-peer network may connect, via a gateway, to a secure central database containing employees' routing requirements for setting up wireless communication links.
  • the pressure sensitive switch 17 may be engaged accidentally.
  • the communication device 2 may comprise a sensor, such as an acoustic in-ear sensor, arranged to determine that the communication device has not been mounted on an ear of a user; in response, any user interactions with the switch 17 are ignored.
  • the occlusion effect attenuates the external sound entering the ear canal, thereby creating a difference in sound levels measured by the acoustic in-ear sensor and external acoustic sensors.
  • an acoustic in-ear sensor may allow a determination that the communication device 2 has not been mounted if the amplitude of the audio it receives exceeds a particular fraction of the amplitude measured by an externally mounted sensor, and a determination that the communication device 2 has been mounted if the amplitude of the received audio falls below that fraction. In some examples where an acoustic in-ear sensor is used, this sensor may also serve as the voice audio input device 12 and/or the environmental audio input device 16.
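The occlusion-based mounting check described above reduces to a simple amplitude comparison between the two sensors. A minimal sketch in Python; the 0.5 occlusion ratio is an illustrative assumption, as the description does not specify a threshold:

```python
def is_mounted(in_ear_amplitude, external_amplitude, occlusion_ratio=0.5):
    """Decide whether the device is on the ear by comparing the level at the
    acoustic in-ear sensor with the level at the external sensor. When
    mounted, the occlusion effect attenuates external sound reaching the ear
    canal, so the in-ear level should fall below a fraction of the external
    level; when unmounted, both sensors hear roughly the same sound field."""
    if external_amplitude <= 0:
        return False  # no reference level available; assume not mounted
    return in_ear_amplitude < occlusion_ratio * external_amplitude
```

Switch interactions would then be ignored whenever `is_mounted(...)` returns `False`.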
  • the communication device 2 may power down into a beacon mode.
  • In the beacon mode, the communication interface 10 periodically checks for messages/activations and sends out a unique identifier which can be used to determine the location of the communication device 2, before returning to a sleep state. In the beacon mode, the communication device 2 alternates between an active state and a dormant state, where a greater amount of the functionality of the communication device 2 is activated in the active state than in the dormant state.
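The active/dormant alternation of the beacon mode can be sketched as a simple duty cycle. The callback names, the dormant period and the cycle count below are illustrative assumptions; the description does not specify timing values:

```python
def beacon_cycle(device_id, poll_messages, broadcast, sleep_fn,
                 dormant_period=5.0, cycles=3):
    """Sketch of the beacon mode: briefly enter an active state (check for
    messages/activations, broadcast the device's unique identifier so its
    location can be determined), then return to a dormant state in which
    most functionality is powered down."""
    for _ in range(cycles):
        poll_messages()           # active: check for messages/activations
        broadcast(device_id)      # active: send identifier for localisation
        sleep_fn(dormant_period)  # dormant: low-power sleep between beacons
```

In a real device the loop would run indefinitely; a fixed `cycles` count is used here only so the sketch terminates.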
  • the communication module 14 of the communication device 2 may be configured to operate in an override mode in which the communication device 2 is able to transmit audio for output at another communication device irrespective of the mode activated at the other communication device.
  • This enables a supervisor/manager to have connection priority to the user's device by automatically forcing acceptance of a connection request. This option could include termination of a call by the supervisor exclusively.
  • the override mode may allow the communication device 2 to transmit audio for output at a plurality of other communication devices irrespective of the mode activated at each respective communication device.
  • the override mode may be used in place of a conventional public address (PA) system.
  • Figure 10 illustrates a communication system 100 according to a second embodiment.
  • the communication system 100 is similar to the communication system 1 according to the first embodiment described above, and comprises a plurality of communication devices 101.
  • each of the communication devices 101 comprises a pair of ear defenders 4 physically connected by a linking arm 5 to form a headset 102 mounted on and supported by the head of a user 3, and covering and protecting both ears of the respective user 3.
  • the headset 102 has corresponding components to the communication device 2 of the first embodiment as shown in figure 2, and is able to communicate with the server 6 using the peer-to-peer network.
  • in addition to the headset 102, each communication device 101 further comprises a computing device 103.
  • Figure 11 illustrates a computing device 103 in more detail.
  • the computing device 103 comprises a communications interface 104 and an antenna 105, together with a storage module 106, a display and user interface 107, and a navigation module 108.
  • a display and a user interface are integrated together in a display and user interface 107 in the form of a touch-screen display of the computing device 103.
  • different types of display and user interface may be used.
  • a separate display and user interface may be used.
  • the computing device 103 is able to communicate wirelessly with the headset 102 formed by the other parts of the communication device 101 by way of the communications interface 104 and antenna 105 of the computing device 103, which communicate wirelessly with the communication interface 10 and antenna 11 of the headset 102.
  • This wireless communication between the headset 102 and the computing device 103 may, for instance, be via Bluetooth® or via Wi-Fi.
  • the communications interface 104 of the computing device 103 may be a peer-to-peer networking interface, and the computing device 103 may communicate with the headset 102 using the peer-to-peer network.
  • the headset 102 and the computing device 103 may communicate with one another via any other suitable connection, such as via a wired connection.
  • where the communications interface 104 of the computing device 103 is a peer-to-peer networking interface, the computing device 103 may communicate directly with the server 6 using the peer-to-peer network.
  • the computing device 103 is a smartphone. However, it will be appreciated that any other suitable computing device 103 may be used instead of a smartphone.
  • the map, along with the map data generated by the mapping module 61 of the server 6, may be sent to the headset 102 of the communication device 101 through the peer-to-peer network, and then sent through the wireless link to the computing device 103 for display to the user.
  • when the map and map data are received by the computing device 103 through the communications interface 104 and antenna 105, they are stored in the storage module 106. The map and map data are then used to display a map, such as the map 31 illustrated in figure 6, to the user 3 on the display and user interface 107.
  • the communication device 101 may include a mapping module, so that the communication devices 101 can carry out the mapping themselves.
  • the computing device 103 may include the mapping module to carry out the noise mapping.
  • the navigation module 108 provides a navigation function in which the positioning module 20 of the headset 102 determines the current position of the user and sends this current position to the computing device 103, and the user indicates a desired destination location using the display/user interface 107 of the computing device 103.
  • the navigation module 108 determines a navigation path which exposes the user to the least amount of noise based on the map data.
  • the navigation module 108 may determine a navigation path that avoids at least one high noise level area.
  • An example of a navigation path 39 is shown on the map 31 in Figure 6.
  • the navigation path determined by the navigation module 108 may be compared to changes in the position of the user over time, as determined by the positioning module 20 of the headset 102, and the notification module 43 of the headset 102 may output an audio notification to the user 3 via the audio output device 13 of the headset 102, and/or the navigation module 108 may output a visual notification to the user 3 through the display/user interface 107, if the user 3 deviates from the navigation path.
  • Users can use the map 31 to determine their own paths by themselves, in order to limit their exposure to noise. Alternatively, users may instruct the device to determine the best route for limiting the users' exposure to noise using the navigation module 108.
  • the navigation module 108 may identify a noise level limit. The navigation module 108 then causes the display/user interface 107 to display paths from the user's location to the intended destination, where the noise level associated with each path is below the noise level limit.
  • routing software tools similar to those used in conventional navigation systems may be used, but with preferences such as determining the least noisy route or determining the shortest route which avoids noise levels above a certain threshold.
  • Any deviation from the chosen path may be detected by a rise in noise levels above the selected threshold. This may lead to an audible or visual warning.
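The routing preferences described above (least noisy route, or any route avoiding noise above a threshold) can be sketched as a shortest-path search over a graph whose edge weights are noise levels taken from the noise map. Treating cumulative noise as the routing cost, the `{node: [(neighbour, noise_dB), ...]}` graph format, and the optional per-segment limit are all illustrative assumptions:

```python
import heapq

def quietest_path(graph, start, goal, noise_limit=None):
    """Dijkstra-style search minimising cumulative noise along the route.
    Setting noise_limit excludes any segment louder than the threshold,
    implementing the 'avoid noise levels above a certain threshold'
    preference; returns (total_cost, path) or None if no route qualifies."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for neighbour, noise in graph.get(node, []):
            if noise_limit is not None and noise > noise_limit:
                continue  # skip segments louder than the selected limit
            new_cost = cost + noise
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour, path + [neighbour]))
    return None  # no route satisfies the noise limit
```

This mirrors conventional navigation routing tools with the distance metric replaced by mapped noise levels.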
  • the user's destination may be indicated by tapping the appropriate area on a pressure sensitive display screen. In addition, the user may be able to zoom-in on areas on the map for closer inspection.
  • the communication device 101 comprises a headset 102 and a computing device 103 in wireless or wired communication.
  • the illustrated embodiment has specific functions and modules of the communication device 101 assigned to different ones of the headset 102 and computing device 103.
  • the functions and modules of the communication device 101 may be differently distributed between the headset 102 and the computing device 103 as convenient.
  • the navigation module could be part of the headset, and the computing device 103 could be a "dumb" display which merely displays image data provided by the headset 102.
  • much of the functionality of the communication device 101 could be provided by the computing device 103. This may be advantageous in examples where the computing device 103 has significant on-board processing capability, such as where the display device is a smartphone.
  • Figure 12 illustrates a communication system 200 according to a third embodiment.
  • the communication system 200 according to the third embodiment has a hearing-test mode in which a communication device is arranged to determine an ear characteristic of a user; and an audio output mode in which the audio processing unit is arranged to provide an audio output, which is adjusted based on the determined ear characteristic.
  • a user can administer a hearing-test using the communication device, in order to determine a characteristic of their ears. Then, this characteristic can be used to adjust the audio output from the communication device in the audio output mode.
  • the audio output mode may also be referred to as an audio streaming mode. This allows a user to tune the communication device to their own hearing characteristics without having to visit a clinician.
  • Since the communication device conducts both the hearing test and the audio output, the communication device can be tuned immediately after the hearing-test. This avoids the need to wait for results to be processed by a separate unit. Furthermore, a user can use the communication device to customise its audio output based on at least one characteristic of their ears.
  • headphones and earphones such as those used for communication devices are designed to have only one audio output profile. However, this may have disadvantages, because each person has a different hearing profile, as different people hear sounds differently and may have different sensitivities to different frequencies. One particular earphone may therefore be acceptable for one person, but entirely inappropriate for another individual. It would therefore be desirable to provide a communication device with an audio output that can be optimised for individual users.
  • the communication device may be able to determine an ear characteristic of the user's ear more accurately by detecting a response to an audio test signal. For example, an audio test signal with a pre-defined frequency and amplitude may be output to the user's ear via an audio output device of the communication device. The communication device may then detect a response by receiving an input from the user indicating that the audio test signal has been heard. This allows the communication device to determine that the user is able to hear that particular sound frequency at a particular amplitude. This information can be used to adjust the output of the audio stream in the audio output mode, in order to optimise the user's hearing experience. The communication device may be arranged to output a plurality of pre-defined audio test signals via the audio output device, in the hearing test mode.
  • the communication device may be arranged to determine at least one characteristic of the ear of the user based on a response, or responses, to the plurality of pre-defined audio test signals.
  • Otoacoustic Emissions (OAEs) include Spontaneous Otoacoustic Emissions (SOAEs) and Evoked Otoacoustic Emissions (EOAEs).
  • SOAEs are emitted without external stimulation of the ear
  • EOAEs are emitted when the ear is subject to external stimulation.
  • the OAEs emitted by the ear of a user indicate characteristics of that user's ear.
  • the communication device may be provided with an ear-microphone which can detect sound emitted by the user's ear.
  • the communication device may determine a characteristic based on the OAEs, which in turn can be used to adjust the audio stream, in order to optimise the user's hearing experience.
  • the results of SOAE detection may be used as a basis for activating specific EOAE tests, for example by selecting a frequency and amplitude of external stimulation of the ear used to evoke EOAEs. Such results may be, for example, changes in the user's SOAE profile, which may be determined from the results of the SOAE detection.
  • a communication system 200 according to a third embodiment is shown, comprising a headset 201 which is communicatively connected to a computing device 202.
  • the headset 201 is substantially the same as the communication device 2 according to the first embodiment and the headset 102 according to the second embodiment, and comprises corresponding components.
  • the computing device 202 may be a smartphone. However, it will be appreciated that any other suitable computing device 202 may be used.
  • the environmental audio input device 16 is used to conduct a hearing test for the ear of the user 3, in order to determine at least one ear characteristic of the user's ear. This hearing test will be described in greater detail below.
  • the ear characteristic may represent the sensitivity of the ear to at least one frequency.
  • This ear characteristic can be stored at the storage module 19 at the headset 201, so that the communication module 14 can adjust the audio output of the audio output device 13 based on the ear characteristic.
  • the communication module 14 is arranged to adjust the audio output via the audio output device 13 based on the sensitivity of the ear to certain frequencies, so that frequencies to which the ear is less sensitive are amplified and/or frequencies to which the ear is more sensitive are attenuated. In this way, the headset can optimise the audio stream for an individual user's ear.
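The adjustment described above (amplify frequencies to which the ear is less sensitive, attenuate those to which it is more sensitive) can be sketched as a per-band gain derived from the stored ear characteristic. The threshold-difference gain rule, the 20 dB reference and the dict-based band representation below are all illustrative assumptions:

```python
def adjust_spectrum(band_levels, thresholds, reference_threshold=20.0):
    """Apply a per-frequency gain derived from the ear characteristic.
    band_levels maps frequency (Hz) to the unadjusted output level (dB);
    thresholds maps frequency to the user's measured hearing threshold (dB).
    Bands where the threshold exceeds the reference (less sensitive ear)
    receive positive gain; bands below it (more sensitive) are attenuated."""
    adjusted = {}
    for freq, level in band_levels.items():
        gain = thresholds.get(freq, reference_threshold) - reference_threshold
        adjusted[freq] = level + gain  # positive gain amplifies the band
    return adjusted
```

For example, a band where the user's threshold is 35 dB against a 20 dB reference would be boosted by 15 dB before output.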
  • the communication module 14 is arranged to operate in a hearing test mode and an audio output mode.
  • the communication module 14 is arranged to determine at least one ear characteristic of the ear of the user 3 based on a hearing test.
  • the communication module 14 is arranged to output an audio stream via the audio output device 13, where the audio stream is adjusted based on the at least one ear characteristic.
  • the computing device 202 comprises an antenna 105, a communication interface 104, a storage module 106, a display/user interface 107, an audio processing module 203 and an audio source module 204.
  • the antennas 11, 105 and the interfaces 10, 104 of the headset 201 and computing device 202 are used to establish a wireless connection between the headset 201 and the computing device 202, so that they can communicate with one another.
  • the headset 201 and the computing device 202 communicate wirelessly with one another, for instance, via Bluetooth® or via Wi-Fi.
  • the headset 201 and the computing device 202 may also communicate with one another via any other suitable connection, such as via a wired connection.
  • the computing device 202 also has an audio processing module 203, which performs a similar hearing test function to the communication module 14 of the headset 201.
  • the functions of the communication module 14 of the headset 201 and the audio processing module 203 of the computing device 202 may be shared between the modules 14, 203.
  • the audio processing module 203 at the computing device 202 can also be used to conduct hearing tests for determining an ear characteristic of the user's ear.
  • the audio processing module 203 can also be used for transmitting audio signals to the audio output device 13 via the antennas 11, 105 and communication interfaces 10, 104.
  • the computing device 202 further comprises an audio source module 204, which is arranged to interface with the audio processing module 203.
  • the audio source module 204 may be, for instance, a telephone link, or other audio or multi-media communications channel, a digital music player or a music streaming application.
  • the audio source module 204 is arranged to communicate with the headset 201 via the audio processing module 203, communication module 14, communication interfaces 10, 104 and antennas 11, 105 in order to output voice, music, or any other audio, via the audio output device 13.
  • the storage module 106 at the computing device 202 may be used for storing audio for output by the headset 201 .
  • the storage module 106 may be used to store ear characteristics of the user's ear.
  • the headset 201 is connected to the server 6 by the peer-to-peer network, and the computing device 202 may also be connected to the server 6.
  • the server 6 may be used for storing ear characteristics and/or audio for output via the headset 201 .
  • where the communications interface 104 of the computing device 202 is a peer-to-peer networking interface, the computing device 202 may communicate directly with the server 6 using the peer-to-peer network.
  • Figure 13 shows a flow chart illustrating a method of adjusting an audio output via the headset 201 based on an ear characteristic of the user's ear.
  • the user 3 selects the hearing test mode of the communication module 14 and/or audio processing module 203.
  • the user interacts with the display/user interface device 107 at the computing device 202 to activate a hearing test application.
  • the communication module 14 and/or the audio processing module 203 determines at least one ear characteristic of the user's ear by carrying out at least one hearing test. Different hearing tests that may be conducted by the communication module 14 and/or the audio processing module 203 will be described in greater detail below.
  • the ear characteristic can be stored at a storage module 19, 34 at the headset 201, the computing device 202, or at the server 6.
  • step 1304 the user selects the audio output mode of the communication module 14 and/or the audio processing module 203.
  • the user interacts with the display/user interface device 107 at the computing device 202 to activate an audio output mode.
  • the hearing test mode and the audio output mode are described as separate applications. However, the functionality of each of these modes may be integrated into a single application at the computing device 202.
  • the headset 201 may comprise at least one external environmental audio input device 16a located externally of an ear defender.
  • the external environmental audio input device 16a receives sound signals from the environment outside the headset 201. These sound signals can be processed by the communication module 14 and output by the audio output device 13 to allow the headset 201 to operate as a hearing aid to assist a user 3 to hear environmental sounds, such as speech from persons not wearing any headset, without removing the headset 201.
  • the headset 201 comprises at least one external environmental audio input device 16a
  • the user can select either the external environmental audio input device 16a or the audio source module 204 as the preferred source of audio. If the user selects the external environmental audio input device 16a, the method proceeds to step 1308. On the other hand, the user may select the audio source module 204 as the preferred source of audio, in which case the method proceeds to step 1310. In this example the audio source module 204 is a digital music player. However, any other type of suitable audio application may be used. Steps 1300-1304 may be carried out once or many times; the same applies to steps 1306-1310.
  • the external environmental audio input device 16a receives sound from the environment outside of the headset 201 and replays the received sound, in real-time, via the audio output device 13. Before replaying the sound, the audio stream is adjusted based on the ear characteristic determined and stored in the hearing test mode. This allows the headset 201 to optimise the user's hearing of sound in their environment.
  • if the user selects the external environmental audio input device 16a as the preferred source of audio while in a noisy environment, the communication module 14 will limit the maximum volume of the sound emitted by the audio output device in order to avoid any problems. In some examples the communication module 14 may monitor the volume of external environmental noise detected and may deselect, or disable selection of, the external environmental audio input device 16a as the preferred source of audio when the volume of noise detected is too high.
  • the audio source module 204 transmits audio for output via the audio output device 13.
  • the signals are transmitted via the audio processing module 203, the communication module 14, the communication interfaces 10, 104, and the antennas 1 1 , 105.
  • a wired communication mechanism could be used instead
  • the audio is adjusted based on the ear characteristic determined in the hearing test mode. This allows the headset 201 to optimise the user's hearing of live or pre-recorded music.
  • the audio source module 204 may be able to selectively provide audio received through a communications channel supported by the smartphone, instead of music.
  • steps 1306 and 1308 can be omitted, and the method can proceed directly from step 1304 to step 1310.
  • the communication module 14 may be arranged to selectively suppress output, via the audio output device 13, of audio sourced from the audio source module 204 or the external environmental audio input device 16a, in favour of audio communication received through the peer-to-peer network, such as audio communications from other users, or PA system messages.
  • Figure 14 shows a flow chart illustrating a more detailed example of the method in step 1302 of Figure 13 for determining an ear characteristic.
  • the user initiates the hearing test application using the display/user interface 107.
  • the communication module 14 and/or audio processing module 203 causes the audio output device 13 to output a first pre-defined audio test signal.
  • the first pre-defined audio test signal has a pre-defined frequency and amplitude.
  • the display/user interface 107 prompts the user 3 to provide a positive or a negative response via the display/user interface device 107.
  • a positive response indicates that the user 3 can hear the first test signal, whilst a negative response, or a lack of a response perhaps after a particular time period, indicates that the user 3 cannot hear the first test signal.
  • step 1406 the amplitude of the first test signal is increased.
  • the method repeats steps 1402-1406 until the display/user interface 107 receives a response from the user indicating that the test signal has been heard.
  • step 1408 the communication module 14 and/or audio processing module 203 determines an ear characteristic of the ear.
  • In this example, the characteristic determined is the sensitivity of the user's ear to the frequency of the test signal. For instance, this sensitivity may be recorded as the minimum amplitude at which the user is able to hear a particular frequency. This minimum amplitude may indicate that it is necessary to adjust the audio output so that audio signals at this frequency are either amplified or attenuated, depending on whether the user is less or more sensitive to that particular frequency.
  • the method proceeds to step 1410 in which the frequency of the test signal is changed. Then steps 1402-1408 are repeated in order to determine another ear characteristic of the ear, which in this case may be the user's sensitivity to the new frequency.
  • Steps 1402-1410 may be repeated for a range of test frequencies.
  • the communication device 200 is able to build a hearing profile for the user.
  • audio output can be adjusted accordingly in order to optimise the user's hearing experience.
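The ascending sweep of steps 1400-1410 can be sketched as follows. The `can_hear` callback stands in for playing the tone via the audio output device 13 and collecting the user's response via the display/user interface 107; the 5 dB step size, the start level and the 90 dB ceiling are illustrative assumptions:

```python
def measure_thresholds(frequencies, can_hear, start_db=0.0, step_db=5.0,
                       max_db=90.0):
    """Ascending pure-tone sweep: for each test frequency, raise the
    amplitude in steps until the listener reports hearing the tone, and
    record that minimum audible amplitude as the ear characteristic for
    the frequency (steps 1402-1410)."""
    profile = {}
    for freq in frequencies:
        level = start_db
        while level <= max_db and not can_hear(freq, level):
            level += step_db  # step 1406: increase the amplitude and retry
        # Record the threshold, or None if nothing was heard up to max_db.
        profile[freq] = level if level <= max_db else None
    return profile
```

Repeating this over a range of test frequencies yields the hearing profile used to adjust the audio output mode.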
  • the headset 201 has at least one internal environmental audio input device 16b which is located internally of an ear defender so that it is able to sense sounds emitted by an ear of the user 3.
  • the internal environmental audio input device 16b is arranged to be located at least partially inside an ear of the user 3 when the headset 201 is in use.
  • in other examples, the internal environmental audio input device 16b is arranged to be located outside the ear.
  • Figure 15 shows a flow chart illustrating an alternative example of the method in step 1302 of Figure 13 for determining an ear characteristic which may be used in examples where the headset has at least one internal environmental audio input device 16b.
  • This method of determining an ear characteristic relies on detecting Otoacoustic Emissions (OAEs) emitted by the ear of the user.
  • OAEs are sounds given off by the inner ear as a result of an active cellular process.
  • a soundwave enters the ear canal it is transmitted to the fluid of the inner ear via the middle ear bones.
  • the air borne vibrations are converted into fluid borne vibrations in the cochlea.
  • the fluid borne vibrations in the cochlea result in the outer hair cells producing a sound that echoes back into the middle ear.
  • Outer hair cell vibrations can be induced by either external sound waves (EOAEs) or internal mechanisms (SOAEs).
  • the middle ear matches the acoustic impedance between the air and the fluid, thus maximizing the flow of energy from the air to the fluid of the inner ear. Impairment in the transmission of sound through the middle ear creates a conductive hearing loss which can be compensated by increasing the amplification of sounds entering the ear canal. Therefore, more energy is needed for the individual with a conductive hearing loss to hear sound, but once any audio is loud enough and the mechanical impediment is overcome, the ear works in a normal way. OAE results in this case would typically show non-frequency specific hearing loss in the form of reduced amplitudes above the noise floor across the frequency range of hearing.
  • the outer hair cells (OHC) of the cochlea of the inner ear perform an active amplification function, enhancing the sensitivity and frequency selectivity of the inner ear.
  • OAEs in general provide reliable information on the ear's auditory pathway characteristics which can also be a significant help in preventing noise related hearing loss.
  • OAEs can provide the means to monitor a patient for early signs of noise related hearing damage. Excessive noise exposure affects outer hair cell (OHC) functionality, so OAEs can be used to detect this.
  • An OAE evaluation can give a warning sign of outer hair cell damage before it is evident on an audiogram.
  • OAEs are more sensitive in detecting cochlear dysfunctions, since the outer hair cells are the first structure of the inner ear to be damaged by external agents, even before changes in audiometric thresholds are recorded.
  • step 1500 the internal environmental audio input device 16b is used to detect SOAEs emitted by the user's ear.
  • the audio output device 13 does not provide any stimulus to the ear.
  • the method proceeds to step 1502 in which an ear characteristic is determined based on the SOAEs, or lack thereof.
  • SOAEs can be considered as continuously evoking otoacoustic emissions which provide supplementary information on the ear's auditory pathway characteristics. Accordingly, SOAEs are ideally suited to monitoring the user's hearing abilities during quiet periods to identify the onset of any hearing impairment without any user cooperation or awareness of the monitoring being necessary.
  • Spontaneous otoacoustic emissions typically show multiple narrow frequency spikes above the noise floor indicating normal functionality. An attenuation of these spikes over time could indicate impending noise related hearing impairment which may become permanent unless appropriate action is taken. The attenuation of these spikes may be recorded as an ear characteristic, and audio output can be adjusted accordingly, for instance, by increasing amplitude of audio output at these frequencies.
  • step 1504 the audio output device 13 outputs an audio test signal as a stimulus to the ear.
  • the stimulus is arranged to cause the ear to emit an EOAE, which is detected in step 1506 if any EOAEs are emitted.
  • an ear characteristic is determined based on the EOAE, or lack thereof.
  • the results of the SOAE detection in step 1500 may be used as a basis for activating specific EOAE tests in steps 1504 and 1506. Such results may be, for example, changes in the user's SOAE profile, which may be determined from the results of the SOAE detection in step 1500.
  • a pure-tone stimulus is output and stimulus frequency OAEs (SFOAEs) are measured during the application of the pure-tone stimulus.
  • SFOAEs stimulus frequency OAEs
  • the SFOAEs are detected by measuring the vectorial difference between the stimulus waveform and the recorded waveform, which consists of the sum of the stimulus and the OAE.
  • a click, a broad frequency range, a tone burst or a brief duration pure tone is output and transient evoked OAEs (TEOAEs or TrOAEs) are measured.
  • TEOAEs or TrOAEs transient evoked OAEs
  • the evoked response from a click covers the frequency range up to around 4 kHz, while a tone burst will elicit a response from the region that has the same frequency as the pure tone.
  • DPOAEs distortion product OAEs
  • a pair of primary tones is output at frequencies f1 and f2, and the corresponding DPOAEs are measured to determine an ear characteristic.
  • the pair of primary tones of similar intensity have a frequency ratio f2/f1 which typically lies at around 1.2, from which strong distortion products (DP) should be detected at 2f1-f2 and at 2f2-f1, where f2 is the higher-frequency tone.
  • DP strong distortion products
  • EOAEs measure the conductive mechanism characteristics of the ear including the integrity of the outer hair cells (OHC) which can be damaged by exposure to high levels of noise.
  • OHC outer hair cells
  • SFOAE stimulus-frequency OAEs
  • the non-linear function of the cochlea will need to be taken into account.
  • EOAE measurements provide frequency-specific information about hearing ability in terms of establishing whether auditory thresholds are within normal limits, which is important for hearing aid settings and for diagnosing sensory or conductive hearing impairment, which can lead to problems understanding speech in the presence of background noise.
  • the headset 201 includes a sound pressure sensor 205 and a probe 206.
  • the probe 206 is arranged to be inserted at least partially inside the ear when the headset 201 is in use. This allows the sensor 205 to measure the sound pressure level within the ear.
  • the sound pressure sensor 205 and the probe 206 are used to conduct another hearing test in order to determine an ear characteristic, which again can be used to optimise the audio output.
  • Figure 16 shows a flow chart illustrating a different example of the method of determining an ear characteristic, as in step 1302 of Figure 13, which may be used in examples where the headset 201 includes a sound pressure sensor 205 and a probe 206. This method of determining an ear characteristic relies on the sound pressure level in the ear 9 of the user.
  • the probe 206 and the sound pressure sensor 205 of the headset 201 measure the sound pressure level in the user's ear.
  • the probe 206, which in this instance is a probe tube, is placed with its tip approximately 6 mm from the tympanic membrane of the ear.
  • the sound pressure level is measured when there is no audio output via the audio output device 13, or in other words when the audio output device 13 is inactive. This sound pressure level may be referred to as an unaided sound pressure level.
  • step 1602, as in step 1600, the probe 206 and the sound pressure sensor 205 of the headset 201 measure the sound pressure level in the user's ear. However, in this step the sound pressure level is measured when the audio output device 13 is outputting an audio signal. Thus, in this step the sound pressure level is measured when the audio output device 13 is active. This sound pressure level may be referred to as an aided sound pressure level.
  • step 1604 the communication module 14 and/or audio processing module 203 calculates the difference between the unaided sound pressure level and the aided sound pressure level in order to determine the 'insertion gain'.
  • the insertion gain may be described as an ear characteristic. This characteristic can be matched to targets produced by various prescriptive formulae based on the user's audiogram or individual hearing loss.
  • the size and shape of the ear canal affects the acoustics and resonant qualities of the ear.
  • in real-ear measurement, the actual acoustic energy that exists within the ear canal of a particular person is accurately measured.
  • Real-ear measurements REMs
  • Machine learning algorithms use audio sensing in diverse and unconstrained acoustic environments to adjust the user's listening experience according to a learned model based on a large dataset of place visits.
  • DeepEar is an example of micro-powered machine learning using deep neural networks (DNN) to significantly increase inference robustness to background noise beyond conventional approaches present in mobile devices. It uses computational models to infer a broad set of human behaviour and context from audio streams.
  • DNN deep neural networks
  • REMs allow the effects of adjustment of the audio output by the headset 201 to be verified by taking into account any changes to the sound pressure level (SPL) of the signal caused by the shape of the ear.
  • Fine tuning may include adjusting the overall volume, or making changes at specific pitches/frequencies.
  • REUR real-ear unaided response
  • REOG Real Ear Occluded Gain
  • the insertion gain is the difference REAR - REUR (real ear aided response minus real ear unaided response in sound pressure levels) or REAG - REUG (real ear gain parameters).
  • REAR - REUR real ear aided response minus real ear unaided response in sound pressure levels
  • REAG - REUG real ear gain parameters
  • Figure 18 illustrates an example of the output displayed on the display/user interface 107 of the computing device 202 when the communication module 14 and/or audio processing module 203 are in the audio output mode.
  • the display/user interface 107 presents a graph 60 to the user.
  • a first line 62 on the graph 60 displays the constituent frequencies within the sound from the environment received via the external environmental audio input device 16a, along with the amplitude of each of the frequencies.
  • the user can select points 64A-F along the first line 62 using the display/user interface 107. Once one of the points 64A-F has been selected the user can drag that point to a desired amplitude.
  • the user may be listening to the sounds in the surrounding environment using the headset 201 via the audio output device 13, and there may be a repetitive and loud low-frequency noise in the audio stream. This noise may be hindering the user's ability to hear a person speak.
  • the user may select points 64A and 64B and drag them down in order to reduce their amplitude so that the noise is less prominent in the audio output.
  • the user has created a second line, which represents the actual output of the audio output device 13, and the graph 60 displays the difference between the actual sounds in the environment in comparison to the sounds output via the headset 201.
  • the headset 201 and/or computing device 202 may select audio having a particular frequency above a certain threshold and lower the amplitude of the selected audio automatically, without intervention from the user.
  • the computing device 202 may be arranged to receive an input from a user indicating a preferred frequency response for the audio stream output in the audio output mode.
  • the user may be able to adjust a graphic equaliser presented via the display/user interface 107.
  • the communication module 14 and/or audio processing module 203 may be arranged to adjust the output of the audio stream in the audio output mode based on the at least one ear characteristic determined in the hearing-test mode and the preferred frequency response indicated by the user. It is therefore possible to optimise the output audio stream based on a combination of user preferences and the results of the hearing test. In this way, the user may be able to 'fine-tune' their listening experience in order to achieve the optimum audio output.
  • Figure 19A and Figure 19B illustrate an example of a user's hearing profile.
  • the user's measured hearing profile is compared to a range of reference values within which normal hearing is considered to lie.
  • the DPOAE measurements in Figure 19A correlate closely with the audiometric profile in Figure 19B of a hearing loss patient.
  • TOAE data which is not illustrated here.
  • the noise floor is the lower curve in Figure 19A, and the notch in the curve indicating hearing loss lies near the 15 dB threshold. From this data, notches in the user's hearing profile can be compensated for by adding the correct amount of insertion gain into the hearing device electronics at those frequencies, as previously described.
  • the second and third embodiments described above each comprise a computing device, and the computing devices of the second and third embodiments comprise some different modules.
  • a communication device may be provided having the functionality of both of the second and third embodiments combined.
  • the computing device may comprise the modules of the computing devices of both the second and third embodiments.
  • modules may comprise software running on a computing device such as a processor, may comprise dedicated electronic hardware, or may comprise a combination of software and hardware.
  • the embodiments described above include a server.
  • the server may comprise a single server or network of servers.
  • the server may comprise a network of separate servers which each provide different functionality.
  • the functionality of the server may be provided by a network of servers distributed across a geographical area, such as a worldwide distributed network of servers, and a user may be connected to an appropriate one of the network of servers based upon a user location.
  • the embodiments described above comprise a peer-to-peer communications network connecting the communication devices. In some examples other types of communication networks may be used.
  • the embodiments described above comprise a peer-to-peer communications network formed by the communication devices in which noise mapping is carried out based on noise measurements made by the communication devices, and in some examples by fixed sensor nodes of the communication network, typically located in high noise locations. It is explained above that the peer-to-peer network may be supported by other devices. These other devices may include communication devices which are not used to carry out noise measurements, and noise measuring devices which are not communication devices.
  • the embodiments described above employ separate audio input devices, such as microphones, to receive user voice input and environmental noise input. In some examples one or more audio input devices may each be used to receive both user voice input and environmental noise input instead of, or in addition to, the separate audio input devices.
  • the embodiments described above comprise a combined communication device and noise dosimeter able to provide communications, monitor the cumulative noise levels to which a user has been exposed over time, and to provide noise data together with associated location data. In some examples the combined communication device and noise dosimeter may only provide communications and monitor the cumulative noise levels to which a user has been exposed over time, or may only provide communications and provide noise data together with associated location data.
  • the illustrated embodiments disclose communication devices each comprising a headset mounted on and supported by the head of a user, and covering and protecting both ears of the respective user.
  • a pair of communication devices may be used, each mounted on and protecting a single ear of the respective user.
  • an internal environmental audio input device is used to detect SOAEs emitted by the user's ear.
  • a dedicated audio input device separate from any environmental audio input device may be used to detect the SOAEs.
  • the illustrated embodiments disclose a communication device comprising an environmental audio input device.
  • the communication device may comprise a plurality of environmental audio input devices.
  • the second and third embodiments described above each have a communication device comprising a headset and a computing device.
  • the functionality of the computing device may be provided by the headset, so that the communication device comprises a headset only.
  • the above description discusses embodiments of the invention with reference to a single user for clarity. It will be understood that in practice the system may be shared by a plurality of users, and possibly by a very large number of users simultaneously.
  • each communication device 2 may comprise a low power sub-GHz ISM band radio that does not depend on a mesh network for wide area peer-to-peer coverage and connects wirelessly to a remote hub without the need for hopping from node to node.
  • the communication system may include P2P group functions where the supervisor/manager is given the option of group ownership which may extend to multiple concurrent P2P groups using Wi-Fi or other such technology, or a group communication system (GCS) where the network is divided into optional sub-groups.
  • GCS group communication system
  • the illustrated examples show a single communication system, for simplicity.
  • a plurality of communication systems may be connected together or interconnected by a network infrastructure to provide communication links between different interconnected groups of devices, which groups may be remotely located.
  • the plurality of communication systems may be connected together or interconnected by a network infrastructure such as infrastructure meshing with client meshing (P2P).
  • P2P infrastructure meshing with client meshing
  • the system monitors exposure to noise and outputs noise map data.
  • the communication device may additionally be provided with suitable sensors to measure other environmental conditions than noise. Examples of such environmental conditions include airborne dust concentration or temperature, such as excessive heat or cold. In such examples the system can additionally measure and track users' exposure to these environmental conditions and/or map these environmental conditions in a corresponding manner to that described above for noise.
  • the communication device may be provided with suitable sensors to measure other environmental conditions or hazards as an alternative to noise sensors. Examples of such environmental conditions include dust or temperature, such as excessive heat or cold. In such examples the system can measure and track users' exposure to these environmental conditions and/or map these environmental conditions in a corresponding manner to that described above for noise.
  • navigation functions and notifications may include other hazards.
  • navigation function and notifications may relate to any one or more of noise, dust and heat.
  • navigation function and notifications may relate to only heat or only dust respectively.
  • the system may be implemented as any form of a computing and/or electronic device.
  • a device may comprise one or more processors which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to gather and record routing information.
  • the processors may include one or more fixed function blocks (also referred to as accelerators) which implement a part of the method in hardware (rather than software or firmware).
  • Platform software comprising an operating system or any other suitable platform software may be provided at the computing-based device to enable application software to be executed on the device.
  • Computer-readable media may include, for example, computer-readable storage media.
  • Computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • a computer-readable storage media can be any available storage media that may be accessed by a computer.
  • Such computer-readable storage media may comprise RAM, ROM, EEPROM, flash memory or other memory devices, CD-ROM or other optical disc storage, magnetic disc storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • Disc and disk include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD).
  • BD Blu-ray disc
  • Computer-readable media also includes communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a connection, for instance, can be a communication medium.
  • the functionality described herein can be performed, at least in part, by one or more hardware logic components.
  • hardware logic components may include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
  • the term 'computer' is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realise that such processing capabilities are incorporated into many different devices and therefore the term 'computer' includes PCs, servers, mobile telephones including smartphones, personal digital assistants and many other devices.
  • a remote computer may store an example of the process described as software.
  • a local or terminal computer may access the remote computer and download a part or all of the software to run the program.
  • the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network).
  • all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
  • the figures illustrate exemplary methods. While the methods are shown and described as being a series of acts that are performed in a particular sequence, it is to be understood and appreciated that the methods are not limited by the order of the sequence. For example, some acts can occur in a different order than what is described herein. In addition, an act can occur concurrently with another act. Further, in some instances, not all acts may be required to implement a method described herein.
  • the acts described herein may comprise computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media.
  • the computer-executable instructions can include routines, sub-routines, programs, threads of execution, and/or the like.
  • results of acts of the methods can be stored in a computer-readable medium, displayed on a display device, and/or the like.
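The SOAE monitoring described in steps 1500-1502 above looks for narrow frequency spikes standing above the noise floor, and treats attenuation of those spikes over time as an early warning of hearing impairment. The peak-picking step can be sketched minimally in Python; the function name, the 6 dB margin and the per-bin noise-floor input are illustrative assumptions, not part of the disclosure:

```python
def soae_peaks(spectrum_db, noise_floor_db, margin_db=6.0):
    """Return indices of spectral bins whose level exceeds the local
    noise floor by at least margin_db -- candidate SOAE spikes.

    spectrum_db and noise_floor_db are per-bin levels in dB for the
    same FFT bins; the 6 dB margin is an illustrative threshold,
    not a clinical criterion.
    """
    return [i for i, (s, n) in enumerate(zip(spectrum_db, noise_floor_db))
            if s - n >= margin_db]

# Two bins (1 and 3) stand well above a flat -22 dB noise floor:
peaks = soae_peaks([-20, -5, -19, -8, -21], [-22, -22, -22, -22, -22])
```

Comparing the peak set, or the peak amplitudes, between recording sessions would then reveal the attenuation of spikes that the text treats as an indicator of impending noise-related hearing impairment.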
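The DPOAE test described above presents a pair of primary tones at a frequency ratio of about 1.2 and looks for distortion products at 2f1-f2 and 2f2-f1. That arithmetic can be sketched as follows; the helper name and dictionary layout are illustrative, not from the disclosure:

```python
def dpoae_frequencies(f1_hz, ratio=1.2):
    """Return the primary-tone pair and the expected cubic
    distortion-product frequencies for a DPOAE test, given the
    lower primary f1 and the conventional f2/f1 ratio of ~1.2."""
    f2_hz = f1_hz * ratio
    return {
        "f1": f1_hz,
        "f2": f2_hz,
        "dp_low": 2 * f1_hz - f2_hz,   # 2f1 - f2, usually the dominant DP
        "dp_high": 2 * f2_hz - f1_hz,  # 2f2 - f1
    }

# Example: a 1 kHz lower primary gives f2 = 1200 Hz,
# with distortion products expected at 800 Hz and 1400 Hz.
dp = dpoae_frequencies(1000.0)
```

The ear characteristic is then determined from the measured level of these distortion products relative to the noise floor, as in Figure 17A.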
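Steps 1600-1604 above derive the insertion gain as the difference between the aided and unaided sound pressure levels measured in the ear canal (REAR - REUR). A hedged per-band sketch; the band choice and function name are assumptions for illustration:

```python
def insertion_gain(aided_spl_db, unaided_spl_db):
    """Insertion gain per frequency band: aided SPL minus unaided SPL,
    both measured near the tympanic membrane (REAR - REUR in dB SPL)."""
    if len(aided_spl_db) != len(unaided_spl_db):
        raise ValueError("band counts must match")
    return [a - u for a, u in zip(aided_spl_db, unaided_spl_db)]

# Example: SPL measurements at 500 Hz, 1 kHz, 2 kHz and 4 kHz
gain = insertion_gain([72.0, 75.0, 78.0, 70.0], [65.0, 64.0, 66.0, 62.0])
# -> [7.0, 11.0, 12.0, 8.0] dB per band
```

The resulting per-band gains would then be matched against the targets produced by a prescriptive formula based on the user's audiogram, as described above.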

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Pathology (AREA)
  • Neurosurgery (AREA)
  • Psychology (AREA)
  • Vascular Medicine (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)

Abstract

A communication device comprising a peer-to-peer communication interface arranged to establish a connection between the communication device and at least one other communication device via a peer-to-peer network, an audio input device for receiving audio from a user, a communication module arranged to transmit to one of the at least one other communication devices, via the peer-to-peer communication interface, audio data based on audio received from the user via the audio input device; and arranged to receive audio data from the one of the at least one other communication devices, via the peer-to-peer networking interface, an audio output device for outputting audio based on the received audio data, at least one ear defender for reducing noise exposure, an audio input device for receiving environmental audio, a noise level module arranged to determine a noise level based on the received environmental audio, and a positioning module arranged to determine a position of the audio input device corresponding with the determined noise level and arranged to associate the position of the audio input device with the corresponding noise level.

Description

IMPROVED COMMUNICATION DEVICE
[0001] The present application relates to a communication device, a communication system comprising a plurality of communication devices and a method of operation.
Background
[0002] When persons are exposed to a noisy environment there is a risk that their exposure to sound and noise may exceed recommended safety exposure limits. This can lead to negative impacts on a person's hearing. For instance, a person may experience symptoms such as deafness and tinnitus. This can be a particular problem for people working in noisy factory environments and other loud places such as heavy construction sites or loud entertainment venues.
[0003] There exists a need to be able to more accurately monitor the sound/noise levels to which a person is exposed and the circumstances corresponding with the sound/noise levels. In addition, it would be desirable to be able to generate data to enable people to make preemptive decisions that may help to lower their exposure to noise.
[0004] Ear defenders may be used to protect persons from sound and noise exposure in a noisy environment by blocking sound energy from reaching their ears. However, ear defenders generally block all sound indiscriminately. Accordingly, the use of ear defenders can make it difficult to communicate with persons using them. This can lead to reduced efficiency in the workplace, and potentially to reduced safety, because it may be difficult for persons using ear defenders to hear warnings or requests for assistance.
[0005] The embodiments described below are not limited to implementations which solve any or all of the disadvantages of the known approaches described above.
Summary
[0006] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to determine the scope of the claimed subject matter.
[0007] The present disclosure provides a communication device comprising: a
communication means providing audio communication over a peer-to-peer network; an ear defender for reducing noise exposure; a noise measuring means to determine noise level; and a positioning means arranged to determine a position, the device associating the determined position with the corresponding noise level.
[0008] In a first aspect, the present disclosure provides a communication device comprising: a peer-to-peer communication interface arranged to establish a connection between the communication device and at least one other communication device via a peer-to-peer network; an audio input device for receiving audio from a user; a communication module arranged to transmit to one of the at least one other communication devices, via the peer-to-peer communication interface, audio data based on audio received from the user via the audio input device; and arranged to receive audio data from the one of the at least one other communication devices, via the peer-to-peer networking interface; an audio output device for outputting audio based on the received audio data; at least one ear defender for reducing noise exposure; an audio input device for receiving environmental audio; a noise level module arranged to determine a noise level based on the received environmental audio; and a positioning module arranged to determine a position of the audio input device corresponding with the determined noise level; and arranged to associate the position of the audio input device with the corresponding noise level.
[0009] In a second aspect, the present disclosure provides a combined noise dosimeter and communication device comprising: an audio input device for receiving audio; and a noise level module arranged to determine a noise level based on the received audio; wherein the audio input device is associated with a positioning module arranged to: determine a position of the audio input device corresponding with the determined noise level; and arranged to associate the position of the audio input device with the corresponding noise level; an audio input device for receiving audio from a user; an audio output device for outputting audio to the user; a communication interface for transmitting and receiving over a network; and a head-mount or ear-mount comprising ear defenders for reducing noise level exposure of a user.
[0010] In a third aspect, the present disclosure provides a communication system comprising a plurality of communication devices according to the first aspect or the second aspect connected to one another via a peer-to-peer network.
[0011] In a fourth aspect, the present disclosure provides a method of monitoring noise exposure using a communication device according to the first aspect or the second aspect, the method comprising: receiving audio at the audio input device for receiving environmental audio; determining a noise level based on the received audio; determining a position of the audio input device corresponding with the determined noise level; and associating the position of the audio input device with the corresponding noise level.
[0012] In a fifth aspect, the present disclosure provides a computer program comprising code portions which, when executed on a processor of a computer, cause the computer to carry out a method according to the fourth aspect.
[0013] In another aspect, the present disclosure provides a communication device comprising: a peer-to-peer communication interface arranged to establish a connection between the communication device and at least one other communication device via a peer-to-peer network; an audio input device for receiving audio from a user; a communication module arranged to transmit to one of the at least one other communication devices, via the peer-to-peer communication interface, audio data based on audio received from the user via the audio input device; and arranged to receive audio data from the one of the at least one other communication devices, via the peer-to-peer networking interface; an audio output device for outputting audio based on the received audio data; at least one ear defender for reducing noise exposure; an input device for receiving an environmental parameter other than audio; a level module arranged to determine an environmental parameter level based on the received environmental parameter; and a positioning module arranged to determine a position of the input device corresponding with the determined environmental parameter level; and arranged to associate the position of the input device with the corresponding environmental parameter level. Further, the present disclosure provides a corresponding communication system, monitoring method and computer program.
[0014] The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc. and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
[0015] This application acknowledges that firmware and software can be valuable, separately tradable commodities.
[0016] The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
Brief Description of the Drawings
[0017] Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
[0018] Figure 1 is a schematic diagram showing the general architecture of a communication system according to a first embodiment;
[0019] Figure 2 is a schematic diagram showing the general architecture of a communication device useable in the communication system of figure 1;
[0020] Figure 3 shows a flow chart illustrating a method of operation of the system;
[0021] Figure 4 illustrates a look up table that can be used to determine a user's allowable exposure to noise in percentage terms;
[0022] Figure 5 shows a flow chart illustrating a method of operation of the system;
[0023] Figure 6 shows an example of a map display generated by the system;
[0024] Figure 7 shows a flow chart illustrating a method of activating different modes at the communication device;
[0025] Figure 8 shows a flow chart illustrating a method of using the communication device in a 'connection enabled' mode;
[0026] Figure 9 shows a flow chart illustrating a method of using the communication device in a 'voice recognition' mode;
[0027] Figure 10 is a schematic diagram showing the general architecture of a
communication system according to a second embodiment;
[0028] Figure 11 is a schematic diagram showing the general architecture of a computing device useable in the communication system of figure 10;
[0029] Figure 12 is a schematic diagram showing the general architecture of a
communication system according to a third embodiment;
[0030] Figure 13 shows a flow chart illustrating a method of adjusting an audio output based on an ear characteristic of a user's ear;
[0031] Figure 14 shows a flow chart illustrating an example of a method of determining an ear characteristic;
[0032] Figure 15 shows a flow chart illustrating another example of a method of determining an ear characteristic;
[0033] Figure 16 shows a flow chart illustrating another example of a method of determining an ear characteristic;
[0034] Figure 17A illustrates a graph of a user's ear response to distortion product otoacoustic emissions (DPOAEs);
[0035] Figure 17B illustrates a graph of a real ear aided response (REAR) and a real ear unaided response (REUR) for a device matched to a user's ear;
[0036] Figure 18 illustrates an example of a display in an audio output mode of the communication system of the third embodiment; and
[0037] Figures 19A and 19B illustrate examples of a user's hearing profile.
[0038] Common reference numerals are used throughout the figures to indicate similar features.
Detailed Description
[0039] Embodiments of the present invention are described below by way of example only. These examples represent the best ways of putting the invention into practice that are currently known to the Applicant although they are not the only ways in which this could be achieved. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
[0040] Figure 1 illustrates a communication system 1 according to a first embodiment, comprising a plurality of communication devices 2 and a server 6. Each of the communication devices 2 is worn by a different user 3. Each communication device 2 comprises a pair of ear defenders 4. The ear defenders 4 are sound reducing, and so reduce the noise level exposure of the respective user's ears, protecting the ears and hearing of the respective user 3 from damage by excessive sound and noise exposure. Figure 2 illustrates a single communication device 2 in more detail.
[0041] The communication system 1 is arranged to provide communications between the different users 3 using their respective communication devices 2, and also to gather noise data and operate as a noise dosimeter. Accordingly, the communication system 1 is able to provide a combined communications system and noise dosimeter.
[0042] In the illustrated embodiment of figure 1 the communication device 2 comprises a pair of ear defenders 4 physically connected by a linking arm 5 to form a headset mounted on and supported by the head of a user 3, and covering and protecting both ears of the respective user 3.
[0043] Each communication device 2 can be connected to another communication device 2, and to the server 6, using a respective network interface or communication interface 10 at each communication device 2. The server 6 also has a communication interface 63. The communication devices 2 may be connected to one another, and to the server 6, directly or indirectly via another communication device 2. The communication interfaces 10 and 63 are arranged to support peer-to-peer networking. In this way, the communication devices 2 are connected to one another and to the server 6 to form a peer-to-peer network, so that the users 3 can communicate with one another, and the communication devices 2 can send noise data to the server 6. A MESH network is one type of peer-to-peer network that may be used to connect the plurality of communication devices to one another, and to the server 6.
[0044] A wireless MESH network (IEEE 802.15.4) is an ad-hoc network formed by devices which are in range of one another. It is a peer-to-peer cooperative communication infrastructure in which wireless access points (APs) and nearby devices act as repeaters that transmit data from node to node. In some cases, many of the APs are not physically connected to a wired network. The APs and other devices create a mesh with each other that can route data back to a wired network via a gateway.
[0045] APs and other devices acting as repeaters may be included in the system 1 to support the peer-to-peer network. Such APs and other devices are not shown in figure 1 to improve clarity. In some examples these repeaters may form fixed nodes of the communication system 1. In some examples the server 6 and/or any gateway may be fixed nodes of the communication system or be connected to fixed nodes of the communication system 1.
[0046] A wireless mesh network becomes more efficient with each additional network connection. Wireless mesh networks feature a "multi-hop" topology in which data packets "hop" short distances from one node to another until they reach their final destination. The greater the number of available nodes, the greater the distance the data packet may be required to travel. Increasing capacity or extending the coverage area can be achieved by adding more nodes, which can be fixed or mobile.
[0047] The communication system 1 comprises a peer-to-peer network of communication devices 2 which enables communication over the network using short range low power wireless links. This can require considerably less computing and signal transmission power than in other communication devices. In addition, this can allow the communication devices 2 to consume less power and to have a simpler and smaller design. The peer-to-peer network may comprise the communication devices 2 and server 6 only. However, in another example, the peer-to-peer network may comprise the communication devices 2 and server 6 as well as other devices such as the APs described above.
[0048] In the workplace environment, an employee equipped with one of the communication devices 2 described herein can be reachable at all times. This may avoid the need for a general announcement system such as a public address (PA) system, which uses one or more loudspeakers to communicate information to many employees. A PA system may not be appropriate in some situations. Furthermore, the communication device 2 can avoid the need for an employee to carry around a conventional mobile telephone.
[0049] The first embodiment provides a compact, simple and inexpensive communication device, providing a solution to the problems associated with known communication devices, which are often bulky, complex and expensive. This can be a problem, in particular, in a workplace environment where there are a number of employees each requiring their own communication device in order to communicate with one another. A bulky communication device may hinder an employee's ability to go about their work, whilst a complex communication device may be difficult for an employee to use. In addition, if each individual communication device is expensive, then it will become very costly for an employer to equip their entire workforce with communication devices.
[0050] The communication device 2 comprises a networking interface or communication interface 10 and an antenna 11. The communication interface 10 is arranged to establish a connection between the communication device 2 and another similar communication device via a peer-to-peer network, in which the other similar communication devices also include a peer-to-peer networking capable communication interface.
[0051] The communication device 2 comprises a voice audio input device 12 which is arranged to receive audio from a user 3 using the communication device 2, that is, the user 3 wearing the headset. The communication device 2 also comprises an environmental audio input device 16, such as an external microphone. Thus, the communication device 2 is able to receive voice input from the user 3, and able to receive audio from the environment in which the communication device 2 and the user 3 are located. Each of the voice audio input device 12 and the environmental audio input device 16 may be a microphone, or any other suitable audio input device.
[0052] In the example illustrated in Figures 1 and 2, the voice audio input device 12 is arranged on an arm external to an ear defender 4, so that the voice audio input device 12 can be arranged proximate to the user's mouth. However, in another example the voice audio input device 12 comprises an in-ear microphone which receives amplitude modified user speech signals conducted into the ear canal via bone material, which is referred to as the occlusion effect. In this case, it is the user speech signals received through this occlusion effect which are used for user voice recognition. Here, the frequency spectrum of speech is modified by the occlusion effect causing an elevation of the lower tones. This technique may enable ease of user transferability, unlike conventional voice recognition systems which require stored voice samples.
[0053] The communication device 2 comprises an audio output device 13, such as a speaker, which is arranged to output audio to the user 3.
[0054] In the illustrated example of the communication device 2, only a single audio output device 13 is shown, but preferably the communication device 2 is provided with a pair of audio output devices 13, one for each ear of the user 3. In some examples a separate communication device may be associated with each ear of the user 3, with each communication device having only a single audio output device. In the illustrated example the audio output device 13 is shown schematically. However, the audio output device 13 may be any form of listening device such as a headphone, an earphone or an earbud.
[0055] The communication device 2 comprises a communication module 14 which is arranged to transmit, via the communication interface 10 to another communication device 2 of the communication system 1, audio received from the user 3 via the voice audio input device 12. In addition, the communication module 14 is arranged to receive audio data from other communication devices 2, via the communication interface 10, and provide this to the user 3 via the audio output device 13. Typically, the communication devices 2 can conduct two-way communication between one another. However, the communication device 2 may engage in one-way communication with one or many other communication devices. In addition, the communication module 14 is arranged to send noise level data regarding audio from the environment received via the environmental audio input device 16 to the server 6, via the communication interface 10.
[0056] Thus, the communication devices 2 of the communications system 1 provide two-way and one-way audio communication between the different users 3 of the system 1 using the voice audio input devices 12, the communication modules 14, the communication interfaces 10 and the audio output devices 13 of the different communication devices 2.
[0057] The communication device 2 comprises a voice recognition module 15 which is arranged to receive voice inputs from a user 3 via the voice audio input device 12. The voice recognition module 15 is arranged to store a number of pre-defined voice commands each associated with an action. The voice recognition module 15 is arranged to detect a match between voice input and one of the pre-defined voice commands, and is arranged to perform the action associated with the matching voice command.
[0058] In addition, the voice recognition module 15 is arranged to control the communication interface 10 and the communication module 14. For instance, the voice recognition module 15 is arranged to cause the communication interface 10 to initiate establishing a connection between the communication device 2 and another communication device 2 based on audio commands received from the user 3 via the voice audio input device 12. The voice recognition module 15 may be arranged to cause the communication module 14 to communicate with another communication device 2.
[0059] The communication device 2 further comprises a user-interface switch 17 and a control module 18. In the illustrated example, the user-interface switch 17 is a pressure sensitive switch 17. However, any other suitable type of switch, control or contact sensor may be used.
[0060] The user-interface switch 17 and the control module 18 are arranged to activate different modes at the communication module 14. In the illustrated example, there is only one user-interface switch 17 for activating different modes at the communication module 14. Furthermore, in the illustrated example the communication device 2 comprises only one user-interface switch 17.
[0061] The control module 18 is arranged to store a number of pre-defined user interactions with the user-interface switch 17. In addition, each pre-defined user interaction is associated with a different action to be performed at the control module 18.
[0062] The control module 18 is arranged to detect a user interaction with the user-interface switch 17 and a match between the detected user interaction and one of the pre-defined user interactions. Then, the control module 18 is arranged to perform the action associated with the matching detected user interaction.
[0063] The communication module 14 is configured to be able to operate in a plurality of different modes, and the control module 18 is arranged to detect whether one of a plurality of pre-defined user interactions with the switch has occurred. The control module 18 is arranged to activate the mode associated with the detected user interaction.
[0064] The environmental audio input device 16 can be used to detect environmental noise in order to provide noise cancelling via the audio output device 13, for example under the control of the communication module 14 or the control module 18. The communication device 2 may provide noise cancelling during communication between communication devices 2. The communication device 2 may decide to not provide noise cancelling when there is no communication between devices 2.
[0065] The communication device 2 further comprises a storage module 19, which is arranged to store data. The storage module 19 may store an identification parameter for the communication device 2. The identification parameter is indicative of a unique identifier for the communication device 2. The unique identifier may be a number for the communication device 2, a title for the user 3 of the communication device 2 and/or the user's name. This unique identifier may be used so that other communication devices 2 can establish a connection with the communication device 2. It will be understood that it is only necessary for the unique identifier to be unique among all communication devices 2 which are in, or may be connectable to, the peer-to-peer network. It is not necessary that the unique identifier is unique among all communication devices 2 in existence, although this may be the case.
[0066] In addition, the storage module 19 may store a database comprising a list of unique identifiers for other communication devices 2 in the peer-to-peer network, where each unique identifier corresponds with a speech label stored at the storage module 19. Each speech label may be indicative of a name, or a label, for the user 3 of the communication device 2 to which the speech label's associated unique identifier corresponds. Each individual user 3 can be stored in association with a number. For instance, the lowest number, such as 'one', may refer to the most senior user 3.
[0067] As is explained above, each of the communication devices 2 receives audio from its surrounding environment through the environmental audio input device 16. In operation of the communication system 1 as a noise dosimeter system each one of the communication devices 2 receives audio from its surrounding environment and a noise level is determined. Each noise level determined is associated with a position at which the audio was received, from which the noise level was determined. Thus, each position is associated with a corresponding noise level. Typically, the noise level will indicate the amplitude of the audio received. For instance, the noise level may be the peak amplitude of the audio in decibels (dB) received at a particular time, or an equivalent continuous value in decibels over a specified period of time.
[0068] The system 1 is able to generate an indication of the noise level to which a user 3 has been exposed along with positional information associated with the noise level. Thus, a particular location can be associated with a particular noise level. For example, it may be possible to determine that a particular location within a factory is associated with a particularly high noise level. Therefore, a user 3 can decide to avoid that location in order to limit their exposure to potentially harmful noise levels.
[0069] The system 1 may store a plurality of the positions each in association with a corresponding noise level. Thus, it is possible to generate information describing locations with associated noise levels. This helps to build a more complete indication of the noise levels throughout an environment. This can help someone to make better decisions about which areas to avoid, in order to limit their exposure to potentially harmful noise levels.
[0070] The information generated can be used to output map data which can be presented to a user in combination with a map of an environment in which the audio was received. This may present, to the user, at least some of the positions each in association with their corresponding noise level. This map may be regarded as a noise intensity map. The map can also present at least one high noise level area indicative of a position associated with a noise level above a high noise level threshold. Furthermore, the map may present at least one boundary defining the perimeter of a high noise level area. Thus, a user 3 can easily determine which areas to avoid, in order to limit their exposure to noise.
[0071] The system 1 may use map data and the noise levels with the associated position data in order to determine a navigation path from one place to another. The navigation path may be associated with a reduced level of noise exposure. For example, the system may determine a navigation path from one place to another, avoiding at least one high noise level area. In another example, the system 1 determines noise levels associated with a plurality of different navigation paths from one place to another, and presents a user 3 with the navigation path associated with the lowest noise level. Thus, a user 3 can limit their exposure to noise by following the navigation path.
[0072] In addition, the system 1 may notify a user 3 when they have been exposed to a noise level at or over a particular noise threshold. Therefore, the user 3 can be alerted when they have been exposed to an unacceptable level of noise. Then, the user 3 may decide to move to a quieter environment, so that they can attempt to avoid damage to their hearing. Preferably, the user 3 is notified in advance of reaching the noise threshold. In this way, the user 3 can be alerted before they have been exposed to an unacceptable level of noise. The noise threshold may be user-defined. Thus, since some people have higher and lower tolerances to noise, this enables the system to be optimised for individual people.
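The selection of the navigation path associated with the lowest noise level, as described above, reduces to a minimisation over candidate paths. A minimal sketch, assuming hypothetical path data keyed by name:

```python
def quietest_path(paths):
    """Given a mapping of path name -> list of noise levels (dBA) sampled
    along the path, return the name of the path with the lowest peak level."""
    return min(paths, key=lambda name: max(paths[name]))
```

Comparing peak levels is only one possible criterion; an average or dose-weighted level along each path could equally be used.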
[0073] Referring to Figure 2, the communication device 2 comprises a positioning module 20, a noise level module 21, a calculation module 22 and a notification module 23. In the illustrated example, the positioning module 20 uses MESH Networks Position System (MPS™) to determine the position. MPS™ does not rely on satellites, so it can operate in both exterior and interior locations where GPS will not. MPS™ determines position by utilising time of flight and triangulation information using other devices in the network as reference points. In another example GPS is used; however, it will be appreciated that any other suitable positioning system may be used, instead of or in combination with GPS and/or MPS™.
[0074] Figure 3 illustrates a flow chart of the operation of a communication device 2 acting as a noise dosimeter.
[0075] In step 300, the environmental audio input device 16 receives audio from the environment in which it is located.
[0076] Since the environmental audio input device 16 is mounted on the communication device 2 headset, the environmental audio input device 16 can be used in proximity to the user's ears. Therefore, the system 1 may be able to obtain a more accurate reading of the actual noise level to which the user is exposed.
[0077] The environmental audio input device 16 may be located externally of an ear defender 4 so that it senses the environmental noise directly, or may be located internally of an ear defender 4 so that it senses the level of noise to which the user's ears are subjected directly, after the environmental noise has been attenuated by the ear defender 4. In some examples multiple environmental audio input devices 16 mounted both externally and internally of an ear defender may be used. In some examples where the environmental audio input device 16 is located internally of an ear defender 4 the environmental audio input device 16 may be located in the ear canal of a user.
[0078] In step 302 the noise level module 21 determines a noise level based on the audio received at the environmental audio input device 16. Generally, the noise level module 21 will measure the noise level in decibels (dB). However, any other measure of sound/noise level, amplitude or intensity may be used. Noise levels may include sound pressure levels (SPL) and continuous sound exposure levels (SEL), including peak values and specified periods of time. Once a noise level has been determined, the noise level module 21 may output the noise level to the storage module 19 of the communication device 2.
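One way the noise level module 21 might derive decibel values from a block of audio samples is sketched below. This is an illustrative assumption rather than the disclosed implementation; it presumes the samples have already been calibrated to sound pressure in pascals, and uses the standard 20 µPa reference pressure.

```python
import math

REF_PRESSURE_PA = 20e-6  # standard reference sound pressure (20 micropascals)

def spl_db(samples_pa):
    """Sound pressure level (dB SPL) of a block of calibrated pressure samples,
    computed from the RMS pressure relative to the reference pressure."""
    rms = math.sqrt(sum(s * s for s in samples_pa) / len(samples_pa))
    return 20 * math.log10(rms / REF_PRESSURE_PA)

def peak_db(samples_pa):
    """Peak level (dB SPL) of the block, as needed for peak (impulse) noise."""
    return 20 * math.log10(max(abs(s) for s in samples_pa) / REF_PRESSURE_PA)
```

For example, a sine wave with an RMS pressure of 1 Pa yields approximately 94 dB SPL, the level commonly produced by microphone calibrators.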
[0079] In examples where the environmental audio input device 16 is protected from ambient noise levels by an ear defender 4, for instance where the environmental audio input device 16 is located internally of the ear defender 4, such as in the ear canal of a user 3, the noise level module 21 can be arranged to estimate the external environmental noise based on the noise level sensed by the environmental audio input device 16 and a known sound reduction effect of the ear defender 4.
[0080] Alternatively, in examples where the environmental audio input device 16 is not protected from ambient noise levels by an ear defender 4, for instance where the environmental audio input device 16 is located externally of the ear defender 4, the noise level module 21 can be arranged to estimate the noise level to which the user's ears are subjected based on the external environmental noise sensed by the environmental audio input device 16 and a known sound reduction effect of the ear defender 4.
[0081] In examples where multiple environmental audio input devices 16 are used internally and externally of an ear defender to directly sense both external environmental noise and the noise level to which the user's ears are subjected, noise levels of these environmental and in-ear sounds determined by the noise level module 21 may be stored separately at the storage module 19.
[0082] In examples where multiple environmental audio input devices 16 are used internally and externally of an ear defender to directly sense both external environmental noise and the noise level to which the user's ears are subjected, the difference between the measured internal and external noise levels provided by the noise level module 21 may be calculated and compared to a threshold value by the calculation module 22 to determine the sound reduction effect being provided by the ear defender 4. There may be a predetermined noise difference threshold stored in the storage module 19, which the calculation module 22 can access. If the sound reduction effect is determined to be below the predetermined threshold, the notification module 23 may issue an alert to the user via the audio output device 13 to warn the user of improper operation of the ear defender 4, and that the user's hearing is not being fully protected. A reduced sound reduction effect may indicate that the ear defender is defective or incorrectly fitted, and the alert may prompt the user to check the fitting of their ear defenders, and if necessary to exit, or avoid entering, a noisy environment until the functioning of their ear defenders can be checked.
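The comparison of internal and external noise levels described in paragraph [0082] can be sketched as follows. The 20 dB minimum attenuation value is an assumption for illustration; the document leaves the predetermined noise difference threshold unspecified.

```python
MIN_ATTENUATION_DB = 20.0  # assumed minimum acceptable sound reduction effect

def check_ear_defender(external_db, internal_db, threshold_db=MIN_ATTENUATION_DB):
    """Return the measured sound reduction effect of the ear defender and
    whether an alert should be issued because it falls below the threshold."""
    attenuation = external_db - internal_db
    return attenuation, attenuation < threshold_db
```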
[0083] Noise level data may be time stamped, for instance with the time at which the audio was received from which the noise level data were generated. Further, noise level data may be tagged with the sensed sound reduction effect of the headset in examples where this is available.
[0084] The noise level is received by the positioning module 20. In step 304, the positioning module 20 determines the position of the user 3. In one example, the positioning module 20 uses MESH Networks Position System (MPS™) to determine the position. MPS™ does not rely on satellites, so it can operate in both exterior and interior locations where GPS will not. MPS™ determines position by utilising time of flight and triangulation information using other devices in the network as reference points. In another example GPS is used; however, it will be appreciated that any other suitable positioning system may be used, instead of or in combination with GPS and/or MPS™.
[0085] Since the user 3 is wearing the communication device 2 headset which includes the environmental audio input device 16 and the positioning module 20, the positioning module 20 can determine the position of the environmental audio input device 16 corresponding with the determined noise level.
[0086] In step 306, once the positioning module 20 has determined an estimate of the position of the environmental audio input device 16, the positioning module 20 associates the position with the corresponding noise level. For instance, the positioning module 20 may link the co-ordinates of the position with the decibel reading of the noise level.
[0087] In step 308, the noise level and position data, from the communication device 2, is communicated to the server 6 through the peer-to-peer network by the communication module 14 and the communication interface 10.
[0088] In step 310, the calculation module 22 of the communication device 2 is used to calculate a calculated noise level. The noise level may be calculated based on time data and noise levels determined by the noise level module 21. The time data may be associated with the noise levels. The calculated noise level may include a calculation of peak (impulse) noise, equivalent continuous (average) 'A' weighted noise, which is a UK standard, or a time-weighted average (TWA) noise, which is a USA standard. The peak noise, the equivalent continuous noise and the TWA noise are calculated over a predefined period of time, such as over an eight hour period.
[0089] Peak noise can be calculated by detecting peak amplitudes of noise. Continuous noise can be sampled over a predefined period of time. Equivalent continuous noise can be calculated by averaging all noise level samples to which a subject is exposed during a period of time, for example, during an eight-hour workday. An average can be calculated through the addition of the magnitude of these samples divided by the number of samples collected during the time period. TWA noise is the summation of the actual number of hours over which samples are recorded divided by the permissible hours at each sound level multiplied by one hundred for calculating a percentage dose for an eight hour shift. The equivalent continuous noise level calculation used in the UK uses the "A-weighting standard" for measuring harmful sound pressure level (SPL) values. These weightings take into account subjects' varying susceptibility to noise related hearing damage at different frequencies.
[0090] Noise level (Lp) is a logarithmic measure of the root mean square (RMS) sound pressure relative to a reference (ambient) level expressed in decibels (dB). The A-weighted equivalent continuous noise level, often referred to as energy-averaged exposure level (LAeq), is calculated by dividing the measured dB values by 10, converting to antilog values, assigning an A-weighting curve to them, summing these scaled values, dividing by the number of samples taken and then taking the logarithm to arrive at A-weighted decibels of power dBA. This is illustrated in Equation 1:
LAeq = 10 log10{1/n [10^(L1/10) + 10^(L2/10) + 10^(L3/10) + ... + 10^(Ln/10)]} Equation 1
[0091] In Equation 1, n is the number of samples.
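Equation 1 can be computed directly from a series of sampled levels; a minimal sketch, assuming the per-sample levels are already A-weighted decibel values:

```python
import math

def laeq(levels_dba):
    """Equivalent continuous A-weighted level (Equation 1): convert each
    sample level to its antilog energy, average the energies, and
    convert back to decibels."""
    mean_energy = sum(10 ** (l / 10) for l in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_energy)
```

Note that the result is dominated by the loudest samples: averaging 80 dBA and 90 dBA gives about 87.4 dBA, not 85 dBA.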
[0092] According to UK Occupational H&S requirements the daily value of LAeq should lie below 85dBA over an 8 hour period. In another calculation of noise level a C-weighting (LCpk) is used for measuring peak values which according to Occupational H&S should lie below 137dBC.
[0093] Short Leq (non A-weighted values) is a method of recording and storing sound levels for displaying the true time history of noise events and all sound levels during any specified period of time. The resulting 'time histories', typically measured in 1/8 second intervals may then be used to calculate the 'overall' levels for any sub-period of the overall measurement. The time interval (sample time) can be varied according to the amount of change recorded between intervals.
[0094] To measure true peak values of impulsive sound levels, a meter must be equipped with a peak detector. Accordingly, in order to measure true peak values of impulsive sound levels, in this case the environmental audio input device 16 and the noise level module 21 will need to be able to act as a peak detector. Alternatively, in some examples the communication device 2 may be equipped with a separate peak detector. A peak detector responds in less than 100 µs according to the sound level meter standards. A typical response time is 40 µs.
[0095] A noise dose is a descriptor of noise exposure expressed in percentage terms. For example a noise dose of 160% (87dBA for 8 hours) exceeds the permissible 100% dose (85dBA for 8 hours) by 60%.
[0096] The dose value is derived from Equation 2 as follows:
Dose = 100 x T/8 x 10^((LAeq - 85)/10) Equation 2
[0097] In Equation 2, T is the exposure time.
[0098] The noise exposure level (LEX), is the measured LAeq of the user's exposure (in decibels) which is linearly adjusted for a fixed 8 hour period. This is illustrated in Equation 3:
LEX = 10 log10{Dose/100} + 85 dBA Equation 3
[0099] There may be a pre-defined noise threshold stored in the storage module 19, which the calculation module 22 can access. This noise threshold may be a recommended average noise threshold, such as the Occupational H&S threshold of 85dBA.
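Equations 2 and 3 can be sketched together; the constants 8 hours and 85 dBA are the permissible exposure values stated above.

```python
import math

def noise_dose(laeq_dba, hours):
    """Percentage noise dose (Equation 2): 100% corresponds to 85 dBA
    sustained over an 8 hour shift."""
    return 100 * (hours / 8) * 10 ** ((laeq_dba - 85) / 10)

def exposure_level(dose_percent):
    """Noise exposure level LEX (Equation 3): the dose re-expressed in dBA
    normalised to an 8 hour period."""
    return 10 * math.log10(dose_percent / 100) + 85
```

Consistently with paragraph [0095], an 8 hour exposure at 87 dBA gives a dose of about 158% (quoted there as roughly 160%), and feeding that dose back through Equation 3 recovers 87 dBA.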
[00100] In step 312, the calculation module 22 determines whether the noise threshold has been reached. If this threshold has been reached, the method proceeds to step 314 in which the notification module 23 outputs a notification sound through the audio output device 13 to notify the user that they have reached their noise exposure threshold.
[00101] The noise exposure level defined in Equation 3 above gives a running value in decibels of the current exposure level adjusted for an eight hour shift, which could be compared with the permissible 85dBA threshold.
[00102] The system may determine a percentage value of the permissible dose (see Equations 1 and 2 above). 100% may be used as the threshold above which noise-induced hearing damage could occur. Any pre-set threshold should be less than 100%.
[00103] There are other possible calculations of noise level in percentage terms. For example, a continuous measure of how well the user is doing at managing his/her exposure to noise could also be provided, where the permissible noise dose for an 8 hour shift is adjusted during the shift as illustrated in the example below by using the table in Figure 4. Figure 4 illustrates a look up table that can be used to determine a user's allowable exposure to noise in percentage terms. In Figure 4, Leq dBA is equivalent to LAeq dB.
[00104] In this example, for the first 2 hours during an 8 hour shift the noise dose exposure level calculated from the table for a LAeq reading of 88dBA is 49.9%. This value is divided by the noise dose for the permissible 85dBA reading of 25% for 2 hours to give 200%, which is equal to (49.9/25)x100. This gives a rolling forecast for the user 3 based on current trends which in this case indicates twice the permissible exposure level at the end of the 8 hour shift.
[00105] The forecast may be calculated by the calculation module 22 and provided to the user by the notification module 23 using the audio output device 13 at set times, for example periodically, during a work shift. Such forecasts may alternatively, or additionally, be provided by the server 6.
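The worked forecast example above can be reproduced with a short sketch. The permissible dose for the elapsed time is derived here from Equation 2 at the 85 dBA limit, rather than read from the Figure 4 look-up table.

```python
def permissible_dose(hours):
    """Dose accrued at the permissible 85 dBA level over the given
    elapsed time (Equation 2 with LAeq = 85)."""
    return 100 * (hours / 8)

def forecast_percent(measured_dose, hours):
    """Rolling forecast: the dose accrued so far as a percentage of the
    dose that would be permissible over the same elapsed time."""
    return (measured_dose / permissible_dose(hours)) * 100
```

With the table value of 49.9% after 2 hours at 88 dBA, the forecast is (49.9/25)x100, approximately 200%, matching the example.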
[00106] An alternative method may be to start a real time clock at the start of each working day and calculate the number of hours left of permissible noise exposure at current noise levels. For example, if the current equivalent continuous noise level LAeq over the first hour is 88dBA, the above table calculates that 3 hours remain at current noise levels. This may be useful for diverse working environments. [00107] In another example, a time weighted average (TWA) percentage is output. This would be particularly useful for the North American market.
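The dose, forecast and remaining-time calculations described above can be sketched as follows. This assumes the Figure 4 table follows a NIOSH-style 3 dB exchange rate against an 85dBA, eight-hour criterion, which reproduces the 49.9% (≈50%), 200% and 3-hour figures in the worked examples; the function names are illustrative:

```python
def allowed_hours(laeq_dba, criterion_dba=85.0, exchange_db=3.0):
    """Permissible exposure time at a constant level (3 dB exchange rate)."""
    return 8.0 / 2 ** ((laeq_dba - criterion_dba) / exchange_db)

def dose_percent(laeq_dba, hours):
    """Noise dose accrued after `hours` at a constant LAeq, as % of the daily limit."""
    return 100.0 * hours / allowed_hours(laeq_dba)

def rolling_forecast_percent(laeq_dba, hours_elapsed):
    """Projected end-of-shift dose relative to the permissible dose accrued so far."""
    permissible_so_far = dose_percent(85.0, hours_elapsed)  # 25% after 2 of 8 hours
    return 100.0 * dose_percent(laeq_dba, hours_elapsed) / permissible_so_far

# After 2 hours at 88dBA: dose ~50% of the daily limit, rolling forecast 200%.
print(dose_percent(88.0, 2.0))              # 50.0
print(rolling_forecast_percent(88.0, 2.0))  # 200.0
# After 1 hour at 88dBA, 3 hours of permissible exposure remain.
print(allowed_hours(88.0) - 1.0)            # 3.0
```

The 50% result matches the table's 49.9% to within rounding, suggesting the 3 dB exchange-rate assumption is consistent with Figure 4.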
[00108] In another example, the start of each new day, or of another defined period, is preceded by an automatic re-setting to zero of the noise exposure data stored in the communication device 2, for monitoring exposure levels over this period. The pre-defined threshold is also stored in the communication device 2, and should be the permissible exposure limit (PEL) or a lower, user-defined threshold value.
[00109] The notification module 23 may also be arranged to output a notification when the noise level at a particular instant reaches or exceeds a predetermined peak noise threshold. [00110] In another example, the calculation module 22 may determine whether the calculated noise level has reached a pre-determined level below the noise threshold. For example, the calculation module 22 may determine that the calculated noise level is 10% below the noise threshold. In this case, the notification module 23 may output a notification. Here, the notification module 23 may cause the audio output device 13 to output a notification sound to notify the user that the threshold is about to be reached, and may recommend action for limiting exposure to noise. Therefore, the user can be alerted before they have been exposed to an unacceptable level of noise. Thus, the user can move to a quieter environment, so that they can pre-emptively attempt to avoid damage to their hearing.
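A minimal sketch of this threshold and pre-emptive warning check; the `notify` callback, the message texts and expressing the 10% margin as a fraction of the threshold are illustrative assumptions:

```python
def check_exposure(dose_percent, threshold_percent=100.0, warn_margin=0.10,
                   notify=print):
    """Alert when the threshold is reached; warn when within `warn_margin` of it.

    `notify` stands in for the notification module 23 driving the audio
    output device 13.
    """
    if dose_percent >= threshold_percent:
        notify("Noise exposure threshold reached")
    elif dose_percent >= threshold_percent * (1.0 - warn_margin):
        notify("Approaching noise exposure threshold; consider moving to a quieter area")

check_exposure(92.0)   # within 10% of the 100% threshold -> warning
check_exposure(101.0)  # threshold reached -> alert
```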
[00111] Steps 300-308 are repeated by the different communication devices making up the system 1 in order to obtain a plurality of noise level measurements, each associated with a respective position, which are all sent through the peer-to-peer network to the server 6. This helps to build a more complete indication of the noise levels throughout a particular environment. The noise level and position data are stored at the server 6. In some examples the data may also be stored at the respective communication devices 2, or the data may be stored at the communication devices 2 instead of at the server 6.
[00112] Figure 5 shows a flow diagram of a method carried out by the server 6. In a step 310, the server 6 receives the noise level and position data from a communication device 2 of the plurality of communication devices 2 in the system 1 through the peer-to-peer network.
[00113] In a step 312 a mapping module 61 of the server 6 generates map data based on the plurality of the positions and associated noise levels. The map data is combined with a map of the environment. This map shows at least some of the determined positions, each in association with its corresponding noise level. [00114] In step 314, the map, along with the map data, is displayed at a display/user interface associated with the server 6. The display/user interface may be a remote device connected to the server 6 through a communication network, such as an intranet or the Internet. The display/user interface may be used by the users 3 using the communication devices 2, or other personnel, to identify noise levels and plan how to reduce or limit noise exposure. The display/user interface may be, for instance, a touch-screen display. An example of the map and corresponding noise data is illustrated in Figure 6.
[00115] Referring to Figure 6, the display/user interface presents the user with the map 31 of the environment in which various noise levels were recorded. The map 31 shows a number of rooms 32A-C, with passages between them. In this example, the map 31 shows a plurality of areas 33A-C in which noise has been detected. In another example, the map 31 shows a plurality of areas 33A-C in which noise has been detected above a particular threshold.
[00116] In the illustrated example, the magnitude of the noise levels detected in these areas 33A-C is indicated to the user via shading. A darker shade indicates an area of higher noise level, whilst a lighter shade indicates an area of lower noise level. If there is no shading in an area of the map 31, the user may assume that no noise has been detected in that area, or that any noise detected is below a threshold.
[00117] Any other suitable type of indicator scheme may be used. For example, each area 33A-C may have a numerical value (e.g. between 1 and 10) associated with it. In another example, the user is presented with a noise intensity map comprising contour lines, where the width of the spacing between the contour lines indicates a rise or fall in noise level. Here, narrower spacing between contour lines indicates a steep rise in noise level, and wider spacing between contour lines indicates a shallow rise in noise level.
[00118] Noise level data from a plurality of user communication devices 2 are stored in a central database 62 of the server 6 together with the associated positioning data coordinates. Each of the positioning coordinates relates to a grid reference of the location. The resolution of the square grid reference, or in other words the area of each square in the grid, may be preset depending on the accuracy of the positioning apparatus being used.
[00119] Noise level data points can be tagged with a grid reference based on the position data. Then, an average of the noise level data can be determined for each square within the grid reference.
[00120] The noise levels for each square in the grid are continuously updated by each user 3 who enters the environment. This is useful for constructing a reliable representation of noise levels per unit area. Noise intensity values are derived from the noise level data assigned to each grid reference, divided by the area of the grid square. The integration of new data with old for each grid map reference may use time-related weighting factors.
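One possible sketch of the grid tagging and time-weighted averaging described above, using an exponential moving average as the time-related weighting; the class name, 5 m cell size and 0.2 smoothing factor are illustrative assumptions, not values from the source:

```python
class NoiseGrid:
    """Per-grid-square noise levels, updated with an exponential moving average.

    New readings are weighted against the stored value so that more recent
    data counts more, as one realisation of the time-related weighting factors.
    """
    def __init__(self, cell_size_m=5.0, alpha=0.2):
        self.cell = cell_size_m
        self.alpha = alpha          # weight given to the newest reading
        self.levels = {}            # (ix, iy) grid reference -> smoothed dB level

    def grid_ref(self, x_m, y_m):
        """Tag a position (in metres) with its square grid reference."""
        return (int(x_m // self.cell), int(y_m // self.cell))

    def update(self, x_m, y_m, level_db):
        """Fold a new noise reading into the square containing (x_m, y_m)."""
        ref = self.grid_ref(x_m, y_m)
        old = self.levels.get(ref)
        self.levels[ref] = (level_db if old is None
                            else (1 - self.alpha) * old + self.alpha * level_db)
        return ref

grid = NoiseGrid()
grid.update(12.0, 3.0, 80.0)
grid.update(12.5, 3.5, 90.0)   # same 5 m square as the first reading
print(grid.levels[(2, 0)])     # 82.0  (0.8 * 80 + 0.2 * 90)
```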
[00121] In some examples, additional sensor nodes located at fixed known positions may also be connected to the mesh network. The additional sensor nodes may act as repeaters to support the peer-to-peer network. The additional sensor nodes may provide fixed reference points for use by the positioning modules 20 of the communication devices 2 to improve the accuracy of position determination. The additional sensor nodes may each comprise one or more audio input devices to determine noise levels and provide noise data for particular positions or grid references where they are located, for example at positions where high noise levels are expected. The additional sensor modules may each comprise a storage module arranged to store noise data associated with their fixed position for use in producing the noise intensity map. This may remove the need for a user to traverse these expected high noise positions in order to build up the noise information, for example to complete the noise intensity map. This may be useful to limit the exposure of users to noise at known or anticipated high noise level hotspots, which ideally should be monitored in order to keep the noise data and noise intensity map up to date over time. The use of fixed nodes may allow noise levels in areas which are expected to be highly noisy, or are hazardous in other ways, to be monitored without putting users or their hearing at risk. The use of fixed nodes may allow noise levels in areas which are seldom visited by users to be monitored. [00122] As a further example, each area may have a particular colour (e.g. green, orange or red) associated with it. Preferably, the indicator scheme used should have a legend so that the user can understand the data presented to them.
[00123] On inspection of the map 31, a user will be able to determine that room 32A has an area of low noise level 33A in its north-west corner, but the rest of the room is quiet; the south end of room 32B has an area of medium noise level 33B, but the rest of the room is quiet; and the whole of room 32C is an area of high noise level 33C. Each noise level area 33A-C has a boundary 35A-C around it, defining the perimeter of each area.
[00124] Users can use the map 31 to determine paths or working locations to be used by themselves, or others, in order to limit their exposure to noise. [00125] In the illustrated embodiment the communication device 2 is able to operate in a number of different modes. Figure 7 shows a flow chart illustrating a method of activating different modes at the communication device 2.
[00126] In step 400 the communication device 2 is activated, or 'powered-on'. In this case, the communication device 2, more specifically the communication module 14, is configured to operate in a "connection-enabled" mode initially. In the "connection-enabled" mode, the communication module 14 of the communication device 2 is configured to permit transmitting or receiving of audio to or from another communication device 2 of the system 1.
[00127] The voice recognition module 15 may be deactivated when the communication module 14 is in the connection-enabled mode initially, and the voice recognition module 15 may be configured to be activated only in response to a user interaction with the switch 17. When activated, the voice recognition module 15 is arranged to perform at least one action in response to at least one voice command of a stored first instruction set. The first instruction set may, for example, be stored at the voice recognition module 15 or the storage module 19. [00128] In step 403 the control module 18 detects a user-interaction with the switch 17. In this case, the user 3 wishes to instruct the communication device 2 to enter a "connection-disabled" mode. In order to do this, the user maintains contact with the switch 17, or 'holds' the switch down, for a first time period. In this example, the user 3 holds the switch 17 for over five seconds until the audio output device 13 outputs an audio notification, such as a single 'beep'. Upon hearing the 'beep', the user 3 disengages contact with the switch 17, or 'releases' the switch. The control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to enter the "connection-disabled" mode.
[00129] In step 405, the communication module 14 enters the connection-disabled mode. In the connection-disabled mode the communication module 14 is not permitted to transmit or receive audio to or from another communication device 2. In the connection-disabled mode, the communication interface 10 may not be permitted to establish a connection between the communication device 2 and another communication device 2 via the peer-to-peer network. In addition, in the connection-disabled mode the voice recognition module 15 may be deactivated. [00130] In step 407 the control module 18 detects another user-interaction with the switch 17. In this case, the user 3 wishes to instruct the communication device 2 to re-enter the "connection-enabled" mode. In order to do this, the user 3 performs a different user-interaction with the switch 17 compared with the user-interaction in step 403. Here, the user 3 maintains contact with the switch 17 for a second time period, for instance two seconds longer than the first period of time.
[00131] In this example, the user 3 holds the switch 17 until the audio output device 13 outputs an audio notification, such as two 'beeps'. Upon hearing the second 'beep', the user 3 knows that they have reached the second time period threshold and can disengage contact with the switch 17, or 'release' the switch 17. The control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to re-enter the "connection-enabled" mode. Thus, the method returns to step 400.
[00132] In this example, the user 3 holds the switch 17 for the first time period until the first single beep is output in step 403. Then, the user 3 continues to hold the switch 17 until the second time period has elapsed, at which point the audio output device 13 outputs a second beep. Here, the second time period is seven seconds, which is two seconds longer than the first period. However, the second time period may be any length of time so long as the user 3 is given sufficient time to respond to the first beep before the second beep occurs.
[00133] In step 409 the control module 18 detects another user-interaction with the switch 17. In this case, the user 3 wishes to instruct the communication device 2 to enter a "voice-control" mode. In order to do this, the user performs a different user-interaction with the switch 17 compared with the user-interactions in steps 403 and 407. Here, the user 3 contacts the switch 17 multiple times within a time period. For instance, the user 3 may activate the switch 17 twice within a time period of under five seconds. The control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to enter the "voice-control" mode.
[00134] In step 411, the communication module 14 enters the voice-control mode, in which the communication module 14 is permitted to transmit or receive audio to or from another communication device 2. In addition, the voice recognition module 15 is activated when the communication module 14 is in the voice-control mode.
[00135] In the voice-control mode the voice recognition module 15 may be arranged to perform a plurality of actions each in response to at least one voice command of a second instruction set. The second instruction set of the voice control mode may comprise a greater number of voice commands than the first instruction set used in the connection-enabled mode. The second instruction set may, for example, be stored at the voice recognition module 15 or the storage module 19.
[00136] In step 413, as in step 403, the control module 18 detects a user-interaction with the switch 17 where the user 3 maintains contact with the switch 17 for over five seconds until the audio output device 13 outputs a 'beep', at which point the user 3 disengages contact with the switch 17. As before, the control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to re-enter the "connection-disabled" mode. Thus, the method returns to step 405.
[00137] In step 415, as in step 407, the control module 18 detects a user-interaction with the switch 17 where the user 3 maintains contact with the switch 17 for the second time period until the audio output device 13 outputs two 'beeps', at which point the user 3 disengages contact with the switch 17. The control module 18 detects this interaction with the switch 17 and instructs the communication module 14 to re-enter the "connection-enabled" mode. Thus, the method returns to step 400. [00138] Figure 8 shows a flow chart illustrating a method of using the communication device 2 in the 'connection-enabled' mode.
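The mode transitions of steps 400-415 can be summarised as a small state machine. The timings follow the worked examples in the text (a first beep after about five seconds, a second beep about two seconds later, a double press for voice control); the event names are illustrative:

```python
# States and the switch events that move between them, following steps 400-415.
TRANSITIONS = {
    ("connection-enabled", "hold-one-beep"): "connection-disabled",   # step 403
    ("connection-enabled", "double-press"): "voice-control",          # step 409
    ("connection-disabled", "hold-two-beeps"): "connection-enabled",  # step 407
    ("voice-control", "hold-one-beep"): "connection-disabled",        # step 413
    ("voice-control", "hold-two-beeps"): "connection-enabled",        # step 415
}

def classify_hold(seconds_held, presses=1):
    """Map a raw switch interaction onto an event name (timings from the text)."""
    if presses >= 2:
        return "double-press"
    if seconds_held >= 7.0:
        return "hold-two-beeps"   # held through the second beep (~7 s)
    if seconds_held >= 5.0:
        return "hold-one-beep"    # released after the first beep (~5 s)
    return "short-press"

def next_mode(mode, event):
    """Apply an event; unrecognised events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

mode = "connection-enabled"
mode = next_mode(mode, classify_hold(5.5))             # -> connection-disabled
mode = next_mode(mode, classify_hold(7.5))             # -> connection-enabled
mode = next_mode(mode, classify_hold(0.3, presses=2))  # -> voice-control
print(mode)  # voice-control
```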
[00139] As mentioned previously, in the connection-enabled mode the communication module 14 of the communication device 2 is permitted to transmit or receive audio to or from another communication device 2. Thus, in step 500 the communication interface 10 is in a waiting state where it checks to determine whether or not there is an incoming call from another communication device 2, or in other words a request for a connection to be made between the communication device 2 and another communication device 2. In addition, in the waiting state the control module 18 checks to determine whether or not there is a user-interaction with the switch 17 whilst there is not an incoming call. If there is a user-interaction with the switch 17 whilst there is not an incoming call, the method proceeds to step 502.
[00140] In step 502, the control module 18 detects an interaction with the switch 17. In this case, the user 3 wishes to provide a command to the voice recognition module 15. In order to do this, before speaking the voice command, the user 3 maintains contact with the switch 17 for a time period, for instance less than five seconds. The control module 18 detects this interaction with the switch 17 and, in response, activates the voice recognition module 15.
[00141] In step 504 the voice recognition module 15 detects a voice command provided by the user 3. The voice recognition module 15 identifies voice commands by detecting reserved words. The voice commands are verified by a pause preceding and following the command. For instance, the pause preceding and following the command may be a few seconds. [00142] For instance, the user may say "CALL SUPERVISOR". Next, in step 506 the voice recognition module 15 determines the action associated with the voice command. Then, the voice recognition module 15 outputs a confirmation request, via the audio output device 13. In this example, the confirmation request comprises outputting audio indicative of the determined action. For instance, the output may comprise repeating the voice command "CALL SUPERVISOR".
[00143] In this example, the "SUPERVISOR" voice command may be described as a label associated with another communication device 2. In another example, the label may comprise a name for a user 3 associated with the other communication device 2. [00144] As described above, each user's contact name, title or number is associated with his/her communication device 2. When a user 3 initiates a call, a message is broadcast to the peer-to-peer network for identifying the requested communication device 2. The requested communication device 2 responds and a connection is established between the calling and the receiving communication devices 2.
[00145] In response to the confirmation request, the voice recognition module 15 waits for the user 3 to provide a confirmation. The user 3 may provide the confirmation by saying an affirmative voice command, for instance by saying "yes". In this case, the method proceeds to step 508. On the other hand the user 3 may decline the confirmation by saying a negative voice command, for instance by saying "no". In this case, the method returns to step 500.
[00146] In step 506, if the voice recognition module 15 fails to recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the method returns to step 500. If the repeat command is successful the method proceeds to step 508. [00147] In step 508 the voice recognition module 15 causes the action associated with the voice command, input at step 504, to be performed. In this case, the voice recognition module 15 causes the communication interface 10 to initiate the process of establishing a connection with a communication device 2 associated with the supervisor.
[00148] In step 500 the communication interface 10 checks to determine whether or not there is an incoming call from another communication device 2. If there is an incoming call the method proceeds to step 510, in which a notification is output, preferably at the audio output device 13, indicating to the user that there is an incoming call.
[00149] In step 512 the control module 18 checks to determine whether or not the user 3 engages the switch 17. If the user 3 engages the switch 17, for less than one second, in response to the incoming call the method proceeds to step 514, in which a connection is established between the communication device 2 and another communication device 2 in the peer-to-peer network.
[00150] In step 516, the control module 18 determines that the user 3 has engaged the switch 17 for less than five seconds, indicating that the user 3 wishes to terminate the call. In step 518, in response to this user interaction, the control module 18 instructs the communication interface 10 to disconnect the communication device 2 from the other communication device 2. [00151] In step 520, the control module 18 checks to determine whether the switch 17 has been engaged within ten seconds of outputting the incoming call notification. If the user 3 has not provided an interaction with the switch 17 within this ten second time period, the method proceeds to step 524 in which the incoming call request is cancelled. [00152] However, in step 522, if the control module 18 determines that the switch 17 has been engaged for a time period in excess of five seconds during the ten second time period, then the incoming call request is also cancelled.
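The incoming-call switch timings of steps 510-524 can be sketched as a simple classifier; the behaviour for press lengths between one and five seconds is not specified in the text, so it is left as a no-op here:

```python
def handle_incoming_call(press_after_s, press_duration_s):
    """Classify the user's response to an incoming-call notification.

    Timings follow steps 512-524: a short press (< 1 s) answers the call; no
    press within ten seconds, or a long press (> 5 s), cancels the request.
    `press_after_s` is the time of the press relative to the notification,
    or None if the user never pressed the switch.
    """
    if press_after_s is None or press_after_s > 10.0:
        return "cancelled"   # step 524: no interaction within ten seconds
    if press_duration_s > 5.0:
        return "cancelled"   # step 522: long press declines the call
    if press_duration_s < 1.0:
        return "answered"    # step 514: short press establishes the connection
    return "ignored"         # intermediate press lengths: unspecified in the text

print(handle_incoming_call(3.0, 0.5))   # answered
print(handle_incoming_call(None, 0.0))  # cancelled
```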
[00153] Figure 9 shows a flow chart illustrating a method of using the communication device 2 in the 'voice-recognition' mode. The purpose of the voice-recognition mode is that the user can perform all required functions using voice commands rather than interacting with the switch 17. Thus, the voice recognition module 15 remains active whilst in the voice recognition mode.
[00154] In step 600 the user 3 provides a voice command. Next, in step 602 the voice recognition module 15 detects that the user 3 has provided the voice command and determines an action associated with the voice command.
[00155] In step 604, the voice recognition module 15 outputs a confirmation request, via the audio output device 13. The confirmation request comprises outputting audio indicative of the determined action.
[00156] In response to the confirmation request, the voice recognition module 15 waits for the user 3 to provide a confirmation. The user 3 may accept the confirmation by saying an affirmative voice command, for instance by saying "YES". In this case the method proceeds to step 605 in which the action associated with the voice command is performed.
[00157] On the other hand the user 3 may decline the confirmation request by saying a negative voice command, for instance by saying "NO". In this case, the method returns to step 600.
[00158] In step 602, if the voice recognition module 15 fails to recognise the voice command, for instance if the voice recognition module 15 cannot recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the voice recognition module 15 simply waits for another voice command at step 600.
[00159] In this example the following voice commands are available in the voice recognition mode. [00160] If the voice recognition module 15 detects that the user 3 has said "HANG-UP", whilst a call is in session between the communication device 2 and another communication device 2, the voice recognition module 15 instructs the communication interface 10 to disconnect the communication device 2 from the other connected communication device 2. [00161] If the voice recognition module 15 detects that the user 3 has said "PICK-UP", in response to an incoming call request, the voice recognition module 15 instructs the communication interface 10 to connect the communication device 2 with the other communication device 2 requesting the call.
[00162] If the voice recognition module 15 detects that the user 3 has said "DECLINE", in response to an incoming call request, the voice recognition module 15 instructs the communication interface 10 to refuse a request to connect the communication device 2 with the other communication device 2 requesting the call.
[00163] If the voice recognition module 15 detects that the user 3 has said "CALL" followed by the name of a contact, the voice recognition module 15 instructs the communication interface 10 to initiate a request to connect the communication device 2 with another connected communication device 2 associated with the contact.
[00164] If the voice recognition module 15 detects that the user 3 has said "EXIT", then the voice recognition module 15 instructs the communication module 14 to enter the connection-enabled mode. [00165] In the system 1, user settings for each communication device 2 can be controlled by the server 6, or by another device connected to the mesh network, such as a computer or a mesh-network-enabled smartphone. One of the user setting options could include a sound pressure threshold above which the user's speech is detected and processed into instructions for execution by the voice recognition module 15. Otherwise, settings would normally reflect user preferences for an optimum listening experience.
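The voice commands listed above lend themselves to a dispatch table. In this sketch, `Recorder` is a hypothetical stand-in for the communication interface 10 and communication module 14, and its method names are illustrative, not part of the source:

```python
class Recorder:
    """Stand-in for the communication interface/module; records invoked actions."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        # Any method call is recorded as (method_name, *args).
        return lambda *args: self.log.append((name,) + args)

def dispatch(command, interface, device):
    """Route a recognised voice command ([00160]-[00164]) to an action.

    Returns False for an unrecognised command, in which case the device
    would prompt for a repeat command.
    """
    words = command.upper().split()
    if words[0] == "HANG-UP":
        interface.disconnect()
    elif words[0] == "PICK-UP":
        interface.accept_call()
    elif words[0] == "DECLINE":
        interface.refuse_call()
    elif words[0] == "CALL" and len(words) > 1:
        interface.call(" ".join(words[1:]))   # e.g. "CALL SUPERVISOR"
    elif words[0] == "EXIT":
        device.enter_mode("connection-enabled")
    else:
        return False
    return True

iface, dev = Recorder(), Recorder()
dispatch("CALL SUPERVISOR", iface, dev)
print(iface.log)  # [('call', 'SUPERVISOR')]
```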
[00166] Cloud computing applications, such as private clouds for company infrastructure services, may be accessed by the communication devices 2 via a gateway connected to the peer-to-peer network. This can include communication links to other sites for secure inter-site calls, including conference calls. [00167] The peer-to-peer network may connect, via a gateway, to a secure central database containing employees' routing requirements for setting up wireless communication links.
[00168] When fitting the communication device 2 to an ear of a user 3 the pressure sensitive switch 17 may be engaged accidentally. To avoid any possible problems caused by such accidental activation of the switch 17, the communication device 2 may comprise a sensor, such as an acoustic in-ear sensor, arranged to determine that the communication device has not been mounted on an ear of a user and, in response, to ignore any user interactions with the switch 17. When the communication device 2 is correctly mounted the occlusion effect attenuates the external sound entering the ear canal, thereby creating a difference in sound levels measured by the acoustic in-ear sensor and external acoustic sensors. Accordingly, the acoustic in-ear sensor may allow a determination that the communication device 2 has not been mounted if the audio it receives is above a particular attenuated level relative to the amplitude measured by an externally mounted sensor, and a determination that the communication device 2 has been mounted if the amplitude of the received audio falls below that particular level. In some examples where an acoustic in-ear sensor is used, this may also be the voice audio input device 12 and/or the environmental audio input device 16.
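A sketch of the occlusion-based mounting check described above; the 10 dB attenuation threshold is an illustrative value, not one given in the source:

```python
def is_mounted(in_ear_db, external_db, min_attenuation_db=10.0):
    """Infer whether the earpiece is seated in the ear.

    When mounted, the occlusion effect attenuates external sound reaching the
    in-ear sensor, so the in-ear level sits well below the external level.
    """
    return (external_db - in_ear_db) >= min_attenuation_db

def accept_switch_event(event, in_ear_db, external_db):
    """Ignore switch interactions while the device is being fitted (not mounted)."""
    return event if is_mounted(in_ear_db, external_db) else None

print(is_mounted(in_ear_db=60.0, external_db=85.0))  # True: 25 dB of attenuation
print(is_mounted(in_ear_db=83.0, external_db=85.0))  # False: barely attenuated
```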
[00169] In the connection-enabled mode, and in the absence of streamed wireless audio of any kind for a certain time period, the communication device 2 may power down into a
'beacon mode'. In the beacon mode, the communication interface 10 periodically checks for messages/activations and sends out a unique identifier which can be used to determine the location of the communication device 2 before returning to a sleep state. In the beacon mode, the communication device 2 alternates between an active state and a dormant state, where a greater amount of the functionality of the communication device 2 is activated in the active state than in the dormant state.
[00170] The communication module 14 of the communication device 2 may be configured to operate in an override mode in which the communication device 2 is able to transmit audio for output at another communication device irrespective of the mode activated at the other communication device. This enables a supervisor/manager to have connection priority to the user's device by automatically forcing acceptance of a connection request. This option could include termination of a call by the supervisor exclusively. The override mode may allow the communication device 2 to transmit audio for output at a plurality of other communication devices irrespective of the mode activated at each respective communication device. Thus, the override mode may be used in place of a conventional public address (PA) system.
[00171] Figure 10 illustrates a communication system 100 according to a second
embodiment. The communication system 100 is similar to the communication system 1 according to the first embodiment described above, and comprises a plurality of
communication devices 101, each worn by a different user 3, and a server 6. [00172] In the second embodiment, each of the communication devices 101 comprises a pair of ear defenders 4 physically connected by a linking arm 5 to form a headset 102 mounted on and supported by the head of a user 3, and covering and protecting both ears of the respective user 3. The headset 102 has corresponding components to the communication device 2 of the first embodiment as shown in figure 2, and is able to communicate with the server 6 using the peer-to-peer network. In the second embodiment, in addition to the headset 102, each communication device 101 further comprises a computing device 103.
[00173] Figure 11 illustrates a computing device 103 in more detail.
[00174] The computing device 103 comprises a communications interface 104 and an antenna 105, together with a storage module 106, a display and user interface 107, and a navigation module 108.
[00175] In the illustrated example, a display and a user interface are integrated together in a display and user interface 107 in the form of a touch-screen display of the computing device 103. In other examples different types of display and user interface may be used. In some other examples a separate display and user interface may be used.
[00176] The computing device 103 is able to communicate wirelessly with the headset 102 formed by the other parts of the communication device 101 by way of the communications interface 104 and antenna 105 of the computing device 103, which communicate wirelessly with the communication interface 10 and antenna 11 of the headset 102. This wireless communication between the headset 102 and the computing device 103 may, for instance, be via Bluetooth® or via Wi-Fi. In some examples the communications interface 104 of the computing device 103 may be a peer-to-peer networking interface, and the computing device 103 may communicate with the headset 102 using the peer-to-peer network. However, in other examples the headset 102 and the computing device 103 may communicate with one another via any other suitable connection, such as via a wired connection. In examples where the communications interface 104 of the computing device 103 is a peer-to-peer networking interface the computing device 103 may communicate directly with the server 6 using the peer-to-peer network.
[00177] In the illustrated example of figures 10 and 11 the computing device 103 is a smartphone. However, it will be appreciated that any other suitable computing device 103 may be used instead of a smartphone.
[00178] The map along with the map data generated by the mapping module 61 of the server 6 may be sent to the headset 102 of the communication device 101 through the peer-to-peer network, and then sent through the wireless link to the computing device 103 for display to the user.
[00179] When the map and map data are received by the computing device 103 through the communications interface 104 and antenna 105, they are stored in the storage module 106. The map and map data are then used to display a map, such as the map 31 illustrated in figure 6 to the user 3 on the display and user interface 107.
[00180] In alternative examples, the communication device 101 may include a mapping module, so that the communication devices 101 can carry out the mapping themselves. In such examples the computing device 103 may include the mapping module to carry out the noise mapping.
[00181] By referring to the map 31, users may be able to determine for themselves how to get from one point to another, whilst limiting their exposure to noise. However, the navigation module 108 provides a navigation function in which the positioning module 20 of the headset 102 determines the current position of the user and sends this current position to the computing device 103, and the user indicates a desired destination location using the display/user interface 107 of the computing device 103. The navigation module 108 then determines a navigation path which exposes the user to the least amount of noise based on the map data. For instance, the navigation module 108 may determine a navigation path that avoids at least one high noise level area. An example of a navigation path 39 is shown on the map 31 in Figure 6.
[00182] The navigation path determined by the navigation module 108 may be compared to changes in the position of the user over time, as determined by the positioning module 20 of the headset 102, and the notification module 43 of the headset 102 may output an audio notification to the user 3 via the audio output device 13 of the headset 102, and/or the navigation module 108 may output a visual notification to the user 3 through the display/user interface 107, if the user 3 deviates from the navigation path.
[00183] Users can use the map 31 to determine their own paths by themselves, in order to limit their exposure to noise. Alternatively, users may instruct the device to determine the best route for limiting the users' exposure to noise using the navigation module 108.
[00184] In order for the navigation module 108 to determine a route that limits users' noise exposure, the navigation module 108 may identify a noise level limit. Then the navigation module 108 causes the display/user interface 107 to display paths from the user's location to the intended destination, where the noise level associated with each path is below the noise level limit.
[00185] In addition, routing software tools similar to those used in conventional navigation systems may be used, but with preferences such as determining the least noisy route or determining the shortest route which avoids noise levels above a certain threshold. Any deviation from the chosen path may be detected by a rise in noise levels above the selected threshold. This may lead to an audible or visual warning. The user's destination may be indicated by tapping the appropriate area on a pressure-sensitive display screen. In addition, the user may be able to zoom in on areas on the map for closer inspection.
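By way of illustration, the least-noisy routing described in paragraphs [00183]-[00185] could be sketched as a shortest-path search over a grid of measured noise levels. This is only an illustrative sketch; the grid representation, the function name and the use of Dijkstra's algorithm are assumptions, not details taken from the disclosure.

```python
import heapq

def least_noise_path(noise_grid, start, goal, limit=None):
    """Dijkstra over a grid of noise levels (dB); the path cost is the
    sum of the noise values of the cells traversed, so the cheapest
    path is the quietest overall. Cells whose noise level exceeds
    `limit` (if given) are treated as impassable, implementing the
    noise level limit of paragraph [00184]."""
    rows, cols = len(noise_grid), len(noise_grid[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                level = noise_grid[nr][nc]
                if limit is not None and level > limit:
                    continue  # skip cells above the noise level limit
                new_cost = cost + level
                if new_cost < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = new_cost
                    prev[(nr, nc)] = node
                    heapq.heappush(queue, (new_cost, (nr, nc)))
    return None  # no path exists below the limit
```

Passing a `limit` corresponds to the shortest-route-below-threshold preference; omitting it corresponds to the least-noisy-route preference.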
[00186] In the second embodiment described above the communication device 101 comprises a headset 102 and a computing device 103 in wireless or wired communication. Although the illustrated embodiment has specific functions and modules of the communication device 101 assigned to different ones of the headset 102 and computing device 103, in other examples the functions and modules of the communication device 101 may be differently distributed between the headset 102 and the computing device 103 as convenient. In some examples the navigation module could be part of the headset, and the computing device 103 could be a "dumb" display which merely displays image data provided by the headset 102. In other examples much of the functionality of the communication device 101 could be provided by the computing device 103. This may be advantageous in examples where the computing device 103 has significant on-board processing capability, such as where the display device is a smartphone.
[00187] Figure 12 illustrates a communication system 200 according to a third embodiment. The communication system 200 according to the third embodiment has a hearing-test mode in which a communication device is arranged to determine an ear characteristic of a user; and an audio output mode in which the audio processing unit is arranged to provide an audio output, which is adjusted based on the determined ear characteristic. Thus, a user can administer a hearing test using the communication device, in order to determine a characteristic of their ears. Then, this characteristic can be used to adjust the audio output from the communication device in the audio output mode. The audio output mode may also be referred to as an audio streaming mode. This allows a user to tune the communication device to their own hearing characteristics without having to visit a clinician.
Since the communication device conducts the hearing test and the audio output, the communication device can be tuned immediately after the hearing test. This avoids the need to wait for results to be processed by a separate unit. Furthermore, a user can use the communication device to customise its audio output based on at least one characteristic of their ears.
[00188] In general, headphones and earphones such as those used for communication devices are designed to have only one audio output profile. However, this may have disadvantages, because each person has a different hearing profile, as different people hear sounds differently and may have different sensitivities to different frequencies. Therefore, one particular earphone may be acceptable for one person, but may be entirely inappropriate for another individual. Therefore, it would be desirable to provide a communication device with an audio output that can be optimised for individual users.
[00189] The communication device may be able to determine an ear characteristic of the user's ear more accurately by detecting a response to an audio test signal. For example, an audio test signal with a pre-defined frequency and amplitude may be output to the user's ear via an audio output device of the communication device. The communication device may then detect a response by receiving an input from the user indicating that the audio test signal has been heard. This allows the communication device to determine that the user is able to hear that particular sound frequency at a particular amplitude. This information can be used to adjust the output of the audio stream in the audio output mode, in order to optimise the user's hearing experience. [00190] The communication device may be arranged to output a plurality of pre-defined audio test signals via the audio output device, in the hearing test mode. In addition, the
communication device may be arranged to determine at least one characteristic of the ear of the user based on a response, or responses, to the plurality of pre-defined audio test signals.
[00191] Healthy ears emit sounds called Otoacoustic Emissions (OAEs). These OAEs are produced by the outer hair cells of the cochlea in the inner ear. Generally, there are two types of OAEs: Spontaneous Otoacoustic Emissions (SOAEs) and Evoked Otoacoustic Emissions (EOAEs). SOAEs are emitted without external stimulation of the ear, whilst EOAEs are emitted when the ear is subject to external stimulation. The OAEs emitted by the ear of a user indicate characteristics of that user's ear.
[00192] The communication device may be provided with an ear-microphone which can detect sound emitted by the user's ear. Therefore, it is possible to detect OAEs emitted by the user's ear. This allows the communication device to determine a characteristic based on the OAEs, which in turn can be used to adjust the audio stream, in order to optimise the user's hearing experience.
[00193] In some examples, the results of SOAE detection may be used as a basis for activating specific EOAE tests, for example by selecting a frequency and amplitude of external stimulation of the ear used to evoke EOAEs. Such results may be, for example, changes in the user's SOAE profile, which may be determined from the results of the SOAE detection.
[00194] Referring to Figure 12, a communication device 200 according to a third embodiment is shown, comprising a headset 201 which is communicatively connected to a computing device 202.
[00195] The headset 201 is substantially the same as the communication device 2 according to the first embodiment and the headset 102 according to the second embodiment, and comprises corresponding components. The computing device 202 may be a smartphone. However, it will be appreciated that any other suitable computing device 202 may be used.
[00196] The environmental audio input device 16 is used to conduct a hearing test for the ear of the user 3, in order to determine at least one ear characteristic of the user's ear. This hearing test will be described in greater detail below.
[00197] The ear characteristic may represent the sensitivity of the ear to at least one frequency. This ear characteristic can be stored at the storage module 19 at the headset 201, so that the communication module 14 can adjust the audio output of the audio output device 13 based on the ear characteristic. In one example, the communication module 14 is arranged to adjust the audio output via the audio output device 13 based on the sensitivity of the ear to certain frequencies, so that frequencies to which the ear is less sensitive are amplified and/or frequencies to which the ear is more sensitive are attenuated. In this way, the headset can optimise the audio stream for an individual user's ear.
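The per-frequency amplification and attenuation described in paragraph [00197] could, for instance, be derived from measured hearing thresholds as in the following illustrative sketch; the reference level, the gain cap and the function name are assumptions for illustration only, not values taken from the disclosure.

```python
def band_gains(thresholds_db, reference_db=20.0, max_gain_db=15.0):
    """Derive a per-band gain (dB) from measured hearing thresholds:
    bands where the ear is less sensitive (threshold above the assumed
    reference level) are boosted, and bands where it is more sensitive
    are attenuated, with the correction capped at +/- max_gain_db."""
    gains = {}
    for freq, threshold in thresholds_db.items():
        gain = threshold - reference_db  # positive -> boost needed
        gains[freq] = max(min(gain, max_gain_db), -max_gain_db)
    return gains
```

The resulting table of gains would then be applied to the audio stream by the communication module before output.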
[00198] The communication module 14 is arranged to operate in a hearing test mode and an audio output mode. In the hearing test mode, the communication module 14 is arranged to determine at least one ear characteristic of the ear of the user 3 based on a hearing test. In the audio output mode, the communication module 14 is arranged to output an audio stream via the audio output device 13, where the audio stream is adjusted based on the at least one ear characteristic. [00199] The computing device 202 comprises an antenna 105, a communication interface
104, a storage module 106 and a display/user interface 107, similarly to the computing device 103. The antennas 11, 105 and the communication interfaces 10, 104 of the headset 201 and computing device 202 are used to establish a wireless connection between the headset 201 and the computing device 202, so that they can communicate with one another. In this example, the headset 201 and the computing device 202 communicate wirelessly with one another, for instance, via Bluetooth® or via Wi-Fi. However, the headset 201 and the computing device 202 may also communicate with one another via any other suitable connection, such as via a wired connection.
[00200] The computing device 202 also has an audio processing module 203, which performs a similar hearing test function to the communication module 14 of the headset 201. The functions of the communication module 14 of the headset 201 and the audio processing module 203 of the computing device 202 may be shared between the modules 14, 203.
[00201] The audio processing module 203 at the computing device 202 can also be used to conduct hearing tests for determining an ear characteristic of the user's ear. The audio processing module 203 can also be used for transmitting audio signals to the audio output device 13 via the antennas 11, 105 and communication interfaces 10, 104.
[00202] The computing device 202 further comprises an audio source module 204, which is arranged to interface with the audio processing module 203. The audio source module 204 may be, for instance, a telephone link or other audio or multimedia communications channel, a digital music player, or a music streaming application.
[00203] The audio source module 204 is arranged to communicate with the headset 201 via the audio processing module 203, communication module 14, communication interfaces 10, 104 and antennas 11, 105 in order to output voice, music, or any other audio, via the audio output device 13. The storage module 106 at the computing device 202 may be used for storing audio for output by the headset 201. The storage module 106 may also be used to store ear characteristics of the user's ear.
[00204] Similarly to the headset 102 and computing device 103 of the second embodiment discussed above, the headset 201 is connected to the server 6 by the peer-to-peer network, and the computing device 202 may also be connected to the server 6. The server 6 may be used for storing ear characteristics and/or audio for output via the headset 201. In examples where the communications interface 104 of the computing device 202 is a peer-to-peer networking interface, the computing device 202 may communicate directly with the server 6 using the peer-to-peer network.
[00205] Figure 13 shows a flow chart illustrating a method of adjusting an audio output via the headset 201 based on an ear characteristic of the user's ear. In step 1300, the user 3 selects the hearing test mode of the communication module 14 and/or audio processing module 203. In order to do this, the user interacts with the display/user interface device 107 at the computing device 202 to activate a hearing test application.
[00206] In step 1302, the communication module 14 and/or the audio processing module 203 determines at least one ear characteristic of the user's ear by carrying out at least one hearing test. Different hearing tests that may be conducted by the communication module 14 and/or the audio processing module 203 will be described in greater detail below. The ear characteristic can be stored at a storage module 19, 34 at the headset 201, the computing device 202, or at the server 6.
[00207] In step 1304, the user selects the audio output mode of the communication module 14 and/or the audio processing module 203. In order to do this, the user interacts with the display/user interface device 107 at the computing device 202 to activate an audio output mode. In this example, the hearing test mode and the audio output mode are described as separate applications. However, the functionality of each of these modes may be integrated into a single application at the computing device 202.
[00208] Optionally, in some examples, the headset 201 may comprise at least one external environmental audio input device 16a located externally of an ear defender. The external environmental audio input device 16a receives sound signals from the environment outside the headset 201. These sound signals can be processed by the communication module 14 and output by the audio output device 13 to allow the headset 201 to operate as a hearing aid to assist a user 3 to hear environmental sounds, such as speech from persons not wearing any headset, without removing the headset 201.
[00209] In examples where the headset 201 comprises at least one external environmental audio input device 16a, in a step 1306, the user can select either the external environmental audio input device 16a or the audio source module 204 as the preferred source of audio. If the user selects the external environmental audio input device 16a, the method proceeds to step 1308. On the other hand, the user may select the audio source module 204 as the preferred source of audio, in which case the method proceeds to step 1310. In this example the audio source module 204 is a digital music player. However, any other type of suitable audio application may be used. Steps 1300-1304 may be carried out once or many times, and the same applies to steps 1306-1310.
[00210] In step 1308, the external environmental audio input device 16a receives sound from the environment outside of the headset 201 and replays the received sound, in real time, via the audio output device 13. Before replaying the sound, the audio stream is adjusted based on the ear characteristic determined and stored in the hearing test mode. This allows the headset 201 to optimise the user's hearing of sound in their environment. The communication module 14 will limit the maximum volume of the sound emitted by the audio output device in order to avoid any problems if the user selects the external environmental audio input device 16a as the preferred source of audio when the user is in a noisy environment. In some examples the communication module 14 may monitor the volume of external environmental noise detected and may deselect, or disable selection of, the external environmental audio input device 16a as the preferred source of audio when the volume of noise detected is too high.
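The volume limiting described in paragraph [00210] can be illustrated by a simple peak limiter applied to each block of the hear-through stream. This is a hypothetical sketch only; the normalised sample representation and the ceiling value are illustrative assumptions, not details from the disclosure.

```python
def limit_output(samples, max_amplitude=0.5):
    """Hard ceiling on a block of the replayed hear-through stream:
    if the block's peak exceeds max_amplitude, scale the whole block
    down so the peak sits exactly at the ceiling. Samples are assumed
    normalised to the range [-1.0, 1.0]."""
    if not samples:
        return []
    peak = max(abs(s) for s in samples)
    if peak <= max_amplitude:
        return list(samples)  # already within the allowed volume
    scale = max_amplitude / peak
    return [s * scale for s in samples]
```

A real implementation would more likely apply smoothed gain reduction to avoid audible pumping, but the principle of capping the replayed level is the same.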
[00211] In step 1310, the audio source module 204 transmits audio for output via the audio output device 13. Here the signals are transmitted via the audio processing module 203, the communication module 14, the communication interfaces 10, 104, and the antennas 11, 105. However, it will be appreciated that a wired communication mechanism could be used instead.
[00212] Before replaying the audio transmitted by the audio source module 204, the audio is adjusted based on the ear characteristic determined in the hearing test mode. This allows the headset 201 to optimise the user's hearing of live or pre-recorded music. In examples where the computing device 202 is a smartphone, the audio source module 204 may be able to selectively provide audio received through a communications channel supported by the smartphone, instead of music.
[00213] It will be understood that in examples where the headset 201 does not comprise any external environmental audio input device 16a, the steps 1306 and 1308 can be omitted, and the method can proceed directly from step 1304 to step 1310.
[00214] The communication module 14 may be arranged to selectively suppress the output, via the audio output device 13, of audio having the audio source module 204, or the external environmental audio input device 16a, as its source, in favour of audio communication received through the peer-to-peer network, such as audio communications from other users, or PA system messages.
[00215] Figure 14 shows a flow chart illustrating a more detailed example of the method in step 1302 of Figure 13 for determining an ear characteristic. In step 1400, the user initiates the hearing test application using the display/user interface 107. In step 1402, the communication module 14 and/or audio processing module 203 causes the audio output device 13 to output a first pre-defined audio test signal. The first pre-defined audio test signal has a pre-defined frequency and amplitude. In step 1404, the display/user interface 107 prompts the user 3 to provide a positive or a negative response via the display/user interface device 107. A positive response indicates that the user 3 can hear the first test signal, whilst a negative response, or a lack of a response after a particular time period, indicates that the user 3 cannot hear the first test signal.
[00216] If the user cannot hear the first test signal and a response is not received at the user interface, the method proceeds to step 1406 in which the amplitude of the first test signal is increased. The method repeats steps 1402-1406 until the display/user interface 107 receives a response from the user indicating that the test signal has been heard. [00217] Once the user has indicated that they have heard the test signal, the method proceeds to step 1408 in which the communication module 14 and/or audio processing module 203 determines an ear characteristic of the ear. In this example, the ear
characteristic determined is the sensitivity of the user's ear to the frequency of the test signal. For instance, this sensitivity may be recorded as the minimum amplitude at which the user is able to hear a particular frequency. This minimum amplitude may indicate that it is necessary to adjust the audio output so that audio signals at this frequency are either amplified or attenuated depending on whether the user is less or more sensitive to that particular frequency. [00218] After the ear characteristic has been determined in step 1408, the method proceeds to step 1410 in which the frequency of the test signal is changed. Then steps 1402-1408 are repeated in order to determine another ear characteristic of the ear, which in this case may be the user's sensitivity to the new frequency.
[00219] Steps 1402-1410 may be repeated for a range of test frequencies. In this way, the communication device 200 is able to build a hearing profile for the user. Then, audio output can be adjusted accordingly in order to optimise the user's hearing experience.
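The test loop of steps 1402-1410 can be sketched as follows, with the user's positive or negative response modelled by a callback; the function names, starting level, step size and ceiling are illustrative assumptions rather than details of the disclosure.

```python
def run_hearing_test(frequencies_hz, heard, start_db=0.0, step_db=5.0,
                     max_db=90.0):
    """Sketch of the Figure 14 loop: for each test frequency, present
    the tone at increasing amplitude until the (hypothetical)
    heard(freq, level) callback reports a positive response. The lowest
    heard level is recorded as the threshold for that frequency, or
    None if the tone is never heard up to max_db."""
    profile = {}
    for freq in frequencies_hz:
        level = start_db
        threshold = None
        while level <= max_db:
            if heard(freq, level):      # positive response via the UI
                threshold = level       # step 1408: record sensitivity
                break
            level += step_db            # step 1406: raise the amplitude
        profile[freq] = threshold       # step 1410 moves to the next freq
    return profile
```

The returned dictionary is a simple hearing profile of the kind paragraph [00219] describes, which can then drive the per-frequency output adjustment.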
[00220] In some examples the headset 201 has at least one internal environmental audio input device 16b which is located internally of an ear defender so that it is able to sense sounds emitted by an ear of the user 3. In some examples the internal environmental audio input device 16b is arranged to be located at least partially inside an ear of the user 3 when the headset 201 is in use. However, in other examples, the internal environmental audio input device 16b is arranged to be located outside the ear.
[00221] Figure 15 shows a flow chart illustrating an alternative example of the method in step 1302 of Figure 13 for determining an ear characteristic, which may be used in examples where the headset has at least one internal environmental audio input device 16b. This method of determining an ear characteristic relies on detecting Otoacoustic Emissions (OAEs) emitted by the ear of the user.
[00222] OAEs are sounds given off by the inner ear as a result of an active cellular process. When a soundwave enters the ear canal it is transmitted to the fluid of the inner ear via the middle ear bones. The airborne vibrations are converted into fluid-borne vibrations in the cochlea. The fluid-borne vibrations in the cochlea result in the outer hair cells producing a sound that echoes back into the middle ear. Outer hair cell vibrations can be induced by either external sound waves (EOAEs) or internal mechanisms (SOAEs).
[00223] People with normal hearing produce OAEs. However, those with hearing loss greater than 25-30 decibels (dB) generally do not produce OAEs. Studies have shown that OAEs disappear after the inner ear has been damaged, so OAEs can be used as a measure of inner ear health.
[00224] The primary purpose of otoacoustic emission (OAE) tests is to determine cochlear status, specifically hair cell function. OAE testing also builds up a picture of the conductive mechanism characteristics from the ear canal to the outer hair cells (OHC) of the cochlea over the hearing range of frequencies. This includes proper forward and reverse transmission, no blockage of the external auditory canal, normal tympanic membrane (eardrum) movement, and a functioning impedance matching system normally checked by impedance audiometry.
[00225] The middle ear matches the acoustic impedance between the air and the fluid, thus maximizing the flow of energy from the air to the fluid of the inner ear. Impairment in the transmission of sound through the middle ear creates a conductive hearing loss which can be compensated by increasing the amplification of sounds entering the ear canal. Therefore, more energy is needed for the individual with a conductive hearing loss to hear sound, but once any audio is loud enough and the mechanical impediment is overcome, the ear works in a normal way. OAE results in this case would typically show non-frequency specific hearing loss in the form of reduced amplitudes above the noise floor across the frequency range of hearing. [00226] The outer hair cells (OHC) of the cochlea of the inner ear perform a
resonating/amplifying role, which generates the electro-physical OAE responses. Present OAEs at the required threshold would indicate normal OHC functionality at those measured frequencies, equating to normal hearing sensitivity. Any gaps in normal hearing sensitivity would need more complex adjustments, such as frequency-selective boosting of sound waves to the neighbouring functional outer hair cells in the case of dead regions of the cochlea.
[00227] OAEs in general provide reliable information on the ear's auditory pathway characteristics, which can also be a significant help in preventing noise-related hearing loss. OAEs can provide the means to monitor a patient for early signs of noise-related hearing damage. Excessive noise exposure affects outer hair cell (OHC) functionality, so OAEs can be used to detect this. An OAE evaluation can give a warning sign of outer hair cell damage before it is evident on an audiogram. OAEs are more sensitive in detecting cochlear dysfunctions, since the outer hair cells are the first structure of the inner ear to be damaged by external agents, even before changes in audiometric thresholds are recorded.
[00228] There are two types of OAE: Spontaneous Otoacoustic Emissions (SOAEs) and Evoked Otoacoustic Emissions (EOAEs). SOAEs are sounds that are emitted from the ear without external stimulation. On the other hand, EOAEs are sounds emitted from the ear in response to external stimulation.
[00229] In step 1500 the internal environmental audio input device 16b is used to detect SOAEs emitted by the user's ear. In this step, the audio output device 13 does not provide any stimulus to the ear. Once the internal environmental audio input device 16b has attempted to detect any SOAEs, the method proceeds to step 1502 in which an ear characteristic is determined based on the SOAEs, or lack thereof.
[00230] Spontaneous otoacoustic emissions (SOAEs) can be considered as continuously evoking otoacoustic emissions which provide supplementary information on the ear's auditory pathway characteristics. Accordingly, SOAEs are ideally suited to monitoring the user's hearing abilities during quiet periods to identify the onset of any hearing impairment without any user cooperation or awareness of the monitoring being necessary.
[00231] Spontaneous otoacoustic emissions typically show multiple narrow frequency spikes above the noise floor indicating normal functionality. An attenuation of these spikes over time could indicate impending noise related hearing impairment which may become permanent unless appropriate action is taken. The attenuation of these spikes may be recorded as an ear characteristic, and audio output can be adjusted accordingly, for instance, by increasing amplitude of audio output at these frequencies.
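The detection of narrow SOAE spikes above the noise floor, described in paragraph [00231], could be sketched as below for an already-measured ear-canal spectrum; the median-based noise-floor estimate and the 6 dB margin are illustrative assumptions, not parameters stated in the disclosure.

```python
def detect_soae_spikes(spectrum_db, margin_db=6.0):
    """Pick out narrow spectral spikes sitting at least margin_db above
    the estimated noise floor of a measured ear-canal spectrum (a dict
    of frequency in Hz -> level in dB). The median level is used as a
    crude noise-floor estimate, since SOAE spikes are narrow and most
    bins carry only noise."""
    levels = sorted(spectrum_db.values())
    floor = levels[len(levels) // 2]  # median as the noise-floor estimate
    return {f: l for f, l in spectrum_db.items() if l - floor >= margin_db}
```

Comparing the set (and amplitude) of detected spikes across sessions would reveal the attenuation over time that paragraph [00231] identifies as a warning sign.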
[00232] In step 1504, the audio output device 13 outputs an audio test signal as a stimulus to the ear. The stimulus is arranged to cause the ear to emit an EOAE, which is detected in step 1506 if any EOAEs are emitted. Then, in step 1508, an ear characteristic is determined based on the EOAE, or lack thereof. In some examples, the results of the SOAE detection in step 1500 may be used as a basis for activating specific EOAE tests in steps 1504 and 1506. Such results may be, for example, changes in the user's SOAE profile, which may be determined from the results of the SOAE detection in step 1500.
[00233] There are a number of different types of audio test signal that can be used as a stimulus to the ear when attempting to detect EOAEs in step 1504. Evoked otoacoustic emissions can be evoked using a variety of different methods.
[00234] In one method, a pure-tone stimulus is output and stimulus frequency OAEs (SFOAEs) are measured during the application of the pure-tone stimulus. The SFOAEs are detected by measuring the vectorial difference between the stimulus waveform and the recorded waveform, which consists of the sum of the stimulus and the OAE.
[00235] In another method, a click, a broad frequency range, a tone burst or a brief duration pure tone is output and transient evoked OAEs (TEOAEs or TrOAEs) are measured. The evoked response from a click covers the frequency range up to around 4 kHz, while a tone burst will elicit a response from the region that has the same frequency as the pure tone.
[00236] In another method, distortion product OAEs (DPOAEs) are evoked by outputting a pair of primary tones (f1 and f2). The corresponding DPOAEs are measured to determine an ear characteristic.
[00237] The pair of primary tones of similar intensity have a frequency ratio f2/f1 which typically lies at approximately 1.2, from which strong distortion products (DP) should be detected at 2f1-f2 and at 2f2-f1, where f2 is the higher-frequency tone.
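The distortion-product frequencies in paragraph [00237] follow directly from the two primaries; the following illustrative helper computes them (the function name and default ratio are assumptions for illustration).

```python
def dpoae_frequencies(f1, ratio=1.2):
    """Given the lower primary tone f1 and the typical f2/f1 ratio of
    about 1.2, return the two strongest cubic distortion products,
    2*f1 - f2 and 2*f2 - f1 (f2 being the higher-frequency primary)."""
    f2 = f1 * ratio
    return 2 * f1 - f2, 2 * f2 - f1
```

For example, primaries at 1000 Hz and 1200 Hz place the lower distortion product at 800 Hz and the upper at 1400 Hz, which are the frequencies at which the ear-microphone would listen for the emissions.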
[00238] EOAEs measure the conductive mechanism characteristics of the ear including the integrity of the outer hair cells (OHC) which can be damaged by exposure to high levels of noise. The steepness of phase roll-off from stimulus-frequency OAEs (SFOAE) is also believed to indicate the true frequency selectivity of single fibres of the human auditory nerve that represent the stimulus input to auditory brain centres. This may help in compensating for dead regions of the cochlea by selectively elevating the magnitude of neighbouring
(sideband) frequencies to supplement the missing ones.
[00239] An alternate method could be to utilise RF frequency mixing techniques to synthesise the missing frequency in the auditory (neural) centre by mixing the neural-electrical energy of two frequencies, where the upper product lies towards the limit of the normal hearing range and the lower product equates to the damaged area of the cochlea, i.e. cos(ωRF·t) × cos(ωLO·t) = 0.5{cos[(ωRF − ωLO)t] + cos[(ωRF + ωLO)t]}. In this method, the non-linear function of the cochlea will need to be taken into account.
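The product-to-sum identity quoted in paragraph [00239] can be checked numerically; this sketch simply evaluates both sides of the identity (function names are illustrative and the frequencies are arbitrary examples).

```python
import math

def mix(f_rf, f_lo, t):
    """Left-hand side: the product of two cosines at frequencies f_rf
    and f_lo (in Hz), evaluated at time t."""
    return (math.cos(2 * math.pi * f_rf * t)
            * math.cos(2 * math.pi * f_lo * t))

def product_to_sum(f_rf, f_lo, t):
    """Right-hand side: 0.5*[cos((w_rf - w_lo)t) + cos((w_rf + w_lo)t)],
    i.e. equal energy at the difference and sum frequencies."""
    diff = math.cos(2 * math.pi * (f_rf - f_lo) * t)
    summ = math.cos(2 * math.pi * (f_rf + f_lo) * t)
    return 0.5 * (diff + summ)
```

Mixing 8 kHz with 3 kHz, for instance, yields components at 5 kHz (difference) and 11 kHz (sum), which is the mechanism the paragraph proposes for re-creating energy at a damaged cochlear region.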
[00240] EOAE measurements provide frequency specific information about hearing ability in terms of establishing whether auditory thresholds are within normal limits which is important for hearing aid settings and for diagnosing sensory or conductive hearing impairment which can lead to problems understanding speech in the presence of background noise.
[00241] Conventional techniques such as air- and bone-conduction pure-tone audiometry and simple speech audiometry are not always as reliable. Tuning the hearing device from OAE data as described previously to compensate for any hearing impairment and/or perceived deficiencies needs to account for the acoustic properties of the ear canal. This is achieved by using the REMs method described below, which should restore the user's hearing response to match that of a normal hearing profile and/or suit personal preferences with the hearing device inserted.
[00242] In some examples the headset 201 includes a sound pressure sensor 205 and a probe 206. The probe 206 is arranged to be inserted at least partially inside the ear when the headset 201 is in use. This allows the sensor 205 to measure the sound pressure level within the ear. The sound pressure sensor 205 and the probe 206 are used to conduct another hearing test in order to determine an ear characteristic, which again can be used to optimise the audio output.
[00243] Figure 16 shows a flow chart illustrating a different example of the method of determining an ear characteristic, as in step 1302 of Figure 13, which may be used in examples where the headset 201 includes a sound pressure sensor 205 and a probe 206. This method of determining an ear characteristic relies on the sound pressure level in the ear 9 of the user.
[00244] In step 1600, the probe 206 and the sound pressure sensor 205 of the headset 201 measure the sound pressure level in the user's ear. Preferably the probe 206, which in this instance is a probe tube, is placed with its tip approximately 6mm from the tympanic membrane of the ear. In this step the sound pressure level is measured when there is no audio output via the audio output device 13, or in other words when the audio output device 13 is inactive. This sound pressure level may be referred to as an unaided sound pressure level.
[00245] In step 1602, as in step 1600, the probe 206 and the sound pressure sensor 205 of the headset 201 measure the sound pressure level in the user's ear. However, in this step the sound pressure level is measured when the audio output device 13 is outputting an audio signal. Thus, in this step the sound pressure level is measured when the audio output device 13 is active. This sound pressure level may be referred to as an aided sound pressure level.
[00246] In step 1604, the communication module 14 and/or audio processing module 203 calculates the difference between the unaided sound pressure level and the aided sound pressure level in order to determine the 'insertion gain'. The insertion gain may be described as an ear characteristic. This characteristic can be matched to targets produced by various prescriptive formulae based on the user's audiogram or individual hearing loss.
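The insertion-gain calculation of step 1604 amounts to a per-frequency subtraction of the unaided from the aided sound pressure level; the following illustrative sketch assumes both measurements are available as dictionaries of frequency to SPL in dB (names are assumptions for illustration).

```python
def insertion_gain(aided_spl_db, unaided_spl_db):
    """Per-frequency insertion gain: aided minus unaided real-ear SPL,
    as measured with the probe tube near the tympanic membrane. This
    corresponds to REAR - REUR in the real-ear measurement terminology
    used later in the text."""
    return {f: aided_spl_db[f] - unaided_spl_db[f] for f in aided_spl_db}
```

The resulting gain curve can then be compared against the prescriptive target to verify, and if necessary fine-tune, the device's output adjustment.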
[00247] The size and shape of the ear canal affects the acoustics and resonant qualities of the ear. In "real-ear" measurement, the actual acoustic energy that exists within the ear canal of a particular person is accurately measured. Real-ear measurements (REMs) compare and verify the real-ear acoustic characteristics of a hearing device with a prescription target set by, for instance, a combination of OAE data, user preferences, and machine learning data.
[00248] Machine learning algorithms use audio sensing in diverse and unconstrained acoustic environments to adjust the user's listening experience according to a learned model based on a large dataset of place visits. DeepEar is an example of micro-powered machine learning using deep neural networks (DNN) to significantly increase inference robustness to background noise beyond conventional approaches present in mobile devices. It uses computational models to infer a broad set of human behaviour and context from audio streams.
[00249] REMs allow the effects of adjustment of the audio output by the headset 201 to be verified by taking into account any changes to the sound pressure level (SPL) of the signal caused by the shape of the ear. Fine tuning may include adjusting the overall volume, or making changes at specific pitches/frequencies.
[00250] Measuring a patient's real-ear unaided response (REUR), or in other words the natural "amplification" in the patient's open, or non-occluded, ear canal, ensures that any adjustment does not over-amplify certain regions of the frequency response. [00251] Real Ear Occluded Gain (REOG) is a measurement which involves placing any in ear components of the headset in the ear but muted/off. It allows consideration of the attenuation caused by the in ear components and their obstruction of external sounds.
[00252] Numerous resonance (gain) peaks in a real ear aided response (REAR) curve prevent a smooth match to the prescriptive target. This is illustrated in the graph 1700 shown in Figure 17A, where line 1702 shows the predicted gain and line 1704 shows the real-ear gain.
[00253] Subtracting the level of the incoming signal from the REUR (real ear unaided response, in dB) gives the natural amplification, or gain, of the external ear (REUG, real ear unaided gain). This identifies, for adjustment, any inherent resonance troughs caused by the acoustics of the external ear.
[00254] The insertion gain is the difference REAR - REUR (real ear aided response minus real ear unaided response in sound pressure levels) or REAG - REUG (real ear gain parameters). For good speech intelligibility in the presence of noise, especially the softer consonant components of speech, there needs to exist an SNR of at least 15dB(A). The required insertion gain provided by the headset electronics improves an otherwise inadequate ear sensitivity characteristic such that softer sounds within the speech frequency spectrum are elevated to acceptable levels in preference to other sounds outside this range. [00255] Figure 17B illustrates a graph 1706 of REAR and REUR readings for a device that has been "acoustically matched" to an individual user's ear. This is shown by the smooth real-ear aided response (REAR), particularly between 2000 and 4000 Hz.
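The gain relationships in the preceding paragraphs reduce to simple decibel arithmetic. The sketch below (illustrative names and values, not from the specification) expresses REUG, insertion gain, and the extra gain implied by the 15 dB(A) SNR requirement:

```python
def reug(reur_db, input_db):
    # Real ear unaided gain: REUR minus the level of the incoming
    # signal, i.e. the natural amplification of the open ear canal.
    return reur_db - input_db

def reig(rear_db, reur_db):
    # Real ear insertion gain: REAR minus REUR.
    return rear_db - reur_db

def required_extra_gain(speech_spl, noise_spl, target_snr_db=15.0):
    # Additional gain (dB) needed so that soft speech components
    # exceed the noise by the target SNR; the text cites an SNR of
    # at least 15 dB(A) for good speech intelligibility.
    return max(0.0, target_snr_db - (speech_spl - noise_spl))
```

For example, a 65 dB input producing a 75 dB REUR gives `reug(75.0, 65.0) == 10.0`; if soft consonants sit only 5 dB above the noise floor, `required_extra_gain` indicates a further 10 dB of insertion gain is needed.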
[00256] Figure 18 illustrates an example of the output displayed on the display/user interface 107 of the computing device 202 when the communication module 14 and/or audio processing module 203 are in the audio output mode. In this example, the display/user interface 107 presents a graph 60 to the user. A first line 62 on the graph 60 displays the constituent frequencies within the sound from the environment received via the external environmental audio input device 16a, along with the amplitude of each of the frequencies. [00257] The user can select points 64A-F along the first line 62 using the display/user interface 107. Once one of the points 64A-F has been selected the user can drag that point to a desired amplitude.
[00258] In one hypothetical example, the user may be listening to the sounds in the surrounding environment using the headset 201 via the audio output device 13, and there may be a repetitive and loud low-frequency noise in the audio stream. This noise may be hindering the user's ability to hear a person speak. In response, the user may select points 64A and 64B and drag them down in order to reduce their amplitude so that the noise is less prominent in the audio output.
[00259] In another hypothetical example, there may be an undesired high frequency noise in the audio stream from the environment. In response, the user may select points 64C-F and drag them down in order to reduce their amplitude so that the high frequency noise is less prominent in the audio output.
[00260] In re-arranging points 64A-F the user has created a second line, which represents the actual output of the audio output device 13, and the graph 60 displays the difference between the actual sounds in the environment in comparison to the sounds output via the headset 201. In adjusting the audio stream a user will be able to highlight sounds that they want to hear and diminish sounds that they do not want to hear. This helps to optimise the user's listening experience. The user's listening experience is optimised further by adjusting the audio stream based on the ear characteristic determined in the hearing test mode. In another example, the headset 201 and/or computing device 202 may select audio having a particular frequency above a certain threshold and lower the amplitude of the selected audio automatically, without intervention from the user.
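The automatic attenuation described above can be sketched as a simple per-band rule. Here the audio is represented as a mapping from band centre frequency to amplitude; the function names, cutoff and reduction values are illustrative assumptions, not the specification's implementation:

```python
def auto_attenuate(band_amplitudes_db, cutoff_hz=4000, reduction_db=12.0):
    # Lower every band at or above cutoff_hz by reduction_db, leaving
    # lower-frequency bands (e.g. speech fundamentals) untouched.
    return {freq: amp - reduction_db if freq >= cutoff_hz else amp
            for freq, amp in band_amplitudes_db.items()}

# Hypothetical band amplitudes (Hz -> dB) from the environmental input.
bands = {250: 70.0, 1000: 65.0, 4000: 80.0, 8000: 78.0}
quieter = auto_attenuate(bands)
# quieter == {250: 70.0, 1000: 65.0, 4000: 68.0, 8000: 66.0}
```

The same function models the manual case: dragging points 64C-F down corresponds to applying a reduction to the selected bands only.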
[00261] In another example, the computing device 202 may be arranged to receive an input from a user indicating a preferred frequency response for the audio stream output in the audio output mode. For instance, the user may be able to adjust a graphic equaliser presented via the display/user interface 107. The communication module 14 and/or audio processing module 203 may be arranged to adjust the output of the audio stream in the audio output mode based on the at least one ear characteristic determined in the hearing-test mode and the preferred frequency response indicated by the user. Therefore, it is possible to optimise the output audio stream based on a combination of user preferences and results of the hearing test. Therefore, the user may be able to 'fine-tune' their listening experience in order to achieve the optimum audio output.
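Combining the hearing-test result with the user's equaliser preference can be sketched as summing two per-band gain curves, with a cap to avoid over-amplification. In this illustrative Python sketch the 30 dB limit and all names are assumptions, not from the specification:

```python
def combined_gain(ear_gain_db, user_eq_db, limit_db=30.0):
    # Per-band output gain: gain derived from the at least one ear
    # characteristic plus the user's graphic-equaliser adjustment,
    # clipped to an assumed safe maximum.
    return [min(e + u, limit_db) for e, u in zip(ear_gain_db, user_eq_db)]

# Hypothetical three-band example (low, mid, high).
gains = combined_gain([10.0, 15.0, 20.0], [5.0, -3.0, 15.0])
# gains == [15.0, 12.0, 30.0]  (third band clipped from 35 dB)
```

Clipping the summed curve is one way a device might honour user 'fine-tuning' without exceeding a safe output level.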
[00262] Figure 19A and Figure 19B illustrate an example of a user's hearing profile. The user's measured hearing profile is compared to a range of reference values within which normal hearing is considered to lie. In the hearing profile below, it can be seen that the DPOAE measurements in Figure 19A correlate closely with the audiometric profile in Figure 19B of a hearing loss patient. There is also good correlation with TOAE data, which is not illustrated here. In this example, the noise floor is the lower curve in Figure 19A, and the notch in the curve indicating hearing loss lies near the 15dB threshold. From this data, notches in the user's hearing profile can be compensated for by adding the correct amount of insertion gain in the hearing device electronics at those frequencies, as previously described.
[00263] The second and third embodiments described above each comprise a computing device, the computing devices of the second and third embodiments comprising some different modules. In some examples a communication device may be provided having the functionality of both of the second and third embodiments combined. In such examples the computing device may comprise the modules of the computing devices of both the second and third embodiments.
[00264] In the embodiments described above a number of modules are disclosed. These modules may comprise software running on a computing device such as a processor, may comprise dedicated electronic hardware, or may comprise a combination of software and hardware.
[00265] The embodiments described above include a server. The server may comprise a single server or network of servers. In some examples the server comprises a network of separate servers which each provide different functionality. In some examples the functionality of the server may be provided by a network of servers distributed across a geographical area, such as a worldwide distributed network of servers, and a user may be connected to an appropriate one of the network of servers based upon a user location. [00266] The embodiments described above comprise a peer-to-peer communications network connecting the communication devices. In some examples other types of communication networks may be used.
[00267] The embodiments described above comprise a peer-to-peer communications network formed by the communication devices in which noise mapping is carried out based on noise measurements made by the communication devices, and in some examples by fixed sensor nodes of the communication network, typically located in high noise locations. It is explained above that the peer-to-peer network may be supported by other devices. These other devices may include communication devices which are not used to carry out noise measurements, and noise measuring devices which are not communication devices.
[00268] The embodiments described above employ separate audio input devices, such as microphones, to receive user voice input and environmental noise input. In some examples one or more audio input devices may each be used to receive both user voice input and environmental noise input instead of, or in addition to, the separate audio input devices. [00269] The embodiments described above comprise a combined communication device and noise dosimeter able to provide communications, monitor the cumulative noise levels to which a user has been exposed over time, and to provide noise data together with associated location data. In some examples the combined communication device and noise dosimeter may only provide communications and monitor the cumulative noise levels to which a user has been exposed over time, or may only provide communications and provide noise data together with associated location data.
[00270] The illustrated embodiments disclose communication devices each comprising a headset mounted on and supported by the head of a user, and covering and protecting both ears of the respective user. In other examples, a pair of communication devices may be used, each mounted on and protecting a single ear of the respective user.
[00271] In the example of the third embodiment described above an internal environmental audio input device is used to detect SOAEs emitted by the user's ear. In other examples a dedicated audio input device separate from any environmental audio input device may be used to detect the SOAEs. [00272] The illustrated embodiments disclose a communication device comprising an environmental audio input device. In some examples the communication device may comprise a plurality of environmental audio input devices. [00273] The second and third embodiments described above each have a communication device comprising a headset and a computing device. In some examples the functionality of the computing device may be provided by the headset, so that the communication device comprises a headset only. [00274] The above description discusses embodiments of the invention with reference to a single user for clarity. It will be understood that in practice the system may be shared by a plurality of users, and possibly by a very large number of users simultaneously.
[00275] A wireless mesh network as described above can be used for wireless peer to peer connectivity. However, in another embodiment each communication device 2 may comprise a low power sub-GHz ISM band radio that does not depend on a mesh network for wide area peer-to-peer coverage and connects wirelessly to a remote hub without the need for hopping from node to node.
[00276] The communication system may include P2P group functions where the
supervisor/manager is given the option of group ownership which may extend to multiple concurrent P2P groups using Wi-Fi or other such technology, or a group communication system (GCS) where the network is divided into optional sub-groups.
[00277] The illustrated examples show a single communication system, for simplicity. In other examples a plurality of communication systems may be connected together or interconnected by a network infrastructure to provide communication links between different interconnected groups of devices, which groups may be remotely located. In some examples the plurality of communication systems may be connected together or interconnected by a network infrastructure such as infrastructure meshing with client meshing (P2P).
[00278] In the examples described above, the system monitors exposure to noise and outputs noise map data. In other examples the communication device may additionally be provided with suitable sensors to measure other environmental conditions than noise. Examples of such environmental conditions include airborne dust concentration or temperature, such as excessive heat or cold. In such examples the system can additionally measure and track users' exposure to these environmental conditions and/or map these environmental conditions in a corresponding manner to that described above for noise. [00279] In other examples the communication device may be provided with suitable sensors to measure other environmental conditions or hazards as an alternative to noise sensors. Examples of such environmental conditions include dust or temperature, such as excessive heat or cold. In such examples the system can measure and track users' exposure to these environmental conditions and/or map these environmental conditions in a corresponding manner to that described above for noise.
[00280] In examples where environmental conditions other than noise are measured the map display, navigation functions and notifications may include other hazards. For example, in a system measuring noise, dust and heat the map, navigation function and notifications may relate to any one or more of noise, dust and heat. In another example, in a system measuring only heat or only dust the map, navigation function and notifications may relate to only heat or only dust respectively.
[00281] In the described embodiments of the invention the system may be implemented as any form of a computing and/or electronic device. Such a device may comprise one or more processors which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to gather and record routing information. In some examples, for example where a system on a chip architecture is used, the processors may include one or more fixed function blocks (also referred to as accelerators) which implement a part of the method in hardware (rather than software or firmware). Platform software comprising an operating system or any other suitable platform software may be provided at the computing-based device to enable application software to be executed on the device.
[00282] Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include, for example, computer-readable storage media. Computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. A computer-readable storage medium can be any available storage medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable storage media may comprise RAM, ROM, EEPROM, flash memory or other memory devices, CD-ROM or other optical disc storage, magnetic disc storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disc and disk, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD). Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also includes communication media including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then those technologies are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.
[00283] Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, hardware logic components that can be used may include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
[00284] The term 'computer' is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realise that such processing capabilities are incorporated into many different devices and therefore the term 'computer' includes PCs, servers, mobile telephones including smartphones, personal digital assistants and many other devices.
[00285] Those skilled in the art will realise that storage devices utilised to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program.
Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realise that, by utilising conventional techniques, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
[00286] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. [00287] Any reference to 'an' item refers to one or more of those items. The term 'comprising' is used herein to mean including the method steps or elements identified, but such steps or elements do not comprise an exclusive list and a method or apparatus may contain additional steps or elements. [00288] As used herein, the terms "module", "device", "component" and "system" are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices.
[00289] Further, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term
"comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
[00290] The figures illustrate exemplary methods. While the methods are shown and described as being a series of acts that are performed in a particular sequence, it is to be understood and appreciated that the methods are not limited by the order of the sequence. For example, some acts can occur in a different order than what is described herein. In addition, an act can occur concurrently with another act. Further, in some instances, not all acts may be required to implement a method described herein.
[00291] Moreover, the acts described herein may comprise computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include routines, sub-routines, programs, threads of execution, and/or the like. Still further, results of acts of the methods can be stored in a computer-readable medium, displayed on a display device, and/or the like.
[00292] The order of the steps of the methods described herein is exemplary, but the steps may be carried out in any suitable order, or simultaneously where appropriate. Additionally, steps may be added or substituted in, or individual steps may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
[00293] It will be understood that the above description of preferred embodiments is given by way of example only and that various modifications may be made by those skilled in the art. What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methods for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the scope of the appended claims.

Claims

1. A communication device comprising:
a peer-to-peer communication interface arranged to establish a connection between the communication device and at least one other communication device via a peer-to-peer network;
an audio input device for receiving audio from a user;
a communication module arranged to transmit to one of the at least one other communication devices, via the peer-to-peer communication interface, audio data based on audio received from the user via the audio input device; and arranged to receive audio data from the one of the at least one other communication devices, via the peer-to-peer communication interface;
an audio output device for outputting audio based on the received audio data;
at least one ear defender for reducing noise exposure;
an audio input device for receiving environmental audio;
a noise level module arranged to determine a noise level based on the received environmental audio; and
a positioning module arranged to determine a position of the audio input device corresponding with the determined noise level; and arranged to associate the position of the audio input device with the corresponding noise level.
2. The communication device according to claim 1, and further comprising a calculation module arranged to calculate a calculated noise level based on time and noise levels determined by the noise level module.
3. The communication device according to claim 2, and further comprising a notification module arranged to output a notification if the calculated noise level reaches a noise threshold.
4. The communication device according to claim 3, wherein the notification module is arranged to output a notification if the calculated noise level reaches a pre-determined level below the noise threshold.
5. The communication device according to claim 3 or claim 4, wherein the noise threshold is user-defined.
6. The communication device according to any preceding claim, and further comprising: a storage module arranged to store a plurality of the positions determined by the positioning module, wherein each position is stored in association with its corresponding noise level determined by the noise level module.
7. The communication device according to any preceding claim, and further comprising: a mapping module arranged to generate map data based on the plurality of the positions and associated noise levels.
8. The communication device according to claim 7, and further comprising:
a display arranged to present map data based on the plurality of the positions and associated noise levels in combination with a map of an environment in which the environmental audio was received.
9. The communication device according to claim 8, wherein the display is arranged to indicate at least some of the positions each in association with their corresponding noise level.
10. The communication device according to claim 9, wherein the display is arranged to present at least one high noise level area indicative of a position associated with a noise level above a high noise level threshold.
11. The communication device according to claim 10, wherein the map displays at least one boundary defining the perimeter of a high noise level area.
12. The communication device according to any preceding claim, and further comprising a navigation module arranged to determine a navigation path from one point to another, the path being associated with a reduced level of noise exposure.
13. The communication device according to any preceding claim, wherein the audio input device for receiving audio from a user and the audio input device for receiving environmental audio are the same audio input device.
14. The communication device according to any preceding claim, wherein the audio input device for receiving environmental audio is located inside the at least one ear defender.
15. The communication device according to any preceding claim, wherein at least one of the audio input device for receiving environmental audio and the audio input device for receiving audio from a user is an in ear microphone.
16. The communication device according to any preceding claim, and further comprising a head mount, or an ear mount; and
wherein the audio input device or devices are mounted on the head mount or ear mount.
17. The communication device according to any preceding claim, and comprising two ear defenders.
18. The communication device according to any preceding claim, and further comprising a voice recognition module arranged to initiate establishing the connection between the communication device and one of the at least one other communication devices based on an audio voice command received from the user via the audio input device.
19. The communication device according to claim 18, wherein the voice recognition module takes into account the occlusion effect of bone material.
20. The communication device according to claim 18 or claim 19, and further comprising: a user interface switch and a control module;
wherein the communication device is configured to be able to operate in a plurality of different modes; and
the control module is arranged to detect whether one of a plurality of pre-defined user-interactions with the user interface switch has occurred; wherein each one of the predefined user-interactions is associated with a different mode of the communication device; and
the control module is arranged to activate the mode associated with the detected user-interaction.
21. The communication device according to claim 20, wherein the communication device is configured to be able to operate in a connection-disabled mode in which: the communication module is not permitted to transmit or receive audio data to or from another communication device; and/or
the peer-to-peer communication interface is not permitted to establish a connection between the communication device and another communication device via the peer-to-peer network; and/or
the voice recognition module is deactivated.
22. The communication device according to claim 20 or claim 21, wherein the communication device is configured to be able to operate in a connection-enabled mode in which:
the communication module is permitted to transmit or receive audio data to or from another communication device; and/or
the voice recognition module is only activated in response to a user interaction with the user interface switch.
23. The communication device according to claim 22, wherein in the connection-enabled mode the voice recognition module is arranged to establish a connection with another communication device in response to at least one voice command;
wherein the at least one voice command comprises inputting audio indicative of a label associated with the another communication device.
24. The communication device according to claim 23, wherein the label comprises a name of a user associated with the another communication device.
25. The communication device according to any of claims 22 to 24, wherein the communication device has a voice-control mode in which the communication device is permitted to transmit or receive audio data to or from another communication device;
wherein the voice recognition module is activated when the communication device is in the voice-control mode.
26. The communication device according to any one of claims 20 to 25, and arranged to determine that the communication device has not been mounted on an ear of a user, and in response to such a determination, to ignore user interactions with the user interface switch.
27. The communication device according to any preceding claim, wherein the communication device has a beacon mode in which the communication device alternates between an active state and a dormant state, where a greater amount of the functionality of the communication device is activated in the active state than in the dormant state.
28. The communication device according to any preceding claim, wherein the communication device has an override mode in which the communication device is able to transmit audio data for output of audio based on the audio data at another communication device irrespective of any mode activated at the another communication device.
29. The communication device according to any preceding claim, and further comprising an audio processing module having:
a hearing-test mode in which the audio processing module is arranged to determine at least one ear characteristic of the ear of the user based on at least one hearing test; and an audio output mode in which the communication device is arranged to output audio based on the received audio data via the audio output device;
wherein the audio processing unit is arranged to adjust the output audio in the audio output mode based on the at least one ear characteristic determined in the hearing-test mode.
30. The communication device according to claim 29, wherein the at least one ear characteristic represents the sensitivity of the ear to at least one frequency; and
wherein the audio processing module is arranged to adjust the output audio based on the sensitivity of the ear to at least one frequency, so that a frequency to which the ear is less sensitive is amplified and/or a frequency to which the ear is more sensitive is attenuated.
31 . The communication device according to claim 29 or claim 30, wherein the audio processing module is arranged to:
output at least one pre-defined audio test signal via the audio output device, in the hearing-test mode; and
determine at least one characteristic of the ear of the user based on a response to the at least one pre-defined audio test signal.
32. The communication device according to any of claims 29 to 31 , and further comprising an audio input device arranged to detect sound emitted by an ear of a user;
wherein the audio processing module is arranged, in the hearing-test mode, to determine at least one ear characteristic of the ear of the user based on detected sound emitted by the ear of the user.
33. The communication device according to claim 32, wherein the detected sounds emitted by an ear of the user are Spontaneous Otoacoustic Emissions ('SOAEs').
34. The communication device according to claim 33, wherein the audio processing module is arranged to monitor at least one hearing characteristic of the ear of the user for identifying the onset of hearing loss based on the detected SOAEs.
35. The communication device according to claim 31 , and further comprising an audio input device arranged to detect sound emitted by an ear of a user;
wherein the audio processing module is arranged, in the hearing-test mode, to determine at least one ear characteristic of the ear based on detected sound emitted by the ear of the user in response to the at least one pre-defined audio test signal.
36. The communication device according to claim 33 or claim 34, wherein the audio processing module is arranged, in the hearing-test mode, to determine at least one ear characteristic of the ear based on an Evoked Otoacoustic Emission ('EOAE') response detected by the audio input device in response to at least one pre-defined audio test signal; wherein the at least one pre-defined audio test signal is selected based on the detected SOAEs.
37. The communication device according to any of claims 29 to 36, and further comprising a pressure sensor arranged to measure an aided sound pressure level in the ear of the user when the audio output device is outputting audio and an unaided sound pressure level in the ear of the user without the audio output device outputting audio; and
wherein the audio processing module, in the hearing test mode, is arranged to determine the difference between the aided sound pressure level and the unaided sound pressure level, in order to determine the at least one ear characteristic.
38. The communication device according to any of claims 29 to 36, and further comprising a user interface device arranged to receive an input from a user indicating a preferred intensity of at least one frequency of the output audio in the audio output mode; wherein the audio processing module is arranged to adjust the output audio stream in the audio output mode based on the at least one ear characteristic determined in the hearing-test mode and the preferred frequency response indicated by the user.
39. The communication device according to any preceding claim, further comprising: an input device for receiving an environmental parameter other than audio;
a level module arranged to determine an environmental parameter level based on the received environmental parameter other than audio; and
wherein the positioning module is arranged to determine a position of the input device corresponding with the determined environmental parameter level; and arranged to associate the position of the input device with the corresponding environmental parameter level.
40. The communication device according to claim 39, and further comprising a storage module arranged to store a plurality of the positions determined by the positioning module, wherein each position is stored in association with its corresponding determined environmental parameter level determined by the level module.
41 . The communication device according to claim 39 or claim 40, and further comprising a mapping module arranged to generate map data based on the plurality of the positions and associated determined environmental parameter levels.
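The mapping module of claims 41 and 51 is not limited to any particular aggregation. One minimal sketch, assuming a flat 2-D coordinate space and a uniform square grid (both assumptions), bins the stored (position, level) pairs into cells and energy-averages the dB levels within each cell:

```python
import math
from collections import defaultdict

def build_noise_map(samples, cell_size=5.0):
    # samples: iterable of (x, y, level_db). Positions are grouped into
    # square grid cells; the dB levels in each cell are energy-averaged
    # (converted to linear power, averaged, converted back to dB).
    cells = defaultdict(list)
    for x, y, level_db in samples:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append(10.0 ** (level_db / 10.0))
    return {key: 10.0 * math.log10(sum(p) / len(p)) for key, p in cells.items()}
```

Energy averaging (rather than averaging the dB values directly) keeps the map consistent with how sound levels combine physically.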
42. The communication device according to any one of claims 39 to 41, wherein the environmental parameter other than audio is dust concentration or temperature.
43. The communication device according to any preceding claim, wherein the communication device comprises a head mounted or ear mounted device communicatively coupled to a computing device.
44. The communication device according to claim 43, when dependent on claim 6, wherein the computing device comprises the storage module.
45. The communication device according to claim 44, when dependent on claim 7, wherein the computing device comprises the mapping module.
46. The communication device according to claim 45, when dependent on any of claims 8 to 11, wherein the computing device comprises the display.
47. A combined noise dosimeter and communication device comprising:
an audio input device for receiving audio; and
a noise level module arranged to determine a noise level based on the received audio;
wherein the audio input device is associated with a positioning module arranged to: determine a position of the audio input device corresponding with the determined noise level; and arranged to associate the position of the audio input device with the corresponding noise level;
an audio input device for receiving audio from a user;
an audio output device for outputting audio to the user;
a communication interface for transmitting and receiving over a network; and a head-mount or ear-mount comprising ear defenders for reducing noise level exposure of a user.
48. A communication system comprising a plurality of communication devices according to any preceding claim connected to one another via a peer-to-peer network.
49. The communication system according to claim 48, and further comprising a server connected to the peer-to-peer network.
50. The communication system according to claim 49, wherein the server comprises a storage module arranged to store a plurality of the positions determined by the communication devices, wherein each position is stored in association with its corresponding determined noise level.
51. The communication system according to claim 49 or claim 50, wherein the server comprises a mapping module arranged to generate map data based on a plurality of the positions and associated noise levels determined by the communication devices.
52. The communication system according to any of claims 48 to 51, and further comprising one or more fixed nodes with known fixed positions connected to the peer-to-peer network.
53. The communication system according to claim 52, wherein the or each fixed node comprises a storage module arranged to store noise level data associated with the corresponding fixed positions for use in generating map data.
54. The communication system according to claim 52 or claim 53, wherein the or each fixed node provides a fixed reference point for the positioning modules of the communication devices.
55. The communication system according to any one of claims 47 to 54, wherein the plurality of communication devices are connected together via a plurality of peer-to-peer networks, wherein the plurality of peer-to-peer networks are connected together via a network infrastructure.
56. A method of monitoring noise exposure using a communication device according to any one of claims 1 to 45, the method comprising:
receiving audio at the audio input device for receiving environmental audio;
determining a noise level based on the received audio;
determining a position of the audio input device corresponding with the determined noise level; and associating the position of the audio input device with the corresponding noise level.
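Claim 56 does not say how "a noise level based on the received audio" is computed. A minimal sketch, assuming calibrated sample values and omitting frequency weighting (a deployed dosimeter would normally apply A-weighting per the relevant sound level meter standards), is:

```python
import math

def spl_db(samples, pa_per_unit=1.0, p_ref=20e-6):
    # RMS sound pressure level of a sample block, in dB SPL re 20 uPa.
    # pa_per_unit is a device-specific calibration constant converting raw
    # sample units to pascals (an assumption, not taken from the claims).
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms * pa_per_unit, 1e-12) / p_ref)
```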
57. The method according to claim 56, and further comprising:
storing a plurality of the positions each in association with a corresponding noise level; and
generating map data based on the plurality of the positions and their corresponding noise levels.
58. The method according to claim 56 or claim 57, further comprising calculating a calculated noise level at the communication device based on time and noise levels determined by the noise level module.
59. The method according to claim 58, further comprising outputting a notification if the calculated noise level reaches a noise threshold.
60. The method according to claim 58 or claim 59, further comprising outputting a notification if the calculated noise level reaches a noise threshold.
61. The method according to claim 60, further comprising outputting a notification if the calculated noise level reaches a pre-determined level below the noise threshold.
62. The method according to claim 60 or claim 61, wherein the noise threshold is user-defined.
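Claims 58 to 62 recite a noise level calculated "based on time and noise levels" plus threshold and pre-warning notifications. One non-limiting reading, assuming energy averaging with a 3 dB exchange rate normalised to an 8-hour day (an assumption; the claims name no particular exposure metric), is:

```python
import math

def daily_exposure_lex8h(levels_db, interval_s):
    # Equivalent continuous level normalised to an 8-hour working day,
    # using energy averaging (3 dB exchange rate). Each entry in levels_db
    # is the level measured over one interval of interval_s seconds.
    energy = sum(10.0 ** (l / 10.0) for l in levels_db) * interval_s
    return 10.0 * math.log10(energy / (8 * 3600.0))

def exposure_notification(lex8h_db, threshold_db=85.0, warning_margin_db=5.0):
    # threshold_db may be user-defined (claim 62); the pre-warning fires at
    # a pre-determined level below the threshold (claim 61). The default
    # values are illustrative assumptions.
    if lex8h_db >= threshold_db:
        return "threshold reached"
    if lex8h_db >= threshold_db - warning_margin_db:
        return "approaching threshold"
    return None
```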
63. The method according to any of claims 56 to 62, the method further comprising: receiving environmental audio at the plurality of audio input devices;
determining a plurality of noise levels based on the received environmental audio; determining a position of the audio input device corresponding with each determined noise level;
associating the position of the audio input devices with the corresponding noise levels;
receiving, at the server, a plurality of the positions of the audio input devices, each in association with a corresponding noise level; and
generating map data based on the plurality of the positions and their corresponding noise levels.
64. The method according to claim 57 or claim 63, further comprising presenting the map data in combination with a map of an environment in which the environmental audio was received.
65. The method according to claim 64, further comprising presenting at least some of the positions each in association with their corresponding noise level.
66. The method according to claim 64 or claim 65, further comprising presenting at least one high noise level area indicative of a position associated with a noise level above a high noise level threshold.
67. The method according to claim 66, further comprising displaying on the map at least one boundary defining the perimeter of a high noise level area.
68. The method according to any of claims 64 to 67, further comprising determining a navigation path from one place to another, the navigation path being associated with a reduced level of noise exposure.
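The navigation path of claim 68, "associated with a reduced level of noise exposure", can be found with any shortest-path algorithm once edges are weighted by exposure cost rather than distance. A sketch using Dijkstra's algorithm over a hypothetical graph (node names and edge costs are illustrative):

```python
import heapq

def quietest_path(graph, start, goal):
    # Dijkstra over a graph whose edge weights are noise-exposure costs
    # (e.g. segment length times linear noise power from the map data),
    # returning the path with the lowest total exposure.
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```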
69. The method of any of claims 56 to 68, the method further comprising:
receiving audio from a user via the audio input device of the communication device; initiating establishing a connection between the communication device and another communication device via the peer-to-peer network, based on an audio voice command received from the user via the audio input device;
transmitting, via the peer-to-peer network to the another communication device, audio data based on audio received from the user via the audio input device;
receiving at the communication device audio data from the another communication device via the peer-to-peer network; and
outputting audio based on the received audio data via an audio output device at the communication device.
70. The method according to claim 69, and further comprising:
detecting whether one of a plurality of pre-defined user-interactions with a user interface switch has occurred; wherein each one of the pre-defined user-interactions is associated with a different mode; and
activating the mode associated with the detected user-interaction.
71. The method according to claim 70, and further comprising, when the communication device is in a connection-disabled mode: not permitting the communication device to transmit or receive audio data to or from another communication device; and/or
not permitting establishing a connection with another communication device via the peer-to-peer network; and/or
deactivating voice activation.
72. The method according to claim 70 or claim 71, and further comprising, when the communication device is in a connection-enabled mode:
permitting transmitting or receiving audio data to or from another communication device; and/or
activating voice recognition only in response to a user interaction with the user interface switch.
73. The method according to claim 72, and further comprising, when the communication device is in a connection-enabled mode, the voice recognition module establishing a connection with another communication device in response to at least one voice command; wherein the at least one voice command comprises inputting audio indicative of a label associated with the another communication device.
74. The method according to claim 73, wherein the label comprises a name of a user associated with the second communication device.
75. The method according to any of claims 72 to 74, and further comprising operating the communication device in a voice-control mode in which the communication device is permitted to transmit or receive audio data to or from another communication device;
wherein the voice recognition module is activated when the communication device is in the voice-control mode.
76. The method according to any of claims 69 to 75, wherein:
the establishing a connection comprises establishing a connection between the communication device and the other communication device via two peer-to-peer networks interconnected via a network infrastructure;
the transmitting comprises transmitting, via the two peer-to-peer networks and the network infrastructure to the another communication device, audio data based on audio received from the user via the audio input device; and
the receiving comprises receiving at the communication device audio data from the another communication device via the two peer-to-peer networks and the network infrastructure.
77. The method according to any of claims 56 to 76, and further comprising operating the communication device in a beacon mode in which the communication device alternates between an active state and a dormant state, where a greater amount of the functionality of the communication device is activated in the active state than in the dormant state.
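The beacon mode of claim 77 alternates between active and dormant states but fixes no timing. A sketch of one possible duty-cycle schedule (the durations are illustrative assumptions, not claim features):

```python
def beacon_schedule(active_s=2.0, dormant_s=18.0, horizon_s=60.0):
    # Build alternating active/dormant windows covering horizon_s seconds,
    # returned as (start, end, state) tuples. More of the device's
    # functionality would be powered during "active" windows.
    windows, t, active = [], 0.0, True
    while t < horizon_s:
        dur = active_s if active else dormant_s
        end = min(t + dur, horizon_s)
        windows.append((t, end, "active" if active else "dormant"))
        t, active = end, not active
    return windows
```

With the defaults above the device is active 10% of the time, which is the kind of trade-off a beacon mode makes between reachability and battery life.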
78. The method according to any of claims 56 to 77, and further comprising operating the communication device in an override mode in which the communication device is able to transmit audio data for output of audio based on the audio data at another communication device irrespective of any mode activated at the another communication device.
79. The method according to any one of claims 56 to 78, and further comprising:
operating the communication device in a hearing-test mode in which the audio processing module is arranged to determine at least one ear characteristic of the ear of the user based on at least one hearing test; and
operating the communication device in an audio output mode in which the communication device is arranged to output audio based on the received audio data via the audio output device;
wherein the audio processing module adjusts the output audio in the audio output mode based on the at least one ear characteristic determined in the hearing-test mode.
80. The method according to claim 79, wherein the at least one ear characteristic represents the sensitivity of the ear to at least one frequency; and
wherein the audio processing module adjusts the output audio based on the sensitivity of the ear to at least one frequency, so that a frequency to which the ear is less sensitive is amplified and/or a frequency to which the ear is more sensitive is attenuated.
81. The method according to claim 79 or claim 80, and further comprising the audio processing module:
outputting at least one pre-defined audio test signal via the audio output device, in the hearing-test mode; and
determining at least one characteristic of the ear of the user based on a response to the at least one pre-defined audio test signal.
82. The method according to any of claims 79 to 81, and further comprising detecting sound emitted by an ear of the user; and
determining at least one ear characteristic of the ear of the user based on the detected sound emitted by the ear of the user.
83. The method according to claim 82, wherein the detected sounds emitted by an ear of the user are Spontaneous Otoacoustic Emissions ('SOAEs').
84. The method according to claim 83, further comprising monitoring at least one hearing characteristic of the ear of the user for identifying the onset of hearing loss based on the detected SOAEs.
85. The method according to claim 81 , and further comprising detecting sound emitted by an ear of the user in response to the at least one pre-defined audio test signal; and
determining at least one ear characteristic of the ear of the user based on the detected sound emitted by the ear of the user.
86. The method according to claim 83 or claim 84, further comprising:
determining at least one ear characteristic of the ear based on a detected Evoked Otoacoustic Emission ('EOAE') response emitted by the user in response to at least one pre-defined audio test signal;
wherein the at least one pre-defined audio test signal is selected based on the detected SOAEs.
87. The method according to any of claims 81 to 86, and further comprising:
measuring an aided sound pressure level in the ear of the user when the audio output device is outputting audio and an unaided sound pressure level in the ear of the user without the audio output device outputting audio;
determining the difference between the aided sound pressure level and the unaided sound pressure level; and
using the determined difference to determine the at least one ear characteristic.
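Claim 87's aided/unaided measurement reduces, per frequency band, to a difference of two SPL readings, comparable to a real-ear insertion-gain measurement. A minimal sketch, with hypothetical band frequencies and levels:

```python
def ear_characteristic_from_spl(aided_db, unaided_db):
    # Per-band difference between the aided and unaided in-ear sound
    # pressure levels; positive values indicate gain delivered at that
    # band. Band keys and values here are illustrative assumptions.
    return {band: round(aided_db[band] - unaided_db[band], 1)
            for band in aided_db if band in unaided_db}
```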
88. The method according to any one of claims 81 to 87, and further comprising:
receiving an input from a user indicating a preferred intensity of at least one frequency of the output audio in the audio output mode; and
adjusting the output audio stream in the audio output mode based on the at least one ear characteristic determined in the hearing-test mode and the preferred frequency response indicated by the user.
89. The method according to any one of claims 56 to 88, and further comprising:
receiving an environmental parameter other than audio at an input device; determining an environmental parameter level based on the received environmental parameter other than audio;
determining a position of the input device corresponding with the determined environmental parameter level; and
associating the position of the input device with the corresponding environmental parameter level.
90. The method according to claim 89, further comprising storing a plurality of the positions determined by the positioning module, each in association with a corresponding determined environmental parameter level.
91. The method according to claim 89 or claim 90, further comprising generating map data based on the plurality of the positions and associated determined environmental parameter levels.
92. The method according to any one of claims 89 to 91, wherein the environmental parameter other than audio is dust concentration or temperature.
93. A computer program comprising code portions which, when executed on a processor of a computer, cause the computer to carry out a method according to any of claims 56 to 92.
PCT/GB2017/053407 2016-11-11 2017-11-10 Improved communication device WO2018087570A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
GB1619163.7 2016-11-11
GB1619160.3 2016-11-11
GB1619160.3A GB2555842A (en) 2016-11-11 2016-11-11 Auditory device assembly
GB1619163.7A GB2555843A (en) 2016-11-11 2016-11-11 Noise dosimeter
GB1619162.9A GB2556045A (en) 2016-11-11 2016-11-11 Communication device
GB1619162.9 2016-11-11

Publications (1)

Publication Number Publication Date
WO2018087570A1 true WO2018087570A1 (en) 2018-05-17

Family

ID=60413222

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2017/053407 WO2018087570A1 (en) 2016-11-11 2017-11-10 Improved communication device

Country Status (1)

Country Link
WO (1) WO2018087570A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333890A (en) * 2022-03-09 2022-04-12 深圳微迅信息科技有限公司 Signal processing method and device, electronic equipment and storage medium
WO2022187587A1 (en) * 2021-03-05 2022-09-09 Soundtrace LLC Smart sound level meter for providing real-time sound level tracing
EP3726856B1 (en) 2019-04-17 2022-11-16 Oticon A/s A hearing device comprising a keyword detector and an own voice detector
GB2611529A (en) * 2021-10-05 2023-04-12 Mumbli Ltd A hearing wellness monitoring system and method
US11736873B2 (en) 2020-12-21 2023-08-22 Sonova Ag Wireless personal communication via a hearing device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080159547A1 (en) * 2006-12-29 2008-07-03 Motorola, Inc. Method for autonomously monitoring and reporting sound pressure level (SPL) exposure for a user of a communication device
US20150223000A1 (en) * 2014-02-04 2015-08-06 Plantronics, Inc. Personal Noise Meter in a Wearable Audio Device




Legal Events

Date Code Title Description
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17801491; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17801491; Country of ref document: EP; Kind code of ref document: A1)