WO2023057752A1 - A hearing wellness monitoring system and method - Google Patents


Info

Publication number
WO2023057752A1
Authority
WO
WIPO (PCT)
Prior art keywords
hearing
wellness
sound data
environment
data
Prior art date
Application number
PCT/GB2022/052517
Other languages
French (fr)
Inventor
Nicolae PAMPU
Marion MARINCAT
Original Assignee
Mumbli Ltd
Priority date
Filing date
Publication date
Application filed by Mumbli Ltd filed Critical Mumbli Ltd
Publication of WO2023057752A1 publication Critical patent/WO2023057752A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00 Measuring characteristics of vibrations by using a detector in a fluid
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F11/00 Methods or devices for treatment of the ears or hearing sense; Non-electric hearing aids; Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than hearing sense; Protective devices for the ears, carried on the body or in the hand
    • A61F11/06 Protective devices for the ears
    • A61F11/14 Protective devices for the ears external, e.g. earcaps or earmuffs
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00 Measuring characteristics of vibrations by using a detector in a fluid
    • G01H3/04 Frequency
    • G01H3/08 Analysing frequencies present in complex vibrations, e.g. comparing harmonics present
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H7/00 Measuring reverberation time; room acoustic measurements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R27/00 Public address systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00 Monitoring arrangements; Testing arrangements
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/60 Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6008 Substation equipment, e.g. for use by subscribers including speech amplifiers in the transmitter circuit
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/07 Use of position data from wide-area or local-area positioning systems in hearing devices, e.g. program or information selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/11 Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/15 Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops

Definitions

  • A hearing wellness monitoring system and method
  • the present disclosure relates to a hearing wellness monitoring system and method.
  • the sound profile of venues is one of the factors that influence people’s choices of social venue. Venues with a good sound profile can enhance or improve social interaction. Conversely, venues with a bad sound profile inhibit meaningful social interaction and can damage the hearing health of patrons.
  • it is difficult for people to easily identify and locate social venues that fit their venue sound profile preference, particularly for those who experience difficulties with their hearing or are sensitive to background noise.
  • At present there is a lack of an objective, holistic way for venue operators to establish the right sound profile for their venue. There are also limited ways to effectively or easily describe the sound profile of a venue to a layperson.
  • US2017372242A1 describes a system in which a plurality of stationary noise sensors may each include a microphone to sense noise, a power source, and a communication device to transmit data about noise sensed by the microphone.
  • a plurality of mobile noise sensors may each include a microphone to sense noise, a power source, and a communication device to transmit data about noise sensed by the microphone.
  • a noise information hub may receive data from the stationary noise sensors and mobile noise sensors and provide indications associated with the received data via a cloud-based application.
  • An analytics platform may receive the indications and analyse them to determine noise level exposure information for each of a plurality of locations within a workplace.
  • US10068451B1 describes a method in which a first value is received and is associated with a noise level of an environment that a user is in. It is determined whether the first value exceeds a first threshold. A second computing device is notified when the first value exceeds the first threshold. The notifying indicates that the user must leave the environment.
  • US2018005142A1 describes a method and device system for determining, publishing, and taking reservation requests against a venue offering a restricted or resource-limited service, e.g. a restaurant, bar, or cafe whose occupancy is limited.
  • the method can function unattended and provide the measure of availability from ambient sound levels, using digital acoustic sensors (microphones) and a machine learning process to predict and publish the availability of the resource or occupancy level of the space in which it is contained.
  • US10390157B2 relates generally to devices, systems, and methods for assessing a hearing protection device.
  • a method for assessing a hearing protection device may comprise monitoring sounds with a hearing protection device; transmitting sound measurements from the hearing protection device to a portable electronic device; transmitting the sound measurements from the portable electronic device to a server; comparing, with the portable electronic device or the server, sound measurements to a look-up table; suggesting, with the portable electronic device, the most suitable hearing protection device to be worn based on a comparison between the sound measurements and the look-up table.
  • US2019139565A1 describes a personalized computing experience for a user based on a user-visit-characterized venue profile.
  • user visits to a venue are determined.
  • user characteristics and/or visit characteristics are determined.
  • User similarities and visit features similarities may be determined and associated with the venue to form the user-visit-characterized venue profile.
  • the user-visit-characterized venue profile may be provided to an application or service such as a personal assistant service associated with the user, or may be provided as an API to facilitate consumption of the user-visit-characterized venue profile by an application or service.
  • US10587970B2 describes an acoustic camera which comprises an array of arranged microphones.
  • the microphone arrangement can be organized in planar or non-planar form.
  • the device can be handheld, and it comprises a touch screen for interacting with the user.
  • the acoustic camera measures acoustic signal intensities in the pointed direction and simultaneously takes an optical image of the measured area and shows the acoustic signal strengths with the taken image on the screen.
  • the device analyses the acoustic signals and makes a classification for the sound based on the analysis. If necessary, an alarm is given by the acoustic camera.
  • the device can be fixed on an immobile structure or fixed to a movable or rotatable device or a vehicle, such as to a drone.
  • US7983426B2 describes a method for monitoring and reporting sound pressure level exposure for a user of a first communication device, implemented in one embodiment when the device measures a sound pressure level (SPL) of the surrounding environment.
  • the device stores at least the SPL measurement in a memory, producing an SPL exposure record, and displays a visual representation of the SPL exposure record on a display screen.
  • the SPL is measured by a second communication device and combined with a known SPL for an output audio transducer of the second device, producing a user sound exposure level.
  • the user sound exposure level is transmitted to the first communication device.
  • the user is notified when the user sound exposure level exceeds a predetermined threshold.
  • a server may also be used to track SPLs over time and recommend corrective action when exposure limits are exceeded.
  • a method includes receiving audio sensor data from a plurality of microphones disposed at known locations throughout an open space.
  • the method includes generating a three-dimensional sound map data from the audio sensor data.
  • the method further includes generating an augmented reality visualization of the three-dimensional sound map data, which includes capturing with a video camera at a mobile device a video image of the open space, displaying the video image on a display screen of the mobile device, and overlaying a visualization of the three-dimensional sound map data on the video image on the display screen.
  • US9510118B2 describes how environmental sound is captured and mapped.
  • Application of a large number of mobile communication terminals embodied as a data supplier leads to extensive, automatic and continual mapping of environmental sound.
  • a relatively large quantity of remotely transmitted data records for the environmental sound is obtained informally, which permits a more precise depiction of the environmental noise.
  • Commercially available mobile communication terminals may be augmented using simple measures in order to communicate with a central or local mapping system or mapping service directly or indirectly.

Summary of the invention
  • a hearing wellness monitoring system comprising at least one audio capture device configured to provide sound data about an environment as a function of time.
  • the at least one audio capture device comprises at least one microphone and a communication interface, configured to transmit the sound data from the audio capture device.
  • the hearing wellness monitoring system also comprises a remote server, configured to receive the sound data, and wherein the remote server is configured to analyse the sound data. Analysing the sound data comprises dynamically calculating a plurality of sound data parameters as a function of time for the environment from the sound data, and dynamically calculating at least one hearing wellness parameter as a function of time from the plurality of sound data parameters for assessing the hearing wellness characteristics of an environment.
  • the at least one microphone may comprise at least four microphones. Advantageously this may allow directionality of audio/sound information to be obtained.
  • the at least one hearing wellness parameter may comprise a plurality of hearing wellness parameters, and wherein analysing the sound data further comprises dynamically assigning the environment a hearing wellness category for characterising the environment according to its hearing wellness atmosphere.
  • the hearing wellness category is assigned based on at least one hearing wellness parameter.
  • Hearing wellness categories may be determined as part of a series of interviews with people who described different social or public environments.
  • the hearing wellness categories may act to translate dB levels into intuitive words to describe when a venue is too loud.
  • assigning the hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter.
  • Each hearing wellness category may be defined by a plurality of ranges for a combination of hearing wellness parameters. At least a portion of the hearing wellness parameter ranges defining each of the plurality of hearing wellness categories may overlap. Since hearing wellness parameter ranges for different hearing wellness categories may overlap, it will be understood that the plurality of hearing wellness categories are not required to have mutually exclusive hearing wellness parameter ranges.
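The overlapping-range category selection described above can be sketched in Python. The category names, parameter names, and dB ranges below are illustrative assumptions only, not values disclosed in this application:

```python
# Hypothetical sketch of hearing wellness category assignment from
# overlapping parameter ranges. Names and ranges are illustrative only.
CATEGORIES = {
    "quiet":  {"average_dBA": (0, 55),   "snr_dB": (15, 60)},
    "lively": {"average_dBA": (50, 75),  "snr_dB": (5, 20)},
    "loud":   {"average_dBA": (70, 120), "snr_dB": (0, 10)},
}

def assign_category(params):
    """Return every category whose ranges contain the measured
    parameters; ranges may overlap, so several categories can match."""
    return [name for name, ranges in CATEGORIES.items()
            if all(lo <= params[key] <= hi
                   for key, (lo, hi) in ranges.items())]
```

Under these assumed ranges, a measurement of 72 dBA with 8 dB SNR matches both "lively" and "loud", reflecting that mutually exclusive ranges are not required.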
  • the hearing wellness monitoring system may further be configured to determine a hearing wellness rating.
  • the hearing wellness rating may be determined by ascribing a numerical value to each of the plurality of hearing wellness parameters, applying a weighting to each of the plurality of hearing wellness parameters, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
  • the hearing wellness rating may correspond to respective ranges of the numerical value of the sum of weighted numerical values.
  • the hearing wellness monitoring system may further be configured to determine the hearing wellness rating by ascribing each of the plurality of sound data parameters and/or hearing wellness parameters to a sound data principle, such that each sound data principle comprises a respective subset of the sound data parameters, applying a first weighting to each of the plurality of hearing wellness parameters, applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
  • the first weighting may comprise a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting may be a percentage of a total percentage ascribable to all sound data principles.
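The two-level weighting described above, a first weighting per hearing wellness parameter within its sound data principle and a second weighting per principle, might be sketched as follows. The principle names, parameter groupings, scores, and weights are invented for illustration:

```python
# Hypothetical two-level weighted rating. Principle names, parameter
# groupings and weights are assumptions for illustration only.
PRINCIPLES = {
    # principle: (second weighting, {parameter: first weighting})
    "comfort":  (0.6, {"average_dBA_score": 0.7, "snr_score": 0.3}),
    "acoustic": (0.4, {"reverb_score": 1.0}),
}

def wellness_rating(scores):
    """Sum each parameter score scaled by its parameter weight and by
    the weight of the sound data principle it belongs to."""
    return sum(principle_w * param_w * scores[name]
               for principle_w, params in PRINCIPLES.values()
               for name, param_w in params.items())
```

The hearing wellness rating would then be mapped to a category or band according to which numerical range the weighted sum falls into.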
  • the sound data may be obtained as a function of time according to pre-determined time windows.
  • a preferred time window is 12 seconds.
  • the sound capture may happen continuously during working hours. In some examples there may be an automated ‘start-up system’ to activate the device when the sound level reaches or exceeds a 40 dB level.
  • the rolling window is every 12 seconds. Every 12 seconds the at least one audio capture device captures noise characteristics as sound data in the format of 257 frequency ranges and one set of A-weighted sound pressure levels (SPLs) (aweightSPL).
  • Analysing the sound data may comprise calculating each sound data parameter as a function of time from (i) sound data from a single time window, or (ii) sound data from a plurality of time windows.
  • the at least one audio capture device comprises an array of audio capture devices.
  • the audio capture device may comprise a hub and satellite system, whereby one audio capture device has additional functionality and acts as a hub to receive sound data from satellite audio capture devices connected (e.g. wirelessly) to it and then process and transmit this to the remote device/server.
  • the system may further comprise at least one of (i) a central local processor configured for local processing of the sound data from the array of audio capture devices, or (ii) wherein the at least one audio capture device comprises a processor configured for local processing of the sound data from the at least one microphone of each respective audio capture device. Local processing may comprise removing data outside of a selected time window, such as the venue’s working hours.
  • it may include, for each sample, normalizing the 257 frequency magnitudes, computing the dBA share for each frequency, removing frequencies below 200 Hz and above 20 kHz, and/or summing up the dBA shares for the 200 Hz–20 kHz range to get new values of the dBAs.
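The per-sample preprocessing steps listed above might look like the following sketch. It assumes the 257 magnitudes come from a 512-point FFT at 44.1 kHz, so the bins span 0–22,050 Hz; that bin layout is an assumption, not stated in the text:

```python
import numpy as np

def band_limited_dba(magnitudes, frame_dba, sample_rate=44100.0):
    """Normalise the frequency magnitudes, apportion the frame's dBA
    across bins, drop bins outside 200 Hz-20 kHz, and sum the rest."""
    freqs = np.linspace(0.0, sample_rate / 2.0, len(magnitudes))
    shares = magnitudes / magnitudes.sum()        # normalised magnitudes
    dba_shares = shares * frame_dba               # dBA share per frequency
    in_band = (freqs >= 200.0) & (freqs <= 20000.0)
    return float(dba_shares[in_band].sum())       # band-limited dBA
```

With uniform magnitudes the band-limited value is simply the frame dBA scaled by the fraction of bins kept, which makes the apportioning step easy to verify.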
  • Each audio capture device is preferably compliant with the sound level measurement system as determined by the WHO Global Standard for Safe Listening (see Feature 2, WHO Global Standard for Safe Listening 2022).
  • each audio capture device preferably comprises a class 1 or class 2 sound level meter, as defined in international standard IEC 61672-1:2013.
  • the audio capture device may be configured to be located at (or as close to as is practicable) the reference measurement position and/or long-term measurement position of an environment as determined by the World Health Organisation (WHO) Global Standard for Safe Listening (see Annex 5, WHO Global Standard for Safe Listening 2022).
  • WHO World Health Organization
  • the audio capture device(s) should be configured to be positioned at least one of: at a height comparable to audience head height; out of reach of audience members; at least 1 metre from any large reflective surface (such as a wall, ceiling, or large piece of furniture or equipment); and/or with a clear line of sight to any main loudspeakers where appropriate.
  • the processor may be configured to receive sound data from the array of audio capture devices, calculate averaged sound data as a function of time from the sound data from the plurality of audio capture devices, and transmit the averaged sound data as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data.
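A minimal sketch of the averaging step above, assuming the SPL series from the capture devices are already time-aligned and of equal length (alignment is not shown):

```python
import numpy as np

def average_sound_data(device_series):
    """Element-wise mean over the array of capture devices; each entry
    is one device's SPL series sampled on the same time grid."""
    return np.mean(np.stack(device_series), axis=0)
```

The averaged series, rather than each device's raw series, is what would be transmitted to the remote server for analysis.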
  • the remote server may be implemented in the cloud and may be used to capture and store the data; the raw data captured on the device is processed, then structured in a database management solution (such as MongoDB or another database storage and management system), and then processed further by a Python script.
  • the sound data may include at least one of (i) audio frequency, and (ii) sound pressure level (SPL).
  • the at least one sound data parameter may comprise at least one of: Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level, A-weighted equivalent continuous sound level measured over a specified period of time (LAeq,T), C-weighted peak instantaneous sound pressure level (LCpeak), A-weighted fast maximum sound level (LA,Fmax), A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)-weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum.
  • SPL (expressed in dB) measurement is done using an audio stream of 16-bit data on each of the 4 channels (each channel represents a microphone in the array).
  • the 16-bit value can be changed to a 32-bit one in the final product for a better representation of a low SPL level.
  • the raw data is stored for preprocessing in chunks of 1024 samples (frames) with a sample rate of 44100 Hz, for a duration of 15 seconds, 4 times per minute.
  • the time parameters will be configurable and will allow a user to set the preferred values as tests are carried out at a later stage.
  • the nominal SPL value is calculated on each individual sample using the following formula:
  • the values are concatenated to a new data array and stored under the name nominal.value in the data container.
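The nominal SPL formula itself is not reproduced in the text above. As a hedged sketch only, a conventional RMS level computation for a 16-bit frame is shown below; the actual formula used may differ, and the device-specific calibration offset mapping dB full scale to absolute dB SPL is omitted:

```python
import numpy as np

def frame_spl_dbfs(frame):
    """Level of one 1024-sample frame of 16-bit audio in dB relative to
    full scale; a calibration offset (not shown) would convert this to
    an absolute SPL."""
    normalised = np.asarray(frame, dtype=np.float64) / 32768.0
    rms = np.sqrt(np.mean(normalised ** 2))
    return 20.0 * np.log10(max(rms, 1e-12))   # floor avoids log10(0)
```

Each frame's value would then be concatenated into the per-interval data array described above.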
  • A-weighted SPL (expressed in dBA) for each microphone may be obtained using a digital A-weighted filter (according to IEC/CD 1672) applied to each individual nominal data frame.
  • the values may be concatenated to a new data array and stored under the aweighted.value name in the data container. C-weighted SPL (expressed in dBC) will be calculated using a digital C-weighted filter in a similar manner as the A-weighted SPL and will be stored in cweighted.value.
  • A-weighted population minimum (expressed in dBA) is a statistical measure applied on the A-weighted SPL measurements; it refers to the minimum value in the 15-second interval and is stored in the data container under the name aweighted.population_minimum.
  • C-weighted population minimum (expressed in dBC) will be calculated in a similar manner as the A-weighted population minimum and will be stored in the cweighted.population_minimum data container.
  • A-weighted population maximum (expressed in dBA) is a statistical measure applied on the A-weighted SPL measurements; it refers to the maximum value in the 15-second interval and is stored in the data container under the name aweighted.population_maximum.
  • C-weighted population maximum (expressed in dBC) will be calculated in a similar manner as the A-weighted population maximum and will be stored in the cweighted.population_maximum data container.
  • A-weighted population standard deviation (expressed in dBA) is a measure of variation or dispersion of the a-weighted set of values. This is stored in the data container as aweighted.population_standard_deviation.
  • C-weighted population standard deviation (expressed in dBC) will be calculated in a similar manner as the A-weighted population standard deviation and will be stored in the cweighted.population_standard_deviation data container.
  • A-weighted population variance is the average of the squared differences from the mean value. This is stored in the data container as aweighted.population_variance.
  • C-weighted population variance will be calculated in a similar manner as the A-weighted population variance and will be stored in the cweighted.population_variance data container.
  • A-weighted population median (expressed in dBA) is a statistical measure which represents the value in the middle of the population. This value is stored in the data container as aweighted.population_median.
  • C-weighted population median (expressed in dBC) will be calculated in a similar manner as the A-weighted population median and will be stored in the cweighted.population_median data container.
  • A-weighted population average (expressed in dBA) is a statistical measure which represents the arithmetic mean of a population, calculated by dividing the sum of the population values by the population size. This value is stored in the data container as aweighted.population_average.
  • C-weighted population average (expressed in dBC) will be calculated in a similar manner as the A-weighted population average and will be stored in the cweighted.population_average data container.
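The population statistics above map directly onto standard NumPy reductions. The sketch below uses the data container key names from the text; note the standard deviation and variance are population measures (ddof=0), matching the "population" naming:

```python
import numpy as np

def population_stats(aweighted):
    """Statistical measures over one interval of A-weighted SPL values,
    keyed by the data container names used in the text."""
    return {
        "aweighted.population_minimum": float(aweighted.min()),
        "aweighted.population_maximum": float(aweighted.max()),
        "aweighted.population_standard_deviation": float(aweighted.std()),
        "aweighted.population_variance": float(aweighted.var()),
        "aweighted.population_median": float(np.median(aweighted)),
        "aweighted.population_average": float(aweighted.mean()),
    }
```

The C-weighted equivalents would be produced by the same function applied to the C-weighted values, stored under the corresponding cweighted.* keys.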
  • Nominal Interval frequency spectrum represents the range of frequencies contained by the analyzed dataset.
  • the frequency range of this spectrum will be 20–20,000 Hz, in 1024 distinct portions of the spectrum range.
  • the discrete fast Fourier transform will be used with no additional windowing for the computation; the addition of a windowing function can be arranged at a later stage but is not within the scope of this work.
  • the spectrum range array will be stored in the data container as nominal.freq_spectrum.
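As a sketch of the interval frequency spectrum, the discrete Fourier transform with no additional windowing can be computed as below. The 2048-point transform length and the exact binning of the 20–20,000 Hz range are assumptions made for this illustration:

```python
import numpy as np

def interval_freq_spectrum(samples, sample_rate=44100.0):
    """Magnitude spectrum with no additional windowing, band-limited to
    the audible 20-20,000 Hz range."""
    spectrum = np.abs(np.fft.rfft(samples, n=2048))
    freqs = np.fft.rfftfreq(2048, d=1.0 / sample_rate)
    in_band = (freqs >= 20.0) & (freqs <= 20000.0)
    return freqs[in_band], spectrum[in_band]
```

Applied to A-weighted or C-weighted values instead of the nominal samples, the same routine would produce the aweighted and cweighted spectra described below.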
  • A-weighted Interval frequency spectrum is similar to the nominal interval frequency spectrum, with the remark that it is applied on the already A-weighted values.
  • the spectrum range array will be stored in the data container as aweighted.freq_spectrum. C-weighted Interval frequency spectrum will be calculated in a similar manner as the A-weighted Interval frequency spectrum and will be stored in the cweighted.freq_spectrum data container.
  • the remote server may be further configured to obtain hearing preference data about a user, and dynamically assess the suitability of the environment for the user, based on the hearing preference data and the current hearing wellness category of the environment.
  • the at least one hearing wellness parameter may comprise at least one of: exposure to hazardous noise, unpleasant frequencies, maximum frequency magnitude, average dBA of the venue, acoustic quality, reverberation time, vibe of the venue, and signal to noise ratio (SNR).
  • Each hearing wellness parameter may correspond to a hearing wellness component.
  • the hearing wellness component may comprise at least one of: adverse noise, quality of speech, sensation of sounds, space ambient noise, space acoustic, space reflection to sounds, space atmosphere.
  • the hearing wellness parameter may be based on at least one sound data parameter and at least one static venue parameter.
  • the static venue parameter may comprise at least one of: reverberation time, venue building materials and furniture, acoustic quality, venue capacity, and unstructured data (such as photos of the venue) from which other data may be inferred or determined.
  • the at least one audio capture device may further comprise a memory configured to store the sound data, for example to buffer the sound data before it is sent to the remote device/server.
  • the memory may be configured to store the sound data for a predetermined time period.
  • the audio capture device communication interface may comprise a wireless telecommunication interface, for example 3G, 4G or 5G.
  • the remote server may further be configured to generate an alert in the event that at least one of (i) a sound data parameter or (ii) a hearing wellness parameter exceeds a predetermined threshold; and wherein the alert is for initiating improvement of acoustic properties of the environment.
  • the alert may, for example, be sent to an acoustics expert to improve the site, or to a local authority such as a noise complaint department, and/or may be used by the venue to automatically adjust the sound levels in the venue (i.e. smart music control) to match a venue operator’s target atmosphere.
  • a method for determining the hearing wellness of an environment comprises obtaining sound data about an environment as a function of time, dynamically calculating a plurality of sound data parameters for the environment as a function of time based on analysing the sound data, determining at least one hearing wellness parameter based on the plurality of sound data parameters; assigning the environment a hearing wellness category, based on the at least one hearing wellness parameter, obtaining hearing preference data of a user, and assessing suitability of the environment for the user based on correlating the hearing preference data of the user and the hearing wellness category of the environment.
  • the sound data may be obtained from an array of audio capture devices positioned throughout the environment.
  • At least one audio capture device may be located at (or as close to as is practicable) the reference measurement position and/or long-term measurement position as determined by the World Health Organisation (WHO) Global Standard for Safe Listening (see Annex 5, WHO Global Standard for Safe Listening 2022).
  • Preferably, at least a portion of the array of audio capture devices should be positioned at least one of: at a height comparable to audience head height; out of reach of audience members; at least 1 metre from any large reflective surface (such as a wall, ceiling, or large piece of furniture or equipment); and/or with a clear line of sight to any main loudspeakers where appropriate.
  • Assessing suitability of the environment for the user may comprise either (i) matching the environment to the user, or (ii) not matching the environment to the user, based on matching criteria determined by the hearing preference data in relation to the hearing wellness category of the environment.
  • the hearing preference data may comprise at least one of (i) subjective user preference data, and (ii) objective hearing performance data.
  • the method may further comprise obtaining venue data about the environment, wherein venue data comprises at least one static venue parameter, and wherein determining the at least one hearing wellness parameter is further based on the venue data.
  • the static venue parameter may comprise at least one of: reverberation time, venue building materials and furniture, acoustic quality, venue capacity, and unstructured data (such as photos of the venue) from which other data may be inferred or determined.
  • Assigning the environment the hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by at least one range of at least one hearing wellness parameter.
  • the at least one hearing wellness parameter may comprise a plurality of hearing wellness parameters, and each hearing wellness category is defined by a combination of hearing wellness parameters, wherein each hearing wellness parameter for each category is defined by a range.
  • At least a portion of the hearing parameter ranges defining each of the plurality of hearing wellness categories may overlap.
  • Obtaining sound data may comprise obtaining sound data about an environment as a function of time, and wherein calculating the plurality of sound data parameters and calculating the at least one hearing wellness parameter comprises dynamically calculating the parameters as a function of time.
  • the method may further comprise determining a hearing wellness rating by ascribing a numerical value to each of the plurality of hearing wellness parameters, applying a weighting to each of the plurality of hearing wellness parameters, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
  • the hearing wellness rating may correspond to respective ranges of the numerical value of the sum of weighted numerical values.
  • the method may further comprise ascribing each of the plurality of sound data parameters and/or hearing wellness parameters to a sound data principle, such that each sound data principle comprises a respective subset of the sound data parameters (and/or hearing wellness parameters), applying a first weighting to each of the plurality of hearing wellness parameters, applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
  • the first weighting may comprise a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting may be a percentage of a total percentage ascribable to all sound data principles.
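The two-level weighting scheme above (a first weighting for each parameter within its sound data principle, and a second weighting for each principle within the total) can be sketched as follows. The data structure, principle names, parameter values, and the specific weights are illustrative assumptions; the specification only requires the weighted sum itself.

```python
def hearing_wellness_rating(principles):
    """Two-level weighted sum sketch. `principles` maps a sound data
    principle to (principle_weight, {parameter: (value, parameter_weight)}).
    Parameter weights sum to 1 within a principle (first weighting);
    principle weights sum to 1 overall (second weighting)."""
    total = 0.0
    for principle_weight, params in principles.values():
        for value, param_weight in params.values():
            # Each parameter contributes its value scaled by both weightings.
            total += principle_weight * param_weight * value
    return total
```

For example, with an "adverse noise" principle weighted 60% (holding two equally weighted parameters valued 80 and 60) and a "space acoustic" principle weighted 40% (one parameter valued 90), the rating would be 0.6 × 70 + 0.4 × 90 = 78.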
  • a hearing wellness monitoring apparatus comprising a server configured to perform the method described above.
  • a method for identifying a suitable environment for a user based on hearing wellness comprises obtaining sound data about a plurality of environments, analysing the sound data to calculate at least one sound data parameter for each environment, correlating the at least one sound data parameter with at least one hearing wellness parameter for each environment, obtaining hearing preference data for a user, and matching at least one environment from the plurality of environments to the user based on correlating the at least one hearing wellness parameter of each environment and the hearing preference data of the user. This may allow users to be categorised based on an interaction with their Hearing Personality profile.
  • hearing preference data may be obtained by simulating an environment (e.g. a noisy restaurant) and using logic to determine a user’s tolerance levels for understanding speech in noise, but also preference levels.
  • Identifying at least one suitable environment may further comprise assigning each environment a hearing wellness category, based on the at least one hearing wellness parameter for each environment, and matching at least one environment from the plurality of environments to the user based on correlating the hearing preference data of the user and the hearing wellness category for each environment and/or the hearing wellness parameters.
  • Matching at least one environment from the plurality of environments to the user may further comprise identifying a subset of the plurality of environments, wherein the subset match criteria determined by the hearing preference data.
  • the hearing preference data may comprise different criteria for identifying suitable environments for the user based on the intended activity of the user.
  • the hearing preference data may comprise at least one of (i) subjective user preference data, and (ii) objective hearing performance data.
  • a hearing wellness monitoring apparatus comprising a server configured to perform the method described above.
  • a computer program product comprising program instructions configured to program a programmable device to perform the method of any of the aspects described above.
  • a method for displaying tailored information relating to hearing wellness of a user on a graphical user interface comprises displaying a series of predetermined questions, receiving hearing preference data of a user via user input in response to the series of predetermined questions, and dynamically displaying information relating to a subset of environments from a plurality of environments, wherein the subset of environments is determined to be suitable for the user based on correlating the hearing preference data of the user with real-time sound data parameters of each environment.
  • FIG. 1 shows a functional schematic view of an example process flow chart for an audio capture device for use with an example hearing wellness monitoring system
  • Fig. 2 shows a functional schematic view of an example audio capture device for use with an example hearing wellness monitoring system
  • Fig. 3 shows a functional schematic view of the different layers of the OSI model of an example audio capture device such as the audio capture device shown in Fig. 1 or 2;
  • Fig. 4 shows a functional schematic view of the functioning of an example hearing wellness monitoring system
  • Fig. 5 shows an example process flow chart for an example hearing wellness monitoring system, such as the system shown in Fig. 4;
  • Figs. 6A to 6D show example graphical user interfaces for the hearing wellness monitoring system
  • Fig. 7 shows an example process flowchart of how the hearing wellness monitoring system may be used by a user to identify venues that have a hearing wellness category that suits them;
  • Fig. 8 shows an example graphical user interface for a method of determining a user’s hearing personality
  • Fig. 9 shows another view of the graphical user interface of Fig. 8 for determining a user’s hearing personality
  • Fig. 10 shows another view of the graphical user interface of Fig. 8 for determining a user’s hearing personality
  • Fig. 11 shows another view of the graphical user interface of Fig. 8 indicating a user’s hearing personality
  • Fig. 12 shows an example process flow chart for a method of interacting with a venue’s sound system to adjust the sound profile of a venue
  • Fig. 13 shows an example process flow chart for a method of interacting with a venue’s sound system
  • Fig. 14 shows an example process flow chart for a method for determining the hearing wellness of an environment.
  • Fig. 1 shows a functional schematic view of an example process flow chart for an audio capture device 100 for use with an example hearing wellness monitoring system.
  • the example audio capture device 100 shown in Fig. 1 comprises a microphone array 103 configured to obtain sound data such as audio frequency spectrum 105 and venue sound pressure level 107. The audio capture device may further comprise a processor (not shown) configured to calculate, from this sound data, sound data parameters 109, for example audio-related statistics (such as average, min, max, SD, variance).
  • the audio capture device 100 further comprises a communication interface 111 configured to access and communicate with a database on a remote device or server 150 (such as in the cloud).
  • Fig. 2 shows a functional schematic view of an example audio capture device 200 such as the audio capture device shown in Fig. 1 for use with an example hearing wellness monitoring system.
  • the audio capture device 200 shown in Fig. 2 comprises a power management module 201 configured to provide power to the audio capture device 200. It also comprises a processor 203 coupled to a memory 205 and a data storage 207 which may for example be a memory card such as an SD card.
  • the audio capture device also comprises an audio driver 209 coupled to an external audio codec and four microphones 213 (which advantageously may allow directionality of audio/sound information to further be obtained), although it will be understood that more or fewer microphones may be used.
  • the audio capture device 200 also comprises a communication interface 215, which in this example may comprise a short-range wireless interface (e.g. a Bluetooth®, including AuracastTM, or Wi-Fi® interface) and a long-range wireless interface which in this example is a telecommunications interface (e.g. a 3G, 4G or 5G interface).
  • the audio capture device 200 is also operable to provide a graphical user interface 217, although it will be understood that this is optional.
  • an example hearing wellness monitoring system may comprise a plurality or an array of audio capture devices 200, and some of these audio capture devices 200 may have reduced functionality.
  • the example audio capture device 200 shown in Fig. 2 may act as a “hub” or “master” audio capture device and may be operable to receive sound data from a plurality of “spoke” or “slave” audio capture devices that may have reduced functionality.
  • the “spoke” or “slave” audio capture devices may have no data storage 207 and may only have a short-range wireless interface to communicate with the “hub” or “master” audio capture device.
  • the “spoke” or “slave” audio capture devices may also have less memory 205 and/or lower processing power for the processor 203.
  • the audio capture device 200 may be located at (or as close to as is practicable) the reference measurement position and/or long-term measurement position as determined by the World Health Organisation (WHO) Global Standard for Safe Listening (see Annex 5, WHO Global Standard for Safe Listening 2022).
  • the at least one audio capture device 200 should be at a height comparable to audience head height; out of reach of audience members; at least 1 metre from any large reflective surface (such as a wall, ceiling, or large piece of furniture or equipment); and with a clear line of sight to any main loudspeakers where appropriate.
  • each audio capture device preferably comprises a class 1 or class 2 sound level meter, as defined in international standard IEC 61672-1:2013.
  • the power management module 201 is operable to power the audio capture device 200 and all its components.
  • the processor 203 is operable to control the functioning of the audio capture device and to carry out the tasks described above with respect to Fig. 1.
  • the processor 203 is operable to receive sound data from the external audio codec 211 and optionally process the data to obtain sound data parameters, optionally store these on data storage 207 (for example in case buffering is needed due to poor network connection) and send data parameters via the communication interface 215 to a remote device 150 as shown in Fig. 1.
  • the audio capture device 200 is configured to obtain sound data about an environment as a function of time.
  • the sound data may include at least one of (i) audio frequency, and (ii) sound pressure level.
  • At least one sound data parameter may be determined from the sound data.
  • the at least one sound data parameter may comprise at least one of: Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level, A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)-weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum.
  • the at least one sound data parameter comprises A-weighted equivalent continuous sound level (in dB), measured over a specified period of time (T), LAeq,T.
  • the time (T) is 15 minutes, such that the at least one sound data parameter preferably comprises LAeq,15 min.
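The A-weighted equivalent continuous sound level LAeq,T is an energy average of the A-weighted levels measured over the period T. A minimal sketch, assuming the standard energy-averaging formula and that the input is a series of short A-weighted level samples covering the period (e.g. 15 minutes):

```python
import math

def laeq(levels_dba):
    """LAeq,T from equally spaced A-weighted level samples over period T:
    convert each level from dB to relative energy (10^(L/10)),
    take the arithmetic mean, and convert back to dB."""
    energies = [10 ** (level / 10) for level in levels_dba]
    return 10 * math.log10(sum(energies) / len(energies))
```

For a constant level the result equals that level; when levels vary, louder samples dominate the average (e.g. equal time at 60 dB and 70 dB yields about 67.4 dB, not 65 dB).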
  • the communication interface 215 is configured to transmit the sound data from the audio capture device 200 to the remote device 150.
  • the remote device 150 is configured to receive the sound data and configured to analyse the sound data.
  • the sound data may be obtained as a function of time according to pre-determined time windows. Analysing the sound data may comprise calculating each sound data parameter as a function of time from (i) sound data from a single time window, or (ii) sound data from a plurality of time windows.
  • a preferred time window is 12 seconds.
  • the sound capture may happen continuously during working hours (e.g. 9 am to 11 pm).
  • the audio capture device may be configured to perform an automated ‘start-up system’ to activate the device when the sound level reaches or exceeds a selected threshold dB level, such as a 40 dB level.
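The capture gating described in the two statements above (record during working hours, and activate once the level reaches the start-up threshold) can be sketched as follows. The function name, default hours, and default threshold are taken from the examples above; combining the two conditions in a single check is an assumption for illustration.

```python
from datetime import time

def should_capture(now, spl_db, start=time(9, 0), end=time(23, 0), threshold_db=40.0):
    """Illustrative sketch: capture only during working hours
    (9 am - 11 pm by default) and only once the measured sound level
    has reached or exceeded the start-up threshold (40 dB by default)."""
    return start <= now <= end and spl_db >= threshold_db
```

In practice the start-up check might latch the device on rather than gate each sample, but that detail is not specified here.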
  • Analysing the sound data further comprises dynamically calculating a plurality of sound data parameters as a function of time for the environment from the sound data. Analysing the sound data further comprises dynamically calculating at least one hearing wellness parameter as a function of time from the plurality of sound data parameters for assessing the hearing wellness characteristics of an environment.
  • the plurality of sound data parameters may comprise any or all of: Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level (dBA), A-weighted equivalent continuous sound pressure level (in dB), measured over a specified period of time (T) (LAeq,T), A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)-weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum.
  • the at least one hearing wellness parameter comprises at least one of: quality of speech, exposure to hazardous noise, unpleasant frequencies, maximum frequency magnitude, average dBA of the venue, acoustic quality, reverberation time and signal to noise ratio.
  • each hearing wellness parameter may correspond to a hearing wellness component.
  • the hearing wellness component may comprise any or all of: adverse noise, quality of speech, sensation of sounds, space ambient noise, space acoustic, space reflection to sounds, space atmosphere.
  • a degree of processing of the sound data is performed before it is sent to the remote device 150. This may advantageously save on bandwidth as less data may need to be transmitted.
  • local processing may comprise filtering the sound data and/or averaging the sound data.
  • local processing by the audio capture device may comprise removing data outside of a selected time window, for example the venue’s working hours (e.g. data taken outside working hours of 9 am to 11 pm may be removed).
  • Local processing may additionally or alternatively include, for each sample, normalizing 257 the frequency magnitudes, i.e. dividing each frequency magnitude f_t by the sum of all frequency magnitudes in the sample: f̂_t = f_t / Σ_t f_t.
  • Local processing may additionally or alternatively include computing dBA shares for each frequency, removing frequencies less than 200 Hz and more than 20 kHz, and/or summing up the dBA shares for the 200 Hz-20 kHz range to get new values of the dBAs.
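The band-limiting step above can be sketched as follows. The exact definition of a frequency’s "dBA share" is not given in this section, so the sketch assumes each frequency’s share is the total dBA value weighted by that frequency’s normalised magnitude; the function name is likewise illustrative.

```python
def bandlimit_dba(freqs_hz, magnitudes, dba_total):
    """Illustrative sketch: compute a dBA 'share' per frequency
    (assumed here to be dba_total scaled by normalised magnitude),
    drop frequencies outside 200 Hz - 20 kHz, and sum the remaining
    shares to obtain the new dBA value."""
    total_mag = sum(magnitudes)
    new_dba = 0.0
    for f, m in zip(freqs_hz, magnitudes):
        share = dba_total * (m / total_mag)   # this frequency's dBA share
        if 200 <= f <= 20000:                 # keep only 200 Hz - 20 kHz
            new_dba += share
    return new_dba
```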
  • the “hub” or “master” audio capture device may be configured to receive sound data from the array of audio capture devices, calculate averaged sound data as a function of time from the sound data from the plurality of audio capture devices, and transmit the averaged sound data as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data. It will be understood that in examples where there is a plurality or an array of audio capture devices, the “hub” or “master” audio capture device may be configured to perform the local processing discussed above.
  • the remote device, which may be implemented “in the cloud” in the form of cloud computing, may therefore be used to capture and store the data, while processing of the raw sound data capture is performed on the device.
  • the data may be processed, for example at the remote device, by a script (such as a python script) to follow a particular data structure for recording in a database management solution (such as MongoDB).
  • the data held in the database management solution may further be processed (for example via a python script) to analyse the data to obtain insights for use in reporting to the user via a user interface.
  • the remote server or device may be further configured to generate an alert in the event that at least one of (i) a sound data parameter or (ii) a hearing wellness parameter exceeds a pre-determined threshold; and wherein the alert is for initiating improvement of acoustic properties of the environment.
  • the alert may, for example, be sent to an acoustics expert to improve the site, or to a local authority such as a noise complaint department, and/or may be used by the venue to automatically adjust the sound levels in the venue (i.e. smart music control) to match a venue operator’s target atmosphere.
  • the remote server or device is configured to calculate the A- weighted equivalent continuous sound pressure level (in dB), measured over a 15 minute time interval (LAeq,15 min). The remote server or device is then configured to determine whether the LAeq,15 min sits within a specified range, or exceeds a threshold. For example, preferably the remote server or device is then configured to determine whether the LAeq,15 min exceeds the World Health Organisation (WHO) Global Standard for Safe Listening sound level limit of 100 dB LAeq,15 min. In the event that the LAeq,15 min does exceed 100 dB LAeq,15 min, the sound levels in the venue may be automatically adjusted, for example using smart audio control.
  • an alert may be generated to alert an acoustics expert to improve the site or to a local authority such as a noise complaint department.
  • a reduced sound level limit may be implemented for venues or events specifically targeted at children, such as a limit equal to or less than 94 dB LAeq,15 min, preferably 90 dB LAeq,15 min or lower.
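The LAeq,15 min limit check above can be sketched as follows, assuming the WHO Global Standard for Safe Listening limit of 100 dB from the text and the reduced 94 dB limit for venues or events targeted at children. The function name and the returned alert string are illustrative; in practice the alert would be routed to smart audio control, an acoustics expert, or a local authority as described above.

```python
def safe_listening_alert(laeq_15min_db, children_event=False):
    """Illustrative sketch: compare LAeq,15min against the applicable
    sound level limit (100 dB generally; 94 dB for children's events)
    and return an alert string if the limit is exceeded, else None."""
    limit = 94.0 if children_event else 100.0
    if laeq_15min_db > limit:
        return f"ALERT: {laeq_15min_db:.1f} dB LAeq,15min exceeds {limit:.0f} dB limit"
    return None
```

A parallel check against the 140 dB LCpeak and/or LA,Fmax limit (120 dB for children’s events) would follow the same shape.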
  • the remote server or device may also be configured to calculate a C-weighted peak instantaneous sound pressure level (in dB), LCpeak, and/or A- weighted fast maximum sound level, LA,Fmax.
  • the remote server or device is then configured to determine whether the LCpeak and/or LA,Fmax sits within a specified range, or exceeds a threshold.
  • the remote server or device is then configured to determine whether the LCpeak and/or LA,Fmax exceeds 140 dB LCpeak and/or LA,Fmax.
  • the sound levels in the venue may be automatically adjusted, for example by controlling or signalling to a fast-acting electronic limiter coupled to any loudspeakers.
  • an alert may be generated to alert an acoustics expert to improve the site or to a local authority such as a noise complaint department.
  • a reduced sound level limit may be implemented for venues or events specifically targeted at children, such as a limit equal to or less than 120 dB LCpeak and/or LA,Fmax.
  • Fig. 3 shows a functional schematic view of the different layers of the OSI model of an example audio capture device such as the audio capture device shown in Fig. 1 or 2.
  • the application layer 301 comprises a communications messaging logic module, a database link module, an SPL parameters computing and storage module, and an application logic module.
  • the middleware layer 303 comprises a Linux operating system module (as the audio capture device 200 may operate using Linux).
  • the hardware abstraction layer comprises an IO module, a console module, a microphone module, a persistent storage module, and a communication interface module which in this example comprises a 3G/4G module and a Wi-Fi® module.
  • the hardware layer 307 comprises an MCU core, RF, timers, EEPROM, ADC, UARTs, GPIOs and SPI.
  • Fig. 4 shows a functional schematic view of the functioning of an example hearing wellness monitoring system, for example using the example audio capture device described above with respect to Figs. 1 to 3.
  • the example hearing wellness monitoring system can be broadly divided into two parts: a first part 430 relates to a user’s hearing preferences or “personality”, which can be determined using the system and helps a user to select a venue that matches their hearing personality.
  • the second part 440 relates to how the hearing wellness monitoring system can be used to calculate at least one hearing wellness parameter and determine a hearing wellness category for the venue and/or to give it a hearing wellness rating. Both the first part and the second part interact with the remote device 150 described above with respect to Fig. 1 which is shown in Fig.
  • the Mumbli Platform 410 can process the sound data to determine sound data parameters and/or receive sound data parameters from an audio capture device 420 (depending on how much processing is performed locally by the audio capture device) to calculate at least one hearing wellness parameter which can then be used to assign a hearing wellness category and/or a hearing wellness rating to a venue.
  • a user can interact with the Mumbli Platform 150 to determine their own hearing personality and identify venues that may be suitable for them through the use of a graphical user interface which will be described in more detail below.
  • the remote server 410 (shown as the Mumbli API Cloud but also known and described in other figures as the Mumbli Platform) is further configured to obtain hearing preference data about a user, and dynamically assess the suitability of the environment for the user, based on the hearing preference data and the current hearing wellness category of the environment.
  • the at least one hearing wellness parameter comprises a plurality of hearing wellness parameters
  • analysing the sound data further comprises dynamically assigning the environment a hearing wellness category, based on the plurality of hearing wellness parameters, for characterising the environment according to its hearing wellness atmosphere.
  • the hearing wellness categories may be determined as part of a series of interviews with people who described different social or public environments. Advantageously this may help to translate dB levels into intuitive words to describe when a venue is too loud.
  • Assigning an environment a hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter.
  • the hearing wellness category may be a classification that seeks to describe the venue’s atmosphere in readily translatable and understandable terms, such as “buzzy”, “energetic” or “calm”.
  • Each hearing wellness category is defined by a plurality of ranges for a combination of hearing wellness parameters, and at least a portion of the hearing wellness parameter ranges defining each of the plurality of hearing wellness categories may overlap.
  • Fig. 5 shows an example process flow chart for an example hearing wellness monitoring system, such as part 440 of the system shown in Fig. 4.
  • sound data obtained by an audio capture device 510 is combined with information about the venue 525, which may be called a static venue parameter, and analysed by the remote device 520 for example for calculating the at least one hearing wellness parameter.
  • the hearing wellness parameter may therefore be based on at least one sound data parameter and at least one static venue parameter.
  • the static venue parameter may comprise at least one of: venue measurements, building materials, furniture materials, furniture measurements, reverberation time, acoustic quality, venue capacity and unstructured data (such as venue photos/pictures).
  • the table below lists a number of hearing wellness parameters (M1 to M8) and shows which sound data parameters and static venue parameters may be used in calculating the hearing wellness parameters.
  • For some hearing wellness parameters it may be seen that they are calculated using one or a plurality of sound data parameters (such as average dBA or exposure durations for each dBA level). For other hearing wellness parameters it may be seen that they are calculated using one or more static venue parameters and no sound data parameters. For other hearing wellness parameters it may be seen that they are calculated based on a combination of at least one sound data parameter and at least one static venue parameter. For other hearing wellness parameters it may be seen that they are calculated based on a combination of at least one hearing wellness parameter, for example as seen for M6.
  • the hearing wellness parameters shown in the table below are M1 (exposure to hazardous noises), M2 (SNR, signal to noise ratio), M3 (unpleasant frequencies), M4 (max frequency magnitude), M5 (average dBA of the venue, loudness), M6 (acoustic quality), M7 (reverberation time), and M8 (vibe).
  • the data may be obtained on a period time basis, for example at a selected frequency (such as every 12 seconds) where the sound data may comprise frequency information and decibel information.
  • the data may be stored as a semi-structured JSON format.
  • the data may be averaged, for example over a plurality of time windows, for example over the course of a selected period of time (such as an hour, a day, a week etc.).
  • the sound data is received from at least one, or an array, of audio capture devices.
  • Averaged sound data is calculated as a function of time from the sound data from the plurality of audio capture devices 510 (this is described as being performed locally on an audio capture device and may be performed on each audio capture device and/or by a hub or master audio capture device, but in other examples may be performed at the remote device or server).
  • the averaged sound data is then transmitted 515 as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data.
  • the remote device 520 (the Mumbli Platform or the Mumbli API) captures and stores the sound data, with some processing of the raw sound data captured being performed locally by the audio capture device(s) 510.
  • venue data is also transmitted 525 to the remote server 520 in the form of static venue parameters.
  • the venue data may be collected during onboarding of the venue into the hearing wellness monitoring system.
  • Both the sound data and venue data are then stored in a database management solution 530 (such as MongoDB), which is then processed further 540, for example to determine hearing wellness parameters and/or assign a hearing wellness category and/or a hearing wellness rating to the venue from which the sound data was captured.
  • the data processing 540 comprises data wrangling 540A, data cleaning 540B, and data analysis 540C.
  • the data analysis 540C is used to determine a hearing wellness rating 540D and/or assign a hearing wellness category to the venue from which the sound data was captured.
  • the data analytics, including the hearing wellness rating and/or hearing wellness category are presented on a graphical user interface for the hearing wellness monitoring system 540E. An example graphical user interface is shown in more detail in Figure 6.
  • Figs. 6A to 6D show example graphical user interfaces for the hearing wellness monitoring system.
  • the graphical user interfaces in Figs. 6A, 6B, and 6D display an output of the hearing wellness parameters 610 and a hearing wellness rating 620, also known as a “Certified for Sound” or CfS rating.
  • the graphical user interface in Fig. 6C displays the hearing wellness category of the venue 615, wherein the hearing wellness category 615 is assigned to the venue based on sound data parameters and/or hearing wellness parameters, for example hearing wellness parameter M8 (vibe) described in more detail below.
  • the graphical user interfaces of Figs. 6A, 6B, and 6D also display sound data parameters 630.
  • the sound data parameter 630 displayed shows the venue sound level as a function of time.
  • the hearing wellness rating 620 is determined by ascribing a numerical value to each of the plurality of hearing wellness parameters, applying a weighting to each of the plurality of hearing wellness parameters, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
  • the hearing wellness rating corresponds to respective ranges of the numerical value of the sum of weighted numerical values - e.g. A, B, C, D etc.
  • the hearing wellness rating 620 (labelled as the “CfS rank”) is listed as C.
  • determining a hearing wellness rating further comprises ascribing each of the plurality of sound data parameters (and/or hearing wellness parameters) to a sound data principle 625, such that each sound data principle comprises a respective subset of the sound data parameters (and/or hearing wellness parameters), applying a first weighting to each of the plurality of hearing wellness parameters, applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
  • the first weighting comprises a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting is a percentage of a total percentage ascribable to all sound data principles.
  • for each sound data principle there are one or more components, and for each component there are one or more hearing wellness parameters.
  • Each hearing wellness parameter is ascribed a weighting, expressed as a percentage, such that the weightings for all of the hearing wellness parameters within a sound data principle sum to 100.
  • Each sound data principle is also ascribed a weighting, such that the sound data principle weightings likewise sum to 100.
  • the sound data principle inclusive space comprises components adverse noise, quality of speech and sensation of sounds.
  • Adverse noise comprises the hearing wellness parameter M1 (exposure to hazardous noises) and has a weighting of 40.
  • Quality of speech comprises the sound data parameter M2 (SNR), which is itself derived from the speech typical dBA and the average dBA of the venue, and has a weighting of 40.
  • Sensation of sounds comprises the sound data parameter M3 (unpleasant frequencies) and has a weighting of 20.
  • the sound data principle inclusive space has a weighting of 40.
  • the sound data principle acoustic space comprises the components space ambient noise, space acoustic and space reflection to sounds.
  • Space ambient noise comprises the sound data parameters M4 (maximum frequency magnitude) and M5 (average dBA of the venue), each of which has a weighting of 15.
  • the component space acoustic comprises the sound data parameter M6 (acoustic quality) and has a weighting of 35.
  • the component space reflection to sounds comprises the sound data parameter M7 (reverberation time) and has a weighting of 35.
  • the sound data principle acoustic space has a weighting of 30.
  • the sound data principle harmonic space comprises the component space atmosphere and comprises a sound data parameter M8 (vibe of the venue) and has a weighting of 30.
  • the scores when weightings are applied are then summed and ascribed a hearing wellness rating of A, B, C or D according to the table below.
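The doubly weighted scoring described in the bullets above can be sketched as follows. The principle and parameter weightings are taken from the worked example in the text; the letter-grade boundaries, however, are hypothetical placeholders, since the actual rating table is not reproduced here, and the parameter values are assumed to be pre-normalised to a 0-1 range.

```python
# Sketch of the weighted hearing wellness rating described above.
# Weightings follow the worked example; the A-D boundaries are hypothetical.

# Second weighting: percentage ascribed to each sound data principle.
PRINCIPLE_WEIGHTS = {"inclusive_space": 40, "acoustic_space": 30, "harmonic_space": 30}

# First weighting: percentage ascribed to each hearing wellness parameter
# within its principle (summing to 100 per principle).
PARAMETER_WEIGHTS = {
    "inclusive_space": {"M1": 40, "M2": 40, "M3": 20},
    "acoustic_space": {"M4": 15, "M5": 15, "M6": 35, "M7": 35},
    "harmonic_space": {"M8": 100},
}

def wellness_score(parameter_values: dict) -> float:
    """Sum of doubly weighted parameter values (values assumed in 0-1)."""
    total = 0.0
    for principle, params in PARAMETER_WEIGHTS.items():
        principle_weight = PRINCIPLE_WEIGHTS[principle] / 100
        for name, weight in params.items():
            total += parameter_values[name] * (weight / 100) * principle_weight
    return total

def wellness_rating(score: float) -> str:
    """Map the summed score to a letter rating (hypothetical boundaries)."""
    for boundary, letter in [(0.75, "A"), (0.5, "B"), (0.25, "C")]:
        if score >= boundary:
            return letter
    return "D"
```

Because both weightings are percentages, a venue scoring the maximum on every parameter sums to exactly 1.0 before the letter grade is applied.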
  • the scores for the sound data parameter M1 are determined based on the average decibel rating for the venue over a selected time period or over the course of a day, with a higher score (scores ranging from 1 to 4, 4 being the highest) being given to a higher average decibel level.
  • the average may be determined over a period of time, for example a selected plurality of time windows. For example, the average may be determined over a period of five minutes, ten minutes, thirty minutes, an hour, several hours, a day, a week etc..
  • the sound data parameter M2 (SNR) may be calculated based on the below equation:
  • the scores for the sound data parameter M2 are determined based on the SNR, with a higher SNR giving a lower score (ranging from 1 to 6) and a lower SNR giving a higher score.
  • the SNR may be averaged over a period of time as with M1 above.
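The M1 and M2 scoring described above can be sketched as below. Note the SNR equation itself is not reproduced in this text; here it is assumed to be the difference between a typical speech level and the venue's average dBA, as suggested by the components listed for M2, and all band edges are illustrative placeholders rather than the actual thresholds.

```python
# Hedged sketch of M1 and M2 scoring. The SNR form and all band edges are
# assumptions, not values taken from the actual scoring tables.

def average_dba(window_readings):
    """Average the per-window dBA readings over the selected period
    (a simple arithmetic mean; an energetic average could be used instead)."""
    return sum(window_readings) / len(window_readings)

def score_m1(avg_dba: float) -> int:
    """Higher average level -> higher (worse) score, from 1 to 4."""
    for edge, score in [(85, 4), (75, 3), (65, 2)]:  # hypothetical edges
        if avg_dba >= edge:
            return score
    return 1

def score_m2(speech_typical_dba: float, avg_venue_dba: float) -> int:
    """Higher SNR -> lower (better) score, from 1 to 6."""
    snr = speech_typical_dba - avg_venue_dba  # assumed form of the equation
    for edge, score in [(15, 1), (10, 2), (5, 3), (0, 4), (-5, 5)]:
        if snr >= edge:
            return score
    return 6
```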
  • the scores for the sound data parameter M3 are determined based on whether the average or predominant frequency over a selected time period (such as five minutes, for example) falls into one of the four selected groups listed above, with frequencies in the range 1480 to 5300 Hz having the highest score (as being most unpleasant) followed by frequencies in the range 5300 to 7700 Hz, frequencies greater than 7700 Hz and then lastly frequencies lower than 1480 Hz having the lowest score.
  • the Bark scale is defined so that the critical bands of human hearing each have a width of one Bark.
  • the human auditory (hearing) system can be thought of as a series of bandpass filters. Interestingly, the spacing of these filters does not strictly follow either a linear frequency scale or a logarithmic musical scale.
  • the Bark Scale is an attempt to determine what the center frequency and bandwidth of those “hearing filters” are (known as critical bands).
  • the Bark scales are grouped into frequency groups according to a grouping mechanism (table below). For example, group 3 consists of four adjacent Bark scales: 4, 5, 6 and 7.
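The M3 (unpleasant frequencies) scoring above ranks four frequency bands from most to least unpleasant. A minimal sketch, using the band edges from the text but with the numeric score values as illustrative assumptions (the text gives only the ordering):

```python
# Sketch of M3 scoring: band edges are from the text; the 4..1 score values
# are illustrative, reflecting only the stated most-to-least-unpleasant order.

def score_m3(predominant_hz: float) -> int:
    """Score the average/predominant frequency of a window (e.g. 5 minutes)."""
    if 1480 <= predominant_hz < 5300:
        return 4  # most unpleasant band -> highest score
    if 5300 <= predominant_hz < 7700:
        return 3
    if predominant_hz >= 7700:
        return 2
    return 1      # below 1480 Hz -> lowest score
```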
  • the scores for the sound data parameter M5 are determined based on the average dBA range the average dBA of the venue falls within, with a lower average dBA giving a lower score (from 1 to 4) and a higher dBA giving a higher score.
  • the average dBA range may be calculated over a 3-week time period.
  • the acoustic capacity is defined as the number of people that would create a signal-to-noise ratio (SNR) of -3 dB, which is considered the lower limit for “sufficient” quality of verbal communication under certain preconditions.
  • the acoustic capacity is calculated from volume and reverberation time.
  • the acoustic quality of an eating establishment may be characterized by the ratio between the acoustic capacity and the total capacity.
  • V: volume of the space
  • RT60: reverberation time
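The acoustic capacity calculation itself is not reproduced in the text; it is stated only that it is calculated from volume and reverberation time. The sketch below therefore models capacity as proportional to V / RT60 with a hypothetical constant K, and the quality figure as the stated capacity ratio.

```python
# Hedged sketch of acoustic capacity and acoustic quality. The constant K is
# a hypothetical placeholder; the real relation should be taken from the
# underlying acoustic model, which is not reproduced here.

K = 1 / 20  # hypothetical proportionality constant (people * s / m^3)

def acoustic_capacity(volume_m3: float, rt60_s: float) -> float:
    """People count giving roughly -3 dB SNR, modelled as K * V / RT60."""
    return K * volume_m3 / rt60_s

def acoustic_quality(volume_m3: float, rt60_s: float, total_capacity: int) -> float:
    """Ratio between the acoustic capacity and the total (seating) capacity."""
    return acoustic_capacity(volume_m3, rt60_s) / total_capacity
```

Intuitively, larger rooms can absorb more talkers, while longer reverberation times let each talker's noise linger, lowering the capacity.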
  • RT60 is the time the sound pressure level takes to decrease by 60 dB after a sound source is abruptly switched off.
  • RT60 is thus a commonly used abbreviation for Reverberation Time.
  • RT60 values may vary at different positions within a room. Therefore, an average reading is most often taken across the space being measured. Rooms with an RT60 of <0.3 seconds are called acoustically “dead”. Rooms with an RT60 of >2 seconds are considered to be “echoic”.
  • It is often very difficult to accurately measure the T60 time, as it may not be possible to generate a sound level that is consistent and stable enough, especially in large rooms or spaces. To get around this problem, it is more common to measure the T20 and T30 times and then multiply these by 3 and 2 respectively to obtain the overall T60 time.
  • the T20 and T30 values are usually called “late reverberation times” as they are measured a short period of time after the noise source has been switched off or has ended.
  • a sound source is used, and this can either be an interrupted source, such as a loudspeaker, or an impulsive noise source, such as a starting pistol.
  • the interrupted method is most commonly used as the sound source can be calibrated and controlled accurately, allowing for more repeatable measurements.
  • the measurement of reverberation time typically follows this process:
  • the calculation of the T20 and T30 times starts after the sound has decayed by 5 dB and ends after the level has dropped by 20 dB and 30 dB respectively.
  • the measured data must have at least 10 dB headroom above the noise floor.
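The T20/T30 evaluation described above can be sketched as follows: the evaluation window starts once the level has dropped 5 dB below the initial level and ends 20 dB (or 30 dB) further down, and the elapsed time is scaled to a full 60 dB decay. The linear search over samples is a simplification; in practice a regression over the decay curve would be used.

```python
# Sketch of T60 estimation from an interrupted-source decay curve, following
# the -5 dB start / -25 dB (or -35 dB) end convention described above.

def rt60_from_decay(times, levels_db, span_db=20):
    """Estimate RT60 from (time, level) samples of a decay.
    span_db is 20 for a T20 measurement or 30 for a T30 measurement."""
    start_level = levels_db[0] - 5        # evaluation starts at -5 dB
    end_level = start_level - span_db     # ...and ends span_db lower
    t_start = next(t for t, l in zip(times, levels_db) if l <= start_level)
    t_end = next(t for t, l in zip(times, levels_db) if l <= end_level)
    factor = 60 / span_db                 # x3 for T20, x2 for T30
    return (t_end - t_start) * factor
```

For an ideal linear decay, the T20-based and T30-based estimates agree exactly; on real measurements they differ slightly, which is one reason both are reported.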
  • the score for the sound data parameter M7 (reverberation time) is then determined based on what RT range the determined RT for the venue falls within, with a lower RT giving a lower score (from 1 to 4) and a higher RT giving a higher score.
  • the scores for the sound data parameter M8 may be determined based on the average dBA range that the average dBA of the venue falls within, with a lower average dBA giving a lower score (from 1 to 7) and a higher average dBA giving a higher score. It can also be seen how the subjective terms “calm”, “buzzy”, “energetic” and “overwhelming” may be applied to describe the vibe of the venue based on the average dBA level.
  • the average dBA may be determined over a selected time period, and optionally during selected time windows within that selected time period (such as hourly, daily, weekly, monthly or annually), generally taking the opening times of the venue into account - for example, it may be determined weekly during the opening hours of the venue.
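The M8 (vibe) categorisation above maps the venue's average dBA to both a score and a descriptive term. A minimal sketch, in which the dBA band edges are hypothetical placeholders (the actual table is not reproduced here):

```python
# Sketch of M8 (vibe) categorisation. Band edges are hypothetical; only the
# ordering calm < buzzy < energetic < overwhelming is taken from the text.

VIBE_BANDS = [          # (upper dBA edge, category)
    (55, "calm"),
    (65, "buzzy"),
    (75, "energetic"),
    (float("inf"), "overwhelming"),
]

def vibe_category(avg_dba: float) -> str:
    """Descriptive vibe term for the venue's average dBA; a 1-7 M8 score
    could be derived from finer-grained bands in the same way."""
    for upper_edge, category in VIBE_BANDS:
        if avg_dba < upper_edge:
            return category
```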
  • Fig. 7 shows an example process flowchart of how the hearing wellness monitoring system may be used by a user to identify venues that have a hearing wellness category that suits them.
  • the hearing wellness monitoring system comprises an application 710, wherein the application 710 communicates with the remote server.
  • a plurality of venues is listed on the application 710, wherein each venue is characterised by an assigned hearing wellness category 720 based on at least one hearing wellness parameter of each venue, as determined by the remote server.
  • Assigning a venue a hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter.
  • the hearing wellness category may be a classification that seeks to describe the venue’s atmosphere in readily translatable and understandable terms, such as “overwhelming”, “buzzy”, “energetic” or “calm”, for example as described above in relation to sound data parameter M8 (vibe).
  • the application 710 also obtains hearing preference data about the user, and a subset of the plurality of venues are matched (either locally by the application 710 or at the remote device/server) to the user according to the hearing wellness category which best matches the hearing preference data of the user 730.
  • the remote server additionally alerts venues categorised by an “overwhelming” hearing wellness category, wherein the alert may be for initiating improvement of acoustic properties of the environment 740.
  • the alert may, for example, alert an acoustics expert to improve the site, or a local authority such as a noise complaint department, and/or may be used by the venue to automatically adjust the sound levels in the venue (i.e. smart music control) to match a venue operator’s target atmosphere. This may result in the venue being re-categorised into a different hearing wellness category based on changes to at least one hearing wellness parameter of the venue 750.
  • Figs. 8 to 11 show various screenshots of an example graphical user interface for a method of determining a user’s hearing personality.
  • the graphical user interface can be used to display tailored information relating to hearing wellness of a user on a graphical user interface.
  • the graphical user interface may display a series of questions for determining hearing preference data of a user.
  • the graphical user interface displays a series of predetermined questions (for example, in Fig. 9 the graphical user interface 900 asks a user what kind of atmosphere the user looks for in a certain social setting when doing solo work outside of an office/home setting - to be selected from calm, buzzy or energetic - and asks the user what venue they last went to when they attended a work meeting outside the office - to be selected from cafe, bar or restaurant).
  • the graphical user interface then receives the hearing preference data of a user via user input in response to the series of predetermined questions, and then dynamically displays (as shown in Fig. 10) information based on the hearing preference data, such as information relating to a subset of environments from a plurality of environments, wherein the subset of environments is determined to be suitable for the user based on correlating the hearing preference data of the user with real-time sound data parameters of each environment.
  • Fig. 10 indicates that the hearing preference data of the user is that they can’t stand environmental sounds (e.g. sirens), that they prefer calm places, and that they use ear tech to control what and how they hear.
  • This hearing preference data may form a hearing preference profile associated to the user.
  • Fig. 11 shows a summary of the user’s preferred atmospheres for different occasions (e.g. in the graphical interface 1100, a preferred atmosphere for each of solo work, work meetings and socialising) as well as an indication of the user’s hearing personality. This information may then be used to enable the user to identify a venue whose hearing wellness parameters and hearing wellness category (calculated or assigned based on the sound data captured at the venue) match the user’s preferences.
  • Fig. 12 shows an example process flow chart for a method of interacting with a venue’s sound system to adjust the sound profile of a venue.
  • a plurality of remote devices 1210, each associated with a user, communicate via an app with an Internet-Of-Things (IOT) device 1220 located in the venue.
  • the IOT device 1220 may be the audio capture device shown in relation to Figs. 1 to 3.
  • the communication comprises each device 1210 broadcasting a user identifier to the IOT device 1220.
  • the communication method may be a short-range, wireless communication method, for example Bluetooth®.
  • the IOT device 1220 then obtains hearing preference data associated with each user identifier from a remote server 1230.
  • the remote server 1230 may be the server described as “Mumbli Cloud” 150, 410, or “Mumbli API” 520 shown in relation to Figs. 1 to 5.
  • the IOT device 1220 may communicate with the remote server 1230 via a long-range, wireless communication method, for example 3G, 4G or 5G.
  • a group hearing preference profile is created based on the obtained hearing preference data of the users.
  • the IOT device 1220 may then compare the group hearing preference profile with the current sound environment of the venue. Based on this comparison, the IOT device 1220 may communicate with the venue sound system 1240, for example using wireless communication such as Wi-Fi, to adjust the volume or audio profile of sound from the venue sound system 1240. This may be advantageous as the venue's sound system may adapt to the users' needs.
  • the IOT device 1220 may communicate with the venue sound system 1240 to reduce the volume of the music, thus reducing the average dBA of the venue to comply with the preferred hearing environment. Communication between the IOT device 1220 and the venue sound system 1240 may also be used to adjust the sound from the venue sound system 1240 to keep a constant sound dB value over the environmental noise.
  • the IOT device 1220 may request the app on each remote device 1210 to display a poll asking the user about any venue profile changes.
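The group-preference flow of Fig. 12 can be sketched as below: each checked-in user's preferred level is aggregated into a group profile, which is then compared against the venue's current level to decide an adjustment. The function names, the use of a preferred dBA per user, and the deadband value are all illustrative assumptions, not an actual Mumbli API.

```python
# Sketch of the Fig. 12 group-preference comparison. All names and the
# per-user "preferred dBA" representation are illustrative assumptions.

def group_preferred_dba(preferences: dict) -> float:
    """Aggregate individual preferred dBA levels into a group target
    (a simple mean here; other aggregations are possible)."""
    return sum(preferences.values()) / len(preferences)

def volume_adjustment(current_dba: float, preferences: dict,
                      deadband_db: float = 2.0) -> float:
    """Signed dB change to request from the venue sound system; zero when
    the venue is already within the deadband of the group target."""
    delta = group_preferred_dba(preferences) - current_dba
    return delta if abs(delta) > deadband_db else 0.0
```

The deadband prevents the sound system from chasing every small fluctuation in the measured venue level.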
  • Fig. 13 shows an example process flow chart for a method of interacting with a venue’s sound system.
  • a plurality of remote devices 1210, each associated with a user, communicate via an app with an IOT device 1220 located in the venue.
  • the IOT device 1220 may be the audio capture device shown in relation to Figs. 1 to 3, or the IOT device shown in relation to Fig. 12.
  • the communication comprises each device 1210 broadcasting a user identifier to the IOT device 1220 which can be used to “check-in” the remote device to the venue.
  • the communication may be via a short-range, wireless communication method, for example Bluetooth®.
  • the venue in this example comprises a microphone system 1350.
  • the microphone system 1350 is coupled to an audio transmitter 1340 which is configured to wirelessly communicate with an audio receiver module 1225 coupled to the IOT device 1220.
  • the remote device 1210 can initiate a request to the IOT device 1220 to stream the audio input from the microphone system 1350 to the headset 1215.
  • the IOT device 1220 may then create a secure channel with the user remote device 1210, for example over a Wi-Fi channel or via Bluetooth®, such as via Auracast™ broadcast audio, enabling the input from the microphone system to be streamed to the headset 1215.
  • Fig. 14 shows an example process flow chart for a method for determining the hearing wellness of an environment.
  • the method is configured to be performed by a remote server.
  • the method first comprises obtaining sound data about an environment as a function of time 1410, for example wherein the sound data is obtained from an array of audio capture devices positioned throughout the environment as described in relation to Figures 1 and 2.
  • a plurality of sound data parameters is then dynamically calculated for the environment as a function of time based on analysing the sound data 1420.
  • At least one hearing wellness parameter is then determined based on the plurality of sound data parameters 1430, for example as described above in relation to Figure 6.
  • a hearing wellness category is then assigned to the environment, based on the at least one hearing wellness parameter 1440.
  • Assigning a venue a hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter. At least a portion of the hearing parameter ranges defining each of the plurality of hearing wellness categories may overlap.
  • the hearing wellness category may be a classification that seeks to describe the venue’s atmosphere in readily translatable and understandable terms, such as “overwhelming”, “buzzy”, “energetic” or “calm”, for example as described above in relation to sound data parameter M8 (vibe).
  • Hearing preference data of a user is then obtained based on the user profile. An assessment of suitability of the environment for the user is then made based on correlating the hearing preference data of the user and the hearing wellness category of the environment. If the environment is assessed to be suitable, based on matching criteria determined by the hearing preference data in relation to the hearing wellness category of the environment, the environment is matched and/or suggested to the user. If the environment is assessed to be unsuitable, the environment is not matched and/or not suggested to the user.
  • the hearing preference data of the user comprises at least one of (i) subjective user preference data, and (ii) objective hearing performance data of the user.
  • the hearing preference data may also comprise different criteria for identifying a suitable environment for the user based on the intended activity of the user, for example work, socialising, meetings etc.
  • the method further comprises obtaining venue data about the environment. Determining the at least one hearing wellness parameter may then further be based on the venue data.
  • Venue data comprises at least one static venue parameter, for example, at least one of venue measurements, building materials, furniture materials, furniture measurements, photos.
  • the method further comprises ascribing each of the plurality of hearing wellness parameters (or sound data parameters) to a sound data principle, such that each sound data principle comprises a respective subset of the hearing wellness parameters, then applying a first weighting to each of the plurality of hearing wellness parameters within the sound data principle.
  • a second weighting to each sound data principle is then applied to give a respective weighted value for each hearing wellness parameter.
  • a hearing wellness rating is then determined for the environment based on the sum of the weighted numerical values of the hearing wellness parameters, wherein the first weighting comprises a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting is a percentage of a total percentage ascribable to all sound data principles.
  • obtaining sound data about an environment comprises obtaining sound data about a plurality of environments or venues.
  • the method comprises calculating at least one sound data parameter for each environment, determining at least one hearing wellness parameter for each environment; and assigning a hearing wellness category to each environment. Assessing the suitability of the environment for the user then further comprises identifying a sub-set of the plurality of environments to match to the user, based on matching criteria determined by the hearing preference data in relation to the hearing wellness categories of the plurality of environments.
  • the aforementioned examples relate to a hearing wellness monitoring system and method for use in a social venue or environment.
  • the audio capture device of Figs. 1 to 3, which may be an IOT device as discussed above, may additionally provide security functionality to a venue or environment by monitoring sound while the location is closed.
  • the IOT device may send an alert to the venue or environment owner or relevant authorities, indicating a burglary or trespasser.
  • the IOT device may also simultaneously generate an alarm.
  • the IOT smart device described in relation to Figures 1 to 3 may also be used to identify the presence and location of a distress call using the microphone array and send an alert to a relevant person or authority.
  • the location may be determined based on determining the proximity of the source of sound (distress call) relative to the array of microphones.
  • the device may also classify a sexual assault or other similar types of distress calls using audio processing.
  • the smart IOT device of Figures 1 to 3 may also be used in a dense traffic region (including human traffic and vehicular traffic), wherein the IOT device is configured to monitor the environmental sound and send an alert to the authorities in the event the legal sound threshold is exceeded.
  • the IOT device may be configured for indoor or outdoor use.
  • the IOT device may also be used to monitor the noise level of people in non-venue locations, for example corridors or shared accommodation.
  • the IOT device may be used to provide an indication to reduce noise in loud areas where a threshold has been exceeded.
  • the IOT device may be configured to communicate the indication through a display.
  • the display may comprise a screen located in the corridor which displays an indication to reduce noise to passers-by.
  • the display may comprise a local screen display, and/or a display to a supervisor. In the event that a threshold has been exceeded, a supervisor or relevant authority may be alerted to take action to reduce the noise.
  • the IOT device may also be coupled to a sound system, wherein the IOT device is configured to communicate with the sound system to adjust the volume and/or sound profile from the sound system to keep a constant sound dB value over the environmental noise.
  • Environmental noise may include background noise, conversation, traffic noise etc.
  • the IOT device may be configured to monitor environmental sound values (for example, average SPL) for a predetermined time window, for example every 12 seconds.
  • the IOT device may then repeatedly compare the updated average environmental sound value to the preceding average environmental sound value.
  • the IOT device may calculate the difference in the sound value and correspondingly adjust the volume and/or sound profile of the sound system.
  • the corresponding adjustment may be proportional to the change in average environmental sound value.
  • the adjustment to the sound system is made in real time in response to the change in environmental sound value.
  • the IOT device may adjust the volume and/or sound profile of the sound system via a short-range, wireless communication method, for example Wi-Fi.
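The monitoring-and-adjustment loop described above can be sketched as follows: the SPL is averaged over each window (e.g. 12 seconds), each average is compared with the preceding one, and the difference is applied to the sound-system volume in proportion. The gain factor and window representation are illustrative assumptions.

```python
# Sketch of the windowed SPL monitoring and proportional volume adjustment
# described above. GAIN is an illustrative proportionality factor.

GAIN = 1.0  # dB of output change per dB of environmental change

def window_average_spl(samples):
    """Average SPL over one monitoring window (e.g. 12 seconds)."""
    return sum(samples) / len(samples)

def run_windows(spl_windows, initial_volume_db):
    """Return the sound-system volume after each window, adjusted in
    proportion to the change in average SPL between consecutive windows."""
    volume = initial_volume_db
    previous = None
    history = []
    for samples in spl_windows:
        current = window_average_spl(samples)
        if previous is not None:
            volume += GAIN * (current - previous)
        previous = current
        history.append(volume)
    return history
```

With GAIN = 1.0 the output rises and falls dB-for-dB with the environment, which is one way to keep a constant dB margin over the environmental noise.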
  • the IOT device may also be used to control a "white noise level" of at least one headset coupled to a user remote device to protect the hearing of the user based on environmental noise levels, wherein the IOT device is configured to communicate to the user remote device via a short-range, wireless communication method such as Wi-Fi.
  • White noise is a random signal having equal intensity at different frequencies. Controlling the “white noise level” may include increasing the volume or sound profile of the white noise provided to the headset.
  • the IOT device may be configured to monitor environmental noise (for example, average SPL) for a predetermined time window, for example every 12 seconds.
  • the IOT device may then correspondingly initiate and/or adjust the volume of white noise played through a connected headset.
  • the corresponding adjustment may be proportional to the magnitude of the environmental sound value in excess of the threshold.
  • the control of the white noise is made in real time in response to the change in environmental sound value.
  • the IOT device may adjust the volume and/or sound profile of the white noise via a short-range, wireless communication method to the user remote device, for example Wi-Fi or Bluetooth®, including Auracast™.
  • the user remote device then provides the adjusted white noise to the coupled headset.
  • alternatively, the IOT device may provide the white noise directly to the headset via a short-range, wireless communication method, for example Wi-Fi or Bluetooth®, including Auracast™.
  • the IOT device may also be used to provide a "noise cancelling signal" to at least one headset coupled to a user remote device to protect the hearing of the user based on environmental noise levels.
  • the IOT device may be configured to monitor environmental noise (for example across the frequency spectrum) and generate a real-time “noise cancelling signal”.
  • the noise cancelling signal may be generated using anti-sound (phase-inverted) waves or other active sound cancelling methods.
  • the IOT device may provide the sound cancelling signal to the user remote device via a short-range, wireless communication method, for example Wi-Fi or Bluetooth®, including Auracast™.
  • the user remote device then provides the noise cancelling signal to the coupled headset.
  • the IOT device may provide the sound cancelling signal directly to the headset via a short-range, wireless communication method.
  • An application on a user remote device may also be configured to set up an audio profile for audio earpieces or a headset based on the preferences of the user, similar to the hearing preference data described in relation to the examples above.
  • the IOT device may then be configured to adjust the audio properties of the earpieces or headset based on the environmental noise measurements monitored by the IOT device.
  • the IOT device may also be used to analyse sound distribution in a venue or space, for example an office space.
  • the IOT device for example the device of Figures 1 to 3, may use audio input from a microphone array to determine the directionality and source of sound, as well as sound data parameters (for example, SPL and frequency spectrums). Based on the sound data received at each microphone, the IOT device may be configured to map the sound in space according to at least one sound parameter. Mapping in space may comprise overlaying a visual representation of local sound environments across an image of the space (e.g., a 2D floor plan, or 3D image).
  • the IOT device may be configured to generate a “heat map”, for example wherein loud areas (high SPL) are identified by a first colour and quiet areas (low SPL) are identified by a second colour.
  • the generated map may also comprise a gradient of colours corresponding to a plurality of different ranges of a sound parameter.
  • the IOT device may be configured to map in space any of the sound data parameters and/or hearing wellness parameters described above in relation to aforementioned specific examples.
  • Such sound maps may be used for management of sound and sound planning or monitoring occupancy of a location.
  • an alert may be sent to a manager or relevant authority to take action in that specific area to rectify the sound profile. Such action could involve dispersing crowds or redistributing people, moving furniture, installing sound dampening structures, etc.
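The heat-map classification described above reduces to mapping a per-zone SPL estimate to a colour band for overlay on a floor plan. A minimal sketch, in which the zone names, colours and band edges are illustrative assumptions:

```python
# Sketch of classifying per-zone SPL estimates into colour bands for a
# floor-plan overlay. Band edges and colour names are illustrative.

COLOUR_BANDS = [          # (upper SPL edge in dB, colour)
    (60, "green"),        # quiet areas
    (75, "amber"),
    (float("inf"), "red"),  # loud areas
]

def colour_for_spl(spl_db: float) -> str:
    for upper_edge, colour in COLOUR_BANDS:
        if spl_db < upper_edge:
            return colour

def heat_map(zone_spl: dict) -> dict:
    """Map each floor-plan zone to a colour band for display."""
    return {zone: colour_for_spl(spl) for zone, spl in zone_spl.items()}
```

A gradient of colours, as mentioned above, would simply use more (narrower) bands in the same structure.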
  • the IOT device may also be used to track a user remote device in an environment.
  • a user remote device may load an application which causes the user remote device to generate and emit a sound of a specific fixed frequency, preferably wherein the frequency is above the human hearing range.
  • the IOT device communicates with the application on the user remote device and associates the specific fixed frequency generated by the user remote device with a device identifier (ID).
  • the IOT device can then track the position of the user remote device within the environment based on the directionality of the sound detected by the microphone array at the specific fixed frequency, which can be used to extrapolate the location of the user remote device (as the source of the sound) as a function of time.
  • the IOT device may additionally generate a trajectory trace of the fixed frequency to visualise the movement of the user device in the environment.
  • the IOT device may simultaneously monitor and/or track a plurality of user remote devices, wherein each user device is associated with a different specific fixed frequency.
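One way to realise the fixed-frequency tracking described above is to measure the energy of each device's assigned tone in each microphone's signal, for example with the Goertzel algorithm (a single-frequency DFT), and take the strongest microphone as the nearest. This is a sketch under stated assumptions: the device-to-tone table, function names, and the nearest-microphone proxy for position are all illustrative, not the actual implementation.

```python
# Sketch of tracking a device by its assigned fixed ultrasonic tone using
# the Goertzel algorithm. Names and the tone table are illustrative.
import math

DEVICE_TONES = {"device-42": 19_000.0}  # Hz, above typical hearing range

def goertzel_power(samples, sample_rate, target_hz):
    """Power of a single target frequency in a block of samples."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def nearest_microphone(mic_samples, sample_rate, device_id):
    """Microphone with the strongest response at the device's tone,
    used here as a crude proxy for the device's location."""
    tone = DEVICE_TONES[device_id]
    return max(mic_samples,
               key=lambda mic: goertzel_power(mic_samples[mic], sample_rate, tone))
```

Repeating this per monitoring window yields the location-over-time trace mentioned above, and distinct tones per device keep multiple devices separable.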

Abstract

A hearing wellness monitoring system is disclosed herein. The hearing wellness monitoring system comprises at least one audio capture device configured to provide sound data about an environment as a function of time, and a remote server, configured to receive the sound data, and wherein the remote server is configured to analyse the sound data. The audio capture device comprises at least one microphone and a communication interface, configured to transmit the sound data from the audio capture device. Analysing the sound data further comprises dynamically calculating a plurality of sound data parameters as a function of time for the environment from the sound data, and dynamically calculating at least one hearing wellness parameter as a function of time from the plurality of sound data parameters for assessing the hearing wellness characteristics of an environment.

Description

A hearing wellness monitoring system and method
Field of the invention
The present disclosure relates to a hearing wellness monitoring system and method.
Background
Different venues exhibit different sound profiles. The sound profile of a venue is one of the factors that influence people’s choice of social venue. Venues with a good sound profile can enhance or improve social interaction. Conversely, venues with a bad sound profile inhibit meaningful social interaction and can damage the hearing health of patrons. There is a desire for people to be able to easily identify and locate social venues that fit their sound profile preference, particularly for those who experience difficulties with their hearing or are sensitive to background noise. At present there is a lack of an objective, holistic way for venue operators to establish the right sound profile for their venue. There are also limited ways to effectively or easily describe the sound profile of a venue to a layperson.
US2017372242A1 describes a system in which a plurality of stationary noise sensors may each include a microphone to sense noise, a power source, and a communication device to transmit data about noise sensed by the microphone. A plurality of mobile noise sensors may each include a microphone to sense noise, a power source, and a communication device to transmit data about noise sensed by the microphone. A noise information hub may receive data from the stationary noise sensors and mobile noise sensors and provide indications associated with the received data via a cloud-based application. An analytics platform may receive the indications and analyse them to determine noise level exposure information for each of a plurality of locations within a workplace.
US10068451B1 describes a method in which a first value is received and is associated with a noise level of an environment that a user is in. It is determined whether the first value exceeds a first threshold. A second computing device is notified when the first value exceeds the first threshold. The notifying indicates that the user must leave the environment.
US2018005142A1 describes a method and device system for determining, publishing, and taking reservation requests against a venue offering a restricted or resource-limited service, for example a restaurant, bar, or cafe whose occupancy is limited. The method can function unattended and provide the measure of availability from ambient sound levels, using digital acoustic sensors (microphones) and a machine learning process to predict and publish the availability of the resource or occupancy level of the space in which it is contained.
US10390157B2 relates generally to devices, systems, and methods for assessing a hearing protection device. A method for assessing a hearing protection device may comprise monitoring sounds with a hearing protection device; transmitting sound measurements from the hearing protection device to a portable electronic device; transmitting the sound measurements from the portable electronic device to a server; comparing, with the portable electronic device or the server, sound measurements to a look-up table; suggesting, with the portable electronic device, the most suitable hearing protection device to be worn based on a comparison between the sound measurements and the look-up table.
US2019139565A1 describes a personalized computing experience for a user based on a user-visit-characterized venue profile. In particular, user visits to a venue are determined. For those visits, user characteristics and/or visit characteristics are determined. User similarities and visit features similarities may be determined and associated with the venue to form the user-visit-characterized venue profile. The user-visit-characterized venue profile may be provided to an application or service such as a personal assistant service associated with the user, or may be provided as an API to facilitate consumption of the user-visit-characterized venue profile by an application or service.
US10587970B2 describes an acoustic camera which comprises an array of arranged microphones. The microphone arrangement can be organized in planar or non-planar form. The device can be handheld, and it comprises a touch screen for interacting with the user. The acoustic camera measures acoustic signal intensities in the pointed direction and simultaneously takes an optical image of the measured area and shows the acoustic signal strengths with the taken image on the screen. The device analyses the acoustic signals and makes a classification for the sound based on the analysis. If necessary, an alarm is given by the acoustic camera. Besides the handheld manual use, the device can be fixed on an immobile structure or fixed to a movable or rotatable device or a vehicle, such as to a drone.
US7983426B2 describes a method for monitoring and reporting sound pressure level exposure for a user of a first communication device, implemented in one embodiment when the device measures a sound pressure level (SPL) of the surrounding environment. The device stores at least the SPL measurement in a memory, producing an SPL exposure record, and displays a visual representation of the SPL exposure record on a display screen. In another embodiment, the SPL is measured by a second communication device and combined with a known SPL for an output audio transducer of the second device, producing a user sound exposure level. The user sound exposure level is transmitted to the first communication device. The user is notified when the user sound exposure level exceeds a predetermined threshold. A server may also be used to track SPLs over time and recommend corrective action when exposure limits are exceeded.
US2020202626A1 describes methods and apparatuses for identifying and indicating open space noise are described. In one example, a method includes receiving an audio sensor data from a plurality of microphones disposed at known locations throughout an open space. The method includes generating a three-dimensional sound map data from the audio sensor data. The method further includes generating an augmented reality visualization of the three-dimensional sound map data, which includes capturing with a video camera at a mobile device a video image of the open space, displaying the video image on a display screen of the mobile device, and overlaying a visualization of the three- dimensional sound map data on the video image on the display screen.
US9510118B2 describes how environmental sound is captured and mapped. Application of a large number of mobile communication terminals embodied as a data supplier leads to extensive, automatic and continual mapping of environmental sound. For sites that are visited relatively frequently and by a plurality of subscribers, a relatively large quantity of remotely transmitted data records for the environmental sound is obtained informally, which permits a more precise depiction of the environmental noise.

Summary of the invention
Aspects of the invention are as set out in the independent claims and optional features are set out in the dependent claims. Aspects of the invention may be provided in conjunction with each other and features of one aspect may be applied to other aspects.
In a first aspect there is provided a hearing wellness monitoring system. The hearing wellness monitoring system comprises at least one audio capture device configured to provide sound data about an environment as a function of time. The at least one audio capture device comprises at least one microphone and a communication interface, configured to transmit the sound data from the audio capture device. The hearing wellness monitoring system also comprises a remote server, configured to receive the sound data, and wherein the remote server is configured to analyse the sound data. Analysing the sound data comprises dynamically calculating a plurality of sound data parameters as a function of time for the environment from the sound data, and dynamically calculating at least one hearing wellness parameter as a function of time from the plurality of sound data parameters for assessing the hearing wellness characteristics of an environment. It will be understood that the at least one microphone may comprise at least four microphones. Advantageously this may allow directionality of audio/sound information to be obtained.
The at least one hearing wellness parameter may comprise a plurality of hearing wellness parameters, and wherein analysing the sound data further comprises dynamically assigning the environment a hearing wellness category for characterising the environment according to its hearing wellness atmosphere. The hearing wellness category is assigned based on at least one hearing wellness parameter. Hearing wellness categories may be determined as part of a series of interviews with people who described different social or public environments. The hearing wellness categories may act to translate dB levels in intuitive words to describe when a venue is too loud.
Assigning the environment the hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter. Each hearing wellness category may be defined by a plurality of ranges for a combination of hearing wellness parameters. At least a portion of the hearing wellness parameter ranges defining each of the plurality of hearing wellness categories may overlap. Since hearing wellness parameter ranges for different hearing wellness categories may overlap, it will be understood that the plurality of hearing wellness categories are not required to have mutually exclusive hearing wellness parameter ranges.
The hearing wellness monitoring system may further be configured to determine a hearing wellness rating. The hearing wellness rating may be determined by ascribing a numerical value to each of the plurality of hearing wellness parameters, applying a weighting to each of the plurality of hearing wellness parameters, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters. The hearing wellness rating may correspond to respective ranges of the numerical value of the sum of weighted numerical values.
The hearing wellness monitoring system may further be configured to determine the hearing wellness rating by ascribing each of the plurality of sound data parameters and/or hearing wellness parameters to a sound data principle, such that each sound data principle comprises a respective subset of the sound data parameters, applying a first weighting to each of the plurality of hearing wellness parameters, applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters. The first weighting may comprise a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting may be a percentage of a total percentage ascribable to all sound data principles.
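The two-level weighting described above can be sketched as follows. The principle names, parameter names, scores and weights in this sketch are hypothetical placeholders for illustration, not values from this disclosure.

```python
# Each sound data principle groups a subset of hearing wellness parameters.
# First weighting: the share of a principle attributable to each parameter.
# Second weighting: the share of the overall rating attributable to each principle.
# All names and percentages below are hypothetical examples.
PRINCIPLES = {
    "adverse_noise": {
        "weight": 0.4,
        "parameters": {"hazardous_exposure": 0.7, "unpleasant_frequencies": 0.3},
    },
    "space_acoustic": {
        "weight": 0.6,
        "parameters": {"reverberation_time": 0.5, "snr": 0.5},
    },
}

def hearing_wellness_rating(scores):
    """Sum each parameter score weighted first within its principle,
    then by the principle's share of the overall rating."""
    total = 0.0
    for principle in PRINCIPLES.values():
        for name, first_weight in principle["parameters"].items():
            total += scores[name] * first_weight * principle["weight"]
    return total

def rating_band(value):
    """Map the weighted sum onto coarse rating ranges (illustrative thresholds)."""
    if value >= 80:
        return "excellent"
    if value >= 50:
        return "good"
    return "poor"
```

Because the first weightings sum to 100% within each principle and the second weightings sum to 100% across principles, the rating stays on the same scale as the parameter scores.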
The sound data may be obtained as a function of time according to pre-determined time windows. A preferred time window is 12 seconds. The sound capture may happen continuously during working hours. In some examples there may be an automated ‘start-up system’ to activate the device when the sound level reaches or exceeds 40 dB. The rolling window is every 12 seconds: every 12 seconds the at least one audio capture device captures noise characteristics as sound data in the format of 257 frequency ranges and one set of sound pressure levels (SPLs) (aweightSPL). The SPL may be a sound pressure level in dB referenced to 2 x 10^-5 Pa. As the audio capture device may work 24 hours a day, there may be around N = 7200 samples during a day.
Analysing the sound data may comprise calculating each sound data parameter as a function of time from (i) sound data from a single time window, or (ii) sound data from a plurality of time windows.
The at least one audio capture device may comprise an array of audio capture devices. For example, the audio capture devices may form a hub-and-satellite system, whereby one audio capture device has additional functionality and acts as a hub to receive sound data from satellite audio capture devices connected (e.g. wirelessly) to it, and then processes and transmits this to the remote device/server. The system may further comprise at least one of (i) a central local processor configured for local processing of the sound data from the array of audio capture devices, or (ii) wherein the at least one audio capture device comprises a processor configured for local processing of the sound data from the at least one microphone of each respective audio capture device. Local processing may comprise removing data outside of a selected time window, such as the venue’s working hours. Additionally, or alternatively, it may include, for each sample, normalising the 257 frequency magnitudes, computing each frequency’s share of the dBA value, removing frequencies less than 200 Hz and more than 20 kHz, and/or summing up the dBA shares for the 200 Hz - 20 kHz range to get new dBA values.
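The per-sample local processing steps listed above can be sketched as follows. The proportional "dBA share" model and the helper name are assumptions made for illustration.

```python
def bandlimited_dba(freqs_hz, magnitudes, sample_dba):
    """Recompute one sample's dBA from only the 200 Hz - 20 kHz band.

    Mirrors the local processing steps described above:
    1. normalise the frequency magnitudes so they sum to 1,
    2. treat each frequency's normalised magnitude as its share of the dBA,
    3. drop shares outside 200 Hz - 20 kHz,
    4. sum the remaining shares to get the new dBA value.
    The proportional-share model is an illustrative assumption.
    """
    total = sum(magnitudes)
    shares = [m / total * sample_dba for m in magnitudes]
    return sum(s for f, s in zip(freqs_hz, shares) if 200 <= f <= 20000)
```

In the full system this would run over the 257 frequency ranges of each 12-second sample before the result is buffered and transmitted.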
Each audio capture device is preferably compliant with the sound level measurement system as determined by the WHO Global Standard for Safe Listening (see Feature 2, WHO Global Standard for Safe Listening 2022 [Accessed 05 October 2022]). For example, each audio capture device preferably comprises a class 1 or class 2 sound level meter, as defined in international standard IEC 61672-1:2013.
The audio capture device, or at least one audio capture device from an array of audio capture device, may be configured to be located at (or as close to as is practicable) the reference measurement position and/or long-term measurement position of an environment as determined by the World Health Organisation (WHO) Global Standard for Safe Listening (see Annex 5, WHO Global Standard for Safe Listening 2022). Preferably, the audio capture device(s) should be configured to be positioned at least one of: at a height comparable to audience head height; out of reach of audience members; at least 1 metre from any large reflective surface (such as a wall, ceiling, or large piece of furniture or equipment); and/or with a clear line of sight to any main loudspeakers where appropriate.
The processor may be configured to receive sound data from the array of audio capture devices, calculate averaged sound data as a function of time from the sound data from the plurality of audio capture devices, and transmit the averaged sound data as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data. The remote server may be implemented in the cloud and may be used to capture and store the data: the raw data is processed on the device, then structured in a database management solution (such as MongoDB or another database storage and management system), and then processed further, for example through a Python script.
The sound data may include at least one of (i) audio frequency, and (ii) sound pressure level (SPL). The at least one sound data parameter may comprise at least one of : Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level, A-weighted equivalent continuous sound level measured over a specified period of time (LAeq,T), C-weighted peak instantaneous sound pressure level (LCpeak), A-weighted fast maximum sound level (LA,Fmax), A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)- weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum.
SPL (expressed in dB) measurement is done using an audio stream of 16-bit data on each of the 4 channels (each channel represents a microphone in the array). As calibration is out of scope, note here that the 16-bit value can be changed to a 32-bit one in the final product for a better representation of low SPL levels. The raw data is stored for preprocessing in chunks of 1024 samples (frames) with a sample rate of 44100 Hz, for a duration of 15 seconds, 4 times per minute. The timing parameters will be configurable and will allow a user to set the preferred values as tests are carried out at a later stage. The nominal SPL value is calculated on each individual sample using the following formula:
SPL = 20 · log10(p_rms / p_ref), where p_rms is the root-mean-square sound pressure of the sample and p_ref = 2 x 10^-5 Pa is the reference sound pressure.
The values are concatenated to a new data array and stored under the nominal.value SPL name in the data container.
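A minimal sketch of the nominal SPL computation on one frame of 16-bit samples follows. Since calibration is out of scope in the source, the mapping of 16-bit counts to pascals (the full_scale_pa argument) is an assumed placeholder.

```python
import math

P_REF = 2e-5  # reference sound pressure in Pa, as stated above

def nominal_spl(frame_int16, full_scale_pa=1.0):
    """Nominal SPL (dB re 2e-5 Pa) of one 1024-sample, 16-bit frame.

    full_scale_pa maps full-scale 16-bit counts to pascals; it stands in
    for the calibration step, which the source leaves out of scope.
    """
    rms_counts = math.sqrt(sum(s * s for s in frame_int16) / len(frame_int16))
    p_rms = rms_counts / 32768.0 * full_scale_pa
    return 20.0 * math.log10(p_rms / P_REF)
```

Calling this on each frame and concatenating the results would populate the nominal.value array in the data container.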
A-weighted SPL (expressed in dBA) for each microphone may be obtained using a digital A-weighting filter (according to IEC/CD 1672) applied to each individual nominal data frame. The values may be concatenated to a new data array and stored under the aweighted.value name in the data container. C-weighted SPL (expressed in dBC) will be calculated using a digital C-weighting filter in a similar manner to the A-weighted SPL and will be stored in cweighted.value.
A-weighted population minimum (expressed in dBA) is a statistical measure applied on the A-weighted SPL measurements; it refers to the minimum value in the 15-second interval and is stored as a value in the data container under the name aweighted.population_minimum. C-weighted population minimum (expressed in dBC) will be calculated in a similar manner to the A-weighted population minimum and will be stored in the cweighted.population_minimum data container.
A-weighted population maximum (expressed in dBA) is a statistical measure applied on the A-weighted SPL measurements; it refers to the maximum value in the 15-second interval and is stored as a value in the data container under the name aweighted.population_maximum. C-weighted population maximum (expressed in dBC) will be calculated in a similar manner to the A-weighted population maximum and will be stored in the cweighted.population_maximum data container.
A-weighted population standard deviation (expressed in dBA) is a measure of variation or dispersion of the A-weighted set of values. This is stored in the data container as aweighted.population_standard_deviation. C-weighted population standard deviation (expressed in dBC) will be calculated in a similar manner to the A-weighted population standard deviation and will be stored in the cweighted.population_standard_deviation data container.
A-weighted population variance is the average of the squared differences from the mean value. This is stored in the data container as aweighted.population_variance. C-weighted population variance will be calculated in a similar manner to the A-weighted population variance and will be stored in the cweighted.population_variance data container.
A-weighted population median (expressed in dBA) is a statistical measure which represents the value in the middle of the population. This value is stored in the data container as aweighted.population_median. C-weighted population median (expressed in dBC) will be calculated in a similar manner to the A-weighted population median and will be stored in the cweighted.population_median data container.
A-weighted population average (expressed in dBA) is a statistical measure which represents the arithmetic mean of a population, calculated by dividing the sum of the population values by the population size. This value is stored in the data container as aweighted.population_average. C-weighted population average (expressed in dBC) will be calculated in a similar manner to the A-weighted population average and will be stored in the cweighted.population_average data container.
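The population statistics described above, for one 15-second interval of A-weighted SPL values, can be sketched with the standard library. The container key spellings follow the names used in the text, normalised to underscores.

```python
import statistics

def population_stats(aweighted_spls):
    """Population (not sample) statistics over one 15-second interval of
    A-weighted SPL values, keyed as in the data container described above."""
    return {
        "aweighted.population_minimum": min(aweighted_spls),
        "aweighted.population_maximum": max(aweighted_spls),
        "aweighted.population_standard_deviation": statistics.pstdev(aweighted_spls),
        "aweighted.population_variance": statistics.pvariance(aweighted_spls),
        "aweighted.population_median": statistics.median(aweighted_spls),
        "aweighted.population_average": statistics.fmean(aweighted_spls),
    }
```

The C-weighted statistics would be produced identically from the C-weighted values, under the corresponding cweighted.* keys.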
Nominal interval frequency spectrum represents the range of frequencies contained in the analysed dataset. The frequency range of this spectrum will be 20-20000 Hz, divided into 1024 distinct portions of the spectrum range. The discrete fast Fourier transform will be used with no additional windowing for the computation; the addition of a windowing function can be arranged at a later stage but is outside the scope of this work. The spectrum range array will be stored in the data container as nominal.freq_spectrum.
A-weighted interval frequency spectrum is similar to the nominal interval frequency spectrum, with the remark that it is applied on the already A-weighted values. The spectrum range array will be stored in the data container as aweighted.freq_spectrum. C-weighted interval frequency spectrum will be calculated in a similar manner to the A-weighted interval frequency spectrum and will be stored in the cweighted.freq_spectrum data container.
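As a minimal sketch of the interval frequency spectrum computation, the following direct DFT returns the magnitude of each spectral portion for one frame, with no additional windowing as described. A real implementation would use an FFT routine over 1024-sample frames and then select the bins covering 20-20000 Hz; that band selection is omitted here.

```python
import cmath

def freq_spectrum(frame):
    """Magnitude spectrum of one frame via a direct DFT (no windowing).

    Direct O(N^2) evaluation for clarity only; production code would use
    an FFT over the 1024-sample frames described above.
    """
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
```

The resulting array corresponds to the nominal.freq_spectrum entry; applying it to A- or C-weighted frames would yield the weighted spectra.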
The remote server may be further configured to obtain hearing preference data about a user, and dynamically assess the suitability of the environment for the user, based on the hearing preference data and the current hearing wellness category of the environment.
The at least one hearing wellness parameter may comprise at least one of : exposure to hazardous noise, unpleasant frequencies, maximum frequency magnitude, average dBA of the venue, acoustic quality, reverberation time, vibe of the venue, and signal to noise ratio (SNR).
Pre-defined parameters are set to identify the difference between the most important frequencies in non-tonal sound and unpleasant or pleasant frequencies. Each hearing wellness parameter may correspond to a hearing wellness component. The hearing wellness component may comprise at least one of: adverse noise, quality of speech, sensation of sounds, space ambient noise, space acoustic, space reflection to sounds, space atmosphere.
The hearing wellness parameter may be based on at least one sound data parameter and at least one static venue parameter. The static venue parameter may comprise at least one of: reverberation time, venue building materials and furniture, acoustic quality, venue capacity, and unstructured data (such as photos of the venue) from which other data may be inferred or determined.
The at least one audio capture device may further comprise a memory configured to store the sound data, for example to buffer the sound data before it is sent to the remote device/server. The memory may be configured to store the sound data for a predetermined time period.
The audio capture device communication interface may comprise a wireless telecommunication interface, for example 3G, 4G or 5G. The remote server may further be configured to generate an alert in the event that at least one of (i) a sound data parameter or (ii) a hearing wellness parameter exceeds a predetermined threshold; and wherein the alert is for initiating improvement of the acoustic properties of the environment. The alert may be, e.g., to alert an acoustics expert to improve the site, to alert a local authority such as a noise complaint department, and/or may be used by the venue to automatically adjust the sound levels in the venue, i.e. smart music control to match a venue operator’s target atmosphere.
In another aspect there is provided a method for determining the hearing wellness of an environment. The method comprises obtaining sound data about an environment as a function of time, dynamically calculating a plurality of sound data parameters for the environment as a function of time based on analysing the sound data, determining at least one hearing wellness parameter based on the plurality of sound data parameters; assigning the environment a hearing wellness category, based on the at least one hearing wellness parameter, obtaining hearing preference data of a user, and assessing suitability of the environment for the user based on correlating the hearing preference data of the user and the hearing wellness category of the environment.
The sound data may be obtained from an array of audio capture devices positioned throughout the environment. At least one audio capture device may be located at (or as close to as is practicable) the reference measurement position and/or long-term measurement position as determined by the World Health Organisation (WHO) Global Standard for Safe Listening (see Annex 5, WHO Global Standard for Safe Listening 2022). Preferably, at least a portion of the array of audio capture devices should be positioned at least one of : at a height comparable to audience head height; out of reach of audience members; at least 1 metre from any large reflective surface (such as a wall, ceiling, or large piece of furniture or equipment); and/or with a clear line of sight to any main loudspeakers where appropriate.
Assessing suitability of the environment for the user may comprise either (i) matching the environment to the user, or (ii) not matching the environment to the user, based on matching criteria determined by the hearing preference data in relation to the hearing wellness category of the environment. The hearing preference data may comprise at least one of (i) subjective user preference data, and (ii) objective hearing performance data.
The method may further comprise obtaining venue data about the environment, wherein venue data comprises at least one static venue parameter, and wherein determining the at least one hearing wellness parameter is further based on the venue data. The static venue parameter may comprise at least one of: reverberation time, venue building materials and furniture, acoustic quality, venue capacity, and unstructured data (such as photos of the venue) from which other data may be inferred or determined.
Assigning the environment the hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by at least one range of at least one hearing wellness parameter.
The at least one hearing wellness parameter may comprise a plurality of hearing wellness parameters, and each hearing wellness category is defined by a combination of hearing wellness parameters, wherein each hearing wellness parameter for each category is defined by a range.
At least a portion of the hearing parameter ranges defining each of the plurality of hearing wellness categories may overlap.
Obtaining sound data may comprise obtaining sound data about an environment as a function of time, and wherein calculating the plurality of sound data parameters and calculating the at least one hearing wellness parameter comprises dynamically calculating the parameters as a function of time.
The method may further comprise determining a hearing wellness rating by ascribing a numerical value to each of the plurality of hearing wellness parameters, applying a weighting to each of the plurality of hearing wellness parameters, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters. The hearing wellness rating may correspond to respective ranges of the numerical value of the sum of weighted numerical values.
The method may further comprise ascribing each of the plurality of sound data parameters and/or hearing wellness parameters to a sound data principle, such that each sound data principle comprises a respective subset of the sound data parameters (and/or hearing wellness parameters), applying a first weighting to each of the plurality of hearing wellness parameters, applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters. The first weighting may comprise a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting may be a percentage of a total percentage ascribable to all sound data principles.
In another aspect there is provided a hearing wellness monitoring apparatus, the apparatus comprising a server configured to perform the method described above.
In another aspect there is provided a method for identifying a suitable environment for a user based on hearing wellness. The method comprises obtaining sound data about a plurality of environments, analysing the sound data to calculate at least one sound data parameter for each environment, correlating the at least one sound data parameter with at least one hearing wellness parameter for each environment, obtaining hearing preference data for a user, and matching at least one environment from the plurality of environments to the user based on correlating the at least one hearing wellness parameter of each environment and the hearing preference data of the user. This may allow users to be categorised based on an interaction with their Hearing Personality profile.
In some examples hearing preference data may be obtained by simulating an environment (e.g. a noisy restaurant) and using logic to determine a user’s tolerance levels for understanding speech in noise, but also preference levels.
Identifying at least one suitable environment may further comprise assigning each environment a hearing wellness category, based on the at least one hearing wellness parameter for each environment, and matching at least one environment from the plurality of environments to the user based on correlating the hearing preference data of the user and the hearing wellness category for each environment and/or the hearing wellness parameters.
Matching at least one environment from the plurality of environments to the user may further comprise identifying a subset of the plurality of environments, wherein the subset matches criteria determined by the hearing preference data. The hearing preference data may comprise different criteria for identifying suitable environments for the user based on the intended activity of the user.
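A minimal sketch of identifying the matching subset of environments follows. The preference structure, activity keys, and category names are hypothetical examples, not taken from the specification.

```python
def suitable_environments(preference, environments):
    """Return the subset of environments whose hearing wellness category is
    acceptable to the user for their intended activity.

    preference:   {"activity": ..., "accepted_categories": {activity: set-of-categories}}
    environments: list of {"name": ..., "category": ...} dicts
    (both shapes are illustrative assumptions)
    """
    accepted = preference["accepted_categories"][preference["activity"]]
    return [env["name"] for env in environments if env["category"] in accepted]
```

The same filter could be re-run as categories are dynamically reassigned, so the matched subset tracks the current hearing wellness atmosphere of each venue.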
The hearing preference data may comprise at least one of (i) subjective user preference data, and (ii) objective hearing performance data.
In another aspect there is provided a hearing wellness monitoring apparatus, the apparatus comprising a server configured to perform the method described above.
In another aspect there is provided a computer program product comprising program instructions configured to program a programmable device to perform the method of any of the aspects described above.
In another aspect there is provided a method for displaying tailored information relating to hearing wellness of a user on a graphical user interface. The method comprises displaying a series of predetermined questions, receiving hearing preference data of a user via user input in response to the series of predetermined questions, and dynamically displaying information relating to a subset of environments from a plurality of environments, wherein the subset of environments is determined to be suitable for the user based on correlating the hearing preference data of the user with real-time sound data parameters of each environment.
Embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings, in which: Fig. 1 shows a functional schematic view of an example process flow chart for an audio capture device for use with an example hearing wellness monitoring system;
Fig. 2 shows a functional schematic view of an example audio capture device for use with an example hearing wellness monitoring system;
Fig. 3 shows a functional schematic view of the different layers of the OSI model of an example audio capture device such as the audio capture device shown in Fig. 1 or 2;
Fig. 4 shows a functional schematic view of the functioning of an example hearing wellness monitoring system;
Fig. 5 shows an example process flow chart for an example hearing wellness monitoring system, such as the system shown in Fig. 4;
Figs. 6A to 6D show example graphical user interfaces for the hearing wellness monitoring system;
Fig. 7 shows an example process flowchart of how the hearing wellness monitoring system may be used by a user to identify venues that have a hearing wellness category that suits them;
Fig. 8 shows an example graphical user interface for a method of determining a user’s hearing personality;
Fig. 9 shows another view of the graphical user interface of Fig. 8 for determining a user’s hearing personality;
Fig. 10 shows another view of the graphical user interface of Fig. 8 for determining a user’s hearing personality;
Fig. 11 shows another view of the graphical user interface of Fig. 8 indicating a user’s hearing personality;
Fig. 12 shows an example process flow chart for a method of interacting with a venue’s sound system to adjust the sound profile of a venue;
Fig. 13 shows an example process flow chart for a method of interacting with a venue’s sound system; and
Fig. 14 shows an example process flow chart for a method for determining the hearing wellness of an environment.
Specific description
Fig. 1 shows a functional schematic view of an example process flow chart for an audio capture device 100 for use with an example hearing wellness monitoring system. The example audio capture device 100 shown in Fig. 1 comprises a microphone array 103 configured to obtain sound data such as an audio frequency spectrum 105 and a venue sound pressure level 107. The audio capture device may comprise a processor (not shown) configured to calculate, from this sound data, sound data parameters 109, for example audio-related statistics (such as average, min, max, SD, variance). The audio capture device 100 further comprises a communication interface 111 configured to access and communicate with a database on a remote device or server 150 (such as in the cloud).
Fig. 2 shows a functional schematic view of an example audio capture device 200 such as the audio capture device shown in Fig. 1 for use with an example hearing wellness monitoring system. The audio capture device 200 shown in Fig. 2 comprises a power management module 201 configured to provide power to the audio capture device 200. It also comprises a processor 203 coupled to a memory 205 and a data storage 207 which may for example be a memory card such as an SD card. The audio capture device also comprises an audio driver 209 coupled to an external audio codec 211 and four microphones 213 (which advantageously may allow directionality of audio/sound information to further be obtained), although it will be understood that more or fewer microphones may be used. The audio capture device 200 also comprises a communication interface 215, which in this example may comprise a short-range wireless interface (e.g. a Bluetooth®, including Auracast™, or Wi-Fi® interface) and a long-range wireless interface which in this example is a telecommunications interface (e.g. a 3G, 4G or 5G interface). In the example shown the audio capture device 200 is also operable to provide a graphical user interface 217, although it will be understood that this is optional.
It will also be understood that in some examples an example hearing wellness monitoring system may comprise a plurality or an array of audio capture devices 200, and some of these audio capture devices 200 may have reduced functionality. For example, the example audio capture device 200 shown in Fig. 2 may act as a “hub” or “master” audio capture device and may be operable to receive sound data from a plurality of “spoke” or “slave” audio capture devices that may have reduced functionality. For example, the “spoke” or “slave” audio capture devices may have no data storage 207 and may only have a short-range wireless interface to communicate with the “hub” or “master” audio capture device. The “spoke” or “slave” audio capture devices may also have less memory 205 and/or lower processing power for the processor 203.
The audio capture device 200, or at least one audio capture device 200 of an array may be located at (or as close to as is practicable) the reference measurement position and/or long-term measurement position as determined by the World Health Organisation (WHO) Global Standard for Safe Listening (see Annex 5, WHO Global Standard for Safe Listening 2022). Preferably, the at least one audio capture device 200 should be at a height comparable to audience head height; out of reach of audience members; at least 1 metre from any large reflective surface (such as a wall, ceiling, or large piece of furniture or equipment); and with a clear line of sight to any main loudspeakers where appropriate.
The system is preferably compliant with the sound level measurement system as determined by the WHO Global Standard for Safe Listening (see Feature 2, WHO Global Standard for Safe Listening 2022). For example, wherein each audio capture device preferably comprises a class 1 or class 2 sound level meter, as defined in international standard IEC 61672-1 :2013.
The power management module 201 is operable to power the audio capture device 200 and all its components. The processor 203 is operable to control the functioning of the audio capture device and to carry out the tasks described above with respect to Fig. 1. For example, the processor 203 is operable to receive sound data from the external audio codec 211 and optionally process the data to obtain sound data parameters, optionally store these on the data storage 207 (for example in case buffering is needed due to poor network connection) and send data parameters via the communication interface 215 to a remote device 150 as shown in Fig. 1.
In more detail, the audio capture device 200 is configured to obtain sound data about an environment as a function of time. The sound data may include at least one of (i) audio frequency, and (ii) sound pressure level. At least one sound data parameter may be determined from the sound data. The at least one sound data parameter may comprise at least one of: Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level, A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)-weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum. Preferably, the at least one sound data parameter comprises the A-weighted equivalent continuous sound level (in dB), measured over a specified period of time (T), LAeq,T. Preferably, the time (T) is 15 minutes, such that the at least one sound data parameter preferably comprises LAeq,15 min.
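By way of illustration only, the preferred parameter LAeq,T may be obtained by energy-averaging short-interval A-weighted levels. The sketch below (in Python, consistent with the scripts mentioned later in this disclosure) is a minimal, non-authoritative example assuming the per-window levels in dB are already available:

```python
import math

def laeq(levels_dba):
    """Energy-average short-interval A-weighted levels (in dB) into an
    equivalent continuous sound level, LAeq,T."""
    if not levels_dba:
        raise ValueError("need at least one sample")
    mean_energy = sum(10 ** (level / 10) for level in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_energy)

# 75 windows of 12 seconds each cover a 15-minute period, so LAeq,15 min
# can be derived from the per-window levels captured by the device.
print(round(laeq([80.0] * 75), 1))  # a constant 80 dBA field averages to 80.0
```

Because the averaging is done on energies rather than decibel values directly, a few loud windows dominate the result, which is the intended behaviour for exposure metrics.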
The communication interface 215 is configured to transmit the sound data from the audio capture device 200 to the remote device 150. The remote device 150 is configured to receive the sound data and configured to analyse the sound data.
The sound data may be obtained as a function of time according to pre-determined time windows. Analysing the sound data may comprise calculating each sound data parameter as a function of time from (i) sound data from a single time window, or (ii) sound data from a plurality of time windows.
A preferred time window is 12 seconds. The sound capture may happen continuously during working hours (e.g. 9am to 11pm). However, in some examples the audio capture device may be configured to perform an automated ‘start-up system’ to activate the device when the sound level reaches or exceeds a selected threshold dB level, such as a 40dB level.
Where the time window is 12 seconds, this means that every 12 seconds the audio capture device captures noise characteristics according to the following format: 257 frequency ranges and 1 set of SPLs (aweightSPL). If the device works 24 hours a day, there may consequently be around N=7200 samples during a day.
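A minimal sketch of the capture format and the daily sample count implied by a 12-second window follows; the class and field names are illustrative assumptions, not the actual on-device representation:

```python
from dataclasses import dataclass
from typing import List

WINDOW_SECONDS = 12
N_FREQUENCY_BINS = 257  # frequency-range magnitudes captured per window

@dataclass
class WindowSample:
    """One capture window: 257 frequency magnitudes plus the window's
    A-weighted sound pressure level (aweightSPL)."""
    magnitudes: List[float]  # len == N_FREQUENCY_BINS
    aweight_spl: float       # dBA for this window

# Continuous 24-hour capture yields the daily sample count given in the text:
samples_per_day = 24 * 60 * 60 // WINDOW_SECONDS
print(samples_per_day)  # 7200
```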
Analysing the sound data further comprises dynamically calculating a plurality of sound data parameters as a function of time for the environment from the sound data. Analysing the sound data further comprises dynamically calculating at least one hearing wellness parameter as a function of time from the plurality of sound data parameters for assessing the hearing wellness characteristics of an environment. The plurality of sound data parameters may comprise any or all of: Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level (dBA), A-weighted equivalent continuous sound pressure level (in dB), measured over a specified period of time (T) (LAeq,T), A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)-weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum.
The at least one hearing wellness parameter comprises at least one of: quality of speech, exposure to hazardous noise, unpleasant frequencies, maximum frequency magnitude, average dBA of the venue, acoustic quality, reverberation time and signal to noise ratio.
There may be pre-defined parameters set to identify the difference between the most important frequencies in non-tonal and unpleasant or pleasant frequencies.
As indicated in the table below, each hearing wellness parameter may correspond to a hearing wellness component. The hearing wellness component may comprise any or all of: adverse noise, quality of speech, sensation of sounds, space ambient noise, space acoustic, space reflection to sounds, space atmosphere.
[Table: mapping of each hearing wellness parameter to its hearing wellness component; reproduced in the published application as images imgf000021_0001 and imgf000022_0001]
As noted above, in some examples a degree of processing of the sound data is performed before it is sent to the remote device 150. This may advantageously save on bandwidth as less data may need to be transmitted. For example, local processing may comprise filtering the sound data and/or averaging the sound data.
As an example, local processing by the audio capture device may comprise removing data outside of a selected time window, for example the venue’s working hours. For example, data taken outside of working hours of between 9am-11pm may be removed. Local processing may additionally or alternatively include, for each sample, normalizing the 257 frequency magnitudes, for example by dividing each magnitude f_i by the sum of all 257 magnitudes: f_i / (f_1 + f_2 + ... + f_257).
Local processing may additionally or alternatively include computing the dBA share for each frequency, removing frequencies less than 200 Hz and more than 20 kHz, and/or summing up the dBA shares for the 200 Hz-20 kHz range to get new values of the dBAs.
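The local processing steps above might be sketched as follows. The proportional dBA-share rule (splitting the window's dBA in proportion to the normalized magnitudes) is an assumption for illustration, as the exact share formula is not spelled out here:

```python
def local_process(freqs_hz, magnitudes, window_dba):
    """Sketch of the on-device steps: normalize the frequency magnitudes,
    split the window's dBA into per-frequency shares in proportion to the
    normalized magnitudes, keep only 200 Hz-20 kHz, and sum the remaining
    shares to get the new dBA value."""
    total = sum(magnitudes)
    normalized = [m / total for m in magnitudes]           # magnitudes sum to 1
    shares = [n * window_dba for n in normalized]          # dBA share per bin
    kept = [s for f, s in zip(freqs_hz, shares) if 200 <= f <= 20_000]
    return sum(kept)

# Out-of-band energy (below 200 Hz or above 20 kHz) is discarded:
print(local_process([100, 1000, 5000, 25000], [1, 1, 1, 1], 80.0))  # 40.0
```

When all captured energy lies within the 200 Hz-20 kHz band, the returned value equals the original window dBA.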
In examples where there is a plurality or an array of audio capture devices, the “hub” or “master” audio capture device may be configured to receive sound data from the array of audio capture devices, calculate averaged sound data as a function of time from the sound data from the plurality of audio capture devices, and transmit the averaged sound data as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data. It will be understood that in examples where there is a plurality or an array of audio capture devices, the “hub” or “master” audio capture device may be configured to perform the local processing discussed above.
The remote device, which may be implemented “in the cloud” in the form of cloud computing, may therefore be used to capture and store the data, while some processing of the raw sound data captured is performed on the device. As discussed in more detail with respect to Fig. 5 below, in some examples the data may be processed, for example at the remote device, by a script (such as a python script) to follow a particular data structure for recording in a database management solution (such as Mongo DB). The data held in the database management solution may further be processed (for example via a python script) to analyse the data to obtain insights for use in reporting to the user via a user interface. In some examples there may be processing of the sound data both locally on the audio capture device as well as on the remote device/in the cloud.
The remote server or device may be further configured to generate an alert in the event that at least one of (i) a sound data parameter or (ii) a hearing wellness parameter exceeds a pre-determined threshold; and wherein the alert is for initiating improvement of acoustic properties of the environment. The alert may, for example, be sent to an acoustics expert to improve the site, or to a local authority such as a noise complaint department, and/or may be used by the venue to automatically adjust the sound levels in the venue, i.e. smart music control to match a venue operator’s target atmosphere.
In one embodiment, the remote server or device is configured to calculate the A-weighted equivalent continuous sound pressure level (in dB), measured over a 15 minute time interval (LAeq,15 min). The remote server or device is then configured to determine whether the LAeq,15 min sits within a specified range, or exceeds a threshold. For example, preferably the remote server or device is then configured to determine whether the LAeq,15 min exceeds the World Health Organisation (WHO) Global Standard for Safe Listening sound level limit of 100 dB LAeq,15 min. In the event that the LAeq,15 min does exceed 100 dB LAeq,15 min, the sound levels in the venue may be automatically adjusted, for example using smart audio control. Alternatively, or in addition, an alert may be generated to alert an acoustics expert to improve the site or to a local authority such as a noise complaint department. A reduced sound level limit may be implemented for venues or events specifically targeted at children, such as a limit equal to or less than 94 dB LAeq,15 min, preferably 90 dB LAeq,15 min or lower.
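A minimal sketch of this threshold check follows. The 100 dB limit and the reduced 94 dB child-venue limit are taken from the description above; the alert payload structure and function name are illustrative assumptions:

```python
WHO_LIMIT_DBA = 100.0   # WHO Global Standard for Safe Listening, LAeq,15 min
CHILD_LIMIT_DBA = 94.0  # reduced limit for venues/events targeted at children

def check_safe_listening(laeq_15min, child_venue=False):
    """Return an alert dict when the 15-minute LAeq exceeds the applicable
    limit, otherwise None."""
    limit = CHILD_LIMIT_DBA if child_venue else WHO_LIMIT_DBA
    if laeq_15min > limit:
        return {
            "level_dba": laeq_15min,
            "limit_dba": limit,
            "action": "reduce venue sound level / notify acoustics expert",
        }
    return None

print(check_safe_listening(101.3))       # exceeds the 100 dB limit -> alert
print(check_safe_listening(92.0, True))  # within the 94 dB child limit -> None
```

In a deployment, the returned alert could feed either the notification path or a smart audio control hook, as described above.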
Alternatively, or in addition, the remote server or device may also be configured to calculate a C-weighted peak instantaneous sound pressure level (in dB), LCpeak, and/or A-weighted fast maximum sound level, LA,Fmax. The remote server or device is then configured to determine whether the LCpeak and/or LA,Fmax sits within a specified range, or exceeds a threshold. For example, preferably the remote server or device is then configured to determine whether the LCpeak and/or LA,Fmax exceeds 140 dB LCpeak and/or LA,Fmax. In the event that the LCpeak and/or LA,Fmax does exceed 140 dB, the sound levels in the venue may be automatically adjusted, for example by controlling or signalling to a fast-acting electronic limiter coupled to any loudspeakers. Alternatively, or in addition, an alert may be generated to alert an acoustics expert to improve the site or to a local authority such as a noise complaint department. A reduced sound level limit may be implemented for venues or events specifically targeted at children, such as a limit equal to or less than 120 dB LCpeak and/or LA,Fmax.
Fig. 3 shows a functional schematic view of the different layers of the OSI model of an example audio capture device such as the audio capture device shown in Fig. 1 or 2. It can be seen that the application layer 301 comprises a communications messaging logic module, a database link module, an SPL parameters computing and storage module, and an application logic module. The middleware layer 303 comprises a Linux operating module (as the audio capture device 200 may operate using Linux). The hardware abstraction layer comprises an IO module, a console module, a microphone module, a persistent storage module, and a communication interface module which in this example comprises a 3G/4G module and a Wi-Fi® module. Finally, the hardware layer 307 comprises an MCU core, RF, timers, EEPROM, ADC, UARTs, GPIOs and SPI.
Fig. 4 shows a functional schematic view of the functioning of an example hearing wellness monitoring system, for example using the example audio capture device described above with respect to Figs. 1 to 3. The example hearing wellness monitoring system can be broadly divided into two parts: a first part 430 relates to a user’s hearing preferences or “personality”, which can be determined using the system and which helps a user to select a venue that matches their hearing personality. The second part 440 relates to how the hearing wellness monitoring system can be used to calculate at least one hearing wellness parameter and determine a hearing wellness category for the venue and/or to give it a hearing wellness rating. Both the first part and the second part interact with the remote device 150 described above with respect to Fig. 1, which is shown in Fig. 4 as the “Mumbli API Cloud” 410 but which may also be called the “Mumbli Platform”. The Mumbli Platform 410 can process the sound data to determine sound data parameters and/or receive sound data parameters from an audio capture device 420 (depending on how much processing is performed locally by the audio capture device) to calculate at least one hearing wellness parameter which can then be used to assign a hearing wellness category and/or a hearing wellness rating to a venue. A user can interact with the Mumbli Platform 410 to determine their own hearing personality and identify venues that may be suitable for them through the use of a graphical user interface which will be described in more detail below.
As can be seen in Fig. 4, the remote server 410 (shown as the Mumbli API Cloud but also known and described in other figures as the Mumbli Platform) is further configured to obtain hearing preference data about a user, and dynamically assess the suitability of the environment for the user, based on the hearing preference data and the current hearing wellness category of the environment.
It will be understood that the at least one hearing wellness parameter comprises a plurality of hearing wellness parameters, and wherein analysing the sound data further comprises dynamically assigning the environment a hearing wellness category, based on the plurality of hearing wellness parameters, for characterising the environment according to its hearing wellness atmosphere. The hearing wellness categories may be determined as part of a series of interviews with people who described different social or public environments. Advantageously this may help to translate dB levels into intuitive words to describe when a venue is too loud.
Assigning an environment a hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter. The hearing wellness category may be a classification that seeks to describe the venue’s atmosphere in readily translatable and understandable terms, such as “buzzy”, “energetic” or “calm”.
Each hearing wellness category is defined by a plurality of ranges for a combination of hearing wellness parameters, and at least a portion of the hearing wellness parameter ranges defining each of the plurality of hearing wellness categories may overlap.
Fig. 5 shows an example process flow chart for an example hearing wellness monitoring system, such as part 440 of the system shown in Fig. 4.
In the example shown in Fig. 5, it can be seen that sound data obtained by an audio capture device 510 is combined with information about the venue 525, which may be called a static venue parameter, and analysed by the remote device 520, for example for calculating the at least one hearing wellness parameter. The hearing wellness parameter may therefore be based on at least one sound data parameter and at least one static venue parameter. The static venue parameter may comprise at least one of: venue measurements, building materials, furniture materials, furniture measurements, reverberation time, acoustic quality, venue capacity and unstructured data (such as venue photos/pictures). The table below lists eight hearing wellness parameters and shows which sound data parameters and static venue parameters may be used in calculating the hearing wellness parameters. For some hearing wellness parameters it may be seen that they are calculated using one or a plurality of sound data parameters (such as average dBA or exposure durations for each dBA level). For other hearing wellness parameters it may be seen that they are calculated using one or more static venue parameters and no sound data parameters. For other hearing wellness parameters it may be seen that they are calculated based on a combination of at least one sound data parameter and at least one static venue parameter. For other hearing wellness parameters it may be seen that they are calculated based on a combination including at least one other hearing wellness parameter, for example as seen for M6. The hearing wellness parameters shown in the table below are M1 (exposure to hazardous noises), M2 (SNR, signal to noise ratio), M3 (unpleasant frequencies), M4 (max frequency magnitude), M5 (average dBA of the venue, loudness), M6 (acoustic quality), M7 (reverberation time), and M8 (vibe).
[Table: hearing wellness parameters M1-M8 and the sound data parameters and static venue parameters used to calculate them; reproduced in the published application as images imgf000026_0001 and imgf000027_0001]
The data may be obtained on a periodic basis, for example at a selected frequency (such as every 12 seconds), where the sound data may comprise frequency information and decibel information. The data may be stored in a semi-structured JSON format. The data may be averaged, for example over a plurality of time windows, for example over the course of a selected period of time (such as an hour, a day, a week etc.).
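By way of illustration, a window sample could be shaped into a semi-structured JSON document and averaged over a period as follows. The field names and document shape are assumptions for the sketch, not the actual schema recorded in the database management solution:

```python
import json
import statistics
from datetime import datetime, timezone

def to_document(device_id, magnitudes, aweight_spl):
    """Shape one 12-second capture window into a semi-structured JSON
    document suitable for a document store such as Mongo DB."""
    return {
        "device_id": device_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "frequency_magnitudes": magnitudes,  # 257 values per window
        "aweight_spl": aweight_spl,          # dBA for the window
    }

def average_dba(documents):
    """Average the per-window dBA values over a chosen period of time."""
    return statistics.fmean(d["aweight_spl"] for d in documents)

docs = [to_document("hub-01", [0.0] * 257, spl) for spl in (71.0, 73.0, 75.0)]
print(json.dumps(docs[0])[:60], "...")
print(average_dba(docs))  # 73.0
```

Longer averaging periods (an hour, a day, a week) simply pass more documents to the same averaging step.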
In summary, the sound data is received from at least one, or an array, of audio capture devices. Averaged sound data is calculated as a function of time from the sound data from the plurality of audio capture devices 510 (this is described as being performed locally on an audio capture device and may be performed on each audio capture device and/or by a hub or master audio capture device, but in other examples may be performed at the remote device or server). The averaged sound data is then transmitted 515 as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data. In this way, the remote device 520 (the Mumbli Platform or the Mumbli API) captures and stores the sound data, with some processing of the raw sound data captured being performed locally by the audio capture device(s) 510. Additionally, venue data is also transmitted 525 to the remote server 520 in the form of static venue parameters. The venue data may be collected during onboarding of the venue into the hearing wellness monitoring system. Both the sound data and venue data are then stored in a database management solution 530 (such as Mongo DB) which is then processed further 540, for example to determine hearing wellness parameters and/or to assign a hearing wellness category and/or a hearing wellness rating to the venue from which the sound data was captured. The data processing 540 comprises data wrangling 540A, data cleaning 540B, and data analysis 540C. The data analysis 540C is used to determine a hearing wellness rating 540D and/or assign a hearing wellness category to the venue from which the sound data was captured. The data analytics, including the hearing wellness rating and/or hearing wellness category, are presented on a graphical user interface for the hearing wellness monitoring system 540E. Example graphical user interfaces are shown in more detail in Figs. 6A to 6D.
Figs. 6A to 6D show example graphical user interfaces for the hearing wellness monitoring system. The graphical user interfaces in Figs. 6A, 6B, and 6D display an output of the hearing wellness parameters 610 and a hearing wellness rating 620, also known as a “Certified for Sound” or CfS rating. The graphical user interface in Fig. 6C displays the hearing wellness category of the venue 615, wherein the hearing wellness category 615 is assigned to the venue based on sound data parameters and/or hearing wellness parameters, for example the hearing wellness parameter M8 (vibe) described in more detail below. The graphical user interfaces of Figs. 6A, 6B, and 6D also display sound data parameters 630. In these examples, the sound data parameter 630 displayed shows the venue sound level as a function of time. The hearing wellness rating 620 is determined by ascribing a numerical value to each of the plurality of hearing wellness parameters, applying a weighting to each of the plurality of hearing wellness parameters, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters. The hearing wellness rating corresponds to respective ranges of the sum of weighted numerical values - e.g. A, B, C, D etc. In the example shown in Fig. 6A, the hearing wellness rating 620 (labelled as the “CfS rank”) is listed as C.
In more detail, and as will be described below, determining a hearing wellness rating further comprises ascribing each of the plurality of sound data parameters (and/or hearing wellness parameters) to a sound data principle 625, such that each sound data principle comprises a respective subset of the sound data parameters (and/or hearing wellness parameters), applying a first weighting to each of the plurality of hearing wellness parameters, applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter, and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
The first weighting comprises a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting is a percentage of a total percentage ascribable to all sound data principles.
As can be seen from the table below, there are three sound data principles. These are inclusive space, acoustic space, and harmonic space. For each sound data principle there are one or more components, and for each component there are one or more hearing wellness parameters. Each hearing wellness parameter is ascribed a weighting which is a percentage, such that the sum of all of the weightings for the hearing wellness parameters of a sound data principle sums to 100. Each sound data principle is also ascribed a weighting, such that the sum of all of the sound data principle weightings also sums to 100.
[Table: the sound data principles (inclusive space, acoustic space, harmonic space), their components, hearing wellness parameters and weightings; reproduced in the published application as images imgf000029_0001 and imgf000030_0001]
In the table listed above it can be seen that the sound data principle inclusive space comprises the components adverse noise, quality of speech and sensation of sounds. Adverse noise comprises the hearing wellness parameter M1 (exposure to hazardous noises) and has a weighting of 40. Quality of speech comprises the hearing wellness parameter M2 (SNR), which itself is calculated from speech typical dBA and average dBA of the venue, and has a weighting of 40. Sensation of sounds comprises the hearing wellness parameter M3 (unpleasant frequencies) and has a weighting of 20. The sound data principle inclusive space has a weighting of 40.
The sound data principle acoustic space comprises the components space ambient noise, space acoustic and space reflection to sounds. Space ambient noise comprises the hearing wellness parameters M4 (maximum frequency magnitude) and M5 (average dBA of the venue), each of which has a weighting of 15. The component space acoustic comprises the hearing wellness parameter M6 (acoustic quality) and has a weighting of 35. The component space reflection to sounds comprises the hearing wellness parameter M7 (reverberation time) and has a weighting of 35. The sound data principle acoustic space has a weighting of 30.
The sound data principle harmonic space comprises the component space atmosphere, which comprises the hearing wellness parameter M8 (vibe of the venue), and has a weighting of 30.
The scores, once the weightings are applied, are then summed, and the sum is ascribed a hearing wellness rating of A, B, C or D according to the table below.
[Table: mapping of summed weighted scores to hearing wellness ratings A, B, C and D; reproduced in the published application as image imgf000031_0001]
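By way of a non-limiting sketch, the two-level weighting scheme described above might be implemented as follows. The parameter and principle weightings are those given in the description; the A-D band thresholds, however, are placeholder assumptions standing in for the published table:

```python
# Each hearing wellness parameter has a weighting within its sound data
# principle (summing to 100 per principle), and each principle has a
# weighting across all principles (also summing to 100).
PRINCIPLES = {
    "inclusive space": (40, {"M1": 40, "M2": 40, "M3": 20}),
    "acoustic space": (30, {"M4": 15, "M5": 15, "M6": 35, "M7": 35}),
    "harmonic space": (30, {"M8": 100}),
}

def wellness_score(scores):
    """Apply the two-level weighting to per-parameter scores and sum."""
    total = 0.0
    for principle_weight, params in PRINCIPLES.values():
        inner = sum(scores[m] * w / 100 for m, w in params.items())
        total += inner * principle_weight / 100
    return total

def wellness_rating(score, bands=((1.5, "A"), (2.5, "B"), (3.5, "C"))):
    """Map the summed weighted score to a rating letter. The band
    thresholds here are illustrative assumptions only."""
    for upper, label in bands:
        if score <= upper:
            return label
    return "D"

scores = {m: 2 for m in ("M1", "M2", "M3", "M4", "M5", "M6", "M7", "M8")}
print(wellness_score(scores), wellness_rating(wellness_score(scores)))
```

Because lower per-parameter scores indicate better conditions (e.g. M1 rises with average decibel level), the lowest summed score maps to the best rating, A.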
How the scores for each sound data parameter are determined is now described in more detail below with reference to each sound data parameter.
Exposure to hazardous noises (M1)
The scores for the sound data parameter M1 (exposure to hazardous noises) are determined based on the average decibel rating for the venue over a selected time period or over the course of a day, with a higher score (scores ranging from 1 to 4, 4 being the highest) being given to a higher average decibel level. The average may be determined over a period of time, for example a selected plurality of time windows. For example, the average may be determined over a period of five minutes, ten minutes, thirty minutes, an hour, several hours, a day, a week etc.
[Table: M1 scores 1-4 versus average decibel ranges; reproduced in the published application as image imgf000032_0001]
SNR (M2)
The sound data parameter M2 (SNR) may be calculated based on the below equation:
SNR = L_A,S,1m − L_A,N = Speaking SPL at 1 meter − Noise SPL (dBA)
The scores for the sound data parameter M2 (SNR) are determined based on the SNR, with a higher SNR giving a lower score (ranging from 1 to 6) and a lower SNR giving a higher score. The SNR may be averaged over a period of time as with M1 above.
[Table: M2 scores 1-6 versus SNR ranges; reproduced in the published application as images imgf000032_0002 and imgf000033_0001]
Unpleasant frequencies (M3)
A study by Kumar et al., “Features versus Feelings: Dissociable Representations of the Acoustic Features and Valence of Aversive Sounds”, Journal of Neuroscience, 10 October 2012, 32(41) 14184-14191; DOI: https://doi.org/10.1523/JNEUROSCI.1759-12.2012, shows that sounds in the higher-frequency range of around 2,000 to 5,000 Hz were rated as most unpleasant. Ratings were on a scale from 1 to 5, with 1 corresponding to low unpleasantness and 5 corresponding to high unpleasantness. To use the Bark scale as a base of frequency metrics calculations for a hearing wellness rating, the 24 Barks were grouped into smaller groups (4 groups are preferred, to be linkable to the user metrics). To do this, we assign higher scores (more negative effect on the hearing wellness rating) to the frequency range of greatest sensitivity for human hearing. After computing the frequency spectrum for each day and combining 21 daily spectrums to get one representative spectrum for 3 weeks, we divide frequencies into 25 Barks (the 24 standard Barks plus a 25th Bark for frequencies higher than 15500 Hz). Bark scales are then combined according to our grouping mechanism (table below). For example, group 3 consists of the 4 adjacent Bark scales 4, 5, 6 and 7.
The next step is selecting the group which has the highest total magnitude. For instance, in the table below, the highest total magnitude is for the second group of the table (f >= 12k).
[Table: frequency groups and their total magnitudes; reproduced in the published application as image imgf000034_0001]
The scores for the sound data parameter M3 (unpleasant frequencies) are determined based on whether the average or predominant frequency over a selected time period (such as five minutes, for example) falls into one of the four selected groups listed above, with frequencies in the range 1480 to 5300 Hz having the highest score (as being most unpleasant) followed by frequencies in the range 5300 to 7700 Hz, frequencies greater than 7700 Hz and then lastly frequencies lower than 1480 Hz having the lowest score.
[Table: M3 scores versus frequency groups; reproduced in the published application as image imgf000034_0002]
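The group-selection and scoring logic for M3 described above can be sketched as follows. The numeric score values 1-4 are an assumption consistent with the ordering given in the text (1480-5300 Hz highest, below 1480 Hz lowest):

```python
# Frequency groups from the description: 1480-5300 Hz is most unpleasant
# (highest score), then 5300-7700 Hz, then above 7700 Hz, with frequencies
# below 1480 Hz scoring lowest.
GROUPS = [
    (lambda f: f < 1480, 1),
    (lambda f: f > 7700, 2),
    (lambda f: 5300 <= f <= 7700, 3),
    (lambda f: 1480 <= f < 5300, 4),
]

def m3_score(bins_hz, magnitudes):
    """Total the magnitudes falling in each group, select the group with
    the highest total magnitude, and return that group's score."""
    totals = [
        (sum(m for f, m in zip(bins_hz, magnitudes) if in_group(f)), score)
        for in_group, score in GROUPS
    ]
    return max(totals, key=lambda t: t[0])[1]

# Most of the energy sits at 3000 Hz (the 1480-5300 Hz group), so the
# venue receives the highest (most unpleasant) score:
print(m3_score([500, 3000, 6000, 9000], [1, 5, 2, 1]))  # 4
```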
Max frequency magnitude (M4)
Based on the results of many psychoacoustic (scientific study of sound perception) experiments, the Bark scale is defined so that the critical bands of human hearing each have a width of one Bark. The human auditory (hearing) system can be thought of as consisting of a series of bandpass filters. Interestingly, the spacing of these filters follows neither a linear frequency scale nor a logarithmic musical scale. The Bark scale is an attempt to determine the center frequency and bandwidth of those “hearing filters” (known as critical bands). There are 24 “Barks” defined, based on the first 24 critical bands of hearing.
Figure imgf000035_0001
To use the Bark scale as a base for the frequency metrics calculations of the hearing wellness rating, the 24 Barks are grouped into smaller groups (7 groups are preferred, to be linkable to the user metrics). To do this, we assign higher scores (a more negative effect on the hearing wellness rating) to the frequency range of greatest sensitivity for human hearing.
After computing the frequency spectrum for each day and combining 21 daily spectrums to get one representative spectrum for 3 weeks, we divide the frequencies into 25 Barks. The Bark scales are then combined according to our grouping mechanism (table below). For example, group 3 consists of the 4 adjacent Bark scales 4, 5, 6, 7.
The next step is selecting the group which has the highest total magnitude. For instance, in the below figure, the highest total magnitude is for the second group of the table (f >= 12k).
Figure imgf000036_0001
Figure imgf000036_0002
Figure imgf000037_0001
Average dBA of the venue (loudness) (M5)
The scores for the sound data parameter M5 (average dBA of the venue, loudness) are determined based on which average dBA range the venue's average dBA falls within, with a lower average dBA giving a lower score (from 1 to 4) and a higher average dBA giving a higher score. The average dBA may be calculated over a 3-week time period.
Figure imgf000037_0002
Figure imgf000037_0003
Acoustic quality (M6)
Acoustic quality = (Acoustic capacity) / (Actual capacity of the venue)
The acoustic capacity is defined as the number of people that would create a signal-to-noise ratio (SNR) of -3 dB, which is considered the lower limit for “sufficient” quality of verbal communication under certain preconditions. The acoustic capacity is calculated from the volume and reverberation time of the space. The acoustic quality of an eating establishment may be characterized by the ratio between the acoustic capacity and the total capacity.
Figure imgf000038_0001
For an SNR of -3 dB, fitting with sufficient quality of verbal communication, an assumption of a group size, g, of four people is made, meaning one out of four is talking. This gives:
Nmax = (1/20) · V / T
Assumptions:
V = Volume of the space
T = Reverberation time
Nmax = Acoustic capacity
g = customer group size = 4 persons (1 out of 4 is talking)
SNR = -3 dB (sufficient verbal communication quality)
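Under these assumptions, the acoustic capacity and the M6 acoustic quality follow directly; the room volume, reverberation time and seating capacity in the example below are illustrative values only.

```python
def acoustic_capacity(volume_m3, rt_seconds):
    """Acoustic capacity Nmax = V / (20 * T): the number of people
    giving an SNR of -3 dB, assuming groups of 4 with 1 of 4 talking."""
    return volume_m3 / (20.0 * rt_seconds)

def acoustic_quality(volume_m3, rt_seconds, actual_capacity):
    """M6: ratio of the acoustic capacity to the venue's actual capacity."""
    return acoustic_capacity(volume_m3, rt_seconds) / actual_capacity

# Illustrative values: a 600 m^3 room with RT = 0.6 s seating 60 people.
print(acoustic_capacity(600, 0.6))     # 50.0 people
print(acoustic_quality(600, 0.6, 60))  # ~0.83
```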
Further details for determining the acoustic quality may be found here:
Rindel, Jens (2012). Acoustical capacity as a means of noise control in eating establishments. DOI: 10.13140/2.1.4767.3604; Rindel, Jens (2014). Acoustic capacity as a means to deal with poor restaurant acoustics. Acoustics Bulletin, 39, 27-30; and Rahbaek, David & Bolberg, Mads (2019). Applicability of acoustic capacity in regulations based on study of noise environment in Danish nursery schools.
Figure imgf000038_0002
Figure imgf000039_0001
Reverberation time (M7)
Reverberation Time (RT) is the time the sound pressure level takes to decrease by 60 dB after a sound source is abruptly switched off. RT60 is thus a commonly used abbreviation for Reverberation Time. RT60 values may vary at different positions within a room. Therefore, an average reading is most often taken across the space being measured. Rooms with an RT60 of <0.3 seconds are called acoustically “dead”. Rooms with an RT60 of >2 seconds are considered to be “echoic”.
It is often very difficult to accurately measure the T60 time as it may not be possible to generate a sound level that is consistent and stable enough, especially in large rooms or spaces. To get around this problem, it is more common to measure the T20 and T30 times and to then multiply these by 3 and 2 respectively to obtain the overall T60 time.
The T20 and T30 values are usually called “late reverberation times” as they are measured a short period of time after the noise source has been switched off or has ended. To measure the T20 and T30 values, a sound source is used, and this can either be an interrupted source, such as a loudspeaker, or an impulsive noise source, such as a starting pistol. The interrupted method is most commonly used as the sound source can be calibrated and controlled accurately, allowing for more repeatable measurements. The measurement of reverberation time typically follows this process:
1. Create a stable sound field using a sound source
2. Start a sound measurement instrument, such as a sound level meter
3. Switch off the sound source and allow the sound to decay
4. Wait for the background sound to stabilise and stop the measurement (avoiding creating any noise that may disturb the measurement data)
The calculation of the T20 and T30 times starts after the sound has decayed by 5 dB and ends after the level has dropped by 20 dB and 30 dB respectively. The measured data must have at least 10 dB headroom above the noise floor.
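The T20/T30 extrapolation described above can be expressed directly: the decay measured over 20 dB or 30 dB is scaled up to the full 60 dB drop.

```python
def rt60_from_late_decay(t20=None, t30=None):
    """Estimate RT60 from late reverberation times: multiply T20 by 3
    or T30 by 2 to extrapolate to the full 60 dB decay. T30 is
    preferred here when both are supplied (an arbitrary choice)."""
    if t30 is not None:
        return 2.0 * t30
    if t20 is not None:
        return 3.0 * t20
    raise ValueError("need at least one of t20 or t30")

print(rt60_from_late_decay(t20=0.4))  # ~1.2 s
print(rt60_from_late_decay(t30=0.7))  # ~1.4 s
```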
The score for the sound data parameter M7 (reverberation time) is then determined based on what RT range the determined RT for the venue falls within, with a lower RT giving a lower score (from 1 to 4) and a higher RT giving a higher score.
Figure imgf000040_0001
Vibe (M8)
The scores for the sound data parameter M8 (vibe) may be determined based on the average dBA range that the average dBA of the venue falls within, with a lower average dBA giving a lower score (from 1 to 7) and a higher average dBA giving a higher score. It can also be seen how the subjective terms “calm”, “buzzy”, “energetic” and “overwhelming” may be applied to describe the vibe of the venue based on the average dBA level. The average dBA may be determined over a selected time period, and optionally during selected time windows within that selected time period (such as hourly, daily, weekly, monthly or annually), generally taking the opening times of the venue into account - for example, it may be determined weekly during the opening hours of the venue.
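A minimal sketch of the vibe mapping follows. The ordering of the labels (calm < buzzy < energetic < overwhelming) is taken from the description, but the dBA thresholds used here are illustrative assumptions; the actual boundaries are given in the accompanying tables.

```python
def vibe_label(avg_dba):
    """Map the venue's average dBA to a subjective vibe descriptor.
    Threshold values are assumed for illustration only."""
    if avg_dba < 60:
        return "calm"
    if avg_dba < 70:
        return "buzzy"
    if avg_dba < 80:
        return "energetic"
    return "overwhelming"

print(vibe_label(65))  # "buzzy"
```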
Figure imgf000040_0002
Figure imgf000041_0001
Figure imgf000041_0002
Fig. 7 shows an example process flowchart of how the hearing wellness monitoring system may be used by a user to identify venues that have a hearing wellness category that suits them. In this example, the hearing wellness monitoring system comprises an application 710, wherein the application 710 communicates with the remote server. A plurality of venues is listed on the application 710, wherein each venue is characterised by an assigned hearing wellness category 720 based on at least one hearing wellness parameter of each venue, as determined by the remote server. Assigning a venue a hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter. The hearing wellness category may be a classification that seeks to describe the venue’s atmosphere in readily translatable and understandable terms, such as “overwhelming”, “buzzy”, “energetic” or “calm”, for example as described above in relation to sound data parameter M8 (vibe). The application 710 also obtains hearing preference data about the user, and a subset of the plurality of venues are matched (either locally by the application 710 or at the remote device/server) to the user according to the hearing wellness category which best matches the hearing preference data of the user 730.
The remote server additionally alerts venues categorised by an “overwhelming” hearing wellness category, wherein the alert may be for initiating improvement of the acoustic properties of the environment 740. For example, the alert may be sent to an acoustics expert to improve the site, or to a local authority such as a noise complaint department, and/or may be used by the venue to automatically adjust the sound levels in the venue, i.e. smart music control to match a venue operator’s target atmosphere. This may result in the venue being re-categorised into a different hearing wellness category based on changes to at least one hearing wellness parameter of the venue 750.
Figs. 8 to 11 show various screenshots of an example graphical user interface for a method of determining a user’s hearing personality. The graphical user interface can be used to display tailored information relating to the hearing wellness of a user. For example, the graphical user interface may display a series of questions for determining hearing preference data of a user. As can be seen from Figs. 8 to 11, the graphical user interface displays a series of predetermined questions (for example, in Fig. 9 the graphical user interface 900 asks a user what kind of atmosphere the user looks for in a certain social setting when doing solo work outside of an office/home setting - to be selected from calm, buzzy or energetic - and asks the user what venue they last went to when they attended a work meeting outside the office - to be selected from cafe, bar or restaurant). The graphical user interface then receives the hearing preference data of the user via user input in response to the series of predetermined questions, and then dynamically displays (as shown in the graphical user interface 1000 of Fig. 10) hearing preference data, such as information relating to a subset of environments from a plurality of environments, wherein the subset of environments is determined to be suitable for the user based on correlating the hearing preference data of the user with real-time sound data parameters of each environment. For example, Fig. 10 indicates that the hearing preference data of the user is that they can’t stand environmental sounds (e.g. sirens), that they prefer calm places, and that they use ear tech to control what and how they hear. This hearing preference data may form a hearing preference profile associated with the user. Fig. 11 shows a summary of the user’s preferred atmospheres for different occasions (e.g.
the graphical interface 1100 shows a preferred atmosphere for each of solo work, work meetings and socialising) as well as an indication of the user’s hearing personality. This information may then be used to enable the user to identify a venue whose hearing wellness parameters and hearing wellness category (calculated or assigned based on the sound data captured at the venue) match the user’s hearing preferences.
Fig. 12 shows an example process flow chart for a method of interacting with a venue’s sound system to adjust the sound profile of the venue. A plurality of remote devices 1210, each associated with a user, communicate via an app with an Internet-of-Things (IOT) device 1220 located in the venue. The IOT device 1220 may be the audio capture device shown in relation to Figs. 1 to 3. The communication comprises each device 1210 broadcasting a user identifier to the IOT device 1220. The communication method may be a short-range, wireless communication method, for example Bluetooth®. The IOT device 1220 then obtains hearing preference data associated with each user identifier from a remote server 1230. The remote server 1230 may be the server described as “Mumbli Cloud” 150, 410, or “Mumbli API” 520 shown in relation to Figs. 1 to 5. The IOT device 1220 may communicate with the remote server 1230 via a long-range, wireless communication method, for example 3G, 4G or 5G. A group hearing preference profile is created based on the obtained hearing preference data of the users. The IOT device 1220 may then compare the group hearing preference profile with the current sound environment of the venue. Based on this comparison, the IOT device 1220 may communicate with the venue sound system 1240, for example using wireless communication such as Wi-Fi, to adjust the volume or audio profile of sound from the venue sound system 1240. This may be advantageous as the venue's sound system may adapt to the users' needs.
For example, if the current sound environment of the venue is characterised as “energetic”, for example in accordance with sound data parameter M8 (vibe) described above, but the hearing preference data (which may be determined, for example, by averaging the aggregated preferences of the users in the room) suggests the users 1210 prefer “buzzy” environments, the IOT device 1220 may communicate with the venue sound system 1240 to reduce the volume of the music, thus reducing the average dBA of the venue to comply with the preferred hearing environment. Communication between the IOT device 1220 and the venue sound system 1240 may also be used to adjust the sound from the venue sound system 1240 to keep a constant sound dB value over the environmental noise. Optionally, the IOT device 1220 may request the app on each remote device 1210 to display a poll asking the user about any venue profile changes.
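The group-profile comparison of Fig. 12 might be sketched as follows. The numeric vibe scale, the labels, and the use of a simple difference as the adjustment signal are all illustrative assumptions, not the disclosed implementation.

```python
# Assumed ordinal scale for the vibe descriptors used elsewhere in the text.
LEVELS = {"calm": 1, "buzzy": 2, "energetic": 3, "overwhelming": 4}

def group_preference(user_preferences):
    """Average the checked-in users' preferred vibe levels."""
    return sum(LEVELS[p] for p in user_preferences) / len(user_preferences)

def volume_adjustment(current_vibe, user_preferences):
    """Negative = turn the sound system down, positive = turn it up."""
    return group_preference(user_preferences) - LEVELS[current_vibe]

# Venue is "energetic" (3) but the room on average prefers "buzzy" (2):
print(volume_adjustment("energetic", ["buzzy", "buzzy", "calm", "energetic"]))
```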
Fig. 13 shows an example process flow chart for a method of interacting with a venue’s sound system. A plurality of remote devices 1210, each associated with a user, communicate via an app with an IOT device 1220 located in the venue. The IOT device 1220 may be the audio capture device shown in relation to Figs. 1 to 3, or the IOT device shown in relation to Fig. 12. The communication comprises each device 1210 broadcasting a user identifier to the IOT device 1220 which can be used to “check in” the remote device to the venue. The communication method may be via a short-range, wireless communication method, for example Bluetooth®. The venue in this example comprises a microphone system 1350. The microphone system 1350 is coupled to an audio transmitter 1340 which is configured to wirelessly communicate with an audio receiver module 1225 coupled to the IOT device 1220. Where the user remote device 1210 is coupled to a headset 1215 (for example, a hearing aid or headphones), the remote device 1210 can initiate a request to the IOT device 1220 to stream the audio input from the microphone system 1350 to the headset 1215. The IOT device 1220 may then create a secure channel with the user remote device 1210, for example over a Wi-Fi channel or via Bluetooth®, such as via Auracast™ broadcast audio, enabling the input from the microphone system to be streamed to the headset 1215. This may be advantageous to enable a user to access a direct link with the venue's microphone input 1350, for example in the case that the venue environment is too loud and/or the user’s capability to hear and understand the audio input is limited, for example for a presentation or event.
Fig. 14 shows an example process flow chart for a method for determining the hearing wellness of an environment. In some examples, the method is configured to be performed by a remote server. The method first comprises obtaining sound data about an environment as a function of time 1410, for example wherein the sound data is obtained from an array of audio capture devices positioned throughout the environment as described in relation to Figures 1 and 2. A plurality of sound data parameters is then dynamically calculated for the environment as a function of time based on analysing the sound data 1420. At least one hearing wellness parameter is then determined based on the plurality of sound data parameters 1430, for example as described above in relation to Figure 6. A hearing wellness category is then assigned to the environment, based on the at least one hearing wellness parameter 1440. Assigning a venue a hearing wellness category may comprise selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter. At least a portion of the hearing parameter ranges defining each of the plurality of hearing wellness categories may overlap. The hearing wellness category may be a classification that seeks to describe the venue’s atmosphere in readily translatable and understandable terms, such as “overwhelming”, “buzzy”, “energetic” or “calm”, for example as described above in relation to sound data parameter M8 (vibe).
Hearing preference data of a user is then obtained based on the user profile. An assessment of suitability of the environment for the user is then made based on correlating the hearing preference data of the user and the hearing wellness category of the environment. If the environment is assessed to be suitable based on matching criteria determined by the hearing preference data in relation to the hearing wellness category of the environment, the environment is matched and/or suggested to the user. If the environment is assessed to be unsuitable, the environment is not matched and/or not suggested to the user. The hearing preference data of the user comprises at least one of (i) subjective user preference data, and (ii) objective hearing performance data of the user. The hearing preference data may also comprise different criteria for identifying a suitable environment for the user based on the intended activity of the user, for example work, socialising, meetings etc.
In some examples, the method further comprises obtaining venue data about the environment. Determining the at least one hearing wellness parameter may then further be based on the venue data. Venue data comprises at least one static venue parameter, for example, at least one of venue measurements, building materials, furniture materials, furniture measurements, photos.
Optionally, the method further comprises ascribing each of the plurality of hearing wellness parameters (or sound data parameters) to a sound data principle, such that each sound data principle comprises a respective subset of the hearing wellness parameters, then applying a first weighting to each of the plurality of hearing wellness parameters within the sound data principle. A second weighting is then applied to each sound data principle to give a respective weighted value for each hearing wellness parameter. A hearing wellness rating is then determined for the environment based on the sum of the weighted numerical values of the hearing wellness parameters, wherein the first weighting comprises a percentage of a total percentage ascribable to each sound data principle, and wherein the second weighting is a percentage of a total percentage ascribable to all sound data principles.
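The two-level weighting scheme just described can be illustrated as below; the principle names, weights and parameter values are placeholders, while the structure (per-principle weights summing within each principle, and principle weights summing across all principles) follows the description.

```python
def hearing_wellness_rating(principles):
    """Two-level weighted sum: each parameter carries a first weighting
    within its sound data principle, and each principle carries a second
    weighting in the overall rating."""
    rating = 0.0
    for principle in principles:
        # first weighting: (weight, parameter value) pairs within the principle
        inner = sum(w * v for w, v in principle["parameters"])
        # second weighting: the principle's share of the overall rating
        rating += principle["weight"] * inner
    return rating

# Placeholder data: two principles with illustrative weights and scores.
principles = [
    {"weight": 0.6, "parameters": [(0.5, 3), (0.5, 1)]},
    {"weight": 0.4, "parameters": [(1.0, 2)]},
]
print(hearing_wellness_rating(principles))  # 0.6*2 + 0.4*2 = 2.0
```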
In some examples, obtaining sound data about an environment comprises obtaining sound data about a plurality of environments or venues. In this case, the method comprises calculating at least one sound data parameter for each environment, determining at least one hearing wellness parameter for each environment; and assigning a hearing wellness category to each environment. Assessing the suitability of the environment for the user then further comprises identifying a sub-set of the plurality of environments to match to the user, based on matching criteria determined by the hearing preference data in relation to the hearing wellness categories of the plurality of environments.
The aforementioned examples relate to a hearing wellness monitoring system and method for use in a social venue or environment. However, the skilled person will understand that the disclosure described herein may also be usefully implemented in a number of other applications. The audio capture device of Figs. 1 to 3, which may be an IOT device as discussed above, may additionally provide security functionality to a venue or environment by monitoring sound while the location is closed. In the event that sound is detected above a predetermined threshold, the IOT device may send an alert to the venue or environment owner or relevant authorities, indicating a burglary or trespasser. Optionally, the IOT device may also simultaneously generate an alarm.
The IOT smart device described in relation to Figures 1 to 3 may also be used to identify the presence and location of a distress call using the microphone array and send an alert to a relevant person or authority. The location may be determined based on determining the proximity of the source of sound (distress call) relative to the array of microphones. The device may also classify a sexual assault or other similar types of distress calls using audio processing.
The smart IOT device of Figures 1 to 3 may also be used in a dense traffic region (including human traffic and vehicular traffic), wherein the IOT device is configured to monitor the environmental sound and send an alert to the authorities in the event the legal sound threshold is exceeded. The IOT device may be configured for indoor or outdoor use.
The IOT device may also be used to monitor the noise level of people in non-venue locations, for example corridors or shared accommodation. The IOT device may be used to provide an indication to reduce noise in loud areas where a threshold has been exceeded. The IOT device may be configured to communicate the indication through a display. In corridors, for example, the display may comprise a screen located in the corridor which displays an indication to reduce noise to passers-by. In shared accommodation, the display may comprise a local screen display, and/or a display to a supervisor. In the event that a threshold has been exceeded, a supervisor or relevant authority may be alerted to take action to reduce the noise.
The IOT device may also be coupled to a sound system, wherein the IOT device is configured to communicate with the sound system to adjust the volume and/or sound profile from the sound system to keep a constant sound dB value over the environmental noise. Environmental noise may include background noise, conversation, traffic noise etc. The IOT device may be configured to monitor environmental sound values (for example, average SPL) for a predetermined time window, for example every 12 seconds. The IOT device may then be configured to calculate an average sound value for each time window and/or an average sound value across a plurality of time windows (for example, four time windows = 48 seconds). The IOT device may then repeatedly compare the updated average environmental sound value to the preceding average environmental sound value. In response to a change in the average environmental sound value, the IOT device may calculate the difference in the sound value and correspondingly adjust the volume and/or sound profile of the sound system. The corresponding adjustment may be proportional to the change in average environmental sound value. Preferably, the adjustment to the sound system is made in real-time in response to the change in environmental sound value. The IOT device may adjust the volume and/or sound profile of the sound system via a short-range, wireless communication method, for example Wi-Fi.
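The rolling-average comparison loop described above (12-second windows, averaged over four windows, with a proportional adjustment) might look like this in outline; the proportional gain is an assumed tuning parameter, not a value from the description.

```python
from collections import deque

WINDOW_SECONDS = 12      # per the description
WINDOWS_TO_AVERAGE = 4   # four windows = 48 seconds

class VolumeController:
    """Sketch of the repeated average-vs-previous-average comparison."""
    def __init__(self, gain=0.5):  # gain is an assumed tuning parameter
        self.windows = deque(maxlen=WINDOWS_TO_AVERAGE)
        self.previous_average = None
        self.gain = gain

    def on_window(self, avg_spl_db):
        """Feed one 12-second average SPL reading; return a dB adjustment
        for the sound system (None until enough history exists)."""
        self.windows.append(avg_spl_db)
        if len(self.windows) < WINDOWS_TO_AVERAGE:
            return None
        current = sum(self.windows) / len(self.windows)
        adjustment = None
        if self.previous_average is not None:
            # adjust proportionally to the change in environmental noise
            adjustment = self.gain * (current - self.previous_average)
        self.previous_average = current
        return adjustment

ctrl = VolumeController()
for spl in [60, 60, 60, 60]:
    ctrl.on_window(spl)        # fills history; 48 s average = 60 dB
print(ctrl.on_window(64))      # average rises to 61 dB -> adjustment 0.5
```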
The IOT device may also be used to control a "white noise level" of at least one headset coupled to a user remote device to protect the hearing of the user based on environmental noise levels, wherein the IOT device is configured to communicate with the user remote device via a short-range, wireless communication method such as Wi-Fi. White noise is a random signal having equal intensity at different frequencies. Controlling the “white noise level” may include increasing the volume or adjusting the sound profile of the white noise provided to the headset. For example, the IOT device may be configured to monitor environmental noise (for example, average SPL) for a predetermined time window, for example every 12 seconds. The IOT device may then be configured to calculate an average environmental sound value for each time window and/or an average environmental sound value across a plurality of time windows (for example, four time windows = 48 seconds). In response to the environmental sound value exceeding a threshold, the IOT device may then correspondingly initiate and/or adjust the volume of white noise played through a connected headset. The corresponding adjustment may be proportional to the magnitude of the environmental sound value in excess of the threshold. Preferably, the control of the white noise is made in real-time in response to the change in environmental sound value. The IOT device may adjust the volume and/or sound profile of the white noise via a short-range, wireless communication method to the user remote device, for example Wi-Fi or Bluetooth®, including Auracast™. The user remote device then provides the adjusted white noise to the coupled headset. Alternatively, the IOT device may provide the white noise signal directly to the headset via a short-range, wireless communication method, for example Wi-Fi or Bluetooth®, including Auracast™.
Alternatively, the IOT device may also be used to provide a "noise cancelling signal" to at least one headset coupled to a user remote device to protect the hearing of the user based on environmental noise levels. The IOT device may be configured to monitor environmental noise (for example, across the frequency spectrum) and generate a real-time “noise cancelling signal”. The noise cancelling signal may be generated using anti-sound waves or other active sound cancelling methods. The IOT device may provide the sound cancelling signal to the user remote device via a short-range, wireless communication method, for example Wi-Fi or Bluetooth®, including Auracast™. The user remote device then provides the noise cancelling signal to the coupled headset. Alternatively, the IOT device may provide the sound cancelling signal directly to the headset via a short-range, wireless communication method.
An application on a user remote device may also be configured to set up an audio profile for audio earpieces or a headset based on the preferences of the user, similar to the hearing preference data described in relation to the examples above. The IOT device may then be configured to adjust the audio properties of the earpieces or headset based on the environmental noise measurements monitored by the IOT device.
The IOT device may also be used to analyse sound distribution in a venue or space, for example an office space. The IOT device, for example the device of Figures 1 to 3, may use audio input from a microphone array to determine the directionality and source of sound, as well as sound data parameters (for example, SPL and frequency spectrums). Based on the sound data received at each microphone, the IOT device may be configured to map the sound in space according to at least one sound parameter. Mapping in space may comprise overlaying a visual representation of local sound environments across an image of the space (e.g., a 2D floor plan, or 3D image). For example, the IOT device may be configured to generate a “heat map”, for example wherein loud areas (high SPL) are identified by a colour and quiet areas (low SPL) are identified by a second colour. The generated map may also comprise a gradient of colours corresponding to a plurality of different ranges of a sound parameter. The IOT device may be configured to map in space any of the sound data parameters and/or hearing wellness parameters described above in relation to aforementioned specific examples. Such sound maps may be used for management of sound and sound planning or monitoring occupancy of a location. In the event sound data in an area of the map exceeds a predetermined threshold, an alert may be sent to a manager or relevant authority to take action in that specific area to rectify the sound profile. Such action could involve dispersing crowds or redistributing people, moving furniture, installing sound dampening structures, etc.
The IOT device may also be used to track a user remote device in an environment. A user remote device may load an application which causes the user remote device to generate and emit a sound of a specific fixed frequency, preferably wherein the frequency is above the human hearing range. The IOT device communicates with the application on the user remote device and associates the specific fixed frequency generated by the user remote device with a device identifier (ID). Using an array of microphones, the IOT device can then track the position of the user remote device within the environment based on the directionality of the sound detected by the microphone array at the specific fixed frequency. This can be used to extrapolate the location of the user remote device (as the source of the sound) as a function of time. The IOT device may additionally generate a trajectory of the fixed-frequency trace to visualise the movement of the user device in the environment. The IOT device may simultaneously monitor and/or track a plurality of user remote devices, wherein each user device is associated with a different specific fixed frequency.
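One way to realise the fixed-frequency tracking is to compare the tone's spectral magnitude across the microphone array. The simplified sketch below picks the microphone nearest the device rather than triangulating a position, and the sample rate, tone frequency and amplitudes are illustrative assumptions.

```python
import numpy as np

def locate_device(mic_signals, sample_rate, target_freq):
    """Estimate which microphone is closest to a device emitting a fixed
    (near-ultrasonic) tone, by comparing the magnitude of the target
    frequency bin across the array. A real system would triangulate from
    directionality; nearest-mic selection is a simplification."""
    magnitudes = []
    for signal in mic_signals:
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        bin_idx = np.argmin(np.abs(freqs - target_freq))
        magnitudes.append(spectrum[bin_idx])
    return int(np.argmax(magnitudes))

# Two mics at 48 kHz; the device emits a 20 kHz tone, louder at mic 1:
sr, f, t = 48000, 20000, np.arange(4800) / 48000
mics = [0.1 * np.sin(2 * np.pi * f * t), 0.9 * np.sin(2 * np.pi * f * t)]
print(locate_device(mics, sr, f))  # 1
```

Repeating this over successive windows, and keying each tracked frequency to its device ID, yields the location trace described above.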
It will be appreciated from the discussion above that the embodiments shown in the Figures are merely exemplary, and include features which may be generalised, removed, or replaced as described herein and as set out in the claims. In the context of the present disclosure other examples and variations of the apparatus and methods described herein will be apparent to a person of skill in the art.

Claims

1. A hearing wellness monitoring system, comprising: at least one audio capture device configured to provide sound data about an environment as a function of time, wherein the audio capture device comprises: at least one microphone; and a communication interface, configured to transmit the sound data from the audio capture device; a remote server, configured to receive the sound data, and wherein the remote server is configured to analyse the sound data, wherein analysing the sound data further comprises: dynamically calculating a plurality of sound data parameters as a function of time for the environment from the sound data; and dynamically calculating at least one hearing wellness parameter as a function of time from the plurality of sound data parameters for assessing the hearing wellness characteristics of an environment.
2. The system of claim 1 wherein the at least one hearing wellness parameter comprises a plurality of hearing wellness parameters, and wherein analysing the sound data further comprises dynamically assigning the environment a hearing wellness category, based on the plurality of hearing wellness parameters, for characterising the environment according to its hearing wellness atmosphere.
3. The system of claim 2 wherein assigning the environment the hearing wellness category comprises selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by a respective at least one range of at least one hearing wellness parameter.
4. The system of claim 3 wherein each hearing wellness category is defined by a plurality of ranges for a combination of hearing wellness parameters.
5. The system of claim 4 wherein at least a portion of the hearing wellness parameter ranges defining each of the plurality of hearing wellness categories overlap.
6. The system of any preceding claim further comprising determining a hearing wellness rating, wherein the hearing wellness rating is determined by: ascribing a numerical value to each of the plurality of hearing wellness parameters; applying a weighting to each of the plurality of hearing wellness parameters; and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
7. The system of claim 6 further comprising: ascribing each of the plurality of sound data parameters to a sound data principle, such that each sound data principle comprises a respective subset of the sound data parameters; applying a first weighting to each of the plurality of hearing wellness parameters; and applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter; and determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
8. The system of any preceding claim wherein the sound data is obtained as a function of time according to pre-determined time windows.
9. The system of claim 8 wherein analysing the sound data comprises calculating each sound data parameter as a function of time from (i) sound data from a single time window, or (ii) sound data from a plurality of time windows.
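Claims 8 and 9 describe obtaining sound data in pre-determined time windows and computing each sound data parameter per window. A minimal sketch under the assumption of fixed-size, non-overlapping windows of SPL samples (the window length and sample values are illustrative):

```python
def window_parameters(spl_samples, window_len):
    """Split SPL samples into fixed-size time windows (claim 8) and
    compute one sound data parameter per window; here, the mean level
    over a single window (claim 9, option (i))."""
    windows = [spl_samples[i:i + window_len]
               for i in range(0, len(spl_samples), window_len)]
    return [sum(w) / len(w) for w in windows]

print(window_parameters([60, 62, 64, 70, 72, 74], 3))  # → [62.0, 72.0]
```

Option (ii) of claim 9 — parameters derived from a plurality of windows — could be realised by aggregating further over the per-window values.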
10. The system of any preceding claim wherein the at least one audio capture device comprises an array of audio capture devices.
11. The system of claim 10 further comprising at least one of (i) a central local processor configured for local processing of the sound data from the array of audio capture devices, or (ii) wherein the at least one audio capture device comprises a processor configured for local processing of the sound data from the at least one microphone of each respective audio capture device.
12. The system of claim 11, wherein the processor is configured to:
receive sound data from the array of audio capture devices;
calculate averaged sound data as a function of time from the sound data from the plurality of audio capture devices; and
transmit the averaged sound data as a function of time to the remote server, wherein the averaged sound data is used for analysis of the sound data.
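Claim 12's averaging step combines time-aligned readings from the device array before transmission to the remote server. A minimal sketch, assuming each device contributes one SPL reading per time window (the sample values are illustrative):

```python
def average_across_devices(device_streams):
    """Average time-aligned SPL readings across an array of audio capture
    devices (claim 12): one averaged value per time window, computed
    element-wise across the per-device streams."""
    return [sum(samples) / len(samples) for samples in zip(*device_streams)]

# Three devices, each reporting two time windows of SPL data.
streams = [[60, 62], [64, 66], [62, 64]]
print(average_across_devices(streams))  # → [62.0, 64.0]
```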
13. The system of any preceding claim wherein the sound data includes at least one of (i) audio frequency, and (ii) sound pressure level.
14. The system of claim 13 wherein the at least one sound data parameter comprises at least one of: Nominal sound pressure level, Nominal Interval frequency spectrum, A(C)-weighted sound pressure level, A(C)-weighted population minimum, A(C)-weighted population maximum, A(C)-weighted population standard deviation, A(C)-weighted population variance, A(C)-weighted population median, A(C)-weighted population average, and A(C)-weighted Interval frequency spectrum.
15. The system of any of claims 2 to 14, wherein the remote server is further configured to: obtain hearing preference data about a user; and dynamically assess the suitability of the environment for the user, based on the hearing preference data and the current hearing wellness category of the environment.
16. The system of any preceding claim wherein the at least one hearing wellness parameter comprises at least one of: quality of speech, exposure to hazardous noise, unpleasant frequencies, maximum frequency magnitude, average dBA of the venue, acoustic quality, reverberation time and signal to noise ratio.
17. The system of any preceding claim wherein the hearing wellness parameter may be based on at least one sound data parameter and at least one static venue parameter.
18. The system of any preceding claim wherein the at least one audio capture device further comprises a memory configured to store the sound data.
19. The system of claim 18 wherein the memory is configured to store the sound data for a predetermined time period.
20. The system of any preceding claim wherein the at least one microphone comprises an array of at least four microphones.
21. The system of any preceding claim wherein the audio capture device communication interface comprises a wireless telecommunication interface, for example 3G, 4G or 5G.
22. The system of any preceding claim wherein the remote server is further configured to generate an alert in the event that at least one of (i) a sound data parameter or (ii) a hearing wellness parameter exceeds a pre-determined threshold; and wherein the alert is for initiating improvement of acoustic properties of the environment.
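Claim 22 describes generating an alert when a sound data parameter or hearing wellness parameter exceeds a pre-determined threshold. A minimal sketch — the parameter names and threshold values are illustrative assumptions, not limits stated in the application:

```python
# Illustrative limits only; the application does not fix these values.
THRESHOLDS = {"avg_dba": 85.0, "reverb_s": 1.2}

def check_alerts(measured):
    """Return an alert message for every monitored parameter whose
    measured value exceeds its pre-determined threshold (claim 22)."""
    return [f"{k} exceeds {THRESHOLDS[k]}" for k, v in measured.items()
            if k in THRESHOLDS and v > THRESHOLDS[k]]

print(check_alerts({"avg_dba": 88.0, "reverb_s": 0.9}))  # → ['avg_dba exceeds 85.0']
```

Such alerts could then be routed to venue operators to initiate the acoustic improvements the claim envisages.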
23. A method for determining the hearing wellness of an environment, the method comprising:
obtaining sound data about an environment as a function of time;
dynamically calculating a plurality of sound data parameters for the environment as a function of time based on analysing the sound data;
determining at least one hearing wellness parameter based on the plurality of sound data parameters;
assigning the environment a hearing wellness category, based on the at least one hearing wellness parameter;
obtaining hearing preference data of a user; and
assessing suitability of the environment for the user based on correlating the hearing preference data of the user and the hearing wellness category of the environment.
24. The method of claim 23 wherein the sound data is obtained from an array of audio capture devices positioned throughout the environment.
25. The method of claims 23 to 24 wherein assessing suitability of the environment for the user comprises either (i) matching the environment to the user, or (ii) not matching the environment to the user, based on matching criteria determined by the hearing preference data in relation to the hearing wellness category of the environment.
26. The method of claims 23 to 25 wherein the hearing preference data comprises at least one of (i) subjective user preference data, and (ii) objective hearing performance data.
27. The method of any of claims 23 to 26 further comprising: obtaining venue data about the environment, wherein venue data comprises at least one static venue parameter; and wherein determining the at least one hearing wellness parameter is further based on the venue data.
28. The method of claims 23 to 27 wherein assigning the environment the hearing wellness category comprises selecting a hearing wellness category from a plurality of hearing wellness categories, wherein each hearing wellness category is defined by at least one range of at least one hearing wellness parameter.
29. The method of claim 28 wherein the at least one hearing wellness parameter comprises a plurality of hearing wellness parameters, and each hearing wellness category is defined by a combination of hearing wellness parameters, wherein each hearing wellness parameter for each category is defined by a range.
30. The method of claim 29 wherein at least a portion of the hearing parameter ranges defining each of the plurality of hearing wellness categories overlap.
31. The method of any of claims 23 to 30 wherein obtaining sound data comprises obtaining sound data about an environment as a function of time, and wherein calculating the plurality of sound data parameters and calculating the at least one hearing wellness parameter comprises dynamically calculating the parameters as a function of time.
32. The method of any of claims 23 to 31 further comprising determining a hearing wellness rating by:
ascribing a numerical value to each of the plurality of hearing wellness parameters;
applying a weighting to each of the plurality of hearing wellness parameters; and
determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
33. The method of claim 32 further comprising:
ascribing each of the plurality of sound data parameters to a sound data principle, such that each sound data principle comprises a respective subset of the sound data parameters;
applying a first weighting to each of the plurality of hearing wellness parameters;
applying a second weighting to each sound data principle to give a respective weighted value for each hearing wellness parameter; and
determining the hearing wellness rating based on the sum of the weighted numerical values of the hearing wellness parameters.
34. A hearing wellness monitoring apparatus, the apparatus comprising a server configured to perform the method of claims 23 to 33.
35. A method for identifying a suitable environment for a user based on hearing wellness, the method comprising:
obtaining sound data about a plurality of environments;
analysing the sound data to calculate at least one sound data parameter for each environment;
correlating the at least one sound data parameter with at least one hearing wellness parameter for each environment;
obtaining hearing preference data for a user; and
matching at least one environment from the plurality of environments to the user based on correlating the at least one hearing wellness parameter of each environment and the hearing preference data of the user.
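Claims 35 and 36 describe matching environments to a user by correlating each environment's hearing wellness characterisation with the user's hearing preference data. A minimal sketch of a category-based match — the venue names and categories are illustrative assumptions:

```python
def match_environments(environments, preferred_categories):
    """Return the environments whose current hearing wellness category
    matches the user's hearing preference data (claims 35/36).
    Venue names and categories here are illustrative only."""
    return [name for name, category in environments.items()
            if category in preferred_categories]

venues = {"Cafe A": "quiet", "Bar B": "lively", "Cafe C": "social"}
print(match_environments(venues, {"quiet", "social"}))  # → ['Cafe A', 'Cafe C']
```

Claim 38's activity-dependent criteria could be supported by keeping one preference set per intended activity and selecting the set before matching.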
36. The method of claim 35 wherein identifying at least one suitable environment further comprises: assigning each environment a hearing wellness category, based on the at least one hearing wellness parameter for each environment; and matching at least one environment from the plurality of environments to the user based on correlating the hearing preference data of the user and the hearing wellness category for each environment and/or the hearing wellness parameters.
37. The method of claims 35 to 36 wherein matching at least one environment from the plurality of environments to the user further comprises identifying a subset of the plurality of environments, wherein the subset matches criteria determined by the hearing preference data.
38. The method of claim 37 wherein the hearing preference data comprises different criteria for identifying suitable environments for the user based on the intended activity of the user.
39. The method of claims 36 to 38 wherein the hearing preference data comprises at least one of (i) subjective user preference data, and (ii) objective hearing performance data.
40. A hearing wellness monitoring apparatus, the apparatus comprising a server configured to perform the method of claims 36 to 39.
41. A computer program product comprising program instructions configured to program a programmable device to perform the method of any of claims 23 to 40.
42. A method for displaying tailored information relating to hearing wellness of a user on a graphical user interface, the method comprising:
displaying a series of predetermined questions;
receiving hearing preference data of a user via user input in response to the series of predetermined questions; and
dynamically displaying information relating to a subset of environments from a plurality of environments, wherein the subset of environments is determined to be suitable for the user based on correlating the hearing preference data of the user with real-time sound data parameters of each environment.
PCT/GB2022/052517 2021-10-05 2022-10-05 A hearing wellness monitoring system and method WO2023057752A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2114250.0 2021-10-05
GB2114250.0A GB2611529A (en) 2021-10-05 2021-10-05 A hearing wellness monitoring system and method

Publications (1)

Publication Number Publication Date
WO2023057752A1 true WO2023057752A1 (en) 2023-04-13

Family

ID=78497774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2022/052517 WO2023057752A1 (en) 2021-10-05 2022-10-05 A hearing wellness monitoring system and method

Country Status (2)

Country Link
GB (1) GB2611529A (en)
WO (1) WO2023057752A1 (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6845108B1 (en) 2001-05-14 2005-01-18 Calmar Optcom, Inc. Tuning of laser wavelength in actively mode-locked lasers
US7983426B2 (en) 2006-12-29 2011-07-19 Motorola Mobility, Inc. Method for autonomously monitoring and reporting sound pressure level (SPL) exposure for a user of a communication device
US20120147169A1 (en) * 2010-12-14 2012-06-14 Scenetap Llc Apparatus and method to monitor customer demographics in a venue or similar facility
US20130329863A1 (en) * 2012-06-08 2013-12-12 Avaya Inc. System and method to use enterprise communication systems to measure and control workplace noise
US9510118B2 (en) 2014-04-29 2016-11-29 Siemens Aktiengesellschaft Mapping system with mobile communication terminals for measuring environmental sound
US20170372242A1 (en) 2016-06-27 2017-12-28 Hartford Fire Insurance Company System to monitor and process noise level exposure data
US20180005142A1 (en) 2016-02-02 2018-01-04 Ali Meruani Method for determining venue and restaurant occupancy from ambient sound levels
US20190139565A1 (en) 2017-11-08 2019-05-09 Honeywell International Inc. Intelligent sound classification and alerting
US10390157B2 (en) 2017-08-18 2019-08-20 Honeywell International Inc. Hearing protection device self-assessment of suitability and effectiveness to provide optimum protection in a high noise environment based on localized noise sampling and analysis
US10587970B2 (en) 2016-09-22 2020-03-10 Noiseless Acoustics Oy Acoustic camera and a method for revealing acoustic emissions from various locations and devices
US20200202626A1 (en) 2018-12-21 2020-06-25 Plantronics, Inc. Augmented Reality Noise Visualization
WO2021030463A1 (en) * 2019-08-13 2021-02-18 Gupta Shayan Method for safe listening and user engagement

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9757069B2 (en) * 2008-01-11 2017-09-12 Staton Techiya, Llc SPL dose data logger system
US20150223000A1 (en) * 2014-02-04 2015-08-06 Plantronics, Inc. Personal Noise Meter in a Wearable Audio Device
WO2018087570A1 (en) * 2016-11-11 2018-05-17 Eartex Limited Improved communication device
US10068451B1 (en) * 2017-04-18 2018-09-04 International Business Machines Corporation Noise level tracking and notification system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KUMAR ET AL.: "Features versus Feelings: Dissociable Representations of the Acoustic Features and Valence of Aversive Sounds", JOURNAL OF NEUROSCIENCE, vol. 32, no. 41, 10 October 2012 (2012-10-10), pages 14184 - 14193, Retrieved from the Internet <URL:https://doi.org/10.1523/JNEUROSCI.1759-12.2012>
RAHBAEK, DAVID; BOLBERG, MADS: "Applicability of Acoustic Capacity in Regulations Based on Study of Noise Environment in Danish Nursery Schools", 2019
RINDEL, JENS: "Acoustical Capacity as a Means of Noise Control in Eating Establishments", 2012
RINDEL, JENS.: "Acoustic capacity as a means to deal with poor restaurant acoustics", ACOUSTICS BULLETIN, vol. 39, 2014, pages 27 - 30

Also Published As

Publication number Publication date
GB202114250D0 (en) 2021-11-17
GB2611529A (en) 2023-04-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22789996

Country of ref document: EP

Kind code of ref document: A1