CN117814788A - Hearing detection method, device and system


Info

Publication number
CN117814788A
CN117814788A (Application No. CN202211185227.9A)
Authority
CN
China
Prior art keywords
signal
user
audio output
hearing
computing device
Prior art date
Legal status
Pending
Application number
CN202211185227.9A
Other languages
Chinese (zh)
Inventor
刘俊材
赵安
林龙
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority application: CN202211185227.9A
PCT application: PCT/CN2023/117902 (published as WO2024067034A1)
Publication: CN117814788A
Legal status: Pending


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/12: Audiometering
    • A61B5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316: Modalities, i.e. specific diagnostic methods
    • A61B5/369: Electroencephalography [EEG]
    • A61B5/377: Electroencephalography [EEG] using evoked responses
    • A61B5/38: Acoustic or auditory stimuli

Abstract

The application provides a hearing test method, device and system that mainly involve interaction among an audio output device, a signal acquisition device and a computing device. Specifically, the audio output device outputs a sound signal to the user; the signal acquisition device acquires the electroencephalogram signal generated by the user in response to the sound signal and transmits it to the computing device; and the computing device determines the user's hearing condition from the received electroencephalogram signal. The audio output device may be, for example, a headset, the signal acquisition device may be, for example, a pair of smart glasses, and the computing device may be, for example, a mobile phone. With this hearing test method, device and system, the user's hearing condition can be assessed automatically through the cooperation of multiple devices; that is, the user can complete a hearing test with common everyday electronic devices, which is simple and convenient.

Description

Hearing detection method, device and system
Technical Field
The embodiments of this application relate to the field of hearing detection, and more particularly, to a hearing detection method, device and system.
Background
Hearing loss poses an unprecedented risk in today's world. The World Health Organization recently issued its first World Report on Hearing, which indicates that about one fifth of the world's population has some degree of hearing impairment and that hearing loss affects more than 1.5 billion people worldwide, so hearing loss detection is becoming increasingly important. Traditional hearing test methods fall into two main categories. The first comprises subjective tests that rely on the subject understanding the test and cooperating with it, and whose accuracy depends on the experience of the audiologist. The second comprises objective tests, such as otoacoustic emissions and auditory brainstem evoked potentials, which require neither the subject's comprehension nor an audiologist, but which rely on dedicated equipment such as an otoacoustic emission monitor or an auditory brainstem response tester for clinical testing, so subjects cannot perform hearing tests in daily life. Therefore, how to conveniently assess a subject's hearing is a problem to be solved.
Disclosure of Invention
The embodiments of this application provide a hearing test method, device and system that allow a user's hearing condition to be assessed through the cooperation of multiple devices; that is, the user can complete a hearing test with common everyday electronic devices, which is simple and convenient.
In a first aspect, a hearing test system is provided. The system comprises an audio output device, a signal acquisition device and a computing device. The audio output device is configured to receive first information sent by the computing device, the first information instructing the audio output device to output a sound signal to a user. The signal acquisition device is configured to receive second information sent by the computing device, the second information instructing the signal acquisition device to acquire an electroencephalogram signal, the electroencephalogram signal being generated by the user in response to the sound signal. The signal acquisition device is further configured to transmit the electroencephalogram signal to the computing device. The computing device is configured to determine the hearing condition of the user from the electroencephalogram signal.
The audio output device is a device capable of emitting audio signals, such as a headset; the signal acquisition device is a device capable of acquiring electroencephalogram signals, is provided with electrodes and can acquire the electroencephalogram signals generated by the user, and may be, for example, a pair of smart glasses; the computing device is a device with a CPU, which may be, for example, a mobile phone or a computer.
It should be understood that the sound signal is a standard test audio. The standard test audio may be, for example, a sound of fixed frequency and fixed duration; the frequency may be, for example, 1 kHz, 4 kHz, 8 kHz or 12 kHz, and the duration may be, for example, 1 s or 2 s, i.e. the sound signal may be a short pure tone of fixed frequency. By stepping the sound signal from low frequency to high frequency, it can be detected whether the user can hear sounds at each frequency, so that whether the user's hearing is normal can be measured more accurately. Further, the audio output device may repeatedly output the sound signal at a preset interval (e.g., 5 s).
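Purely as an illustration (not part of the patent disclosure), a fixed-frequency, fixed-duration test tone of the kind described above could be generated as follows; the sample rate, amplitude and fade-in/fade-out are assumptions chosen for the sketch:

```python
import numpy as np

def make_test_tone(freq_hz=1000.0, duration_s=1.0, fs=48000, amplitude=0.5):
    """Generate a short pure tone at a fixed frequency (e.g. 1/4/8/12 kHz).

    A brief linear ramp is applied at both ends to avoid audible clicks;
    the 10 ms ramp length is an assumption, not taken from the patent.
    """
    t = np.arange(int(duration_s * fs)) / fs
    tone = amplitude * np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.01 * fs)                      # 10 ms fade in / fade out
    window = np.ones_like(tone)
    window[:ramp] = np.linspace(0.0, 1.0, ramp)
    window[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return tone * window

# Step through the frequencies mentioned in the text, repeating every ~5 s.
for f in (1000, 4000, 8000, 12000):
    stimulus = make_test_tone(freq_hz=f, duration_s=1.0)
    # hand `stimulus` to the audio output device, then wait the preset interval
```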
It should be noted that, before the hearing test, the computing device, the audio output device and the signal acquisition device may be networked and time synchronized by a distributed soft bus.
Based on the above technical solution, the audio output device can output a sound signal to the user, the signal acquisition device can acquire the electroencephalogram signal generated by the user in response to the sound signal and send it to the computing device, and the computing device determines the hearing condition of the user from the received electroencephalogram signal. In this way, the user's hearing condition can be assessed automatically through the cooperation of multiple devices; that is, the user can complete a hearing test with common everyday electronic devices, which is simple and convenient. In addition, because the audio output device and the signal acquisition device are independent of each other, the noise induced in the measurement circuit by the operating current of the audio output device can be reduced and the signal-to-noise ratio improved.
With reference to the first aspect, in certain implementations of the first aspect, the signal acquisition device includes a first electrode, a second electrode, a third electrode and a fourth electrode. The first electrode is in contact with one ear of the user, the second electrode is in contact with the user's other ear, the other ear being the ear that receives the sound signal, and the third electrode and the fourth electrode are in contact with the user's scalp at different positions.
The electroencephalogram signal measured by the first electrode is a first signal, the signal measured by the second electrode is a second signal, the signal measured by the third electrode is a third signal, and the signal measured by the fourth electrode is a fourth signal. The computing device is further configured to determine the hearing condition of the user from the first signal, the second signal, the third signal and the fourth signal.
It should be understood that the electrodes of the signal acquisition device can be divided into a primary electrode, a secondary electrode and reference electrodes: the first electrode is the primary electrode, the second electrode is the secondary electrode, and the third and fourth electrodes are reference electrodes. The electrode on the side where the audio is played is the secondary electrode, the electrode on the opposite side is the primary electrode, and the electrodes at the remaining positions are reference electrodes.
It should be noted that, because the electroencephalogram signal measured by the secondary electrode (i.e. the second electrode) suffers from strong noise interference, the computing device may also disregard it when determining the hearing condition of the user. That is, the computing device is further configured to determine the hearing condition of the user from the first signal, the third signal and the fourth signal.
Based on the above technical solution, the computing device can determine the hearing condition of the user from electroencephalogram signals acquired by the signal acquisition device at different positions, so that the hearing condition can be determined accurately and the user's hearing can be assessed automatically.
With reference to the first aspect, in certain implementations of the first aspect, the computing device is further configured to: calculate the difference between the first signal and the third signal to obtain a first differential signal; calculate the difference between the second signal and the third signal to obtain a second differential signal; calculate the difference between the first signal and the fourth signal to obtain a third differential signal; calculate the difference between the second signal and the fourth signal to obtain a fourth differential signal; and determine the hearing condition of the user from the first, second, third and fourth differential signals.
It should be noted that, because the electroencephalogram signal measured by the secondary electrode (i.e. the second electrode) suffers from strong noise interference, the computing device may also disregard it when determining the hearing condition of the user. That is, the computing device may also be configured to determine the hearing condition of the user from the first differential signal and the third differential signal.
Based on the above technical solution, the computing device can further compute differential signals from the electroencephalogram signals acquired by the signal acquisition device and determine the hearing condition of the user from these differential signals, so that the hearing condition can be determined accurately and the user's hearing can be assessed automatically.
With reference to the first aspect, in certain implementations of the first aspect, the computing device is further configured to: calculate the difference between the first signal and the third signal to obtain a first differential signal; calculate the difference between the second signal and the third signal to obtain a second differential signal; calculate the difference between the first signal and the fourth signal to obtain a third differential signal; calculate the difference between the second signal and the fourth signal to obtain a fourth differential signal; and determine the hearing condition of the user from the first, second, third and fourth differential signals and a first reference signal, where the first reference signal is the difference between the first signal and a first basic signal, and the first basic signal is the average of the third signal and the fourth signal.
It should be noted that, because the electroencephalogram signal measured by the secondary electrode (i.e. the second electrode) suffers from strong noise interference, the computing device may also disregard it when determining the hearing condition of the user. That is, the computing device is further configured to: calculate the difference between the first signal and the third signal to obtain a first differential signal; calculate the difference between the first signal and the fourth signal to obtain a third differential signal; calculate the average of the third signal and the fourth signal to obtain a first basic signal; calculate the difference between the first signal and the first basic signal to obtain a first reference signal; and determine the hearing condition of the user from the first differential signal, the third differential signal and the first reference signal.
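Purely as an illustration of the arithmetic described above (the array names and the one-dimensional-array representation are assumptions, not part of the patent), the differential signals, the first basic signal and the first reference signal could be formed like this:

```python
import numpy as np

def derive_channels(first, second, third, fourth):
    """Form the differential and reference channels described in the text.

    Each input is a 1-D NumPy array of equal length holding one electrode's
    time-aligned EEG samples (an assumed representation).
    """
    diff1 = first - third            # first differential signal
    diff2 = second - third           # second differential signal
    diff3 = first - fourth           # third differential signal
    diff4 = second - fourth          # fourth differential signal
    basic = (third + fourth) / 2.0   # first basic signal (mean of the references)
    ref = first - basic              # first reference signal
    return diff1, diff2, diff3, diff4, ref
```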
It should be understood that the number of electrodes in this application is merely exemplary, and that this application may include additional electrodes.
Based on the above technical solution, adding additional electrodes to the hearing assessment can shorten the test time, or improve the test accuracy for the same test time.
With reference to the first aspect, in certain implementations of the first aspect, a standard deviation of the first differential signal and the first reference signal is less than a first threshold, and a correlation coefficient of the first differential signal and the first reference signal is greater than a second threshold; the standard deviation of the second differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the second differential signal and the first reference signal is larger than a second threshold value; the standard deviation of the third differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the third differential signal and the first reference signal is larger than a second threshold value; the standard deviation of the fourth differential signal and the first reference signal is smaller than the first threshold value, and the correlation coefficient of the fourth differential signal and the first reference signal is larger than the second threshold value.
Illustratively, the first threshold may be 2 and the second threshold may be 0.8.
It will be appreciated that if the user performs other activities during the hearing test, the data collected by the electrodes contain additional noise or voltage offsets, the correlation between the reference signal and the differential signals becomes smaller, and the variance becomes larger. Therefore, by requiring the standard deviation and the correlation coefficient of the first, second, third and fourth differential signals relative to the first reference signal to meet these requirements, data of lower signal quality can be discarded and only data meeting the requirements retained, which improves the accuracy of the hearing assessment.
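As a sketch only, one plausible reading of this screening rule is shown below, with the example thresholds 2 and 0.8 from the text; the exact statistic intended by "standard deviation of the differential signal and the reference signal" is not fully specified, so the residual-based interpretation here is an assumption:

```python
import numpy as np

def epoch_passes(diff_signal, ref_signal, std_threshold=2.0, corr_threshold=0.8):
    """Keep an epoch only if it is sufficiently close to the reference channel.

    "Standard deviation" is read here as the standard deviation of the
    residual between the differential signal and the reference signal;
    this interpretation is an assumption.
    """
    residual_std = np.std(diff_signal - ref_signal)
    corr = np.corrcoef(diff_signal, ref_signal)[0, 1]
    return residual_std < std_threshold and corr > corr_threshold

def screen(epochs):
    """Keep only epochs for which every differential channel passes.

    `epochs` is assumed to be a list of dicts holding the four differential
    signals and the reference signal for one stimulus repetition.
    """
    return [ep for ep in epochs
            if all(epoch_passes(ep[name], ep["ref"])
                   for name in ("diff1", "diff2", "diff3", "diff4"))]
```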
With reference to the first aspect, in certain implementations of the first aspect, the signal acquisition device is further configured to transmit the electroencephalogram signals to the computing device when the number of signal acquisitions performed by the signal acquisition device is greater than a third threshold.
Illustratively, the third threshold may be 900.
It can be understood that when the signal acquisition device acquires signals, the number of acquisitions should meet a certain requirement, for example more than 900 acquisitions. The more electroencephalogram signals the signal acquisition device collects, the more data the computing device has to draw on when determining the hearing condition of the user, which makes the assessment more reliable.
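As background not stated in the patent, auditory evoked potentials are usually recovered by averaging many stimulus-locked sweeps; under the common assumption of uncorrelated noise, the signal-to-noise ratio grows roughly with the square root of the number of sweeps, $\mathrm{SNR}_{\mathrm{avg}} \approx \sqrt{N}\cdot\mathrm{SNR}_{\mathrm{single}}$, so for $N = 900$ the improvement factor is about $\sqrt{900} = 30$. This is one way to motivate a threshold of that order.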
With reference to the first aspect, in certain implementations of the first aspect, the audio output device is wirelessly connected with the computing device, and the signal acquisition device is wirelessly connected with the computing device.
With reference to the first aspect, in certain implementations of the first aspect, the audio output device is further configured to output the sound signal starting from a first moment; the signal acquisition device is further configured to acquire the electroencephalogram signal starting from a second moment; and the time difference between the first moment and the second moment is less than a fourth threshold.
Illustratively, the fourth threshold may be 1.
Based on the above technical solution, the electrodes can be connected without wires and wireless measurement can be realized. When the audio output device is connected to the computing device wirelessly and the signal acquisition device is connected to the computing device wirelessly, the clocks of the devices need to be kept synchronized so that the hearing condition of the user can be determined accurately and the automatic hearing assessment completed.
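For illustration only (the patent does not specify an algorithm, and the unit of the tolerance is not given), a check that the stimulus onset and the start of EEG acquisition are close enough after clock synchronization might look like this:

```python
def onsets_aligned(stimulus_onset_ts, eeg_start_ts, tolerance=1.0):
    """Return True if the two synchronized timestamps differ by less than
    the fourth threshold (the unit of `tolerance` is assumed here).

    Both timestamps are assumed to be expressed on the common clock
    established when the devices were networked and time-synchronized
    (e.g. over a distributed soft bus).
    """
    return abs(stimulus_onset_ts - eeg_start_ts) < tolerance
```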
With reference to the first aspect, in certain implementations of the first aspect, the computing device is further configured to: detect the connection state and wearing posture of the audio output device and the connection state and wearing posture of the signal acquisition device; if the audio output device is not connected to the computing device and/or the signal acquisition device is not connected to the computing device, output prompt information to remind the user; and if the wearing posture of the audio output device and/or the signal acquisition device is detected not to meet the wearing standard, output prompt information to remind the user.
It can be appreciated that the audio output device and the signal acquisition device may each carry their own sensors, with which the computing device can detect the connection state and the user's wearing posture. If either the connection state or the wearing posture is found not to meet the requirements, the user needs to be prompted to adjust it so that an accurate hearing test can be performed.
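A minimal sketch of such a pre-check follows, assuming a simple dictionary of sensor-reported flags and console prompts; these representations are illustrative and not part of the patent:

```python
def precheck(devices):
    """Connection and wearing-posture pre-check before the test starts.

    `devices` maps a device name to a dict with 'connected' and
    'worn_correctly' flags reported by its sensors (an assumed format).
    """
    ok = True
    for name, state in devices.items():
        if not state.get("connected", False):
            print(f"Please connect the {name}.")           # prompt information
            ok = False
        elif not state.get("worn_correctly", False):
            print(f"Please adjust how the {name} is worn.")
            ok = False
    return ok

# Example usage
precheck({
    "audio output device": {"connected": True, "worn_correctly": False},
    "signal acquisition device": {"connected": False},
})
```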
With reference to the first aspect, in certain implementations of the first aspect, the computing device is further configured to: performing filtering and superposition processing on the electroencephalogram signals to determine waveforms of the electroencephalogram signals; classifying the waveforms of the electroencephalogram signals by using a preset algorithm, and determining the hearing situation of the user.
It should be noted that the preset algorithm may be a commonly used, trained machine-learning algorithm, for example a neural-network classifier, a Bayesian classifier or a support vector machine (SVM).
It can be appreciated that, when measuring the hearing of the user's right ear, the computing device may filter and superimpose the first differential signals obtained over multiple measurements to obtain the electroencephalogram waveform of the user's right ear. The waveform is then classified by the classifier as normal or abnormal, thereby determining whether the user's hearing is normal.
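The following sketch illustrates one way such a filter-average-classify pipeline could look; the band-pass range, sampling rate and the use of scikit-learn's SVM classifier are assumptions for illustration and are not taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 1000  # assumed sampling rate in Hz

def bandpass(epoch, low=100.0, high=300.0, fs=FS, order=4):
    """Band-pass filter one sweep (cut-off frequencies are illustrative)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epoch)

def averaged_waveform(epochs):
    """Filter each accepted sweep and superimpose (average) them."""
    return np.mean([bandpass(e) for e in epochs], axis=0)

def train_classifier(waveforms, labels):
    """Train on labelled averaged waveforms (1 = normal, 0 = abnormal),
    assumed to be available from prior recordings."""
    clf = SVC(kernel="rbf")
    clf.fit(np.asarray(waveforms), np.asarray(labels))
    return clf

def assess(clf, epochs):
    """Classify the user's averaged waveform as normal (1) or abnormal (0)."""
    wave = averaged_waveform(epochs)
    return int(clf.predict(wave.reshape(1, -1))[0])
```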
In a second aspect, a hearing test method applied to a computing device is provided. The method comprises: sending first information to an audio output device, the first information instructing the audio output device to output a sound signal to a user; sending second information to a signal acquisition device, the second information instructing the signal acquisition device to acquire an electroencephalogram signal; receiving the electroencephalogram signal sent by the signal acquisition device, the electroencephalogram signal being generated by the user in response to the sound signal; and determining the hearing condition of the user from the electroencephalogram signal.
With reference to the second aspect, in some implementations of the second aspect, determining the hearing condition of the user from the electroencephalogram signal includes: determining the hearing condition of the user from a first signal, a second signal, a third signal and a fourth signal, where the first signal is the electroencephalogram signal measured by the signal acquisition device on the side opposite to where the sound signal is played, the second signal is the electroencephalogram signal measured on the side where the sound signal is played, the third signal and the fourth signal are electroencephalogram signals measured on the user's scalp, and the measurement position of the third signal is different from that of the fourth signal.
It should be noted that, because the electroencephalogram signal measured by the secondary electrode (i.e. the second electrode) suffers from strong noise interference, the computing device may also disregard it when determining the hearing condition of the user. That is, determining the hearing condition of the user from the electroencephalogram signal may further include: determining the hearing condition of the user from the first signal, the third signal and the fourth signal.
With reference to the second aspect, in certain implementations of the second aspect, determining the hearing condition of the user from the first signal, the second signal, the third signal and the fourth signal includes: calculating the difference between the first signal and the third signal to obtain a first differential signal; calculating the difference between the second signal and the third signal to obtain a second differential signal; calculating the difference between the first signal and the fourth signal to obtain a third differential signal; calculating the difference between the second signal and the fourth signal to obtain a fourth differential signal; and determining the hearing condition of the user from the first, second, third and fourth differential signals.
It should be noted that, because the electroencephalogram signal measured by the secondary electrode (i.e. the second electrode) suffers from strong noise interference, the computing device may also disregard it when determining the hearing condition of the user. That is, determining the hearing condition of the user from the first signal, the third signal and the fourth signal may further include: determining the hearing condition of the user from the first differential signal and the third differential signal.
With reference to the second aspect, in certain implementations of the second aspect, determining the hearing condition of the user from the first signal, the second signal, the third signal and the fourth signal includes: calculating the difference between the first signal and the third signal to obtain a first differential signal; calculating the difference between the second signal and the third signal to obtain a second differential signal; calculating the difference between the first signal and the fourth signal to obtain a third differential signal; calculating the difference between the second signal and the fourth signal to obtain a fourth differential signal; and determining the hearing condition of the user from the first, second, third and fourth differential signals and a first reference signal, where the first reference signal is the difference between the first signal and a first basic signal, and the first basic signal is the average of the third signal and the fourth signal.
It should be noted that, because the electroencephalogram signal measured by the secondary electrode (i.e. the second electrode) suffers from strong noise interference, the computing device may also disregard it when determining the hearing condition of the user. That is, determining the hearing condition of the user from the first signal, the third signal and the fourth signal may further include: calculating the difference between the first signal and the third signal to obtain a first differential signal; calculating the difference between the first signal and the fourth signal to obtain a third differential signal; calculating the average of the third signal and the fourth signal to obtain a first basic signal; calculating the difference between the first signal and the first basic signal to obtain a first reference signal; and determining the hearing condition of the user from the first differential signal, the third differential signal and the first reference signal.
With reference to the second aspect, in certain implementations of the second aspect, a standard deviation of the first differential signal and the first reference signal is less than a first threshold, and a correlation coefficient of the first differential signal and the first reference signal is greater than a second threshold; the standard deviation of the second differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the second differential signal and the first reference signal is larger than a second threshold value; the standard deviation of the third differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the third differential signal and the first reference signal is larger than a second threshold value; the standard deviation of the fourth differential signal and the first reference signal is smaller than the first threshold value, and the correlation coefficient of the fourth differential signal and the first reference signal is larger than the second threshold value.
With reference to the second aspect, in some implementations of the second aspect, the audio output device is connected to the computing device in a wireless manner, and the signal acquisition device is connected to the computing device in a wireless manner.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: controlling the audio output device to output the sound signal starting from a first moment; and controlling the signal acquisition device to acquire the electroencephalogram signal starting from a second moment, where the time difference between the first moment and the second moment is less than a fourth threshold.
With reference to the second aspect, in certain implementations of the second aspect, before sending the first information to the audio output device, the method further includes: detecting the connection state and wearing posture of the audio output device and of the signal acquisition device; if the audio output device is not connected to the computing device and/or the signal acquisition device is not connected to the computing device, outputting prompt information to remind the user; and if the wearing posture of the audio output device and/or the signal acquisition device is detected not to meet the wearing standard, outputting prompt information to remind the user.
With reference to the second aspect, in some implementations of the second aspect, determining the hearing condition of the user from the electroencephalogram signal includes: filtering and superimposing the electroencephalogram signals to determine the waveform of the electroencephalogram signal; and classifying the waveform using a preset algorithm to determine the hearing condition of the user.
In a third aspect, a hearing test device is provided. The device comprises a receiving unit, a transmitting unit and a processing unit. The transmitting unit is configured to: send first information to an audio output device, the first information instructing the audio output device to output a sound signal to a user; and send second information to a signal acquisition device, the second information instructing the signal acquisition device to acquire an electroencephalogram signal. The receiving unit is configured to receive the electroencephalogram signal sent by the signal acquisition device, the electroencephalogram signal being generated by the user in response to the sound signal. The processing unit is configured to determine the hearing condition of the user from the electroencephalogram signal.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to determine the hearing condition of the user from a first signal, a third signal and a fourth signal, where the first signal is the electroencephalogram signal measured by the signal acquisition device on the side opposite to where the sound signal is played, the third signal and the fourth signal are electroencephalogram signals measured on the user's scalp, and the measurement position of the third signal is different from that of the fourth signal.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to: calculate the difference between the first signal and the third signal to obtain a first differential signal; calculate the difference between the first signal and the fourth signal to obtain a third differential signal; calculate the average of the third signal and the fourth signal to obtain a first basic signal; calculate the difference between the first signal and the first basic signal to obtain a first reference signal; and determine the hearing condition of the user from the first differential signal, the third differential signal and the first reference signal.
With reference to the third aspect, in certain implementations of the third aspect, a standard deviation of the first differential signal and the first reference signal is less than a first threshold, and a correlation coefficient of the first differential signal and the first reference signal is greater than a second threshold; the standard deviation of the third differential signal and the first reference signal is smaller than the first threshold value, and the correlation coefficient of the third differential signal and the first reference signal is larger than the second threshold value.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to: calculate the difference between the first signal and the third signal to obtain a first differential signal; calculate the difference between the first signal and the fourth signal to obtain a third differential signal; and determine the hearing condition of the user from the first differential signal and the third differential signal.
With reference to the third aspect, in some implementations of the third aspect, the audio output device is wirelessly connected with the computing device, and the signal acquisition device is wirelessly connected with the computing device.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to: control the audio output device to output the sound signal starting from a first moment; and control the signal acquisition device to acquire the electroencephalogram signal starting from a second moment, where the time difference between the first moment and the second moment is less than a fourth threshold.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to: detect the connection state and wearing posture of the audio output device and of the signal acquisition device; if the audio output device is not connected to the computing device and/or the signal acquisition device is not connected to the computing device, output prompt information to remind the user; and if the wearing posture of the audio output device and/or the signal acquisition device is detected not to meet the wearing standard, output prompt information to remind the user.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to: filter and superimpose the electroencephalogram signals to determine the waveform of the electroencephalogram signal; and classify the waveform using a preset algorithm to determine the hearing condition of the user.
In a fourth aspect, a hearing test device is provided, comprising an audio output module, a signal acquisition module and a computing module. The audio output module is configured to perform the steps performed by the audio output device in the first aspect or any possible implementation of the first aspect; the signal acquisition module is configured to perform the steps performed by the signal acquisition device in the first aspect or any possible implementation of the first aspect; and the computing module is configured to perform the steps performed by the computing device in the first aspect or any possible implementation of the first aspect, or in the second aspect or any possible implementation of the second aspect.
In a fifth aspect, a hearing test device is provided, comprising a processor coupled to a memory, the memory being configured to store a program or instructions which, when executed by the processor, cause the device to implement the method of the second aspect or any possible implementation of the second aspect.
In a sixth aspect, there is provided a chip comprising a processor and a data interface, the processor reading instructions stored on a memory via the data interface to perform the method of any one of the possible implementations of the second aspect or the second aspect.
Optionally, as an implementation manner, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored on the memory, where the processor is configured to execute the method in the second aspect or any possible implementation manner of the second aspect when the instructions are executed.
In a seventh aspect, a computer readable storage medium is provided, in which a computer program or instructions is stored which, when executed, implement the method of the second aspect or any one of the possible implementations of the second aspect.
In an eighth aspect, there is provided a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the second aspect or any one of the possible implementations of the second aspect.
Drawings
Fig. 1 is a schematic diagram of a hearing test system according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a hearing test method according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of another hearing test method provided in an embodiment of the present application.
Fig. 4 is a schematic diagram of standard electroencephalogram electrode placement according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of another hearing test method provided in an embodiment of the present application.
Fig. 6 is a classification chart of brainstem auditory evoked potentials (BAEP) for acoustic neuroma according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a hearing test device according to an embodiment of the present application.
Fig. 8 is a schematic view of another hearing test device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
As described in the Background section, more and more people are affected by hearing loss, and the field of hearing testing is therefore receiving more and more attention. Current hearing test methods can be broadly classified into subjective tests and objective tests.
Subjective test methods include pure-tone threshold audiometry, speech audiometry and the like. In pure-tone threshold audiometry, the subject is presented with a number of stimulus signals under specified conditions (the stimulus frequencies generally range from 125 Hz to 8000 Hz), and the minimum sound intensity the subject can hear is compared with the normal level to judge whether hearing is impaired. This method can quickly and accurately determine the degree of hearing loss and the affected ear, and reflects the subject's hearing condition intuitively and comprehensively, but it requires the subject to understand the test requirements and cooperate, and its accuracy depends on the experience of the audiologist. Speech audiometry mainly checks whether the subject can hear speech sounds and distinguish different speech sounds, i.e. whether the subject can understand the meaning they carry. This approach provides more information about the patient's hearing function and speech discrimination, but its accuracy also depends on the audiologist's experience.
Objective test methods include otoacoustic emissions, auditory brainstem evoked potentials and the like. Otoacoustic emission testing determines whether the hair cells of the cochlea are damaged by detecting the energy emitted by the cochlea. It is commonly used for newborn hearing screening and can also indicate whether the auditory efferent pathway is functioning normally, but it only reflects the cochlea's sound-emitting function rather than the overall hearing condition, and it relies on an otoacoustic emission detector for clinical examination, so subjects cannot screen themselves. Auditory brainstem evoked potential testing assesses auditory function by detecting the series of neurogenic electrical activities produced in the auditory system, from the cochlea of the inner ear to the auditory centre of the cerebral cortex, in response to acoustic stimulation. It can reflect peripheral auditory sensitivity and the neural conduction function of the brainstem auditory pathway, but it likewise depends on an auditory brainstem response tester for clinical examination and cannot be performed by the subject alone.
It will be appreciated that auditory brainstem evoked potentials reflect peripheral auditory acuity and the neural conduction function of the brainstem auditory pathway, i.e. whether the auditory nerve, the conduction pathway and the auditory cortex are normal. In other words, although this method can objectively reflect the subject's hearing condition, it relies on an auditory brainstem response tester, so the subject cannot perform hearing tests in daily life.
Therefore, the hearing detection method, device and system provided in this application allow the user's hearing condition to be assessed automatically through the cooperation of multiple devices; that is, the user can complete a hearing test with common everyday electronic devices, which is simple and convenient.
Fig. 1 is a schematic diagram of a hearing test system according to an embodiment of the present application.
As shown in fig. 1, the hearing test system 100 may include an audio output module 120, a signal acquisition module 130, and a hearing assessment module 140. In some embodiments, the hearing test system 100 may further include a pre-processing module 110.
It is understood that in some embodiments the preprocessing module 110, the audio output module 120, the signal acquisition module 130 and the hearing assessment module 140 may be integrated into the same device. In other embodiments the audio output module 120, the signal acquisition module 130 and the hearing assessment module 140 may be integrated into different devices, with the preprocessing module 110 integrated into the same device as the hearing assessment module 140. By way of example, the audio output module 120 may be integrated in a first device, which may be the audio output device described below; the signal acquisition module 130 may be integrated in a second device, which may be the signal acquisition device described below; and the preprocessing module 110 and the hearing assessment module 140 may be integrated in a third device, which may be the computing device described below.
The preprocessing module 110 is configured to detect whether the connection state and wearing posture of the audio output module and the signal acquisition module meet the requirements. When the user starts a hearing test on the computing device, the preprocessing module 110 may detect the connection state of the audio output device and of the signal acquisition device; if the audio output device and/or the signal acquisition device is not connected to the computing device, the computing device outputs prompt information reminding the user to connect the corresponding device so that the hearing test requirements are met. The preprocessing module 110 may also detect the wearing posture of the audio output device and of the signal acquisition device; if either does not meet the specified requirements, the computing device outputs prompt information asking the user to adjust the corresponding device.
The audio output module 120 is configured to output sound signals. It can output standard test audio (e.g. sound at a specific frequency and loudness) to the ear and can be connected to the hearing assessment module 140 in a wired or wireless manner.
The signal acquisition module 130 is configured to acquire the electrical signal generated by the brainstem after the user's ear receives the specific audio. It can acquire electroencephalogram signals and can be connected to the hearing assessment module 140 in a wired or wireless manner.
The hearing assessment module 140 is configured to process the electroencephalogram signals acquired by the signal acquisition module 130, identify the different signals and assess the user's hearing. The hearing assessment module 140 can be connected to the audio output module 120 and the signal acquisition module 130.
Fig. 2 is a schematic flow chart of a hearing test method according to an embodiment of the present application. The hearing test method 200 mainly involves the interaction of an audio output device, a signal acquisition device and a computing device, and the method 200 may include S201 to S204.
S201, the audio output device receives the first information sent by the computing device.
Specifically, the computing device sends first information and the audio output device receives it; the first information is used to instruct the audio output device to output a sound signal to the user. That is, when the audio output device receives the first information, it may output the sound signal to the user. The audio output device, which may be, for example, a headset, an in-ear earphone or a Bluetooth headset, can be connected to the computing device in a wired or wireless manner.
S202, the signal acquisition device receives second information sent by the computing device.
Specifically, the computing device sends second information and the signal acquisition device receives it; the second information is used to instruct the signal acquisition device to acquire an electroencephalogram signal, the electroencephalogram signal being generated by the user in response to the sound signal. That is, when the signal acquisition device receives the second information, it may start acquiring the electroencephalogram signal. The signal acquisition device can be connected to the computing device in a wired or wireless manner and is provided with electrodes at the forehead and at the mastoid behind the ear (e.g. smart glasses), or with single-electrode devices that can be placed flexibly (e.g. smart sensors).
The signal acquisition device comprises a first electrode, a second electrode, a third electrode and a fourth electrode. The first electrode is in contact with one ear of the user, the second electrode is in contact with the user's other ear, and the other ear is the ear receiving the sound signal; the third electrode and the fourth electrode are in contact with the user's scalp at different positions. For example, the user's right ear is the ear receiving the sound signal and the user's left ear is the opposite ear. The electroencephalogram signal measured by the first electrode is a first signal, the signal measured by the second electrode is a second signal, the signal measured by the third electrode is a third signal, and the signal measured by the fourth electrode is a fourth signal.
It will be appreciated that the method may further comprise, prior to S201 and S202: networking the computing device, the audio output device and the signal acquisition device, and performing time synchronization.
After the computing device, the audio output device and the signal acquisition device are networked, the computing device is the primary device and the audio output device and the signal acquisition device are secondary devices. The primary device can send information to each secondary device, i.e. the computing device can send information to the audio output device and to the signal acquisition device respectively.
It should be noted that the computing device, the audio output device and the signal acquisition device need to be time-synchronized. When the computing device and the audio output device are connected in a wired manner, their clocks can be kept synchronized directly; when they are connected wirelessly, the computing device, the audio output device and the signal acquisition device need to perform time synchronization, which is described in detail in connection with Fig. 6.
S203, the signal acquisition device transmits the brain electrical signals to the computing device.
This part is described in detail in S305 and is not repeated here.
For example, the signal acquisition device may include a first electrode in contact with one ear of the user, a second electrode in contact with the user's other ear (the ear receiving the sound signal), and a third electrode and a fourth electrode in contact with the user's scalp at different positions. The electroencephalogram signal measured by the first electrode is a first signal, the signal measured by the second electrode is a second signal, the signal measured by the third electrode is a third signal, and the signal measured by the fourth electrode is a fourth signal. The signal acquisition device may send the acquired first, second, third and fourth signals to the computing device.
S204, the computing device determines the hearing condition of the user from the electroencephalogram signal.
In one possible implementation, the computing device may determine the hearing condition of the user from the first signal, the third signal and the fourth signal.
Optionally, the computing device calculates the difference between the first signal and the third signal to obtain a first differential signal, calculates the difference between the first signal and the fourth signal to obtain a third differential signal, and determines the hearing condition of the user from the first differential signal and the third differential signal.
The computing device may filter and superimpose the first differential signals and the third differential signals acquired over multiple measurements to obtain the user's electroencephalogram waveform, and classify the resulting waveform with a trained classifier to determine whether it is a normal or an abnormal waveform.
Optionally, the computing device calculates the difference between the first signal and the third signal to obtain a first differential signal; calculates the difference between the first signal and the fourth signal to obtain a third differential signal; calculates the average of the third signal and the fourth signal to obtain a first basic signal; calculates the difference between the first signal and the first basic signal to obtain a first reference signal; and determines the hearing condition of the user from the first differential signal, the third differential signal and the first reference signal. The specific process of determining the hearing condition from these signals may refer to S306 and is not repeated here.
In another possible implementation, the computing device may determine the hearing condition of the user from the first signal, the second signal, the third signal and the fourth signal.
Optionally, the computing device calculates the difference between the first signal and the third signal to obtain a first differential signal; calculates the difference between the second signal and the third signal to obtain a second differential signal; calculates the difference between the first signal and the fourth signal to obtain a third differential signal; calculates the difference between the second signal and the fourth signal to obtain a fourth differential signal; and determines the hearing condition of the user from the first, second, third and fourth differential signals.
The computing device may filter and superimpose the first, second, third and fourth differential signals acquired over multiple measurements to obtain the user's electroencephalogram waveform, and classify the resulting waveform with a trained classifier to determine whether it is a normal or an abnormal waveform.
Optionally, the computing device calculates the difference between the first signal and the third signal to obtain a first differential signal; calculates the difference between the second signal and the third signal to obtain a second differential signal; calculates the difference between the first signal and the fourth signal to obtain a third differential signal; calculates the difference between the second signal and the fourth signal to obtain a fourth differential signal; calculates the average of the third signal and the fourth signal to obtain a first basic signal; calculates the difference between the first signal and the first basic signal to obtain a first reference signal; and determines the hearing condition of the user from the first, second, third and fourth differential signals and the first reference signal. The specific process of determining the hearing condition from the first differential signal, the third differential signal and the first reference signal may refer to S306 and is not repeated here.
In this embodiment of the application, the audio output device can output a sound signal to the user, the signal acquisition device can acquire the electroencephalogram signal generated by the user in response to the sound signal and send it to the computing device, and the computing device determines the hearing condition of the user from the received electroencephalogram signal. With this hearing test method, the user's hearing condition can be assessed automatically through the cooperation of multiple devices; that is, the user can complete a hearing test with common everyday electronic devices, which is simple and convenient. In addition, because the audio output device and the signal acquisition device are independent of each other, the noise induced in the measurement circuit by the operating current of the audio output device can be reduced and the signal-to-noise ratio improved.
Fig. 3 is a schematic flow chart of another hearing test method provided in an embodiment of the present application. The hearing test method 300 mainly involves the interaction of an audio output device, a signal acquisition device and a computing device, and the method 300 may comprise S301 to S307.
S301, detecting the connection state and the wearing posture of the audio output device by the computing device and the connection state and the wearing posture of the signal acquisition device.
When a user turns on hearing detection on a computing device (e.g., a mobile phone), the connection state of the audio output device and the connection state of the signal acquisition device need to be detected, as do the wearing posture of the audio output device and the wearing posture of the signal acquisition device.
When a user turns on a hearing test on a computing device (e.g., a cell phone), the computing device may detect the connection state of the audio output device (e.g., a headset) and the connection state of the signal acquisition device (e.g., smart glasses). If the audio output device is not connected to the computing device and/or the signal acquisition device is not connected to the computing device, the user may be prompted to connect the audio output device and/or the signal acquisition device. For example, the mobile phone may display a prompt box or pop-up window indicating that the earphone is not connected and/or the smart glasses are not connected.
When the user turns on the hearing test on the computing device (e.g., a mobile phone), the computing device may also detect the posture in which the user wears the audio output device (e.g., a headset) and the posture in which the user wears the signal acquisition device (e.g., smart glasses). If the audio output device does not meet the wearing standard and/or the signal acquisition device does not meet the wearing standard, the user needs to be reminded to adjust the wearing state of the audio output device and/or the signal acquisition device until the standard is met. For example, a prompt box or pop-up window may appear indicating that the earphone is not worn correctly and/or the smart glasses are not worn correctly.
After detecting that the connection state and the wearing posture of the audio output device and the connection state and the wearing posture of the signal acquisition device meet the requirements, the positions of the audio output device and the signal acquisition device can be marked on the computing device.
When a user turns on a hearing test on a computing device (e.g., a cell phone), the positions of the audio output device and the signal acquisition device need to be marked on the computing device. In some embodiments, during the hearing test initialization phase, the computing device may display a prompt box, pop-up window, etc. to remind the user to mark the positions of the audio output device and the signal acquisition device. In some embodiments, the user may also actively mark the positions of the audio output device and the signal acquisition device while conducting the hearing test on the computing device. In some embodiments, the computing device may provide candidate positions for the audio output device and the signal acquisition device from which the user can select.
The audio output device can be a headset and the like, and the sounding position of the audio output device can be a left ear or a right ear; the signal acquisition device may include smart glasses, patches, helmets, etc., and the location of the signal acquisition device may be, for example, the left ear, right ear, top of the head, forehead, or other locations of the scalp.
As shown in fig. 4, fig. 4 shows a standard diagram of electroencephalogram electrode placement. The location of the signal acquisition device may include a primary acquisition location, a secondary acquisition location, and an additional acquisition location. The main acquisition position of the signal acquisition device can comprise the point A1 or the point A2 in fig. 4, the secondary acquisition position of the signal acquisition device can comprise the point A1 or the point A2 in fig. 4, and the additional acquisition position of the signal acquisition device can be preferentially placed at the positions of Cz, Fpz, Nz, Fz, CPz, Pz, POz, Oz, Iz and the like.
It can be understood that when the audio output device emits a sound signal at the left ear of the user, the main acquisition position of the signal acquisition device is the right ear of the user; when the audio output device sends out a sound signal at the right ear of the user, the main acquisition position of the signal acquisition device is the left ear of the user.
For example, if the sounding position of the audio output device is the left ear, the primary collecting position of the signal collecting device may be the point A2 in fig. 4, and the secondary collecting position of the signal collecting device may be the point A1 in fig. 4; if the sounding position of the audio output device is the right ear, the main acquisition position of the signal acquisition device may be the point A1 in fig. 4, and the secondary acquisition position of the signal acquisition device may be the point A2 in fig. 4.
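As a trivial illustration of this mapping, the correspondence between the sounding side and the acquisition positions could be encoded as follows; the dictionary name and structure are assumptions for this example only.

```python
# sounding side of the audio output device -> (primary, secondary) acquisition position
ACQUISITION_POSITIONS = {
    "left":  ("A2", "A1"),   # sound at the left ear: primary electrode at A2 (right side)
    "right": ("A1", "A2"),   # sound at the right ear: primary electrode at A1 (left side)
}

primary, secondary = ACQUISITION_POSITIONS["left"]   # -> ("A2", "A1")
```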
In some embodiments, the signal acquisition device may include at least three sensors (e.g., a first sensor, a second sensor, and a third sensor), the first sensor may include a primary electrode (or a secondary electrode) at the A1 position, the second sensor may include a secondary electrode (or a primary electrode) at the A2 position, and the third sensor may include a reference electrode at the Cz or Fpz position.
Optionally, the signal acquisition device may further include a fourth sensor, which may include additional electrodes at Nz, Fz, CPz, Pz, POz, Oz, Iz or the like.
After the connection state and the wearing posture of the audio output equipment and the signal acquisition equipment are detected to meet the requirements, the computing equipment, the audio output equipment and the signal acquisition equipment can be networked, and time synchronization is performed.
Illustratively, the computing device, the audio output device, and the signal acquisition device may be networked via a distributed soft bus. The distributed soft bus provides a unified distributed communication capability for interconnection and interworking between devices, and provides the conditions for imperceptible discovery and zero-latency transmission between devices. Based on the soft bus technology, multiple devices can easily cooperate to complete a task, and a task can be transferred from one device to another to continue execution. For users, the soft bus enables self-discovery and self-networking, so the user does not need to attend to the networking of the devices.
In addition, time synchronization needs to be performed among the computing device, the audio output device and the signal acquisition device. In the embodiment of the application, the audio output device and the computing device may be connected in a wired manner, so that the audio output device and the computing device are time-synchronized; the signal acquisition device and the computing device may likewise be connected in a wired manner, so that the signal acquisition device and the computing device are time-synchronized; time synchronization among the computing device, the audio output device and the signal acquisition device is thereby achieved. In other embodiments, the audio output device and the computing device may be connected wirelessly, and the signal acquisition device and the computing device may also be connected wirelessly; in this case, time synchronization is required among the computing device, the audio output device and the signal acquisition device, as will be described in detail in fig. 6.
S302, the computing device sends first information to the audio output device.
Specifically, after detecting that the connection state and the wearing posture of the audio output device and the connection state and the wearing posture of the signal acquisition device meet requirements, the computing device can send first information to the audio output device, and after receiving the first information, the audio output device starts to send audio signals to one side ear.
Wherein the first information may comprise first policy information indicating an occasion to transmit the first data set. That is, the first policy information is used to indicate under what circumstances the audio output device is to send the first data set to the computing device.
S303, the computing device sends second information to the signal acquisition device.
Specifically, the computing device may send the second information to the signal acquisition device, and after the signal acquisition device receives the second information, the signal acquisition device starts to acquire the electroencephalogram signal.
Wherein the second information may comprise second policy information indicating an occasion to transmit the second data set. That is, the second policy information is used to indicate under what circumstances the signal acquisition device is to send the second data set to the computing device.
It should be noted that S302 and S303 may be performed simultaneously, that is, the computing device may simultaneously send the first information to the audio output device and send the second information to the signal acquisition device.
It will be appreciated that the computing device may control the audio output device to output standard test audio, and the signal acquisition device (e.g., smart glasses) may set the signal acquisition electrodes at positions corresponding to the forehead, the ears, etc. to acquire electrical signals (i.e., brain electrical signals) generated by the brainstem after a specific audio stimulus.
Wherein the audio signal is standard test audio. The standard test audio may be, for example, a sound of a fixed frequency and a fixed duration, the frequency of the audio signal may be, for example, 1kHz, 4kHz, 8kHz or 12kHz, and the duration of the audio signal may be, for example, 1s or 2s, i.e., the audio signal may be a short sound/short pure tone of 1kHz, 4kHz, 8kHz or 12 kHz. By setting the audio signal from low frequency to high frequency, whether the user can hear the sound of each frequency can be detected, so that whether the hearing of the user is normal can be measured more accurately.
It should be appreciated that the audio output device may repeatedly sound at intervals of a preset period (e.g., 5 s), and the signal acquisition device repeatedly acquires the brain electrical signals. For example, if the sounding duration of the audio signal is 1s and sounding is repeated at intervals of 5s, the signal acquisition device can repeatedly measure 10 electroencephalogram signals within one minute.
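For illustration, a standard test tone and its repetition schedule could be generated as in the sketch below. The 48 kHz audio sampling rate and the 0.5 amplitude are assumptions, not values taken from the application; only the 1 kHz frequency, 1 s duration and 5 s interval come from the examples above.

```python
import numpy as np

FS = 48000          # assumed audio sampling rate, Hz
FREQ = 1000.0       # test-tone frequency, Hz (example value from the text)
TONE_S = 1.0        # sounding duration, s
GAP_S = 5.0         # interval between soundings, s

def make_tone(freq=FREQ, duration=TONE_S, fs=FS):
    t = np.arange(int(duration * fs)) / fs
    return 0.5 * np.sin(2 * np.pi * freq * t)

# one minute of stimulation: a 1 s tone followed by 5 s of silence, repeated 10 times
tone = make_tone()
silence = np.zeros(int(GAP_S * FS))
one_minute = np.tile(np.concatenate([tone, silence]), 10)
```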
It should be noted that the times of the audio output device, the signal acquisition device and the computing device are synchronized, that is, the time errors of the audio output device, the signal acquisition device and the computing device should be less than a preset threshold (for example, 1 ms).
S304, the audio output device sends the collected first data set to the computing device.
It will be appreciated that when the computing device sends the first information to the audio output device, the first information may carry the first policy information, which may include the timing of transmitting the first data set, e.g., the audio output device reports once per minute.
In some embodiments, if the user marks the sounding side of the sound signal on the computing device, the data collected by the audio output device (i.e., the first data set) may include the sound frequency and the sounding time stamp. That is, the computing device can learn what sound the audio output device made, at what time and at what place. The sounding time stamp may comprise a sounding start time and a sounding end time, and the sound frequency may be, for example, 1 kHz, 2 kHz, 4 kHz or 8 kHz.
In some embodiments, if the audio output device autonomously determines the sounding side of the sound signal, the data collected by the audio output device (i.e., the first data set) may include the sound frequency, the sounding time stamp, and the sounding side. That is, the computing device can learn what sound the audio output device made, at what time and at what place.
By way of example, the data sent by the audio output device to the computing device may indicate that a 1 kHz sound was emitted at the left ear from 06:00:00 to 06:00:01 (duration 1 s). After the computing device receives the data sent by the audio output device, the relationship between the audio signal and the time stamp can be determined.
S305, the signal acquisition device sends the collected second data set to the computing device.
It will be appreciated that when the computing device sends the second information to the signal acquisition device, the second policy information may be carried in the second information, which may include the timing of transmitting the second data set. For example, when the number of signal acquisition device acquisitions reaches a set number of repetitions (e.g., 900), the second data set may be sent to the computing device.
It should be noted that the first policy information and the second policy information should be the same policy information. Illustratively, the first policy information and the second policy information each indicate that data is to be returned once per minute.
It will be appreciated that the data collected by the signal acquisition device (i.e. the second data set) may include the brain electrical signal and the corresponding time stamp of the brain electrical signal. That is, the computing device may learn what time and where the signal acquisition device measures the brain electrical signals.
For example, if the sampling frequency of the electroencephalogram signal is 100 Hz, that is, 100 electroencephalogram samples can be acquired in one second, the data sent to the computing device by the signal acquisition device may be the 100 electroencephalogram samples acquired at the A1 position and the time stamps corresponding to those 100 samples. After the computing device receives the data sent by the signal acquisition device, the relationship between the electroencephalogram signal and the time stamp can be determined.
S306, the computing device determines a target signal from the first data set and the second data set.
Specifically, after receiving the data (i.e., the first data set) sent by the audio output device and the data (i.e., the second data set) sent by the signal acquisition device, the computing device may perform data processing on the received data to determine a reserved signal (i.e., a target signal).
It should be understood that the data sent by the audio output device (i.e., the first data set) includes the sound frequency, the sounding time stamp and the sounding side, and the data sent by the signal acquisition device (i.e., the second data set) includes the electroencephalogram signal, the time stamp corresponding to the electroencephalogram signal and the acquisition position of the electroencephalogram signal. When the computing device receives the first data set and the second data set, it can determine the electroencephalogram signal of the user at a certain frequency according to the correspondence between the time stamps of the first data set and the time stamps of the second data set, and can thereby determine the hearing situation of the user at that frequency.
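A minimal sketch of this time-stamp alignment is given below: the sounding time stamps from the first data set are used to cut matching epochs out of the electroencephalogram stream in the second data set. The field names, the 100 Hz sampling rate and the 3 s epoch length are assumptions chosen to match the examples in this description.

```python
import numpy as np

EEG_FS = 100  # assumed electroencephalogram sampling rate, Hz

def extract_epochs(eeg, eeg_start_ms, soundings, epoch_ms=3000):
    """Cut one electroencephalogram epoch per sounding event.

    eeg:          1-D array of samples from one acquisition position
    eeg_start_ms: time stamp (ms) of the first sample
    soundings:    list of dicts such as {"freq_hz": 1000, "start_ms": ..., "side": "left"}
    Returns a dict mapping stimulus frequency to the list of matching epochs.
    """
    epochs = {}
    n = int(epoch_ms * EEG_FS / 1000)
    for s in soundings:
        offset = int((s["start_ms"] - eeg_start_ms) * EEG_FS / 1000)
        if 0 <= offset and offset + n <= len(eeg):
            epochs.setdefault(s["freq_hz"], []).append(eeg[offset:offset + n])
    return epochs
```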
In some embodiments, there is one additional reference electrode; for example, the signal acquisition device includes a primary electrode at the A1 position, a secondary electrode at the A2 position, and a reference electrode at the Cz position. The computing device may determine the first differential signal by subtracting the reference signal generated by the reference electrode from the primary signal generated by the primary electrode, and determine the second differential signal by subtracting the reference signal generated by the reference electrode from the secondary signal generated by the secondary electrode, thereby determining the target signal from the first differential signal and the second differential signal.
For example, an electrode at a position other than the two ears is selected as the reference electrode V_ref, the electrode on the sounding side of the audio is the secondary electrode V_i, and the electrode on the side opposite to the sounding side is the main electrode V_m. The first differential signal S_1 and the second differential signal S_2 are calculated as S_1 = V_m - V_ref and S_2 = V_i - V_ref, and the target signal is determined according to S_1 and S_2.
Alternatively, the computing device may take into account the measured brain electrical signals of the primary and secondary electrodes in combination when determining the target signal. That is, the computing device may consider the first differential signal and the second differential signal in combination when determining the target signal.
Alternatively, considering that the secondary electrode is located on the sound emitting side of the sound signal and is susceptible to noise, the computing device may not consider the electroencephalogram signal measured by the secondary electrode, i.e. the second differential signal, when determining the target signal.
In other embodiments, if there are a plurality of other additional electrodes, assume that, in addition to the secondary electrode V_i and the main electrode V_m, there are N additional reference electrodes V_ref1, V_ref2, …, V_refN on the scalp. The computing device may determine the target signal using an automatic re-reference algorithm as follows:
Step one: calculate the average basic reference voltage V_avg = (V_ref1 + V_ref2 + … + V_refN)/N, and calculate V_m - V_avg to obtain the reference signal S_ref.
Step two: calculate V_m - V_ref1 to obtain the first differential signal S_1, and calculate the standard deviation σ_1 and the correlation C_1 between the first differential signal S_1 and the reference signal S_ref.
If the standard deviation σ_1 is less than σ_max and C_1 is greater than the acceptable correlation C_min, mark the first differential signal as reserved; if C_1 is less than or equal to C_min, or the standard deviation σ_1 is greater than or equal to σ_max, mark the first differential signal as discarded. C_min may be, for example, 0.8, and σ_max may be, for example, 2 μV.
Step three: perform the same calculation for the remaining reference electrodes; for example, calculate V_m - V_refN to obtain the N-th differential signal S_N, and calculate the standard deviation σ_N and the correlation C_N between the N-th differential signal S_N and the reference signal S_ref.
Similarly, if the standard deviation σ_N is less than σ_max and C_N is greater than the acceptable correlation C_min, mark the N-th differential signal as reserved; if C_N is less than or equal to C_min, or the standard deviation σ_N is greater than or equal to σ_max, mark the N-th differential signal as discarded. C_min may be, for example, 0.8, and σ_max may be, for example, 2 μV.
Step four: save the differential signals marked as reserved (that is, determine the target signal).
Further, the computing device may also take into account the electroencephalogram signal measured by the secondary electrode V_i. That is, before determining the target signal using the automatic re-reference algorithm (i.e., before step four), the computing device may also perform the following steps:
Calculate V_i - V_ref1 to obtain the second differential signal S_2, and calculate the standard deviation σ_2 and the correlation C_2 between the second differential signal S_2 and the reference signal S_ref.
If the standard deviation σ_2 is less than σ_max and C_2 is greater than the acceptable correlation C_min, mark the second differential signal as reserved; if C_2 is less than or equal to C_min, or the standard deviation σ_2 is greater than or equal to σ_max, mark the second differential signal as discarded. C_min may be, for example, 0.8, and σ_max may be, for example, 2 μV.
It will be appreciated that if the subject performs other activities during the hearing test, the data collected by the electrodes will contain noise differences or voltage differences, the correlation between the reference signal S_ref and signals such as the first differential signal S_1 will become smaller, and the variance will become larger; data of lower quality can therefore be discarded, and data meeting certain requirements can be reserved.
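The retention test in steps one to four above could be sketched as follows. The thresholds σ_max = 2 μV and C_min = 0.8 are the example values from the text; the interpretation of the "standard deviation between two signals" as the standard deviation of their difference, and all function and variable names, are assumptions made for this illustration.

```python
import numpy as np

SIGMA_MAX = 2.0   # μV, example maximum acceptable standard deviation
C_MIN = 0.8       # example minimum acceptable correlation

def auto_re_reference(v_m, v_i, v_refs):
    """Sketch of the automatic re-reference algorithm described above.

    v_m:    signal from the main electrode (side opposite to the sounding ear)
    v_i:    signal from the secondary electrode (sounding side)
    v_refs: list of signals from the N additional reference electrodes
    Returns the differential signals marked as reserved (the target signal).
    """
    v_avg = np.mean(np.vstack(v_refs), axis=0)      # average basic reference voltage
    s_ref = v_m - v_avg                             # reference signal

    candidates = [v_m - v_ref for v_ref in v_refs]  # differential signals from the main electrode
    candidates.append(v_i - v_refs[0])              # differential signal from the secondary electrode

    reserved = []
    for diff in candidates:
        sigma = np.std(diff - s_ref)                # spread of the difference to the reference signal
        corr = np.corrcoef(diff, s_ref)[0, 1]       # correlation with the reference signal
        if sigma < SIGMA_MAX and corr > C_MIN:
            reserved.append(diff)                   # marked as reserved
        # otherwise the differential signal is discarded
    return reserved
```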
In addition, in the case where the subject carries out the test normally and the quality of the signal collected by each electrode meets the standard, when N additional electrodes are added for re-referencing, only 1/(N+2) of the test time is needed each time to achieve the test effect of the prior art; or, for the same test time, the signal-to-noise ratio can reach √(N+2) times that of the prior art.
For example, if the test is repeated 900 times according to the conventional scheme, the signal-to-noise ratio increases by a factor of √900 = 30. According to the scheme provided by the application, under ideal conditions (namely, the subject carries out the test normally and the quality of the signal collected by each electrode meets the standard), re-referencing with one additional electrode requires only 300 tests to achieve the same signal-to-noise ratio as a single reference electrode; if the same 900 tests are carried out, the signal-to-noise ratio can be increased by a factor of √3 compared with the conventional scheme.
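As a quick arithmetic check of these figures, assuming the usual √n signal-to-noise gain from averaging n repetitions, which is the relationship the passage appears to rely on:

```python
import math

conventional_repeats = 900
print(math.sqrt(conventional_repeats))                  # 30.0: gain of the conventional 900-repeat scheme

extra_electrodes = 1                                    # N additional electrodes used for re-referencing
print(conventional_repeats // (extra_electrodes + 2))   # 300: repeats needed for the same signal-to-noise ratio
print(math.sqrt(extra_electrodes + 2))                  # ~1.73: extra gain if all 900 repeats are kept
```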
S307, the computing device processes the target signal and determines the hearing situation of the user.
In some embodiments, the computing device performs filtering and superposition processing on the target signal to determine the waveform of the user's auditory brainstem evoked potential (brainstem auditory evoked potentials, BAEP), and may determine the user's current hearing situation by classifying the waveform with a pre-trained classifier.
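A sketch of this filtering, superposition and classification step is given below, assuming numpy and scipy are available. The 1-30 Hz pass band, the 100 Hz sampling rate and the template-matching stand-in for the trained classifier are assumptions for illustration, not parameters specified in the application.

```python
import numpy as np
from scipy.signal import butter, filtfilt

EEG_FS = 100  # assumed electroencephalogram sampling rate, Hz

def baep_waveform(epochs, low_hz=1.0, high_hz=30.0):
    """Band-pass filter each epoch, then average (superpose) across repetitions."""
    b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=EEG_FS)
    filtered = [filtfilt(b, a, np.asarray(e, dtype=float)) for e in epochs]
    return np.mean(filtered, axis=0)

def classify(waveform, templates):
    """Toy stand-in for the pre-trained classifier: pick the best-correlated template."""
    scores = {label: np.corrcoef(waveform, t)[0, 1] for label, t in templates.items()}
    return max(scores, key=scores.get)
```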
By way of example, fig. 5 provides a classification of auditory brainstem evoked potentials (BAEP) for acoustic neuroma. As shown in fig. 5, the hearing situation of the user can be classified into four types: type 1, type 2, type 3 and normal. For a normal user, waves I to V can all be recorded: wave I originates from the peripheral portion of the cochlear nerve and reflects the action potential of the extracranial segment of the auditory nerve; wave II originates in the cochlear nucleus and is related to electrical activity in the intracranial segment of the auditory nerve; wave III originates from the superior olivary nucleus and is closely related to its electrical activity; wave IV originates from the ventral nucleus group of the lateral lemniscus; wave V originates from the inferior colliculus. For a type 1 user, waves I to V cannot be recorded, indicating severe impairment of the user's auditory nerve; for a type 2 user, waves II to V cannot be recorded, indicating that the intracranial segment of the auditory nerve or the brainstem is severely damaged; the prolonged III-V interwave interval of a type 3 user suggests that the lesion may affect the auditory conduction pathways within the brainstem.
It will be appreciated that when the computing device classifies waveforms, it may be directly compared to a classification chart such as that shown in fig. 5 to determine a user's hearing profile.
The classifier may be, for example, a neural network, a bayesian classifier, a support vector machine (support vector machine, SVM), and the like, which is not limited in this application.
According to the hearing detection method described above, the electrical signals generated by the brainstem after the user's ear receives a specific audio stimulus are acquired, the data are processed by means such as signal superposition and re-referencing, and the electrical signal waveform is automatically identified by means such as machine learning, so that an objective evaluation of the user's hearing situation is obtained automatically. In addition, by adding additional electrodes for automatic re-referencing, the test time can be reduced, or the test accuracy can be improved for the same test time.
Fig. 6 is a schematic flow chart of another hearing test method provided in an embodiment of the present application. The hearing test method 600 mainly involves the interaction of an audio output device, a signal acquisition device and a computing device, and the method 600 may comprise S601 to S608.
S601, the computing device detects a connection state and a wearing posture of the audio output device, and a connection state and a wearing posture of the signal acquisition device.
The specific content of this step may refer to S301, and will not be described herein.
S602, time synchronization is performed by the computing device, the audio output device and the signal acquisition device.
After it is detected that the connection state and the wearing posture of the audio output device and of the signal acquisition device meet the conditions, the computing device, the audio output device and the signal acquisition device are networked through a soft bus, and time synchronization is performed.
It will be appreciated that when the hearing situation of the user is measured wirelessly, a relatively high time-synchronization accuracy between the devices (time error < 1 ms) is also required.
For example, the master device may initiate time synchronization with each slave device: the master device synchronizes its time to the slave devices, each slave device sends the received time back to the master device, and after receiving it the master device may calculate the delay between itself and that slave device. If the delay error between the master device and a slave device is greater than a preset threshold, the master device sends its own time plus the calculated delay to that slave device, and the slave device sets its own time to the received value, thereby completing time synchronization. If the error between the times returned to the master device by the slave devices is less than or equal to the preset threshold, the times of the slave devices are considered synchronized. The master device may be the computing device, and the slave devices may include the audio output device and the signal acquisition device.
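The exchange is only loosely specified above, so the following is one plausible reading of a single master/slave round, with the 1 ms threshold taken from the text and everything else assumed for illustration.

```python
MAX_ERROR_MS = 1.0   # example threshold: the residual time error should stay below 1 ms

def sync_round(master_send_ms, echo_received_ms):
    """One synchronization round between the master (computing device) and one slave.

    master_send_ms:   master clock value when the master's time was sent to the slave
    echo_received_ms: master clock value when the slave's echo of that time arrived
    Returns the value the slave should set its clock to, or None if the slave is
    already within the acceptable error.
    """
    delay_ms = (echo_received_ms - master_send_ms) / 2.0   # rough one-way delay estimate
    if delay_ms <= MAX_ERROR_MS:
        return None                                        # already synchronized
    return echo_received_ms + delay_ms                     # master time + calculated delay
```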
S603, the computing device sends the first information to the audio output device.
S604, the computing device sends the second information to the signal acquisition device.
S603 and S604 may refer to S302 and S303, respectively, and are not described herein.
After the audio output device and the signal acquisition device complete time synchronization, the signal acquisition device starts to acquire the electroencephalogram signal, and the acquired data are stored in groups, one group per audio output. Further, during wireless measurement, if the time stamps of the devices are found to differ by more than the maximum threshold, the most recently acquired group of electroencephalogram data needs to be discarded and time synchronization performed again.
Illustratively, the computing device initiates wireless measurement of the user's hearing situation, the audio output device sounds once for 1 s, and the signal acquisition device acquires for 3 s. For example, the audio output device starts sounding when its device time stamp is 100000000, ends sounding at 100001000, and sends the time stamps to the computing device; the signal acquisition device begins acquiring the electroencephalogram signal at 100001000, ends at 100004000, and sends the result to the computing device. The computing device determines from the received data that the start times of the two devices differ by 1 s, which is greater than the maximum threshold, so it discards this group of data and re-initiates time synchronization.
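A sketch of this per-group check follows; the numeric threshold is an assumption, since the text only states that the 1 s error in this example exceeds the maximum allowed value.

```python
MAX_START_ERROR_MS = 100.0   # assumed maximum threshold (the text gives no concrete value)

def keep_group(sound_start_ms, eeg_start_ms):
    """Return True if this group of electroencephalogram data may be kept.

    With the example above (sound_start_ms = 100000000, eeg_start_ms = 100001000)
    the error is 1000 ms, so the group is discarded and synchronization is redone.
    """
    return abs(eeg_start_ms - sound_start_ms) <= MAX_START_ERROR_MS
```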
S605, the audio output device sends the collected first data set to the computing device.
S606, the signal acquisition device sends the collected second data set to the computing device.
S607, the computing device determines a target signal from the first data set and the second data set.
S608, the computing device processes the target signal, and determines the hearing situation of the user.
The specific contents of S605 to S608 may refer to S304 to S307, respectively, and will not be described herein.
According to the hearing detection method, the brain wave signals can be measured wirelessly while the test time is shortened and the accuracy is improved, so that the hearing of a user can be detected wirelessly.
Fig. 7 is a schematic block diagram of a hearing test device provided in an embodiment of the present application. The apparatus 700 shown in fig. 7 includes a receiving unit 701, a transmitting unit 702, and a processing unit 703.
The transmitting unit 702 is configured to: transmitting first information to the audio output device, the first information being for instructing the audio output device to output a sound signal to a user; and sending second information to the signal acquisition equipment, wherein the second information is used for indicating the signal acquisition equipment to acquire the electroencephalogram signals.
The receiving unit 701 is configured to: receiving an electroencephalogram signal sent by the signal acquisition equipment, wherein the electroencephalogram signal is an electroencephalogram signal generated by a user aiming at the sound signal.
The processing unit 703 is configured to: determining the hearing situation of the user according to the electroencephalogram signals.
Optionally, the processing unit 703 is further configured to: determining a hearing situation of the user according to the first signal, the third signal and the fourth signal; the first signal is an electroencephalogram signal measured by the signal acquisition device on the side opposite to the sounding of the sound signal, the third signal and the fourth signal are electroencephalogram signals measured by the signal acquisition device on the scalp of the user, and the measurement position of the third signal is different from the measurement position of the fourth signal.
Optionally, the processing unit 703 is further configured to: calculating the difference value of the first signal and the third signal to obtain a first differential signal; calculating the difference value of the first signal and the fourth signal to obtain a third differential signal; calculating the average value of the third signal and the fourth signal to obtain a first basic signal; calculating the difference value between the first signal and the first basic signal to obtain a first reference signal; a hearing profile of the user is determined from the first differential signal, the third differential signal, and the first reference signal.
Optionally, the standard deviation of the first differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the first differential signal and the first reference signal is larger than a second threshold value; the standard deviation of the third differential signal and the first reference signal is smaller than the first threshold value, and the correlation coefficient of the third differential signal and the first reference signal is larger than the second threshold value.
Optionally, the processing unit 703 is further configured to: calculating the difference value of the first signal and the third signal to obtain a first differential signal; calculating the difference value of the first signal and the fourth signal to obtain a third differential signal; and determining the hearing situation of the user according to the first differential signal and the third differential signal.
Optionally, the audio output device is connected with the computing device in a wireless manner, and the signal acquisition device is connected with the computing device in a wireless manner.
Optionally, the processing unit 703 is further configured to: controlling the audio output device to output a sound signal from a first time; controlling the signal acquisition device to acquire the electroencephalogram signal from a second time; the time difference between the first time and the second time is less than a fourth threshold.
Optionally, the processing unit 703 is further configured to: detecting the connection state and wearing posture of the audio output device and the connection state and wearing posture of the signal acquisition device; if the audio output device is not connected with the computing device and/or the signal acquisition device is not connected with the computing device, outputting prompt information to remind a user; if the wearing posture of the audio output device and/or the wearing posture of the signal acquisition device are/is detected to be not in accordance with the wearing standard, outputting prompt information to remind a user.
Optionally, the processing unit 703 is further configured to: performing filtering and superposition processing on the electroencephalogram signals to determine waveforms of the electroencephalogram signals; classifying the waveforms of the electroencephalogram signals by using a preset algorithm, and determining the hearing situation of the user.
In addition, the embodiment of the application also provides a hearing test system, which can comprise an audio output device, a signal acquisition device and a computing device.
Wherein the audio output device may perform the steps performed by the audio output device in fig. 2, 3 and 6. For example, the audio output device may be configured to receive first information sent by a computing device; the audio output device may also be used to output sound signals to one side of the user's ear.
The signal acquisition device may perform the steps performed by the signal acquisition device in fig. 2, 3 and 6. For example, the signal acquisition device may be configured to receive second information transmitted by the computing device; the signal acquisition equipment can also be used for acquiring brain electrical signals generated by a user; the signal acquisition device may also be used to: the brain electrical signals are transmitted to a computing device.
The computing device may perform the steps performed by the computing device in fig. 2, 3, and 6. For example, the computing device may be configured to send first information to the audio output device, the first information being configured to instruct the audio output device to output a sound signal to a user; the computing device may also be configured to send second information to the signal acquisition device, the second information being configured to instruct the signal acquisition device to acquire an electroencephalogram signal; the computing device may also be configured to determine a hearing profile of the user from the brain electrical signals.
Fig. 8 is a schematic block diagram of a hearing test device provided in an embodiment of the present application. The apparatus 800 shown in fig. 8 includes a memory 801, a processor 802, a communication interface 803, and a bus 804. Wherein the memory 801, the processor 802, and the communication interface 803 are communicatively connected to each other through a bus 804.
The memory 801 may be a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a random access memory (random access memory, RAM). The memory 801 may store a program, and when the program stored in the memory 801 is executed by the processor 802, the processor 802 is configured to perform various steps of the hearing test method of the embodiment of the present application, for example, various steps of the embodiments shown in fig. 2, 3, and 6 may be performed.
The processor 802 may employ a general-purpose CPU, microprocessor, application-specific integrated circuit (application specific integrated circuit, ASIC), or one or more integrated circuits for executing associated programs to perform the hearing test methods of the method embodiments of the present application.
The processor 802 may also be an integrated circuit chip with signal processing capabilities. In implementation, the various steps of the hearing test methods of embodiments of the present application may be performed by integrated logic circuitry in hardware or instructions in software in the processor 802.
The processor 802 may also be a general purpose processor, a digital signal processor (digital signal processing, DSP), an application specific integrated circuit (ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in hardware, in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801 and, in combination with its hardware, performs the hearing test method of the method embodiments of the present application, for example, the steps/functions of the embodiments shown in fig. 2, 3 and 6.
Communication interface 803 may enable communication between apparatus 800 and other devices or communication networks using, but is not limited to, a transceiver-like transceiver.
Bus 804 may include a path for transferring information between components of apparatus 800 (e.g., memory 801, processor 802, communication interface 803).
It should also be understood that fig. 8 is merely an example and not a limitation, and the apparatus including the processor, the memory, and the transceiver described above need not rely on the structure shown in fig. 8.
Furthermore, the present application provides a chip comprising a processor. The memory for storing the computer program is provided separately from the chip and the processor is configured to execute the computer program stored in the memory such that the operations and/or processes performed by the computing device in any of the method embodiments are performed.
Further, the chip may also include a data interface. The data interface may be an input/output interface, an interface circuit, or the like. Further, the chip may further include a memory.
The chip in the embodiments of the present application may be a field programmable gate array (field programmable gate array, FPGA), an application specific integrated circuit (application specific integrated circuit, ASIC), a system on chip (SoC), a CPU, a digital signal processor (digital signal processor, DSP), a microcontroller (micro controller unit, MCU), a programmable logic device (programmable logic device, PLD) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.
The present application also provides a computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of any of the embodiments shown in fig. 2, 3 and 6.
The present application also provides a computer readable medium having stored thereon a program code which, when run on a computer, causes the computer to perform the method of any of the embodiments shown in fig. 2, 3 and 6.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or by instructions in the form of software. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in hardware, in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
It will be appreciated that the memory in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), and direct memory bus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. A hearing test system comprising an audio output device, a signal acquisition device and a computing device, wherein,
the audio output device is used for: receiving first information sent by the computing device, wherein the first information is used for indicating the audio output device to output a sound signal to a user;
the signal acquisition device is used for: receiving second information sent by the computing equipment, wherein the second information is used for indicating the signal acquisition equipment to acquire an electroencephalogram signal, and the electroencephalogram signal is generated by the user aiming at the sound signal;
the signal acquisition device is further configured to: transmitting the electroencephalogram signal to the computing device;
the computing device is to: and determining the hearing situation of the user according to the electroencephalogram signal.
2. The system of claim 1, wherein the system further comprises a controller configured to control the controller,
the signal acquisition device comprises a first electrode, a third electrode and a fourth electrode, wherein the first electrode is in contact with one ear of the user, the other ear of the user is used for receiving the sound signal, the third electrode and the fourth electrode are in contact with the scalp of the user, and the fourth electrode and the third electrode are located at different positions on the scalp of the user,
the electroencephalogram signal measured by the first electrode is a first signal, the electroencephalogram signal measured by the third electrode is a third signal, and the electroencephalogram signal measured by the fourth electrode is a fourth signal;
the computing device is further to: and determining the hearing situation of the user according to the first signal, the third signal and the fourth signal.
3. The system of claim 2, wherein the computing device is further to:
calculating the difference value between the first signal and the third signal to obtain a first differential signal;
calculating the difference value of the first signal and the fourth signal to obtain a third differential signal;
calculating the average value of the third signal and the fourth signal to obtain a first basic signal;
calculating the difference value between the first signal and the first basic signal to obtain a first reference signal;
and determining the hearing situation of the user according to the first differential signal, the third differential signal and the first reference signal.
4. The system of claim 3, wherein the system further comprises a controller configured to control the controller,
the standard deviation of the first differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the first differential signal and the first reference signal is larger than a second threshold value;
the standard deviation of the third differential signal and the first reference signal is smaller than the first threshold value, and the correlation coefficient of the third differential signal and the first reference signal is larger than the second threshold value.
5. The system of claim 2, wherein the computing device is further to:
calculating the difference value between the first signal and the third signal to obtain a first differential signal;
calculating the difference value of the first signal and the fourth signal to obtain a third differential signal;
and determining the hearing condition of the user according to the first differential signal and the third differential signal.
6. The system of any one of claims 1-5, wherein the signal acquisition device is further configured to:
And when the signal acquisition times of the signal acquisition equipment are larger than a third threshold value, transmitting the electroencephalogram signals to the computing equipment.
7. The system of any of claims 1-6, wherein the audio output device is wirelessly connected with the computing device and the signal acquisition device is wirelessly connected with the computing device.
8. The system of claim 7, wherein the system further comprises a controller configured to control the controller,
the audio output device is further configured to: outputting the sound signal from a first time;
the signal acquisition device is further configured to: collecting the electroencephalogram signals from the second moment;
the time difference between the first time and the second time is less than a fourth threshold.
9. The system of any of claims 1-8, wherein the computing device is further to:
detecting the connection state and wearing posture of the audio output device and the connection state and wearing posture of the signal acquisition device;
outputting prompt information to remind the user if the audio output device is not connected with the computing device and/or the signal acquisition device is not connected with the computing device;
And if the wearing posture of the audio output device and/or the wearing posture of the signal acquisition device are detected to be not in accordance with the wearing standard, outputting prompt information to remind the user.
10. The system of any of claims 1-9, wherein the computing device is further to:
performing filtering processing and superposition processing on the electroencephalogram signals to determine waveforms of the electroencephalogram signals;
classifying the waveforms of the electroencephalogram signals by using a preset algorithm, and determining the hearing situation of the user.
11. A method of hearing test, the method applied to a computing device comprising:
transmitting first information to an audio output device, wherein the first information is used for indicating the audio output device to output a sound signal to a user;
transmitting second information to a signal acquisition device, wherein the second information is used for indicating the signal acquisition device to acquire an electroencephalogram signal;
receiving an electroencephalogram signal sent by the signal acquisition equipment, wherein the electroencephalogram signal is an electroencephalogram signal generated by the user aiming at the sound signal;
and determining the hearing situation of the user according to the electroencephalogram signal.
12. The method of claim 11, wherein said determining the hearing profile of the user from the electroencephalogram signal comprises:
Determining a hearing profile of the user from the first signal, the third signal, and the fourth signal;
the first signal is an electroencephalogram signal measured by the signal acquisition device on the side opposite to the sounding of the sound signal, the third signal and the fourth signal are electroencephalogram signals measured by the signal acquisition device on the scalp of the user, and the measurement position of the third signal is different from the measurement position of the fourth signal.
13. The method of claim 12, wherein the determining the hearing profile of the user from the first signal, the third signal, and the fourth signal comprises:
calculating the difference value between the first signal and the third signal to obtain a first differential signal;
calculating the difference value of the first signal and the fourth signal to obtain a third differential signal;
calculating the average value of the third signal and the fourth signal to obtain a first basic signal;
calculating the difference value between the first signal and the first basic signal to obtain a first reference signal;
and determining the hearing condition of the user according to the first differential signal, the third differential signal and a first reference signal.
14. The method of claim 13, wherein the step of determining the position of the probe is performed,
the standard deviation of the first differential signal and the first reference signal is smaller than a first threshold value, and the correlation coefficient of the first differential signal and the first reference signal is larger than a second threshold value;
the standard deviation of the third differential signal and the first reference signal is smaller than the first threshold value, and the correlation coefficient of the third differential signal and the first reference signal is larger than the second threshold value.
15. The method of claim 12, wherein the determining the hearing profile of the user from the first signal, the third signal, and the fourth signal comprises:
calculating the difference value between the first signal and the third signal to obtain a first differential signal;
calculating the difference value of the first signal and the fourth signal to obtain a third differential signal;
and determining the hearing condition of the user according to the first differential signal and the third differential signal.
16. The method of any of claims 11-15, wherein the audio output device is wirelessly connected with the computing device and the signal acquisition device is wirelessly connected with the computing device.
17. The method of claim 16, wherein the method further comprises:
controlling the audio output device to output the sound signal from a first time;
controlling the signal acquisition equipment to acquire the electroencephalogram signals from the second moment;
the time difference between the first time and the second time is less than a fourth threshold.
18. The method of any of claims 11-17, wherein prior to transmitting the first information to the audio output device, the method further comprises:
detecting the connection state and wearing posture of the audio output device and the connection state and wearing posture of the signal acquisition device;
if it is detected that the audio output device is not connected to the computing device and/or the signal acquisition device is not connected to the computing device, outputting prompt information to remind the user;
and if it is detected that the wearing posture of the audio output device and/or the wearing posture of the signal acquisition device does not meet the wearing standard, outputting prompt information to remind the user.
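A minimal sketch of how the pre-test check in claim 18 could be expressed, assuming boolean status attributes on the device objects; the attribute names `is_connected` and `wearing_ok` are invented for illustration and do not come from the patent:

```python
def pre_test_checks(audio_output_device, signal_acquisition_device):
    """Return prompt messages to show the user before the hearing test (illustrative)."""
    prompts = []
    if not audio_output_device.is_connected:        # hypothetical attribute
        prompts.append("Please connect the audio output device to the computing device.")
    if not signal_acquisition_device.is_connected:  # hypothetical attribute
        prompts.append("Please connect the signal acquisition device to the computing device.")
    if audio_output_device.is_connected and not audio_output_device.wearing_ok:
        prompts.append("Please adjust how the audio output device is worn.")
    if signal_acquisition_device.is_connected and not signal_acquisition_device.wearing_ok:
        prompts.append("Please adjust how the signal acquisition device is worn.")
    return prompts
```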
19. The method of any one of claims 11-18, wherein the determining the hearing condition of the user according to the electroencephalogram signal comprises:
performing filtering processing and superposition processing on the electroencephalogram signal to determine the waveform of the electroencephalogram signal;
and classifying the waveform of the electroencephalogram signal by using a preset algorithm to determine the hearing condition of the user.
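A minimal sketch of the processing chain in claim 19, assuming that the superposition processing refers to averaging repeated stimulus epochs and that the preset algorithm can be stood in for by a simple amplitude test; the filter band, epoch layout, and threshold below are assumptions, not values given in the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_waveform(eeg, fs, n_epochs, epoch_len, band=(1.0, 30.0)):
    """Band-pass filter the raw EEG and average repeated epochs (illustrative values)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    epochs = filtered[: n_epochs * epoch_len].reshape(n_epochs, epoch_len)
    return epochs.mean(axis=0)  # superposition: average across stimulus repetitions

def classify_waveform(waveform, amplitude_threshold=0.5e-6):
    """Placeholder 'preset algorithm': report a response if the peak exceeds a threshold."""
    return "response detected" if np.max(np.abs(waveform)) > amplitude_threshold else "no clear response"
```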
20. A hearing detection device, comprising: a processor coupled to a memory, the memory being configured to store a program or instructions which, when executed by the processor, cause the hearing detection device to perform the method of any one of claims 11 to 19.
21. A chip, comprising: a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the method of any one of claims 11 to 19.
22. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of claims 11 to 19.
CN202211185227.9A 2022-09-27 2022-09-27 Hearing detection method, device and system Pending CN117814788A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211185227.9A CN117814788A (en) 2022-09-27 2022-09-27 Hearing detection method, device and system
PCT/CN2023/117902 WO2024067034A1 (en) 2022-09-27 2023-09-11 Hearing detection method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211185227.9A CN117814788A (en) 2022-09-27 2022-09-27 Hearing detection method, device and system

Publications (1)

Publication Number Publication Date
CN117814788A true CN117814788A (en) 2024-04-05

Family

ID=90476013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211185227.9A Pending CN117814788A (en) 2022-09-27 2022-09-27 Hearing detection method, device and system

Country Status (2)

Country Link
CN (1) CN117814788A (en)
WO (1) WO2024067034A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110301486A1 (en) * 2010-06-03 2011-12-08 Cordial Medical Europe Measurement of auditory evoked responses
CN103989482B (en) * 2013-12-26 2016-04-27 应俊 The sound stimulation device detected for auditory hallucination and the device detected for auditory hallucination
WO2015161343A1 (en) * 2014-04-23 2015-10-29 Hear Ip Pty Ltd Systems and methods for objectively determining hearing thresholds
CN108541080B (en) * 2018-04-23 2020-09-22 Oppo广东移动通信有限公司 Method for realizing loop connection between first electronic equipment and second electronic equipment and related product
US20210307656A1 (en) * 2018-08-17 2021-10-07 The Bionics Institute Of Australia Methods and systems for determining hearing thresholds
CN112515663A (en) * 2020-11-30 2021-03-19 深圳镭洱晟科创有限公司 Auditory pathway evaluation and analysis system and method thereof
CN113288127B (en) * 2021-04-12 2023-07-25 中国人民解放军总医院第六医学中心 Hearing detection device and detection method based on time-frequency modulation detection threshold

Also Published As

Publication number Publication date
WO2024067034A1 (en) 2024-04-04

Similar Documents

Publication Publication Date Title
US8277390B2 (en) Method for automatic non-cooperative frequency specific assessment of hearing impairment and fitting of hearing aids
CN103313653B (en) Personal eeg monitoring device with electrode validation
CN102202570B (en) Word sound cleanness evaluating system, method therefore
US9227062B2 (en) Systems and methods for synchronizing an operation of a middle ear analyzer and a cochlear implant system
JP2018528735A5 (en)
CN103561643B (en) Voice recognition ability decision maker, system and method and hearing aid gain determination device
EP1865843A2 (en) Test battery system and method for assessment of auditory function
JP2010521906A (en) Objective measurement system and method for individual hearing
KR20120131778A (en) Method for testing hearing ability and hearing aid using the same
US10299696B2 (en) Electrophysiological method for assessing the effectiveness of a hearing aid
US20150314124A1 (en) Systems and methods for detecting an occurrence of a stapedius reflex within a cochlear implant patient
WO2018154289A2 (en) System, method, computer program and computer program product for detecting a change in hearing response
CN112515663A (en) Auditory pathway evaluation and analysis system and method thereof
EP3588984B1 (en) System for validation of hearing aids for infants using a speech signal
Anastasio et al. A report of extended high frequency audiometry thresholds in school-age children with no hearing complaints
US20210386357A1 (en) Determination of cochlear hydrops based on recorded auditory electrophysiological responses
Hirsch et al. A comparison of acoustic reflex and auditory brain stem response screening of high-risk infants
US20150297890A1 (en) Systems and methods for facilitating use of a middle ear analyzer in determining one or more stapedius reflex thresholds associated with a cochlear implant patient
KR20120068199A (en) Electrophysiological based threshold equalizing test device and method for providing information about cochlea dead region using the same
CN117814788A (en) Hearing detection method, device and system
Uhler et al. Mismatched response predicts behavioral speech discrimination outcomes in infants with hearing loss and normal hearing
RU2722875C1 (en) Method for determining optimal settings of a hearing aid
Gravel Hearing and auditory function
Kaf et al. Examining the profile of noise-induced cochlear synaptopathy using iPhone health app data and cochlear and brainstem electrophysiological responses to fast clicks rates
Holder et al. Cochlear Implant Upper Stimulation Levels: eSRT vs. Loudness Scaling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination