CN116269360A - Remote test method, system, terminal and storage medium for hearing fine capability

Info

Publication number: CN116269360A
Application number: CN202310511830.XA (China)
Original language: Chinese (zh)
Inventors: 吴皓, 张钦杰, 文雯, 贾欢
Applicant / current assignee: Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine
Priority application: CN202310511830.XA
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/12 Audiometering
    • A61B 5/121 Audiometering evaluating hearing capacity
    • A61B 5/123 Audiometering evaluating hearing capacity subjective methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00 Electrotherapy; Circuits therefor
    • A61N 1/18 Applying electric currents by contact electrodes
    • A61N 1/32 Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N 1/36 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N 1/36036 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
    • A61N 1/36038 Cochlear stimulation
    • A61N 1/36039 Cochlear stimulation fitting procedures

Abstract

The application provides a remote test method, system, terminal and storage medium for auditory fine capability. The system comprises a client and a platform end. The client sends interaction information to the platform end according to user instructions and receives data returned by the platform end, so as to realize a remote test of the user's auditory fine capability. The platform end receives the interaction information sent by the client, provides test question data to the client, derives a test result of the user's auditory fine capability from the interaction information sent by the client, and returns the test result to the client for the user's reference. Through the interaction between the client and the platform end, the application realizes remote testing of auditory fine capability and can effectively address the problems of the prior art: the auditory fine capability of patients is rarely evaluated; existing evaluation software does not screen out false results and is therefore insufficiently objective; and rehabilitation training is neither targeted nor remotely implementable.

Description

Remote test method, system, terminal and storage medium for hearing fine capability
Technical Field
The present application relates to the field of hearing test, and in particular, to a remote test method, system, terminal and storage medium for hearing fine capability.
Background
Artificial auditory implants are now widely used in clinic as an efficient means of hearing reconstruction and intervention, and have, to some extent, restored hearing for many hearing-impaired patients. Current technology can address not only conductive hearing loss but can also improve sensorineural hearing loss. New artificial auditory implant techniques such as the cochlear implant, the bone-anchored hearing aid (BAHA) and the vibrant soundbridge have been widely adopted clinically. A relatively mature evaluation system has been established clinically for assessing the auditory reconstruction effect after artificial auditory implantation.
However, existing clinical audiological evaluation methods fall short in assessing a patient's auditory fine capability, in particular the ability to discriminate initials (consonants), finals (vowels), tones, melodies and the like. Existing methods are also highly subjective, so the results obtained do not accurately reflect the patient's hearing ability. In addition, existing audiological evaluation methods or systems pay little attention to postoperative auditory rehabilitation training, and existing training methods lack pertinence, which delays the patient's postoperative rehabilitation to some extent and demands considerable effort from speech therapists, reducing efficiency. Furthermore, existing clinical audiological evaluation requires the patient to attend in person, which is inconvenient for some patients, cannot satisfy the need of some patients for remote audiometry, and limits the applicable scenarios.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present application is to provide a remote test method, system, terminal and storage medium for auditory fine capability, which are used for solving the problems that the prior art lacks an evaluation of auditory fine capability of a patient, the evaluation result is not objective, the rehabilitation training is not targeted and cannot be remotely implemented.
To achieve the above and other related objects, a first aspect of the present application provides a remote test method of auditory fine capability, which is executed by a client, comprising the steps of: acquiring test module information selected by a user; transmitting the test module information selected by the user to a platform end; receiving test question bank data corresponding to the test module information selected by the user and sent by the platform, playing test question audio in the test question bank data and providing a plurality of options for the user to select; acquiring option information selected by a user; transmitting the option information selected by the user to a platform end; and receiving the test result sent by the platform end and displaying the test result.
In some embodiments of the first aspect of the present application, when performing the step of obtaining the option information selected by the user, the method further includes the following steps: acquiring time information consumed by a user when selecting options; and sending the time information consumed by the user when selecting options to the platform end.
In some embodiments of the first aspect of the present application, after performing the step of sending the time information spent by the user in selecting the option to the platform end, the method further includes the steps of: acquiring an alarm instruction sent by the platform end; and displaying the warning information to the user according to the warning instruction.
In some embodiments of the first aspect of the present application, when performing the step of obtaining the option information selected by the user, the method further includes the following steps: acquiring eye movement information of a user when selecting options; judging whether the user has abnormal answering conditions according to the eye movement information.
To achieve the above and other related objects, a second aspect of the present application provides a remote test method for auditory fine capability, which is executed by a platform end, comprising the steps of: acquiring test module information selected by a user and sent by a client; according to the test module information selected by the user, invoking test question bank data corresponding to the test module information selected by the user in a database; transmitting the test question bank data to a client; receiving option information selected by the user and sent by the client, and performing deep learning analysis on the option information selected by the user to generate a test result of hearing fine capability; and sending the test result to the client.
In some embodiments of the second aspect of the present application, when performing the step of receiving the option information selected by the user and sent by the client, the method further includes the steps of: receiving time information which is sent by the client and consumed by the user when selecting options; judging whether the user has abnormal answer conditions according to the time information consumed by the user when selecting options; and if the abnormal answer condition exists in the user, sending an alarm instruction to the client.
In some embodiments of the second aspect of the present application, when performing the step of data analyzing the option information selected by the user to generate a test result of auditory fine capability, the method further comprises the steps of: collecting answer data according to the option information selected by the user; extracting the frequency domain information of the audio in the answer data by using an acoustic signal analysis method; based on the frequency domain information of the audio in the answer data, integrating to obtain a confusable frequency band; the confusable frequency band is used for increasing the weight of test question audio under the confusable frequency band when the same user performs auditory fine capability test.
In some embodiments of the second aspect of the present application, a method of deep learning analysis of the user-selected option information to generate test results of auditory fine capability, comprises the steps of: acquiring personal condition parameters of the user; inputting the personal condition parameters of the user and the option information selected by the user into a trained neural network model; the trained neural network model is used for predicting the scoring condition of the user; and the trained neural network model outputs the scoring condition of the user as the test result.
In some embodiments of the second aspect of the present application, after the integrating obtains the confusable frequency band, the method further includes the following steps: and determining an abnormal electrode channel of the user according to the confusable frequency band, and adjusting corresponding stimulation parameters according to the abnormal electrode channel of the user.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method.
To achieve the above and other related objects, a fourth aspect of the present application provides an electronic terminal, including: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory so as to enable the terminal to execute the method.
To achieve the above and other related objects, a fifth aspect of the present application provides a remote test system for auditory fine capability, comprising: the client is used for acquiring the test module information selected by the user; transmitting the test module information selected by the user to a platform end; receiving test question bank data corresponding to the test module information selected by the user and sent by the platform, playing test question audio in the test question bank data and providing a plurality of options for the user to select; acquiring option information selected by a user; transmitting the option information selected by the user to a platform end; and receiving the test result sent by the platform end and displaying the test result. The platform end is used for acquiring the test module information selected by the user and sent by the client; according to the test module information selected by the user, invoking test question bank data corresponding to the test module information selected by the user in a database; transmitting the test question bank data to a client; receiving option information selected by the user and sent by the client, and performing deep learning analysis on the option information selected by the user to generate a test result of hearing fine capability; and sending the test result to the client.
As described above, the remote test method, system, terminal and storage medium for auditory fine capability of the present application have the following beneficial effects:
The present application builds on existing clinical audiological evaluation and adds discrimination capability assessment modules covering the Ling six sounds, vowels, consonants, tones, melodies and the like, thereby bringing auditory fine capability into the evaluation system. The application can run on terminal devices such as a mobile client, enabling remote auditory fine capability assessment and greatly enriching its application scenarios. In addition, the application obtains a more accurate assessment of auditory fine capability from the patient's test performance, screens out the more subjective parts of the result, and gives the patient targeted guidance for rehabilitation training based on the assessment, helping the patient recover as early as possible. In summary, the method has excellent technical effects and good application prospects.
Drawings
FIG. 1 is a flow chart of a remote test method for auditory fine capability according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a module selection interface of a client according to an embodiment of the invention.
FIG. 3 is a schematic diagram of a test stem and option interface of a client according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a test result display interface of a client according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method for monitoring an abnormal answer condition of a user according to an embodiment of the invention.
Fig. 6 is a flowchart illustrating a method for sending alert information to an abnormal user according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a client alert interface according to an embodiment of the invention.
Fig. 8 is a flowchart of a method for determining whether a user has an abnormal answer condition according to eye movement information in an embodiment of the invention.
Fig. 9 is a flowchart of a method for extracting a confusable frequency band of answer data according to an embodiment of the present invention.
FIG. 10 is a flow chart of a method for generating test results by deep learning analysis according to an embodiment of the invention.
Fig. 11 is a schematic diagram of a client user login interface according to an embodiment of the invention.
Fig. 12 is a schematic diagram showing a correspondence relationship between a confusable frequency band and an electrode channel in an embodiment of the invention.
Fig. 13 is a schematic structural diagram of an electronic terminal according to an embodiment of the present application.
FIG. 14 is a schematic diagram of a remote test system for auditory fine capability according to an embodiment of the present invention.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the present disclosure, when the following description of the embodiments is taken in conjunction with the accompanying drawings. The present application may be embodied or carried out in other specific embodiments, and the details of the present application may be modified or changed from various points of view and applications without departing from the spirit of the present application. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which describe several embodiments of the present application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," and the like, may be used herein to describe one element's or feature's relationship to another element or feature as illustrated in the figures.
In this application, unless specifically stated and limited otherwise, the terms "mounted," "connected," "secured," "held," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
In order to solve the problems in the background art, the invention provides a remote test method, a remote test system, a remote test terminal and a remote test storage medium for auditory fine capability, which aim to solve the problems that the prior art lacks of evaluation of the auditory fine capability of a patient, the evaluation result is not objective, the rehabilitation training is not targeted and cannot be remotely implemented. Meanwhile, in order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention will be further described in detail by the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Before explaining the present invention in further detail, terms and terminology involved in the embodiments of the present invention will be explained, and the terms and terminology involved in the embodiments of the present invention are applicable to the following explanation:
<1> Auditory implant (auditory implantation): auditory implant devices include the cochlear implant, vibrant soundbridge, bone bridge, auditory brainstem implant and other artificial electronic devices that must be surgically implanted into the human body to restore hearing for patients with sensorineural, conductive or mixed deafness. The cochlear implant is suitable for patients with severe to profound sensorineural hearing loss.
<2> an artificial neural network (Artificial Neural Networks, abbreviated as ANNs), also abbreviated as Neural Networks (NNs) or Connection models (Connection models), which is an algorithmic mathematical Model that mimics the behavior of animal neural networks for distributed parallel information processing. The network relies on the complexity of the system and achieves the purpose of processing information by adjusting the relationship of the interconnection among a large number of nodes.
<3> Eye tracking (Eye Tracking): the process of measuring eye activity. The event of most interest in eye-tracking studies is determining where a human or animal is looking (the "point of gaze" or "fixation point"). More precisely, instruments apply image-processing techniques to locate the pupil, acquire its coordinates, and compute the point of gaze or fixation by a suitable algorithm, so that a computer can reconstruct the course of eye activity.
<4> Ling six sounds, also called the "Ling six-sound test", is the commonly used six-sound test designed by Daniel Ling, OC, PhD (1926-2003), a pioneer of auditory-verbal rehabilitation. It is a simple, easy and effective method in clinical rehabilitation practice: the test can quickly and effectively check whether a child can perceive sounds across the speech frequency range, and it is a skill that parents, teachers and audiologists must master.
<5> Formant: a region of the sound spectrum where energy is relatively concentrated. Formants reflect the physical characteristics of the vocal tract (resonance cavities) and are a determining factor of sound quality; they correspond to the peaks on the spectral envelopes of vowels and consonants.
<6> Linear predictive coding (LPC): a tool used mainly in audio signal processing and speech processing to represent the spectral envelope of a digital speech signal in compressed form, based on a linear predictive model. It is one of the most effective speech analysis techniques and one of the most useful methods for coding high-quality speech at low bit rates, and it provides very accurate predictions of speech parameters.
Embodiments of the present invention provide a remote test method of auditory fine capability, a system of the remote test method of auditory fine capability, and a storage medium storing an executable program for implementing the remote test method of auditory fine capability. With respect to implementation of a remote test method of auditory fine capability, an exemplary implementation scenario of a remote test of auditory fine capability will be described.
As shown in fig. 1, which is a flow chart of a remote test method for auditory fine capability in an embodiment of the present invention, the method mainly comprises steps S101 to S111, completed through interaction between the client and the platform end: steps S101, S102, S106, S107, S108 and S111 are executed by the client, and steps S103, S104, S105, S109 and S110 are executed by the platform end. The steps are described in detail below.
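Before going step by step, the following minimal Python sketch, for orientation only, outlines how the client-side steps (S101, S102, S106 to S108 and S111) might exchange data with the platform end over HTTP. The endpoint paths, JSON field names and the use of the requests library are assumptions of this sketch, not something the patent specifies.

```python
import requests  # assumed HTTP transport; the patent does not prescribe a protocol

PLATFORM = "https://platform.example.com"  # hypothetical platform-end address


def run_remote_test(selected_modules, ask_user):
    """Client-side flow: S101/S102 send the module choice, S106-S108 present
    items and collect answers, S111 shows the result computed in S109/S110."""
    # S102: send the test module information selected by the user
    bank = requests.post(f"{PLATFORM}/question-bank",
                         json={"modules": selected_modules}).json()
    answers = []
    for item in bank["items"]:
        # S106: play item["stem_audio_url"] and display item["options"]
        # (audio playback and UI rendering are omitted here)
        choice = ask_user(item)          # S107: option index picked by the user
        answers.append({"item_id": item["id"], "choice": choice})
    # S108: send the option information; S111: receive and display the result
    result = requests.post(f"{PLATFORM}/submit", json={"answers": answers}).json()
    print("Auditory fine capability test result:", result["score"])
```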
step S101: and acquiring the test module information selected by the user.
Specifically, step S101 is performed by the client. The client is operated by a user, who in this application is usually a hearing-impaired patient requiring an auditory fine capability test, but may also be an ordinary person who wishes to test his or her auditory fine capability through a mobile client; this application places no restriction on who uses the client.
It should be noted that, in step S101, the display interface of the client presents several test modules, each representing one type of auditory fine capability test scheme. As shown in fig. 2, which is a schematic diagram of a module selection interface of a client according to an embodiment of the present invention, in one embodiment of the present application the test modules include the following five: a Ling six-sound module, a vowel discrimination module, a consonant discrimination module, a tone (level and oblique) discrimination module and a melody discrimination module.
It should be emphasized that existing clinical audiological evaluation methods mainly rely on electrophysiological and pure-tone audiometric results to reflect the patient's response to external acoustic stimuli; for assessing auditory fine capability, existing methods offer only speech recognition rate (monosyllables, disyllables, short sentences and the like), an approach that is highly subjective and easily confounded by the patient's lip-reading ability. Therefore, to address the insufficient granularity of hearing-ability evaluation in the prior art, the embodiment of the present application provides five different types of test modules, namely the Ling six-sound module, the vowel discrimination module, the consonant discrimination module, the tone discrimination module and the melody discrimination module; working together, these five modules enable a finer test of hearing ability and meet the diversified needs of users.
In step S101, when the client presents the test module selection interface as shown in fig. 2, the user can select the test module required by the user on the client according to his own requirement.
Step S102: and sending the test module information selected by the user to a platform end.
Specifically, step S102 is executed by the client, and in step S102, after the user selects the required test module, the client sends the test module information selected by the user to the platform end, so that the platform end can determine the corresponding test question library according to the operation information of the user. It should be noted that in step S101 and step S102, the test modules selected by the user during a single operation are not limited to one, for example, the user may select all the test modules at the same time, and at this time, the auditory fine capability test method of the present application sequentially presents the question banks corresponding to the test modules to the user according to the information of the test modules selected by the user, so as to implement a single multi-module test, thereby meeting the diversified requirements of the user.
Step S103: and acquiring the test module information selected by the user and sent by the client.
Specifically, step S103 is performed by a platform end, which may be a server in the present application, where the server is configured to respond to an instruction of the client and implement interaction between the client and the platform end according to the instruction of the client. In step S103, the platform side first obtains the test module information selected by the user sent from the client side.
Step S104: and calling test question library data corresponding to the test module information selected by the user in a database according to the test module information selected by the user.
Specifically, step S104 is executed by the platform end. The platform end in the present application further includes a database whose data comprise the test question bank corresponding to each test module. For example, when the test modules in step S101 are the Ling six-sound module, the vowel discrimination module, the consonant discrimination module, the tone discrimination module and the melody discrimination module, the corresponding test question banks in the database are the Ling six-sound question bank, the vowel discrimination question bank, the consonant discrimination question bank, the tone discrimination question bank and the melody discrimination question bank, all stored in the platform-end database in advance. Accordingly, after obtaining the test module information selected by the user, the platform end can retrieve the test question bank data corresponding to that information. For example, when the test module selected by the user in step S101 is the Ling six-sound module, the platform end retrieves the Ling six-sound question bank from its database. It should be emphasized that, as in step S101, when the user selects several test modules, step S104 retrieves the several corresponding test question banks from the database.
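As a sketch only, step S104 on the platform end can be as simple as a keyed lookup; the module identifiers and on-disk layout below are illustrative assumptions, not the patent's storage design.

```python
import json

# Hypothetical mapping from test module to its question-bank file (step S104).
QUESTION_BANKS = {
    "ling_six_sound": "banks/ling_six_sound.json",
    "vowel":          "banks/vowel.json",
    "consonant":      "banks/consonant.json",
    "tone":           "banks/tone.json",
    "melody":         "banks/melody.json",
}


def retrieve_banks(selected_modules):
    """Return the question-bank data for every module the user selected;
    several modules may be chosen in a single session (see step S101)."""
    banks = []
    for module in selected_modules:
        with open(QUESTION_BANKS[module], encoding="utf-8") as f:
            banks.append(json.load(f))
    return banks
```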
Step S105: and sending the test question bank data to a client.
Specifically, step S105 is executed by the platform end: after retrieving the test question bank required by the user, the platform end sends it to the client to enter the answering steps that follow.
Step S106: and receiving test question library data corresponding to the test module information selected by the user and sent by the platform, playing test question audio in the test question library data and providing a plurality of options for the user to select.
Specifically, step S106 is executed by the client, and the client receives test question library data corresponding to the test module information selected by the user and sent from the platform, where the test question library data includes a plurality of test questions and a plurality of options corresponding to the test questions, the test questions in the application are presented in an audio and text mode, the client plays test question audio in the test question library data, and the test question stem and the plurality of options are displayed on a display interface for the user to select. FIG. 3 is a schematic diagram of a test stem and option interface of a client according to an embodiment of the present invention.
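To make the transferred data concrete, one possible representation of a single test question (stem text, stem audio and options) is sketched below; the field names are assumptions for illustration and are not prescribed by the patent.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestItem:
    """One entry of the question-bank data sent to the client in steps S105/S106."""
    item_id: str
    module: str                  # e.g. "vowel" or "melody"
    stem_text: str               # question stem shown on the display interface
    stem_audio_url: str          # test question audio played by the client
    options: List[str] = field(default_factory=list)  # choices offered to the user
    correct_option: int = 0      # index kept on the platform end for scoring
```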
Step S107: and acquiring option information selected by a user.
Specifically, step S107 is executed by the client, after the client plays the test question audio in the test question library data and provides a plurality of options in step S106, the user may select the corresponding option according to his own judgment after listening to the test question audio, and the client obtains the option information selected by the user.
Step S108: and sending the option information selected by the user to a platform end.
Specifically, step S108 is executed by the client, and after the client acquires the option information selected by the user, the option information selected by the user is sent to the platform end, so that the platform end processes the information.
Step S109: receiving option information selected by the user and sent by the client, and performing deep learning analysis on the option information selected by the user to generate a test result of hearing fine capability.
Specifically, step S109 is performed by the platform end, which receives the option information selected by the user from the client and generates a test result of auditory fine capability from that information by a deep learning analysis method. Deep learning, used in step S109, is a branch of machine learning that learns the inherent relationships within data from large amounts of sample data. For example, in step S109 the option information selected by the user may be input into a pre-trained deep learning model that maps the functional relationship between the user's option information and the test result of the user's auditory fine capability; as the amount of training data grows, the mapping becomes more accurate, which is an advantage of the deep learning approach. In one embodiment of the present application, the test result of auditory fine capability generated by the platform end from the user's option information may be presented as a score, where a higher score indicates better auditory fine capability and a lower score indicates worse auditory fine capability.
Step S110: and sending the test result to the client.
Specifically, step S110 is executed by the platform end, and after the platform end obtains the test result of the hearing fine capability of the user through deep learning analysis on the option information of the user, the test result needs to be sent to the client end so as to present the test result of the hearing fine capability to the user through the client end.
Step S111: and receiving the test result sent by the platform end and displaying the test result.
Specifically, step S111 is executed by the client, and the client receives the test result of the hearing ability of the user sent from the platform, and the client can display the test result of the hearing ability of the user to the user in text form, so that the user can know the test condition of the hearing ability of the user conveniently. Fig. 4 is a schematic diagram of a test result display interface of a client according to an embodiment of the present invention.
Fig. 5 is a flowchart illustrating a method for monitoring abnormal answer conditions of a user according to an embodiment of the present invention. In one embodiment of the present application, in order to monitor an abnormal condition when a user answers questions, thereby judging a false negative/false positive situation in a user answer result, the remote test method for auditory fine ability of the present application further includes the following steps: the execution body of step S501 and step S502 is a client, the execution body of step S503 and step S504 is a platform, and the judgment of whether the user has an abnormal answer condition is completed through the interaction between the client and the platform.
Step S501: the time information spent by the user in selecting the option is obtained.
Specifically, step S501 is performed by the client, and the client obtains time information consumed when the user selects the option in answering the question. The time information spent by the user when selecting the option can include single question answering residence time and touch screen interaction time of the user. The single question answer stay time refers to the time spent by a user from the time when a single question stem appears to the time when any one of the options is selected; the touch screen interaction time comprises an option-option touch screen time difference and a question-option touch screen time difference, wherein the option-option touch screen time difference refers to the time spent by a user from clicking an option audio play button to selecting the option by the user; the "title-option touch screen time difference" refers to the time taken by the user from clicking the title stem audio play button until the user selects any option. It should be noted that, the above-mentioned time information, such as the single question answer stay time, the option-option touch screen time difference, and the question-option touch screen time difference, is actually used to reflect the speed of the user reading and answering the questions when receiving the test, and is used as the basis for subsequently judging whether the user has an abnormal answer condition.
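A minimal client-side sketch of step S501, assuming the UI layer reports touch events to the callbacks below; the class and field names are illustrative, not taken from the patent.

```python
import time


class AnswerTimer:
    """Collects, for one test question, the timing metrics described above."""

    def __init__(self):
        self.stem_shown_at = None       # question stem appears on screen
        self.stem_audio_at = None       # user taps the stem audio play button
        self.option_audio_at = None     # user taps an option audio play button
        self.answered_at = None         # user selects an option

    def on_stem_shown(self):
        self.stem_shown_at = time.monotonic()

    def on_stem_audio_played(self):
        self.stem_audio_at = time.monotonic()

    def on_option_audio_played(self):
        self.option_audio_at = time.monotonic()

    def on_option_selected(self):
        self.answered_at = time.monotonic()

    def metrics(self):
        return {
            # single-question answer dwell time: stem shown -> option selected
            "dwell_time": self.answered_at - self.stem_shown_at,
            # option-option touch screen time difference
            "option_to_answer": (self.answered_at - self.option_audio_at
                                 if self.option_audio_at else None),
            # question-option touch screen time difference
            "stem_to_answer": (self.answered_at - self.stem_audio_at
                               if self.stem_audio_at else None),
        }
```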
Step S502: and sending the time information consumed by the user when selecting options to the platform end.
Specifically, step S502 is executed by the client, after the client obtains the time information consumed by the user when selecting the option, the client sends the time information consumed by the user when selecting the option obtained in step S501 to the platform end, so that the platform end can further process the time information to analyze whether the user is in an abnormal answer condition.
Step S503: and receiving time information which is sent by the client and is consumed by the user when the user selects options.
Specifically, step S503 is performed by the platform end, and the platform end receives time information consumed by the user when selecting the option, which is sent from the client end.
Step S504: judging whether the user has abnormal answering conditions according to the time information spent by the user when selecting options.
Specifically, step S504 is executed by the platform end, which determines whether the user has an abnormal answer condition according to the time information spent by the user in selecting the options. The time threshold value can be preset and used for comparing with the time spent by the user when the user selects the options, so that whether the time spent by the user when the user selects the options is too short or not is measured, and whether the user has an abnormal answer condition or not is judged. For example, when the time spent by the user when selecting the option received by the platform end is less than the preset time threshold, judging that the user has an abnormal answer condition; otherwise, judging that the answer condition of the user is normal.
In some embodiments of steps S501-S504, to further improve the accuracy of determining whether the user has an abnormal answer condition, the time threshold used for comparison with the time spent by the user in selecting the option may be a time threshold determined by a deep learning method. For example, the answering time of different people or specific people when the remote test method for auditory fine capability of the application is applied can be collected in advance, so that the average value of the answering time of different people or specific people is used as the time threshold value to serve as a basis for judging whether the abnormal answering condition exists in the user. Based on the improvement, the method and the device can judge whether the user has an abnormal answer condition or not according to the time spent by the user when the user selects the options.
In some embodiments of steps S501-S504, to further enhance the objectivity of determining whether the user has an abnormal answer condition, the judgment may be based on the time spent selecting options over several consecutive test questions. For example, the time spent by the user in selecting the options of 3 consecutive test questions may be obtained, and an abnormal answer condition is found only when all 3 of those times are below the time threshold; if the time spent on any one of the questions exceeds the threshold, the user is not considered to have an abnormal answer condition. With this improvement, the judgment of whether the user has an abnormal answer condition becomes more objective and is less affected by incidental circumstances. It can be understood that the number of consecutive test questions used for this judgment is not limited to 3; the value may be set according to actual needs to obtain an accurate judgment, and this embodiment places no restriction on it.
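Putting the two refinements together, a platform-end sketch of the check in step S504 might look as follows; the population-mean threshold and the window of three consecutive questions follow the examples above, and all names are illustrative assumptions.

```python
from collections import deque


def make_abnormality_checker(population_times, window=3):
    """Return a checker that flags an abnormal answer condition only when the
    answer time of `window` consecutive questions falls below the threshold."""
    # Threshold taken as the mean answer time collected in advance from a
    # reference population (or from a specific user group).
    threshold = sum(population_times) / len(population_times)
    recent = deque(maxlen=window)

    def check(answer_time_s):
        recent.append(answer_time_s)
        too_fast = len(recent) == window and all(t < threshold for t in recent)
        return too_fast  # True -> platform end sends an alert instruction (S602)

    return check
```

In use, check() would be called once per received answer; when it returns True, the platform end issues the alert instruction of step S602.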
Fig. 6 is a flowchart illustrating a method for sending alert information to an abnormal user according to an embodiment of the present invention. In one embodiment of the present application, to alert the user to focus on the hearing test, the remote test method of hearing fine capability of the present application further comprises the steps of: the execution main body of step S601 and step S602 is a platform end, the execution main body of step S603 and step S604 is a client end, and the warning function when the user has an abnormal answer condition is completed through the interaction between the client end and the platform end.
Step S601: and judging that the user has abnormal answering conditions.
Specifically, step S601 is executed by the platform end, and when it is determined that the user has an abnormal answer condition in step S501 to step S504, the following steps S602 to S604 are executed to complete the alerting action to the specific user.
Step S602: and sending an alarm instruction to the client.
Specifically, step S602 is executed by the platform end, and the current platform end determines that the user has an abnormal answer condition, so that an alert instruction is sent to the client end.
Step S603: and acquiring an alarm instruction sent by the platform end.
Specifically, step S603 is executed by the client, and the client obtains the alert instruction sent by the platform end.
Step S604: and displaying the warning information to the user according to the warning instruction.
Specifically, step S604 is executed by the client, which displays the warning information to the user according to the alert instruction sent by the platform end. For example, the warning may be presented as text on the client to prompt the user to stay focused while answering. Fig. 7 is a schematic diagram of a client alert information interface according to an embodiment of the present invention.
Fig. 8 is a flowchart illustrating a method for determining whether an abnormal answer condition exists by using eye movement information according to an embodiment of the present invention. In one embodiment of the present application, to more precisely determine whether the user has an abnormal answer condition, the remote test method for auditory fine capability of the present application further includes the following steps:
step S801: eye movement information of a user when selecting an option is acquired.
Specifically, step S801 is performed by the client. Testers whose conditions allow it may wear a portable head-mounted eye-tracking device that is communicatively connected to the client; the device collects the tester's eye movement information and transmits it to the client.
Step S802: judging whether the user has abnormal answering conditions according to the eye movement information.
Specifically, step S802 is executed by the client. By tracking and measuring eye position and eye movement, and combining the eye movement information with the above-mentioned time information, the client analyzes whether the tester's answers are false results and thus whether an abnormal answer condition exists. Concretely, eye-movement indices such as areas of interest (AOI) and heat maps may be analyzed: if, while answering, the gaze area deviates completely from the answer interface, or if, while the options are being played, the gaze deviates from the areas where the options are located, the answers given during that period are regarded as false results and the user is judged to have an abnormal answer condition.
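A simplified client-side sketch of this AOI check, assuming gaze samples arrive as screen coordinates and that the answer interface and option areas are axis-aligned rectangles; the tolerance parameter is an illustrative assumption, since the text above speaks of the gaze deviating completely.

```python
from typing import List, Tuple

Rect = Tuple[float, float, float, float]    # (x_min, y_min, x_max, y_max)


def _inside(point: Tuple[float, float], rect: Rect) -> bool:
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1


def gaze_abnormal(gaze_points: List[Tuple[float, float]],
                  answer_interface: Rect,
                  option_rects: List[Rect],
                  tolerance: float = 0.0) -> bool:
    """True if, while the question and options were presented, the gaze was
    (up to `tolerance`) never on the answer interface or on any option area,
    so the answers in this period are treated as false results."""
    if not gaze_points:
        return True                          # no usable eye-movement data
    on_interface = sum(_inside(p, answer_interface) for p in gaze_points)
    on_options = sum(any(_inside(p, r) for r in option_rects) for p in gaze_points)
    n = len(gaze_points)
    return on_interface / n <= tolerance or on_options / n <= tolerance
```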
Fig. 9 is a flowchart illustrating a method for extracting a confusable frequency band of answer data according to an embodiment of the present invention. In one embodiment of the present application, to enhance the personalized training of the fine hearing ability of the user, when performing the step of data analyzing the option information selected by the user to generate the test result of the fine hearing ability, the method further comprises the steps of: it should be noted that, steps S901 to S903 are performed by the platform end.
Step S901: and collecting answer data according to the option information selected by the user.
Specifically, in steps S901 to S903 the platform end first collects the user's answer data from the received option information. The answer data collected by the platform end include the user's wrongly answered questions together with their question data, for example the audio of the correct option A of a wrongly answered question and the audio of the wrong option B that the user selected.
Step S902: and extracting the frequency domain information of the audio in the answer data by using an acoustic signal analysis method.
Specifically, for the wrongly answered questions in the answer data collected in step S901, the audio of the correct option and the audio of the wrong option selected by the user are each analysed with an improved linear predictive coding method, and the formants of each frame of audio data and the formant track of the whole utterance are calculated.

In a preferred embodiment, the calculation in step S902 may be repeated several times, each time dividing the speech signal into frames of a different length (the specific frame lengths may be chosen as required). The formants under each frame division are then calculated using the root-finding method of linear predictive coding, and finally the formants obtained under the different frame divisions are averaged to give the mean formants. The significance of this improvement is that computing formants once with a fixed frame length introduces a certain calculation error; averaging the formant data obtained under different frame lengths noticeably reduces the deviation caused by a single estimate and yields a more accurate and objective analysis of the speech data.
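For illustration, the sketch below estimates formant frequencies frame by frame with linear predictive coding (root finding on the LPC polynomial) and averages the first-formant estimates obtained under several different frame lengths, in the spirit of the multi-frame averaging described above. The use of librosa, the LPC order and the frame lengths are assumptions of this sketch, not requirements of the patent.

```python
import numpy as np
import librosa


def frame_formants(frame: np.ndarray, sr: int, order: int = 12):
    """Formant frequencies (Hz) of one frame, from the roots of its LPC polynomial."""
    if not np.any(frame):
        return []                              # skip silent frames
    a = librosa.lpc(frame.astype(float), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90.0]      # discard near-DC artifacts


def mean_first_formant(path: str, frame_lengths=(512, 1024, 2048)):
    """Average the first-formant estimates over several frame divisions (step S902)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    estimates = []
    for n in frame_lengths:
        frames = librosa.util.frame(y, frame_length=n, hop_length=n // 2)
        per_frame = [frame_formants(frames[:, i], sr) for i in range(frames.shape[1])]
        f1 = [f[0] for f in per_frame if f]    # frames where a formant was found
        if f1:
            estimates.append(float(np.mean(f1)))
    return float(np.mean(estimates)) if estimates else None
```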
Step S903: based on the frequency domain information of the audio in the answer data, integrating to obtain a confusable frequency band; the confusable frequency band is used for increasing the weight of test question audio under the confusable frequency band when the same user performs auditory fine capability test.
Specifically, the frequency band ranges of the formant tracks of the correct option and the wrong option selected by the user in the wrong question data obtained in step S902 are extracted respectively, and are used as the confusing frequency band of the current user. It should be noted that, the confusable frequency band refers to an audio frequency band in which the user generates high frequency of confusion of options, which may reflect that the user has insufficient hearing fine discrimination capability in the corresponding audio frequency band. Regarding the function of the confusable frequency band, the confusable frequency band obtained by the voice analysis in step S903 is used for correspondingly increasing the training weight of the test question audio under the confusable frequency band corresponding to the user when the user subsequently performs the auditory fine capability test, so that the user can perform the auditory training with high pertinence and high individuation. Based on the improvement, the remote test method for the hearing fine capability can meet the personalized requirements of different users, and the improvement can obviously improve the rehabilitation training efficiency of the hearing fine capability of the user and help partial patients to realize hearing recovery more quickly because the weight of the audio frequency of the user which is easy to be confused is increased when the user performs rehabilitation training later.
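As a sketch under the description above (the merge rule, the weight factor and the item fields are illustrative assumptions), the confusable band can be taken as the frequency range spanned by the formant tracks of the confused pair, and then used to up-weight test items whose audio falls inside that band.

```python
def confusable_band(correct_track_hz, chosen_track_hz):
    """Merge the formant tracks (Hz) of the correct option and of the wrong
    option the user chose into one confusable frequency band."""
    combined = list(correct_track_hz) + list(chosen_track_hz)
    return min(combined), max(combined)


def reweight_items(items, band, boost=2.0):
    """Raise the sampling weight of test items whose audio centre frequency
    lies inside the user's confusable band (used in later tests and training)."""
    lo, hi = band
    return {item["id"]: (boost if lo <= item["centre_freq_hz"] <= hi else 1.0)
            for item in items}
```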
Fig. 10 is a schematic flow chart of a method for generating a test result through deep learning analysis in the embodiment of the invention. In one embodiment of the present application, in order to obtain a more accurate test result according to a answer situation of a user, a method for deep learning and analyzing option information selected by the user to generate a test result of hearing fine ability includes the following steps: it should be noted that, steps S1001 and S1003 are all performed by the platform end.
Step S1001: and acquiring the personal condition parameters of the user.
Specifically, the personal condition parameters of the user in step S1001 may be data filled out by the user, and may include factors such as age of the user, pure tone (hearing aid) threshold condition, implantation time of the hearing device, language family, cognitive level, etc., which have a strong correlation with the auditory fine ability of the user, so that determination of the test result of the auditory fine ability should also depend on these factors. Fig. 11 is a schematic diagram of a client user login interface according to an embodiment of the present invention. In an embodiment of the present application, the personal condition parameter of the user in step S1001 may be data having a binding relationship with the personal electronic account of the user, and the personal condition parameter of the user may be obtained by reading the personal electronic account number of the user, where the personal electronic account of the user may be a mobile phone number of the user.
Step S1002: inputting the personal condition parameters of the user and the option information selected by the user into a trained neural network model; the trained neural network model is used to predict scoring conditions of the user.
Specifically, the trained neural network model in step S1002 is obtained through model training, the training sample is derived from the personal condition parameters of the user collected in the history and the score condition of the user corresponding to the personal condition parameters of the user, and the neural network model can accurately map the nonlinear function relationship between the personal condition parameters of the user and the score condition of the user after being trained to be converged. Based on the above, the platform end can input the data into the trained neural network model after obtaining the personal condition parameters of the user and the option information selected by the user, and the scoring condition of the user is obtained through the prediction of the neural network model.
Step S1003: the trained neural network model outputs the scoring condition of the user.
Specifically, the trained neural network model accurately maps the nonlinear functional relationship between the user's personal condition parameters plus answer behaviour and the user's score, so that once the personal condition parameters and the option information selected by the user are input, the model outputs the user's score. In the improvement of steps S1001 to S1003, because a neural network model with strong approximation capability is adopted, the user's score can be obtained more accurately. In addition, the input of the neural network model here is not only the option information selected by the user but also the user's personal conditions, which reduces data deviation across different user groups, avoids the adverse effect of particular groups on prediction accuracy, and yields a more accurate and objective test score.
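A minimal sketch of such a scoring model in PyTorch; the feature layout, network size and loss are assumptions made for illustration rather than the patent's specification.

```python
import torch
import torch.nn as nn


class ScorePredictor(nn.Module):
    """Maps personal-condition parameters plus per-item answer features to a
    predicted auditory fine capability score (steps S1001-S1003)."""

    def __init__(self, n_personal: int, n_answer: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_personal + n_answer, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, personal: torch.Tensor, answers: torch.Tensor):
        # personal: (batch, n_personal), e.g. age, aided threshold, implant time
        # answers:  (batch, n_answer),  e.g. encoded option choices / correctness
        return self.net(torch.cat([personal, answers], dim=-1)).squeeze(-1)


# Training would regress against historically collected scores, for example:
# loss = nn.MSELoss()(model(personal_batch, answer_batch), score_batch)
```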
Fig. 12 is a schematic diagram showing a correspondence relationship between a confusable frequency band and an electrode channel in the embodiment of the invention. In one embodiment of the present application, after the integrating obtains the confusable frequency band, the method further includes the following steps: and determining an abnormal electrode channel of the user according to the confusable frequency band, and adjusting corresponding stimulation parameters according to the abnormal electrode channel of the user.
Specifically, for a user implanted with an artificial hearing device, each electrode channel corresponds to a frequency band. In most cases, for the same number of channels, the frequency ranges used by devices of the same hearing implant company are essentially similar. After the speech signal is processed by the external processor (mainly band-pass filtering), the envelope information of each specific frequency band is delivered to the corresponding channel. Moreover, once the manufacturer of the user's implanted device is known, the frequency range of each channel can be determined. Thus, based on the confusable frequency band extracted in steps S901 to S903, the confusable band can be mapped to the corresponding electrode channels, which are then labelled as "channels to be noted". The significance of this improvement is that, during subsequent device programming for the artificial-hearing user, the otologist can be reminded to pay extra attention to the hearing performance of these channels and, on that basis, adjust the stimulation parameters of the corresponding electrode channels to improve the user's hearing.
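A sketch of the band-to-channel mapping described above; the channel frequency table is invented for illustration, while a real mapping would come from the manufacturer's specification of the user's implanted device.

```python
def channels_to_flag(channel_bands_hz, confusable_band_hz):
    """Return the electrode channels whose analysis band overlaps the user's
    confusable frequency band, to be labelled 'channels to be noted'."""
    lo, hi = confusable_band_hz
    flagged = []
    for channel, (band_lo, band_hi) in channel_bands_hz.items():
        if band_lo <= hi and band_hi >= lo:      # interval overlap test
            flagged.append(channel)
    return flagged


# Illustrative only -- not a real device's filter bank:
example_bands = {1: (188, 313), 2: (313, 438), 3: (438, 563), 4: (563, 688)}
print(channels_to_flag(example_bands, confusable_band_hz=(400, 600)))  # [2, 3, 4]
```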
Although the above embodiments describe the steps in the stated sequential order, those skilled in the art will appreciate that, to achieve the effects of these embodiments, the steps need not be performed in that order; they may be performed simultaneously (in parallel) or in reverse order, and such simple variations fall within the scope of the present invention.
Referring to Fig. 13, which shows an optional hardware structure diagram of a remote test terminal 1300 for auditory fine capability provided in an embodiment of the present invention, the terminal 1300 may be a mobile phone, a computer device, a tablet device, a personal digital processing device, a factory background processing device, or the like. The remote test terminal 1300 for auditory fine capability includes: at least one processor 1301, a memory 1302, at least one network interface 1304, and a user interface 1306. The various components in the device are coupled together by a bus system 1305. It can be appreciated that the bus system 1305 is used to implement connection and communication between these components. In addition to the data bus, the bus system 1305 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled as the bus system 1305 in Fig. 13.
The user interface 1306 may include, among other things, a display, keyboard, mouse, trackball, click gun, keys, buttons, touch pad, or touch screen, etc.
It is to be appreciated that the memory 1302 can be a volatile memory or a nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be, among others, a read-only memory (ROM) or a programmable read-only memory (PROM). The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM) and synchronous static random access memory (SSRAM). The memory described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 1302 in an embodiment of the present invention is used to store various types of data to support the operation of the remote test terminal 1300 for auditory fine capability. Examples of such data include any executable program for running on the remote test terminal 1300, such as an operating system 13021 and application programs 13022. The operating system 13021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and handling hardware-based tasks. The application programs 13022 may include various applications, such as a media player (Media Player) and a browser (Browser), for implementing various application services. The program implementing the remote test method for auditory fine capability provided by the embodiment of the present invention may be included in the application programs 13022.
The method disclosed in the above embodiments of the present invention may be applied to the processor 1301 or implemented by the processor 1301. The processor 1301 may be an integrated circuit chip with signal-processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1301 or by instructions in the form of software. The processor 1301 may be a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 1301 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or any conventional processor, or the like. The steps of the remote test method for auditory fine capability provided by the embodiment of the invention may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium; the storage medium is located in the memory, and the processor 1301 reads the information in the memory and completes the steps of the method in combination with its hardware.
In an exemplary embodiment, the remote test terminal 1300 for auditory fine capability may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), or complex programmable logic devices (CPLDs) to perform the aforementioned method.
As shown in Fig. 14, a schematic diagram of a remote test system for auditory fine capability in an embodiment of the present invention is shown. In this embodiment, the remote test system for auditory fine capability includes a client and a platform end. The client is used for: acquiring the test module information selected by the user; transmitting the test module information selected by the user to the platform end; receiving the test question bank data, corresponding to the test module information selected by the user, sent by the platform end, playing the test question audio in the test question bank data and providing a plurality of options for the user to select; acquiring the option information selected by the user; transmitting the option information selected by the user to the platform end; and receiving the test result sent by the platform end and displaying the test result. The platform end is used for: acquiring the test module information selected by the user and sent by the client; according to the test module information selected by the user, invoking the test question bank data corresponding to the test module information selected by the user from a database; transmitting the test question bank data to the client; receiving the option information selected by the user and sent by the client, and performing deep learning analysis on the option information selected by the user to generate a test result of auditory fine capability; and sending the test result to the client.
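For illustration, a client-side sketch of this exchange is given below, assuming a simple HTTP/JSON transport; the platform address, endpoint paths, and field names are hypothetical, since the disclosure does not specify a wire format.

```python
# Client-side sketch of the client/platform message flow described above.
# The URL, endpoint paths and JSON field names are hypothetical placeholders.
import requests

PLATFORM = "https://platform.example.com"  # placeholder address

def run_test(user_id: str, module_id: str) -> dict:
    # 1. Send the test module selected by the user; receive the matching question bank.
    bank = requests.post(f"{PLATFORM}/question-bank",
                         json={"user_id": user_id, "module": module_id}).json()

    answers = []
    for item in bank["items"]:
        # 2. Play the test audio and present the options to the user (UI code omitted).
        selected, elapsed_s = present_item(item)  # returns chosen option id and time used
        answers.append({"item_id": item["id"],
                        "selected": selected,
                        "time_used_s": elapsed_s})

    # 3. Upload the selected options; the platform runs its deep-learning analysis
    #    and returns the auditory fine-capability test result for display.
    result = requests.post(f"{PLATFORM}/submit",
                           json={"user_id": user_id, "answers": answers}).json()
    return result

def present_item(item: dict):
    # Placeholder for the real client UI: plays the item's audio and collects the choice.
    raise NotImplementedError
```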
It should be noted that, in the remote test system for auditory fine capability of the above embodiment, the division into the program modules described above is merely used as an example. In practical applications, the above processing may be allocated to different program modules as required, i.e., the internal structure of the system may be divided into different program modules to complete all or part of the processing described above. In addition, the remote test system for auditory fine capability provided in the above embodiment and the embodiments of the remote test method for auditory fine capability belong to the same concept; its specific implementation process is detailed in the method embodiments and is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to a computer program. The aforementioned computer program may be stored in a computer-readable storage medium. When executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disk.
In the embodiments provided herein, the computer-readable storage medium may include a read-only memory, a random access memory, an EEPROM, a CD-ROM or other optical disk storage, a magnetic disk storage or other magnetic storage device, a flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In summary, the present application provides a remote test method, system, terminal and storage medium for auditory fine capability, which improve the efficiency of remotely testing auditory fine capability and address the shortcomings of the prior art: the lack of evaluation of a patient's auditory fine capability; the absence of a false-result screening mechanism during evaluation, which makes the evaluation less objective; and rehabilitation training that is neither targeted nor remotely implementable. The method therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles of the present application and their effects, and are not intended to limit the application. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed in the present application shall still be covered by the claims of the present application.

Claims (12)

1. A remote testing method of auditory fine capability, characterized by being executed by a client, comprising the steps of:
acquiring test module information selected by a user;
Transmitting the test module information selected by the user to a platform end;
receiving test question bank data corresponding to the test module information selected by the user and sent by the platform, playing test question audio in the test question bank data and providing a plurality of options for the user to select;
acquiring option information selected by a user;
transmitting the option information selected by the user to a platform end;
and receiving the test result sent by the platform end and displaying the test result.
2. The method for remote testing of auditory fine capability according to claim 1, wherein when performing the step of acquiring the option information selected by the user, the method further comprises the following steps:
acquiring time information consumed by a user when selecting options;
and sending the time information consumed by the user when selecting options to the platform end.
3. The method for remote testing of auditory fine capability according to claim 1, wherein when performing the step of acquiring the option information selected by the user, the method further comprises the following steps:
acquiring eye movement information of a user when selecting options;
judging whether the user has abnormal answering conditions according to the eye movement information.
4. The method of claim 2, wherein after performing the step of sending the time information consumed by the user when selecting options to the platform end, the method further comprises the following steps:
acquiring an alarm instruction sent by the platform end;
and displaying the warning information to the user according to the warning instruction.
5. A remote test method for auditory fine capability, which is executed by a platform end and comprises the following steps:
acquiring test module information selected by a user and sent by a client;
according to the test module information selected by the user, invoking test question bank data corresponding to the test module information selected by the user in a database;
transmitting the test question bank data to a client;
receiving option information selected by the user and sent by the client, and performing deep learning analysis on the option information selected by the user to generate a test result of hearing fine capability;
and sending the test result to the client.
6. The method for remote testing of auditory fine capability according to claim 5, wherein when performing the step of receiving the option information selected by the user and sent by the client, the method further comprises the following steps:
Receiving time information which is sent by the client and consumed by the user when selecting options;
judging whether the user has abnormal answer conditions according to the time information consumed by the user when selecting options;
and if the abnormal answer condition exists in the user, sending an alarm instruction to the client.
7. The method of claim 5, wherein when performing the step of performing data analysis on the option information selected by the user to generate a test result of auditory fine capability, the method further comprises the following steps:
collecting answer data according to the option information selected by the user;
extracting the frequency domain information of the audio in the answer data by using an acoustic signal analysis method;
based on the frequency domain information of the audio in the answer data, integrating to obtain a confusable frequency band; the confusable frequency band is used for increasing the weight of test question audio under the confusable frequency band when the same user performs auditory fine capability test.
8. The method of claim 5, wherein the deep learning analysis of the option information selected by the user to generate a test result of auditory fine capability comprises the following steps:
Acquiring personal condition parameters of the user;
inputting the personal condition parameters of the user and the option information selected by the user into a trained neural network model; the trained neural network model is used for predicting the scoring condition of the user;
and the trained neural network model outputs the scoring condition of the user as the test result.
9. The method of claim 7, wherein after the integrating to obtain the confusable frequency band, the method further comprises the following step:
and determining an abnormal electrode channel of the user according to the confusable frequency band, and adjusting corresponding stimulation parameters according to the abnormal electrode channel of the user.
10. A remote testing system for auditory fine capability, comprising:
the client is used for acquiring the test module information selected by the user; transmitting the test module information selected by the user to a platform end; receiving test question bank data corresponding to the test module information selected by the user and sent by the platform, playing test question audio in the test question bank data and providing a plurality of options for the user to select; acquiring option information selected by a user; transmitting the option information selected by the user to a platform end; receiving the test result sent by the platform end and displaying the test result;
The platform end is used for acquiring the test module information selected by the user and sent by the client; according to the test module information selected by the user, invoking test question bank data corresponding to the test module information selected by the user in a database; transmitting the test question bank data to a client; receiving option information selected by the user and sent by the client, and performing deep learning analysis on the option information selected by the user to generate a test result of hearing fine capability; and sending the test result to the client.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 4 or of claims 5 to 9.
12. An electronic terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, to cause the terminal to perform the method according to any one of claims 1 to 4 or claims 5 to 9.
CN202310511830.XA 2023-05-08 2023-05-08 Remote test method, system, terminal and storage medium for hearing fine capability Pending CN116269360A (en)

Priority Applications (1)

CN202310511830.XA — priority date 2023-05-08, filing date 2023-05-08 — Remote test method, system, terminal and storage medium for hearing fine capability

Publications (1)

CN116269360A — published 2023-06-23

Family

ID=86801567


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination