WO2020217359A1 - Fitting support device, fitting support method, and computer-readable recording medium - Google Patents


Info

Publication number
WO2020217359A1
WO2020217359A1 (PCT/JP2019/017507)
Authority
WO
WIPO (PCT)
Prior art keywords
information
parameter data
hearing test
fitting
fitting support
Prior art date
Application number
PCT/JP2019/017507
Other languages
English (en)
Japanese (ja)
Inventor
大 窪田
優太 芦田
英恵 下村
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2021515389A (JP7272425B2)
Priority to PCT/JP2019/017507
Publication of WO2020217359A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04R — LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 — Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 — Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H04R25/50 — Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 — Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/507 — Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic

Definitions

  • The present invention relates to a fitting support device and a fitting support method, and further to a computer-readable recording medium storing a program for realizing these.
  • When using a hearing aid, it is necessary to adjust it (fitting) to suit the wearer. In performing the fitting, a technician takes into account the subject's hearing, ear structure, everyday environmental sounds, and the like.
  • Patent Document 1 discloses a fitting device that automatically fits a hearing aid. The fitting device disclosed in Patent Document 1 has the subject listen to environmental sounds prepared in advance, has the subject evaluate how those sounds are heard, and performs the fitting by repeating adjustment of the hearing aid until the subject is satisfied.
  • However, although the device of Patent Document 1 can perform fitting so that the noise contained in environmental sounds is not perceived as bothersome, it is difficult for it to carry out fitting of the quality achieved by a highly skilled technician.
  • An example object of the present invention is to provide a fitting support device, a fitting support method, and a computer-readable recording medium that improve the accuracy of fitting.
  • To achieve the above object, a fitting support device in one aspect of the present invention includes: an acquisition means for acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background; and an estimation means for estimating, from the acquired hearing test information, attribute information, and background information, parameter data used to fit a hearing aid to the subject.
  • Further, a fitting support method in one aspect of the present invention includes: (a) acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background; and (b) estimating, from the acquired hearing test information, attribute information, and background information, parameter data used to fit a hearing aid to the subject.
  • Furthermore, a computer-readable recording medium in one aspect of the present invention records a program including instructions that cause a computer to execute: (a) a step of acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background; and (b) a step of estimating, from the acquired hearing test information, attribute information, and background information, parameter data used to fit a hearing aid to the subject.
  • According to the present invention, the accuracy of fitting can be improved.
  • FIG. 1 is a diagram showing an example of a fitting support device.
  • FIG. 2 is a diagram showing an example of a system having a fitting support device.
  • FIG. 3 is a diagram showing an example of a data structure of audiometry information.
  • FIG. 4 is a diagram showing an example of a data structure of attribute information.
  • FIG. 5 is a diagram showing an example of a data structure of background information.
  • FIG. 6 is a diagram showing an example of the data structure of the estimated parameter data.
  • FIG. 7 is a diagram showing an example of a system having a learning device.
  • FIG. 8 is a diagram showing an example of the operation of the fitting support device.
  • FIG. 9 is a diagram showing an example of the operation of the learning device.
  • FIG. 10 is a diagram showing an example of a computer that realizes a fitting support device.
  • FIG. 1 is a diagram showing an example of a fitting support device.
  • The fitting support device shown in FIG. 1 is a device that improves the accuracy of the adjustment (fitting) performed to adapt a hearing aid to a subject. As shown in FIG. 1, the fitting support device 1 has an acquisition unit 2 and an estimation unit 3.
  • The acquisition unit 2 acquires hearing test information representing the result of a hearing test performed on the subject, attribute information representing the subject's attributes, and background information representing the subject's background.
  • The estimation unit 3 takes the acquired hearing test information, attribute information, and background information as input and estimates the parameter data used to fit the hearing aid to the subject.
  • A hearing aid is a device that collects sound with a sound collecting unit (microphone), amplifies and processes the collected sound with a processing unit, and outputs the amplified and processed sound with an output unit (receiver).
  • The hearing test information representing the result of the hearing test performed on the subject includes, for example, at least one of an air conduction audiogram, a bone conduction audiogram, a discomfort threshold value, and speech intelligibility.
  • The attribute information representing the subject's attributes includes, for example, at least one of age, gender, occupation, place of residence, family structure, medical history, treatment history, hearing aid usage history, and physical characteristics (for example, height, weight, and ear acoustics).
  • Here, the ear acoustics are information representing the acoustic characteristics of the subject's ear.
  • The attribute information may also include information indicating the type of hearing aid.
  • The background information representing the subject's background includes, for example, at least one of the subject's living environment sounds and sound preferences.
  • The living environment sound is information representing the sounds heard in the subject's daily life.
  • The preference is information expressing the subject's tastes regarding sound.
  • The estimation unit 3 has a learning model for estimating parameter data; the model is generated by machine learning with a plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data as input.
  • The parameter data are data used to adjust the sound collecting unit, the processing unit, and the output unit of the target hearing aid.
  • The parameter data are, for example, parameters used to adjust the frequency characteristic value for each output level, the noise suppression strength, the howling suppression strength, the directivity type, the impact sound suppression degree, and the like.
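The parameter data described above can be pictured as a simple record. The sketch below is a hypothetical illustration only: the field names, types, and value ranges are assumptions for this example, not taken from the publication.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the parameter data described above.
# All field names and value ranges are illustrative assumptions.
@dataclass
class FittingParameters:
    gain_per_level: List[float]    # frequency characteristic value for each output level
    noise_suppression: int         # noise suppression strength
    howling_suppression: int       # howling suppression strength
    directivity: str               # directivity type (e.g. "omni", "adaptive")
    impact_sound_suppression: int  # impact sound suppression degree

params = FittingParameters(
    gain_per_level=[20.0, 25.0, 30.0],
    noise_suppression=2,
    howling_suppression=1,
    directivity="adaptive",
    impact_sound_suppression=3,
)
```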
  • The machine learning can be, for example, supervised learning or semi-supervised learning, using methods such as regression (least squares, random forest regression, and the like) or multi-class classification (decision tree algorithms and the like). However, the machine learning is not limited to these, and other learning methods may be used.
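As one concrete illustration of the regression mentioned above, a single fitting parameter could be predicted from a single hearing-test value by ordinary least squares. The function below is a minimal stdlib-only sketch under that assumption, not the publication's actual algorithm.

```python
def least_squares_fit(xs, ys):
    """Fit y = a*x + b by ordinary least squares.

    Minimal illustration of the regression approach mentioned above,
    e.g. predicting one fitting parameter from one hearing test value;
    not the publication's actual method.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Example: hearing levels (dB HL) vs. gain values chosen in past fittings
# (hypothetical numbers).
a, b = least_squares_fit([20.0, 40.0, 60.0], [10.0, 20.0, 30.0])
```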
  • As described above, the fitting support device 1 can estimate the parameter data taking into account not only the hearing test information but also the attribute information, the background information, and the like, so an appropriate fitting can be performed for the subject. Further, the fitting support device 1 can reduce the time required for fitting.
  • In a conventional fitting, the technician first performs fitting using parameter data obtained from the hearing test information by a prescriptive selection method.
  • However, since the hearing test information alone is usually insufficient to determine parameter data suited to the subject, the technician must approach the optimum parameter data further by a comparative selection method.
  • In the comparative selection method, the technician brings the parameter data closer to the optimum by conversing with the subject or having the subject listen to sample sound sources and observing the subject's reactions.
  • Consequently, it takes a long time to approach the optimum parameter data.
  • In contrast, in the present embodiment, the information that would otherwise be obtained through the comparative selection method is supplemented in advance by utilizing the subject's attribute information and background information, and the learning model uses this information directly to determine the parameter data. The time required to approach the optimum parameter data can therefore be shortened.
  • FIG. 2 is a diagram showing an example of a system having a fitting support device.
  • FIG. 3 is a diagram showing an example of a data structure of audiometry information.
  • FIG. 4 is a diagram showing an example of a data structure of attribute information.
  • FIG. 5 is a diagram showing an example of a data structure of background information.
  • FIG. 6 is a diagram showing an example of the data structure of the estimated parameter data.
  • As shown in FIG. 2, the system having the fitting support device 1 in the present embodiment has an input device 21 and an output device 22 in addition to the fitting support device 1.
  • The input device 21 is a device used to input the hearing test information, attribute information, and background information to the fitting support device 1. Specifically, the input device 21 first acquires the subject's hearing test information, attribute information, and background information from an information processing device (for example, a computer) or a storage device provided at a hearing aid dealer, a manufacturer, a related facility, or the like.
  • As shown in FIG. 3, the hearing test information is stored in the storage device with "item" information indicating the hearing test item, "hearing test result" information indicating the test result, and "data type" information indicating the type of the data associated with one another.
  • As shown in FIG. 4, the attribute information is stored in the storage device with "item" information representing the attribute item, "attribute" information representing the subject's attribute, and "data type" information representing the type of the data associated with one another.
  • As shown in FIG. 5, the background information is stored in the storage device with "item" information representing the background item, "background" information representing the subject's background, and "data type" information representing the type of the data associated with one another.
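The "item" / value / "data type" associations above can be pictured as simple keyed records. The entries below are purely hypothetical examples; the concrete items and values are assumptions for illustration.

```python
# Hypothetical examples of the stored records described above.
hearing_test_info = [
    {"item": "air conduction audiogram", "hearing test result": [30, 40, 55, 60], "data type": "vector"},
    {"item": "speech intelligibility", "hearing test result": 0.72, "data type": "numeric"},
]
attribute_info = [
    {"item": "age", "attribute": 67, "data type": "numeric"},
    {"item": "occupation", "attribute": "teacher", "data type": "string"},
]
background_info = [
    {"item": "living environment sound", "background": [0.1, 0.4, 0.2], "data type": "vector"},
]
```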
  • Subsequently, the input device 21 transmits the acquired hearing test information, attribute information, and background information to the acquisition unit 2 of the fitting support device 1 via wired or wireless communication.
  • the input device 21 is, for example, an information processing device such as a personal computer, a mobile computer, a smartphone, or a tablet. Further, a plurality of input devices 21 may be prepared, and hearing test information, attribute information, and background information may be input from separate input devices 21.
  • The output device 22 acquires output information converted into an outputtable format by the output information generation unit 24 and, based on that output information, outputs images, audio, and the like.
  • The output device 22 is, for example, an image display device using liquid crystal, organic EL (Electro Luminescence), or a CRT (Cathode Ray Tube), and may include an audio output device such as a speaker. The output device 22 may also be a printing device such as a printer.
  • the fitting support device 1 shown in FIG. 2 has a classification unit 23 and an output information generation unit 24 in addition to the acquisition unit 2 and the estimation unit 3. Further, the estimation unit 3 has a learning model 25.
  • In the example of FIG. 2, the learning model 25 is provided in the fitting support device 1, but it may instead be provided in an information processing device (for example, a computer) or a storage device not shown in FIG. 2. In that case, the estimation unit 3 and the information processing device or storage device are configured to be able to communicate with each other.
  • Next, the fitting support device 1 will be described.
  • The acquisition unit 2 acquires the hearing test information, attribute information, and background information from the input device 21. Specifically, the acquisition unit 2 receives, from the input device 21, the hearing test information representing the result of the hearing test performed on the subject, the attribute information representing the subject's attributes, and the background information representing the subject's background.
  • The classification unit 23 classifies the hearing test information, attribute information, and background information. Specifically, the classification unit 23 executes a clustering process on the explanatory variables such as the hearing test information, attribute information, and background information.
  • In the hearing test information, the air conduction audiogram, bone conduction audiogram, discomfort threshold value, speech intelligibility, and the like are represented by, for example, vector values.
  • In the attribute information, age, height, weight, and the like are represented by numerical values; gender and the like by binary values; occupation, medical history, treatment history, hearing aid usage history, and the like by character strings; place of residence, family structure, and the like by category values; and ear acoustics and the like by vector values.
  • In the background information, living environment sounds, preferences, and the like are represented by vector values. The living environment sound may also be represented by a time-series signal or the like.
  • Adjustment information (for example, the adjustment date and time, the adjustment place, and the technician identifier) may also be used: the adjustment date and time and the technician identifier are represented by numerical values, and the adjustment place by a category value.
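The representations above can be combined into a single numeric feature vector before clustering. The toy encoder below is a sketch under stated assumptions: the code tables, field names, and example values are all hypothetical, not from the publication.

```python
# Hypothetical code tables; the mappings are assumptions for illustration.
GENDER_CODES = {"male": 0, "female": 1}        # binary value
RESIDENCE_CODES = {"urban": 0, "suburban": 1, "rural": 2}  # category value

def encode_explanatory_variables(record):
    """Flatten one subject record into a numeric feature vector:
    numerical values kept as floats, binary/category values mapped
    to integer codes, vector values concatenated."""
    features = [
        float(record["age"]),                         # numerical value
        float(GENDER_CODES[record["gender"]]),        # binary value
        float(RESIDENCE_CODES[record["residence"]]),  # category value
    ]
    features.extend(float(v) for v in record["audiogram"])  # vector value
    return features

vec = encode_explanatory_variables(
    {"age": 67, "gender": "female", "residence": "rural", "audiogram": [30, 40, 55]}
)
```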
  • By executing the clustering process on the explanatory variables such as the hearing test information, attribute information, and background information, the classification unit 23 can reduce the amount of computational resources needed when estimating the parameter data, that is, the processor time and memory used for the estimation.
  • The estimation unit 3 takes the classified information as input and estimates the parameter data (objective variables) used to adapt the hearing aid to the subject. Specifically, the estimation unit 3 first acquires the classified information from the classification unit 23, then inputs it into the learning model 25 to estimate the parameter data.
  • Note that the estimation unit 3 does not necessarily have to use the classified information as input; it may use the unclassified information instead.
  • The parameter data are data used to adjust one or more of the sound collecting unit, the processing unit, and the output unit of the hearing aid. The parameter data include, for example, at least one of the type of hearing aid, the frequency characteristic value for each output level, the noise suppression strength, the howling suppression strength, the directivity type, and the impact sound suppression degree.
  • The parameter data are stored with "item" information representing the parameter item, "parameter" information representing the estimated parameter data, and "data type" information representing the type of the data associated with one another.
  • The learning model 25 is a model for estimating parameter data, generated in the learning phase by machine learning with a plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data as input. The details of the learning model 25 will be described later.
  • The output information generation unit 24 generates output information used to output the parameter data estimated by the estimation unit 3 to the output device 22, and then outputs the generated output information to the output device 22.
  • Note that the fitting support device 1 may set the parameter data estimated by the estimation unit 3 directly in the hearing aid; that is, the fitting support device 1 may adjust (fit) the hearing aid using the estimated parameter data without the intervention of a technician.
  • As described above, the fitting support device 1 can estimate the parameter data taking into account not only the hearing test information but also the attribute information, the background information, and the like. The technician can therefore use the estimated parameter data to perform an appropriate fitting for the subject.
  • Further, the fitting support device 1 can reduce the time required for the fitting to approach the optimum parameter data: because the information previously obtained through the comparative selection method can be exploited via the learning model 25, the time required for fitting can be shortened.
  • FIG. 7 is a diagram showing an example of a system having a learning device.
  • The learning device 31 shown in FIG. 7 is a device that generates the learning model 25 by machine learning.
  • As shown in FIG. 7, the system having the learning device 31 has a storage device 32 in addition to the learning device 31. The learning device 31 has an acquisition unit 33, a classification unit 34, a classification unit 35, and a generation unit 36.
  • The storage device 32 stores a plurality of previously acquired explanatory variables (hearing test information, attribute information, background information) and objective variables (parameter data).
  • The plurality of previously acquired hearing test information is information representing the results of hearing tests performed in the past on the users of a plurality of hearing aids.
  • The plurality of previously acquired attribute information is the attribute information acquired in the past for each of those users.
  • The plurality of previously acquired background information is the background information acquired in the past for each of those users.
  • The previously acquired parameter data are the parameter data used to adjust the hearing aids in fittings performed in the past for a plurality of hearing aid users by highly skilled technicians, or in fittings performed in the past using the fitting support device 1.
  • The acquisition unit 33 acquires the learning data acquired in the past. Specifically, the acquisition unit 33 acquires the learning data, such as the plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data, from the storage device 32, and transmits the acquired learning data to the classification unit 34.
  • The classification unit 34 classifies the received learning data. Specifically, the classification unit 34 first compares a score (satisfaction level) indicating how satisfied the user was with the past fitting against a preset threshold value. When the score is equal to or higher than the threshold, the classification unit 34 selects the learning data associated with that score.
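The threshold-based selection just described can be sketched in a few lines. The 1–5 score scale, the threshold value, and the record fields below are assumptions for illustration, not details from the publication.

```python
def select_training_data(records, threshold=4):
    """Keep only past fitting records whose satisfaction score is at
    or above the preset threshold, as in the classification step
    described above. The 1-5 scale and default threshold are
    hypothetical."""
    return [r for r in records if r["satisfaction"] >= threshold]

# Hypothetical past fitting records.
records = [
    {"parameters": {"noise_suppression": 2}, "satisfaction": 5},
    {"parameters": {"noise_suppression": 4}, "satisfaction": 2},
    {"parameters": {"noise_suppression": 1}, "satisfaction": 4},
]
selected = select_training_data(records)
```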
  • The classification unit 35 further classifies the learning data classified by the classification unit 34. Specifically, the classification unit 35 executes a clustering process on that learning data, converting high-dimensional information into low-dimensional information. The reason is that explanatory variables expressed as sentences or time-series signals become variables with enormous numbers of numerical and categorical dimensions, and learning on them as-is would increase the amount of computational resources required.
  • For character string data, therefore, it is conceivable to apply word-presence evaluation, TF-IDF, or the like to convert the strings into numerical vectors, and then to convert this vectorized data into low-dimensional information using, for example, k-means clustering or deep learning.
  • For example, high-dimensional information such as a character string or a time-series signal can be converted into a vector value by a deep neural network, and the converted vector value can then be mapped to low-dimensional information (for example, a label) using the k-nearest neighbor method. Labels can also be produced by unsupervised learning using a clustering method such as the k-means method.
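The string-to-vector step above could, for instance, be realized by a minimal TF-IDF over word counts. The function below is a stdlib-only sketch of that conversion, not the publication's exact method, and the example documents are hypothetical.

```python
import math

def tfidf_vectors(documents):
    """Convert string data into numerical vectors with a minimal
    TF-IDF (term frequency times smoothed inverse document
    frequency), as one way to vectorize text before clustering."""
    vocab = sorted({w for doc in documents for w in doc.split()})
    n = len(documents)
    # Document frequency of each vocabulary word.
    df = {w: sum(1 for d in documents if w in d.split()) for w in vocab}
    vectors = []
    for doc in documents:
        words = doc.split()
        vec = [(words.count(w) / len(words)) * (math.log(n / df[w]) + 1.0)
               for w in vocab]
        vectors.append(vec)
    return vocab, vectors

# Example: free-text occupation entries (hypothetical).
vocab, vectors = tfidf_vectors(["construction worker", "office worker", "retired"])
```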
  • In this way, the learning model 25 can be trained using only the learning data with a high satisfaction level, so the parameter data estimated using the learning model 25 yield higher user satisfaction. Moreover, since the learning data can be reduced, the amount of computational resources can be lowered, for example by shortening the learning time and reducing memory usage.
  • Further, by executing the clustering process on the learning data classified by the classification unit 34, the classification unit 35 can further reduce the amount of computational resources used during learning.
  • The generation unit 36 performs machine learning using the learning data classified by the classification unit 35 to generate the learning model 25, and stores the generated learning model 25 in the estimation unit 3.
  • Note that the generation unit 36 may store the learning model 25 in the system having the fitting support device 1, in the system having the learning device 31, or in a storage device outside these systems.
  • The machine learning can be, for example, supervised learning or semi-supervised learning, using methods such as regression (least squares, random forest regression, and the like) or multiclass classification (decision tree algorithms and the like).
  • As the input, the learning data classified by the classification unit 35 may be used, the learning data classified by the classification unit 34 may be used, or unclassified learning data may be used.
  • As the learning data used to generate the learning model 25, hearing test information, attribute information, background information, and parameter data that were determined in the past to give a well-fitted result may also be used.
  • As described above, the learning device 31 can generate the learning model 25 taking into account not only the previously acquired hearing test information but also the previously acquired attribute information, background information, parameter data, and the like.
  • In particular, the learning model 25 can be generated from the results of fittings performed by highly skilled technicians.
  • Therefore, when adapting a hearing aid to a subject, the technician can perform an appropriate fitting by using the parameter data estimated with the generated learning model 25.
  • Moreover, parameter data close to the optimum can be estimated, so the time required for fitting can be shortened. That is, since the information conventionally obtained by the comparative selection method can be obtained by using the learning model 25, the time required for fitting is reduced.
  • FIG. 8 is a diagram showing an example of the operation of the fitting support device.
  • In the following description, FIGS. 2 to 6 will be referred to as appropriate.
  • In the present embodiment, the fitting support method is implemented by operating the fitting support device. Therefore, the following description of the operation of the fitting support device substitutes for a description of the fitting support method in the present embodiment.
  • FIG. 9 is a diagram showing an example of the operation of the learning device.
  • In the following description, FIG. 7 will be referred to as appropriate.
  • In the present embodiment, the learning method is implemented by operating the learning device. Therefore, the following description of the operation of the learning device substitutes for a description of the learning method in the present embodiment.
  • First, in the operation phase, the acquisition unit 2 acquires the subject's hearing test information, attribute information, and background information (explanatory variables) from the input device 21 (step A1). Specifically, in step A1, the acquisition unit 2 receives, from the input device 21, the hearing test information representing the result of the hearing test performed on the subject, the attribute information representing the subject's attributes, and the background information representing the subject's background.
  • Next, the classification unit 23 classifies the hearing test information, attribute information, and background information (step A2). Specifically, in step A2, the classification unit 23 executes a clustering process on the explanatory variables, converting high-dimensional information into low-dimensional information. By doing so, the classification unit 23 can reduce the amount of computational resources needed when estimating the parameter data. Note that the process of step A2 may be omitted.
  • Next, the estimation unit 3 takes the information classified in step A2 as input and estimates the parameter data (objective variables) used to adapt the hearing aid to the subject (step A3). Specifically, in step A3, the estimation unit 3 first acquires the classified information from the classification unit 23 and then inputs it into the learning model 25 to estimate the parameter data. Note that the estimation unit 3 does not necessarily have to use the classified information as input; it may use unclassified information instead.
  • Next, the output information generation unit 24 generates output information used to output the parameter data estimated by the estimation unit 3 to the output device 22 (step A4), and then outputs the generated output information to the output device 22 (step A5).
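The flow of steps A1 to A5 can be sketched as a small pipeline. The function arguments below stand in for the classification unit, the learning model, and the output information generation unit; all of them, and the example values, are assumptions for illustration.

```python
def run_fitting_support(explanatory_variables, classify, estimate, generate_output):
    """Sketch of steps A1-A5 above: the acquired explanatory variables
    are clustered, fed to the learning model to estimate parameter
    data, and the result is converted into output information."""
    clustered = classify(explanatory_variables)   # step A2 (may be omitted)
    parameters = estimate(clustered)              # step A3
    return generate_output(parameters)            # steps A4-A5

# Toy stand-ins for the device's units (hypothetical).
output = run_fitting_support(
    {"age": 67, "audiogram": [30, 40, 55]},
    classify=lambda x: x,                         # no-op clustering
    estimate=lambda x: {"noise_suppression": 2},  # dummy learning model
    generate_output=lambda p: f"estimated: {p}",
)
```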
  • First, the acquisition unit 33 acquires the learning data acquired in the past (step B1). Specifically, in step B1, the acquisition unit 33 acquires the learning data, such as the plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data, from the storage device 32 and transmits them to the classification unit 34.
  • Next, the classification unit 34 classifies the received learning data (step B2). Specifically, in step B2, the classification unit 34 first compares a score (satisfaction level) indicating how satisfied the user was with the past fitting against a preset threshold value; when the score is equal to or higher than the threshold, the classification unit 34 selects the learning data associated with that score. In this way, the learning model 25 can be trained using only the learning data with a high satisfaction level, so the parameter data estimated using the learning model 25 yield higher user satisfaction.
  • the classification unit 35 further classifies the learning data classified in step B2 (step B3). Specifically, in step B3, the classification unit 35 further executes a clustering process on the learning data classified in step B2. In the clustering process, high-dimensional information is converted into low-dimensional information.
  • in this way, the classification unit 35 can further reduce the learning data, which reduces the required computational resources, for example by shortening the learning time and lowering memory usage.
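One way to read "high-dimensional information is converted into low-dimensional information" in step B3 is a projection such as principal component analysis. The sketch below, with invented data, reduces four-dimensional learning records to one dimension using NumPy; PCA is an illustrative choice, not an algorithm the patent names.

```python
import numpy as np

# Hypothetical classified learning data: 5 records x 4 features
# (e.g. two hearing thresholds, age, coded environment noise).
X = np.array([
    [40.0, 55.0, 67.0, 3.0],
    [42.0, 53.0, 65.0, 2.0],
    [30.0, 60.0, 50.0, 4.0],
    [31.0, 61.0, 52.0, 4.0],
    [41.0, 54.0, 66.0, 3.0],
])

# Center the data, then project onto the top principal component
# (step B3, classification unit 35): each record becomes a single number.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
X_low = Xc @ vt[:1].T   # shape (5, 1)

print(X_low.shape)  # prints (5, 1)
```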
  • the generation unit 36 performs machine learning using the learning data classified by the classification unit 35 to generate the learning model 25 (step B4). Subsequently, the generation unit 36 stores the generated learning model 25 in the estimation unit 3 (step B5).
  • the generation unit 36 may store the learning model 25 in the system having the fitting support device 1, in the system having the learning device 31, or in a storage device outside either system.
  • the machine learning may be, for example, supervised learning or semi-supervised learning, using methods such as regression (for example, the least squares method or random forest regression) or multiclass classification (for example, decision tree algorithms).
  • as input, the generation unit 36 may use the learning data classified by the classification unit 34 in step B2, the learning data classified by the classification unit 35 in step B3, or learning data that has not been classified.
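As a concrete illustration of step B4, the sketch below fits a least-squares regression, one of the regression methods the text mentions, from hypothetical classified learning data to parameter data. The feature layout and all values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical classified learning data: each row is (two hearing-test
# thresholds, age); each target row is parameter data (e.g. gain at two bands).
X = np.array([
    [40.0, 55.0, 67.0],
    [30.0, 60.0, 50.0],
    [50.0, 50.0, 72.0],
    [35.0, 58.0, 45.0],
])
y = np.array([
    [12.0, 18.0],
    [10.0, 20.0],
    [15.0, 15.0],
    [11.0, 19.0],
])

# Step B4 (generation unit 36): a least-squares fit stands in for learning
# model 25. Random forest or decision trees are alternatives the text names.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# At estimation time (step A3), the fitted weights map a new subject's
# features to estimated parameter data.
new_subject = np.array([42.0, 54.0, 60.0])
estimated_params = new_subject @ W
print(estimated_params.shape)  # prints (2,)
```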
  • as the learning data used to generate the learning model 25, hearing test information, attribute information, background information, and parameter data that were determined to be in conformity in the past may be used.
  • in this way, the fitting support device 1 can estimate the parameter data by taking into account not only the hearing test information but also the attribute information, the background information, and the like, so that a technician can use the estimated parameter data to perform an appropriate fitting for the subject.
  • further, the fitting support device 1 can reduce the time required for the fitting to approach the optimum parameter data. That is, since information previously obtained through the comparison selection method can now be obtained by using the learning model 25, the time required for fitting can be shortened.
  • the fitting support device 1 may set the parameter data estimated by the estimation unit 3 in the hearing aid. That is, the fitting support device 1 may directly adjust (fitting) the hearing aid using the estimated parameter data without the intervention of a technician.
  • the learning device 31 can generate the learning model 25 by taking into account not only the hearing test information acquired in the past but also the attribute information, background information, parameter data, etc. acquired in the past.
  • the learning model 25 can be generated by inputting the results of fittings performed by technicians with a high skill level.
  • when adapting the hearing aid to the subject, the technician can perform an appropriate fitting using the parameter data estimated by the generated learning model 25.
  • parameter data close to the optimum parameter data can be estimated, so the time required for fitting can be shortened. That is, since information conventionally obtained through the comparison selection method can be obtained by using the learning model 25, the time required for fitting can be shortened.
  • the program for estimating the parameter data in the embodiment of the present invention may be any program that causes a computer to execute steps A1 to A5 shown in FIG. By installing this program on a computer and executing it, the fitting support device and the fitting support method according to the present embodiment can be realized.
  • the computer processor functions as an acquisition unit 2, a classification unit 23, an estimation unit 3, and an output information generation unit 24, and performs processing.
  • the program for estimating the parameter data in the present embodiment may be executed by a computer system constructed by a plurality of computers.
  • each computer may function as any of the acquisition unit 2, the classification unit 23, the estimation unit 3, and the output information generation unit 24, respectively.
  • the program for generating the learning model according to the embodiment of the present invention may be a program that causes a computer to execute steps B1 to B5 shown in FIG. By installing this program on a computer and executing it, the learning device and the learning method according to the present embodiment can be realized.
  • the computer processor functions as an acquisition unit 33, a classification unit 34, a classification unit 35, and a generation unit 36, and performs processing.
  • the program for generating the learning model in the present embodiment may be executed by a computer system constructed by a plurality of computers.
  • each computer may function as any of the acquisition unit 33, the classification unit 34, the classification unit 35, and the generation unit 36, respectively.
  • FIG. 10 is a block diagram showing an example of a computer that realizes the fitting support device or the learning device according to the embodiment of the present invention.
  • the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These parts are connected to one another via a bus 121 so as to be capable of data communication.
  • the computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to the CPU 111 or in place of the CPU 111.
  • the CPU 111 expands the programs (codes) of the present embodiment stored in the storage device 113 into the main memory 112 and executes them in a predetermined order to perform various operations.
  • the main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • the program according to the present embodiment is provided in a state of being stored in a computer-readable recording medium 120.
  • the program in the present embodiment may be distributed over the Internet, to which the computer is connected via the communication interface 117.
  • examples of the storage device 113 include a semiconductor storage device such as a flash memory, in addition to a hard disk drive.
  • the input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and mouse.
  • the display controller 115 is connected to the display device 119 and controls the display on the display device 119.
  • the data reader / writer 116 mediates the data transmission between the CPU 111 and the recording medium 120, reads the program from the recording medium 120, and writes the processing result in the computer 110 to the recording medium 120.
  • the communication interface 117 mediates data transmission between the CPU 111 and another computer.
  • specific examples of the recording medium 120 include a general-purpose semiconductor storage device such as CF (CompactFlash (registered trademark)) or SD (Secure Digital), a magnetic recording medium such as a flexible disk, and an optical recording medium such as a CD-ROM (Compact Disk Read Only Memory).
  • the fitting support device 1 or the learning device 31 in the present embodiment can also be realized by using hardware corresponding to each part, instead of a computer with the program installed. Further, the fitting support device 1 or the learning device 31 may be partially realized by a program, with the remainder realized by hardware.
  • An acquisition unit that acquires hearing test information representing the results of a hearing test performed on a target person, attribute information representing the attributes of the target person, and background information representing the background of the target person.
  • An estimation unit that inputs the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting the hearing aid to the subject.
  • a fitting support device characterized by having.
  • the fitting support device according to Appendix 1.
  • the estimation unit has a learning model for estimating parameter data generated by machine learning by inputting a plurality of hearing test information, parameter data, attribute information, and background information acquired in the past.
  • the fitting support device according to Appendix 2, wherein the input used to generate the learning model is the hearing test information, the parameter data, the attribute information, and the background information that were determined to be in conformity in the past.
  • the fitting support device according to any one of Appendix 1 to 3.
  • the attribute information includes at least one of the subject's age, gender, physique, occupation, medical history, and treatment history, and the background information includes at least one of the subject's living environment sounds and preferences.
  • (b) a step of inputting the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting the hearing aid to the subject.
  • the fitting support method according to Appendix 6, wherein the hearing test information, the parameter data, the attribute information, and the background information that were determined to be in conformity in the past are used.
  • the fitting support method according to any one of Appendix 5 to 7.
  • a fitting support method in which the attribute information includes at least one of the subject's age, gender, physique, occupation, medical history, and treatment history, and the background information includes at least one of the subject's living environment sounds and preferences.
  • (b) a step of inputting the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting the hearing aid to the subject.
  • a computer-readable recording medium that records a program, including instructions to execute.
  • Appendix 11 The computer-readable recording medium according to Appendix 10.
  • a computer-readable recording medium wherein the input used to generate the learning model is the hearing test information, the parameter data, the attribute information, and the background information that were determined to be in conformity in the past.
  • Appendix 12 The computer-readable recording medium according to any one of Appendix 9 to 11.
  • a computer-readable recording medium wherein the attribute information includes at least one of the subject's age, gender, physique, occupation, medical history, and treatment history, and the background information includes at least one of the subject's living environment sounds and preferences.
  • the accuracy of fitting can be improved.
  • the time required for fitting can be shortened.
  • INDUSTRIAL APPLICABILITY The present invention is useful in fields that require fitting of a wearable device such as a hearing aid.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Otolaryngology (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a fitting support device (1) provided with an acquisition unit (2) that acquires hearing test information indicating the results of a hearing test performed on a subject, attribute information representing the attributes of the subject, and background information representing the background of the subject, and an estimation unit (3) that inputs the acquired hearing test information, attribute information, and background information to estimate parameter data used for fitting a hearing aid to the subject.
PCT/JP2019/017507 2019-04-24 2019-04-24 Dispositif d'aide à l'ajustement, procédé d'aide à l'ajustement et support d'enregistrement lisible par ordinateur WO2020217359A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021515389A JP7272425B2 (ja) 2019-04-24 2019-04-24 フィッティング支援装置、フィッティング支援方法、及びプログラム
PCT/JP2019/017507 WO2020217359A1 (fr) 2019-04-24 2019-04-24 Dispositif d'aide à l'ajustement, procédé d'aide à l'ajustement et support d'enregistrement lisible par ordinateur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/017507 WO2020217359A1 (fr) 2019-04-24 2019-04-24 Dispositif d'aide à l'ajustement, procédé d'aide à l'ajustement et support d'enregistrement lisible par ordinateur

Publications (1)

Publication Number Publication Date
WO2020217359A1 true WO2020217359A1 (fr) 2020-10-29

Family

ID=72941169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/017507 WO2020217359A1 (fr) 2019-04-24 2019-04-24 Dispositif d'aide à l'ajustement, procédé d'aide à l'ajustement et support d'enregistrement lisible par ordinateur

Country Status (2)

Country Link
JP (1) JP7272425B2 (fr)
WO (1) WO2020217359A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022264535A1 (fr) * 2021-06-18 2022-12-22 ソニーグループ株式会社 Procédé de traitement d'informations et système de traitement d'informations

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009001559A1 (fr) * 2007-06-28 2008-12-31 Panasonic Corporation Aide auditive s'adaptant à l'environnement
JP2017152865A (ja) * 2016-02-23 2017-08-31 リオン株式会社 補聴器フィッティング装置、補聴器フィッティングプログラム、補聴器フィッティングサーバ、および補聴器フィッティング方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1023647B1 (fr) 1997-10-15 2005-04-13 Beltone Electronics Corporation Dispositif "neuroflou" pour prothese auditive programmable

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009001559A1 (fr) * 2007-06-28 2008-12-31 Panasonic Corporation Aide auditive s'adaptant à l'environnement
JP2017152865A (ja) * 2016-02-23 2017-08-31 リオン株式会社 補聴器フィッティング装置、補聴器フィッティングプログラム、補聴器フィッティングサーバ、および補聴器フィッティング方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022264535A1 (fr) * 2021-06-18 2022-12-22 ソニーグループ株式会社 Procédé de traitement d'informations et système de traitement d'informations

Also Published As

Publication number Publication date
JP7272425B2 (ja) 2023-05-12
JPWO2020217359A1 (fr) 2020-10-29

Similar Documents

Publication Publication Date Title
US10721571B2 (en) Separating and recombining audio for intelligibility and comfort
US20210393168A1 (en) User authentication via in-ear acoustic measurements
JP6293314B2 (ja) 補聴器システムのパラメータ最適化方法および補聴器システム
US20230011937A1 (en) Methods and apparatus to generate optimized models for internet of things devices
CN109600699B (zh) 用于处理服务请求的系统及其中的方法和存储介质
US8335332B2 (en) Fully learning classification system and method for hearing aids
WO2020217359A1 (fr) Dispositif d'aide à l'ajustement, procédé d'aide à l'ajustement et support d'enregistrement lisible par ordinateur
EP3940698A1 (fr) Procédé mis en uvre par ordinateur permettant de fournir des données pour une évaluation automatique des pleurs de bébé
JP7276433B2 (ja) フィッティング支援装置、フィッティング支援方法、及びプログラム
US20230081796A1 (en) Managing audio content delivery
JP2018005122A (ja) 検出装置、検出方法及び検出プログラム
US11882413B2 (en) System and method for personalized fitting of hearing aids
CN111148001A (zh) 听力系统、附件设备和听力算法情境设计的相关方法
EP4207812A1 (fr) Procédé de traitement de signaux audio sur un système auditif, système auditif et réseau neuronal pour le traitement de signaux audio
US11457320B2 (en) Selectively collecting and storing sensor data of a hearing system
JP2023027697A (ja) 端末装置、送信方法、送信プログラム及び情報処理システム
Ni et al. Personalization of Hearing AID DSLV5 Prescription Amplification in the Field via a Real-Time Smartphone APP
Cauchi et al. Hardware/software architecture for services in the hearing aid industry
WO2022264535A1 (fr) Procédé de traitement d'informations et système de traitement d'informations
US20240121560A1 (en) Facilitating hearing device fitting
US11146902B2 (en) Facilitating a bone conduction otoacoustic emission test
US11749270B2 (en) Output apparatus, output method and non-transitory computer-readable recording medium
US20220312126A1 (en) Detecting Hair Interference for a Hearing Device
US10601757B2 (en) Multi-output mode communication support device, communication support method, and computer program product
JP2024034016A (ja) 音声取得装置および音声取得方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925730

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021515389

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925730

Country of ref document: EP

Kind code of ref document: A1