WO2020217359A1 - Fitting assistance device, fitting assistance method, and computer-readable recording medium - Google Patents


Info

Publication number
WO2020217359A1
WO2020217359A1 · PCT/JP2019/017507
Authority
WO
WIPO (PCT)
Prior art keywords
information
parameter data
hearing test
fitting
fitting support
Prior art date
Application number
PCT/JP2019/017507
Other languages
French (fr)
Japanese (ja)
Inventor
大 窪田
優太 芦田
英恵 下村
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2021515389A (patent JP7272425B2)
Priority to PCT/JP2019/017507
Publication of WO2020217359A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/507 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic

Definitions

  • The present invention relates to a fitting support device and a fitting support method, and further relates to a computer-readable recording medium recording a program for realizing these.
  • When using a hearing aid, it must be adjusted (fitted) to suit the subject. In performing the fitting, a technician takes into account the subject's hearing, ear structure, living-environment sounds, and the like.
  • Patent Document 1 discloses a fitting device that automatically fits a hearing aid. The fitting device disclosed in Patent Document 1 performs fitting by having the subject audition environmental sounds created in advance, having the subject evaluate how the auditioned environmental sounds are heard, and repeating the adjustment of the hearing aid until the subject is satisfied.
  • However, although the fitting device of Patent Document 1 can perform fitting so that the subject does not find the noise contained in the environmental sounds bothersome, it is difficult for it to carry out fitting of the kind performed by a highly skilled technician.
  • An example of an object of the present invention is to provide a fitting support device, a fitting support method, and a computer-readable recording medium that improve the accuracy of fitting.
  • The fitting support device in one aspect of the present invention includes an acquisition means for acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background, and an estimation means for estimating, with the acquired hearing test information, attribute information, and background information as inputs, parameter data used for fitting a hearing aid to the subject.
  • The fitting support method in one aspect of the present invention is characterized by (a) acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background, and (b) estimating, with the acquired hearing test information, attribute information, and background information as inputs, parameter data used for fitting a hearing aid to the subject.
  • A computer-readable recording medium according to one aspect of the present invention is characterized in that it records a program containing instructions that cause a computer to execute (a) a step of acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background, and (b) a step of estimating, with the acquired hearing test information, attribute information, and background information as inputs, parameter data used for fitting a hearing aid to the subject.
  • According to the present invention, the accuracy of fitting can be improved.
  • FIG. 1 is a diagram showing an example of a fitting support device.
  • FIG. 2 is a diagram showing an example of a system having a fitting support device.
  • FIG. 3 is a diagram showing an example of a data structure of audiometry information.
  • FIG. 4 is a diagram showing an example of a data structure of attribute information.
  • FIG. 5 is a diagram showing an example of a data structure of background information.
  • FIG. 6 is a diagram showing an example of the data structure of the estimated parameter data.
  • FIG. 7 is a diagram showing an example of a system having a learning device.
  • FIG. 8 is a diagram showing an example of the operation of the fitting support device.
  • FIG. 9 is a diagram showing an example of the operation of the learning device.
  • FIG. 10 is a diagram showing an example of a computer that realizes a fitting support device.
  • FIG. 1 is a diagram showing an example of a fitting support device.
  • The fitting support device shown in FIG. 1 is a device that improves the accuracy of the adjustment (fitting) that adapts a hearing aid to a subject. As shown in FIG. 1, the fitting support device 1 has an acquisition unit 2 and an estimation unit 3.
  • The acquisition unit 2 acquires hearing test information representing the result of the hearing test performed on the subject, attribute information representing the subject's attributes, and background information representing the subject's background.
  • The estimation unit 3 takes the acquired hearing test information, attribute information, and background information as inputs and estimates parameter data used for fitting the hearing aid to the subject.
  • A hearing aid is a device that collects sound with a sound-collecting unit (microphone), amplifies and processes the collected sound with a processing unit, and outputs the amplified and processed sound with an output unit (receiver).
  • The hearing test information representing the result of the hearing test performed on the subject has, for example, at least one of an air-conduction audiogram, a bone-conduction audiogram, a discomfort threshold, and speech intelligibility.
  • The attribute information representing the subject's attributes has, for example, at least one piece of information among age, gender, occupation, place of residence, family structure, medical history, treatment history, hearing aid usage history, and physical characteristics (for example, height, weight, and ear acoustics).
  • Ear acoustics is information representing the acoustic characteristics of the ear.
  • The attribute information may also include information indicating the type of hearing aid.
  • The background information representing the subject's background has, for example, at least one of the subject's living-environment sounds and sound preferences.
  • Living-environment sound is information representing the sounds heard in the subject's daily life.
  • Preference is information representing the subject's preferences regarding sound.
  • The estimation unit 3 has a learning model for estimating parameter data, generated by machine learning with a plurality of hearing test information, attribute information, background information, and parameter data acquired in the past as inputs.
  • The parameter data is data used for adjusting the sound-collecting unit, processing unit, and output unit provided in the target hearing aid.
  • The parameter data is, for example, parameters used for adjusting the frequency characteristic value for each output level, the noise suppression strength, the howling suppression strength, the directivity type, the impact-sound suppression degree, and the like.
  • Machine learning may be, for example, supervised learning or semi-supervised learning, using methods such as regression (for example, the least-squares method or random forest regression) or multi-class classification (for example, decision-tree algorithms). However, machine learning is not limited to the above; other learning methods may also be used.
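The publication names the least-squares method among usable regression techniques. As a purely illustrative sketch (the features, the parameter targets, and all data below are invented, not taken from the patent), estimating fitting parameters from numerically encoded subject features by ordinary least squares might look like:

```python
# Hypothetical sketch: least-squares regression from subject features to
# fitting parameters. All features, targets, and data are invented.
import numpy as np

rng = np.random.default_rng(0)

# Explanatory variables: e.g. audiogram thresholds at four frequencies (dB HL),
# age, and a numerically encoded living-environment category.
X = rng.uniform(0.0, 90.0, size=(200, 6))

# Objective variables: e.g. per-level gain and noise-suppression strength,
# simulated here as a noisy linear function of the features.
true_w = rng.uniform(0.1, 0.5, size=(6, 2))
y = X @ true_w + rng.normal(0.0, 0.5, size=(200, 2))

# "Learning phase": fit weights by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Operation phase": estimate fitting parameter data for a new subject.
new_subject = rng.uniform(0.0, 90.0, size=(1, 6))
params = new_subject @ w
print(params.shape)  # (1, 2)
```

In practice the patent leaves the concrete model open (regression, multi-class classification, or other methods), so this is only one of the admissible choices.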
  • In this way, the fitting support device 1 can estimate parameter data by taking into account not only the hearing test information but also the attribute information, the background information, and the like, so appropriate fitting can be performed for the subject. Further, the fitting support device 1 can reduce the time required for fitting.
  • In conventional fitting, the technician first performs fitting using parameter data obtained from the hearing test information by a prescriptive selection method.
  • However, since the hearing test information alone is usually insufficient to determine parameter data suited to the subject, it is necessary to approach the optimum parameter data further by a comparative selection method.
  • In the comparative selection method, in order to approach the parameter data optimal for the subject, the technician observes the subject's reactions while conversing with the subject or having the subject listen to sample sound sources, and gradually brings the parameter data closer to the optimum.
  • Consequently, it takes a long time to approach the optimum parameter data.
  • In contrast, in the present embodiment, the amount of information that would otherwise be obtained by the comparative selection method is supplemented in advance by utilizing the subject's attribute information and background information, and this information can be used directly by the learning model for determining the parameter data. Therefore, the time required to approach the optimum parameter data can be shortened.
  • FIG. 2 is a diagram showing an example of a system having a fitting support device.
  • FIG. 3 is a diagram showing an example of a data structure of audiometry information.
  • FIG. 4 is a diagram showing an example of a data structure of attribute information.
  • FIG. 5 is a diagram showing an example of a data structure of background information.
  • FIG. 6 is a diagram showing an example of the data structure of the estimated parameter data.
  • The system having the fitting support device 1 in the present embodiment has, in addition to the fitting support device 1, an input device 21 and an output device 22.
  • The input device 21 is a device used to input hearing test information, attribute information, and background information to the fitting support device 1. Specifically, the input device 21 first acquires the subject's hearing test information, attribute information, and background information from an information processing device (for example, a computer) or a storage device provided at a hearing aid dealer, manufacturer, related facility, or the like.
  • The hearing test information is stored in the storage device with the "item" information indicating the hearing test item, the "hearing test result" information indicating the test result, and the "data type" information indicating the type of data associated with one another.
  • The attribute information is stored in the storage device with the "item" information representing the attribute item, the "attribute" information representing the subject's attribute, and the "data type" information representing the type of data associated with one another.
  • The background information is stored in the storage device with the "item" information representing the background item, the "background" information representing the subject's background, and the "data type" information representing the type of data associated with one another.
  • The input device 21 transmits the acquired hearing test information, attribute information, and background information to the acquisition unit 2 of the fitting support device 1 by wired or wireless communication.
  • The input device 21 is, for example, an information processing device such as a personal computer, mobile computer, smartphone, or tablet. A plurality of input devices 21 may also be prepared, and the hearing test information, attribute information, and background information may be input from separate input devices 21.
  • The output device 22 acquires output information converted into an outputtable format by the output information generation unit 24, and outputs images, sounds, and the like generated on the basis of the output information.
  • The output device 22 is, for example, an image display device using liquid crystal, organic EL (Electro Luminescence), or a CRT (Cathode Ray Tube). The image display device may further include an audio output device such as a speaker.
  • The output device 22 may also be a printing device such as a printer.
  • The fitting support device 1 shown in FIG. 2 has, in addition to the acquisition unit 2 and the estimation unit 3, a classification unit 23 and an output information generation unit 24. Further, the estimation unit 3 has a learning model 25.
  • In FIG. 2, the learning model 25 is provided in the fitting support device 1, but it may instead be provided in an information processing device (for example, a computer) or a storage device not shown in the system of FIG. 2.
  • In that case, the estimation unit 3 and the information processing device or storage device are configured to be able to communicate with each other.
  • Next, the fitting support device 1 will be described.
  • The acquisition unit 2 acquires the hearing test information, attribute information, and background information from the input device 21. Specifically, the acquisition unit 2 receives, from the input device 21, the hearing test information representing the result of the hearing test performed on the subject, the attribute information representing the subject's attributes, and the background information representing the subject's background.
  • The classification unit 23 classifies the hearing test information, attribute information, and background information. Specifically, the classification unit 23 executes a clustering process on explanatory variables such as the hearing test information, attribute information, and background information.
  • In the hearing test information, the air-conduction audiogram, bone-conduction audiogram, discomfort threshold, speech intelligibility, and the like are represented by, for example, vector values.
  • In the attribute information, for example, age, height, weight, and the like are represented by numerical values; gender and the like by binary values; occupation, medical history, treatment history, hearing aid usage history, and the like by character strings; place of residence, family structure, and the like by category values; and ear acoustics and the like by vector values.
  • In the background information, living-environment sounds, preferences, and the like are represented by, for example, vector values. The living-environment sound may also be represented by a time-series signal or the like.
  • Adjustment information (for example, adjustment date and time, adjustment place, and technician identifier) may also be used. The adjustment date and time and the technician identifier are represented by numerical values, and the adjustment place by a category value.
  • By executing the clustering process on explanatory variables such as the hearing test information, attribute information, and background information, the classification unit 23 can reduce the amount of computational resources used when estimating the parameter data. That is, it can reduce the amount of computational resources, such as processor time and memory, used for estimating the parameter data.
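One way such mixed-type explanatory variables could be flattened into a single numeric vector before clustering is sketched below; the field names, category tables, and `encode` helper are hypothetical illustrations, not from the publication:

```python
# Hypothetical sketch: encoding one subject's mixed-type explanatory
# variables (numerical, binary, category, vector) into a numeric vector.
record = {
    "age": 72,                      # numerical value
    "gender": "female",             # binary value
    "occupation": "teacher",        # character string -> category index
    "residence": "urban",           # category value
    "audiogram": [25, 40, 55, 70],  # vector value (dB HL per frequency)
}

GENDERS = {"male": 0, "female": 1}
OCCUPATIONS = {"teacher": 0, "farmer": 1, "office worker": 2}
RESIDENCES = {"urban": 0, "suburban": 1, "rural": 2}

def encode(rec):
    """Flatten a mixed-type record into one numeric feature vector."""
    return (
        [float(rec["age"]),
         float(GENDERS[rec["gender"]]),
         float(OCCUPATIONS[rec["occupation"]]),
         float(RESIDENCES[rec["residence"]])]
        + [float(v) for v in rec["audiogram"]]
    )

vec = encode(record)
print(len(vec))  # 8
```

Vectors like `vec` are what a clustering process can then compress into low-dimensional labels, which is the resource saving the text attributes to the classification unit 23.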
  • The estimation unit 3 takes the classified information as input and estimates the parameter data (objective variables) used to adapt the hearing aid to the subject. Specifically, the estimation unit 3 first acquires the classified information from the classification unit 23. Subsequently, the estimation unit 3 inputs the acquired classified information into the learning model 25 and estimates the parameter data.
  • However, the estimation unit 3 does not necessarily have to use the classified information as input; it may use unclassified information instead.
  • The parameter data is data used for adjusting one or more of the sound-collecting unit, the processing unit, and the output unit provided in the hearing aid. Further, the parameter data has, for example, at least one of the hearing aid type, the frequency characteristic value for each output level, the noise suppression strength, the howling suppression strength, the directivity type, and the impact-sound suppression degree.
  • The parameter data associates the "item" information representing the parameter-data item, the "parameter" information representing the estimated parameter data, and the "data type" information representing the type of data with one another.
  • The learning model 25 is a model for estimating parameter data, generated in the learning phase by machine learning with a plurality of hearing test information, attribute information, background information, and parameter data acquired in the past as inputs. The details of the learning model 25 will be described later.
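The "item" / "parameter" / "data type" association described above might be represented as in the following sketch; every item name, value, and data type here is invented for illustration:

```python
# Hypothetical parameter-data records in the "item" / "parameter" /
# "data type" association described in the text. Values are invented.
parameter_data = [
    {"item": "frequency characteristic (60 dB input)",
     "parameter": [20, 25, 30, 28], "data type": "vector"},
    {"item": "noise suppression strength", "parameter": 2, "data type": "numeric"},
    {"item": "howling suppression strength", "parameter": 1, "data type": "numeric"},
    {"item": "directivity type", "parameter": "adaptive", "data type": "category"},
]

items = [p["item"] for p in parameter_data]
print(items)
```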
  • The output information generation unit 24 generates output information used to output the parameter data estimated by the estimation unit 3 to the output device 22. Then, the output information generation unit 24 outputs the generated output information to the output device 22.
  • The fitting support device 1 may also set the parameter data estimated by the estimation unit 3 in the hearing aid. That is, the fitting support device 1 may directly adjust (fit) the hearing aid using the estimated parameter data, without the intervention of a technician.
  • As described above, the fitting support device 1 can estimate parameter data by taking into account not only the hearing test information but also the attribute information, background information, and the like. Therefore, the technician can use the estimated parameter data to perform appropriate fitting for the subject.
  • Further, the fitting support device 1 can reduce the time required for fitting to approach the optimum parameter data. That is, since the information previously obtained by the comparative selection method can be utilized through the learning model 25, the time required for fitting can be shortened.
  • FIG. 7 is a diagram showing an example of a system having a learning device.
  • The learning device 31 shown in FIG. 7 is a device that generates the learning model 25 by machine learning.
  • The system having the learning device 31 shown in FIG. 7 has a storage device 32 in addition to the learning device 31. The learning device 31 has an acquisition unit 33, a classification unit 34, a classification unit 35, and a generation unit 36.
  • The storage device 32 stores a plurality of explanatory variables (hearing test information, attribute information, and background information) and objective variables (parameter data) acquired in the past.
  • The plurality of hearing test information acquired in the past is information representing the results of hearing tests performed in the past on each of a plurality of hearing aid users.
  • The plurality of attribute information acquired in the past is attribute information acquired in the past for each of a plurality of hearing aid users.
  • The plurality of background information acquired in the past is background information acquired in the past for each of a plurality of hearing aid users.
  • The parameter data acquired in the past is parameter data used for adjusting hearing aids in fittings performed in the past on a plurality of hearing aid users by highly skilled technicians. It may also be parameter data used for adjusting hearing aids in fittings performed in the past on a plurality of hearing aid users using the fitting support device 1.
  • The acquisition unit 33 acquires the learning data acquired in the past. Specifically, the acquisition unit 33 acquires, from the storage device 32, learning data such as a plurality of hearing test information, attribute information, background information, and parameter data acquired in the past, and transmits the acquired learning data to the classification unit 34.
  • The classification unit 34 classifies the received learning data. Specifically, the classification unit 34 first compares a score (satisfaction level) indicating whether the user was satisfied with the past fitting against a preset threshold value. Subsequently, the classification unit 34 selects the learning data associated with scores equal to or higher than the threshold value.
  • The classification unit 35 further classifies the learning data classified by the classification unit 34. Specifically, the classification unit 35 executes a clustering process on the learning data classified by the classification unit 34. In the clustering process, high-dimensional information is converted into low-dimensional information. The reason is that explanatory variables that can be expressed as sentences or time-series signals become variables with enormous numerical and categorical dimensions, and if learning is performed on them as-is, the amount of computational resources increases. Therefore, it is conceivable to convert character-string data into numerical vectors by applying, for example, per-word presence evaluation or TF-IDF, and then to convert this vectorized high-dimensional information into low-dimensional information using, for example, k-means clustering or deep learning.
  • For example, high-dimensional information such as character strings and time-series signals is converted into vector values by a deep neural network.
  • Subsequently, the converted vector values are converted into low-dimensional information (for example, labels) using the k-nearest-neighbor method.
  • Labels can also be created by unsupervised learning using a clustering method such as the k-means method.
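The reduction of string data to numeric vectors and then to low-dimensional labels can be sketched as follows. For brevity, a plain word-presence encoding stands in for TF-IDF, the `kmeans` helper is a minimal hand-rolled k-means, and all records are invented:

```python
# Hypothetical sketch: character-string explanatory variables -> word-presence
# vectors -> one low-dimensional cluster label per record via k-means.
import numpy as np

histories = [  # invented "medical history" strings
    "otitis media treated in childhood",
    "otitis media treated recently",
    "no relevant medical history",
    "no relevant history reported",
]

# Word-presence vectorization (a simplification of the TF-IDF the text mentions).
vocab = sorted({w for h in histories for w in h.split()})
X = np.array([[1.0 if w in h.split() else 0.0 for w in vocab] for h in histories])

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: assign to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X, k=2)
# The two "otitis" records share one label; the two "no ..." records the other.
print(labels)
```

Each record is thereby reduced from a high-dimensional vector to a single label, which is the kind of dimensionality reduction the classification units 23 and 35 perform.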
  • In this way, the learning model 25 can be trained using only learning data with a high degree of satisfaction, so the parameter data estimated using the learning model 25 yields higher user satisfaction. Further, since the learning data can be reduced, the amount of computational resources can be reduced, for example by shortening the learning time and reducing memory usage.
  • By executing the clustering process on the learning data classified by the classification unit 34, the classification unit 35 can further reduce the amount of computational resources required for learning.
  • The generation unit 36 performs machine learning using the learning data classified by the classification unit 35 to generate the learning model 25, and stores the generated learning model 25 in the estimation unit 3.
  • The generation unit 36 may also store the learning model 25 in the system having the fitting support device 1, the system having the learning device 31, or a storage device outside these systems.
  • The machine learning may be, for example, supervised learning or semi-supervised learning, such as regression (for example, the least-squares method or random forest regression) or multi-class classification (for example, decision-tree algorithms).
  • As the input, the learning data classified by the classification unit 35 may be used, the learning data classified by the classification unit 34 may be used, or unclassified learning data may be used.
  • As the learning data used to generate the learning model 25, hearing test information, attribute information, background information, and parameter data for which the fitting was determined to be suitable in the past may be used.
  • As described above, the learning device 31 can generate the learning model 25 by taking into account not only the hearing test information acquired in the past but also the attribute information, background information, parameter data, and the like acquired in the past.
  • Further, the learning model 25 can be generated with the results of fittings performed by highly skilled technicians as inputs.
  • Therefore, when adapting a hearing aid to a subject, the technician can perform appropriate fitting by using the parameter data estimated with the generated learning model 25.
  • Further, since parameter data close to the optimum parameter data can be estimated, the time required for fitting can be shortened. That is, since the information conventionally obtained by the comparative selection method can be obtained using the learning model 25, the time required for fitting can be shortened.
  • FIG. 8 is a diagram showing an example of the operation of the fitting support device.
  • FIGS. 2 to 6 will be referred to as appropriate.
  • The fitting support method is implemented by operating the fitting support device. Therefore, the description of the fitting support method in the present embodiment is replaced by the following description of the operation of the fitting support device.
  • FIG. 9 is a diagram showing an example of the operation of the learning device.
  • FIG. 7 will be referred to as appropriate.
  • The learning method is implemented by operating the learning device. Therefore, the description of the learning method in the present embodiment is replaced by the following description of the operation of the learning device.
  • First, in the operation phase, the acquisition unit 2 acquires the subject's hearing test information, attribute information, and background information (explanatory variables) from the input device 21 (step A1). Specifically, in step A1, the acquisition unit 2 receives, from the input device 21, the hearing test information representing the result of the hearing test performed on the subject, the attribute information representing the subject's attributes, and the background information representing the subject's background.
  • Next, the classification unit 23 classifies the hearing test information, attribute information, and background information (step A2). Specifically, in step A2, the classification unit 23 executes a clustering process on explanatory variables such as the hearing test information, attribute information, and background information. In the clustering process, high-dimensional information is converted into low-dimensional information.
  • By executing the clustering process on explanatory variables such as the hearing test information, attribute information, and background information, the classification unit 23 can reduce the amount of computational resources used when estimating the parameter data.
  • The process of step A2 may be omitted.
  • Next, the estimation unit 3 takes the information classified in step A2 as input and estimates the parameter data (objective variables) used to adapt the hearing aid to the subject (step A3). Specifically, in step A3, the estimation unit 3 first acquires the classified information from the classification unit 23. Subsequently, in step A3, the estimation unit 3 inputs the acquired classified information into the learning model 25 and estimates the parameter data.
  • However, the estimation unit 3 does not necessarily have to use the classified information as input; it may use unclassified information instead.
  • Next, the output information generation unit 24 generates output information used to output the parameter data estimated by the estimation unit 3 to the output device 22 (step A4). Then, the output information generation unit 24 outputs the generated output information to the output device 22 (step A5).
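Steps A1 to A5 above can be sketched as a simple pipeline. Every function here is a placeholder stand-in for the corresponding unit, and all names and data are hypothetical:

```python
# Hypothetical sketch of operation steps A1-A5 as a pipeline of stand-ins.
def acquire(input_device):                      # step A1 (acquisition unit 2)
    return (input_device["hearing_test"],
            input_device["attributes"],
            input_device["background"])

def classify(hearing_test, attributes, background):  # step A2 (classification unit 23)
    return {"features": [*hearing_test, *attributes, *background]}

def estimate(classified):                       # step A3 (estimation unit 3 + model 25)
    feats = classified["features"]
    return [sum(feats) / len(feats)]            # placeholder "learning model"

def generate_output(params):                    # step A4 (output info generation unit 24)
    return f"estimated parameter data: {params}"

input_device = {"hearing_test": [25, 40], "attributes": [72], "background": [1]}
output_info = generate_output(estimate(classify(*acquire(input_device))))
print(output_info)                              # step A5 (send to output device 22)
```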
  • First, the acquisition unit 33 acquires the learning data acquired in the past (step B1). Specifically, in step B1, the acquisition unit 33 acquires, from the storage device 32, learning data such as a plurality of hearing test information, attribute information, background information, and parameter data acquired in the past, and transmits the acquired learning data to the classification unit 34.
  • Next, the classification unit 34 classifies the received learning data (step B2). Specifically, in step B2, the classification unit 34 first compares a score (satisfaction level) indicating whether the user was satisfied with the past fitting against a preset threshold value. Subsequently, in step B2, the classification unit 34 selects the learning data associated with scores equal to or higher than the threshold value.
  • In this way, the classification unit 34 allows the learning model 25 to be trained using only learning data with a high degree of satisfaction, so the parameter data estimated using the learning model 25 yields higher user satisfaction.
  • Next, the classification unit 35 further classifies the learning data classified in step B2 (step B3). Specifically, in step B3, the classification unit 35 executes a clustering process on the learning data classified in step B2. In the clustering process, high-dimensional information is converted into low-dimensional information.
  • Further, since the classification unit 35 can reduce the learning data, the amount of computational resources can be reduced, for example by shortening the learning time and reducing memory usage.
  • Next, the generation unit 36 performs machine learning using the learning data classified by the classification unit 35 to generate the learning model 25 (step B4). Subsequently, the generation unit 36 stores the generated learning model 25 in the estimation unit 3 (step B5).
  • the generation unit 36 may store the learning model 25 in the system having the fitting support device 1, in the system having the learning device 31, or in a storage device outside these systems.
  • possible machine learning approaches include, for example, supervised learning and semi-supervised learning, using methods such as regression (regression techniques such as the least squares method and random forests) and multi-class classification (algorithms such as decision trees).
  • the generation unit 36 may use as input the learning data classified by the classification unit 34 in step B2, the learning data classified by the classification unit 35 in step B3, or learning data that has not been classified.
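As a deliberately simple stand-in for the learning model 25, the sketch below estimates parameter data for a new subject by averaging the parameter data of the most similar past fittings. The feature layout (hearing levels plus age), the value of k, and the field names are illustrative assumptions; the disclosure's actual model may instead be a random forest or other regressor as noted above.

```python
def estimate_parameters(query, training, k=2):
    """Estimate parameter data for a new subject from the k most similar
    past fittings (a simple nearest-neighbour stand-in for learning model 25)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training, key=lambda rec: dist(query, rec["features"]))[:k]
    n_params = len(nearest[0]["params"])
    return [sum(rec["params"][i] for rec in nearest) / k for i in range(n_params)]

# Past fittings: features combine hearing-test, attribute and background values
training = [
    {"features": [40, 45, 72], "params": [12.0, 8.0]},
    {"features": [42, 44, 70], "params": [12.0, 9.0]},
    {"features": [20, 25, 35], "params": [4.0, 2.0]},
]
estimated = estimate_parameters([41, 45, 71], training)
print(estimated)  # close to the two similar past subjects: [12.0, 8.5]
```

The point of the sketch is the data flow: hearing test, attribute, and background values together form the input, and the estimated parameter data is the output handed to the technician.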
  • as the learning data used to generate the learning model 25, hearing test information, attribute information, background information, and parameter data that were determined in the past to be a good fit may be used.
  • the fitting support device 1 can estimate parameter data by taking into account not only the hearing test information but also the attribute information, background information, and the like, so that the technician can use the estimated parameter data to perform appropriate fitting for the subject.
  • the fitting support device 1 can reduce the time required for the fitting to approach the optimum parameter data. That is, since the information previously obtained through the comparative selection method is made available through the learning model 25, the time required for fitting can be shortened.
  • the fitting support device 1 may set the parameter data estimated by the estimation unit 3 in the hearing aid. That is, the fitting support device 1 may directly adjust (fitting) the hearing aid using the estimated parameter data without the intervention of a technician.
  • the learning device 31 can generate the learning model 25 by taking into account not only the hearing test information acquired in the past but also the attribute information, background information, parameter data, etc. acquired in the past.
  • the learning model 25 can be generated by inputting the result of fitting performed by a technician with a high skill level.
  • when adapting the hearing aid to the subject, the technician can perform appropriate fitting using the parameter data estimated by the generated learning model 25.
  • parameter data close to the optimum parameter data can be estimated, so that the time required for fitting can be shortened. That is, since the information conventionally obtained by the comparative selection method can be obtained by using the learning model 25, the time required for fitting can be shortened.
  • the program for estimating the parameter data in the embodiment of the present invention may be any program that causes a computer to execute steps A1 to A5 shown in FIG. By installing this program on a computer and executing it, the fitting support device and the fitting support method according to the present embodiment can be realized.
  • the computer processor functions as an acquisition unit 2, a classification unit 23, an estimation unit 3, and an output information generation unit 24, and performs processing.
  • the program for estimating the parameter data in the present embodiment may be executed by a computer system constructed by a plurality of computers.
  • each computer may function as any of the acquisition unit 2, the classification unit 23, the estimation unit 3, and the output information generation unit 24, respectively.
  • the program for generating the learning model according to the embodiment of the present invention may be a program that causes a computer to execute steps B1 to B5 shown in FIG. By installing this program on a computer and executing it, the learning device and the learning method according to the present embodiment can be realized.
  • the computer processor functions as an acquisition unit 33, a classification unit 34, a classification unit 35, and a generation unit 36, and performs processing.
  • the program for generating the learning model in the present embodiment may be executed by a computer system constructed by a plurality of computers.
  • each computer may function as any of the acquisition unit 33, the classification unit 34, the classification unit 35, and the generation unit 36, respectively.
  • FIG. 10 is a block diagram showing an example of a computer that realizes the fitting support device or the learning device according to the embodiment of the present invention.
  • the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader / writer 116, and a communication interface 117. Each of these parts is connected to the others via a bus 121 so as to be capable of data communication.
  • the computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to the CPU 111 or in place of the CPU 111.
  • the CPU 111 expands the programs (codes) of the present embodiment stored in the storage device 113 into the main memory 112 and executes them in a predetermined order to perform various operations.
  • the main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • the program according to the present embodiment is provided in a state of being stored in a computer-readable recording medium 120.
  • the program in the present embodiment may be distributed over the Internet via the communication interface 117.
  • examples of the storage device 113 include a hard disk drive and semiconductor storage devices such as flash memory.
  • the input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and mouse.
  • the display controller 115 is connected to the display device 119 and controls the display on the display device 119.
  • the data reader / writer 116 mediates the data transmission between the CPU 111 and the recording medium 120, reads the program from the recording medium 120, and writes the processing result in the computer 110 to the recording medium 120.
  • the communication interface 117 mediates data transmission between the CPU 111 and another computer.
  • examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (CompactFlash (registered trademark)) and SD (Secure Digital), magnetic recording media such as flexible disks, and optical recording media such as CD-ROM (Compact Disk Read Only Memory).
  • the fitting support device 1 or the learning device 31 in the present embodiment can also be realized by using hardware corresponding to each part, instead of a computer on which the program is installed. Further, part of the fitting support device 1 or the learning device 31 may be realized by a program and the remainder by hardware.
  • An acquisition unit that acquires hearing test information representing the results of a hearing test performed on a target person, attribute information representing the attributes of the target person, and background information representing the background of the target person.
  • An estimation unit that inputs the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting the hearing aid to the subject.
  • a fitting support device characterized by having these units.
  • the fitting support device according to Appendix 1.
  • the estimation unit has a learning model for estimating parameter data generated by machine learning by inputting a plurality of hearing test information, parameter data, attribute information, and background information acquired in the past.
  • in the fitting support device described in Appendix 2, the input used to generate the learning model uses the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to be a good fit.
  • the fitting support device according to any one of Appendix 1 to 3.
  • the attribute information includes at least one of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one of the subject's living environment sounds and preferences.
  • (b) a step of inputting the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting the hearing aid to the subject.
  • in the fitting support method described in Appendix 6, the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to be a good fit are used.
  • the fitting support method according to any one of Appendix 5 to 7.
  • the attribute information includes at least one of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one of the subject's living environment sounds and preferences.
  • a fitting support method characterized by having any one or more of these pieces of information.
  • (b) a step of inputting the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting the hearing aid to the subject.
  • a computer-readable recording medium that records a program, including instructions to execute.
  • Appendix 11 The computer-readable recording medium according to Appendix 10.
  • a computer-readable recording medium in which the input used to generate the learning model uses the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to be a good fit.
  • Appendix 12 The computer-readable recording medium according to any one of Appendix 9 to 11.
  • the attribute information includes at least one of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one of the subject's living environment sounds and preferences.
  • a computer-readable recording medium characterized by having any one or more of these pieces of information.
  • the accuracy of fitting can be improved.
  • the time required for fitting can be shortened.
  • INDUSTRIAL APPLICABILITY The present invention is useful in fields where fitting is required for a wearing device such as a hearing aid.

Abstract

This fitting assistance device 1 is provided with an acquisition unit 2 which acquires hearing test information indicating the results of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background, and an estimation unit 3 which inputs the acquired hearing test information, attribute information and background information to estimate parameter data used to fit a hearing aid for the subject.

Description

Fitting support device, fitting support method, and computer-readable recording medium
The present invention relates to a fitting support device and a fitting support method, and further relates to a computer-readable recording medium storing a program for realizing these.
When a hearing aid is used, adjustment (fitting) tailored to the subject is necessary. When performing a fitting, the technician takes into account the subject's hearing ability, ear structure, living environment sounds, and the like.
However, depending on the technician's skill level, appropriate fitting may not be performed for the subject. Systems that support fitting have therefore been proposed.
As a related technique, Patent Document 1 discloses a fitting device that automatically fits a hearing aid. The fitting device disclosed in Patent Document 1 has the subject listen to environmental sounds prepared in advance, has the subject evaluate how the auditioned sounds are heard, and performs the fitting by repeatedly adjusting the hearing aid until the subject is satisfied.
Japanese Unexamined Patent Publication No. 2001-008295
However, although the fitting device disclosed in Patent Document 1 can perform a fitting so that noise contained in environmental sounds is not perceived as bothersome, it is difficult for it to carry out the kind of fitting performed by a highly skilled technician.
In addition, when fitting a wearing device, there is a wide variety of subjects, so the characteristics of each subject must be taken into account. It is therefore difficult for a less skilled technician to perform an appropriate fitting for the subject.
An example of an object of the present invention is to provide a fitting support device, a fitting support method, and a computer-readable recording medium that improve the accuracy of fitting.
In order to achieve the above object, a fitting support device in one aspect of the present invention includes:
an acquisition means for acquiring hearing test information representing the results of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background; and
an estimation means for estimating parameter data used for fitting a hearing aid to the subject by inputting the acquired hearing test information, the attribute information, and the background information.
Further, in order to achieve the above object, a fitting support method in one aspect of the present invention:
(a) acquires hearing test information representing the results of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background; and
(b) inputs the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting a hearing aid to the subject.
Further, in order to achieve the above object, a computer-readable recording medium in one aspect of the present invention records a program including instructions that cause a computer to execute:
(a) a step of acquiring hearing test information representing the results of a hearing test performed on a subject, attribute information representing the subject's attributes, and background information representing the subject's background; and
(b) a step of inputting the acquired hearing test information, the attribute information, and the background information to estimate parameter data used for fitting a hearing aid to the subject.
As described above, according to the present invention, the accuracy of fitting can be improved.
FIG. 1 is a diagram showing an example of a fitting support device.
FIG. 2 is a diagram showing an example of a system having a fitting support device.
FIG. 3 is a diagram showing an example of the data structure of hearing test information.
FIG. 4 is a diagram showing an example of the data structure of attribute information.
FIG. 5 is a diagram showing an example of the data structure of background information.
FIG. 6 is a diagram showing an example of the data structure of estimated parameter data.
FIG. 7 is a diagram showing an example of a system having a learning device.
FIG. 8 is a diagram showing an example of the operation of the fitting support device.
FIG. 9 is a diagram showing an example of the operation of the learning device.
FIG. 10 is a diagram showing an example of a computer that realizes the fitting support device.
(Embodiment)
Hereinafter, embodiments of the present invention will be described with reference to FIGS. 1 to 10.
[Device configuration]
First, the configuration of the fitting support device 1 according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a diagram showing an example of a fitting support device.
The fitting support device shown in FIG. 1 is a device that improves the accuracy of the adjustment (fitting) that adapts a hearing aid to a subject. As shown in FIG. 1, the fitting support device 1 has an acquisition unit 2 and an estimation unit 3.
Of these, the acquisition unit 2 acquires hearing test information representing the results of a hearing test performed on the subject, attribute information representing the subject's attributes, and background information representing the subject's background. The estimation unit 3 inputs the acquired hearing test information, attribute information, and background information to estimate parameter data used for fitting the hearing aid to the subject.
A hearing aid is, for example, a device that collects sound with a sound collecting unit (microphone), amplifies and processes the collected sound with a processing unit, and outputs the amplified and processed sound with an output unit (receiver).
The hearing test information representing the results of the hearing test performed on the subject includes, for example, at least one of an air conduction audiogram, a bone conduction audiogram, a discomfort threshold, and speech intelligibility.
The attribute information representing the subject's attributes includes, for example, at least one of age, gender, occupation, place of residence, family structure, medical history, treatment history, hearing aid usage history, and physical characteristics (for example, height, weight, and ear acoustics). Ear acoustics is information representing the acoustic characteristics of the ear. The attribute information may also include information indicating the type of hearing aid.
The background information representing the subject's background includes, for example, at least one of the subject's living environment sounds and preferences. Living environment sounds are information representing the sounds heard in the subject's daily life. Preferences are information representing the subject's preferences regarding sound.
The estimation unit 3 also has a learning model for estimating parameter data, generated using machine learning with a plurality of hearing test information, attribute information, background information, and parameter data acquired in the past as input.
The parameter data is data used for adjusting the sound collecting unit, processing unit, and output unit of the target hearing aid. The parameter data includes, for example, parameters used for adjusting the frequency characteristic values for each output level, the noise suppression strength, the howling suppression strength, the directivity type, and the impact sound suppression level.
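The parameter data enumerated above can be represented, for example, as the following record. The field names, value ranges, and types are illustrative assumptions, not the disclosure's actual format (compare FIG. 6).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FittingParameters:
    """Illustrative container for the parameter data used to adjust the
    hearing aid's sound collecting, processing and output units."""
    # gain (dB) per frequency band, one list per output level (assumed layout)
    frequency_response: List[List[float]] = field(default_factory=list)
    noise_suppression: int = 0         # noise suppression strength (e.g. 0-3)
    howling_suppression: int = 0       # howling suppression strength
    directivity: str = "omni"          # directivity type
    impact_sound_suppression: int = 0  # impact sound suppression level

params = FittingParameters(
    frequency_response=[[10.0, 15.0, 20.0], [5.0, 10.0, 15.0]],
    noise_suppression=2,
    directivity="adaptive",
)
print(params.directivity)
```

A record of this shape is what the estimation unit would output and what a technician (or the device itself) would write into the hearing aid.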
Possible machine learning approaches include supervised learning and semi-supervised learning, using, for example, regression (regression techniques such as the least squares method and random forests) or multi-class classification (algorithms such as decision trees). Machine learning is not limited to those mentioned above; other learning methods may also be used.
As described above, in the present embodiment, the fitting support device 1 can estimate parameter data by taking into account not only the hearing test information but also the attribute information, background information, and the like, so that an appropriate fitting can be performed for the subject. The fitting support device 1 can also reduce the time required for fitting.
The reason is as follows. Conventionally, the technician first performs a fitting using parameter data obtained from the hearing test information by a prescriptive selection method. However, the hearing test information alone is usually insufficient to determine parameter data suited to the subject, so the parameter data must then be brought closer to the optimum by a comparative selection method.
In the comparative selection method, to approach the parameter data optimal for the subject, the technician converses with the subject and plays sample sound sources, observing the subject's reactions while bringing the parameter data closer to the optimum. With such a fitting, however, it takes a long time to approach the optimum parameter data.
Therefore, in the present embodiment, the amount of information that would conventionally be obtained by the comparative selection method is supplemented in advance by utilizing the subject's attribute information and background information, and the learning model makes it possible to use this information directly in determining the parameter data. As a result, the time previously required to approach the optimum parameter data can be shortened.
[System configuration]
Next, the configuration of the fitting support device 1 according to the present embodiment will be described more specifically with reference to FIGS. 2 to 6. FIG. 2 is a diagram showing an example of a system having a fitting support device. FIG. 3 is a diagram showing an example of the data structure of hearing test information. FIG. 4 is a diagram showing an example of the data structure of attribute information. FIG. 5 is a diagram showing an example of the data structure of background information. FIG. 6 is a diagram showing an example of the data structure of estimated parameter data.
As shown in FIG. 2, the system having the fitting support device 1 in the present embodiment includes an input device 21 and an output device 22 in addition to the fitting support device 1.
The system will now be described.
The input device 21 is a device used to input the hearing test information, attribute information, and background information to the fitting support device 1. Specifically, the input device 21 first acquires the subject's hearing test information, attribute information, and background information from information processing devices (for example, computers) and storage devices provided at hearing aid dealers, manufacturers, related facilities, and the like.
In the example of FIG. 3, the hearing test information associates "item" information representing the hearing test item, "hearing test result" information representing the result of the test, and "data type" information representing the type of data, and is stored in a storage device.
In the example of FIG. 4, the attribute information associates "item" information representing the attribute item, "attribute" information representing the subject's attribute, and "data type" information representing the type of data, and is stored in a storage device.
In the example of FIG. 5, the background information associates "item" information representing the background item, "background" information representing the subject's background, and "data type" information representing the type of data, and is stored in a storage device.
The input device 21 then transmits the acquired hearing test information, attribute information, and background information to the acquisition unit 2 of the fitting support device 1 using wired or wireless communication.
The input device 21 is, for example, an information processing device such as a personal computer, mobile computer, smartphone, or tablet. A plurality of input devices 21 may also be prepared, and the hearing test information, attribute information, and background information may be input from separate input devices 21.
The output device 22 acquires output information converted into an outputtable format by the output information generation unit 24, and outputs images, sounds, and the like generated on the basis of that output information. The output device 22 is, for example, an image display device using liquid crystal, organic EL (Electro Luminescence), or a CRT (Cathode Ray Tube). The image display device may further include an audio output device such as a speaker. The output device 22 may also be a printing device such as a printer.
Next, the fitting support device 1 shown in FIG. 2 has a classification unit 23 and an output information generation unit 24 in addition to the acquisition unit 2 and the estimation unit 3. The estimation unit 3 further has a learning model 25.
The system is not limited to the configuration shown in FIG. 2. For example, in FIG. 2 the learning model 25 is provided in the fitting support device 1, but it may instead be provided in an information processing device (for example, a computer) or a storage device not shown in the system of FIG. 2. In that case, the estimation unit 3 and the information processing device or storage device are configured to be able to communicate with each other.
The fitting support device 1 will now be described.
The acquisition unit 2 acquires the hearing test information, attribute information, and background information from the input device 21. Specifically, the acquisition unit 2 receives the hearing test information representing the results of the hearing test performed on the subject, the attribute information representing the subject's attributes, and the background information representing the subject's background, all transmitted from the input device 21.
The classification unit 23 classifies the hearing test information, attribute information, and background information. Specifically, the classification unit 23 executes a clustering process on explanatory variables such as the hearing test information, attribute information, and background information.
The clustering processing converts high-dimensional information into low-dimensional information. The reason is that explanatory variables expressed as text or time-series signals become variables with an enormous number of numeric and categorical dimensions, and estimating parameter data from them as-is increases the amount of computational resources required. Therefore, for example, character-string data is converted into numeric vectors by evaluating the presence or absence of each word or by applying TF-IDF (Term Frequency - Inverse Document Frequency), and the vectorized data is then converted from high-dimensional information into low-dimensional information using k-means clustering, deep learning, or the like.
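The vectorize-then-cluster step described above can be sketched as follows. The patent does not name an implementation; this sketch assumes scikit-learn, and the text fields and cluster count are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical free-text explanatory variables (e.g., occupation /
# usage-history fields for four subjects).
texts = [
    "teacher, long classroom use, mild tinnitus",
    "factory worker, noisy production floor",
    "retired, quiet home, television listening",
    "teacher, lecture halls, mild tinnitus",
]

# Step 1: evaluate word occurrences with TF-IDF to obtain numeric vectors
# (a high-dimensional sparse matrix, one row per subject).
vectors = TfidfVectorizer().fit_transform(texts)

# Step 2: reduce the high-dimensional vectors to low-dimensional
# information (a single cluster label per subject) with k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

print(labels)  # one cluster id (0 or 1) per subject
```

The downstream estimation then consumes the small cluster label instead of the full sparse vector, which is the computational-resource saving the text describes.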
In the hearing test information, the air-conduction audiogram, the bone-conduction audiogram, the discomfort threshold, the speech intelligibility, and the like are represented by, for example, vector values. In the attribute information, for example, age, height, and weight are represented by numeric values; sex is represented by a binary value; occupation, medical history, treatment history, and hearing aid usage history are represented by character strings; place of residence and family composition are represented by categorical values; and otoacoustic measurements are represented by vector values. In the background information, for example, everyday environmental sounds and personal preferences are represented by vector values. The everyday environmental sounds may also be represented by time-series signals or the like.
Furthermore, adjustment information (for example, the adjustment date and time, the adjustment location, and a technician identifier) may be used as explanatory variables. For example, the adjustment date and time and the technician identifier are represented by numeric values, and the adjustment location is represented by a categorical value.
In this way, by executing clustering processing on explanatory variables such as the hearing test information, the attribute information, and the background information, the classification unit 23 can reduce the amount of computational resources required when estimating parameter data. That is, the amount of computational resources, such as processor time and memory, used to estimate the parameter data can be reduced.
In the operation phase, the estimation unit 3 takes the classified information as input and estimates the parameter data (objective variables) used to fit the hearing aid to the subject. Specifically, the estimation unit 3 first acquires the classified information from the classification unit 23. The estimation unit 3 then inputs the acquired classified information into the learning model 25 and estimates the parameter data.
However, the estimation unit 3 does not necessarily have to use classified information as its input; it may use unclassified information instead.
The parameter data is data used to adjust one or more of the sound collecting unit, the processing unit, and the output unit provided in the hearing aid. The parameter data is, for example, data having at least one of the hearing aid type, the frequency characteristic values for each output level, the noise suppression strength, the howling suppression strength, the directivity type, and the impact sound suppression level.
In the example of FIG. 6, each piece of parameter data associates "item" information representing the parameter item, "parameter" information representing the estimated parameter data, and "data type" information representing the type of the data.
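A minimal sketch of a parameter-data record matching the "item" / "parameter" / "data type" association described for FIG. 6. Since FIG. 6 itself is not reproduced here, every field name and example value below is hypothetical.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ParameterEntry:
    item: str        # "item": which hearing-aid setting this row describes
    parameter: Any   # "parameter": the estimated value for that setting
    data_type: str   # "data type": e.g. numeric, category, vector

# Hypothetical estimation result; item names and values are invented.
estimated = [
    ParameterEntry("noise suppression strength", 3, "numeric"),
    ParameterEntry("directivity type", "adaptive", "category"),
    ParameterEntry("frequency response", [10.0, 12.5, 15.0], "vector"),
]

for entry in estimated:
    print(entry.item, entry.parameter, entry.data_type)
```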
The learning model 25 is a model for estimating parameter data, generated in the learning phase by machine learning from a plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data. Details of the learning model 25 are described later.
The output information generation unit 24 generates output information used to output the parameter data estimated by the estimation unit 3 to the output device 22, and then outputs the generated output information to the output device 22.
The fitting support device 1 may also set the parameter data estimated by the estimation unit 3 directly in the hearing aid. That is, the fitting support device 1 may use the estimated parameter data to adjust (fit) the hearing aid directly, without the intervention of a technician.
In this way, the fitting support device 1 can estimate parameter data by taking into account not only the hearing test information but also the attribute information, the background information, and so on, so a technician can use the estimated parameter data to perform an appropriate fitting for the subject.
The fitting support device 1 can also shorten the fitting time previously required to approach the optimum parameter data. That is, because the information conventionally exploited through the comparison-selection method can now be exploited through the learning model 25, the time required for fitting can be shortened.
Next, the generation of the learning model will be described.
FIG. 7 is a diagram showing an example of a system having a learning device. The learning device 31 shown in FIG. 7 is a device that generates the learning model 25 using machine learning.
The system having the learning device 31 shown in FIG. 7 includes a storage device 32 in addition to the learning device 31. The learning device 31 includes an acquisition unit 33, a classification unit 34, a classification unit 35, and a generation unit 36.
The storage device 32 stores a plurality of explanatory variables (hearing test information, attribute information, and background information) acquired in the past, together with the corresponding objective variables (parameter data).
The plurality of previously acquired sets of hearing test information represent the results of hearing tests performed in the past for each of a plurality of hearing aid users. Likewise, the plurality of previously acquired sets of attribute information and of background information are the attribute information and background information acquired in the past for each of those users.
The previously acquired parameter data is the parameter data used to adjust hearing aids in fittings that highly skilled technicians performed in the past for a plurality of hearing aid users. It also includes the parameter data used to adjust hearing aids in past fittings performed for a plurality of hearing aid users using the fitting support device 1.
The learning device will be described.
The acquisition unit 33 acquires learning data obtained in the past. Specifically, the acquisition unit 33 acquires, from the storage device 32, learning data such as a plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data, and sends the acquired learning data to the classification unit 34.
The classification unit 34 classifies the received learning data. Specifically, the classification unit 34 first compares a score (satisfaction level) indicating whether the user was satisfied with a past fitting against a preset threshold. When a score is at or above the threshold, the classification unit 34 selects the learning data associated with that score.
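The threshold-based selection performed by the classification unit 34 can be sketched as follows; the record layout, score values, and threshold are invented for illustration.

```python
# Each record pairs past fitting data (explanatory variables plus the
# parameter data that was used) with the user's satisfaction score.
records = [
    {"score": 0.9, "data": "fitting A"},
    {"score": 0.4, "data": "fitting B"},
    {"score": 0.8, "data": "fitting C"},
]

THRESHOLD = 0.7  # the preset threshold from the text (value illustrative)

# Keep only the learning data whose score is at or above the threshold.
training_set = [r for r in records if r["score"] >= THRESHOLD]
print([r["data"] for r in training_set])  # → ['fitting A', 'fitting C']
```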
The classification unit 35 further classifies the learning data classified by the classification unit 34. Specifically, the classification unit 35 executes clustering processing on that learning data. The clustering processing converts high-dimensional information into low-dimensional information. The reason is that explanatory variables expressed as text or time-series signals become variables with an enormous number of numeric and categorical dimensions, and training on them as-is increases the amount of computational resources required. Therefore, character-string data is converted into numeric vectors by evaluating the presence or absence of each word or by applying TF-IDF, and the vectorized data is then converted from high-dimensional information into low-dimensional information using, for example, k-means clustering or deep learning.
As one example, high-dimensional information (for example, a character string or a time-series signal) is first input into a deep neural network and converted into a vector value. The converted vector value is then converted into low-dimensional information (for example, a label) using the k-nearest-neighbor method.
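A toy sketch of this two-stage conversion, assuming scikit-learn. The `embed` function below is only a stand-in for the trained deep neural network (which the patent does not specify), and the reference vectors and labels are invented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def embed(signal):
    # Stand-in for the deep-neural-network stage: a trained network would
    # map a character string or time-series signal to a vector value.
    signal = np.asarray(signal, dtype=float)
    return np.array([signal.mean(), signal.std()])  # toy 2-D "embedding"

# Reference vectors whose low-dimensional labels are already known.
reference_vectors = np.array([[0.0, 1.0], [5.0, 1.0], [10.0, 1.0]])
reference_labels = ["quiet", "moderate", "noisy"]

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(reference_vectors, reference_labels)

# Convert a new high-dimensional signal into a label via its embedding.
label = knn.predict([embed([4.0, 5.0, 6.0])])[0]
print(label)  # → moderate
```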
In addition to performing classification by supervised learning with labels created from domain knowledge, labels can also be created by unsupervised learning, using a clustering method such as k-means, even when no labels have been assigned in advance.
In this way, the classification unit 34 allows the learning model 25 to be trained using only learning data with high satisfaction levels, so the parameter data estimated with the learning model 25 yields higher user satisfaction. Furthermore, because the amount of learning data can be reduced, computational resources can be saved, for example by shortening training time and reducing memory usage.
In addition, by executing clustering processing on the learning data classified by the classification unit 34, the classification unit 35 can further reduce the amount of computational resources required for training.
The generation unit 36 performs machine learning using the learning data classified by the classification unit 35 to generate the learning model 25, and stores the generated learning model 25 in the estimation unit 3. Alternatively, the generation unit 36 may store the learning model 25 in the system having the fitting support device 1, in the system having the learning device 31, or in some other storage device.
The machine learning may be, for example, supervised learning or semi-supervised learning. For example, learning such as regression (least squares, random forests, and other regression methods) or multiclass classification (algorithms such as decision trees) can be used. The machine learning is not limited to these methods, and other methods may be used.
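One of the options named above, random forest regression, can be sketched as follows, assuming scikit-learn. The feature layout, target values, and hyperparameters are invented; a real model would be trained on the classified learning data described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy learning data: each row is a classified explanatory-variable vector
# [average hearing level, age bracket, environment-noise cluster]; the
# target is one parameter value (e.g. noise suppression strength).
# All values are invented.
X = np.array([[40, 6, 0], [55, 7, 1], [70, 8, 2], [45, 6, 0], [65, 8, 2]])
y = np.array([1.0, 2.0, 3.0, 1.0, 3.0])

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)  # learning phase: generate the model from past data

# Operation phase: estimate the parameter for a new subject.
estimate = model.predict(np.array([[60, 7, 1]]))[0]
print(round(estimate, 2))
```

In practice one regressor (or classifier, for categorical items) would be trained per parameter item, matching the item/parameter/data-type layout of FIG. 6.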
However, the input to the generation unit 36 may be the learning data classified by the classification unit 35, the learning data classified by the classification unit 34, or unclassified learning data.
Further, the learning data used to generate the learning model 25 may be hearing test information, attribute information, background information, and parameter data that were judged in the past to represent a properly fitted state.
In this way, the learning device 31 can generate the learning model 25 by taking into account not only previously acquired hearing test information but also previously acquired attribute information, background information, parameter data, and so on. In particular, the learning model 25 can be generated from the results of fittings performed by highly skilled technicians.
Therefore, when fitting a hearing aid to a subject, a technician can perform an appropriate fitting by using parameter data estimated with this generated learning model 25.
Also, because estimation using the generated learning model 25 yields parameter data close to the optimum parameter data, the time required for fitting can be shortened. That is, because the information conventionally obtained through the comparison-selection method can now be obtained through the learning model 25, the time required for fitting can be shortened.
[Device operation]
Next, the operation of the fitting support device according to the embodiment of the present invention will be described with reference to FIG. 8. FIG. 8 is a diagram showing an example of the operation of the fitting support device. In the following description, FIGS. 2 to 6 are referred to as appropriate. In the present embodiment, the fitting support method is carried out by operating the fitting support device; accordingly, the following description of the operation of the fitting support device serves as the description of the fitting support method in the present embodiment.
The operation of the learning device according to the embodiment of the present invention will likewise be described with reference to FIG. 9. FIG. 9 is a diagram showing an example of the operation of the learning device. In the following description, FIG. 7 is referred to as appropriate. In the present embodiment, the learning method is carried out by operating the learning device; accordingly, the following description of the operation of the learning device serves as the description of the learning method in the present embodiment.
The operation of the fitting support device will be described.
As shown in FIG. 8, in the operation phase the acquisition unit 2 first acquires the subject's hearing test information, attribute information, and background information (explanatory variables) from the input device 21 (step A1). Specifically, in step A1, the acquisition unit 2 receives, from the input device 21, hearing test information representing the results of a hearing test performed on the subject, attribute information representing the subject's attributes, and background information representing the subject's background.
Next, the classification unit 23 classifies the hearing test information, the attribute information, and the background information (step A2). Specifically, in step A2 the classification unit 23 executes clustering processing on explanatory variables such as the hearing test information, the attribute information, and the background information. The clustering processing converts high-dimensional information into low-dimensional information.
In this way, by executing clustering processing on explanatory variables such as the hearing test information, the attribute information, and the background information, the classification unit 23 can reduce the amount of computational resources required when estimating parameter data. Note that the processing of step A2 may be omitted.
Next, the estimation unit 3 takes the information classified in step A2 as input and estimates the parameter data (objective variables) used to fit the hearing aid to the subject (step A3). Specifically, in step A3 the estimation unit 3 first acquires the classified information from the classification unit 23, then inputs the acquired classified information into the learning model 25 and estimates the parameter data.
The estimation unit 3 does not necessarily have to use classified information as its input; it may use unclassified information instead.
Next, the output information generation unit 24 generates output information used to output the parameter data estimated by the estimation unit 3 to the output device 22 (step A4). The output information generation unit 24 then outputs the generated output information to the output device 22 (step A5).
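Steps A1 to A5 above can be sketched as one small pipeline. Every function body here is an invented stand-in for the corresponding unit (input device 21, classification unit 23, learning model 25, output information generation unit 24); the data values and the single output parameter are purely illustrative.

```python
def acquire():                      # A1: get the explanatory variables
    return {"hearing_test": [40, 50, 60], "age": 72,
            "environment": "street noise"}

def classify(info):                 # A2: reduce to low-dimensional features
    return [sum(info["hearing_test"]) / 3,
            info["age"] // 10,
            0 if "quiet" in info["environment"] else 1]

def estimate(features):             # A3: stand-in for learning model 25
    return {"noise_suppression": 3 if features[2] else 1}

def to_output(params):              # A4: build the output information
    return ", ".join(f"{k}={v}" for k, v in params.items())

info = acquire()
output = to_output(estimate(classify(info)))
print(output)                       # A5: send to the output device
```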
The operation of the learning device will be described.
As shown in FIG. 9, the acquisition unit 33 first acquires learning data obtained in the past (step B1). Specifically, in step B1 the acquisition unit 33 acquires, from the storage device 32, learning data such as a plurality of previously acquired sets of hearing test information, attribute information, background information, and parameter data, and sends the acquired learning data to the classification unit 34.
Next, the classification unit 34 classifies the received learning data (step B2). Specifically, in step B2 the classification unit 34 first compares a score (satisfaction level) indicating whether the user was satisfied with a past fitting against a preset threshold. When a score is at or above the threshold, the classification unit 34 selects the learning data associated with that score.
In this way, in step B2 the classification unit 34 allows the learning model 25 to be trained using only learning data with high satisfaction levels, so the parameter data estimated with the learning model 25 yields higher user satisfaction.
Next, the classification unit 35 further classifies the learning data classified in step B2 (step B3). Specifically, in step B3 the classification unit 35 executes clustering processing on the learning data classified in step B2. The clustering processing converts high-dimensional information into low-dimensional information.
In this way, in step B3 the classification unit 35 can further reduce the amount of learning data, so computational resources can be saved, for example by shortening training time and reducing memory usage.
Next, the generation unit 36 performs machine learning using the learning data classified by the classification unit 35 to generate the learning model 25 (step B4). The generation unit 36 then stores the generated learning model 25 in the estimation unit 3 (step B5). The generation unit 36 may instead store the learning model 25 in the system having the fitting support device 1, in the system having the learning device 31, or in some other storage device.
Here, the machine learning may be, for example, supervised learning or semi-supervised learning. For example, learning such as regression (least squares, random forests, and other regression methods) or multiclass classification (algorithms such as decision trees) can be used. The machine learning is not limited to these methods, and other methods may be used.
However, the generation unit 36 may use as input the learning data classified by the classification unit 34 in step B2, the learning data classified by the classification unit 35 in step B3, or unclassified learning data.
Further, the learning data used to generate the learning model 25 may be hearing test information, attribute information, background information, and parameter data that were judged in the past to represent a properly fitted state.
[Effects of this embodiment]
As described above, according to the present embodiment, the fitting support device 1 can estimate parameter data by taking into account not only the hearing test information but also the attribute information, the background information, and so on, so a technician can use the estimated parameter data to perform an appropriate fitting for the subject.
The fitting support device 1 can also shorten the fitting time previously required to approach the optimum parameter data. That is, because the information conventionally exploited through the comparison-selection method can now be exploited through the learning model 25, the time required for fitting can be shortened.
The fitting support device 1 may also set the parameter data estimated by the estimation unit 3 directly in the hearing aid. That is, the fitting support device 1 may use the estimated parameter data to adjust (fit) the hearing aid directly, without the intervention of a technician.
Furthermore, the learning device 31 can generate the learning model 25 by taking into account not only previously acquired hearing test information but also previously acquired attribute information, background information, parameter data, and so on. In particular, the learning model 25 can be generated from the results of fittings performed by highly skilled technicians.
Therefore, when fitting a hearing aid to a subject, a technician can perform an appropriate fitting using the parameter data estimated by this generated learning model 25.
Also, because estimation using the generated learning model 25 yields parameter data close to the optimum parameter data, the time required for fitting can be shortened. That is, because the information conventionally obtained through the comparison-selection method can now be obtained through the learning model 25, the time required for fitting can be shortened.
[Program]
The program for estimating parameter data in the embodiment of the present invention may be any program that causes a computer to execute steps A1 to A5 shown in FIG. 8. By installing this program on a computer and executing it, the fitting support device and the fitting support method in the present embodiment can be realized. In this case, the processor of the computer functions as the acquisition unit 2, the classification unit 23, the estimation unit 3, and the output information generation unit 24, and performs the processing.
The program for estimating parameter data in the present embodiment may also be executed by a computer system constructed from a plurality of computers. In this case, for example, each computer may function as any one of the acquisition unit 2, the classification unit 23, the estimation unit 3, and the output information generation unit 24.
Furthermore, the program for generating the learning model in the embodiment of the present invention may be any program that causes a computer to execute steps B1 to B5 shown in FIG. 9. By installing this program on a computer and executing it, the learning device and the learning method in the present embodiment can be realized. In this case, the processor of the computer functions as the acquisition unit 33, the classification unit 34, the classification unit 35, and the generation unit 36, and performs the processing.
The program for generating the learning model in the present embodiment may also be executed by a computer system constructed from a plurality of computers. In this case, for example, each computer may function as any one of the acquisition unit 33, the classification unit 34, the classification unit 35, and the generation unit 36.
[Physical configuration]
Here, a computer that realizes the fitting support device or the learning device by executing the program in the embodiment will be described with reference to FIG. 10. FIG. 10 is a block diagram showing an example of a computer that realizes the fitting support device or the learning device according to the embodiment of the present invention.
As shown in FIG. 10, the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to one another via a bus 121 so as to be capable of data communication. The computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to, or instead of, the CPU 111.
 The CPU 111 loads the program (code) of the present embodiment stored in the storage device 113 into the main memory 112 and executes it in a predetermined order, thereby performing various operations. The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). The program according to the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program in the present embodiment may instead be distributed over the Internet, connected via the communication interface 117.
 Specific examples of the storage device 113 include a hard disk drive and a semiconductor storage device such as a flash memory. The input interface 114 mediates data transmission between the CPU 111 and input devices 118 such as a keyboard and a mouse. The display controller 115 is connected to the display device 119 and controls display on the display device 119.
 The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120; it reads the program from the recording medium 120 and writes the processing results of the computer 110 to the recording medium 120. The communication interface 117 mediates data transmission between the CPU 111 and other computers.
 Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital) cards, magnetic recording media such as flexible disks, and optical recording media such as CD-ROM (Compact Disk Read Only Memory).
 Note that the fitting support device 1 or the learning device 31 in the present embodiment can also be realized by using hardware corresponding to each unit, instead of a computer on which the program is installed. Further, the fitting support device 1 or the learning device 31 may be partially realized by a program, with the remainder realized by hardware.
[Additional Notes]
 The following supplementary notes are further disclosed with respect to the above embodiments. A part or all of the above-described embodiments can be expressed as, but are not limited to, the following (Appendix 1) to (Appendix 12).
(Appendix 1)
 A fitting support device comprising:
 an acquisition unit that acquires hearing test information representing the result of a hearing test performed on a subject, attribute information representing attributes of the subject, and background information representing the background of the subject; and
 an estimation unit that receives the acquired hearing test information, the attribute information, and the background information as inputs and estimates parameter data used for fitting a hearing aid to the subject.
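The two units of Appendix 1 can be sketched in code. This is a purely illustrative sketch: the class and function names, the feature layout (audiogram thresholds plus age plus an ambient-noise level), and the model interface are assumptions, not part of the specification.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SubjectRecord:
    # Hearing test information: hearing thresholds (dB HL) per test frequency
    thresholds_db: List[float]
    # Attribute information (here reduced to age for brevity)
    age: int
    # Background information (here a typical living-environment noise level)
    ambient_noise_db: float


def acquire(record_source: dict) -> SubjectRecord:
    """Acquisition unit: collect the three kinds of information for a subject."""
    return SubjectRecord(**record_source)


def estimate_parameters(model, subject: SubjectRecord) -> List[float]:
    """Estimation unit: feed the acquired information to a trained model and
    return hearing-aid fitting parameter data (e.g., per-band gains)."""
    features = subject.thresholds_db + [subject.age, subject.ambient_noise_db]
    return model.predict([features])[0]
```

Any object exposing a `predict` method (the model of Appendix 2) can be plugged into `estimate_parameters`.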
(Appendix 2)
 The fitting support device according to Appendix 1, wherein
 the estimation unit includes a learning model for estimating parameter data, generated by machine learning using, as inputs, a plurality of pieces of hearing test information, parameter data, attribute information, and background information acquired in the past.
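The specification does not name a particular machine-learning algorithm for this learning model. As one hedged illustration only, a minimal model that memorizes past cases and predicts the parameter data of the most similar past case could look like this (the class name and feature layout are hypothetical):

```python
class NearestNeighborFittingModel:
    """Toy 'learning model': memorize past (feature vector -> parameter data)
    pairs and predict the parameter data of the closest past case."""

    def __init__(self):
        self.cases = []  # list of (feature_vector, parameter_data) pairs

    def train(self, feature_vectors, parameter_data_list):
        # Each feature vector combines hearing test, attribute, and
        # background information for one past subject.
        self.cases = list(zip(feature_vectors, parameter_data_list))

    def predict(self, feature_vector):
        # Squared Euclidean distance to each stored past case
        def dist(case):
            stored, _ = case
            return sum((a - b) ** 2 for a, b in zip(stored, feature_vector))

        _, params = min(self.cases, key=dist)
        return params
```

In practice any regression model trained on the past records would fill this role; nearest-neighbour is used here only because it is compact enough to show whole.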
(Appendix 3)
 The fitting support device according to Appendix 2, wherein
 the inputs used to generate the learning model are the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to represent a fitted state.
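Appendix 3 restricts the training inputs to past records judged to represent a good fit. A sketch of that filtering step, assuming (hypothetically) that each past record carries a boolean `fitted` flag recording that judgment:

```python
def select_training_records(records):
    """Keep only past records that were judged to be in a fitted state,
    so that the learning model is generated from successful fittings only."""
    return [r for r in records if r.get("fitted")]
```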
(Appendix 4)
 The fitting support device according to any one of Appendices 1 to 3, wherein
 the attribute information includes at least one or more of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one or more of the subject's living-environment sounds and preferences.
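Before such mixed attribute and background information can be fed to a learning model, the categorical items must be turned into numbers. The following encoding is purely illustrative: the category tables and the choice of hash-to-integer codes are assumptions, as the patent does not prescribe any encoding.

```python
def encode_inputs(hearing_db, age, gender, occupation, env_noise_db, preference):
    """Map hearing test, attribute, and background information to one flat
    numeric feature vector; unknown categories map to -1."""
    genders = {"male": 0, "female": 1}
    occupations = {"office": 0, "factory": 1, "retired": 2}
    preferences = {"music": 0, "conversation": 1, "tv": 2}
    return hearing_db + [
        age,
        genders.get(gender, -1),
        occupations.get(occupation, -1),
        env_noise_db,
        preferences.get(preference, -1),
    ]
```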
(Appendix 5)
 A fitting support method comprising:
 (a) a step of acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing attributes of the subject, and background information representing the background of the subject; and
 (b) a step of receiving the acquired hearing test information, the attribute information, and the background information as inputs and estimating parameter data used for fitting a hearing aid to the subject.
(Appendix 6)
 The fitting support method according to Appendix 5, wherein
 the step (b) uses a learning model for estimating parameter data, generated by machine learning using, as inputs, a plurality of pieces of hearing test information, parameter data, attribute information, and background information acquired in the past.
(Appendix 7)
 The fitting support method according to Appendix 6, wherein
 the inputs used to generate the learning model are the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to represent a fitted state.
(Appendix 8)
 The fitting support method according to any one of Appendices 5 to 7, wherein
 the attribute information includes at least one or more of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one or more of the subject's living-environment sounds and preferences.
(Appendix 9)
 A computer-readable recording medium recording a program including instructions for causing a computer to execute:
 (a) a step of acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing attributes of the subject, and background information representing the background of the subject; and
 (b) a step of receiving the acquired hearing test information, the attribute information, and the background information as inputs and estimating parameter data used for fitting a hearing aid to the subject.
(Appendix 10)
 The computer-readable recording medium according to Appendix 9, wherein
 the step (b) uses a learning model for estimating parameter data, generated by machine learning using, as inputs, a plurality of pieces of hearing test information, parameter data, attribute information, and background information acquired in the past.
(Appendix 11)
 The computer-readable recording medium according to Appendix 10, wherein
 the inputs used to generate the learning model are the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to represent a fitted state.
(Appendix 12)
 The computer-readable recording medium according to any one of Appendices 9 to 11, wherein
 the attribute information includes at least one or more of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one or more of the subject's living-environment sounds and preferences.
 Although the present invention has been described above with reference to the embodiments, the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within its scope.
 As described above, according to the present invention, the accuracy of fitting can be improved and the time required for fitting can be shortened. The present invention is useful in fields where fitting of a worn device such as a hearing aid is required.
  1 Fitting support device
  2 Acquisition unit
  3 Estimation unit
 21 Input device
 22 Output device
 23 Classification unit
 24 Output information generation unit
 25 Learning model
 31 Learning device
 32 Storage device
 33 Acquisition unit
 34, 35 Classification units
 36 Generation unit
110 Computer
111 CPU
112 Main memory
113 Storage device
114 Input interface
115 Display controller
116 Data reader/writer
117 Communication interface
118 Input device
119 Display device
120 Recording medium
121 Bus

Claims (12)

  1.  A fitting support device comprising:
     an acquisition means that acquires hearing test information representing the result of a hearing test performed on a subject, attribute information representing attributes of the subject, and background information representing the background of the subject; and
     an estimation means that receives the acquired hearing test information, the attribute information, and the background information as inputs and estimates parameter data used for fitting a hearing aid to the subject.
  2.  The fitting support device according to claim 1, wherein
     the estimation means includes a learning model for estimating parameter data, generated by machine learning using, as inputs, a plurality of pieces of hearing test information, parameter data, attribute information, and background information acquired in the past.
  3.  The fitting support device according to claim 2, wherein
     the inputs used to generate the learning model are the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to represent a fitted state.
  4.  The fitting support device according to any one of claims 1 to 3, wherein
     the attribute information includes at least one or more of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one or more of the subject's living-environment sounds and preferences.
  5.  A fitting support method comprising:
     (a) acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing attributes of the subject, and background information representing the background of the subject; and
     (b) receiving the acquired hearing test information, the attribute information, and the background information as inputs and estimating parameter data used for fitting a hearing aid to the subject.
  6.  The fitting support method according to claim 5, wherein
     the process (b) uses a learning model for estimating parameter data, generated by machine learning using, as inputs, a plurality of pieces of hearing test information, parameter data, attribute information, and background information acquired in the past.
  7.  The fitting support method according to claim 6, wherein
     the inputs used to generate the learning model are the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to represent a fitted state.
  8.  The fitting support method according to any one of claims 5 to 7, wherein
     the attribute information includes at least one or more of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one or more of the subject's living-environment sounds and preferences.
  9.  A computer-readable recording medium recording a program including instructions for causing a computer to execute:
     (a) a step of acquiring hearing test information representing the result of a hearing test performed on a subject, attribute information representing attributes of the subject, and background information representing the background of the subject; and
     (b) a step of receiving the acquired hearing test information, the attribute information, and the background information as inputs and estimating parameter data used for fitting a hearing aid to the subject.
  10.  The computer-readable recording medium according to claim 9, wherein
     the step (b) uses a learning model for estimating parameter data, generated by machine learning using, as inputs, a plurality of pieces of hearing test information, parameter data, attribute information, and background information acquired in the past.
  11.  The computer-readable recording medium according to claim 10, wherein
     the inputs used to generate the learning model are the hearing test information, the parameter data, the attribute information, and the background information that were determined in the past to represent a fitted state.
  12.  The computer-readable recording medium according to any one of claims 9 to 11, wherein
     the attribute information includes at least one or more of the subject's age, gender, physical characteristics, occupation, medical history, and treatment history, and the background information includes at least one or more of the subject's living-environment sounds and preferences.
PCT/JP2019/017507 2019-04-24 2019-04-24 Fitting assistance device, fitting assistance method, and computer-readable recording medium WO2020217359A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021515389A JP7272425B2 (en) 2019-04-24 2019-04-24 FITTING ASSIST DEVICE, FITTING ASSIST METHOD, AND PROGRAM
PCT/JP2019/017507 WO2020217359A1 (en) 2019-04-24 2019-04-24 Fitting assistance device, fitting assistance method, and computer-readable recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/017507 WO2020217359A1 (en) 2019-04-24 2019-04-24 Fitting assistance device, fitting assistance method, and computer-readable recording medium

Publications (1)

Publication Number Publication Date
WO2020217359A1 true WO2020217359A1 (en) 2020-10-29

Family

ID=72941169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/017507 WO2020217359A1 (en) 2019-04-24 2019-04-24 Fitting assistance device, fitting assistance method, and computer-readable recording medium

Country Status (2)

Country Link
JP (1) JP7272425B2 (en)
WO (1) WO2020217359A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022264535A1 (en) * 2021-06-18 2022-12-22 ソニーグループ株式会社 Information processing method and information processing system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009001559A1 (en) * 2007-06-28 2008-12-31 Panasonic Corporation Environment adaptive type hearing aid
JP2017152865A (en) * 2016-02-23 2017-08-31 リオン株式会社 Hearing aid fitting device, hearing aid fitting program, hearing aid fitting server, and hearing aid fitting method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999019779A1 (en) 1997-10-15 1999-04-22 Beltone Electronics Corporation A neurofuzzy based device for programmable hearing aids


Also Published As

Publication number Publication date
JPWO2020217359A1 (en) 2020-10-29
JP7272425B2 (en) 2023-05-12

Similar Documents

Publication Publication Date Title
US10721571B2 (en) Separating and recombining audio for intelligibility and comfort
US20210393168A1 (en) User authentication via in-ear acoustic measurements
JP6293314B2 (en) Hearing aid system parameter optimization method and hearing aid system
US20230011937A1 (en) Methods and apparatus to generate optimized models for internet of things devices
CN109600699B (en) System for processing service request, method and storage medium thereof
US8335332B2 (en) Fully learning classification system and method for hearing aids
WO2020217359A1 (en) Fitting assistance device, fitting assistance method, and computer-readable recording medium
EP3940698A1 (en) A computer-implemented method of providing data for an automated baby cry assessment
JP7276433B2 (en) FITTING ASSIST DEVICE, FITTING ASSIST METHOD, AND PROGRAM
US20230081796A1 (en) Managing audio content delivery
JP2018005122A (en) Detection device, detection method, and detection program
US11882413B2 (en) System and method for personalized fitting of hearing aids
CN111148001A (en) Hearing system, accessory device and related method for hearing algorithm context design
EP4207812A1 (en) Method for audio signal processing on a hearing system, hearing system and neural network for audio signal processing
US11457320B2 (en) Selectively collecting and storing sensor data of a hearing system
JP2023027697A (en) Terminal device, transmission method, transmission program and information processing system
Ni et al. Personalization of Hearing AID DSLV5 Prescription Amplification in the Field via a Real-Time Smartphone APP
Cauchi et al. Hardware/software architecture for services in the hearing aid industry
WO2022264535A1 (en) Information processing method and information processing system
US20240121560A1 (en) Facilitating hearing device fitting
US11146902B2 (en) Facilitating a bone conduction otoacoustic emission test
US11749270B2 (en) Output apparatus, output method and non-transitory computer-readable recording medium
US20220312126A1 (en) Detecting Hair Interference for a Hearing Device
US20220218236A1 (en) Systems and Methods for Hearing Evaluation
US10601757B2 (en) Multi-output mode communication support device, communication support method, and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925730

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021515389

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925730

Country of ref document: EP

Kind code of ref document: A1