WO2014142328A1 - Hearing test device, hearing test method, and method for creating words for hearing test - Google Patents
Hearing test device, hearing test method, and method for creating words for hearing test
- Publication number
- WO2014142328A1 WO2014142328A1 PCT/JP2014/056994 JP2014056994W WO2014142328A1 WO 2014142328 A1 WO2014142328 A1 WO 2014142328A1 JP 2014056994 W JP2014056994 W JP 2014056994W WO 2014142328 A1 WO2014142328 A1 WO 2014142328A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- word
- hearing
- question
- unit
- words
- Prior art date
Links
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/12—Audiometering
- A61B5/121—Audiometering evaluating hearing capacity
- A61B5/123—Audiometering evaluating hearing capacity subjective methods
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- the method for creating words for a hearing test includes: a word selection step of arbitrarily selecting a word from the dictionary; a mishearing conversion step of converting the word selected in the word selection step into a misheard form; a word search step of searching whether or not the word converted in the mishearing conversion step exists in the dictionary; and a word recording step of recording the word in the recording unit when it is found to exist in the word search step.
- the mishearing conversion step is preferably a process of replacing part of the characters of the word selected in the word selection step, based on a mishearing matrix indicating the mishearing tendencies of a predetermined word group.
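As a rough sketch of this replacement process, the conversion could look like the following (Python; the substitution table, the romaji spellings, and the single-character replacement rule are illustrative assumptions, not the patent's actual mishearing matrix):

```python
# Sketch of the mishearing conversion step: replace one character of the
# selected word according to a small mishearing table.  The table below is
# a hypothetical stand-in for the mishearing matrix (e.g. nasal "n"/"m"
# and vowel "u"/"i" confusions mentioned in the text).
MISHEARING_MATRIX = {"n": "m", "u": "i"}

def convert_to_misheard(word: str) -> list[str]:
    """Return every variant obtained by replacing one confusable character."""
    variants = []
    for i, ch in enumerate(word):
        if ch in MISHEARING_MATRIX:
            variants.append(word[:i] + MISHEARING_MATRIX[ch] + word[i + 1:])
    return variants

print(convert_to_misheard("nihon"))  # ['mihon', 'nihom']
```

Each variant would then be checked against the dictionary in the word search step described above.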
- the recording unit 2 records words that may be misheard by people with hearing loss.
- the recording unit 2 is configured, for example, as a database in which a plurality of words that may be misheard are recorded in association with the candidate words they may be misheard as. Specific words are described later with reference to FIG. 2.
- the CPU 10 includes a question creation unit 11 and a determination unit 12.
- the CPU 10 has a question creation function as the question creation unit 11 and a hearing determination function as the determination unit 12.
- the question creation unit 11 arbitrarily selects a word recorded in the recording unit 2 that may be misheard, and creates a question sentence for the hearing test containing at least one selected word. Sentence examples constituting the question sentence are prepared in advance and stored, for example, in storage means (not shown). The question creation unit 11 creates a question sentence by combining the word selected from the recording unit 2 with a sentence example stored in the storage means. Note that the sentence example to be combined with the selected word may be recorded in the recording unit 2 in advance, or may be created by the operator using the input unit 5, for example.
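A minimal sketch of this combination of a selected word with a stored sentence example (Python; the word list and the template string are hypothetical placeholders for the recorded data):

```python
import random

# Hypothetical stand-ins for words in the recording unit 2 and for a
# sentence example held in the storage means.
RECORDED_WORDS = ["BEIKOKU", "NIHON", "UTYUU", "MATTOU"]
TEMPLATE = "Select the word you are interested in from the following words: {}."

def create_question(words, k=3, seed=None):
    """Pick k recorded words and splice them into the sentence example."""
    rng = random.Random(seed)
    chosen = rng.sample(words, k=min(k, len(words)))
    return TEMPLATE.format(", ".join(chosen))

print(create_question(RECORDED_WORDS, seed=0))
```

The resulting sentence is what the question output unit 4 would present by voice.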
- the question text created by the question creation unit 11 is output via the question output unit 4.
- the question output unit 4 outputs the question text by voice, and presents the output question text to the respondent, that is, the subject of the hearing test.
- the question output unit 4 can be configured, for example, as a speaker. Alternatively, the system may take a format in which the examiner conducting the hearing test reads the question sentence aloud to the respondent (examinee).
- FIG. 2 shows an example of a mishearing matrix showing a group of words, recorded in the recording unit 2, that may be misheard.
- the degree of hearing loss is divided into three levels, mild, moderate, and severe, displayed in the upper, middle, and lower rows in this order. In the horizontal direction of each row, words that may be misheard at that level of hearing loss are displayed in association with the candidate words they may be misheard as.
- FIG. 2 shows mishearing tendencies that depend on the degree of hearing loss.
- examples of mishearing between nasal sounds ("n" and "m") include "NATOTO" misheard as "MATTOU", "NISHIN" as "MISHIN", and "Japan (NIHON)" as "MIHON".
- frequency information indicating the frequency of use of the word is attached to each stored word.
- in FIG. 2, frequency information of 500 is attached to the word "NATOTO" shown in the upper row, 150 to "MATTOU", 400 to "MISHIN", 500 to "NIHON", and 400 to "MIHON"; frequency information is likewise attached to "NISHIN".
- the frequency information in FIG. 2 indicates the number of hits for each word in an Internet search.
- the examiner calls out to a passerby on the street, for example, and asks for a hearing test.
- the examiner speaks to the passerby as an investigator conducting a questionnaire survey, so that the passerby does not realize that it is a test of hearing ability.
- the hearing ability of the respondent is tested in the form of answering the questionnaire.
- Fig. 3 shows a hearing test pattern in which one word is selected from two or more words.
- the answer of the respondent to the question is input to the hearing test apparatus 1 via the input unit 5, for example (an example of an input step).
- when the respondent answers "NATOTO" to the question (step S102), the respondent has not misheard the word.
- "NATOTO" is classified as a word easily misheard even by a person with mild hearing loss. Therefore, the determination by the determination unit 12 in this hearing test pattern is "no problem with hearing" (step S103: an example of a determination step).
- when the respondent gives an answer to this question other than "NATOTO", "MATTOU", "NISHIN", or "MISHIN" (step S110), the determination in this hearing test pattern is "no determination" (step S111).
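The branching in steps S102 through S111 amounts to a lookup from the respondent's answer to a determination result, roughly as below (Python; the mapping for "MATTOU" is an assumption, since the corresponding branch is not quoted above):

```python
# Sketch of the determination step for this test pattern: each recorded
# answer maps to a determination result; any other answer yields
# "no determination" (step S111).
DETERMINATIONS = {
    "NATOTO": "no problem with hearing",     # mild-level word heard correctly
    "MATTOU": "possible mild hearing loss",  # assumed branch for a mishearing
}

def determine(answer: str) -> str:
    return DETERMINATIONS.get(answer, "no determination")

print(determine("NATOTO"))  # no problem with hearing
```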
- a question sentence is created by the question creation unit 11, and the created question sentence "Select the word you are interested in from the following words: "United States (BEIKOKU)", "Japan (NIHON)", and "universe (UTYUU)"" is output to the respondent from the question output unit 4, for example (step S301: an example of a question output step).
- the words "United States (BEIKOKU)", "Japan (NIHON)", and "universe (UTYUU)" included in this question sentence are all words easily misheard by people with hearing loss, and belong to the word group recorded in advance (see FIG. 2).
- when the respondent answers "universe (UTYUU)" to this question (step S310), the respondent has not misheard the word "universe (UTYUU)".
- "universe (UTYUU)" is classified in the lower row as a word easily misheard by a person with severe hearing loss. Therefore, the determination in this hearing test pattern is "does not correspond to severe hearing loss (moderate or mild loss remains possible)" (step S311).
- when the respondent answers "ITYUU" to this question (step S312), it is determined that the respondent has misheard "universe (UTYUU)" as "ITYUU".
- mishearing between "I" and "U", as in "universe (UTYUU)" and "ITYUU", is shown in the lower row as mishearing typical of a person with severe hearing loss. Therefore, in this case, the determination in this hearing test pattern is "suspected severe hearing loss" (step S313).
- when the respondent answers "REIKOKU" to the first question (step S404), it is determined that the respondent has misheard "United States (BEIKOKU)" as "REIKOKU". Then, as described in steps S304 and S305 in FIG. 5, the determination in this hearing test pattern is "possibility of moderate or worse hearing loss"; that is, the result indicates both the possibility of moderate hearing loss and the possibility of severe hearing loss.
- when the respondent answers "RISU" to the first question (step S415), the respondent has not misheard the word "RISU".
- "squirrel (RISU)" is classified as a word easily misheard by a person with severe hearing loss. Therefore, the determination in this hearing test pattern is "does not correspond to severe hearing loss (moderate or mild loss remains possible)" (step S416).
- the determination unit 12 determines at least whether or not the respondent's hearing has declined, and the determination result is output by the result output unit 6, for example by being displayed on a display screen or printed out for the respondent.
- the question text is in the form of a questionnaire or quiz
- respondents can answer the questions naturally, without being conscious that a hearing test is taking place, and their hearing can still be determined. Therefore, for example, a hearing aid seller can identify in advance which people need a hearing aid and which do not, and sell hearing aids efficiently.
- the level of hearing loss is determined not only from whether the respondent's answer is "correct" or "incorrect", but also by examining the content of an incorrect answer (the mishearing), so the respondent's hearing level can be determined more accurately.
- question sentences containing words with different likelihoods of being misheard are prepared and presented to the respondent in stages, in the form of a questionnaire survey.
- the level of the respondent's hearing loss can therefore be determined finely and accurately, based on mishearing tendencies that depend on the degree of hearing loss.
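The staged presentation described above can be sketched as follows (Python; the level-to-word grouping mirrors the three rows of FIG. 2, but the words and the stop condition are illustrative assumptions):

```python
# Sketch of staged questioning: word groups ordered from the mild row to the
# severe row of the mishearing matrix.  The highest level at which the
# respondent mishears a word is taken as the suspected degree of loss.
LEVELS = [
    ("mild", ["NATOTO"]),
    ("moderate", ["NISHIN"]),
    ("severe", ["UTYUU"]),
]

def staged_test(ask):
    """`ask(word)` returns the respondent's answer for that word."""
    result = "no hearing loss detected"
    for level, words in LEVELS:
        if any(ask(w) != w for w in words):
            result = f"suspected {level} hearing loss"
    return result

# A respondent who mishears only the severe-row word:
print(staged_test(lambda w: "ITYUU" if w == "UTYUU" else w))
```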
- the question text is output as voice
- instead of outputting the result of the hearing test, a survey result that can be derived from the questionnaire survey, that is, a result unrelated to the hearing test, may be output so that the respondent is not notified that a hearing test has been performed.
- a word is arbitrarily selected from the dictionary (step S501: an example of a word selection step).
- here, the word extracted from the dictionary will be described as "cow (USHI)".
- the "ISHI" created by the replacement is converted into Japanese notation using, for example, an electronic dictionary or a personal computer (step S504).
- here, the description assumes that the word has been converted to "stone (ISHI)" (steps S502 to S504 are an example of a mishearing conversion step).
- the words to be additionally recorded, "cow (USHI)" and "stone (ISHI)", are searched for on the Internet, for example, and the number of hits, that is, an indication of how likely the word is to actually be used in daily life, is attached and recorded as frequency information (step S508, see FIG. 2). In FIG. 2, 500 hits are recorded for both "cow (USHI)" and "stone (ISHI)".
- when a plurality of converted words are found, various databases connected via a network, such as literature, patent, and paper databases, may be accessed and the words compared with the information recorded in each database; as a result of the comparison, words with a high hit count may be preferentially recorded in the recording unit 2.
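Putting steps S501 through S508 together, the word creation method might be sketched like this (Python; the dictionary, the substitution table, and the constant `hit_count` are placeholders for the real dictionary and the Internet/database searches):

```python
# End-to-end sketch of the word creation method: select a word, generate
# misheard variants, keep variants that exist in the dictionary (word search
# step), and record each pair with frequency information (word recording step).
DICTIONARY = {"ushi", "ishi", "nihon", "mihon"}
SUBSTITUTIONS = {"u": "i", "n": "m"}  # illustrative confusable sounds

def hit_count(word: str) -> int:
    # Placeholder for the Internet-search hit count used as frequency info.
    return 500

def create_test_words(word: str):
    recorded = []
    for i, ch in enumerate(word):
        if ch in SUBSTITUTIONS:
            variant = word[:i] + SUBSTITUTIONS[ch] + word[i + 1:]
            if variant in DICTIONARY:  # the variant is a real word
                recorded.append((word, variant, hit_count(word), hit_count(variant)))
    return recorded

print(create_test_words("ushi"))  # [('ushi', 'ishi', 500, 500)]
```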
- the present invention is applicable not only to Japanese and English but also to other languages.
- a mishearing matrix described in "The Journal of the Acoustical Society of America 128 (2010) 444-455" may be used. Specifically, English word pairs that may be misheard can be set, such as "Deer" and "Tear", "Sheep" and "Cheap", "Peach" and "Beach", and "Seed" and "Seat".
Abstract
Description
A recording unit in which a plurality of words that may be misheard are recorded, classified according to how likely they are to be misheard;
a question output unit that outputs, by voice, a question sentence containing at least one of the words;
an input unit for inputting an answer to the question sentence;
a determination unit that determines at least whether or not the respondent's hearing has declined, based on the content of the answer input to the input unit; and
a result output unit that outputs the determination result determined by the determination unit;
the apparatus is characterized by comprising the above.
A question output step of outputting, by voice, a question sentence containing at least one word selected from a plurality of words with different likelihoods of being misheard;
an input step for inputting an answer to the question sentence;
a determination step of determining at least whether or not the respondent's hearing has declined, based on the content of the answer input in the input step; and
a result output step of outputting the determination result determined in the determination step;
the method is characterized by comprising the above.
A word selection step of arbitrarily selecting a word from a dictionary;
a mishearing conversion step of converting the word selected in the word selection step into a misheard form;
a word search step of searching whether or not the word converted in the mishearing conversion step exists in the dictionary; and
a word recording step of recording the word in the recording unit when it is found to exist in the word search step;
the method is characterized by comprising the above.
This application is based on Japanese Patent Application No. 2013-054053 filed on March 15, 2013, the contents of which are incorporated herein by reference.
Claims (7)
- A hearing test apparatus comprising: a recording unit in which a plurality of words that may be misheard are recorded, classified according to how likely they are to be misheard; a question output unit that outputs, by voice, a question sentence containing at least one of the words; an input unit for inputting an answer to the question sentence; a determination unit that determines at least whether or not the respondent's hearing has declined, based on the content of the answer input to the input unit; and a result output unit that outputs the determination result determined by the determination unit.
- The hearing test apparatus according to claim 1, wherein the recording unit records words that may be misheard in association with the candidate words they may be misheard as.
- The hearing test apparatus according to claim 1, wherein the question output unit sequentially outputs a plurality of the question sentences, containing the words with different likelihoods of being misheard, based on the content of the answers input to the input unit, and the determination unit determines the level of hearing loss based on the content of the answers to the plurality of question sentences.
- A hearing test method comprising: a question output step of outputting, by voice, a question sentence containing at least one word selected from a plurality of words with different likelihoods of being misheard; an input step for inputting an answer to the question sentence; a determination step of determining at least whether or not the respondent's hearing has declined, based on the content of the answer input in the input step; and a result output step of outputting the determination result determined in the determination step.
- A word creation method for creating words to be recorded in the recording unit of the hearing test apparatus according to claim 1, the method comprising: a word selection step of arbitrarily selecting a word from a dictionary; a mishearing conversion step of converting the word selected in the word selection step into a misheard form; a word search step of searching whether or not the word converted in the mishearing conversion step exists in the dictionary; and a word recording step of recording the word in the recording unit when it is found to exist in the word search step.
- The method for creating words for a hearing test according to claim 5, wherein the mishearing conversion step is a process of replacing part of the characters of the word selected in the word selection step, based on a mishearing matrix indicating the mishearing tendencies of a predetermined word group.
- The method for creating words for a hearing test according to claim 5 or 6, wherein, when a plurality of words are found to exist in the word search step, the words are compared with information recorded in a database on a network, and, as a result of the comparison, words with a higher frequency of use are recorded in the recording unit in the word recording step.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/777,006 US20160045142A1 (en) | 2013-03-15 | 2014-03-14 | Hearing examination device, hearing examination method, and method for generating words for hearing examination |
CN201480016026.3A CN105072999A (zh) | 2013-03-15 | 2014-03-14 | 听力检查装置、听力检查方法以及听力检查用单词制作方法 |
KR1020157024523A KR20150131022A (ko) | 2013-03-15 | 2014-03-14 | 청력 검사 장치, 청력 검사 방법 및 청력 검사용 단어 작성 방법 |
SG11201507683RA SG11201507683RA (en) | 2013-03-15 | 2014-03-14 | Hearing examination device, hearing examination method, and method for generating words for hearing examination |
EP14765378.6A EP2974654A4 (en) | 2013-03-15 | 2014-03-14 | HEARING EXAMINATION DEVICE, HEARING EXAMINATION METHOD AND METHOD FOR PRODUCING HEARING EXAMINATION WORDS |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013054053A JP2014176582A (ja) | 2013-03-15 | 2013-03-15 | 聴力検査装置、聴力検査方法および聴力検査用単語作成方法 |
JP2013-054053 | 2013-03-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014142328A1 true WO2014142328A1 (ja) | 2014-09-18 |
Family
ID=51536975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/056994 WO2014142328A1 (ja) | 2013-03-15 | 2014-03-14 | 聴力検査装置、聴力検査方法および聴力検査用単語作成方法 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20160045142A1 (ja) |
EP (1) | EP2974654A4 (ja) |
JP (1) | JP2014176582A (ja) |
KR (1) | KR20150131022A (ja) |
CN (1) | CN105072999A (ja) |
SG (1) | SG11201507683RA (ja) |
WO (1) | WO2014142328A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105118519A (zh) * | 2015-07-10 | 2015-12-02 | 中山大学孙逸仙纪念医院 | 一种听力评估系统 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10078859B2 (en) | 2015-07-22 | 2018-09-18 | Sara J. Sable | System and method for consumer screening, selecting, recommending, and/or selling personal sound amplification products (PSAP) |
CN109381193B (zh) * | 2017-08-07 | 2021-07-20 | 圣布拉斯特有限公司 | 听力检测装置及其操作方法 |
US11074317B2 (en) | 2018-11-07 | 2021-07-27 | Samsung Electronics Co., Ltd. | System and method for cached convolution calculation |
JP2020130535A (ja) * | 2019-02-18 | 2020-08-31 | 国立大学法人九州大学 | 音声伝達状況評価システム及び音声伝達状況評価方法 |
CN110134335B (zh) * | 2019-05-10 | 2022-08-12 | 天津大学深圳研究院 | 一种基于键值对的rdf数据管理方法、装置及存储介质 |
US20220386902A1 (en) * | 2019-11-21 | 2022-12-08 | Cochlear Limited | Scoring speech audiometry |
KR102564571B1 (ko) | 2021-04-19 | 2023-08-10 | 연세대학교 원주산학협력단 | 기계 학습에 기초한 순음청력검사의 자동화 결과 판독 장치 및 방법 |
WO2023209598A1 (en) * | 2022-04-27 | 2023-11-02 | Cochlear Limited | Dynamic list-based speech testing |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0739540A (ja) * | 1993-07-30 | 1995-02-10 | Sony Corp | 音声解析装置 |
JPH0938069A (ja) * | 1995-08-02 | 1997-02-10 | Nippon Telegr & Teleph Corp <Ntt> | 語音聴力検査方法およびこの方法を実施する装置 |
US6026361A (en) * | 1998-12-03 | 2000-02-15 | Lucent Technologies, Inc. | Speech intelligibility testing system |
US20020107692A1 (en) * | 2001-02-02 | 2002-08-08 | Litovsky Ruth Y. | Method and system for rapid and reliable testing of speech intelligibility in children |
JP2002259714A (ja) | 2001-02-27 | 2002-09-13 | Towa Engineering Corp | 補聴器販売支援システム及び方法 |
JP2002346213A (ja) * | 2001-05-30 | 2002-12-03 | Yamaha Corp | 聴力測定機能を持つゲーム装置およびゲームプログラム |
WO2006007632A1 (en) * | 2004-07-16 | 2006-01-26 | Era Centre Pty Ltd | A method for diagnostic home testing of hearing impairment, and related developmental problems in infants, toddlers, and children |
JP2013054053A (ja) | 2011-08-31 | 2013-03-21 | Brother Ind Ltd | カートリッジ |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5911137B2 (ja) * | 1976-02-14 | 1984-03-13 | 三菱電機株式会社 | 漢字入力方式 |
US5303327A (en) * | 1991-07-02 | 1994-04-12 | Duke University | Communication test system |
JPH07200615A (ja) * | 1993-12-28 | 1995-08-04 | Noriko Yoshii | 言語抽出方法 |
US20070276285A1 (en) * | 2003-06-24 | 2007-11-29 | Mark Burrows | System and Method for Customized Training to Understand Human Speech Correctly with a Hearing Aid Device |
CN102202570B (zh) * | 2009-07-03 | 2014-04-16 | 松下电器产业株式会社 | 语音清晰度评价系统、其方法 |
US9131876B2 (en) * | 2009-08-18 | 2015-09-15 | Samsung Electronics Co., Ltd. | Portable sound source playing apparatus for testing hearing ability and method of testing hearing ability using the apparatus |
-
2013
- 2013-03-15 JP JP2013054053A patent/JP2014176582A/ja active Pending
-
2014
- 2014-03-14 KR KR1020157024523A patent/KR20150131022A/ko not_active Application Discontinuation
- 2014-03-14 SG SG11201507683RA patent/SG11201507683RA/en unknown
- 2014-03-14 CN CN201480016026.3A patent/CN105072999A/zh active Pending
- 2014-03-14 WO PCT/JP2014/056994 patent/WO2014142328A1/ja active Application Filing
- 2014-03-14 US US14/777,006 patent/US20160045142A1/en not_active Abandoned
- 2014-03-14 EP EP14765378.6A patent/EP2974654A4/en not_active Withdrawn
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0739540A (ja) * | 1993-07-30 | 1995-02-10 | Sony Corp | 音声解析装置 |
JPH0938069A (ja) * | 1995-08-02 | 1997-02-10 | Nippon Telegr & Teleph Corp <Ntt> | 語音聴力検査方法およびこの方法を実施する装置 |
US6026361A (en) * | 1998-12-03 | 2000-02-15 | Lucent Technologies, Inc. | Speech intelligibility testing system |
US20020107692A1 (en) * | 2001-02-02 | 2002-08-08 | Litovsky Ruth Y. | Method and system for rapid and reliable testing of speech intelligibility in children |
JP2002259714A (ja) | 2001-02-27 | 2002-09-13 | Towa Engineering Corp | 補聴器販売支援システム及び方法 |
JP2002346213A (ja) * | 2001-05-30 | 2002-12-03 | Yamaha Corp | 聴力測定機能を持つゲーム装置およびゲームプログラム |
WO2006007632A1 (en) * | 2004-07-16 | 2006-01-26 | Era Centre Pty Ltd | A method for diagnostic home testing of hearing impairment, and related developmental problems in infants, toddlers, and children |
JP2013054053A (ja) | 2011-08-31 | 2013-03-21 | Brother Ind Ltd | カートリッジ |
Non-Patent Citations (2)
Title |
---|
See also references of EP2974654A4 |
THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 128, 2010, pages 444-455 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105118519A (zh) * | 2015-07-10 | 2015-12-02 | 中山大学孙逸仙纪念医院 | 一种听力评估系统 |
Also Published As
Publication number | Publication date |
---|---|
EP2974654A1 (en) | 2016-01-20 |
US20160045142A1 (en) | 2016-02-18 |
SG11201507683RA (en) | 2015-10-29 |
EP2974654A4 (en) | 2016-11-16 |
JP2014176582A (ja) | 2014-09-25 |
CN105072999A (zh) | 2015-11-18 |
KR20150131022A (ko) | 2015-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014142328A1 (ja) | 聴力検査装置、聴力検査方法および聴力検査用単語作成方法 | |
Guion et al. | An investigation of current models of second language speech perception: The case of Japanese adults’ perception of English consonants | |
Holmes et al. | Familiar voices are more intelligible, even if they are not recognized as familiar | |
Bent et al. | Classification of regional dialects, international dialects, and nonnative accents | |
Alves et al. | Prosody and reading in dyslexic children | |
Nelson et al. | Reading, writing, and spoken language assessment profiles for students who are deaf and hard of hearing compared with students with language learning disabilities | |
Stoyneshka et al. | Phoneme restoration methods for investigating prosodic influences on syntactic processing | |
Kalra et al. | Do you like my English? Thai students’ attitudes towards five different Asian accents | |
Shafiro et al. | Perceptual confusions of American-English vowels and consonants by native Arabic bilinguals | |
Harris et al. | Psychometrically equivalent Russian speech audiometry materials by male and female talkers: materiales de logoaudiometría en ruso psicométricamente equivalentes para hablantes masculinos y femeninos | |
Georgiou | Discrimination of uncategorized-categorized and uncategorized-uncategorized Greek consonantal contrasts by Russian speakers | |
Dean et al. | Clinical evaluation of the mini-mental state exam with culturally deaf senior citizens | |
Myles | The clinical use of Arthur Boothroyd (AB) word lists in Australia: exploring evidence-based practice | |
Freeman et al. | First-language influence on second language speech perception depends on task demands | |
Yap et al. | Intonation patterns of questions in Malaysian English | |
De La Plata et al. | Development of the Texas Spanish Naming Test: a test for Spanish speakers | |
Lev-Ari et al. | How the demographic makeup of our community influences speech perception | |
Hung et al. | Development, reliability, and validity of the oral reading assessment for Mandarin-speaking children with hearing loss | |
El Adas et al. | Phonotactic and lexical factors in talker discrimination and identification | |
Linassi et al. | Working memory abilities and the severity of phonological disorders | |
Baltazani et al. | Drifting without an anchor: How pitch accents withstand vowel loss | |
Williams et al. | Sensitivity to the acoustic correlates of lexical stress and their relationship to reading in skilled readers | |
Shen et al. | Accent categorisation by lay listeners: Which type of “native ear” works better | |
George et al. | Community-based naming agreement, familiarity, image agreement and visual complexity ratings among adult Indians | |
Santos Oliveira et al. | Effects of language experience on the discrimination of the Portuguese palatal lateral by nonnative listeners |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480016026.3 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14765378 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20157024523 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2014765378 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14777006 Country of ref document: US Ref document number: 2014765378 Country of ref document: EP |