WO2020195164A1 - Cognitive function test method, program, and cognitive function test system - Google Patents

Cognitive function test method, program, and cognitive function test system

Info

Publication number
WO2020195164A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sentence
subject
perforated
cognitive function
Prior art date
Application number
PCT/JP2020/003858
Other languages
English (en)
Japanese (ja)
Inventor
亮佑 南雲
満春 細川
祐輝 小川
角 貞幸
Original Assignee
パナソニックIpマネジメント株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パナソニックIpマネジメント株式会社
Priority to JP2021508164A (patent JP7241321B2)
Publication of WO2020195164A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00: Teaching not covered by other main groups of this subclass

Definitions

  • This disclosure relates to cognitive function test methods, programs, and cognitive function test systems. More specifically, the present disclosure relates to cognitive function testing methods, programs, and cognitive function testing systems for testing cognitive function.
  • Patent Document 1 describes a dementia diagnosis system (cognitive function test system) used for diagnosing a subject's dementia.
  • the dementia diagnosis system described in Patent Document 1 includes a dementia diagnosis device and a terminal device.
  • the terminal device records the conversation between the subject and the questioner.
  • the dementia diagnostic device calculates the dementia level of the subject from the voice data recorded by the terminal device.
  • the purpose of the present disclosure is to provide a cognitive function test method, a program, and a cognitive function test system capable of improving the evaluation accuracy of cognitive function.
  • the cognitive function test method is a cognitive function test method in which a subject is made to make an utterance related to the image while displaying an image on a display unit.
  • a perforated sentence in which a part of the sentence related to the image is left blank is displayed on the display unit.
  • the program according to one aspect of the present disclosure is a program for causing one or more processors to execute the above-mentioned cognitive function test method.
  • the cognitive function test system includes a display unit, an examination unit, and a control unit.
  • the inspection unit causes the subject to make an utterance related to the image while displaying the image on the display unit.
  • the control unit causes the display unit to display a perforated sentence in which a part of the sentence related to the image is blank together with the image.
  • FIG. 1 is an explanatory diagram for explaining an examination situation by a cognitive function examination system according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram showing the configuration of the same cognitive function test system.
  • FIG. 3A is a schematic view showing a display example in the first test of the cognitive function test system of the same.
  • FIG. 3B is a schematic view showing a display example in the first test of the cognitive function test system of the same.
  • FIG. 4A is a schematic view showing a display example in the second test of the cognitive function test system of the same.
  • FIG. 4B is a schematic view showing a display example in the second test of the cognitive function test system of the same.
  • FIG. 5 is a schematic view showing a display example of an image to be displayed on the display unit between the first test and the second test of the cognitive function test system.
  • FIG. 6 is a flowchart showing a series of operations of the cognitive function test system described above.
  • The cognitive function test method according to the present embodiment is a method for testing the cognitive function of the subject 102 with the examiner 101 using the cognitive function test system 1, and is realized by the cognitive function test system 1.
  • the cognitive function test system 1 is, for example, a mobile terminal such as a tablet or a smartphone, a personal computer (PC), or the like.
  • the cognitive function test system 1 is a tablet as shown in FIG.
  • the cognitive function test system 1 includes a display unit 121, an inspection unit 112, and a control unit 11.
  • the display unit 121 is, for example, a tablet screen and displays at least the first image (image) I1.
  • the inspection unit 112 causes the subject 102 (see FIG. 1) to make an utterance related to the first image I1 while displaying the first image I1 on the display unit 121.
  • Subject 102 is, for example, an elderly person.
  • the control unit 11 causes the display unit 121 to display the perforated sentence S1 in which a part of the sentence related to the first image I1 is blank together with the first image I1.
  • the "cognitive function” referred to in the present disclosure refers to a function for correctly understanding and appropriately executing things, and is classified into, for example, a memory function, an attention function, an information processing function, and an execution function.
  • “Memory function” refers to the function of keeping past experiences in mind and recalling and using them from time to time.
  • “Attention function” refers to the function of consciousness to selectively react and pay attention to surrounding things, specific parts of events, and specific aspects of complex mental activities.
  • the "information processing function” refers to a function of processing information input from the outside world (for example, visual information, auditory information, etc.).
  • Executive function means a function necessary to effectively accomplish a series of activities with a purpose.
  • Mild cognitive impairment is an intermediate state between normal (healthy) and dementia.
  • Examples of simple methods for examining cognitive function include the Revised Hasegawa Dementia Scale (HDS-R), the MMSE (Mini-Mental State Examination), and the MoCA (Montreal Cognitive Assessment). These test methods are interactive tests and are performed by trained medical professionals and the like. Therefore, people other than medical professionals cannot easily perform the examination, and examination opportunities cannot be expanded.
  • In the cognitive function test method according to the present embodiment, the following configuration is adopted so that even a person other than a medical worker can perform a test related to cognitive function, thereby expanding test opportunities.
  • the cognitive function test method is a cognitive function test method in which the subject 102 is made to make an utterance related to the first image I1 while displaying the first image (image) I1 on the display unit 121.
  • In this cognitive function test method, along with the first image I1, a perforated sentence S1 in which a part of the sentence related to the first image I1 is left blank is displayed on the display unit 121.
  • The subject 102 is made to utter the perforated sentence S1 while filling in the perforated portion, which places a heavier load on the subject 102 than simply speaking while looking at the first image I1 and thus improves the evaluation accuracy of the cognitive function. In addition, since it is only necessary to have the subject 102 make an utterance related to the first image I1, even a person other than a medical worker can easily perform the examination, so that examination opportunities can be expanded.
  • the cognitive function test system 1 includes a control unit 11, a presentation unit 12, a storage unit 13, an operation unit 14, and a setting unit 15. As shown in FIG. 1, the cognitive function test system 1 is, for example, a tablet.
  • the control unit 11 is composed of, for example, a microcomputer having a processor and a memory. That is, the control unit 11 is realized by a computer system having a processor and a memory. Then, when the processor executes an appropriate program, the computer system functions as the control unit 11 (including the evaluation unit 111, the inspection unit 112, and the analysis unit 113).
  • The program may be pre-recorded in the memory, may be provided through a telecommunication line such as the Internet, or may be recorded and provided on a non-transitory recording medium such as a memory card.
  • the control unit 11 has an evaluation unit 111, an inspection unit 112, and an analysis unit 113.
  • the evaluation unit 111 evaluates the subject 102 based on the first test result, which is the test result of the first test, and the second test result, which is the test result of the second test. In other words, the evaluation unit 111 determines the state of the cognitive function of the subject 102, that is, whether or not the cognitive function of the subject 102 is deteriorated based on the two test results.
  • In the first examination, the evaluation unit 111 uses, for example, the vocabulary of the subject 102, the task execution time, and the like as feature quantities, and evaluates the cognitive function (attention function and executive function) of the subject 102 based on these feature quantities.
  • the vocabulary of the subject 102 includes a selection of words for the perforated portion of the perforated sentence S1 described later.
  • the task execution time is, for example, the time from displaying the first image I1 and the perforated sentence S1 on the display unit 121 to the end of the utterance of the subject 102.
  • In the second examination, the evaluation unit 111 uses, for example, the vocabulary ability of the subject 102, the ability to compose a sentence, the length of one sentence, and the like as feature quantities, and evaluates the cognitive function (memory function) of the subject 102 based on these feature quantities.
  • The evaluation unit 111 may also evaluate the cognitive function of the subject 102 from the change in the voice power (volume) of the subject 102. For example, a subject 102 whose cognitive function is deteriorated can speak smoothly at first, so the voice power is large, but the voice power becomes small as the words run out with the passage of time. From such a change in voice power, it can be determined that the cognitive function of the subject 102 is deteriorated.
  • The "volume" in the present specification may be a so-called loudness level (unit: phon) or a sound pressure level (unit: decibel).
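  • As an illustration only (not part of the disclosure), the following sketch shows one way the decline in voice power over the course of an utterance could be quantified; the window length, the dB reference, and the decline threshold are assumptions.
```python
# Illustrative sketch: detect whether the subject's voice power declines over an
# utterance. Window length and the 10 dB decline threshold are assumptions.
import numpy as np

def frame_rms_db(samples: np.ndarray, sample_rate: int, window_s: float = 1.0):
    """RMS level (dB, relative to full scale) of consecutive non-overlapping windows."""
    window = int(window_s * sample_rate)
    levels = []
    for start in range(0, len(samples) - window + 1, window):
        frame = samples[start:start + window]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12  # avoid log(0)
        levels.append(20.0 * np.log10(rms))
    return np.array(levels)

def voice_power_declines(samples, sample_rate, drop_db: float = 10.0) -> bool:
    """True if the mean level of the last third of the utterance is at least
    drop_db below the mean level of the first third."""
    levels = frame_rms_db(samples, sample_rate)
    if len(levels) < 2:
        return False
    third = max(1, len(levels) // 3)
    return levels[:third].mean() - levels[-third:].mean() >= drop_db
```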
  • The inspection unit 112 is configured to perform the first inspection and the second inspection. Specifically, in the first inspection, the inspection unit 112 outputs the first control signal for displaying the first image I1 and the perforated sentence S1 to the display unit 121, so that the first image I1 and the perforated sentence S1 are displayed on the display unit 121. Further, in the second inspection, the inspection unit 112 outputs the second control signal for displaying the guide information G1 to the display unit 121, so that the guide information G1 is displayed on the display unit 121.
  • the guide information G1 is, for example, at least one of the second image I2 and the sentence S2.
  • the second image I2 is, for example, a partial image of the first image I1 (see FIG. 4A).
  • the sentence S2 is, for example, a sentence that gives a suggestion about utterance to the subject 102.
  • As the guide information G1, only the second image I2 may be displayed, only the sentence S2 may be displayed, or both the second image I2 and the sentence S2 may be displayed. Further, the second image I2 and the sentence S2 may be displayed in order; in this case, the sentence S2 may be displayed after the second image I2, or the second image I2 may be displayed after the sentence S2.
  • the analysis unit 113 performs voice recognition for the voice input via the voice input / output unit 122.
  • the "speech recognition" referred to in the present disclosure includes not only the process of converting the above speech into a character string, but also natural language processing such as semantic analysis and context analysis.
  • the analysis unit 113 recognizes the voice (speech recognition) by using the acoustic model, the recognition dictionary, and the like stored in the storage unit 13. That is, the analysis unit 113 extracts the acoustic feature amount by analyzing the voice with reference to the acoustic model, and performs voice recognition with reference to the recognition dictionary.
  • the presentation unit 12 has a display unit 121 and an audio input / output unit 122.
  • the presentation unit 12 is configured to present information to the subject 102.
  • the information presented by the presentation unit 12 is, for example, image information (first image I1, second image I2, third image I3), audio information, and the like.
  • the display unit 121 is, for example, a liquid crystal display.
  • the display unit 121 is configured to display the first image I1 and the perforated sentence S1 in the first inspection. Further, the display unit 121 is configured to display at least one of the second image I2 and the sentence S2 as the guide information G1 in the second inspection. Further, the display unit 121 is configured to display the third image I3 in the interval period between the first inspection and the second inspection.
  • When the cognitive function test system 1 includes a touch panel display, the touch panel display may serve as both the display unit 121 and the operation unit 14.
  • the voice input / output unit 122 has, for example, a microphone and a speaker.
  • The voice of the subject 102 is input to the voice input/output unit 122 via the microphone during the first examination, the second examination, and the interval period. Further, the voice input/output unit 122 outputs the voice from the cognitive function test system 1 (for example, the guidance voice of the test) via the speaker during the first examination, the second examination, and the interval period.
  • the audio input / output unit 122 converts an analog audio signal input via a microphone into a digital audio signal, and outputs the converted audio data to the control unit 11.
  • the storage unit 13 is composed of a readable and writable memory.
  • the storage unit 13 is, for example, a flash memory.
  • the storage unit 13 stores the first image I1, the second image I2, and the third image I3 to be displayed on the display unit 121 during the first inspection, the second inspection, and the interval period.
  • a plurality of perforated sentences S1 related to the first image I1 are set in the first image I1, and the storage unit 13 associates the first image I1 with the plurality of perforated sentences S1.
  • a plurality of second images I2 are set as the guide information G1 in the second inspection in the first image I1, and the storage unit 13 associates the first image I1 with the plurality of second images I2.
  • a plurality of sentences S2 as guide information G1 in the second inspection are set in the first image I1, and the storage unit 13 stores the first image I1 and the plurality of sentences S2 in association with each other. That is, in the cognitive function test system 1 according to the present embodiment, a plurality of perforated sentences S1, a plurality of second images I2, and a plurality of sentences S2 are associated with one first image I1.
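  • As an illustration only, the associations held by the storage unit 13 (one first image I1 linked to multiple perforated sentences S1, multiple second images I2, and multiple sentences S2) could be represented by a data model like the following sketch; all class and field names are hypothetical and not taken from the disclosure.
```python
# Hypothetical data model for the associations held by the storage unit 13.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PerforatedSentence:          # perforated sentence S1
    text: str                      # e.g. "( ) is smoking a cigarette"
    answers: List[str]             # accepted words for the blank
    difficulty: int = 1            # used when choosing the next sentence

@dataclass
class GuideInfo:                   # guide information G1 for one area of I1
    area: str                      # e.g. "father"
    keywords: List[str]            # e.g. ["dad", "cigarette", "smoking"]
    partial_image: Optional[str] = None                       # second image I2
    guide_sentences: List[str] = field(default_factory=list)  # sentences S2, in stages

@dataclass
class TestImage:                   # first image I1
    image_path: str
    perforated_sentences: List[PerforatedSentence]
    guides: List[GuideInfo]
```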
  • the storage unit 13 stores the attribute information of the subject 102.
  • The attribute information of the subject 102 includes, for example, the age, gender, hobby, birthplace, etc. of the subject 102. The attribute information may be input by the examiner 101 or the subject 102 via the operation unit 14, for example, or, when past examination data of the subject 102 is stored in the storage unit 13, it may be set automatically based on past utterance data or the like.
  • the storage unit 13 stores the utterance data of the subject 102 during the first examination, the second examination, and the interval period, the conversation data between the examiner 101 and the subject 102, and the like.
  • the storage unit 13 stores an acoustic model, a recognition dictionary, and the like that the analysis unit 113 refers to when performing voice recognition.
  • the operation unit 14 is configured to receive operations of the inspector 101 and the subject 102.
  • The operation unit 14 has, for example, a plurality of push buttons including a numeric keypad. By pressing at least one of the plurality of push buttons, the attribute information of the subject 102 can be registered, and the content (third image I3) to be displayed on the display unit 121 during the interval period can be selected.
  • the plurality of push buttons may be mechanical switches, or may be touch pads constituting the touch panel if the display unit 121 is a touch panel. The position of the perforated portion of the perforated sentence S1 may be changed by the operation unit 14.
  • the setting unit 15 is configured to set the guide information G1 for each area of the first image I1. For example, the setting unit 15 sets the guide information G1 for prompting the subject 102 to speak about the father in the area including the father. Further, the setting unit 15 sets, for example, guide information G1 for prompting the subject 102 to speak about the mother in the area including the mother.
  • (Test contents) The test contents of the cognitive function test system 1 are described below.
  • the first test is a test in which the subject 102 is made to speak about the first image I1 while displaying the first image I1 on the display unit 121 of the cognitive function test system 1.
  • the first inspection is, for example, a perforated inspection, as shown in FIGS. 1, 3A, and 3B.
  • The perforated test is a test in which the first image I1 and a perforated sentence S1 related to the first image I1 are displayed on the display unit 121, and the subject 102 is made to speak the perforated sentence S1.
  • the perforated sentence S1 is a sentence in which a part thereof is left blank.
  • That is, the first inspection includes a perforation test using the perforated sentence S1, which is displayed on the display unit 121 together with the first image I1 and in which a part of the sentence related to the first image I1 is left blank.
  • In the example shown in FIG. 3A, the perforated portion in the perforated sentence S1 is at the beginning of the sentence, and a subject such as "dad" is entered in the perforated portion.
  • In the example shown in FIG. 3B, the perforated portion of the perforated sentence S1 is at the end of the sentence, and a predicate such as "drinking" is entered in the perforated portion.
  • The position of the perforated portion in the perforated sentence S1 is preferably the beginning or the middle of the sentence, but as shown in FIG. 3B, it may be at the end of the sentence.
  • In the example shown in FIG. 3A, "Father is smoking a cigarette" is the correct answer, and in the example shown in FIG. 3B, "Mother is drinking tea" is the correct answer.
  • the subject 102 utters the perforated sentence S1 while filling the perforated portion while comparing the first image I1 with the perforated sentence S1.
  • The first test can mainly evaluate the attention function and the executive function among the above-mentioned four functions included in the cognitive function. Further, in the first inspection, since it is necessary to compare the first image I1 displayed on the display unit 121 with the perforated sentence S1, the selective attention function is particularly required among the attention functions. As a result, it is possible to give the subject 102 a larger load than simply speaking about the first image I1 while looking at it, and this leads to an improvement in the prediction accuracy for predicting the deterioration of the cognitive function.
  • A plurality of (for example, about 10) perforated sentences S1 are prepared for one first image I1, and when the utterance of one perforated sentence S1 is completed, it is necessary to perform an operation to shift to the next perforated sentence S1.
  • When the cognitive function test system 1 includes the operation unit 14, as in the present embodiment, the examiner 101 or the subject 102 can shift to the next perforated sentence S1 by operating the operation unit 14.
  • It is also possible to set a response time for one perforated sentence S1 in advance and automatically shift to the next perforated sentence S1 when this response time elapses.
  • In this case, however, the display may shift to the next perforated sentence S1 even though the subject 102 has not finished speaking about the perforated sentence S1.
  • Conversely, when the utterance time of the subject 102 is shorter than the response time, unnecessary noise may be recorded in the remaining time after the utterance is completed, which may hinder the voice analysis.
  • Therefore, in order to reduce the operation burden, operation errors, and examination errors of the examiner 101 or the subject 102, the cognitive function test system 1 according to the present embodiment is configured to automatically shift to the next perforated sentence S1 by using the voice recognition of the analysis unit 113.
  • the perforated sentence S1 stating "() is smoking" is displayed on the display unit 121.
  • In this case, the end of the sentence is a predetermined wording (here, "smoking"). Therefore, when the word "smoking" at the end of the sentence is detected by the voice recognition of the analysis unit 113, it can be determined that the utterance of the subject 102 is completed, and this can be used as a trigger for shifting to the next perforated sentence S1.
  • That is, the presence or absence of a response to the perforated sentence S1 is detected based on a partial voice included in the utterance of the subject 102. In the example shown in FIG. 3A, the partial voice is the utterance voice at the end of the perforated sentence S1 (the voice saying "smoking").
  • When there is a reply from the subject 102, the display unit 121 displays the first image I1 and the next perforated sentence S1.
  • In this way, the transition to the next perforated sentence S1 can be performed reliably.
  • the operation burden, the operation error, and the examination error of the inspector 101 or the subject 102 can be reduced. Further, by stopping the recording when the spoken voice at the end of the perforated sentence S1 is detected, there is an advantage that unnecessary noise that may hinder the voice analysis is less likely to be recorded.
  • the perforated sentence S1 saying "Mother has tea ()" is displayed on the display unit 121.
  • In this case, the end of the perforated sentence S1 is the perforated portion, and multiple words such as "drinking" and "sipping" can be entered in this perforated portion. That is, in this case, the partial voice included in the utterance of the subject 102 is the utterance voice of the perforated portion in the perforated sentence S1.
  • the trigger for shifting to the next perforated sentence S1 is the wording "drinking".
  • the subject 102 may not be able to answer depending on the content of the perforated sentence S1.
  • In this case, there is a problem that the examination time becomes longer if the same perforated sentence S1 continues to be displayed until the subject 102 can answer. Therefore, in such a case, it is preferable to shift to the next perforated sentence S1 when the subject 102 utters a specific word (skip word).
  • The specific words include, for example, "pass", "next", "I don't know", and the like.
  • In the cognitive function test method, it is also preferable that the first image I1 and the next perforated sentence S1 are displayed on the display unit 121 when a preset predetermined time has elapsed.
  • the specified time is preferably set to, for example, about 30 seconds.
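  • The transition triggers described above (a recognized end-of-sentence word, a skip word, or the elapse of the preset time) could be combined as in the following sketch, which is an illustration only; the function name, the transcript input from speech recognition, and the concrete skip-word list are assumptions.
```python
# Sketch: decide whether to move to the next perforated sentence when (a) the
# expected final word is recognized, (b) the subject says a skip word, or
# (c) the preset time (about 30 s) has elapsed.
import time

SKIP_WORDS = {"pass", "next", "don't know"}  # assumed skip words

def should_advance(transcript: str, expected_final_word: str,
                   started_at: float, timeout_s: float = 30.0):
    """Return the reason for advancing ('answered', 'skipped', 'timeout')
    or None if the current perforated sentence should stay on screen."""
    spoken = transcript.lower()
    if expected_final_word.lower() in spoken.split():
        return "answered"
    if any(skip in spoken for skip in SKIP_WORDS):
        return "skipped"
    if time.monotonic() - started_at >= timeout_s:
        return "timeout"
    return None
```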
  • It is also preferable that the next perforated sentence S1 to be displayed on the display unit 121 is changed according to the answer to the previous perforated sentence S1. For example, if the answer to the perforated portion in the perforated sentence S1 is incorrect, or if it takes a long time to answer, a perforated sentence S1 with a lower difficulty level than the previous perforated sentence S1 is preferably used as the next perforated sentence S1. On the other hand, if the answer to the perforated portion is correct and the time required for answering is short, a perforated sentence S1 with a higher difficulty level than the previous perforated sentence S1 is preferably used as the next perforated sentence S1. In this way, by changing the next perforated sentence S1 displayed on the display unit 121 according to the response of the subject 102 to the previous perforated sentence S1, the state of the cognitive function of the subject 102 can be discriminated in more stages.
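  • As an illustration of this difficulty adjustment, the following sketch (reusing the hypothetical PerforatedSentence objects from the earlier data-model sketch) lowers the target difficulty after an incorrect or slow answer and raises it after a correct, fast one; the 10-second threshold and the difficulty scale are assumptions.
```python
# Sketch: adjust the difficulty of the next perforated sentence from the
# correctness and answer time of the previous one. Thresholds are assumptions.
def next_difficulty(current: int, correct: bool, answer_time_s: float,
                    fast_s: float = 10.0, min_d: int = 1, max_d: int = 5) -> int:
    if not correct or answer_time_s > fast_s:
        return max(min_d, current - 1)   # incorrect or slow -> easier sentence
    return min(max_d, current + 1)       # correct and fast -> harder sentence

def pick_next_sentence(sentences, used, target_difficulty):
    """Pick an unused perforated sentence whose difficulty is closest to the
    target; `used` is a list of sentences already shown."""
    candidates = [s for s in sentences if s not in used]
    if not candidates:
        return None
    return min(candidates, key=lambda s: abs(s.difficulty - target_difficulty))
```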
  • the second examination is an examination in which the subject 102 is made to make an utterance related to the first image I1 after an interval period has elapsed since the first examination was performed.
  • In the second examination, the first image I1 displayed on the display unit 121 in the first examination is not displayed on the display unit 121 (that is, the first image I1 is not shown to the subject 102), and the subject 102 is made to make an utterance related to the first image I1. That is, the second test is a delayed recall test in which the subject 102 remembers the content of the first image I1 and, after the interval period elapses, speaks about the content of the first image I1.
  • the interval period is preferably set to, for example, several minutes to several tens of minutes, and in this embodiment, it is 5 minutes as an example.
  • the inspector 101 gives the subject 102 a task such as "Please talk about the first image I1".
  • Subject 102 speaks about the first image I1 without looking at the first image I1.
  • A subject 102 whose cognitive function (particularly memory function) is not deteriorated can speak smoothly about the contents of the first image I1 (the father is smoking, the mother is drinking tea, and so on).
  • On the other hand, it is known that a subject 102 whose cognitive function is deteriorated can speak smoothly immediately after starting to speak about the first image I1 (for example, for several tens of seconds), but the utterance stops after a certain examination time has passed.
  • the second test can mainly evaluate the memory function (short-term memory function) among the four functions included in the cognitive function.
  • the cognitive function test method is configured to present guide information G1 to the subject 102 in the second test so that it can be evaluated to what extent the memory function of the subject 102 has deteriorated.
  • the guide information G1 about the father is presented to the subject 102 in order to promote the utterance about the father.
  • For example, as shown in FIG. 4A, the second image I2 including a part of the father may be displayed on the display unit 121, or, as shown in FIG. 4B, character information (sentence S2) that gives the subject 102 a suggestion about the utterance may be displayed on the display unit 121.
  • the second image I2 is a partial image based on the first image I1.
  • the second image I2 is an image in which at least a part of the first image I1 is hidden.
  • the guide information G1 includes information (second image I2, sentence S2) that gives the subject 102 a suggestion about the utterance.
  • both the second image I2 and the sentence S2 may be displayed on the display unit 121, or the second image I2 and the sentence S2 may be displayed on the display unit 121 in order.
  • the order of displaying the second image I2 and the sentence S2 may be such that the second image I2 comes first or the sentence S2 comes first.
  • Although the remaining part of the first image I1 excluding the second image I2 appears to be visible, in reality only the second image I2 is visible and the remaining portion is not visible.
  • The analysis unit 113 performs voice recognition in real time on the voice input to the voice input/output unit 122 and can convert the voice data into character data (a character string). Further, regarding fluctuations in word notation (for example, "father", "dad", "husband", etc.), a dictionary or the like may be created in advance and stored in the storage unit 13. Thereby, the utterance content of the subject 102 can be stored while accommodating various expression forms, and the guide information G1 can be presented only for events that the subject 102 has not yet mentioned.
  • Whether or not to present the guide information G1 is determined by whether or not the utterance of the subject 102 includes a keyword.
  • The keywords in the first image I1 are, for example, "dad", "mother", "sister", "younger brother", "cigarette", "tea", "smoking", "drinking", and the like.
  • For example, when the utterance of the subject 102 does not include a keyword related to the father, at least one of the second image I2 and the sentence S2 is displayed on the display unit 121 to encourage an utterance about the father. As a result, even when the subject 102 does not remember the father, the subject 102 can be prompted to speak about the father by presenting the guide information G1.
  • the guide information G1 is preferably determined for each area of the first image I1.
  • the area of the first image I1 is divided into a first area including a father, a second area including a mother, a third area including an older sister, and a fourth area including a younger brother, and guide information G1 is provided for each area.
  • the keywords in the first image I1 are preferably grouped by area.
  • the first area including the father is associated with the keywords related to the father, "dad”, "cigarette”, and “smoking”.
  • Similarly, the second area including the mother is associated with the keywords related to the mother: "mother", "tea", and "drinking". In this case, it may be determined that there is no problem in the cognitive function if at least one keyword appears for each area, or it may be determined that there is a problem in the cognitive function unless all the keywords appear.
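  • As an illustration of this keyword check, the following sketch lists the areas for which guide information G1 should still be presented because none of their keywords appeared in the utterance; the keyword lists come from the example above, and the function name is hypothetical.
```python
# Sketch: for each area of the first image, present guide information G1 only
# if none of that area's keywords appeared in the second-test utterance.
AREA_KEYWORDS = {
    "father": ["dad", "cigarette", "smoking"],
    "mother": ["mother", "tea", "drinking"],
}

def areas_needing_guidance(utterance: str):
    spoken = utterance.lower()
    return [area for area, keywords in AREA_KEYWORDS.items()
            if not any(kw in spoken for kw in keywords)]

# Example: the subject only talked about the mother, so the father area
# still needs guide information.
print(areas_needing_guidance("Mother was drinking tea at the table"))
# -> ['father']
```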
  • The guide information G1 is displayed hierarchically (stepwise) so that the cognitive function (particularly the memory function) of the subject 102 can be evaluated in multiple stages. For example, assume that the subject 102 is prompted to speak about the father. In the first stage, for example, guide information G1 such as "Who was in the lower left of the first image I1?" is displayed on the display unit 121. At this time, if the subject 102 replies "there was a father", the guide information G1 of the next stage is displayed on the display unit 121.
  • the guide information G1 of the second stage is, for example, "What was your father doing in the lower left of the first image I1?"
  • By presenting the guide information G1 hierarchically (stepwise) in this way, it is possible to evaluate the state of the cognitive function of the subject 102, that is, how much the cognitive function of the subject 102 is deteriorated, in a stepwise manner.
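  • As an illustration of this stepwise presentation, the following sketch shows staged guide prompts and returns how many stages were needed before a recall keyword appeared; the callables passed in and the use of the stage count as a graded score are assumptions.
```python
# Sketch: present guide sentences one stage at a time; the number of stages
# needed before the subject recalls the event can serve as a graded score.
def present_guides_stepwise(guide_sentences, keywords, get_utterance, display):
    """guide_sentences: staged prompts, e.g.
         ["Who was in the lower left of the picture?",
          "What was your father doing in the lower left?"]
       keywords: words indicating recall, e.g. ["dad", "smoking", "cigarette"]
       get_utterance: callable returning the subject's next reply as text
       display: callable showing a prompt on the display unit
       Returns the number of guide stages that were needed."""
    for stage, prompt in enumerate(guide_sentences, start=1):
        display(prompt)
        reply = get_utterance().lower()
        if any(kw in reply for kw in keywords):
            return stage             # recalled after this many hints
    return len(guide_sentences) + 1  # not recalled even with all hints
```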
  • When the second image I2 is displayed on the display unit 121 as the guide information G1, the size of the second image I2 may be increased stepwise, or the position of the second image I2 in the first image I1 may be changed. That is, the area displayed as the second image I2 may be changed.
  • the third image I3 is, for example, an advertisement for cat food. That is, the content presented by the cognitive function test system 1 is an advertisement.
  • the content is not limited to images, but may be moving images. In this case, it is possible to present a commercial moving image, and it is preferable that the commercial moving image is a moving image optimized for the subject 102 by the recommendation function. As a result, income from commercials is expected, and the cost required for inspection can be reduced. As a result, it is expected that inspection opportunities will be expanded.
  • The subject 102 may be asked in advance about content of interest.
  • the content presented during the interval period is selected according to the subject 102.
  • a plurality of contents that can be displayed during the interval period are displayed on the display unit 121, and the subject 102 is made to select from the plurality of contents.
  • the content is manually selected by the subject 102.
  • the content that the subject 102 is interested in can be displayed on the display unit 121.
  • the content selected by the subject 102 during the interval period may be used for the next second examination, or at least one of the first examination and the second examination thereafter. This allows the subject 102 to be focused on subsequent examinations.
  • The content to be displayed on the display unit 121 may be selected according to the attribute information of the subject 102. For example, if the attribute information of the subject 102 indicates that the subject 102 likes cats, a third image I3 related to cats is displayed on the display unit 121 as shown in FIG. 5.
  • the content may be automatically switched according to the degree of interest of the subject 102 in the content. For example, it is assumed that the subject 102 continues to talk about the third image I3 displayed on the display unit 121 for 40 seconds. If the threshold value in this case is set to 30 seconds, it can be determined that the subject 102 has a high degree of interest in the third image I3 because the subject 102 continues to speak beyond this threshold value. On the other hand, when the subject 102 speaks about the third image I3 for only 20 seconds, it can be determined that the subject 102 has a low degree of interest in the third image I3 because it is below the above threshold value.
  • When the subject 102 has a high degree of interest in the third image I3, the third image I3 may be continuously displayed, or an image of the same type as the third image I3 may be displayed next. On the other hand, when the subject 102 has a low degree of interest in the third image I3, an image of a type different from the third image I3 may be displayed.
  • the threshold value for determining the degree of interest of the subject 102 in the content is not limited to one, and may be plural. That is, the degree of interest of the subject 102 in the content may be divided into a plurality of stages. Even in this case, by displaying the content according to the degree of interest of the subject 102 on the display unit 121, the subject 102 can be concentrated on the subsequent examination.
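  • As an illustration, the degree of interest could be graded from the speaking time with multiple thresholds as in the following sketch; apart from the 30-second example above, the threshold values are assumptions.
```python
# Sketch: grade the subject's interest in the interval-period content from
# speaking time using multiple thresholds (values other than 30 s are assumed).
def interest_level(speaking_time_s: float, thresholds=(10.0, 30.0, 60.0)) -> int:
    """Return 0 (lowest) .. len(thresholds) (highest interest)."""
    return sum(speaking_time_s >= t for t in thresholds)

# With the example above: 40 s of speech exceeds the 30 s threshold, 20 s does not.
assert interest_level(40.0) == 2
assert interest_level(20.0) == 1
```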
  • It is also possible to construct a recommendation system by machine learning using the attribute information of the subject 102 as explanatory variables (independent variables) and the degree of interest of the subject 102 as an objective variable (dependent variable). For example, when a new subject is registered, by predicting the degree of interest in an image (content) from the attribute information of the subject, an image that is likely to be of high interest to the subject can be displayed on the display unit 121. This makes it possible to encourage the subject to speak about the image.
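  • As an illustration of this recommendation idea, the following sketch predicts a new subject's interest in a content category from attribute information using scikit-learn; the features, the content category, and the training data are all hypothetical.
```python
# Sketch: predict a new subject's degree of interest in "cat" content from
# attribute information. Data, features, and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Past subjects: attributes (explanatory variables) and observed interest in
# cat-related content (objective variable, e.g. a graded speaking-time level).
history = pd.DataFrame({
    "age":    [72, 68, 80, 75],
    "gender": ["f", "m", "f", "m"],
    "hobby":  ["gardening", "fishing", "cats", "cars"],
    "interest_in_cat_content": [1, 0, 3, 0],
})

X = pd.get_dummies(history[["age", "gender", "hobby"]])
y = history["interest_in_cat_content"]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# New subject: predict interest and decide whether to show cat content.
new_subject = pd.DataFrame({"age": [78], "gender": ["f"], "hobby": ["cats"]})
X_new = pd.get_dummies(new_subject).reindex(columns=X.columns, fill_value=0)
print(model.predict(X_new))  # higher value -> show a cat-related third image I3
```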
  • the inspection unit 112 of the control unit 11 outputs the first control signal to the display unit 121, and causes the display unit 121 to display the first image I1 and the perforated sentence S1.
  • the subject 102 speaks about the perforated sentence S1 while looking at the first image I1 and the perforated sentence S1 displayed on the display unit 121 (step ST1).
  • the control unit 11 starts measuring the interval period (step ST2).
  • the control unit 11 causes the display unit 121 to display (present) the third image I3 (content) until the interval period elapses (step ST2: No) (step ST3).
  • When the interval period has elapsed (step ST2: Yes), the control unit 11 causes the inspection unit 112 to perform the second inspection (step ST4).
  • a sentence such as "Please talk about the first image I1" may be displayed on the display unit 121.
  • Further, the inspection unit 112 of the control unit 11 outputs the second control signal to the display unit 121 and causes the display unit 121 to display the guide information G1 for prompting the utterance.
  • After that, the evaluation unit 111 of the control unit 11 evaluates the state of the cognitive function of the subject 102, that is, whether or not the cognitive function of the subject 102 is deteriorated, based on the first inspection result, which is the result of the first inspection, and the second inspection result, which is the result of the second inspection (step ST5).
  • the control unit 11 causes the display unit 121 to display the evaluation result of the evaluation unit 111 (step ST6).
  • the state of the cognitive function of the subject 102 is evaluated based on the two test results. Therefore, it is possible to improve the evaluation accuracy of the cognitive function as compared with the case of evaluating the state of the cognitive function of the subject 102 based on one test result.
  • In the above operation, step ST1 corresponds to the first inspection step, step ST4 corresponds to the second inspection step, and step ST5 corresponds to the evaluation step.
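  • As an illustration, the overall sequence (steps ST1 to ST6) could be driven by a function like the following sketch; every callable passed in is a placeholder standing in for the corresponding unit of the cognitive function test system 1.
```python
# Sketch of the overall flow: first test (ST1), interval content (ST2/ST3),
# second test (ST4), evaluation (ST5), result display (ST6).
def run_session(first_test, show_content, second_test, evaluate, show_result,
                interval_s: float = 5 * 60):
    first_result = first_test()                      # ST1: image I1 + perforated sentences
    show_content(duration_s=interval_s)              # ST2/ST3: content during the interval
    second_result = second_test()                    # ST4: delayed recall with guide info G1
    result = evaluate(first_result, second_result)   # ST5: evaluate cognitive function
    show_result(result)                              # ST6: display the evaluation result
    return result
```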
  • the subject 102 is made to speak a perforated sentence S1 related to the first image I1. Therefore, it is possible to give the subject 102 a larger load than simply speaking while looking at the first image I1, and as a result, it is possible to improve the evaluation accuracy of the cognitive function. In addition, it is only necessary to have the subject 102 make an utterance related to the first image I1, and even a person other than a medical worker can easily perform the examination, so that the examination opportunity can be expanded.
  • When the cognitive function test system 1 is a mobile terminal, the place where the test is performed is not restricted, which makes it possible to move from early detection of MCI to improvement. Further, with a conversational test in which the correct answer is not fixed in advance, such as the cognitive function test method and the cognitive function test system 1 according to the present embodiment, the problem that the test cannot be performed accurately because the subject 102 memorizes the test contents can be reduced.
  • the presence or absence of a response to the perforated sentence S1 is detected based on the partial voice included in the utterance of the subject 102. Therefore, it is possible to automatically shift to the next perforated sentence S1 according to the detection result of the presence or absence of the answer.
  • the partial voice is the last spoken voice in the perforated sentence S1. In this case, it is possible to detect that the utterance of the subject 102 is completed by the utterance voice.
  • the partial voice is the utterance voice of the perforated portion in the perforated sentence S1.
  • In this case, it is possible to detect that there is a reply to the perforated sentence S1 by the utterance voice of the perforated portion.
  • the cognitive function test method when there is a reply to the perforated sentence S1, the first image I1 and the next perforated sentence S1 are displayed on the display unit 121. Therefore, the examiner 101 or the subject 102 can automatically move to the next perforated sentence S1 without any operation. That is, the answer to the perforated sentence S1 can be used as a trigger for displaying the next perforated sentence S1.
  • In the cognitive function test method, the next perforated sentence S1 is changed according to the answer to the perforated sentence S1. Therefore, a different perforated sentence S1 can be displayed on the display unit 121 as the next perforated sentence S1 according to the answer to the perforated sentence S1.
  • the first image I1 and the next perforated sentence S1 are displayed on the display unit 121 when a preset predetermined time has elapsed. Therefore, when the specified time has elapsed, it is possible to shift to the next perforated sentence S1 regardless of whether or not there is an answer.
  • In the cognitive function test method, when the subject 102 utters a specific word, the first image I1 and the next perforated sentence S1 are displayed on the display unit 121. Therefore, when the subject 102 utters a specific word, it is possible to shift to the next perforated sentence S1 regardless of whether or not there is an answer.
  • the above-described embodiment is only one of the various embodiments of the present disclosure.
  • the above-described embodiment can be changed in various ways depending on the design and the like as long as the object of the present disclosure can be achieved.
  • the cognitive function test method according to the above-described embodiment and the same function as the cognitive function test system 1 may be embodied in a computer program, a non-temporary recording medium on which the computer program is recorded, or the like.
  • the program according to one aspect is a program for causing one or more processors to execute the above-mentioned cognitive function test method.
  • the control unit 11 includes a computer system.
  • the main configuration of a computer system is a processor and memory as hardware.
  • When the processor executes the program recorded in the memory of the computer system, the function of the control unit 11 in the present disclosure is realized.
  • The program may be pre-recorded in the memory of the computer system, may be provided through a telecommunications line, or may be recorded and provided on a non-transitory recording medium, such as a memory card, optical disc, or hard disk drive, that can be read by the computer system.
  • a processor in a computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large scale integrated circuit (LSI).
  • The integrated circuit such as an IC or LSI referred to here is called by a different name depending on the degree of integration, and includes integrated circuits called system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration). Further, an FPGA (Field-Programmable Gate Array) programmed after the LSI is manufactured, or a reconfigurable logic device in which the junction relationships or circuit partitions inside the LSI can be reconfigured, can also be adopted as the processor.
  • a plurality of electronic circuits may be integrated on one chip, or may be distributed on a plurality of chips. The plurality of chips may be integrated in one device, or may be distributed in a plurality of devices.
  • the computer system referred to here includes a microcontroller having one or more processors and one or more memories. Therefore, the microcontroller is also composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.
  • It is not essential for the cognitive function test system 1 that a plurality of its functions be integrated in one housing; the components of the cognitive function test system 1 may be distributed in a plurality of housings. Further, at least a part of the functions of the cognitive function test system 1, for example the function of the control unit 11, may be realized by a cloud (cloud computing) or the like.
  • In the above-described embodiment, the examiner 101 and the subject 102 perform the test using the cognitive function test system 1, but the examiner 101 may be omitted. Since the cognitive function test system 1 displays the contents of the first test and the second test on the display unit 121, it is sufficient that the subject 102 is present, and the subject 102 may perform the test alone using the cognitive function test system 1.
  • the third image I3 related to the advertisement is displayed as the content to be displayed on the display unit 121 during the interval period, but for example, the recommendation information for the subject 102 may be displayed on the display unit 121.
  • The recommendation information is information based on the past purchase history of the subject 102 at an Internet shop. For example, if the subject 102 has purchased car tires at an Internet shop, information about cars is displayed on the display unit 121 as the recommendation information. This recommendation information may be used for the subsequent second examination, or for at least one of the first examination and the second examination from the next time onward. As a result, a first examination or a second examination specialized for the subject 102 can be performed.
  • the content to be displayed on the display unit 121 during the interval period may be an image or a moving image to be displayed on the display unit 121 in the first inspection, or may be recent news, old events, or the like. Then, with these contents displayed on the display unit 121, the subject may talk about the contents. That is, the content to be displayed on the display unit 121 during the interval period is not limited to the advertisement, and various variations can be considered.
  • In the above-described embodiment, the perforated portion of the perforated sentence S1 is the subject located at the beginning of the sentence or the predicate located at the end of the sentence, but the perforated portion may be, for example, an object located in the middle of the sentence.
  • In this case, for example, the object "cigarette" is the perforated portion.
  • the task execution time is the time from displaying the first image I1 and the perforated sentence S1 on the display unit 121 to the completion of the utterance of the subject 102.
  • the task execution time may be, for example, the time from displaying the first image I1 and the perforated sentence S1 on the display unit 121 until the subject 102 starts speaking, or the subject 102 may start speaking. It may be the time from the start of the utterance to the end of the utterance.
  • In the above-described embodiment, the guide information G1 is determined for each area of the first image I1, but the guide information G1 may be determined, for example, for each subject (father, mother, older sister, younger brother) included in the first image I1.
  • In this case, the keywords in the first image I1 are preferably grouped by subject. For example, if the subject is the father, the keywords "dad", "cigarette", and "smoking" are linked to the father; if the subject is the mother, the keywords "mother", "tea", and "drinking" are linked to the mother. According to this configuration, since the guide information G1 is changed for each subject, it is possible to prompt the subject 102 to speak about any subject.
  • the degree of interest of the subject 102 is determined from the speech time of the subject 102 with respect to the content, but for example, the degree of interest of the subject 102 may be determined from the result of the sentiment analysis of the voice of the subject 102. In this case, for example, if the feeling of "happy" continues for 30 seconds or more from the result of the sentiment analysis, it is determined that the subject 102 has a high degree of interest.
  • In the above-described embodiment, the analysis unit 113 of the control unit 11 performs voice recognition, but the voice recognition may instead be performed by a server device, for example.
  • the cognitive function test system 1 transmits the voice data of the subject 102 input via the voice input / output unit 122 to the server device.
  • the server device performs voice recognition on the voice data from the cognitive function test system 1 and transmits the result of the voice recognition to the cognitive function test system 1.
  • the processing load of the control unit 11 can be reduced by performing voice recognition on the server device. Further, it is not necessary to store the acoustic model, the recognition dictionary, and the like in the storage unit 13, and the memory capacity of the storage unit 13 can be reduced.
  • the second image I2 is a partial image of the first image I1, but the second image I2 may be, for example, a mosaic image obtained by applying a mosaic to the first image I1.
  • the entire first image I1 may be mosaicked, or only the portion of the first image I1 that is desired to be presented as the guide information G1 may be mosaicked.
  • In the above-described embodiment, the subject included in the first image I1 is a person, but the subject is not limited to a person and may be, for example, a wall clock or a bay window included in the first image I1.
  • test results of the first test and the second test are used for the evaluation of the cognitive function, but for example, the voice data of the subject during the interval period may be used for the evaluation of the cognitive function. This has the advantage that the evaluation accuracy of cognitive function is expected to improve.
  • the cognitive function test method is a cognitive function in which the subject (102) is made to make an utterance related to the image (I1) while displaying the image (I1) on the display unit (121). It is an inspection method.
  • a perforated sentence (S1) in which a part of the sentence related to the image (I1) is left blank is displayed on the display unit (121) together with the image (I1).
  • the presence or absence of an answer to the perforated sentence (S1) is detected based on the partial voice included in the utterance.
  • the partial voice is the last spoken voice in the perforated sentence (S1).
  • the partial voice is the utterance voice of the perforated portion in the perforated sentence (S1).
  • When there is an answer to the perforated sentence (S1), the image (I1) and the next perforated sentence (S1) are displayed on the display unit (121).
  • the answer to the perforated sentence (S1) can be used as a trigger for displaying the next perforated sentence (S1).
  • the next perforated sentence (S1) changes according to the answer.
  • In the cognitive function test method according to any one of the first to sixth aspects, when a preset predetermined time has elapsed, the image (I1) and the next perforated sentence (S1) are displayed on the display unit (121).
  • In the cognitive function test method according to any one of the first to seventh aspects, when the subject (102) utters a specific word, the image (I1) and the next perforated sentence (S1) are displayed on the display unit (121).
  • the program according to the ninth aspect is a program for causing one or more processors to execute the cognitive function test method according to any one of the first to eighth aspects.
  • the cognitive function test system (1) includes a display unit (121), an inspection unit (112), and a control unit (11).
  • the inspection unit (112) causes the subject (102) to make an utterance related to the image (I1) while displaying the image (I1) on the display unit (121).
  • the control unit (11) causes the display unit (121) to display a perforated sentence (S1) in which a part of the sentence related to the image (I1) is blank together with the image (I1).
  • the configurations according to the second to eighth aspects are not essential configurations for the cognitive function test method and can be omitted as appropriate.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Educational Technology (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Educational Administration (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention makes it possible to improve the accuracy of cognitive function evaluation. This cognitive function test method causes a subject (102) to produce an utterance concerning a first image (I1) while the first image (I1) is displayed on a display unit (121). In the cognitive function test method, a perforated sentence (S1), in which a blank is provided in a part of a sentence concerning the first image (I1), is displayed on the display unit (121) together with the first image (I1).
PCT/JP2020/003858 2019-03-26 2020-02-03 Cognitive function test method, program, and cognitive function test system WO2020195164A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021508164A JP7241321B2 (ja) 2019-03-26 2020-02-03 Cognitive function test method, program, and cognitive function test system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019059471 2019-03-26
JP2019-059471 2019-03-26

Publications (1)

Publication Number Publication Date
WO2020195164A1 true WO2020195164A1 (fr) 2020-10-01

Family

ID=72608539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/003858 WO2020195164A1 (fr) 2019-03-26 2020-02-03 Cognitive function test method, program, and cognitive function test system

Country Status (3)

Country Link
JP (1) JP7241321B2 (fr)
TW (1) TW202034852A (fr)
WO (1) WO2020195164A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6337362B2 (fr) * 1980-02-18 1988-07-25 Nippon Denshin Denwa Kk
WO2009069756A1 (fr) * 2007-11-30 2009-06-04 Panasonic Electric Works Co., Ltd. Device for maintaining and improving brain function
JP2015180933A (ja) * 2014-03-07 2015-10-15 国立大学法人鳥取大学 Dementia prevention system
JP2016071897A (ja) * 2014-09-30 2016-05-09 株式会社プローバホールディングス Dementia care support system, dementia care support server, and dementia care support program
US20170105666A1 (en) * 2014-07-08 2017-04-20 Samsung Electronics Co., Ltd. Cognitive function test device and method
JP2017217052A (ja) * 2016-06-03 2017-12-14 一生 重松 Dementia diagnosis support device, operating method and operating program thereof, and dementia diagnosis support system
JP2018512202A (ja) * 2015-03-12 2018-05-17 アキリ・インタラクティヴ・ラブズ・インコーポレイテッド Processor-implemented systems and methods for measuring cognitive ability
WO2019044255A1 (fr) * 2017-08-28 2019-03-07 パナソニックIpマネジメント株式会社 Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150294588A1 (en) 2014-04-11 2015-10-15 Aspen Performance Technologies Neuroperformance
JP6269377B2 (ja) 2014-07-31 2018-01-31 株式会社Jvcケンウッド Diagnosis support device and diagnosis support method
JP6337362B1 (ja) 2017-11-02 2018-06-06 パナソニックIpマネジメント株式会社 Cognitive function evaluation device and cognitive function evaluation system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6337362B2 (fr) * 1980-02-18 1988-07-25 Nippon Denshin Denwa Kk
WO2009069756A1 (fr) * 2007-11-30 2009-06-04 Panasonic Electric Works Co., Ltd. Device for maintaining and improving brain function
JP2015180933A (ja) * 2014-03-07 2015-10-15 国立大学法人鳥取大学 Dementia prevention system
US20170105666A1 (en) * 2014-07-08 2017-04-20 Samsung Electronics Co., Ltd. Cognitive function test device and method
JP2016071897A (ja) * 2014-09-30 2016-05-09 株式会社プローバホールディングス Dementia care support system, dementia care support server, and dementia care support program
JP2018512202A (ja) * 2015-03-12 2018-05-17 アキリ・インタラクティヴ・ラブズ・インコーポレイテッド Processor-implemented systems and methods for measuring cognitive ability
JP2017217052A (ja) * 2016-06-03 2017-12-14 一生 重松 Dementia diagnosis support device, operating method and operating program thereof, and dementia diagnosis support system
WO2019044255A1 (fr) * 2017-08-28 2019-03-07 パナソニックIpマネジメント株式会社 Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program

Also Published As

Publication number Publication date
JP7241321B2 (ja) 2023-03-17
JPWO2020195164A1 (fr) 2020-10-01
TW202034852A (zh) 2020-10-01

Similar Documents

Publication Publication Date Title
TWI680453B (zh) Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program
Picou et al. Visual cues and listening effort: Individual variability
Pinget et al. Native speakers’ perceptions of fluency and accent in L2 speech
Babel et al. The role of fundamental frequency in phonetic accommodation
McKechnie et al. Automated speech analysis tools for children’s speech production: A systematic literature review
Van Nuffelen et al. Speech technology‐based assessment of phoneme intelligibility in dysarthria
Munson et al. Acoustic and perceptual correlates of stress in nonwords produced by children with suspected developmental apraxia of speech and children with phonological disorder
Borrie et al. Generalized adaptation to dysarthric speech
Hustad et al. Development of speech intelligibility between 30 and 47 months in typically developing children: A cross-sectional study of growth
Womack et al. Disfluencies as extra-propositional indicators of cognitive processing
McGuire A brief primer on experimental designs for speech perception research
WO2019188405A1 (fr) Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program
JP2010054549A (ja) Answer voice recognition system
Bleaman et al. Medium-shifting and intraspeaker variation in conversational interviews
WO2020195164A1 (fr) Cognitive function test method, program, and cognitive function test system
WO2020195163A1 (fr) Cognitive function test method, program, and cognitive function test system
WO2020195165A1 (fr) Cognitive function test method, program, and cognitive function test system
JP7135372B2 (ja) Learning support device, learning support method, and program
Gravano et al. Who Do You Think Will Speak Next? Perception of Turn-Taking Cues in Slovak and Argentine Spanish.
Luthra et al. Boosting lexical support does not enhance lexically guided perceptual learning.
KR102336015B1 (ko) Video-based speech disorder analysis system and method, and recording medium recording a program for performing the same
JP6639857B2 (ja) Hearing test device, hearing test method, and hearing test program
Arts et al. Development and structure of the VariaNTS corpus: A spoken Dutch corpus containing talker and linguistic variability
Gittleman et al. Effects of noise and talker intelligibility on judgments of accentedness
Wilder Investigating hybrid models of speech perception

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20778034

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021508164

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20778034

Country of ref document: EP

Kind code of ref document: A1