WO2019225241A1 - Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system - Google Patents


Info

Publication number: WO2019225241A1
Application number: PCT/JP2019/016771
Authority: WO (WIPO, PCT)
Prior art keywords: evaluated, function, person, swallowing, evaluation
Other languages: French (fr), Japanese (ja)
Inventors: 絢子 中嶋, 健一 入江, 松村 吉浩
Original Assignee: パナソニックIpマネジメント株式会社 (Panasonic Intellectual Property Management Co., Ltd.)
Application filed by: パナソニックIpマネジメント株式会社
Priority to: JP2020521105A (JPWO2019225241A1)
Publication of: WO2019225241A1

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/22: Social work
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition

Definitions

  • the present invention relates to a swallowing function evaluation method, a program, a swallowing function evaluation device, and a swallowing function evaluation system, which can evaluate a subject's swallowing function.
  • Dysphagia (decline in the eating and swallowing function) carries risks such as aspiration, malnutrition, loss of the pleasure of eating, dehydration, weakened physical strength and immunity, oral contamination, and aspiration pneumonia, so prevention of dysphagia is required.
  • A method is disclosed in which a device for evaluating the swallowing function is attached to the neck of the person to be evaluated, a feature quantity of pharyngeal movement is acquired as a swallowing function evaluation index (marker), and the swallowing function of the person is evaluated (for example, see Patent Document 1).
  • The swallowing function can also be evaluated by visual inspection, interview, or palpation performed by a specialist such as a dentist, dental hygienist, speech-language pathologist, or internist. In many cases, however, such expert diagnosis takes place only after dysphagia has become serious, for example when paralysis has occurred or when dysphagia has been caused by surgery on an organ related to swallowing (e.g., the tongue, soft palate, or pharynx). Moreover, elderly people affected by aging may choke or spill food, yet overlook the decline in their swallowing function as a natural symptom of old age.
  • an object of the present invention is to provide a method for evaluating a swallowing function that can easily evaluate the swallowing function of a person to be evaluated.
  • The method for evaluating a swallowing function according to one aspect of the present invention includes: an acquisition step of acquiring at least two of audio data obtained by collecting, in a non-contact manner, a voice in which the person to be evaluated utters a predetermined syllable or a predetermined sentence, a first image obtained by imaging the face or neck of the person to be evaluated in a non-contact manner, and a second image obtained by imaging the oral cavity of the person to be evaluated in a non-contact manner; a calculation step of calculating a feature amount from each of the acquired at least two of the audio data, the first image, and the second image; and an evaluation step of evaluating the swallowing function of the person to be evaluated from the calculated feature amounts.
  • a program according to an aspect of the present invention is a program for causing a computer to execute the above-described swallowing function evaluation method.
  • The device for evaluating a swallowing function according to one aspect of the present invention includes: an acquisition unit that acquires at least two of audio data obtained by collecting, in a non-contact manner, a voice in which the person to be evaluated utters a predetermined syllable or a predetermined sentence, a first image obtained by imaging the face or neck of the person to be evaluated in a non-contact manner, and a second image obtained by imaging the oral cavity of the person to be evaluated in a non-contact manner; a calculation unit that calculates a feature amount from each of the at least two of the audio data, the first image, and the second image acquired by the acquisition unit; an evaluation unit that evaluates the swallowing function of the person to be evaluated from the feature amounts calculated by the calculation unit; and an output unit that outputs the evaluation result obtained by the evaluation unit.
  • A swallowing function evaluation system according to one aspect of the present invention includes the above-described swallowing function evaluation device and a device that images the face, neck, or oral cavity of the person to be evaluated, or collects the voice uttered by the person to be evaluated, in a non-contact manner.
  • The system acquires at least two of the voice data obtained when the device collects the uttered voice without contact, the first image obtained when the device images the face or neck of the person to be evaluated without contact, and the second image obtained when the device images the inside of the oral cavity of the person to be evaluated without contact.
  • According to the method for evaluating a swallowing function of the present invention, the swallowing function of the person to be evaluated can be evaluated easily.
  • FIG. 1 is a diagram illustrating a configuration of a swallowing function evaluation system according to an embodiment.
  • FIG. 2 is a block diagram illustrating a characteristic functional configuration of the swallowing function evaluation system according to the embodiment.
  • FIG. 3 is a flowchart illustrating a processing procedure for evaluating the swallowing function of the person to be evaluated by the swallowing function evaluation method according to the embodiment.
  • FIG. 4 is a diagram illustrating an outline of a method for acquiring a speech of an evaluated person by the method for evaluating a swallowing function according to the embodiment.
  • FIG. 5 is a diagram illustrating an outline of a method for acquiring a first image obtained by imaging the face or neck of the person to be evaluated by the swallowing function evaluation method according to the embodiment.
  • FIG. 6 is a diagram illustrating an outline of a method for acquiring a second image obtained by imaging the oral cavity of the person to be evaluated by the swallowing function evaluation method according to the embodiment.
  • FIG. 7 is a diagram illustrating an example of voice data indicating voice uttered by the person to be evaluated.
  • FIG. 8 is a frequency spectrum diagram for explaining the formant frequency.
  • FIG. 9 is a diagram illustrating an example of a temporal change in formant frequency.
  • FIG. 10 is a diagram for explaining a method of calculating the tongue color as a feature amount.
  • FIG. 11 is a diagram illustrating a specific example of the swallowing function in the preparation period, the oral period, and the pharyngeal period, and symptoms when each function decreases.
  • FIG. 12 is a diagram illustrating an example of the evaluation result.
  • FIG. 13 is a diagram illustrating an example of the evaluation result.
  • FIG. 14 is a diagram illustrating an example of the evaluation result.
  • FIG. 15 is a diagram illustrating an example of the evaluation result.
  • FIG. 16 is a diagram illustrating an example of the evaluation result.
  • the present invention relates to a method for evaluating a swallowing function and the like. First, the swallowing function will be described.
  • The swallowing function is a function of the human body necessary for the series of processes of recognizing food, taking it into the mouth, and transporting it to the stomach.
  • The swallowing function consists of five stages: the preceding period, the preparation period, the oral period, the pharyngeal period, and the esophageal period.
  • In the preceding period (also called the cognitive period), the nature and condition of the food are recognized, and the preparations necessary for eating, such as how to eat, salivation, and posture, are made.
  • The swallowing function in the preceding period is, for example, the visual recognition function of the eyes.
  • In the preparatory period of swallowing (also called the mastication period), food taken into the oral cavity is chewed and crushed with the teeth (that is, masticated), and the chewed food is mixed with saliva by the tongue and gathered into a bolus.
  • The swallowing functions in the preparatory period include, for example: the motor function of the facial muscles (such as the lip and cheek muscles) for taking food into the oral cavity without spilling it; the recognition function of the tongue for recognizing the taste and hardness of food; the motor function of the tongue for pushing food against the teeth and mixing finely crushed food with saliva; the occlusal state of the teeth for chewing and crushing food; the motor function of the cheeks for preventing food from entering between the teeth and cheeks; the motor function of the masticatory muscles (such as the masseter and temporal muscles), the collective name for the muscles used for mastication; and the saliva secretion function for gathering finely crushed food.
  • The saliva secretion function is affected by the occlusal state of the teeth, the function of the masticatory muscles, the function of the tongue, and the like. Owing to these swallowing functions in the preparatory period, the bolus acquires physical properties (size, cohesion, viscosity) that make it easy to swallow, so that it moves easily from the oral cavity through the pharynx to the stomach.
  • the swallowing function in the oral phase includes, for example, a tongue movement function for moving the bolus to the pharynx, a soft palate raising function for closing the space between the pharynx and the nasal cavity, and the like.
  • In the pharyngeal phase of swallowing, when the bolus reaches the pharynx, a swallowing reflex occurs and the bolus is sent to the esophagus within a short time (about one second). Specifically, the soft palate rises to close the space between the nasal cavity and the pharynx, the base of the tongue (specifically, the hyoid bone that supports the base of the tongue) and the larynx rise, and as the bolus passes into the pharynx the epiglottis folds downward to block the entrance of the trachea, so that the bolus is sent to the esophagus without aspiration.
  • The swallowing functions in the pharyngeal phase include, for example, the motor function of the pharynx (specifically, the motor function that raises the soft palate) for closing the space between the nasal cavity and the pharynx, the motor function of the tongue (specifically, of the base of the tongue) for moving the bolus from the pharynx to the esophagus, and the motor function of the larynx by which, when the bolus flows into the pharynx, the glottis closes to seal the trachea and the epiglottis hangs down from above to cover the entrance of the trachea.
  • In the esophageal stage, peristaltic movement of the esophageal wall is induced, and the bolus is sent from the esophagus to the stomach.
  • the swallowing function in the esophageal stage is, for example, a peristaltic function of the esophagus for moving the bolus to the stomach.
  • Decreased swallowing function is also called oral frailty.
  • Decreased swallowing function can be a factor that accelerates progression from the frailty phase to a state requiring nursing care. For this reason, by noticing at the pre-frailty stage how the swallowing function has declined and taking preventive and corrective measures in advance, it becomes less likely that a person will fall into the nursing-care state that follows the frailty stage, and a healthy and independent life can be maintained longer.
  • The swallowing function of the person to be evaluated can be evaluated from the voice uttered by the person to be evaluated, from a first image (still image or moving image) obtained by imaging the face or neck of the person to be evaluated, or from a second image (still image or moving image) obtained by imaging the oral cavity of the person to be evaluated.
  • Specific features appear in the teeth or tongue in the oral cavity of a person whose swallowing function has declined, and by calculating these as feature amounts, the swallowing function of the person to be evaluated can be evaluated.
  • The present invention is realized as a method for evaluating a swallowing function, a program for causing a computer to execute the method, a swallowing function evaluation device that is an example of such a computer, and a swallowing function evaluation system including the swallowing function evaluation device. Below, the swallowing function evaluation method and the like are described with reference to the swallowing function evaluation system.
  • FIG. 1 is a diagram illustrating a configuration of a swallowing function evaluation system 200 according to an embodiment.
  • The swallowing function evaluation system 200 is a system for evaluating the swallowing function of the person to be evaluated U by analyzing at least two of the voice of the person to be evaluated U, a first image obtained by imaging the face or neck of the person to be evaluated U, and a second image obtained by imaging the oral cavity of the person to be evaluated U. As shown in FIG. 1, the swallowing function evaluation system 200 includes the swallowing function evaluation device 100 and a portable terminal 300.
  • the swallowing function evaluation system 200 may evaluate the eating / swallowing function of the person to be evaluated U by analyzing not only a still image but also a moving image.
  • one still image and a plurality of continuous images (moving images) may be simply referred to as a first image or a second image.
  • The swallowing function evaluation device 100 is a device that acquires, via the portable terminal 300, at least two of the voice data indicating the voice uttered by the person to be evaluated U, the first image obtained by imaging the face or neck of the person to be evaluated U, and the second image obtained by imaging the oral cavity of the person to be evaluated U, and that evaluates the swallowing function of the person to be evaluated U from the acquired at least two of the voice data, the first image, and the second image.
  • The portable terminal 300 is a device that images the face, neck, or oral cavity of the person to be evaluated U, or collects, in a non-contact manner, the sound of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence, and that outputs the voice data indicating the collected sound and the first image or the second image obtained by the imaging to the swallowing function evaluation device 100.
  • the mobile terminal 300 is an imaging device that performs the above imaging and a sound collecting device that performs the above sound collection.
  • For example, the portable terminal 300 is a smartphone, tablet, or similar device having an imaging function (camera) and a microphone.
  • Since flash photography is needed when imaging the oral cavity of the person to be evaluated U, the portable terminal 300 also has a flash function (light source).
  • the mobile terminal 300 is not limited to a smartphone or a tablet as long as the device has an imaging function, a flash function, and a sound collection function, and may be a notebook PC, for example.
  • the swallowing function evaluation system 200 may include a sound collection device (microphone), an imaging device (camera), and a light source provided separately from each other, instead of the mobile terminal 300.
  • As described later, the swallowing function evaluation system 200 may include an input interface for acquiring personal information of the person to be evaluated U.
  • The input interface is not particularly limited as long as it has an input function, such as a keyboard or a touch panel.
  • the mobile terminal 300 may be a display device that has a display and displays an image or the like based on image data output from the swallowing function evaluation device 100.
  • the display device may not be the portable terminal 300 but may be a monitor device configured by a liquid crystal panel or an organic EL panel. That is, the mobile terminal 300 and the display device may be provided separately.
  • The imaging device (camera), the sound collection device (microphone), the light source, the input interface, and the display device may each be provided separately.
  • The swallowing function evaluation device 100 and the portable terminal 300 only need to be able to exchange the voice data, the images obtained by imaging, and the image data for displaying the image indicating the evaluation result described later, and may be connected by wire or wirelessly.
  • The swallowing function evaluation device 100 analyzes the voice of the person to be evaluated U based on the voice data collected by the portable terminal 300, analyzes the movement of the face or the position of the laryngeal prominence in the neck of the person to be evaluated U based on the first image captured by the portable terminal 300, or analyzes the state of the teeth or tongue in the oral cavity of the person to be evaluated U based on the second image captured by the portable terminal 300, evaluates the swallowing function of the person to be evaluated U from these analysis results, and outputs the evaluation result.
  • For example, the swallowing function evaluation device 100 outputs, to the portable terminal 300, image data for displaying an image indicating the evaluation result, or data for making a proposal regarding swallowing to the person to be evaluated U generated based on the evaluation result. In this way, the swallowing function evaluation device 100 can notify the person to be evaluated U of the degree of his or her swallowing function and of proposals for preventing its deterioration, so that the person to be evaluated U can prevent or improve the deterioration of the swallowing function.
  • the swallowing function evaluation apparatus 100 is, for example, a personal computer, but may be a server apparatus.
  • the swallowing function evaluation device 100 may be a portable terminal 300. That is, the portable terminal 300 may have the function of the swallowing function evaluation device 100 described below.
  • FIG. 2 is a block diagram showing a characteristic functional configuration of the swallowing function evaluation apparatus 100 according to the embodiment.
  • the swallowing function evaluation apparatus 100 includes an acquisition unit 110, a calculation unit 120, an evaluation unit 130, an output unit 140, a suggestion unit 150, and a storage unit 160.
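  • As a rough, non-limiting illustration of how these units might interact, the following Python sketch mirrors the acquisition / calculation / evaluation / suggestion flow; all class names, feature names, and threshold values in it are hypothetical and are not taken from the patent.

    # Structural sketch only (not the patented implementation); names are hypothetical.
    from dataclasses import dataclass
    from typing import Dict, List, Optional

    import numpy as np


    @dataclass
    class Inputs:
        audio: Optional[np.ndarray] = None               # voice data (waveform)
        face_or_neck_image: Optional[np.ndarray] = None  # "first image"
        oral_cavity_image: Optional[np.ndarray] = None   # "second image"


    class SwallowingFunctionEvaluator:
        def __init__(self, reference_data: Dict[str, float], proposal_data: Dict[str, str]):
            self.reference_data = reference_data  # thresholds per feature (cf. reference data 161)
            self.proposal_data = proposal_data    # proposal text per finding (cf. proposal data 162)

        def calculate_features(self, inputs: Inputs) -> Dict[str, float]:
            """Cf. calculation unit 120: derive a feature amount from each acquired input."""
            features: Dict[str, float] = {}
            if inputs.audio is not None:
                features["voice_level"] = float(np.abs(inputs.audio).mean())          # placeholder feature
            if inputs.oral_cavity_image is not None:
                features["tongue_brightness"] = float(inputs.oral_cavity_image.mean())  # placeholder feature
            return features

        def evaluate(self, features: Dict[str, float]) -> Dict[str, bool]:
            """Cf. evaluation unit 130: compare each feature with its reference threshold."""
            return {name: value >= self.reference_data.get(name, 0.0)
                    for name, value in features.items()}

        def suggest(self, evaluation: Dict[str, bool]) -> List[str]:
            """Cf. suggestion unit 150: look up proposals for features judged as declined."""
            return [self.proposal_data[name]
                    for name, ok in evaluation.items() if not ok and name in self.proposal_data]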
  • The acquisition unit 110 acquires at least two of the voice data obtained by collecting, in a non-contact manner, the voice in which the person to be evaluated U utters a predetermined syllable or a predetermined sentence, the first image obtained by imaging the face or neck of the person to be evaluated U without contact, and the second image obtained by imaging the inside of the oral cavity of the person to be evaluated U without contact. In the present embodiment, the acquisition unit 110 acquires all of the voice data, the first image, and the second image. Note that the acquisition unit 110 may acquire only the voice data and the first image, only the voice data and the second image, or only the first image and the second image. Further, the acquisition unit 110 may also acquire personal information of the person to be evaluated U.
  • The personal information is information input to the portable terminal 300, such as age, weight, height, sex, BMI (Body Mass Index), dental information (for example, the number of teeth, the presence of dentures, or the locations of occlusal support), serum albumin level, or eating rate.
  • The personal information may also be acquired using a swallowing screening tool called EAT-10 (Eating Assessment Tool-10), the Seirei-style dysphagia questionnaire, or an interview.
  • the acquisition unit 110 is, for example, a communication interface that performs wired communication or wireless communication.
  • the calculation unit 120 is a processing unit that analyzes the voice data, the first image, or the second image of the evaluated person U acquired by the acquisition unit 110.
  • the calculating unit 120 is realized by a processor, a microcomputer, or a dedicated circuit.
  • the calculation unit 120 calculates each feature amount from at least two of the audio data, the first image, and the second image acquired by the acquisition unit 110. In the present embodiment, the calculation unit 120 calculates each feature amount from all of the audio data, the first image, and the second image. Note that the calculating unit 120 may calculate each feature amount from only two of the audio data and the first image, or may calculate each feature amount from only two of the audio data and the second image. Alternatively, the respective feature amounts may be calculated from only two of the first image and the second image.
  • The feature amount calculated from the voice data is a numerical value, calculated from the voice data, indicating a voice feature of the person to be evaluated U and used by the evaluation unit 130 to evaluate the swallowing function of the person to be evaluated U.
  • The feature amount calculated from the first image is a numerical value, calculated from the first image, indicating characteristics such as the movement of the face of the person to be evaluated U or the position of the laryngeal prominence in the neck, and used by the evaluation unit 130 to evaluate the swallowing function of the person to be evaluated U.
  • The feature amount calculated from the second image is a numerical value, calculated from the second image, indicating characteristics of the state of the teeth or tongue in the oral cavity of the person to be evaluated U, and used by the evaluation unit 130 to evaluate the swallowing function of the person to be evaluated U. Details of the calculation unit 120 will be described later.
  • the evaluation unit 130 compares the feature amount calculated by the calculation unit 120 with the reference data 161 stored in the storage unit 160, and evaluates the eating / swallowing function of the person to be evaluated U. For example, the evaluation unit 130 may evaluate the subject U's swallowing function after distinguishing whether it is a swallowing function in the preparation period, the oral period, or the pharyngeal stage.
  • the evaluation unit 130 is realized by a processor, a microcomputer, or a dedicated circuit. Details of the evaluation unit 130 will be described later.
  • the output unit 140 outputs the evaluation result of the swallowing function of the person to be evaluated U evaluated by the evaluation unit 130 to the suggestion unit 150. Further, the output unit 140 outputs the evaluation result to the storage unit 160, and the evaluation result is stored in the storage unit 160.
  • the output unit 140 is realized by a processor, a microcomputer, or a dedicated circuit.
  • the proposing unit 150 makes a proposal regarding swallowing to the person to be evaluated U by collating the evaluation result output by the output unit 140 with predetermined proposal data 162.
  • the suggestion unit 150 may collate the personal information acquired by the acquisition unit 110 with the proposal data 162 and make a proposal regarding swallowing to the evaluated person U.
  • Proposal unit 150 outputs the proposal to portable terminal 300.
  • the proposing unit 150 is realized by, for example, a processor, a microcomputer or a dedicated circuit, and a communication interface that performs wired communication or wireless communication. Details of the proposal unit 150 will be described later.
  • The storage unit 160 is a storage device that stores reference data 161 indicating the relationship between the feature amounts and a person's swallowing function, proposal data 162 indicating the relationship between swallowing function evaluation results and proposal contents, and personal information data 163 indicating the personal information of the person to be evaluated U.
  • the reference data 161 is referred to by the evaluation unit 130 when the degree of the swallowing function of the evaluation subject U is evaluated.
  • the proposal data 162 is referred to by the suggestion unit 150 when a proposal related to swallowing for the person to be evaluated U is made.
  • the personal information data 163 is data acquired via the acquisition unit 110, for example.
  • the personal information data 163 may be stored in the storage unit 160 in advance.
  • the storage unit 160 is realized by, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a semiconductor memory, an HDD (Hard Disk Drive), or the like.
  • The storage unit 160 also stores the programs executed by the calculation unit 120, the evaluation unit 130, the output unit 140, and the suggestion unit 150, as well as data such as images, moving images, voice, and text used when the evaluation result of the swallowing function of the person to be evaluated U and the proposal contents are output.
  • the storage unit 160 may also store instruction images and audio data described below.
  • The swallowing function evaluation device 100 also includes an instruction unit that instructs the person to be evaluated U to utter a predetermined syllable or a predetermined sentence and to image his or her face or neck and oral cavity.
  • The instruction unit acquires the image data of the instruction images and the audio data of the instruction voices stored in the storage unit 160, and outputs the image data and the audio data to the portable terminal 300.
  • FIG. 3 is a flowchart showing a processing procedure for evaluating the swallowing function of the person to be evaluated U by the swallowing function evaluation method according to the embodiment.
  • FIG. 4 is a diagram showing an outline of a method for acquiring the voice of the person to be evaluated U by the method for evaluating the swallowing function.
  • FIG. 5 is a diagram showing an overview of a method for acquiring a first image obtained by imaging the face or neck of the person U to be evaluated by the method for evaluating the swallowing function.
  • FIG. 6 is a diagram showing an overview of a method for acquiring a second image obtained by imaging the intra-oral cavity of the person to be evaluated U by the swallowing function evaluation method.
  • the instruction unit instructs to sound a predetermined syllable or a predetermined sentence (a sentence including a specific sound) and to image the face or neck and the oral cavity of the person to be evaluated U (step S100).
  • In step S100, the instruction unit acquires the image data of the instruction image for the person to be evaluated U stored in the storage unit 160 and outputs the image data to the portable terminal 300. Then, as illustrated in (a) of FIG. 4, the instruction image for the person to be evaluated U is displayed on the portable terminal 300.
  • The predetermined sentence to be instructed may be, for example, "Kitakaze to Taiyo" (The North Wind and the Sun), "aiueo", or a repeated phrase such as "papapapapa...", "tatatata...", "kakakaka...", or "rarararara...".
  • The pronunciation instruction does not have to specify a predetermined sentence; it may instead specify a predetermined one-character syllable such as "ki", "ta", "ka", "ra", "ze", or "pa".
  • the pronunciation instruction may be an instruction to utter a meaningless phrase of two or more syllables including only vowels such as “Eo” and “Iea”.
  • the pronunciation instruction may be an instruction to repeatedly utter such meaningless phrases.
  • The instruction unit may instead instruct pronunciation by acquiring the audio data of the instruction voice for the person to be evaluated U stored in the storage unit 160 and outputting the audio data to the portable terminal 300.
  • That is, the instruction may be given using an instruction voice rather than an instruction image.
  • Alternatively, an evaluator (a family member, a doctor, or the like) who wants to evaluate the swallowing function of the person to be evaluated U may give the pronunciation instruction in his or her own voice, without using the instruction image or the instruction voice.
  • In step S100, the instruction unit also acquires the audio data of the instruction voice for the person to be evaluated U stored in the storage unit 160 and outputs the audio data to the portable terminal 300. Then, as shown in (a) of FIG. 5 and (a) of FIG. 6, the instruction voice for the person to be evaluated U is output from the portable terminal 300.
  • The instructed contents are, for example, "Please open your mouth and shoot a video", but may instead be "Please shoot a video while moving your mouth", "Please open your mouth and shoot a video while moving your tongue", "Please shoot a video while closing your mouth and inflating your cheeks", "Please shoot with your teeth clenched", "Please shoot with the corners of your mouth raised", "Please shoot your laryngeal prominence", "Please open your mouth wide and shoot with the flash", and so on.
  • imaging may be performed by the person to be evaluated U himself or by an evaluator (family, doctor, etc.) who wants to evaluate the person to be evaluated U's swallowing function.
  • The instruction unit may instead instruct imaging by acquiring the image data of the instruction image for the person to be evaluated U stored in the storage unit 160 and outputting the image data to the portable terminal 300.
  • That is, the imaging instruction may be given using an instruction image rather than an instruction voice.
  • Alternatively, the evaluator (a family member, a doctor, or the like) may give the imaging instruction directly.
  • the predetermined syllable may be composed of a consonant and a vowel following the consonant.
  • such predetermined syllables are “ki”, “ta”, “ka”, “ze”, and the like.
  • Ki is composed of a consonant “k” and a vowel “i” following the consonant.
  • Ta is composed of a consonant “t” and a vowel “a” following the consonant.
  • Ka” is composed of a consonant “k” and a vowel “a” following the consonant.
  • “Ze” is composed of a consonant “z” and a vowel “e” following the consonant.
  • the predetermined sentence may include a syllable portion including a consonant, a vowel following the consonant, and a consonant following the vowel.
  • a syllable part is a “kaz” part in “Kaze”.
  • the syllable part includes a consonant “k”, a vowel “a” following the consonant, and a consonant “z” following the vowel.
  • The predetermined sentence may include a character string in which syllables containing vowels are consecutive.
  • For example, such a character string is "aiueo" or the like.
  • The predetermined sentence may include a predetermined word.
  • For example, in Japanese, such predetermined words are "taiyo" (the sun), "kitakaze" (the north wind), and the like.
  • the predetermined sentence may include a phrase in which a syllable composed of a consonant and a vowel following the consonant is repeated.
  • For example, such phrases are "papapapapa...", "tatatata...", "kakakaka...", "rarararara...", and the like.
  • Pa is composed of a consonant “p” and a vowel “a” following the consonant.
  • Ta is composed of a consonant “t” and a vowel “a” following the consonant.
  • Ka” is composed of a consonant “k” and a vowel “a” following the consonant.
  • “Ra” is composed of a consonant “r” and a vowel “a” following the consonant.
  • The acquisition unit 110 acquires, via the portable terminal 300, the voice data of the person to be evaluated U who received the instruction in step S100, the first image of the face or neck of the person to be evaluated U, and the second image of the oral cavity of the person to be evaluated U (step S101).
  • In step S101, for example, the person to be evaluated U utters the predetermined sentence or the predetermined syllables toward the portable terminal 300, and the acquisition unit 110 acquires the uttered predetermined sentence or predetermined syllables as voice data, as shown in (b) of FIG. 4.
  • In step S101, for example, the person to be evaluated U also photographs his or her face with the mouth open using the portable terminal 300.
  • the acquisition unit 110 acquires a first image obtained by imaging the face of the person to be evaluated U.
  • Similarly, the person to be evaluated U takes an image of his or her oral cavity using the portable terminal 300 (flash photography).
  • The acquisition unit 110 acquires the second image obtained by imaging the oral cavity of the person to be evaluated U.
  • the calculation unit 120 calculates each feature amount from the audio data, the first image, and the second image acquired by the acquisition unit 110 (step S102), and the evaluation unit 130 calculates the feature amount calculated by the calculation unit 120. From this, the swallowing function of the person to be evaluated U is evaluated (step S103).
  • For example, when the voice data acquired by the acquisition unit 110 is obtained from a voice uttering a predetermined syllable composed of a consonant and a vowel following the consonant, the calculation unit 120 calculates the sound pressure difference between the consonant and the vowel as a feature amount. This will be described with reference to FIG. 7.
  • FIG. 7 is a diagram showing an example of voice data indicating the voice uttered by the person to be evaluated U. Specifically, FIG. 7 is a graph of the voice data obtained when the person to be evaluated U utters the predetermined sentence. The horizontal axis of the graph shown in FIG. 7 is time, and the vertical axis is power (sound pressure); the unit of power on the vertical axis is the decibel (dB).
  • In the graph shown in FIG. 7, changes in sound pressure corresponding to "ki", "ta", "ka", "ra", "ki", "ta", "ka", "ta", "ta", "ta", "ki", "ki" can be confirmed.
  • the acquisition unit 110 acquires the data shown in FIG. 7 as voice data from the person to be evaluated U in step S101 shown in FIG.
  • The calculation unit 120 calculates, by a known method, the sound pressures of the consonants and vowels included in the voice data shown in FIG. 7, for example the sound pressures of "k" and "i" in "ki", of "t" and "a" in "ta", and of "z" and "e" in "ze".
  • From the calculated sound pressures, the calculation unit 120 calculates, as feature amounts, the sound pressure difference ΔP1 between "t" and "a", the sound pressure difference ΔP3 between "k" and "i", and the sound pressure difference (not shown) between "z" and "e".
  • the reference data 161 includes a threshold corresponding to each sound pressure difference, and the evaluation unit 130 evaluates the swallowing function according to whether each sound pressure difference is equal to or greater than the threshold, for example.
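  • A minimal sketch of how such a sound pressure difference could be computed and compared with a threshold is shown below, assuming the consonant and vowel segment boundaries are already known from a separate segmentation step; the helper names and the threshold value are illustrative assumptions, not the patent's reference data.

    import numpy as np


    def sound_pressure_db(segment: np.ndarray, eps: float = 1e-12) -> float:
        """RMS level of a waveform segment, expressed in decibels."""
        rms = np.sqrt(np.mean(segment ** 2))
        return 20.0 * np.log10(rms + eps)


    def consonant_vowel_pressure_diff(audio: np.ndarray, sr: int,
                                      consonant_span: tuple, vowel_span: tuple) -> float:
        """Sound pressure difference (vowel minus consonant), e.g. between "t" and "a".

        The spans are (start_sec, end_sec) boundaries, assumed to come from a
        separate segmentation step that is not shown here."""
        c0, c1 = (int(t * sr) for t in consonant_span)
        v0, v1 = (int(t * sr) for t in vowel_span)
        return sound_pressure_db(audio[v0:v1]) - sound_pressure_db(audio[c0:c1])


    # Illustrative use with synthetic data and an arbitrary threshold.
    sr = 16000
    audio = np.random.randn(sr)  # stand-in for recorded voice data
    delta_p = consonant_vowel_pressure_diff(audio, sr, (0.10, 0.15), (0.15, 0.30))
    THRESHOLD_DB = 6.0  # illustrative only; the actual reference data 161 is not published here
    print(delta_p, delta_p >= THRESHOLD_DB)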
  • To pronounce "z", the tip of the tongue needs to contact or approach the upper front teeth.
  • The presence of teeth is also important, for example because the dentition supports the sides of the tongue. From the sound pressure difference between "z" and "e", it is therefore possible to estimate whether the dentition including the upper front teeth is present and whether many or few remaining teeth are present, and thereby to evaluate the occlusal state of the teeth, which affects masticatory ability when few teeth remain.
  • When the voice data acquired by the acquisition unit 110 is voice data obtained from a voice uttering a predetermined sentence that includes a syllable part composed of a consonant, a vowel following the consonant, and a consonant following the vowel, the calculation unit 120 calculates the time required to utter the syllable part as a feature amount.
  • For example, the predetermined sentence includes the syllable part "kaz" of "kaze", composed of the consonant "k", the vowel "a" following the consonant, and the consonant "z" following the vowel. The calculation unit 120 calculates the time required to utter this syllable part composed of "kaz" as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the time required to emit the syllable part. For example, the evaluation unit 130 determines whether the time required to issue the syllable part is equal to or greater than the threshold value. Evaluate swallowing function according to whether or not.
  • the time required to generate a syllable part consisting of “consonant-vowel-consonant” varies depending on the tongue's motor function (such as tongue sophistication or tongue pressure).
  • That is, by evaluating the time required to utter the syllable part, the motor function of the tongue in the preparation period, the motor function of the tongue in the oral period, and the motor function of the tongue in the pharyngeal period can be evaluated.
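  • The following sketch shows one plausible way to estimate the time needed to utter a syllable part such as "kaz" from its energy envelope, assuming the relevant portion of the recording has already been cut out; the frame length and the relative threshold are illustrative assumptions.

    import numpy as np


    def voiced_duration_seconds(segment: np.ndarray, sr: int,
                                frame_ms: float = 10.0, rel_threshold: float = 0.1) -> float:
        """Rough duration of the voiced portion of a short recording.

        If `segment` contains just the syllable part of interest (e.g. the "kaz"
        part cut out beforehand), the total length of frames whose energy exceeds
        a fraction of the maximum frame energy is a crude estimate of the time
        needed to utter it."""
        frame = max(1, int(sr * frame_ms / 1000.0))
        n = len(segment) // frame
        if n == 0:
            return 0.0
        energy = np.array([np.sum(segment[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
        active = energy > rel_threshold * energy.max()
        return float(active.sum() * frame / sr)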
  • Further, for example, the calculation unit 120 calculates, as feature amounts, the amount of change in the first formant frequency or the second formant frequency obtained from the spectrum of the vowel part, and the variation of the first formant frequency or the second formant frequency obtained from the spectrum of the vowel part.
  • the first formant frequency is the peak frequency of the amplitude first seen from the low frequency side of the human voice, and it is known that characteristics relating to tongue movement (particularly vertical movement) are easily reflected. In addition, it is also known that characteristics related to jaw opening are easily reflected.
  • the second formant frequency is the peak frequency of the amplitude seen second from the low frequency side of human speech.
  • It is known that the second formant frequency easily reflects the influence of the positions of the lips, the tongue and other parts of the oral cavity, and of the nasal cavity.
  • the occlusal state (the number of teeth) of the teeth in the preparation period has an influence on the second formant frequency because the utterance cannot be correctly performed when there are no teeth.
  • saliva secretion function in the preparation period is considered to have an influence on the second formant frequency.
  • The feature amount for evaluating the motor function of the tongue, the saliva secretion function, or the occlusal state of the teeth may be calculated from either the feature amount obtained from the first formant frequency or the feature amount obtained from the second formant frequency.
  • FIG. 8 is a frequency spectrum diagram for explaining the formant frequency.
  • the horizontal axis of the graph shown in FIG. 8 is the frequency [Hz], and the vertical axis is the amplitude.
  • The calculation unit 120 extracts a vowel part from the voice data acquired by the acquisition unit 110 by a known method, converts the extracted voice data of the vowel part into amplitude with respect to frequency to obtain the spectrum of the vowel part, and calculates the formant frequencies from that spectrum.
  • the graph shown in FIG. 8 is calculated by converting voice data obtained from the person to be evaluated U into amplitude data with respect to frequency and obtaining an envelope thereof.
  • the envelope for example, cepstrum analysis, linear predictive coding (LPC), or the like is employed.
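  • As one concrete and simplified realization of the LPC approach mentioned above, the following sketch estimates formant frequencies from the roots of an LPC polynomial; the model order, the pre-emphasis coefficient, and the synthetic test signal are illustrative assumptions rather than values from the patent.

    import numpy as np
    from scipy.linalg import solve_toeplitz


    def lpc_formants(vowel: np.ndarray, sr: int, order: int = 12) -> np.ndarray:
        """Estimate formant frequencies of a vowel segment from an LPC spectral envelope."""
        # Pre-emphasis and windowing
        x = np.append(vowel[0], vowel[1:] - 0.97 * vowel[:-1])
        x = x * np.hamming(len(x))
        # Autocorrelation method: solve the normal equations for the LPC coefficients
        r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + order]
        a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
        # Roots of the prediction polynomial correspond to resonances (formants)
        roots = np.roots(np.concatenate(([1.0], -a)))
        roots = roots[np.imag(roots) > 0]
        freqs = np.sort(np.angle(roots) * sr / (2.0 * np.pi))
        return freqs[freqs > 90.0]  # discard implausibly low resonances; freqs[0] ~ F1, freqs[1] ~ F2


    # Illustrative check with a synthetic two-resonance signal (not real speech):
    sr = 16000
    t = np.arange(0, 0.2, 1.0 / sr)
    synthetic_vowel = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
    print(lpc_formants(synthetic_vowel, sr, order=4))  # should show values near 500 and 1500 Hz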
  • FIG. 9 is a diagram showing an example of the time change of the formant frequency. Specifically, FIG. 9 is a graph for explaining an example of a temporal change in frequency of the first formant frequency F1, the second formant frequency F2, and the third formant frequency F3.
  • the calculating unit 120 calculates the first formant frequency F1 and the second formant frequency F2 of each of the plurality of vowels from the sound data indicating the sound uttered by the person to be evaluated U. Furthermore, the calculation unit 120 calculates the amount of change (time change amount) of the first formant frequency F1 and the amount of change (time change amount) of the second formant frequency F2 of the character string including continuous vowels as the feature amount.
  • the reference data 161 includes a threshold corresponding to the amount of change, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the amount of change is equal to or greater than the threshold.
  • The amount of change in the first formant frequency F1 reflects, for example, the degree of jaw opening and the vertical movement of the tongue.
  • The amount of change in the second formant frequency F2 is influenced by the front-back position of the tongue, and an abnormal amount of change indicates that the movement of the tongue in the preparation period and in the pharyngeal stage, which that movement affects, has declined.
  • An abnormal amount of change in the second formant frequency F2 also indicates, for example, that teeth are missing and utterance cannot be performed correctly, that is, that the occlusal state of the teeth in the preparation period has deteriorated.
  • An abnormal amount of change in the second formant frequency F2 further indicates, for example, that there is little saliva and utterance cannot be performed correctly, that is, that the saliva secretion function in the preparation period has declined. That is, by evaluating the amount of change in the second formant frequency F2, the saliva secretion function in the preparation period can be evaluated.
  • the calculation unit 120 calculates the variation of the first formant frequency F1 of the character string in which the vowels are continuous as the feature amount. For example, if the voice data includes n vowels (n is a natural number), n first formant frequencies F1 are obtained, and the variation of the first formant frequency F1 is calculated using all or part of them. Is done.
  • the degree of variation calculated as the feature amount is, for example, standard deviation.
  • the reference data 161 includes a threshold value corresponding to the variation, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the variation is equal to or greater than the threshold value.
  • A large variation in the first formant frequency F1 indicates, for example, that the vertical movement of the tongue is dull, in other words, that the motor function of the tongue that presses its tip against the upper jaw and sends the bolus to the pharynx in the oral phase has declined. That is, by evaluating the variation of the first formant frequency F1, the motor function of the tongue in the oral period can be evaluated.
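  • A minimal sketch of the variation feature follows: the standard deviation of per-vowel F1 values (here assumed to have been estimated beforehand, for example with the LPC sketch above) is compared with an illustrative threshold.

    import numpy as np

    # Variation of the first formant frequency F1 across the consecutive vowels of a
    # phrase such as "aiueo". The per-vowel F1 values below are illustrative and would
    # in practice come from a formant estimator such as the LPC sketch above.
    f1_per_vowel_hz = np.array([750.0, 320.0, 360.0, 480.0, 520.0])
    f1_variation = float(np.std(f1_per_vowel_hz))  # standard deviation as the degree of variation

    F1_VARIATION_THRESHOLD = 150.0  # illustrative value; not the patent's reference data
    print(f1_variation, f1_variation >= F1_VARIATION_THRESHOLD)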
  • the calculation unit 120 calculates the pitch (height) of the voice in which the evaluated person U utters a predetermined syllable or a predetermined sentence as a feature amount.
  • the reference data 161 includes a threshold corresponding to the pitch, and the evaluation unit 130 evaluates the swallowing function depending on whether the pitch is equal to or greater than the threshold, for example.
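  • The pitch feature could, for example, be obtained with a simple autocorrelation estimator such as the sketch below; real systems typically use more robust pitch trackers, and the parameter values here are assumptions.

    import numpy as np


    def estimate_pitch_hz(audio: np.ndarray, sr: int,
                          fmin: float = 60.0, fmax: float = 400.0) -> float:
        """Crude pitch estimate by autocorrelation (illustrative only)."""
        x = audio - audio.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        lag_min, lag_max = int(sr / fmax), int(sr / fmin)
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        return sr / lag


    sr = 16000
    t = np.arange(0, 0.5, 1.0 / sr)
    voiced = np.sign(np.sin(2 * np.pi * 120 * t))  # stand-in for a voiced sound near 120 Hz
    print(round(estimate_pitch_hz(voiced, sr), 1))  # approximately 120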
  • Further, for example, when the voice data acquired by the acquisition unit 110 is obtained from a voice uttering a predetermined sentence containing a predetermined word, the calculation unit 120 calculates the time required to utter the predetermined word as a feature amount.
  • For example, when the person to be evaluated U utters a predetermined sentence containing "taiyo", the person to be evaluated U utters the character string "taiyo" after recognizing it as the word meaning "the sun".
  • If it takes a long time to utter the predetermined word, the person to be evaluated U may have dementia.
  • The number of teeth is said to affect dementia.
  • The number of teeth influences brain activity, and a decrease in the number of teeth reduces stimulation to the brain and increases the risk of developing dementia. That is, the possibility that the person to be evaluated U has dementia corresponds to the number of teeth, and hence to the occlusal state of the teeth for chewing and crushing food in the preparation period.
  • In other words, the possibility that the person to be evaluated U has dementia suggests that the occlusal state of the teeth in the preparation period has deteriorated.
  • Therefore, the occlusal state of the teeth in the preparation period can be evaluated by evaluating the time required for the person to be evaluated U to utter the predetermined word.
  • the calculation unit 120 may calculate the time required for issuing the entire predetermined sentence as a feature amount. Even in this case, the occlusal state of the teeth in the preparation period can be evaluated in the same manner by evaluating the time required for the person to be evaluated U to issue the entire predetermined sentence.
  • Further, for example, when the voice data acquired by the acquisition unit 110 is voice data obtained from a voice uttering a predetermined sentence containing a phrase in which a syllable composed of a consonant and a vowel following the consonant is repeated, the calculation unit 120 calculates the number of times the repeated syllable is uttered within a predetermined time (for example, 5 seconds) as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the number of times, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the number of times is equal to or greater than the threshold value.
  • For example, the person to be evaluated U utters a predetermined sentence containing a phrase in which a syllable composed of a consonant and a vowel following the consonant is repeated, such as "papapapapa...", "tatatata...", "kakakaka...", or "rarararara...".
  • The function of quickly producing "ka", that is, the function of rapidly and repeatedly bringing the base of the tongue into contact with the soft palate, corresponds to the motor function of the tongue (specifically, of the base of the tongue) for passing the bolus through the pharynx in the pharyngeal phase, to the function of preventing food from flowing into the pharynx prematurely, and to the function of preventing choking. That is, by evaluating the number of times "ka" is uttered within the predetermined time, the motor function of the tongue in the pharyngeal phase can be evaluated.
  • the function of quickly issuing “ra”, that is, the function of quickly and repeatedly warping the tongue corresponds to the function of the tongue for mixing food with saliva in the preparation period to form a bolus. That is, by evaluating the number of times “ra” is issued within a predetermined time, the motor function of the tongue in the preparation period can be evaluated.
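  • One crude way to realize the repetition count (an oral diadochokinesis-style measurement) is to count bursts of frame energy within the 5-second window, as in the following sketch; the frame length and the energy threshold are illustrative assumptions.

    import numpy as np


    def count_repetitions(audio: np.ndarray, sr: int, window_s: float = 5.0,
                          frame_ms: float = 20.0, rel_threshold: float = 0.2) -> int:
        """Count bursts of frame energy (one per uttered "pa"/"ta"/"ka"/"ra")
        within the first `window_s` seconds of the recording."""
        x = audio[: int(window_s * sr)]
        frame = max(1, int(sr * frame_ms / 1000.0))
        n = len(x) // frame
        if n == 0:
            return 0
        energy = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
        active = energy > rel_threshold * energy.max()
        # A repetition starts wherever an inactive frame is followed by an active one.
        starts = np.flatnonzero(active[1:] & ~active[:-1]) + 1
        return int(active[0]) + len(starts)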
  • Further, for example, when the first image acquired by the acquisition unit 110 is a moving image of the person to be evaluated U moving the mouth, the calculation unit 120 calculates the movement of the mouth in the continuous images (moving image) as a feature amount. Specifically, the calculation unit 120 calculates the difference between the amount of movement on the left side of the mouth and the amount of movement on the right side (referred to as the left-right difference of the mouth) as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the left-right difference of the mouth, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the left-right difference of the mouth is equal to or greater than the threshold value. .
  • a large difference between the left and right mouths indicates, for example, that there is paralysis on the left or right side of the mouth, that is, an expression for taking food into the oral cavity without spilling food during the preparation period. Indicates that the motor function of the muscles has declined. That is, by evaluating the movement of the mouth of the person to be evaluated U, it is possible to evaluate the motor function of the facial muscles during the preparation period.
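  • The left-right difference of the mouth could, for example, be computed from tracked mouth-corner coordinates as in the sketch below; the patent text does not specify a landmark detector, so the input format here is an assumption.

    import numpy as np


    def mouth_left_right_difference(left_corner_xy: np.ndarray, right_corner_xy: np.ndarray) -> float:
        """Difference between the movement amounts of the left and right mouth corners.

        Inputs are (n_frames, 2) arrays of corner coordinates over the moving image,
        assumed to come from some face-landmark detector (the patent does not name one).
        The movement amount of each side is taken here as the total path length of the
        corner across frames."""
        def path_length(xy: np.ndarray) -> float:
            return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

        return abs(path_length(left_corner_xy) - path_length(right_corner_xy))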
  • Further, for example, when the first image acquired by the acquisition unit 110 is an image of the face of the person to be evaluated U with the mouth open, the calculation unit 120 calculates the degree of opening of the mouth in the image as a feature amount.
  • the reference data 161 includes a threshold corresponding to the degree of opening of the mouth, and the evaluation unit 130 performs, for example, a swallowing function depending on whether the degree of opening of the mouth is equal to or greater than the threshold. evaluate.
  • A small degree of mouth opening, that is, not opening the mouth wide, indicates, for example, that the motor function of the facial muscles for taking food into the mouth without spilling it and the occlusal function of the masseter and temporal muscles (masticatory muscles) in the preparation period have declined. That is, by evaluating the degree of mouth opening of the person to be evaluated U, the motor function of the facial muscles and the motor function of the masticatory muscles in the preparation period can be evaluated.
  • the calculation unit 120 calculates the movement of the tongue in the continuous image (moving image) as a feature amount. Specifically, the calculation unit 120 calculates the length that allows the tongue to be put out from the mouth as the feature amount. In addition, the calculation unit 120 calculates a difference between the amount of movement of the tongue on the left side and the amount of movement on the right side (referred to as tongue left-right difference) as a feature amount. Further, the calculation unit 120 calculates the approach amount of the tip of the tongue to the upper jaw behind the front teeth as a feature amount.
  • The reference data 161 includes a threshold corresponding to the length by which the tongue can be extended out of the mouth, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether that length is equal to or greater than the threshold.
  • The reference data 161 also includes a threshold corresponding to the left-right difference of the tongue, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the left-right difference of the tongue is equal to or greater than the threshold.
  • Further, the reference data 161 includes a threshold corresponding to the approach amount, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the approach amount is equal to or greater than the threshold.
  • A short length of tongue extension (below the threshold) indicates, for example, that the motor function of the tongue (specifically, of the base of the tongue, more specifically of the suprahyoid muscle group) in the oral period has declined.
  • A large left-right difference of the tongue indicates, for example, that the tongue is paralyzed, that is, that the motor function of the tongue for pressing food against the teeth and mixing finely crushed food with saliva in the preparation period has declined.
  • A small approach amount (below the threshold) indicates, for example, that the motor function of the tongue (suprahyoid muscle group) in the oral period has declined. That is, the motor function of the tongue in the preparation period and the motor function of the tongue in the oral period can be evaluated by evaluating the movement of the tongue of the person to be evaluated U.
  • the calculation unit 120 calculates whether or not the cheek bulge can be maintained as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the maintenance degree of the cheek bulge, and the evaluation unit 130, for example, ingestion swallowing according to whether the cheek bulge degree is equal to or greater than the threshold value. Evaluate functionality.
  • A failure to keep the cheeks inflated (below the threshold) indicates, for example, that the space between the nasal cavity and the pharynx cannot be closed and air leaks from the pharynx into the nasal cavity (velopharyngeal insufficiency), that is, that the motor function of the pharynx (specifically, of the soft palate) for closing the space between the nasal cavity and the pharynx in the pharyngeal stage has declined. That is, by evaluating the movement of the cheeks of the person to be evaluated U, the motor function of the facial muscles in the preparation period and the motor function of the pharynx in the pharyngeal stage can be evaluated.
  • the calculation unit 120 when the first image acquired by the acquisition unit 110 is an image obtained by imaging the cheek of the person to be evaluated U when the person to be evaluated U is biting a tooth, the calculation unit 120 includes The bulge of the cheek muscles caused by the teeth is calculated as a feature value.
  • Further, when the first image is an image of the cheeks of the person to be evaluated U with the corners of the mouth raised, the calculation unit 120 calculates the elevation of the cheek muscles caused by raising the corners of the mouth as a feature amount.
  • the reference data 161 includes a threshold corresponding to the uplift, and the evaluation unit 130 evaluates the swallowing function depending on whether the uplift is equal to or higher than the threshold, for example.
  • the fact that the muscles of the cheeks do not rise (below the threshold value) indicates, for example, that the motor function (muscle strength) of the masticatory muscles during the preparation period is reduced. That is, by evaluating the bulge of the cheek muscles of the person to be evaluated U, the motor function of the masticatory muscles in the preparation period can be evaluated.
  • Further, for example, when the first image acquired by the acquisition unit 110 is an image of the neck of the person to be evaluated U, the calculation unit 120 calculates the position of the laryngeal prominence of the person to be evaluated U in the image as a feature amount.
  • the reference data 161 includes a threshold corresponding to the position of the laryngeal bump (for example, the distance from the face), and the evaluation unit 130 determines whether the position of the laryngeal bump is equal to or greater than the threshold, for example. Evaluate the swallowing function as appropriate.
  • A laryngeal prominence located far from the face indicates, for example, a state in which extra effort is required to raise the laryngeal prominence when swallowing a bolus, that is, that the motor function of the larynx (specifically, of the infrahyoid muscle group) in the pharyngeal phase has declined (in other words, that the state of the larynx has deteriorated).
  • When the state of the larynx deteriorates in this way, a person with a poor swallowing function, whose muscular strength is often already insufficient, raises the laryngeal prominence even less sufficiently.
  • the calculation unit 120 calculates the number of teeth in the image as a feature amount.
  • the reference data 161 includes a threshold corresponding to the number of teeth, and the evaluation unit 130, for example, eats depending on whether the number of teeth is equal to or greater than the threshold (for example, 20). Evaluate swallowing function.
  • a small number of teeth indicates, for example, that the occlusal state of the teeth for chewing and crushing food in the preparation period is deteriorated. That is, by evaluating the number of teeth of the person to be evaluated U, the occlusal state of the teeth in the preparation period can be evaluated.
  • the calculation unit 120 calculates the positions of the remaining teeth (residual teeth) in the image as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the positions of the remaining teeth, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether or not the positions of the remaining teeth are equal to or greater than the threshold value.
  • when the remaining teeth are below the threshold, that is, when only the front teeth remain and no molars remain, it is difficult to crush food; this indicates, for example, that the occlusal state of the teeth for chewing and crushing food in the preparation period has deteriorated. That is, by evaluating the positions of the remaining teeth of the person to be evaluated U, the occlusal state of the teeth in the preparation period can be evaluated.
  • the calculation unit 120 calculates the color of the tongue in the image as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the color of the tongue (for example, the whiteness of the tongue), and the evaluation unit 130 evaluates the swallowing function according to, for example, whether the whiteness of the tongue is equal to or greater than the threshold value. This will be described with reference to FIG. 10.
  • FIG. 10 is a diagram for explaining a method of calculating the tongue color as a feature amount.
  • as shown in FIG. 10, the color of the tongue is evaluated using the TCI (Tongue Coating Index).
  • scores are recorded for nine portions of the tongue, from the A portion to the I portion, according to the state of tongue coating adhesion. For example, a score of 0 is recorded for a portion where no tongue coating is observed, a score of 1 where the tongue coating is thin enough that the tongue papillae can still be recognized, and a score of 2 where the tongue coating is so thick that the tongue papillae cannot be recognized.
  • the evaluation unit 130 evaluates the swallowing function according to whether or not the total score (ranging from 0 to 18) is equal to or greater than the threshold value.
  • a white tongue (a score at or above the threshold), that is, a tongue with thick coating, indicates that the oral cavity is unclean and, for example, that the recognition function of the tongue for recognizing the taste or hardness of food in the preparation period has declined. That is, by evaluating the color of the tongue of the person to be evaluated U, the recognition function of the tongue in the preparation period can be evaluated.
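As a rough illustration of the TCI-based scoring described above, the following Python sketch sums the per-region scores and applies a threshold. The per-region scores would in practice come from analysis of the second image; here they are passed in directly, and the threshold value shown is an assumption.

    # Illustrative sketch of the TCI-style scoring; threshold and inputs are assumed.
    REGIONS = list("ABCDEFGHI")  # the nine tongue portions, A through I

    def tongue_coating_index(scores: dict) -> int:
        # Sum of per-region scores (0: no coating, 1: thin coating with papillae
        # still visible, 2: thick coating hiding the papillae); total ranges 0-18.
        return sum(scores[region] for region in REGIONS)

    def evaluate_tongue_color(scores: dict, threshold: int = 9) -> str:
        # A whiter tongue (total at or above the threshold) is treated as NG.
        return "NG" if tongue_coating_index(scores) >= threshold else "OK"

    example = {region: 1 for region in REGIONS}  # thin coating everywhere -> total 9
    print(tongue_coating_index(example), evaluate_tongue_color(example))  # -> 9 NG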
  • the calculation unit 120 calculates the degree of light reflection from the tongue in the image as a feature amount.
  • the reference data 161 includes a threshold value corresponding to the degree of light reflection from the tongue, and the evaluation unit 130 evaluates the swallowing function according to, for example, whether or not the degree of light reflection from the tongue is equal to or greater than the threshold value.
  • when light is applied to the tongue and the tongue does not reflect it and does not shine (the degree of reflection is below the threshold value), this indicates that the tongue is dry, for example, that the saliva secretion function for gathering finely chewed food in the preparation period has declined. That is, the saliva secretion function in the preparation period can be evaluated by evaluating the degree of light reflection from the tongue of the person to be evaluated U.
  • the evaluation unit 130 may evaluate the swallowing functions of the person to be evaluated U, such as the tongue motor function in the “preparation period” or the tongue motor function in the “oral period”, after distinguishing whether each is a swallowing function in the preparation period, the oral period, or the pharyngeal period.
  • the reference data 161 includes a correspondence relationship between the type of feature quantity and the swallowing function in at least one stage of the preparation period, the oral period, and the pharyngeal period. For example, when focusing on the sound pressure difference between “k” and “i” as the feature quantity, the sound pressure difference between “k” and “i” is associated with the motor function of the tongue in the pharyngeal period.
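One simple way such a correspondence between feature types and stage-specific swallowing functions could be held in the reference data is a lookup table. The sketch below is illustrative only; the entries are taken from examples in this description, and the structure (a plain dictionary) is an assumption.

    # Illustrative only: structure and key names are assumptions.
    FEATURE_TO_FUNCTION = {
        "sound_pressure_difference_k_i": ("pharyngeal period", "tongue motor function"),
        "laryngeal_protuberance_position": ("pharyngeal period", "laryngeal motor function"),
        "tongue_coating_index": ("preparation period", "tongue recognition function"),
    }

    def function_for_feature(feature_name: str):
        # Look up which stage and which swallowing function a feature type relates to.
        return FEATURE_TO_FUNCTION.get(feature_name)

    print(function_for_feature("sound_pressure_difference_k_i"))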
  • the evaluation unit 130 can therefore evaluate the swallowing function of the person to be evaluated U after distinguishing whether it is a swallowing function in the preparation period, the oral period, or the pharyngeal period. By evaluating the swallowing function in this way, it is possible to see what kind of symptom the person to be evaluated U is at risk of developing. This will be described with reference to FIG. 11.
  • FIG. 11 is a diagram showing a specific example of the swallowing function in the preparation period, the oral period and the pharyngeal period, and symptoms when each function is lowered.
  • by evaluating the swallowing function of the person to be evaluated U after distinguishing whether it is a swallowing function in the preparation stage, the oral stage, or the pharyngeal stage, detailed countermeasures can be taken for each corresponding symptom. Moreover, although the details will be described later, the proposal unit 150 can propose countermeasures corresponding to the evaluation result to the person to be evaluated U.
  • the output unit 140 outputs the evaluation result of the swallowing function of the person to be evaluated U evaluated by the evaluation unit 130 (step S104).
  • the output unit 140 outputs the evaluation result of the swallowing function of the evaluated person U evaluated by the evaluation unit 130 to the suggestion unit 150.
  • the output unit 140 may output the evaluation result to the mobile terminal 300.
  • the output unit 140 may include a communication interface that performs wired communication or wireless communication, for example.
  • the output unit 140 acquires image data of an image corresponding to the evaluation result from the storage unit 160 and transmits the acquired image data to the mobile terminal 300.
  • examples of such image data (evaluation results) are shown in FIGS. 12 to 16.
  • the evaluation result is a two-stage evaluation result of OK or NG.
  • OK means normal and NG means abnormal.
  • the evaluation result is not limited to a two-stage evaluation result, and may be a finer evaluation result in which the degree of evaluation is divided into three or more stages. That is, the threshold corresponding to each feature amount included in the reference data 161 stored in the storage unit 160 is not limited to a single threshold, and a plurality of thresholds may be used. Specifically, for a certain feature amount, the evaluation result may be normal when the value is equal to or greater than a first threshold, slightly abnormal when it is smaller than the first threshold and greater than a second threshold, and abnormal when it is equal to or less than the second threshold.
  • a circle mark or the like may be shown instead of OK (normal)
  • a triangle mark or the like may be shown instead of slightly abnormal
  • a cross mark or the like may be shown instead of NG (abnormal).
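A sketch of this finer, three-level grading with two thresholds per feature and the circle, triangle, and cross marks mentioned above follows; the concrete threshold values and feature values are placeholders.

    # Threshold values, feature values, and mark characters below are placeholders.
    MARKS = {"normal": "○", "slightly abnormal": "△", "abnormal": "×"}

    def grade(value: float, first_threshold: float, second_threshold: float) -> str:
        # Three-level grading: normal at or above the first threshold, slightly
        # abnormal between the two thresholds, abnormal at or below the second.
        if value >= first_threshold:
            return "normal"
        if value > second_threshold:
            return "slightly abnormal"
        return "abnormal"

    for value in (0.8, 0.5, 0.2):
        level = grade(value, first_threshold=0.7, second_threshold=0.3)
        print(value, level, MARKS[level])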
  • normality and abnormality may not be shown for each swallowing function, and for example, only items that are suspected of lowering the swallowing function may be shown.
  • the image data of the image corresponding to the evaluation result is, for example, a table as shown in FIGS. 12 to 16.
  • the person to be evaluated U can check such a table, which shows the evaluation results after distinguishing whether each function is a swallowing function in the preparation stage, the oral stage, or the pharyngeal stage. For example, if the person to be evaluated U knows in advance what measures should be taken when each of the swallowing functions in the preparation period, the oral period, and the pharyngeal period declines, the person to be evaluated U can take detailed countermeasures by checking such a table.
  • the proposing unit 150 makes a proposal regarding swallowing to the person to be evaluated U by collating the evaluation result output by the output unit 140 with the predetermined proposal data 162.
  • the proposal data 162 includes proposal contents regarding swallowing for the person to be evaluated U corresponding to each combination of evaluation results for the swallowing function in the preparation period, the oral period, and the pharyngeal period.
  • the storage unit 160 includes data (for example, an image, a moving image, sound, text, etc.) indicating the proposal content.
  • the suggestion unit 150 makes a proposal regarding swallowing to the person to be evaluated U using such data.
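Matching a combination of per-stage evaluation results against the proposal data 162 could, for example, be a simple keyed lookup. The keys, proposal texts, and fallback below are illustrative assumptions; the patent only states that proposal contents correspond to combinations of evaluation results for the preparation, oral, and pharyngeal stages.

    # Illustrative sketch of collating evaluation-result combinations with proposal data.
    PROPOSAL_DATA = {
        ("NG", "NG", "NG"): "Soften hard foods and reduce the amount put into the mouth at one time.",
        ("NG", "OK", "OK"): "Chop hard foods finely or soften them before eating.",
        ("OK", "OK", "OK"): "No particular countermeasure is needed at this time.",
    }

    def propose(preparation: str, oral: str, pharyngeal: str) -> str:
        # Collate the combination of per-stage evaluation results with the proposal data.
        return PROPOSAL_DATA.get((preparation, oral, pharyngeal),
                                 "Please consult a specialist for detailed advice.")

    print(propose("NG", "NG", "NG"))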
  • FIGS. 12 to 16 show examples of evaluation results obtained by evaluating the swallowing function of the person to be evaluated U after distinguishing whether it is a swallowing function in the preparation stage, the oral stage, or the pharyngeal stage.
  • in the evaluation results shown in FIG. 12, the motor function of the tongue in the preparation period, the motor function of the tongue in the oral and pharyngeal stages, and the pharyngeal motor function and laryngeal motor function in the pharyngeal stage are NG.
  • the other swallowing functions are OK.
  • there is a possibility that there is a problem in the chewing ability because the motor function of the tongue in the preparation period is NG.
  • avoiding hard-to-eat foods can result in unbalanced nutrition and take time to eat.
  • because the motor function of the tongue in the oral and pharyngeal stages and the pharyngeal and laryngeal motor functions in the pharyngeal stage are NG, there may be a problem in swallowing the bolus. As a result, swallowing may take time or choking may occur.
  • the proposal unit 150 collates the combination of the evaluation results with the proposal data 162 to make a proposal corresponding to the combination.
  • for example, the suggestion unit 150 proposes softening hard foods and reducing the amount of food put into the mouth at one time. This is because, by reducing the amount of food put into the mouth at one time, it becomes possible to chew without difficulty, and the bolus becomes smaller and easier to swallow.
  • the suggestion unit 150 may propose, via the mobile terminal 300 and using an image, text, voice, or the like, content such as “Let's reduce the amount put into the mouth at one time and eat slowly.”
  • the suggestion unit 150 proposes to thicken the liquid contained in the food.
  • the suggestion unit 150 proposes, via the mobile terminal 300 and using an image, text, voice, or the like, content such as “Let's add thickness to liquids such as soup before drinking them.”
  • the saliva secretion function in the preparation period is NG, and the other swallowing functions are OK.
  • because the saliva secretion function is NG, the bolus cannot be formed properly, and it becomes difficult to swallow dry foods.
  • as a result, nutrition becomes unbalanced or eating takes time.
  • the proposal unit 150 collates the combination of the evaluation results with the proposal data 162 to make a proposal corresponding to the combination. Specifically, when eating food (bread, cake, grilled fish, rice cracker, etc.) that absorbs moisture in the oral cavity, it is proposed to eat while taking moisture. This is because it becomes easy to form a bolus with water taken instead of saliva, and the difficulty of swallowing can be eliminated.
  • the suggestion unit 150 uses the mobile terminal 300 to present content such as “Let's take water together when eating bread” or “It may be a good idea to devise ways of eating grilled fish and the like.”
  • the proposal unit 150 collates the combination of the evaluation results with the proposal data 162 to make a proposal corresponding to the combination. Specifically, when eating hard food (vegetables, meat, etc.), it is proposed to make it fine or soft before eating. This is because even if there is a problem with the chewing ability and the occlusal ability, it becomes possible to eat hard food.
  • the suggestion unit 150 uses the mobile terminal 300 to present content such as “If something is hard and difficult to eat, let's chop it up small” or “Leafy vegetables are difficult to eat, but rather than avoiding them and losing nutrition, we recommend taking them actively after softening or chopping them.”
  • the recognition function of the tongue and the secretory function of saliva in the preparation period are NG, and the other swallowing functions are OK.
  • the oral cavity is in an unclean state because the tongue recognition function and saliva secretion function in the preparation period are NG.
  • the appetite is lost due to a decrease in taste, which may lead to aspiration pneumonia due to a malnutrition state.
  • the proposal unit 150 collates the combination of the evaluation results with the proposal data 162 to make a proposal corresponding to the combination.
  • in this case, oral care is proposed. This is because oral care can eliminate the unclean state of the oral cavity and restore the recognition function of the tongue.
  • the suggestion unit 150 may use personal information (for example, an address) of the evaluated person U acquired by the acquisition unit 110.
  • the suggestion unit 150 presents, via the mobile terminal 300 and using an image, text, voice, or the like, a map showing dentists or medical institutions in the vicinity of the person to be evaluated U that can examine the swallowing function and that the person can visit.
  • the tongue recognition function and saliva secretion function in the preparation period are OK, and the other swallowing functions are NG.
  • the swallowing function may be reduced in the preparation period, the oral period, and the pharyngeal period.
  • specifically, the muscular strength of the lips is presumed to have declined due to a decline in the facial muscle function in the preparation period, the masseter muscle to have weakened due to deterioration of the occlusal state of the teeth in the preparation period, and the tongue motor function to have declined in the preparation period, the oral period, and the pharyngeal period; overall muscular strength is therefore expected to have declined, suggesting a risk of sarcopenia.
  • the proposal unit 150 collates the combination of the evaluation results with the proposal data 162 to make a proposal corresponding to the combination.
  • the suggestion unit 150 may use personal information (for example, age, weight) of the evaluated person U acquired by the acquisition unit 110.
  • the suggestion unit 150 proposes, via the mobile terminal 300 and using an image, text, voice, or the like, content such as “Let's take protein. Since your current weight is 60 kg, take 20 g to 24 g of protein per meal, for a total of 60 g to 72 g over three meals.” and “To make food easier to swallow, let's eat it together with thickened liquids such as soup.”
  • the proposal unit 150 proposes specific training content related to rehabilitation.
  • the suggestion unit 150 uses the mobile terminal 300 to present examples such as whole-body muscle strength training appropriate to the age of the person to be evaluated U (for example, training that repeats standing up and sitting down), training to recover lip muscle strength (for example, training that repeats blowing out and sucking in breath), and training to recover tongue muscle strength (for example, training that moves the tongue in and out and up, down, left, and right).
  • installation of an application for such rehabilitation may be proposed.
  • the details of the training actually performed during rehabilitation may be recorded. A specialist (such as a doctor, dentist, speech therapist, or nurse) can then check the recorded contents and reflect them in subsequent rehabilitation.
  • the evaluation unit 130 does not need to evaluate the swallowing function of the person to be evaluated U after distinguishing whether it is a swallowing function in the preparation period, the oral period, or the pharyngeal period. That is, the evaluation unit 130 may simply evaluate which of the swallowing functions of the person to be evaluated U has deteriorated.
  • the suggestion unit 150 may make a proposal described below according to a combination of evaluation results for each swallowing function.
  • the suggestion unit 150 may present a code indicating a food form such as a code of “swallowing adjusted meal classification 2013” of the Japanese Society for Swallowing Rehabilitation.
  • when the person to be evaluated U purchases a product suited to dysphagia, it is difficult to describe the desired “meal form” in words; however, since such products correspond one-to-one with the codes, a suitable meal can be purchased easily by using the code.
  • the proposal unit 150 may present a site for purchasing such products and enable them to be purchased over the Internet. For example, after the swallowing function has been evaluated via the mobile terminal 300, the same mobile terminal 300 may be used to make the purchase.
  • the suggestion unit 150 may present other products that supplement the nutrition so that the nutrition of the evaluated person U is not biased.
  • the proposing unit 150 may determine the nutritional status of the person to be evaluated U using the personal information (for example, body weight, BMI (Body Mass Index), serum albumin value, eating rate, etc.) of the person to be evaluated U acquired by the acquisition unit 110, and then present a product that supplements the lacking nutrition.
  • the suggestion unit 150 may propose a posture at the time of eating. This is because the ease of swallowing food varies depending on the posture.
  • for example, the proposing unit 150 proposes eating in a reclined posture in which the path from the pharynx to the trachea does not easily become a straight line.
  • the suggestion unit 150 may present a menu in consideration of nutritional bias due to a decrease in the swallowing function (present a menu site describing such a menu).
  • a menu site is a site where ingredients and cooking procedures necessary for completing a menu are described.
  • the proposal unit 150 may present a menu that takes into account the nutritional imbalance caused by a decline in the swallowing function.
  • the suggestion unit 150 may present a menu that is nutritionally balanced over a specific period, such as one week.
  • the suggestion unit 150 may transmit information indicating how finely food should be chopped or how soft it should be made to a cooking appliance that supports IoT (Internet of Things). In this way, food can be chopped or softened to the correct degree, and the person to be evaluated U is saved the time and effort of chopping or softening the food.
  • as described above, in the acquisition step, at least two of the following are acquired: audio data obtained by collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence; a first image obtained by imaging the face or neck of the person to be evaluated U in a non-contact manner; and a second image obtained by imaging the inside of the oral cavity of the person to be evaluated U in a non-contact manner.
  • by acquiring, in a non-contact manner, voice data suitable for evaluating the swallowing function and a first image or second image suitable for evaluating the swallowing function, it becomes possible to evaluate the swallowing function of the person to be evaluated U easily. That is, the swallowing function of the person to be evaluated U can be evaluated simply by having the person to be evaluated U speak a predetermined syllable or a predetermined sentence toward a sound collecting device such as the mobile terminal 300, or by imaging the face, neck, or inside of the mouth of the person to be evaluated U with an imaging device such as the mobile terminal 300. Moreover, in the present invention, the swallowing function of the person to be evaluated U is evaluated using at least two of the voice data, the first image, and the second image.
  • as the swallowing function of the person to be evaluated U, at least one of the motor function of the facial muscles, the motor function of the tongue, the recognition function of the tongue, the secretion function of saliva, the occlusal state of the teeth, the motor function of the masticatory muscles, the motor function of the pharynx, and the motor function of the larynx may be evaluated.
  • for example, the facial muscle motor function in the preparation period, the tongue recognition function in the preparation period, the tongue motor function in the preparation period, the occlusal state of the teeth in the preparation period, the masticatory muscle motor function in the preparation period, the saliva secretion function in the preparation period, the tongue motor function in the oral period, the tongue motor function in the pharyngeal period, the pharyngeal motor function in the pharyngeal period, or the laryngeal motor function in the pharyngeal period can be evaluated.
  • the swallowing function evaluation method may further include an output step (step S104) for outputting an evaluation result.
  • the swallowing function evaluation method may further include a proposing step (step S105) of making a proposal regarding swallowing to the person to be evaluated U by collating the output evaluation result with predetermined data.
  • in this way, the person to be evaluated U can receive a proposal about what countermeasures to take regarding swallowing when the swallowing function has declined. For example, aspiration can be suppressed and aspiration pneumonia prevented by having the person to be evaluated U perform rehabilitation based on the proposal or eat meals based on the proposal, and malnutrition due to a decline in the swallowing function can also be reduced.
  • in the suggestion step, at least one of a proposal related to meals corresponding to the evaluation result of the swallowing function and a proposal related to exercise corresponding to the evaluation result of the swallowing function may be made.
  • the to-be-evaluated person U can receive a suggestion of what kind of meal should be performed or what kind of exercise should be performed when the swallowing function is lowered.
  • personal information of the person to be evaluated U may be acquired.
  • in this way, a more effective proposal can be made to the person to be evaluated U by combining the evaluation result of the swallowing function of the person to be evaluated U with the personal information.
  • as described above, the swallowing function evaluation device 100 includes: the acquisition unit 110, which acquires at least two of voice data obtained by collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence, a first image obtained by imaging the face or neck of the person to be evaluated U in a non-contact manner, and a second image obtained by imaging the inside of the oral cavity of the person to be evaluated U in a non-contact manner; the calculation unit 120, which calculates respective feature amounts from the voice data, first image, and second image acquired by the acquisition unit 110; the evaluation unit 130, which evaluates the swallowing function of the person to be evaluated U from the feature amounts calculated by the calculation unit 120; and the output unit 140, which outputs the evaluation result produced by the evaluation unit 130.
  • this makes it possible to provide the swallowing function evaluation device 100, which can easily evaluate the swallowing function of the person to be evaluated U.
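As an informal sketch of how the four units could fit together in such a device, the following Python example uses placeholder functions and values; the actual feature extraction from the audio data and images is not shown and all names are assumptions.

    # Placeholder pipeline sketch: acquisition -> calculation -> evaluation -> output.
    def acquire() -> dict:
        # Acquisition unit 110: at least two of audio data, first image, second image.
        return {"audio_data": "voice.wav", "first_image": "face.png"}

    def calculate_features(inputs: dict) -> dict:
        # Calculation unit 120: one feature value per acquired input (placeholder values).
        return {name: 0.5 for name in inputs}

    def evaluate(features: dict, thresholds: dict) -> dict:
        # Evaluation unit 130: threshold comparison per feature, as described earlier.
        return {name: ("OK" if value >= thresholds.get(name, 0.5) else "NG")
                for name, value in features.items()}

    def output(results: dict) -> None:
        # Output unit 140: printed here; in the embodiment, passed to the
        # suggestion unit 150 and sent to the mobile terminal 300.
        print(results)

    output(evaluate(calculate_features(acquire()), thresholds={"audio_data": 0.4}))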
  • as described above, the swallowing function evaluation system 200 includes the swallowing function evaluation device 100 and a device (in this embodiment, the mobile terminal 300) that images the face, neck, or inside of the oral cavity of the person to be evaluated U, or collects, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence. The acquisition unit 110 of the swallowing function evaluation device 100 acquires at least two of: voice data obtained when the device collects, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence; a first image obtained when the device images the face or neck of the person to be evaluated U in a non-contact manner; and a second image obtained when the device images the inside of the oral cavity of the person to be evaluated U in a non-contact manner.
  • this makes it possible to provide the swallowing function evaluation system 200, which can easily evaluate the swallowing function of the person to be evaluated U.
  • in the above embodiment, the reference data 161 is predetermined data, but it may be updated based on evaluation results obtained when an expert actually diagnoses the swallowing function of the person to be evaluated U. This can improve the evaluation accuracy of the swallowing function. Machine learning may also be used to improve the evaluation accuracy of the swallowing function.
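One possible, assumed way to update a threshold in the reference data 161 from expert diagnoses is to refit it against labeled samples. The simple error-minimizing rule below is an illustration only, not a method defined in this description.

    # Assumed update rule: choose the candidate threshold that misclassifies the
    # fewest expert-labeled samples.
    def refit_threshold(samples):
        # samples: list of (feature_value, expert_label) pairs, label "OK" or "NG";
        # values at or above the returned threshold are treated as "OK".
        candidates = sorted({value for value, _ in samples})
        best_threshold, best_errors = None, None
        for threshold in candidates:
            errors = sum((value >= threshold) != (label == "OK") for value, label in samples)
            if best_errors is None or errors < best_errors:
                best_threshold, best_errors = threshold, errors
        return best_threshold

    print(refit_threshold([(0.2, "NG"), (0.4, "NG"), (0.6, "OK"), (0.8, "OK")]))  # -> 0.6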
  • similarly, the proposal data 162 is predetermined data, but the person to be evaluated U may rate the proposal content, and the proposal data 162 may be updated based on that rating. For example, if a proposal that assumes the person to be evaluated U cannot chew is made based on a certain feature amount even though the person to be evaluated U can actually chew without problems, the person rates the proposal as wrong. By updating the proposal data 162 based on this rating, the same erroneous proposal is no longer made for the same feature amount. In this way, the proposal content regarding swallowing for the person to be evaluated U can be made more effective. Machine learning may also be used to make the proposals regarding swallowing more effective.
  • the evaluation result of the swallowing function may be stored as big data together with personal information and used for machine learning.
  • the proposal content regarding swallowing may be accumulated as big data together with personal information and used for machine learning.
  • the mobile terminal 300 is an imaging device and a sound collection device, but is not limited thereto.
  • in the acquisition step, the acquisition unit 110 does not have to acquire voice data obtained by collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence. In that case, the mobile terminal 300 does not have to serve as a sound collecting device, and the swallowing function evaluation system 200 does not have to include a sound collecting device.
  • in the above embodiment, the swallowing function evaluation method includes the suggestion step (step S105) of making a proposal regarding swallowing, but this step does not have to be included.
  • the swallowing function evaluation device 100 may not include the suggestion unit 150.
  • the personal information of the person to be evaluated U is acquired, but it is not necessary to acquire it.
  • the acquisition unit 110 may not acquire the personal information of the evaluated person U.
  • the steps in the swallowing function evaluation method may be executed by a computer (computer system).
  • the present invention can be realized as a program for causing a computer to execute the steps included in these methods.
  • the present invention can be realized as a non-transitory computer-readable recording medium such as a CD-ROM on which the program is recorded.
  • for example, each step is executed by a computer executing the program using hardware resources such as a CPU, a memory, and an input/output circuit. That is, each step is executed by the CPU acquiring data from the memory, the input/output circuit, or the like, performing computation, and outputting the computation result to the memory, the input/output circuit, or the like.
  • each component included in the swallowing function evaluation device 100 and the swallowing function evaluation system 200 of the above embodiment may be realized as a dedicated or general-purpose circuit.
  • each component included in the swallowing function evaluation device 100 and the swallowing function evaluation system 200 of the above embodiment may be realized as an LSI (Large Scale Integration), which is an integrated circuit (IC).
  • the integrated circuit is not limited to an LSI, and may be realized by a dedicated circuit or a general-purpose processor.
  • a programmable FPGA (Field Programmable Gate Array) or a reconfigurable processor in which connection and setting of circuit cells inside the LSI can be reconfigured may be used.

Abstract

A swallowing function evaluation method that includes: an acquisition step (step S101) for contactlessly acquiring at least two of audio data that is obtained by contactlessly collecting the audio produced when an evaluee speaks a prescribed syllable or a prescribed sentence, a first image that is contactlessly captured of the face or neck of the evaluee, and a second image that is contactlessly captured of the inside of the oral cavity of the evaluee; a calculation step (step S102) for calculating a feature value from each of the acquired at least two of the audio data, first image, and second image; and an evaluation step (step S103) for evaluating the swallowing function of the evaluee from the calculated feature values.

Description

Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system
The present invention relates to a swallowing function evaluation method, a program, a swallowing function evaluation device, and a swallowing function evaluation system that can evaluate the swallowing function of a person to be evaluated.
 摂食嚥下障害には、誤嚥、低栄養、食べることの楽しみの喪失、脱水、体力・免疫力の低下、口内汚染および誤嚥性肺炎等のリスクがあり、摂食嚥下障害を予防することが求められている。従来から、摂食嚥下機能を評価することで、例えば、適切な食形態の食事を摂食する、適切な機能回復へのリハビリなどを行う等の摂食嚥下障害への対応がなされており、その評価方法には様々なものがある。例えば、被評価者の首に摂食嚥下機能を評価するための器具を装着させ、摂食嚥下機能評価指標(マーカー)として、咽頭運動特徴量を取得し、被評価者の摂食嚥下機能を評価する方法が開示されている(例えば、特許文献1参照)。 Eating dysphagia has risks such as aspiration, malnutrition, loss of eating pleasure, dehydration, weakness in physical strength and immunity, oral contamination and aspiration pneumonia, and prevent dysphagia Is required. Conventionally, by evaluating the swallowing function, for example, eating a meal with an appropriate dietary form, rehabilitation to recovering an appropriate function, etc. has been handled, There are various evaluation methods. For example, a device for evaluating the swallowing function is attached to the neck of the person to be evaluated, the pharyngeal movement feature quantity is acquired as a swallowing function evaluation index (marker), and the person's swallowing function is evaluated. A method for evaluation is disclosed (for example, see Patent Document 1).
JP 2017-23676 A
 しかしながら、上記特許文献1に開示された方法では、被評価者に器具を装着する必要があり、被評価者に不快感を与える場合がある。また、歯科医師、歯科衛生士、言語聴覚士または内科医師等の専門家による視診、問診または触診等によっても摂食嚥下機能を評価することはできるが、例えば、脳卒中などで摂食嚥下機能関連の麻痺が起きたり、摂食嚥下関連の器官(例えば、舌、軟口蓋または咽頭等)の手術等により摂食嚥下障害を引き起こしたりした場合等、摂食嚥下障害が重症化してから専門家が診断するという場合が多い。しかし、高齢者は、加齢による影響で、ずっとむせていたり、食べこぼしをしたりしているにもかかわらず、高齢だから当然の症状であるとして摂食嚥下機能の低下が見過ごされることがある。摂食嚥下の低下が見過ごされることで、例えば食事量の低下からくる低栄養を招き、低栄養が免疫力の低下を招く。加えて、誤嚥もしやすく、誤嚥と免疫力低下が結果として誤嚥性肺炎に至らしめるおそれにつながる悪循環を招く。 However, in the method disclosed in Patent Document 1, it is necessary to attach an instrument to the person to be evaluated, which may give the person to be evaluated uncomfortable. In addition, the swallowing function can be evaluated by visual inspection, interview, or palpation by a specialist such as a dentist, dental hygienist, speech auditor, or internal medicine doctor. Diagnosis by experts after dysphagia becomes serious, such as when paralysis occurs or when a dysphagia is caused by surgery on an organ related to dysphagia (eg, tongue, soft palate or pharynx) There are many cases to do. However, due to the effects of aging, elderly people may be overlooked or spilled, but they may overlook a decline in swallowing function as a natural symptom because they are old. . By overlooking the lowering of swallowing, for example, undernutrition resulting from a decrease in the amount of meal is caused, and undernutrition causes a decrease in immunity. In addition, aspiration is easy, and aspiration and reduced immunity result in a vicious circle that can lead to aspiration pneumonia.
In view of this, an object of the present invention is to provide a swallowing function evaluation method and the like that enable easy evaluation of the swallowing function of a person to be evaluated.
 本発明の一態様に係る摂食嚥下機能評価方法は、被評価者が所定の音節または所定の文を発話した音声を非接触により集音することで得られる音声データ、前記被評価者の顔または首を非接触により撮像することで得られる第1画像、および、前記被評価者の口腔内を非接触により撮像することで得られる第2画像の少なくとも2つを取得する取得ステップと、取得した前記音声データ、前記第1画像および前記第2画像の少なくとも2つからそれぞれの特徴量を算出する算出ステップと、算出した前記特徴量から、前記被評価者の摂食嚥下機能を評価する評価ステップと、を含む。 The method for evaluating a swallowing function according to one aspect of the present invention includes: audio data obtained by collecting sound by a non-contact of a person who utters a predetermined syllable or a predetermined sentence; and the face of the person to be evaluated Or an acquisition step of acquiring at least two of a first image obtained by imaging the neck in a non-contact manner and a second image obtained by imaging the oral cavity of the evaluated person in a non-contact manner; A calculation step of calculating each feature amount from at least two of the audio data, the first image, and the second image, and an evaluation for evaluating a swallowing function of the evaluated person from the calculated feature amount Steps.
A program according to one aspect of the present invention is a program for causing a computer to execute the above swallowing function evaluation method.
 また、本発明の一態様に係る摂食嚥下機能評価装置は、被評価者が所定の音節または所定の文を発話した音声を非接触により集音することで得られる音声データ、前記被評価者の顔または首を非接触により撮像することで得られる第1画像、および、前記被評価者の口腔内を非接触により撮像することで得られる第2画像の少なくとも2つを取得する取得部と、前記取得部が取得した前記音声データ、前記第1画像および前記第2画像の少なくとも2つからそれぞれの特徴量を算出する算出部と、前記算出部が算出した前記特徴量から、前記被評価者の摂食嚥下機能を評価する評価部と、前記評価部が評価した評価結果を出力する出力部と、を備える。 In addition, the device for evaluating a swallowing function according to one aspect of the present invention includes: audio data obtained by collecting, in a non-contact manner, a voice in which an evaluated person utters a predetermined syllable or a predetermined sentence; An acquisition unit that acquires at least two of a first image obtained by imaging the face or neck of the subject without contact and a second image obtained by imaging the oral cavity of the evaluation subject without contact; From the audio data acquired by the acquisition unit, the first image, and the second image, a calculation unit that calculates each feature amount, and the feature amount calculated by the calculation unit, the evaluation target The evaluation part which evaluates a person's ingestion swallowing function, and the output part which outputs the evaluation result which the said evaluation part evaluated are provided.
 また、本発明の一態様に係る摂食嚥下機能評価システムは、上記の摂食嚥下機能評価装置と、前記被評価者の顔、首もしくは口腔内を撮像し、または、前記被評価者が前記所定の音節もしくは前記所定の文を発話した音声を非接触により集音する装置と、を備え、前記摂食嚥下機能評価装置の取得部は、前記被評価者が所定の音節または所定の文を発話した音声を前記装置が非接触により集音することで得られる音声データ、前記被評価者の顔または首を前記装置が非接触により撮像することで得られる第1画像、および、前記被評価者の口腔内を前記装置が非接触により撮像することで得られる第2画像の少なくとも2つを取得する。 In addition, a swallowing function evaluation system according to an aspect of the present invention is the above-described swallowing function evaluation device, and images the face, neck, or oral cavity of the evaluator, or the evaluator A device that collects a predetermined syllable or a voice that utters the predetermined sentence in a non-contact manner, and the acquisition unit of the swallowing function evaluation device receives the predetermined syllable or the predetermined sentence. Voice data obtained when the device collects the spoken voice without contact, a first image obtained when the device images the face or neck of the person to be evaluated without contact, and the evaluated At least two of the second images obtained by imaging the inside of a person's mouth without contact with the device are acquired.
According to the swallowing function evaluation method and the like of the present invention, the swallowing function of a person to be evaluated can be evaluated easily.
図1は、実施の形態に係る摂食嚥下機能評価システムの構成を示す図である。FIG. 1 is a diagram illustrating a configuration of a swallowing function evaluation system according to an embodiment. 図2は、実施の形態に係る摂食嚥下機能評価システムの特徴的な機能構成を示すブロック図である。FIG. 2 is a block diagram illustrating a characteristic functional configuration of the swallowing function evaluation system according to the embodiment. 図3は、実施の形態に係る摂食嚥下機能評価方法による被評価者の摂食嚥下機能を評価する処理手順を示すフローチャートである。FIG. 3 is a flowchart illustrating a processing procedure for evaluating a person to be swallowed by the swallowing function evaluation method according to the embodiment. 図4は、実施の形態に係る摂食嚥下機能評価方法による被評価者の音声の取得方法の概要を示す図である。FIG. 4 is a diagram illustrating an outline of a method for acquiring a speech of an evaluated person by the method for evaluating a swallowing function according to the embodiment. 図5は、実施の形態に係る摂食嚥下機能評価方法による被評価者の顔または首の撮像により得られる第1画像の取得方法の概要を示す図である。FIG. 5 is a diagram illustrating an outline of a method for acquiring a first image obtained by imaging the face or neck of the person to be evaluated by the swallowing function evaluation method according to the embodiment. 図6は、実施の形態に係る摂食嚥下機能評価方法による被評価者の口腔内の撮像により得られる第2画像の取得方法の概要を示す図である。FIG. 6 is a diagram illustrating an outline of a method for acquiring a second image obtained by imaging of the evaluation subject in the oral cavity by the swallowing function evaluation method according to the embodiment. 図7は、被評価者が発話した音声を示す音声データの一例を示す図である。FIG. 7 is a diagram illustrating an example of voice data indicating voice uttered by the person to be evaluated. 図8は、フォルマント周波数を説明するための周波数スペクトル図である。FIG. 8 is a frequency spectrum diagram for explaining the formant frequency. 図9は、フォルマント周波数の時間変化の一例を示す図である。FIG. 9 is a diagram illustrating an example of a temporal change in formant frequency. 図10は、舌の色を特徴量として算出する方法を説明するための図である。FIG. 10 is a diagram for explaining a method of calculating the tongue color as a feature amount. 図11は、準備期、口腔期および咽頭期における摂食嚥下機能の具体例と、各機能が低下したときの症状を示す図である。FIG. 11 is a diagram illustrating a specific example of the swallowing function in the preparation period, the oral period, and the pharyngeal period, and symptoms when each function decreases. 図12は、評価結果の一例を示す図である。FIG. 12 is a diagram illustrating an example of the evaluation result. 図13は、評価結果の一例を示す図である。FIG. 13 is a diagram illustrating an example of the evaluation result. 図14は、評価結果の一例を示す図である。FIG. 14 is a diagram illustrating an example of the evaluation result. 図15は、評価結果の一例を示す図である。FIG. 15 is a diagram illustrating an example of the evaluation result. 図16は、評価結果の一例を示す図である。FIG. 16 is a diagram illustrating an example of the evaluation result.
 以下、実施の形態について、図面を参照しながら説明する。なお、以下で説明する実施の形態は、いずれも包括的または具体的な例を示すものである。以下の実施の形態で示される数値、形状、材料、構成要素、構成要素の配置位置および接続形態、ステップ、ステップの順序等は、一例であり、本発明を限定する主旨ではない。また、以下の実施の形態における構成要素のうち、最上位概念を示す独立請求項に記載されていない構成要素については、任意の構成要素として説明される。 Hereinafter, embodiments will be described with reference to the drawings. It should be noted that each of the embodiments described below shows a comprehensive or specific example. Numerical values, shapes, materials, constituent elements, arrangement positions and connection forms of constituent elements, steps, order of steps, and the like shown in the following embodiments are merely examples, and are not intended to limit the present invention. In addition, among the constituent elements in the following embodiments, constituent elements that are not described in the independent claims indicating the highest concept are described as optional constituent elements.
 なお、各図は模式図であり、必ずしも厳密に図示されたものではない。また、各図において、実質的に同一の構成に対しては同一の符号を付しており、重複する説明は省略または簡略化される場合がある。 Each figure is a schematic diagram and is not necessarily shown strictly. Moreover, in each figure, the same code | symbol is attached | subjected to the substantially same structure, and the overlapping description may be abbreviate | omitted or simplified.
(Embodiment)
[Swallowing function]
The present invention relates to a method for evaluating a swallowing function and the like. First, the swallowing function will be described.
 摂食嚥下機能とは、食物を認識して口に取り込みそして胃に至るまでの一連の過程を達成するのに必要な人体の機能である。摂食嚥下機能は、先行期、準備期、口腔期、咽頭期および食道期の5つの段階からなる。 The swallowing function is a function of the human body that is necessary for recognizing food, taking it into the mouth, and achieving a series of processes from the stomach to the stomach. The swallowing function consists of five stages: the early phase, the preparation phase, the oral phase, the pharyngeal phase, and the esophageal phase.
 摂食嚥下における先行期(認知期とも呼ばれる)では、食物の形、硬さおよび温度等が判断される。先行期における摂食嚥下機能は、例えば、目の視認機能等である。先行期において、食物の性質および状態が認知され、食べ方、唾液分泌および姿勢といった摂食に必要な準備が整えられる。 In the preceding period (also called the cognitive period) of swallowing, the shape, hardness, temperature, etc. of food are determined. The swallowing function in the preceding period is, for example, an eye viewing function. In the preceding period, the nature and condition of the food are recognized and the necessary preparations for eating such as how to eat, salivation and posture are made.
 摂食嚥下における準備期(咀嚼期とも呼ばれる)では、口腔内に取り込まれた食物が歯で噛み砕かれ、すり潰され(つまり咀嚼され)、そして、咀嚼された食物を舌によって唾液と混ぜ合わせられて食塊にまとめられる。準備期における摂食嚥下機能は、例えば、食物をこぼさずに口腔内に取り込むための表情筋(口唇の筋肉および頬の筋肉等)の運動機能、食物の味を認識したり硬さを認識したりするための舌の認識機能、食物を歯に押し当てたり細かくなった食物を唾液と混ぜ合わせてまとめたりするための舌の運動機能、食物を噛み砕きすり潰すための歯の咬合状態、歯と頬の間に食物が入り込むのを防ぐ頬の運動機能、咀嚼するための筋肉の総称である咀嚼筋(咬筋および側頭筋等)の運動機能(咀嚼機能)、ならびに、細かくなった食物をまとめるための唾液の分泌機能等である。咀嚼機能は、歯の咬合状態、咀嚼筋の運動機能、舌の機能などに影響される。準備期におけるこれらの摂食嚥下機能によって、食塊は飲み込みやすい物性(サイズ、塊、粘度)となるため、食塊が口腔内から咽頭を通って胃までスムーズに移動しやすくなる。 During the preparatory period for swallowing (also called chewing), food taken into the oral cavity is chewed with teeth, crushed (ie chewed), and the chewed food is mixed with saliva by the tongue And put together into a bolus. The swallowing function during the preparation period, for example, recognizes the motor function of facial muscles (such as lip muscles and cheek muscles) that take food into the oral cavity without spilling it, recognizes the taste of food, and recognizes hardness. Tongue recognition function, or tongue movement function to push food to teeth or mix finely mixed food with saliva, occlusal state of teeth to chew and crush food, tooth and The cheek movement function that prevents food from entering between the cheeks, the movement function of the masticatory muscles (such as the masseter and temporal muscles), which is the generic name of the muscles used for mastication, and the fine food For example, the saliva secretion function. The masticatory function is affected by the occlusal state of the teeth, the function of the masticatory muscles, the function of the tongue, and the like. Due to these swallowing functions during the preparation period, the bolus has physical properties that make it easy to swallow (size, lump, viscosity), and the bolus easily moves from the oral cavity through the pharynx to the stomach.
 摂食嚥下における口腔期では、舌(舌の先端)が持ち上がり、食塊が口腔内から咽頭に移動させられる。口腔期における摂食嚥下機能は、例えば、食塊を咽頭へ移動させるための舌の運動機能、咽頭と鼻腔との間を閉鎖する軟口蓋の上昇機能等である。 During the oral phase of swallowing, the tongue (tip of the tongue) is lifted and the bolus is moved from the oral cavity to the pharynx. The swallowing function in the oral phase includes, for example, a tongue movement function for moving the bolus to the pharynx, a soft palate raising function for closing the space between the pharynx and the nasal cavity, and the like.
 摂食嚥下における咽頭期では、食塊が咽頭に達すると嚥下反射が生じて短時間(約1秒)の間に食塊が食道へ送られる。具体的には、軟口蓋が挙上して鼻腔と咽頭との間が塞がれ、舌の根元(具体的には舌の根元を支持する舌骨)および喉頭が挙上して食塊が咽頭を通過し、その際に喉頭蓋が下方に反転し気管の入口が塞がれ、誤嚥が生じないように食塊が食道へ送られる。咽頭期における摂食嚥下機能は、例えば、鼻腔と咽頭との間を塞ぐための咽頭の運動機能(具体的には、軟口蓋を挙上する運動機能)、食塊を咽頭へ送るための舌(具体的には舌の根元)の運動機能、食塊を咽頭から食道へ送ったり、食塊が咽頭へ流れ込んできた際に、声門が閉じて気管を塞ぎ、その上から喉頭蓋が気管の入り口に垂れ下がることで蓋をしたりする喉頭の運動機能等である。 In the pharyngeal phase during swallowing, when the bolus reaches the pharynx, a swallowing reflex occurs and the bolus is sent to the esophagus within a short time (about 1 second). Specifically, the soft palate is raised and the space between the nasal cavity and the pharynx is closed, the base of the tongue (specifically the hyoid bone that supports the base of the tongue) and the larynx are raised, and the bolus becomes the pharynx In that case, the epiglottis is inverted downward, the trachea entrance is blocked, and the bolus is sent to the esophagus so that aspiration does not occur. The swallowing function in the pharyngeal phase includes, for example, a pharyngeal motor function (specifically, a motor function that raises the soft palate) to close the space between the nasal cavity and the pharynx, and a tongue ( Specifically, when the bolus moves from the pharynx to the esophagus, or when the bolus flows into the pharynx, the glottis closes and closes the trachea, and the epiglottis from above reaches the entrance to the trachea It is a motor function of the larynx that is covered by hanging down.
 摂食嚥下における食道期では、食道壁の蠕動運動が誘発され、食塊が食道から胃へと送り込まれる。食道期における摂食嚥下機能は、例えば、食塊を胃へ移動させるための食道の蠕動機能等である。 In the esophageal phase of swallowing, peristaltic movement of the esophageal wall is induced, and the bolus is sent from the esophagus to the stomach. The swallowing function in the esophageal stage is, for example, a peristaltic function of the esophagus for moving the bolus to the stomach.
 例えば、人は加齢とともに、健康状態からプレフレイル期およびフレイル期を経て要介護状態へとなる。摂食嚥下機能の低下(オーラルフレイルとも呼ばれる)は、プレフレイル期に現れはじめるとされている。摂食嚥下機能の低下は、フレイル期から続く要介護状態への進行を早める要因となり得る。このため、プレフレイル期の段階で摂食嚥下機能がどのように低下しているかに気付き、事前に予防や改善を行うことで、フレイル期から続く要介護状態に陥りにくくなり健やかで自立した暮らしを長く保つことができるようになる。 For example, as a person ages, he goes from a healthy state to a state requiring care through a pre-frail period and a flail period. Decreased swallowing function (also called oral flail) is said to begin to appear during the prefrail period. Decreased swallowing function can be a factor that accelerates the progression from the flail phase to the state of need for care. For this reason, we notice how the swallowing function has declined at the pre-frail stage, and by performing prevention and improvement in advance, it becomes difficult to fall into the nursing care state that continues from the flail stage, and a healthy and independent life You can keep it long.
 本発明によれば、被評価者が発した音声、被評価者の顔もしくは首を撮像することで得られる第1画像(静止画像または動画)、または、被評価者の口腔内を撮像することで得られる第2画像(静止画像または動画)から被評価者の摂食嚥下機能を評価することができる。摂食嚥下機能が低下している被評価者が発話した音声、摂食嚥下機能が低下している被評価者の顔の動き等または首における喉頭隆起(喉仏)の位置、および、摂食嚥下機能が低下している被評価者の口腔内の歯または舌には特定の特徴がみられ、これらを特徴量として算出することで、被評価者の摂食嚥下機能を評価することができるためである。以下では、準備期、口腔期および咽頭期における摂食嚥下機能の評価について説明する。本発明は、摂食嚥下機能評価方法、当該方法をコンピュータに実行させるプログラム、当該コンピュータの一例である摂食嚥下機能評価装置、および、摂食嚥下機能評価装置を備える摂食嚥下機能評価システムによって実現される。以下では、摂食嚥下機能評価システムを示しながら、摂食嚥下機能評価方法等について説明する。 According to the present invention, a voice uttered by the evaluator, a first image (still image or moving image) obtained by imaging the face or neck of the evaluator, or an image of the evaluator's oral cavity It is possible to evaluate the swallowing function of the person to be evaluated from the second image (still image or moving image) obtained in the above. Voice spoken by the subject with reduced swallowing function, movement of the subject's face with poor swallowing function, or position of the laryngeal protuberance (throat Buddha) in the neck, and swallowing Specific features are found in the teeth or tongue in the oral cavity of the subject whose function is reduced, and by calculating these as feature quantities, the subject's eating and swallowing function can be evaluated It is. In the following, the evaluation of the swallowing function in the preparation period, the oral period and the pharyngeal period will be described. The present invention relates to a method for evaluating a swallowing function, a program for causing a computer to execute the method, a swallowing function evaluating device that is an example of the computer, and a swallowing function evaluating system including the swallowing function evaluating device. Realized. Below, the swallowing function evaluation method etc. are demonstrated, showing the swallowing function evaluation system.
[Configuration of the swallowing function evaluation system]
The configuration of the swallowing function evaluation system according to the embodiment will be described.
FIG. 1 is a diagram illustrating the configuration of a swallowing function evaluation system 200 according to the embodiment.
 摂食嚥下機能評価システム200は、被評価者Uの音声、被評価者Uの顔または首を撮像することで得られる第1画像、および、被評価者Uの口腔内を撮像することで得られる第2画像のうちの少なくとも2つを解析することで被評価者Uの摂食嚥下機能を評価するためのシステムであり、図1に示されるように、摂食嚥下機能評価装置100と、携帯端末300とを備える。なお、摂食嚥下機能評価システム200は、静止画像だけでなく、動画を解析することで、被評価者Uの摂食嚥下機能を評価してもよい。以下では、1つの静止画像および複数の連続する画像(動画)を、単に第1画像または第2画像とも呼ぶことがある。 The swallowing function evaluation system 200 is obtained by imaging the voice of the person being evaluated U, the first image obtained by imaging the face or neck of the person being evaluated U, and the oral cavity of the person being evaluated U. 1 is a system for evaluating the swallowing function of the evaluation subject U by analyzing at least two of the second images, and as shown in FIG. 1, as shown in FIG. A portable terminal 300. In addition, the swallowing function evaluation system 200 may evaluate the eating / swallowing function of the person to be evaluated U by analyzing not only a still image but also a moving image. Hereinafter, one still image and a plurality of continuous images (moving images) may be simply referred to as a first image or a second image.
 摂食嚥下機能評価装置100は、携帯端末300によって、被評価者Uが発した音声を示す音声データ、被評価者Uの顔または首を撮像することで得られる第1画像、および、被評価者Uの口腔内を撮像することで得られる第2画像の少なくとも2つを取得し、取得した音声データ、第1画像および前記第2画像の少なくとも2つから被評価者Uの摂食嚥下機能を評価する装置である。 The eating and swallowing function evaluation device 100 uses the mobile terminal 300 to generate voice data indicating a voice uttered by the evaluated person U, a first image obtained by imaging the face or neck of the evaluated person U, and the evaluated The user U obtains at least two of the second images obtained by imaging the oral cavity, and the subject U's eating swallowing function from at least two of the acquired voice data, the first image, and the second image It is a device that evaluates.
 携帯端末300は、被評価者Uの顔、首もしくは口腔内を撮像し、または、被評価者Uが所定の音節もしくは所定の文を発話した音声を非接触により集音する装置であり、集音した音声を示す音声データ、撮像により得られる第1画像または第2画像を摂食嚥下機能評価装置100へ出力する。本実施の形態では、携帯端末300は、上記撮像を行う撮像装置であり、かつ、上記集音を行う集音装置であり、具体的には、撮像機能(カメラ)およびマイクを有するスマートフォンまたはタブレット等である。被評価者Uの口腔内を撮像する際には、フラッシュ撮影を行う必要があるため、携帯端末300は、フラッシュ機能(光源)も有している。なお、携帯端末300は、撮像機能、フラッシュ機能および集音機能を有する装置であれば、スマートフォンまたはタブレット等に限らず、例えば、ノートPC等であってもよい。また、摂食嚥下機能評価システム200は、携帯端末300の代わりに、それぞれ別体に設けられた集音装置(マイク)と撮像装置(カメラ)と光源とを備えていてもよい。また、摂食嚥下機能評価システム200は、後述するが、被評価者Uの個人情報を取得するための入力インターフェースを備えていてもよい。当該入力インターフェースは、例えば、キーボード、タッチパネル等の入力機能を有するものであれば特に限定されない。 The portable terminal 300 is a device that picks up the voice, the neck, or the mouth of the person to be evaluated U, or collects the sound of the person to be evaluated U speaking a predetermined syllable or a predetermined sentence in a non-contact manner. Audio data indicating the sound that has been sounded, and the first image or the second image obtained by imaging are output to the swallowing function evaluation apparatus 100. In the present embodiment, the mobile terminal 300 is an imaging device that performs the above imaging and a sound collecting device that performs the above sound collection. Specifically, the mobile terminal 300 is a smartphone or tablet having an imaging function (camera) and a microphone. Etc. Since it is necessary to perform flash photography when imaging the oral cavity of the person to be evaluated U, the mobile terminal 300 also has a flash function (light source). Note that the mobile terminal 300 is not limited to a smartphone or a tablet as long as the device has an imaging function, a flash function, and a sound collection function, and may be a notebook PC, for example. Moreover, the swallowing function evaluation system 200 may include a sound collection device (microphone), an imaging device (camera), and a light source provided separately from each other, instead of the mobile terminal 300. Moreover, although the swallowing function evaluation system 200 is mentioned later, you may be provided with the input interface for acquiring the to-be-evaluated person's U personal information. The input interface is not particularly limited as long as it has an input function such as a keyboard and a touch panel.
 また、携帯端末300は、ディスプレイを有し、摂食嚥下機能評価装置100から出力される画像データに基づいた画像等を表示する表示装置であってもよい。なお、表示装置は携帯端末300でなくてもよく、液晶パネルまたは有機ELパネルなどによって構成されるモニタ装置であってもよい。つまり、携帯端末300と表示装置とが別体に設けられていてもよい。さらに、携帯端末300が用いられない場合には、撮像装置(カメラ)と集音装置(マイク)と光源と入力インターフェースと表示装置とが別体に設けられていてもよい。 Further, the mobile terminal 300 may be a display device that has a display and displays an image or the like based on image data output from the swallowing function evaluation device 100. The display device may not be the portable terminal 300 but may be a monitor device configured by a liquid crystal panel or an organic EL panel. That is, the mobile terminal 300 and the display device may be provided separately. Furthermore, when the portable terminal 300 is not used, the imaging device (camera), the sound collection device (microphone), the light source, the input interface, and the display device may be provided separately.
 摂食嚥下機能評価装置100と携帯端末300とは、音声データ、撮像により得られる画像、または、後述する評価結果を示す画像を表示するための画像データ等を送受信可能であればよく、有線で接続されていてもよいし、無線で接続されていてもよい。 The swallowing function evaluation device 100 and the portable terminal 300 only need to be able to transmit and receive audio data, an image obtained by imaging, or image data for displaying an image indicating an evaluation result to be described later. It may be connected or may be connected wirelessly.
 摂食嚥下機能評価装置100は、携帯端末300によって集音された音声データに基づいて被評価者Uの音声を分析し、携帯端末300による撮像によって得られる第1画像に基づいて被評価者Uの顔の動き等または首における喉頭隆起の位置を分析し、または、携帯端末300による撮像によって得られる第2画像に基づいて被評価者Uの口腔内の歯または舌の状態を分析し、分析した結果から被評価者Uの摂食嚥下機能を評価し、評価結果を出力する。例えば、摂食嚥下機能評価装置100は、評価結果を示す画像を表示するための画像データ、もしくは、評価結果に基づいて生成された被評価者Uに対する摂食嚥下に関する提案をするためのデータを携帯端末300へ出力する。こうすることで、摂食嚥下機能評価装置100は、被評価者Uへ摂食嚥下機能の程度や摂食嚥下機能の低下の予防等するための提案を通知できるため、例えば、被評価者Uは摂食嚥下機能の低下の予防や改善を行うことができる。 The eating and swallowing function evaluation device 100 analyzes the voice of the person to be evaluated U based on the voice data collected by the mobile terminal 300, and based on the first image obtained by imaging by the mobile terminal 300, the person to be evaluated U Or the position of the laryngeal protuberance in the neck, or the state of the teeth or tongue in the oral cavity of the evaluation subject U based on the second image obtained by imaging with the portable terminal 300 From the result, the swallowing function of the person to be evaluated U is evaluated, and the evaluation result is output. For example, the swallowing function evaluation apparatus 100 uses image data for displaying an image indicating the evaluation result, or data for making a proposal regarding swallowing for the person to be evaluated U generated based on the evaluation result. Output to the mobile terminal 300. In this way, the swallowing function evaluation device 100 can notify the person to be evaluated U of the degree of the swallowing function and the proposal for preventing the deterioration of the swallowing function. Can prevent or improve the deterioration of swallowing function.
 なお、摂食嚥下機能評価装置100は、例えば、パーソナルコンピュータであるが、サーバ装置であってもよい。また、摂食嚥下機能評価装置100は、携帯端末300であってもよい。つまり、以下で説明する摂食嚥下機能評価装置100が有する機能を携帯端末300が有していてもよい。 Note that the swallowing function evaluation apparatus 100 is, for example, a personal computer, but may be a server apparatus. In addition, the swallowing function evaluation device 100 may be a portable terminal 300. That is, the portable terminal 300 may have the function of the swallowing function evaluation device 100 described below.
 図2は、実施の形態に係る摂食嚥下機能評価装置100の特徴的な機能構成を示すブロック図である。摂食嚥下機能評価装置100は、取得部110と、算出部120と、評価部130と、出力部140と、提案部150と、記憶部160とを備える。 FIG. 2 is a block diagram showing a characteristic functional configuration of the swallowing function evaluation apparatus 100 according to the embodiment. The swallowing function evaluation apparatus 100 includes an acquisition unit 110, a calculation unit 120, an evaluation unit 130, an output unit 140, a suggestion unit 150, and a storage unit 160.
 取得部110は、被評価者Uが所定の音節または所定の文を発話した音声を非接触により集音することで得られる音声データ、被評価者Uの顔または首を非接触により撮像することで得られる第1画像、および、被評価者Uの口腔内を非接触により撮像することで得られる第2画像の少なくとも2つを取得する。本実施の形態では、取得部110は、音声データ、第1画像および第2画像のすべてを取得する。なお、取得部110は、音声データおよび第1画像の2つのみを取得してもよいし、音声データおよび第2画像の2つのみを取得してもよいし、第1画像および第2画像の2つのみを取得してもよい。また、取得部110は、さらに、被評価者Uの個人情報を取得してもよい。例えば、個人情報は携帯端末300に入力された情報であり、年齢、体重、身長、性別、BMI(Body Mass Index)、歯科情報(例えば、歯の数、入れ歯の有無、咬合支持の場所など)、血清アルブミン値または喫食率等である。なお、個人情報は、EAT-10(イート・テン)と呼ばれる嚥下スクリーニングツール、聖隷式嚥下質問紙または問診等により取得されてもよい。取得部110は、例えば、有線通信または無線通信を行う通信インターフェースである。 The acquisition unit 110 captures voice data obtained by non-contacting sound collection by the evaluated person U uttering a predetermined syllable or a predetermined sentence, and images the face or neck of the evaluated person U without contact. At least two of the first image obtained by the above and the second image obtained by imaging the inside of the oral cavity of the person to be evaluated U without contact are acquired. In the present embodiment, acquisition unit 110 acquires all of the audio data, the first image, and the second image. Note that the acquisition unit 110 may acquire only two of the audio data and the first image, may acquire only the audio data and the second image, or may acquire the first image and the second image. Only two of these may be acquired. Further, the acquisition unit 110 may further acquire personal information of the person to be evaluated U. For example, personal information is information input to the mobile terminal 300, such as age, weight, height, sex, BMI (Body Mass Index), dental information (for example, number of teeth, presence of dentures, location of occlusal support, etc.) Serum albumin level or eating rate. The personal information may be acquired by a swallowing screening tool called EAT-10 (Eat Ten), a sacramental swallowing questionnaire or an interview. The acquisition unit 110 is, for example, a communication interface that performs wired communication or wireless communication.
 The calculation unit 120 is a processing unit that analyzes the voice data, the first image, or the second image of the person to be evaluated U acquired by the acquisition unit 110. Specifically, the calculation unit 120 is realized by a processor, a microcomputer, or a dedicated circuit.
 The calculation unit 120 calculates a feature amount from each of at least two of the voice data, the first image, and the second image acquired by the acquisition unit 110. In the present embodiment, the calculation unit 120 calculates feature amounts from all of the voice data, the first image, and the second image. Note that the calculation unit 120 may calculate feature amounts from only the voice data and the first image, from only the voice data and the second image, or from only the first image and the second image. The feature amount calculated from the voice data is a numerical value indicating a characteristic of the voice of the person to be evaluated U, calculated from the voice data and used by the evaluation unit 130 to evaluate the eating and swallowing function of the person to be evaluated U. The feature amount calculated from the first image is a numerical value indicating a characteristic such as the facial movement of the person to be evaluated U or the position of the laryngeal prominence on the neck, calculated from the first image and used by the evaluation unit 130 to evaluate the eating and swallowing function of the person to be evaluated U. The feature amount calculated from the second image is a numerical value indicating a characteristic such as the state of the teeth or tongue in the oral cavity of the person to be evaluated U, calculated from the second image and used by the evaluation unit 130 to evaluate the eating and swallowing function of the person to be evaluated U. Details of the calculation unit 120 will be described later.
 The evaluation unit 130 collates the feature amounts calculated by the calculation unit 120 with the reference data 161 stored in the storage unit 160, and evaluates the eating and swallowing function of the person to be evaluated U. For example, the evaluation unit 130 may evaluate the eating and swallowing function of the person to be evaluated U while distinguishing whether the evaluation concerns the preparatory phase, the oral phase, or the pharyngeal phase. Specifically, the evaluation unit 130 is realized by a processor, a microcomputer, or a dedicated circuit. Details of the evaluation unit 130 will be described later.
 The output unit 140 outputs the evaluation result of the eating and swallowing function of the person to be evaluated U obtained by the evaluation unit 130 to the suggestion unit 150. The output unit 140 also outputs the evaluation result to the storage unit 160, where it is stored. Specifically, the output unit 140 is realized by a processor, a microcomputer, or a dedicated circuit.
 The suggestion unit 150 makes a suggestion regarding eating and swallowing to the person to be evaluated U by collating the evaluation result output by the output unit 140 with predetermined suggestion data 162. The suggestion unit 150 may also collate the personal information acquired by the acquisition unit 110 with the suggestion data 162 when making a suggestion regarding eating and swallowing to the person to be evaluated U. The suggestion unit 150 outputs the suggestion to the mobile terminal 300. The suggestion unit 150 is realized by, for example, a processor, a microcomputer, or a dedicated circuit, together with a communication interface that performs wired or wireless communication. Details of the suggestion unit 150 will be described later.
 The storage unit 160 is a storage device that stores reference data 161 indicating the relationship between feature amounts and people's eating and swallowing functions, suggestion data 162 indicating the relationship between evaluation results of the eating and swallowing function and suggestion contents, and personal information data 163 indicating the above-described personal information of the person to be evaluated U. The reference data 161 is referred to by the evaluation unit 130 when the degree of the eating and swallowing function of the person to be evaluated U is evaluated. The suggestion data 162 is referred to by the suggestion unit 150 when a suggestion regarding eating and swallowing is made to the person to be evaluated U. The personal information data 163 is, for example, data acquired via the acquisition unit 110; it may also be stored in the storage unit 160 in advance. The storage unit 160 is realized by, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a semiconductor memory, an HDD (Hard Disk Drive), or the like.
 The storage unit 160 also stores the programs executed by the calculation unit 120, the evaluation unit 130, the output unit 140, and the suggestion unit 150, the image data representing the evaluation result that is used when the evaluation result of the eating and swallowing function of the person to be evaluated U is output, and data such as images, videos, audio, or text representing the suggestion contents. The storage unit 160 may also store the instruction images and audio data described later.
 Although not illustrated, the eating and swallowing function evaluation device 100 may include an instruction unit for instructing the person to be evaluated U to pronounce a predetermined syllable or a predetermined sentence and to image his or her face, neck, and oral cavity. Specifically, the instruction unit acquires, from the storage unit 160, the image data of instruction images and the audio data for instructing the person to be evaluated U to pronounce a predetermined syllable or a predetermined sentence and to image his or her face or neck and oral cavity, and outputs the image data and the audio data to the mobile terminal 300.
 [Processing procedure of the eating and swallowing function evaluation method]
 Next, a specific processing procedure of the eating and swallowing function evaluation method executed by the eating and swallowing function evaluation device 100 will be described.
 FIG. 3 is a flowchart showing the processing procedure for evaluating the eating and swallowing function of the person to be evaluated U by the eating and swallowing function evaluation method according to the embodiment. FIG. 4 is a diagram showing an outline of how the voice of the person to be evaluated U is acquired in the eating and swallowing function evaluation method. FIG. 5 is a diagram showing an outline of how the first image, obtained by imaging the face or neck of the person to be evaluated U, is acquired in the eating and swallowing function evaluation method. FIG. 6 is a diagram showing an outline of how the second image, obtained by imaging the inside of the oral cavity of the person to be evaluated U, is acquired in the eating and swallowing function evaluation method.
 First, the instruction unit instructs the person to be evaluated U to pronounce a predetermined syllable or a predetermined sentence (a sentence containing specific sounds) and to image his or her face or neck and the inside of the oral cavity (step S100).
 For example, in step S100, the instruction unit acquires the image data of the instruction image for the person to be evaluated U stored in the storage unit 160 and outputs the image data to the mobile terminal 300. Then, as shown in (a) of FIG. 4, the instruction image for the person to be evaluated U is displayed on the mobile terminal 300. In (a) of FIG. 4, the instructed predetermined sentence is "きたからきたかたたたきき", but it may instead be "きたかぜとたいよう" (kitakaze to taiyō, "the north wind and the sun"), "あいうえお" (a-i-u-e-o), "ぱぱぱぱぱ…" (pa-pa-pa…), "たたたたた…" (ta-ta-ta…), "かかかかか…" (ka-ka-ka…), "ららららら…" (ra-ra-ra…), "ぱんだのかたたき", or the like. The pronunciation instruction need not use a predetermined sentence and may instead use a single predetermined syllable such as "き" (ki), "た" (ta), "か" (ka), "ら" (ra), "ぜ" (ze), or "ぱ" (pa). The pronunciation instruction may also be an instruction to utter a meaningless phrase of two or more syllables consisting only of vowels, such as "えお" (eo) or "いえあ" (iea), and may be an instruction to utter such a meaningless phrase repeatedly.
 The instruction unit may also give the above instruction using instruction audio rather than an instruction image: it acquires the audio data of the instruction audio for the person to be evaluated U stored in the storage unit 160 and outputs the audio data to the mobile terminal 300. Furthermore, instead of using an instruction image or instruction audio, an evaluator (a family member, a physician, or the like) who wishes to evaluate the eating and swallowing function of the person to be evaluated U may give the instruction in his or her own voice.
 Also, for example, in step S100, the instruction unit acquires the audio data of the instruction audio for the person to be evaluated U stored in the storage unit 160 and outputs the audio data to the mobile terminal 300. Then, as shown in (a) of FIG. 5 and (a) of FIG. 6, the instruction audio for the person to be evaluated U is output from the mobile terminal 300. In (a) of FIG. 5, the instructed content is "Open your mouth wide and take a picture", but it may instead be "Record a video while moving your mouth", "Record a video while opening your mouth and moving your tongue", "Record a video while closing your mouth and puffing out your cheeks", "Clench your teeth and take a picture", "Raise the corners of your mouth and take a picture", "Take a picture of your Adam's apple", "Open your mouth wide and take a picture with the flash", or the like. The imaging may be performed by the person to be evaluated U himself or herself, or by an evaluator (a family member, a physician, or the like) who wishes to evaluate the eating and swallowing function of the person to be evaluated U. The instruction unit may also give the instruction using an instruction image rather than instruction audio: it acquires the image data of the instruction image for the person to be evaluated U stored in the storage unit 160 and outputs the image data to the mobile terminal 300. Furthermore, instead of using an instruction image or instruction audio, the above evaluator (a family member, a physician, or the like) may instruct the person to be evaluated U in his or her own voice.
 For example, the predetermined syllable may be composed of a consonant and a vowel following the consonant. In Japanese, such predetermined syllables include "き" (ki), "た" (ta), "か" (ka), and "ぜ" (ze). "き" (ki) is composed of the consonant "k" and the vowel "i" following it. "た" (ta) is composed of the consonant "t" and the vowel "a" following it. "か" (ka) is composed of the consonant "k" and the vowel "a" following it. "ぜ" (ze) is composed of the consonant "z" and the vowel "e" following it.
 Also, for example, the predetermined sentence may include a syllable portion consisting of a consonant, a vowel following the consonant, and a consonant following the vowel. In Japanese, such a syllable portion is, for example, the "kaz" portion of "かぜ" (kaze). Specifically, this syllable portion consists of the consonant "k", the vowel "a" following the consonant, and the consonant "z" following the vowel.
 Also, for example, the predetermined sentence may include a character string in which syllables containing vowels occur in succession. In Japanese, such a character string is, for example, "あいうえお" (a-i-u-e-o).
 Also, for example, the predetermined sentence may include a predetermined word. In Japanese, such words include "たいよう" (taiyō, the sun) and "きたかぜ" (kitakaze, the north wind).
 Also, for example, the predetermined sentence may include a phrase in which a syllable composed of a consonant and a vowel following the consonant is repeated. In Japanese, such phrases include "ぱぱぱぱぱ…", "たたたたた…", "かかかかか…", and "ららららら…". "ぱ" (pa) is composed of the consonant "p" and the vowel "a" following it. "た" (ta) is composed of the consonant "t" and the vowel "a" following it. "か" (ka) is composed of the consonant "k" and the vowel "a" following it. "ら" (ra) is composed of the consonant "r" and the vowel "a" following it.
 Next, as shown in FIG. 3, the acquisition unit 110 acquires, via the mobile terminal 300, the voice data of the person to be evaluated U who received the instruction in step S100, the first image of the face or neck of the person to be evaluated U, and the second image of the inside of the oral cavity of the person to be evaluated U (step S101). As shown in (b) of FIG. 4, in step S101, for example, the person to be evaluated U utters a predetermined sentence such as "きたからきたかたたたきき" toward the mobile terminal 300, and the acquisition unit 110 acquires the predetermined sentence or predetermined syllable uttered by the person to be evaluated U as voice data. As shown in (b) of FIG. 5, in step S101, for example, the person to be evaluated U photographs his or her own face with the mouth open using the mobile terminal 300, whereby the acquisition unit 110 acquires the first image obtained by imaging the face of the person to be evaluated U. As shown in (b) of FIG. 6, in step S101, for example, the person to be evaluated U photographs the inside of his or her own oral cavity using the mobile terminal 300 (flash photography), whereby the acquisition unit 110 acquires the second image obtained by imaging the inside of the oral cavity of the person to be evaluated U.
 Next, the calculation unit 120 calculates feature amounts from the voice data, the first image, and the second image acquired by the acquisition unit 110 (step S102), and the evaluation unit 130 evaluates the eating and swallowing function of the person to be evaluated U from the feature amounts calculated by the calculation unit 120 (step S103).
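 As a purely illustrative sketch (the embodiment does not prescribe any particular software structure), the flow of steps S102 and S103 can be pictured as a dictionary of feature amounts that is compared, entry by entry, against per-feature thresholds standing in for the reference data 161. All feature names, threshold values, and decline directions below are placeholder assumptions.

```python
# Purely illustrative sketch of steps S102-S103: feature amounts are compared
# one by one against per-feature thresholds standing in for reference data 161.
# Feature names, threshold values, and the "declined if above/below" directions
# are placeholders, not values defined by the embodiment.
REFERENCE_DATA = {
    # feature name: (threshold, True if "at or above threshold" means declined)
    "sound_pressure_diff_t_a": (20.0, False),   # small difference -> decline (assumed)
    "time_kaz_syllable_s":     (0.50, True),    # long duration   -> decline
    "f1_std_dev_hz":           (80.0, True),    # large variation -> decline
    "syllable_count_5s":       (25,   False),   # few repetitions -> decline
}

def evaluate(features: dict) -> dict:
    """Return {feature: True if the corresponding function appears declined}."""
    result = {}
    for name, value in features.items():
        threshold, declined_if_above = REFERENCE_DATA[name]
        above = value >= threshold
        result[name] = above if declined_if_above else not above
    return result

# features = {"sound_pressure_diff_t_a": 12.3, "time_kaz_syllable_s": 0.62,
#             "f1_std_dev_hz": 95.0, "syllable_count_5s": 18}
# print(evaluate(features))
```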
 For example, when the voice data acquired by the acquisition unit 110 is obtained from the utterance of a predetermined syllable composed of a consonant and a vowel following the consonant, the calculation unit 120 calculates the sound pressure difference between the consonant and the vowel as a feature amount. This will be described with reference to FIG. 7.
 FIG. 7 is a diagram showing an example of voice data representing speech uttered by the person to be evaluated U. Specifically, FIG. 7 is a graph of the voice data obtained when the person to be evaluated U utters "きたからきたかたたたきき". The horizontal axis of the graph in FIG. 7 is time, and the vertical axis is power (sound pressure). The unit of power on the vertical axis of the graph in FIG. 7 is the decibel (dB).
 In the graph shown in FIG. 7, changes in sound pressure corresponding to "き", "た", "か", "ら", "き", "た", "か", "た", "た", "た", "き", "き" can be observed. In step S101 shown in FIG. 3, the acquisition unit 110 acquires the data shown in FIG. 7 from the person to be evaluated U as voice data. In step S102 shown in FIG. 3, the calculation unit 120 calculates, by a known method, the sound pressures of "k" and "i" in "き" (ki) and of "t" and "a" in "た" (ta) contained in the voice data shown in FIG. 7. When the person to be evaluated U has uttered "きたかぜとたいよう", the calculation unit 120 also calculates the sound pressures of "z" and "e" in "ぜ" (ze). From the calculated sound pressures of "t" and "a", the calculation unit 120 calculates the sound pressure difference ΔP1 between "t" and "a" as a feature amount. Similarly, the calculation unit 120 calculates the sound pressure difference ΔP3 between "k" and "i" and the sound pressure difference between "z" and "e" (not shown) as feature amounts.
 The reference data 161 includes a threshold corresponding to each sound pressure difference, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether each sound pressure difference is greater than or equal to the corresponding threshold.
 For example, in order to pronounce "き" (ki), the base of the tongue must be brought into contact with the soft palate. By evaluating the function of bringing the base of the tongue into contact with the soft palate (the sound pressure difference between "k" and "i"), the motor function of the tongue in the pharyngeal phase (including tongue pressure and the like) can be evaluated.
 For example, in order to pronounce "た" (ta), the tip of the tongue must be brought into contact with the upper palate just behind the front teeth. By evaluating this function (the sound pressure difference between "t" and "a"), the motor function of the tongue in the preparatory phase can be evaluated.
 For example, in order to pronounce "ぜ" (ze), the tip of the tongue must contact or approach the upper front teeth. The presence of teeth is important here, for example because the sides of the tongue are supported by the dentition. By evaluating the presence of the dentition including the upper front teeth (the sound pressure difference between "z" and "e"), the occlusal state of the teeth in the preparatory phase can be evaluated, for example by estimating whether many or few teeth remain, a small number of remaining teeth affecting masticatory ability.
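 The following minimal sketch shows one way the consonant-vowel sound pressure difference (for example ΔP1 for "t" and "a") could be computed, assuming the consonant and vowel segment boundaries have already been obtained by the known method the embodiment refers to; the segmentation itself is not part of this sketch, and the sign convention is an assumption.

```python
# Illustrative sketch only: computing the sound pressure difference between a
# consonant segment and the following vowel segment (e.g. "t" and "a" in "ta").
# Segment boundaries are assumed to be already known "by a known method"
# (manual or automatic phoneme segmentation); they are not produced here.
import numpy as np

def sound_pressure_db(segment: np.ndarray) -> float:
    """RMS level of a waveform segment, in dB (arbitrary reference)."""
    rms = np.sqrt(np.mean(segment.astype(float) ** 2))
    return 20.0 * np.log10(rms + 1e-12)

def consonant_vowel_diff(y: np.ndarray, sr: int,
                         consonant: tuple, vowel: tuple) -> float:
    """consonant, vowel: (start_s, end_s) times of the two segments."""
    c = y[int(consonant[0] * sr): int(consonant[1] * sr)]
    v = y[int(vowel[0] * sr): int(vowel[1] * sr)]
    # vowel level minus consonant level; the sign convention is an assumption
    return sound_pressure_db(v) - sound_pressure_db(c)

# delta_p1 = consonant_vowel_diff(y, sr, consonant=(1.20, 1.23), vowel=(1.23, 1.35))
# the evaluation unit compares ΔP1 with the corresponding threshold in
# reference data 161
```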
 Also, for example, when the voice data acquired by the acquisition unit 110 is obtained from the utterance of a predetermined sentence including a syllable portion consisting of a consonant, a vowel following the consonant, and a consonant following the vowel, the calculation unit 120 calculates the time required to utter that syllable portion as a feature amount.
 For example, when the person to be evaluated U utters a predetermined sentence including "かぜ" (kaze), the predetermined sentence includes a syllable portion consisting of the consonant "k", the vowel "a" following the consonant, and the consonant "z" following the vowel. The calculation unit 120 calculates the time required to utter this "k-a-z" syllable portion as a feature amount.
 The reference data 161 includes a threshold corresponding to the time required to utter the syllable portion, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the time required to utter the syllable portion is greater than or equal to the threshold.
 For example, the time required to utter a "consonant-vowel-consonant" syllable portion varies with the motor function of the tongue (such as tongue dexterity or tongue pressure). By evaluating the time required to utter this syllable portion, the motor function of the tongue in the preparatory phase, the oral phase, and the pharyngeal phase can be evaluated.
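 A minimal sketch of this feature follows: given assumed start and end timestamps of the "k-a-z" portion (for example from a phoneme alignment step not shown here), the feature is simply the elapsed time, which is then compared with a placeholder threshold.

```python
# Illustrative sketch only: the feature is the elapsed time of the
# consonant-vowel-consonant portion (e.g. "k-a-z" in "kaze"). The start and
# end times are assumed to come from a phoneme alignment step that the
# embodiment only refers to as a known method.
def syllable_portion_duration(start_s: float, end_s: float) -> float:
    """Time, in seconds, needed to utter the syllable portion."""
    return end_s - start_s

def is_tongue_function_declined(duration_s: float, threshold_s: float) -> bool:
    """Longer than the threshold suggests reduced tongue dexterity/pressure."""
    return duration_s >= threshold_s

# duration = syllable_portion_duration(2.41, 2.95)                    # hypothetical timestamps
# declined = is_tongue_function_declined(duration, threshold_s=0.5)   # placeholder threshold
```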
 Also, for example, when the voice data acquired by the acquisition unit 110 is obtained from the utterance of a predetermined sentence including a character string in which syllables containing vowels occur in succession, the calculation unit 120 calculates, as feature amounts, the amount of change in the first formant frequency or the second formant frequency obtained from the spectrum of the vowel portions, and also the variation of the first formant frequency or the second formant frequency obtained from the spectrum of the vowel portions.
 The first formant frequency is the peak frequency of the amplitude seen first when counting from the low-frequency side of the human voice, and it is known to readily reflect characteristics related to tongue movement (particularly vertical movement). It is also known to readily reflect characteristics related to the opening of the jaw.
 The second formant frequency is the peak frequency of the amplitude seen second when counting from the low-frequency side of the human voice, and among the resonances that the vocal-cord source undergoes in the vocal tract, the oral cavity including the lips and tongue, the nasal cavity, and so on, it is known to readily reflect the influence of the position of the tongue (particularly its front-back position). Also, since correct articulation is not possible when teeth are missing, the occlusal state of the teeth (the number of teeth) in the preparatory phase is considered to affect the second formant frequency. Likewise, since correct articulation is not possible when there is little saliva, the saliva secretion function in the preparatory phase is considered to affect the second formant frequency. Note that the motor function of the tongue, the saliva secretion function, or the occlusal state of the teeth (the number of teeth) may be evaluated from either the feature amounts obtained from the first formant frequency or those obtained from the second formant frequency.
 FIG. 8 is a frequency spectrum diagram for explaining formant frequencies. The horizontal axis of the graph shown in FIG. 8 is frequency [Hz], and the vertical axis is amplitude.
 As indicated by the broken line in FIG. 8, a plurality of peaks are observed in the data obtained by converting the horizontal axis of the voice data to frequency. Among these peaks, the frequency of the lowest-frequency peak is the first formant frequency F1. The frequency of the next-lowest peak after the first formant frequency F1 is the second formant frequency F2, and the frequency of the next-lowest peak after the second formant frequency F2 is the third formant frequency F3. In this way, the calculation unit 120 extracts the vowel portions from the voice data acquired by the acquisition unit 110 by a known method, converts the voice data of the extracted vowel portions into amplitude as a function of frequency to obtain the spectrum of the vowel portions, and calculates the formant frequencies from that spectrum.
 The graph shown in FIG. 8 is calculated by converting the voice data obtained from the person to be evaluated U into amplitude data as a function of frequency and obtaining its envelope. For the calculation of the envelope, cepstrum analysis, linear predictive coding (LPC), or the like is employed, for example.
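 The sketch below estimates F1 and F2 for a single vowel frame by LPC root-finding, one common way of realizing the envelope-based formant extraction mentioned above. The use of librosa and the LPC order of 12 are assumptions for illustration only, not details taken from the embodiment.

```python
# A minimal, illustrative sketch of estimating F1/F2 for one vowel frame via
# LPC root-finding. The device's actual envelope computation (cepstrum or LPC)
# is not specified in detail; librosa and the chosen LPC order are assumptions.
import numpy as np
import librosa

def estimate_formants(frame: np.ndarray, sr: int, order: int = 12) -> np.ndarray:
    """Return formant frequencies (Hz), lowest first, for one vowel frame."""
    frame = np.asarray(frame, dtype=float) * np.hamming(len(frame))  # reduce leakage
    a = librosa.lpc(frame, order=order)             # LPC coefficients (envelope model)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]               # one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)      # pole angles -> frequencies
    return np.sort(freqs[freqs > 90])               # drop near-DC poles; [0]~F1, [1]~F2

# y, sr = librosa.load("vowel_a.wav", sr=None)
# f = estimate_formants(y[:1024], sr)
# F1, F2 = f[0], f[1]
```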
 FIG. 9 is a diagram showing an example of the temporal change of formant frequencies. Specifically, FIG. 9 is a graph for explaining an example of the temporal change of the first formant frequency F1, the second formant frequency F2, and the third formant frequency F3.
 For example, the person to be evaluated U is made to utter a syllable string containing a plurality of consecutive vowels, such as "あいうえお". The calculation unit 120 calculates the first formant frequency F1 and the second formant frequency F2 of each of the vowels from the voice data representing the speech uttered by the person to be evaluated U. Furthermore, the calculation unit 120 calculates, as feature amounts, the amount of change (temporal change) of the first formant frequency F1 and the amount of change (temporal change) of the second formant frequency F2 over the character string of consecutive vowels.
 The reference data 161 includes a threshold corresponding to this amount of change, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the amount of change is greater than or equal to the threshold.
 The first formant frequency F1 indicates, for example, the opening of the jaw. The second formant frequency F2 reflects the influence of the front-back position of the tongue, and thus indicates whether tongue movement has declined in the preparatory phase and the pharyngeal phase, where that movement matters. The second formant frequency F2 also indicates, for example, that correct articulation is not possible because teeth are missing, that is, that the occlusal state of the teeth in the preparatory phase has deteriorated. Furthermore, the second formant frequency F2 indicates, for example, that correct articulation is not possible because there is little saliva, that is, that the saliva secretion function in the preparatory phase has declined. In other words, by evaluating the amount of change in the second formant frequency F2, the saliva secretion function in the preparatory phase can be evaluated.
 The calculation unit 120 also calculates, as a feature amount, the variation of the first formant frequency F1 over the character string of consecutive vowels. For example, when the voice data contains n vowels (n being a natural number), n first formant frequencies F1 are obtained, and the variation of the first formant frequency F1 is calculated using all or some of them. The degree of variation calculated as the feature amount is, for example, the standard deviation.
 The reference data 161 includes a threshold corresponding to this variation, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the variation is greater than or equal to the threshold.
 A large variation of the first formant frequency F1 (greater than or equal to the threshold) indicates, for example, that the vertical movement of the tongue is sluggish, that is, that the motor function of the tongue for pressing its tip against the upper palate and sending the bolus to the pharynx in the oral phase has declined. In other words, by evaluating the variation of the first formant frequency F1, the motor function of the tongue in the oral phase can be evaluated.
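 A minimal sketch of this variation feature follows; it assumes one F1 estimate per vowel (for example produced by a routine like the estimate_formants sketch above) and returns their standard deviation.

```python
# Illustrative sketch only: the variation feature is the standard deviation of
# the first formant frequency F1 measured once per vowel in the uttered string.
import numpy as np

def f1_standard_deviation(f1_per_vowel) -> float:
    """f1_per_vowel: one F1 estimate (Hz) per vowel, e.g. for a-i-u-e-o."""
    return float(np.std(np.asarray(f1_per_vowel, dtype=float)))

# f1_values = [720.0, 310.0, 350.0, 480.0, 500.0]   # hypothetical F1 values
# spread = f1_standard_deviation(f1_values)
# per the description above, a spread at or above the threshold in reference
# data 161 would suggest sluggish vertical tongue movement in the oral phase
```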
 Also, for example, the calculation unit 120 calculates, as a feature amount, the pitch (height) of the voice with which the person to be evaluated U uttered the predetermined syllable or the predetermined sentence.
 The reference data 161 includes a threshold corresponding to the pitch, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the pitch is greater than or equal to the threshold.
 Also, for example, when the voice data acquired by the acquisition unit 110 is obtained from the utterance of a predetermined sentence including a predetermined word, the calculation unit 120 calculates the time required to utter the predetermined word as a feature amount.
 For example, when the person to be evaluated U utters a predetermined sentence including "たいよう" (taiyō), the person to be evaluated U utters the character string "たいよう" after recognizing it as the word meaning "the sun". If it takes a long time to utter the predetermined word, the person to be evaluated U may have dementia. Here, the number of teeth is said to affect dementia: the number of teeth influences brain activity, and a decrease in the number of teeth reduces stimulation to the brain and increases the risk of developing dementia. In other words, a possible risk of dementia in the person to be evaluated U corresponds to the number of teeth, and further to the occlusal state of the teeth for chewing and grinding food in the preparatory phase. Therefore, a long time required to utter the predetermined word (greater than or equal to the threshold) indicates that the person to be evaluated U may have dementia, in other words, that the occlusal state of the teeth in the preparatory phase has deteriorated. That is, by evaluating the time the person to be evaluated U requires to utter the predetermined word, the occlusal state of the teeth in the preparatory phase can be evaluated.
 Note that the calculation unit 120 may calculate, as a feature amount, the time required to utter the entire predetermined sentence. In this case as well, the occlusal state of the teeth in the preparatory phase can likewise be evaluated by evaluating the time the person to be evaluated U requires to utter the entire predetermined sentence.
 Also, for example, when the voice data acquired by the acquisition unit 110 is obtained from the utterance of a predetermined sentence including a phrase in which a syllable composed of a consonant and a vowel following the consonant is repeated, the calculation unit 120 calculates, as a feature amount, the number of times the repeated syllable is uttered within a predetermined time (for example, 5 seconds).
 The reference data 161 includes a threshold corresponding to this number of times, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the number of times is greater than or equal to the threshold.
 For example, the person to be evaluated U utters a predetermined sentence including a phrase in which a syllable composed of a consonant and a vowel following the consonant is repeated, such as "ぱぱぱぱぱ…", "たたたたた…", "かかかかか…", or "ららららら…". A sketch of how such repetitions could be counted is shown after the explanations below.
 For example, in order to pronounce "ぱ" (pa), the mouth (lips) must be opened and closed vertically. When the function of opening and closing the lips has declined, "ぱ" cannot be uttered quickly at least the predetermined number of times (threshold) within the predetermined time. The action of opening and closing the lips is similar to the action, in the preparatory phase, of taking food into the oral cavity without spilling it. Therefore, the ability to utter "ぱ" (pa) quickly, that is, to open and close the lips quickly and repeatedly, corresponds to the motor function of the facial muscles for taking food into the oral cavity without spilling it in the preparatory phase. In other words, by evaluating the number of times "ぱ" (pa) is uttered within the predetermined time, the motor function of the facial muscles in the preparatory phase can be evaluated.
 For example, in order to pronounce "た" (ta), as described above, the tip of the tongue must be brought into contact with the upper palate behind the front teeth. This action is similar to the actions performed in the preparatory phase when pressing food against the teeth or mixing finely chewed food with saliva to form a bolus, and in the oral phase when lifting the tongue (its tip) to move the bolus from the oral cavity to the pharynx. Therefore, the ability to utter "た" (ta) quickly, that is, to bring the tip of the tongue quickly and repeatedly into contact with the upper palate behind the front teeth, corresponds to the motor function of the tongue for pressing food against the teeth and gathering finely chewed food with saliva in the preparatory phase, and for moving the bolus to the pharynx in the oral phase. In other words, by evaluating the number of times "た" (ta) is uttered within the predetermined time, the motor function of the tongue in the preparatory phase and in the oral phase can be evaluated.
 For example, in order to pronounce "か" (ka), the base of the tongue must be brought into contact with the soft palate, as with "き" (ki) described above. This action is similar to the action performed in the pharyngeal phase when passing (swallowing) the bolus through the pharynx. Furthermore, when taking food or drink into the mouth (preparatory phase) and when chewing food in the mouth to form a bolus (oral phase), the base of the tongue contacts the soft palate to prevent premature flow into the pharynx and to prevent choking, which is similar to the movement of the tongue when pronouncing "k". Therefore, the ability to utter "か" (ka) quickly, that is, to bring the base of the tongue quickly and repeatedly into contact with the soft palate, corresponds to the motor function of the tongue (specifically, the base of the tongue) for passing the bolus through the pharynx in the pharyngeal phase. In other words, by evaluating the number of times "か" (ka) is uttered within the predetermined time, the motor function of the tongue in the preparatory, oral, and pharyngeal phases can be evaluated. This motor function of the tongue also corresponds to the function of keeping food from flowing into the pharynx prematurely and the function of preventing choking.
 For example, in order to pronounce "ら" (ra), the tongue must be curled back. This action is similar to the action, in the preparatory phase, of mixing food with saliva to form a bolus. Therefore, the ability to utter "ら" (ra) quickly, that is, to curl the tongue quickly and repeatedly, corresponds to the motor function of the tongue for mixing food with saliva to form a bolus in the preparatory phase. In other words, by evaluating the number of times "ら" (ra) is uttered within the predetermined time, the motor function of the tongue in the preparatory phase can be evaluated.
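 The sketch below illustrates one way the number of syllable repetitions within a fixed window (for example 5 seconds of "ぱぱぱ…") could be counted, by peak-picking an RMS energy envelope. The frame and hop sizes and the peak-detection parameters are assumptions, not values defined by the embodiment.

```python
# Illustrative sketch only: counting how many times a repeated syllable such as
# "pa" is uttered within a fixed window by peak-picking the amplitude envelope.
# The actual counting method used by the device is not specified; hop length
# and peak thresholds below are assumptions.
import numpy as np
from scipy.signal import find_peaks

def count_syllables(y: np.ndarray, sr: int, window_s: float = 5.0) -> int:
    y = y[: int(window_s * sr)]                      # keep the first 5 seconds
    frame, hop = int(0.02 * sr), int(0.01 * sr)      # 20 ms frames, 10 ms hop
    env = np.array([
        np.sqrt(np.mean(y[i:i + frame] ** 2))        # RMS energy per frame
        for i in range(0, len(y) - frame, hop)
    ])
    env = env / (env.max() + 1e-12)                  # normalize
    # each syllable burst shows up as one energy peak; require peaks to be
    # at least ~100 ms apart and reasonably prominent
    peaks, _ = find_peaks(env, height=0.2, distance=int(0.1 * sr / hop))
    return len(peaks)

# count = count_syllables(y, sr)
# the evaluation unit compares `count` with the threshold in reference data 161
```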
 Also, for example, when the first image acquired by the acquisition unit 110 is a series of consecutive images obtained by imaging the mouth while the person to be evaluated U is moving it, the calculation unit 120 calculates the movement of the mouth in the consecutive images (video) as a feature amount. Specifically, the calculation unit 120 calculates the difference between the amount of movement of the left side of the mouth and that of the right side (referred to as the mouth left-right difference) as a feature amount.
 The reference data 161 includes a threshold corresponding to the mouth left-right difference, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the mouth left-right difference is greater than or equal to the threshold.
 A large mouth left-right difference (greater than or equal to the threshold) indicates, for example, paralysis on the left or right side of the mouth, that is, a decline in the motor function of the facial muscles for taking food into the oral cavity without spilling it in the preparatory phase. In other words, by evaluating the movement of the mouth of the person to be evaluated U, the motor function of the facial muscles in the preparatory phase can be evaluated.
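 As a minimal sketch, the mouth left-right difference can be expressed as the difference between the total path lengths travelled by the left and right mouth corners over the video, assuming a face landmark detector (not shown) has already produced their per-frame coordinates.

```python
# Illustrative sketch only: computing the "mouth left-right difference" from
# per-frame coordinates of the left and right mouth corners. How the device
# actually tracks the mouth is not specified; a face landmark detector is
# assumed to have produced (x, y) positions for each video frame.
import numpy as np

def mouth_left_right_difference(left_xy: np.ndarray, right_xy: np.ndarray) -> float:
    """left_xy, right_xy: arrays of shape (num_frames, 2)."""
    # total path length travelled by each mouth corner over the video
    left_move = np.sum(np.linalg.norm(np.diff(left_xy, axis=0), axis=1))
    right_move = np.sum(np.linalg.norm(np.diff(right_xy, axis=0), axis=1))
    return abs(left_move - right_move)

# diff = mouth_left_right_difference(left_corner_track, right_corner_track)
# a value at or above the threshold in reference data 161 would suggest
# asymmetric (possibly paralyzed) mouth movement
```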
 Also, for example, when the first image acquired by the acquisition unit 110 is an image obtained by imaging the mouth while the person to be evaluated U is opening it, the calculation unit 120 calculates the degree of opening of the mouth in that image as a feature amount.
 The reference data 161 includes a threshold corresponding to the degree of mouth opening, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the degree of mouth opening is greater than or equal to the threshold.
 A small degree of mouth opening (less than the threshold), that is, an inability to open the mouth wide, indicates, for example, a decline in the motor function of the facial muscles for taking food into the oral cavity without spilling it and in the occlusal function of the masseter and temporal muscles (masticatory muscles) in the preparatory phase. In other words, by evaluating the degree of mouth opening of the person to be evaluated U, the motor function of the facial muscles and of the masticatory muscles in the preparatory phase can be evaluated.
 Also, for example, when the first image acquired by the acquisition unit 110 is a series of consecutive images obtained by imaging the tongue while the person to be evaluated U is moving it with the mouth open, the calculation unit 120 calculates the movement of the tongue in the consecutive images (video) as a feature amount. Specifically, the calculation unit 120 calculates the length by which the tongue can be extended from the mouth as a feature amount. The calculation unit 120 also calculates the difference between the amount of movement of the tongue to the left and that to the right (referred to as the tongue left-right difference) as a feature amount. The calculation unit 120 further calculates the degree to which the tip of the tongue approaches the upper palate behind the front teeth as a feature amount.
 The reference data 161 includes a threshold corresponding to the length by which the tongue can be extended from the mouth, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether that length is greater than or equal to the threshold. The reference data 161 also includes a threshold corresponding to the tongue left-right difference, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the tongue left-right difference is greater than or equal to the threshold. The reference data 161 further includes a threshold corresponding to the degree of approach, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the degree of approach is greater than or equal to the threshold.
 A short length by which the tongue can be extended from the mouth (less than the threshold) indicates, for example, a decline in the motor function of the tongue in the oral phase (specifically, the base of the tongue, and more specifically the suprahyoid muscles). A large tongue left-right difference (greater than or equal to the threshold) indicates, for example, paralysis of the tongue, that is, a decline in the motor function of the tongue for pressing food against the teeth and gathering finely chewed food with saliva in the preparatory phase. A small degree of approach (less than the threshold) indicates, for example, a decline in the motor function of the tongue (suprahyoid muscles) in the oral phase. In other words, by evaluating the tongue movement of the person to be evaluated U, the motor function of the tongue in the preparatory phase and in the oral phase can be evaluated.
 Also, for example, when the first image acquired by the acquisition unit 110 is a series of consecutive images obtained by imaging the cheeks while the person to be evaluated U is puffing them out with the mouth closed, the calculation unit 120 calculates the movement of the cheeks in the consecutive images (video) as a feature amount. Specifically, the calculation unit 120 calculates, as a feature amount, whether the inflation of the cheeks can be maintained.
 The reference data 161 includes a threshold corresponding to the degree to which the cheek inflation is maintained, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the degree of cheek inflation is greater than or equal to the threshold.
 An inability to maintain the inflation of the cheeks (less than the threshold) indicates, for example, that the lips cannot be kept closed and air leaks from the oral cavity to the outside (incomplete lip closure), that is, a decline in the motor function of the facial muscles for taking food into the oral cavity without spilling it in the preparatory phase. An inability to inflate the cheeks at all (less than the threshold) indicates, for example, that the passage between the nasal cavity and the pharynx cannot be sealed and air leaks from the pharynx into the nasal cavity (velopharyngeal insufficiency), that is, a decline in the motor function of the pharynx (specifically, the soft palate) for sealing off the nasal cavity from the pharynx in the pharyngeal phase. In other words, by evaluating the cheek movement of the person to be evaluated U, the motor function of the facial muscles in the preparatory phase and the motor function of the pharynx in the pharyngeal phase can be evaluated.
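 A minimal sketch of the cheek-inflation feature follows, assuming some per-frame measure of cheek size (for example a pixel width between the cheek contours) is already available; the retention ratio used to decide whether the inflation is maintained is a placeholder.

```python
# Illustrative sketch only: judging whether cheek inflation is maintained from
# a per-frame measure of cheek size. The specific measure and the 0.9
# retention ratio are assumptions, not values taken from the embodiment.
import numpy as np

def cheek_inflation_maintained(cheek_size: np.ndarray, retention: float = 0.9) -> float:
    """cheek_size: per-frame cheek width/area while the person puffs the cheeks.
    Returns the fraction of frames, from the peak onward, that stay near the peak size."""
    peak_idx = int(np.argmax(cheek_size))
    after_peak = cheek_size[peak_idx:]
    return float(np.mean(after_peak >= retention * cheek_size[peak_idx]))

# maintained = cheek_inflation_maintained(cheek_width_per_frame)
# a value below the threshold in reference data 161 would suggest incomplete
# lip closure or velopharyngeal insufficiency as described above
```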
 Also, for example, when the first image acquired by the acquisition unit 110 is an image obtained by imaging the cheeks of the person to be evaluated U while he or she is clenching the teeth, the calculation unit 120 calculates, as a feature amount, the bulging of the cheek muscles caused by clenching the teeth in that image. Similarly, when the image acquired by the acquisition unit 110 is an image obtained by imaging the cheeks of the person to be evaluated U while he or she is raising the corners of the mouth, the calculation unit 120 calculates, as a feature amount, the bulging of the cheek muscles caused by raising the corners of the mouth in that image.
 The reference data 161 includes a threshold corresponding to this bulging, and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the bulging is greater than or equal to the threshold.
 An absence of bulging of the cheek muscles (less than the threshold) indicates, for example, a decline in the motor function (muscle strength) of the masticatory muscles in the preparatory phase. In other words, by evaluating the bulging of the cheek muscles of the person to be evaluated U, the motor function of the masticatory muscles in the preparatory phase can be evaluated.
 Also, for example, when the first image acquired by the acquisition unit 110 is an image obtained by imaging the neck of the person to be evaluated U, the calculation unit 120 calculates the position of the laryngeal prominence of the person to be evaluated U in that image as a feature amount.
 The reference data 161 includes a threshold corresponding to the position of the laryngeal prominence (for example, its distance from the face), and the evaluation unit 130 evaluates the eating and swallowing function according to, for example, whether the position of the laryngeal prominence is greater than or equal to the threshold.
 A laryngeal prominence far from the face (greater than or equal to the threshold) indicates, for example, a state in which extra effort is required to raise the laryngeal prominence when swallowing a bolus, that is, a decline in the motor function of the larynx (specifically, the infrahyoid muscles) in the pharyngeal phase (in other words, a deteriorated condition of the larynx). When the condition of the larynx deteriorates in this way, people whose eating and swallowing function has declined, who often already lack muscle strength, raise the laryngeal prominence insufficiently and end up choking. By evaluating the position of the laryngeal prominence of the person to be evaluated U, the motor function at the time of swallowing can be evaluated.
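 As a minimal sketch, the position of the laryngeal prominence can be expressed as its pixel distance from a facial reference point such as the tip of the chin, assuming both points have already been located in the neck image by a detector not shown here.

```python
# Illustrative sketch only: expressing the "position of the laryngeal
# prominence" as its distance from a facial reference point (here, the tip of
# the chin) in the neck image. How the prominence and the chin are detected is
# not specified in the embodiment; their pixel coordinates are assumed inputs.
import math

def laryngeal_prominence_distance(chin_xy, prominence_xy) -> float:
    """Euclidean distance (in pixels) between the chin and the laryngeal prominence."""
    dx = prominence_xy[0] - chin_xy[0]
    dy = prominence_xy[1] - chin_xy[1]
    return math.hypot(dx, dy)

# dist = laryngeal_prominence_distance(chin, prominence)
# per the description above, a distance at or above the threshold in reference
# data 161 would suggest extra effort is needed to raise the larynx when swallowing
```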
Further, for example, when the second image acquired by the acquisition unit 110 is an image obtained by capturing the teeth of the person to be evaluated U, the calculation unit 120 calculates the number of teeth in that image as a feature amount.
The reference data 161 includes a threshold corresponding to the number of teeth, and the evaluation unit 130 evaluates the swallowing function depending on, for example, whether the number of teeth is equal to or greater than the threshold (for example, 20).
A small number of teeth (a value below the threshold) indicates, for example, that the occlusal state of the teeth used for chewing and grinding food in the preparatory phase has deteriorated. Accordingly, by evaluating the number of teeth of the person to be evaluated U, the occlusal state of the teeth in the preparatory phase can be evaluated.
Further, for example, when the second image acquired by the acquisition unit 110 is an image obtained by capturing the teeth of the person to be evaluated U, the calculation unit 120 calculates the positions of the remaining teeth (residual teeth) in that image as a feature amount.
The reference data 161 includes a threshold corresponding to the positions of the residual teeth, and the evaluation unit 130 evaluates the swallowing function depending on, for example, whether the positions of the residual teeth are equal to or greater than the threshold.
A value below the threshold for the residual teeth, that is, a state in which the front teeth remain but the molars do not, indicates that it is difficult to grind food; for example, it indicates that the occlusal state of the teeth used for chewing and grinding food in the preparatory phase has deteriorated. Accordingly, by evaluating the positions of the residual teeth of the person to be evaluated U, the occlusal state of the teeth in the preparatory phase can be evaluated.
Further, for example, when the second image acquired by the acquisition unit 110 is an image obtained by capturing the tongue of the person to be evaluated U, the calculation unit 120 calculates the color of the tongue in that image as a feature amount.
The reference data 161 includes a threshold corresponding to the color of the tongue (for example, the whiteness of the tongue), and the evaluation unit 130 evaluates the swallowing function depending on, for example, whether the whiteness of the tongue is equal to or greater than the threshold. This will be described with reference to FIG. 10.
FIG. 10 is a diagram for explaining a method of calculating the color of the tongue as a feature amount.
For example, the TCI (Tongue Coating Index) is used as a method of calculating the color of the tongue as a feature amount. As shown in FIG. 10, a score is recorded for each of nine regions of the tongue, regions A through I, according to the state of tongue coating adhesion. For example, a score of 0 is recorded for a region with no visible tongue coating, a score of 1 for a region with coating thin enough that the lingual papillae can still be recognized, and a score of 2 for a region with coating thick enough that the lingual papillae cannot be recognized. The evaluation unit 130 evaluates the swallowing function depending on, for example, whether the total score (which ranges from 0 to 18) is equal to or greater than the threshold.
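As a rough illustration of the TCI computation described above (nine tongue regions, each scored 0 to 2, summed to a total of 0 to 18 and compared with a threshold), a short sketch follows. The region labels, the example threshold of 9, and the function names are assumptions made only for illustration.

# Sketch of a TCI-style total score, assuming per-region coating scores are given.
REGIONS = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]

def tci_total(region_scores: dict) -> int:
    """Sum the 0-2 coating scores over the nine regions (total 0-18)."""
    return sum(region_scores[r] for r in REGIONS)

def evaluate_tongue_coating(region_scores: dict, threshold: int = 9) -> str:
    """A thick coating (total >= threshold) suggests an unclean oral cavity -> "NG"."""
    return "NG" if tci_total(region_scores) >= threshold else "OK"

scores = {"A": 1, "B": 1, "C": 2, "D": 0, "E": 1, "F": 1, "G": 2, "H": 1, "I": 1}
print(tci_total(scores), evaluate_tongue_coating(scores))  # prints: 10 NG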
A white tongue (a value equal to or greater than the threshold), that is, a thick coating adhering to the tongue, indicates that the oral cavity is unclean; for example, it indicates that the recognition function of the tongue used in the preparatory phase to recognize the taste and hardness of food has declined. Accordingly, by evaluating the color of the tongue of the person to be evaluated U, the recognition function of the tongue in the preparatory phase can be evaluated.
Further, for example, when the second image acquired by the acquisition unit 110 is an image obtained by capturing the tongue of the person to be evaluated U, the calculation unit 120 calculates the degree of light reflection from the tongue in that image as a feature amount.
The reference data 161 includes a threshold corresponding to the degree of light reflection from the tongue, and the evaluation unit 130 evaluates the swallowing function depending on, for example, whether the degree of light reflection from the tongue is equal to or greater than the threshold.
If light shone on the tongue is not reflected and the tongue does not appear glossy (a value below the threshold), the tongue is dry; for example, this indicates that the saliva secretion function used in the preparatory phase to bind finely chewed food together has declined. Accordingly, by evaluating the degree of light reflection from the tongue of the person to be evaluated U, the saliva secretion function in the preparatory phase can be evaluated.
In this way, the evaluation unit 130 evaluates the swallowing function of the person to be evaluated U while distinguishing in which phase, the preparatory phase, the oral phase, or the pharyngeal phase, each function is exercised, for example the motor function of the tongue "in the preparatory phase" or the motor function of the tongue "in the oral phase". For example, the reference data 161 includes a correspondence between the type of feature amount and the swallowing function in at least one of the preparatory, oral, and pharyngeal phases. For example, focusing on the sound pressure difference between "k" and "i" as a feature amount, this sound pressure difference is associated with the motor function of the tongue in the pharyngeal phase. The evaluation unit 130 can therefore evaluate the swallowing function of the person to be evaluated U while distinguishing whether it is the swallowing function in the preparatory phase, the oral phase, or the pharyngeal phase. Evaluating the swallowing function of the person to be evaluated U while distinguishing the phase in this way makes it possible to know what symptoms the person to be evaluated U is at risk of developing. This will be described with reference to FIG. 11.
FIG. 11 is a diagram showing specific examples of the swallowing functions in the preparatory, oral, and pharyngeal phases, and the symptoms that appear when each function declines.
When the motor function of the facial muscles in the preparatory phase declines, food spillage is observed during eating and swallowing. When the recognition function of the tongue in the preparatory phase declines, the sense of taste deteriorates and loss of appetite is observed, and the hardness of food can no longer be recognized accurately, so the food cannot be chewed properly (cannot be broken down or ground). When the motor function of the tongue, the occlusal state of the teeth, and the motor function of the masticatory muscles in the preparatory phase decline, the symptom of being unable to chew properly during eating and swallowing is observed. When the saliva secretion function in the preparatory phase declines, the symptom that food remains scattered and a bolus cannot be formed is observed. In addition, when the motor function of the tongue in the oral and pharyngeal phases and the motor functions of the pharynx and larynx in the pharyngeal phase decline, the bolus cannot be swallowed properly into the pharynx and then the esophagus, and the person chokes.
Since it is known that such symptoms appear when the swallowing function in each phase declines, evaluating the swallowing function of the person to be evaluated U while distinguishing whether it is the swallowing function in the preparatory phase, the oral phase, or the pharyngeal phase makes it possible to devise detailed countermeasures for each corresponding symptom. Further, as will be described in detail later, the suggestion unit 150 can propose countermeasures corresponding to the evaluation result to the person to be evaluated U.
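One plausible way to hold the correspondence described above between feature types and phase-specific functions in the reference data 161 is a simple lookup table, as sketched below. The keys and the exact assignments are illustrative assumptions based on the examples given in this description, not a definitive listing from the publication.

# Hypothetical mapping from a feature type to the phase and function it evaluates.
feature_to_function = {
    "sound_pressure_diff_k_i": ("pharyngeal phase", "motor function of the tongue"),
    "cheek_bulge_clenching":   ("preparatory phase", "motor function of the masticatory muscles"),
    "tooth_count":             ("preparatory phase", "occlusal state of the teeth"),
    "tongue_coating_score":    ("preparatory phase", "recognition function of the tongue"),
}

def describe(feature_name: str) -> str:
    phase, function = feature_to_function[feature_name]
    return f"{feature_name} -> {function} in the {phase}"

print(describe("sound_pressure_diff_k_i"))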
Next, as shown in FIG. 3, the output unit 140 outputs the evaluation result of the swallowing function of the person to be evaluated U obtained by the evaluation unit 130 (step S104). The output unit 140 outputs this evaluation result to the suggestion unit 150. Note that the output unit 140 may also output the evaluation result to the mobile terminal 300. In this case, the output unit 140 may include, for example, a communication interface that performs wired or wireless communication. In this case, for example, the output unit 140 acquires the image data of an image corresponding to the evaluation result from the storage unit 160 and transmits the acquired image data to the mobile terminal 300. Examples of this image data (evaluation results) are shown in FIGS. 12 to 16.
FIGS. 12 to 16 are diagrams each showing an example of the evaluation result. For example, the evaluation result is a two-level result of OK or NG, where OK means normal and NG means abnormal. The evaluation result is not limited to two levels and may be a finer result in which the degree of evaluation is divided into three or more levels. In other words, the threshold corresponding to each feature amount included in the reference data 161 stored in the storage unit 160 is not limited to a single threshold and may be a plurality of thresholds. Specifically, for a certain feature amount, the evaluation result may be normal when the feature amount is equal to or greater than a first threshold, slightly abnormal when it is smaller than the first threshold and greater than a second threshold, and abnormal when it is equal to or smaller than the second threshold. A circle mark or the like may be shown instead of OK (normal), a triangle mark or the like instead of slightly abnormal, and a cross mark or the like instead of NG (abnormal). Further, the display need not indicate normal or abnormal for every swallowing function as in FIGS. 12 to 16; for example, only the items for which a decline in the swallowing function is suspected may be shown.
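A three-level grading with two thresholds, as described above, can be sketched as follows; the threshold values, the assumption that larger values indicate better function, and the symbol mapping are illustrative only.

def grade(value: float, first_threshold: float, second_threshold: float) -> str:
    """Grade one feature amount as normal / slightly abnormal / abnormal.

    Assumes larger values indicate better function, with
    second_threshold < first_threshold.
    """
    if value >= first_threshold:
        return "normal"             # could be shown as a circle mark
    if value > second_threshold:
        return "slightly abnormal"  # triangle mark
    return "abnormal"               # cross mark

print(grade(0.8, 0.7, 0.4))  # normal
print(grade(0.5, 0.7, 0.4))  # slightly abnormal
print(grade(0.3, 0.7, 0.4))  # abnormal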
The image data of the image corresponding to the evaluation result is, for example, a table such as those shown in FIGS. 12 to 16. The person to be evaluated U can check such a table, which shows the evaluation results while distinguishing whether each swallowing function belongs to the preparatory phase, the oral phase, or the pharyngeal phase. For example, if the person to be evaluated U knows in advance what countermeasures to take when each of the swallowing functions in the preparatory, oral, and pharyngeal phases declines, the person to be evaluated U can devise detailed countermeasures by checking such a table.
However, the person to be evaluated U may not know in advance what countermeasures relating to eating and swallowing should be taken when the swallowing function in each phase declines. Therefore, as shown in FIG. 3, the suggestion unit 150 makes a suggestion regarding eating and swallowing to the person to be evaluated U by collating the evaluation result output by the output unit 140 with the predetermined suggestion data 162 (step S105). For example, the suggestion data 162 contains suggestion contents regarding eating and swallowing for the person to be evaluated U corresponding to each combination of the evaluation results for the swallowing functions in the preparatory, oral, and pharyngeal phases. The storage unit 160 also stores data representing these suggestion contents (for example, images, videos, audio, or text). The suggestion unit 150 uses such data to make suggestions regarding eating and swallowing to the person to be evaluated U.
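The collation of a combination of evaluation results against the suggestion data 162 can be pictured as a table lookup keyed by which functions came back NG. The following sketch uses hypothetical keys and abbreviated suggestion texts loosely drawn from the examples described below; none of the names are from the publication.

# Hypothetical suggestion table keyed by the set of functions evaluated as NG.
suggestion_data = {
    frozenset({"tongue_motor", "pharynx_motor", "larynx_motor"}):
        "Put less food in your mouth at a time and thicken liquids.",
    frozenset({"saliva_secretion"}):
        "Drink liquids together with foods that absorb moisture.",
    frozenset({"occlusion", "masticatory_motor"}):
        "Chop or soften hard foods before eating.",
}

def suggest(evaluation: dict) -> str:
    ng = frozenset(name for name, result in evaluation.items() if result == "NG")
    return suggestion_data.get(ng, "No matching suggestion; consult a specialist.")

result = {"tongue_motor": "OK", "saliva_secretion": "NG", "occlusion": "OK",
          "masticatory_motor": "OK", "pharynx_motor": "OK", "larynx_motor": "OK"}
print(suggest(result))  # suggestion for the saliva-secretion case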
In the following, the suggestion contents are described for the cases in which the evaluation result, obtained by evaluating the swallowing function of the person to be evaluated U while distinguishing whether each function belongs to the preparatory phase, the oral phase, or the pharyngeal phase, is the result shown in each of FIGS. 12 to 16.
In the evaluation result shown in FIG. 12, the motor function of the tongue in the preparatory phase, the motor function of the tongue in the oral and pharyngeal phases, and the motor functions of the pharynx and the larynx in the pharyngeal phase are NG, and the other swallowing functions are OK. In this case, because the motor function of the tongue in the preparatory phase is NG, there may be a problem with chewing ability. As a result, nutrition may become unbalanced because foods that are difficult to eat are avoided, or eating may take a long time. In addition, because the motor function of the tongue in the oral and pharyngeal phases and the motor functions of the pharynx and the larynx in the pharyngeal phase are NG, there may be a problem with swallowing the bolus. As a result, the person may choke or take a long time to swallow.
In response, the suggestion unit 150 collates this combination of evaluation results with the suggestion data 162 and makes a suggestion corresponding to the combination. Specifically, the suggestion unit 150 suggests, for example, softening hard foods and reducing the amount of food put into the mouth at one time. Reducing the amount of food put into the mouth at one time makes it possible to chew without strain, and the bolus becomes smaller and easier to swallow. For example, the suggestion unit 150 makes a suggestion via the mobile terminal 300 using images, text, audio, or the like, such as "Put less food in your mouth and eat slowly. If you get tired, it may help to rest for a while and then resume your meal." The suggestion unit 150 also suggests thickening liquids contained in food. Thickening liquids makes food easier to chew and slows the flow of liquid through the pharynx, which helps prevent choking. For example, the suggestion unit 150 makes a suggestion via the mobile terminal 300 using images, text, audio, or the like, such as "Thicken liquids such as soups and broths before eating them."
In the evaluation result shown in FIG. 13, the saliva secretion function in the preparatory phase is NG, and the other swallowing functions are OK. In this case, because the saliva secretion function in the preparatory phase is NG, there may be a problem of dryness in the oral cavity. As a result, a bolus cannot be formed properly and dry foods become difficult to swallow, so nutrition may become unbalanced because dry foods are avoided, or eating may take a long time.
In response, the suggestion unit 150 collates this combination of evaluation results with the suggestion data 162 and makes a suggestion corresponding to the combination. Specifically, it suggests drinking liquids while eating foods that absorb moisture in the oral cavity (such as bread, cake, grilled fish, or rice crackers). The liquid taken in place of saliva makes it easier to form a bolus and relieves the difficulty of swallowing. For example, the suggestion unit 150 makes suggestions via the mobile terminal 300 using images, text, audio, or the like, such as "When eating bread and similar foods, drink some liquid with them" or "Try pouring broth over grilled fish; serving it with a thickened sauce may also be a good idea."
In the evaluation result shown in FIG. 14, the occlusal state of the teeth and the motor function of the masticatory muscles in the preparatory phase are NG, and the other swallowing functions are OK. In this case, because the occlusal state of the teeth and the motor function of the masticatory muscles in the preparatory phase are NG, there may be problems with chewing ability and occlusal ability. As a result, nutrition may become unbalanced because hard foods are avoided, or eating may take a long time.
In response, the suggestion unit 150 collates this combination of evaluation results with the suggestion data 162 and makes a suggestion corresponding to the combination. Specifically, it suggests chopping or softening hard foods (such as vegetables or meat) before eating them, because this allows hard foods to be eaten even when there are problems with chewing and occlusal ability. For example, the suggestion unit 150 makes suggestions via the mobile terminal 300 using images, text, audio, or the like, such as "For foods that are hard and difficult to eat, try chopping them into small pieces" or "Leafy vegetables may have become difficult for you to eat. Rather than avoiding them, soften or chop them and eat them actively so that your nutrition does not become unbalanced."
In the evaluation result shown in FIG. 15, the recognition function of the tongue and the saliva secretion function in the preparatory phase are NG, and the other swallowing functions are OK. In this case, because the recognition function of the tongue and the saliva secretion function in the preparatory phase are NG, the oral cavity may be in an unclean state. As a result, the sense of taste deteriorates and appetite is lost, which may eventually lead to aspiration pneumonia caused by malnutrition.
In response, the suggestion unit 150 collates this combination of evaluation results with the suggestion data 162 and makes a suggestion corresponding to the combination. Specifically, it suggests oral care, because oral care eliminates poor oral hygiene and can restore the recognition function of the tongue. At this time, the suggestion unit 150 may use the personal information (for example, the address) of the person to be evaluated U acquired by the acquisition unit 110. For example, the suggestion unit 150 suggests, via the mobile terminal 300 using images, text, audio, or the like, visiting a medical institution, and presents a map showing dentists near the person to be evaluated U or medical institutions capable of examining the swallowing function.
In the evaluation result shown in FIG. 16, the recognition function of the tongue and the saliva secretion function in the preparatory phase are OK, and the other swallowing functions are NG. In this case, the swallowing functions in the preparatory, oral, and pharyngeal phases may each have declined. For example, it is expected that the muscle strength of the lips has weakened due to the decline of the motor function of the facial muscles in the preparatory phase, that the masseter muscles have weakened due to the deterioration of the occlusal state of the teeth in the preparatory phase, and that the muscle strength of the tongue has weakened due to the decline of the motor function of the tongue in the preparatory, oral, and pharyngeal phases, suggesting a risk of sarcopenia.
In response, the suggestion unit 150 collates this combination of evaluation results with the suggestion data 162 and makes a suggestion corresponding to the combination. Specifically, it suggests taking protein and doing rehabilitation, because this can counteract the loss of muscle strength. At this time, the suggestion unit 150 may use the personal information (for example, age and weight) of the person to be evaluated U acquired by the acquisition unit 110. For example, the suggestion unit 150 makes a suggestion via the mobile terminal 300 using images, text, audio, or the like, such as "Try to take in protein. Since your current weight is 60 kg, take 20 g to 24 g of protein per meal, 60 g to 72 g in total over three meals. To avoid choking during meals, thicken liquids such as soups and broths before eating them." The suggestion unit 150 also suggests specific training content for rehabilitation. For example, the suggestion unit 150 presents, via the mobile terminal 300 using videos, audio, or the like, models of whole-body strength training appropriate to the age of the person to be evaluated U (such as repeatedly standing up and sitting down), training to restore lip muscle strength (such as repeatedly blowing out and sucking in breath), and training to restore tongue muscle strength (such as sticking the tongue out and in and moving it up, down, left, and right). Installation of an application for such rehabilitation may also be suggested. In addition, the training actually performed during rehabilitation may be recorded. A specialist (such as a doctor, dentist, speech-language-hearing therapist, or nurse) can then check the recorded content and reflect it in rehabilitation conducted by the specialist.
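The protein amounts quoted in the example suggestion above (20 g to 24 g per meal and 60 g to 72 g per day for a 60 kg person) correspond to roughly 1.0 to 1.2 g of protein per kilogram of body weight per day. The sketch below shows that back-calculation; the per-kilogram factors are an assumption inferred from the example figures rather than values stated in the publication.

def daily_protein_range(weight_kg: float, low: float = 1.0, high: float = 1.2):
    """Estimate a daily protein range (grams), assuming 1.0-1.2 g per kg of body weight."""
    return weight_kg * low, weight_kg * high

def per_meal_range(weight_kg: float, meals: int = 3):
    lo, hi = daily_protein_range(weight_kg)
    return lo / meals, hi / meals

print(daily_protein_range(60))  # (60.0, 72.0) grams per day
print(per_meal_range(60))       # (20.0, 24.0) grams per meal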
Note that the evaluation unit 130 does not have to evaluate the swallowing function of the person to be evaluated U while distinguishing whether it is the swallowing function in the preparatory phase, the oral phase, or the pharyngeal phase. In other words, the evaluation unit 130 may simply evaluate which swallowing functions of the person to be evaluated U have declined.
In addition, although not illustrated, the suggestion unit 150 may make the suggestions described below according to the combination of the evaluation results for the respective swallowing functions.
For example, when suggesting meal contents, the suggestion unit 150 may present a code indicating a food texture category, such as a code from the "Dysphagia Diet Classification 2013" of the Japanese Society of Dysphagia Rehabilitation. For example, when the person to be evaluated U purchases a product adapted to dysphagia, it is difficult to describe the required food texture in words, but by using such a code, a product with the food texture corresponding one-to-one to the code can be purchased easily. The suggestion unit 150 may also present a site for purchasing such products so that they can be purchased over the Internet. For example, after the swallowing function has been evaluated via the mobile terminal 300, the purchase may be made using that mobile terminal 300. Furthermore, the suggestion unit 150 may present other products that supplement nutrition so that the nutrition of the person to be evaluated U does not become unbalanced. In doing so, the suggestion unit 150 may judge the nutritional state of the person to be evaluated U using the personal information of the person to be evaluated U acquired by the acquisition unit 110 (for example, weight, BMI (Body Mass Index), serum albumin level, or food intake rate) and then present nutrition-supplementing products.
Further, for example, the suggestion unit 150 may suggest a posture for eating, because the ease of swallowing food changes depending on posture. For example, the suggestion unit 150 suggests eating in a slightly forward-leaning posture, in which the path from the pharynx to the trachea is less likely to form a straight line.
Further, for example, the suggestion unit 150 may present menus that take into account the nutritional imbalance caused by a decline in the swallowing function (that is, present a menu site describing such menus). A menu site is a site describing the ingredients and cooking procedures needed to complete a menu. In doing so, the suggestion unit 150 may present menus that take nutritional balance into account while also considering the dishes that the person to be evaluated U wants to eat, as entered by the person to be evaluated U and acquired by the acquisition unit 110. Furthermore, the suggestion unit 150 may present menus that are nutritionally balanced over a specific period, such as one week.
Further, for example, the suggestion unit 150 may transmit information indicating how finely food should be chopped or how much it should be softened to an IoT (Internet of Things) enabled cooking appliance. This allows food to be chopped or softened correctly, and saves the person to be evaluated U and others the trouble of chopping or softening the food themselves.
[Effects]
As described above, the swallowing function evaluation method according to the present embodiment includes, as shown in FIG. 3: an acquisition step (step S101) of acquiring at least two of voice data obtained by collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence, a first image obtained by imaging the face or neck of the person to be evaluated U in a non-contact manner, and a second image obtained by imaging the inside of the oral cavity of the person to be evaluated U in a non-contact manner; a calculation step (step S102) of calculating respective feature amounts from at least two of the acquired voice data, first image, and second image; and an evaluation step (step S103) of evaluating the swallowing function of the person to be evaluated U from the calculated feature amounts.
According to this, by acquiring voice data suitable for evaluating the swallowing function collected in a non-contact manner, and a first image or second image suitable for evaluating the swallowing function obtained by non-contact imaging, the swallowing function of the person to be evaluated U can be evaluated easily. That is, the swallowing function of the person to be evaluated U can be evaluated simply by having the person to be evaluated U utter a predetermined syllable or a predetermined sentence toward a sound collection device such as the mobile terminal 300, or by imaging the face, neck, or oral cavity of the person to be evaluated U using an imaging device such as the mobile terminal 300. Moreover, in the present invention, the swallowing function of the person to be evaluated U is evaluated using at least two of the voice data, the first image, and the second image, which enables a more accurate evaluation.
In the evaluation step, at least one of the motor function of the facial muscles, the motor function of the tongue, the recognition function of the tongue, the saliva secretion function, the occlusal state of the teeth, the motor function of the masticatory muscles, the motor function of the pharynx, and the motor function of the larynx may be evaluated as the swallowing function.
According to this, it is possible to evaluate, for example, the motor function of the facial muscles in the preparatory phase, the recognition function of the tongue in the preparatory phase, the motor function of the tongue in the preparatory phase, the occlusal state of the teeth in the preparatory phase, the motor function of the masticatory muscles in the preparatory phase, the saliva secretion function in the preparatory phase, the motor function of the tongue in the oral phase, the motor function of the tongue in the pharyngeal phase, the motor function of the pharynx in the pharyngeal phase, or the motor function of the larynx in the pharyngeal phase.
The swallowing function evaluation method may further include an output step (step S104) of outputting the evaluation result.
According to this, the evaluation result can be checked.
The swallowing function evaluation method may further include a suggestion step (step S105) of making a suggestion regarding eating and swallowing to the person to be evaluated U by collating the output evaluation result with predetermined data.
According to this, the person to be evaluated U can receive suggestions as to what countermeasures relating to eating and swallowing should be taken when the swallowing function declines. For example, by doing rehabilitation based on the suggestion or adopting a diet based on the suggestion, the person to be evaluated U can suppress aspiration and thereby prevent aspiration pneumonia, and can also improve a malnourished state caused by a decline in the swallowing function.
In the suggestion step, at least one of a suggestion regarding meals corresponding to the evaluation result of the swallowing function and a suggestion regarding exercise corresponding to the evaluation result of the swallowing function may be made.
According to this, the person to be evaluated U can receive suggestions as to what kind of meals to eat or what kind of exercise to do when the swallowing function declines.
In the acquisition step, personal information of the person to be evaluated U may additionally be acquired.
According to this, for example, in making suggestions regarding eating and swallowing, combining the evaluation result of the swallowing function of the person to be evaluated U with the personal information makes it possible to make more effective suggestions to the person to be evaluated U.
The swallowing function evaluation device 100 according to the present embodiment includes: the acquisition unit 110 that acquires at least two of voice data obtained by collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence, a first image obtained by imaging the face or neck of the person to be evaluated U in a non-contact manner, and a second image obtained by imaging the inside of the oral cavity of the person to be evaluated U in a non-contact manner; the calculation unit 120 that calculates respective feature amounts from at least two of the voice data, the first image, and the second image acquired by the acquisition unit 110; the evaluation unit 130 that evaluates the swallowing function of the person to be evaluated U from the feature amounts calculated by the calculation unit 120; and the output unit 140 that outputs the evaluation result obtained by the evaluation unit 130.
According to this, it is possible to provide the swallowing function evaluation device 100 that enables the swallowing function of the person to be evaluated U to be evaluated easily.
The swallowing function evaluation system 200 according to the present embodiment includes the swallowing function evaluation device 100 and a device (the mobile terminal 300 in the present embodiment) that images the face, neck, or oral cavity of the person to be evaluated U or collects, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence. The acquisition unit 110 of the swallowing function evaluation device 100 acquires at least two of voice data obtained by the device collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence, a first image obtained by the device imaging the face or neck of the person to be evaluated U in a non-contact manner, and a second image obtained by the device imaging the inside of the oral cavity of the person to be evaluated U in a non-contact manner.
According to this, it is possible to provide the swallowing function evaluation system 200 that enables the swallowing function of the person to be evaluated U to be evaluated easily.
(Other Embodiments)
The swallowing function evaluation method and related aspects according to the embodiment have been described above, but the present invention is not limited to the above embodiment.
For example, the reference data 161 is predetermined data, but it may be updated based on evaluation results obtained when a specialist actually diagnoses the swallowing function of the person to be evaluated U. This can improve the accuracy of the swallowing function evaluation. Machine learning may also be used to improve the accuracy of the swallowing function evaluation.
Further, for example, the suggestion data 162 is predetermined data, but the person to be evaluated U may rate the suggestion contents, and the suggestion data 162 may be updated based on that rating. That is, for example, if a suggestion corresponding to an inability to chew is made based on a certain feature amount even though the person to be evaluated U can in fact chew without any problem, the person to be evaluated U rates this suggestion content as wrong. Then, by updating the suggestion data 162 based on this rating, such an incorrect suggestion is no longer made based on the same feature amount. In this way, the suggestion contents regarding eating and swallowing for the person to be evaluated U can be made more effective. Machine learning may also be used to make the suggestion contents regarding eating and swallowing more effective.
Further, for example, the evaluation results of the swallowing function may be accumulated together with personal information as big data and used for machine learning. The suggestion contents regarding eating and swallowing may also be accumulated together with personal information as big data and used for machine learning.
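How the reference data 161 might be updated from expert-confirmed results is not specified here. Purely as an illustration, the sketch below recomputes a single threshold as the midpoint between the mean feature values of expert-labeled normal and abnormal cases; the update rule, the function name, and the data are all assumptions, and a real system could instead use machine learning as mentioned above.

def updated_threshold(labeled_values):
    """labeled_values: list of (feature_value, expert_label) pairs, label "OK" or "NG"."""
    ok = [v for v, label in labeled_values if label == "OK"]
    ng = [v for v, label in labeled_values if label == "NG"]
    # Midpoint between the two class means; an illustrative, simplistic update rule.
    return (sum(ok) / len(ok) + sum(ng) / len(ng)) / 2

expert_data = [(0.62, "OK"), (0.55, "OK"), (0.30, "NG"), (0.41, "NG")]
print(updated_threshold(expert_data))  # approximately 0.47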
Further, for example, in the above embodiment, the mobile terminal 300 is both an imaging device and a sound collection device, but this is not limiting. For example, when the acquisition step (the acquisition unit 110) does not acquire voice data obtained by collecting, in a non-contact manner, the voice of the person to be evaluated U uttering a predetermined syllable or a predetermined sentence, the mobile terminal 300 does not have to be a sound collection device. In other words, in this case, the swallowing function evaluation system 200 does not have to include a sound collection device.
Further, for example, in the above embodiment, the swallowing function evaluation method includes the suggestion step (step S105) of making a suggestion regarding eating and swallowing, but it does not have to include this step. In other words, the swallowing function evaluation device 100 does not have to include the suggestion unit 150.
Further, for example, in the above embodiment, the personal information of the person to be evaluated U is acquired in the acquisition step (step S101), but it does not have to be acquired. In other words, the acquisition unit 110 does not have to acquire the personal information of the person to be evaluated U.
Further, for example, in the above embodiment, the description assumes that the person to be evaluated U speaks in Japanese, but the person to be evaluated U may speak in a language other than Japanese, such as English. That is, it is not essential that Japanese voice data be the target of signal processing; voice data in a language other than Japanese may be the target of signal processing.
Further, for example, the steps in the swallowing function evaluation method may be executed by a computer (computer system). The present invention can then be realized as a program for causing a computer to execute the steps included in the method. Furthermore, the present invention can be realized as a non-transitory computer-readable recording medium, such as a CD-ROM, on which the program is recorded.
For example, when the present invention is realized by a program (software), each step is executed by running the program using hardware resources such as the CPU, memory, and input/output circuits of a computer. That is, each step is executed by the CPU acquiring data from the memory, the input/output circuits, or the like, performing computation, and outputting the computation results to the memory, the input/output circuits, or the like.
Further, each component included in the swallowing function evaluation device 100 and the swallowing function evaluation system 200 of the above embodiment may be realized as a dedicated or general-purpose circuit.
Further, each component included in the swallowing function evaluation device 100 and the swallowing function evaluation system 200 of the above embodiment may be realized as an LSI (Large Scale Integration), which is an integrated circuit (IC).
The integrated circuit is not limited to an LSI and may be realized by a dedicated circuit or a general-purpose processor. A programmable FPGA (Field Programmable Gate Array), or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
Furthermore, if an integrated circuit technology that replaces LSI emerges from advances in semiconductor technology or from another derived technology, that technology may naturally be used to integrate the components included in the swallowing function evaluation device 100 and the swallowing function evaluation system 200.
In addition, forms obtained by applying various modifications conceivable to those skilled in the art to the embodiment, and forms realized by arbitrarily combining the components and functions of the embodiments without departing from the spirit of the present invention, are also included in the present invention.
DESCRIPTION OF SYMBOLS
100 Swallowing function evaluation device
110 Acquisition unit
120 Calculation unit
130 Evaluation unit
140 Output unit
161 Reference data
162 Suggestion data (data)
200 Swallowing function evaluation system
300 Mobile terminal (device)
F1 First formant frequency
F2 Second formant frequency
U Person to be evaluated

Claims (8)

  1.  A swallowing function evaluation method comprising:
     an acquisition step of acquiring at least two of voice data obtained by collecting, in a non-contact manner, a voice of a person to be evaluated uttering a predetermined syllable or a predetermined sentence, a first image obtained by imaging a face or a neck of the person to be evaluated in a non-contact manner, and a second image obtained by imaging an inside of an oral cavity of the person to be evaluated in a non-contact manner;
     a calculation step of calculating respective feature amounts from at least two of the acquired voice data, the acquired first image, and the acquired second image; and
     an evaluation step of evaluating a swallowing function of the person to be evaluated from the calculated feature amounts.
  2.  The swallowing function evaluation method according to claim 1, wherein, in the evaluation step, at least one of a motor function of facial muscles, a motor function of a tongue, a recognition function of the tongue, a saliva secretion function, an occlusal state of teeth, a motor function of masticatory muscles, a motor function of a pharynx, and a motor function of a larynx is evaluated as the swallowing function.
  3.  The swallowing function evaluation method according to claim 1 or 2, further comprising an output step of outputting an evaluation result.
  4.  The swallowing function evaluation method according to any one of claims 1 to 3, further comprising a suggestion step of making a suggestion regarding eating and swallowing to the person to be evaluated by collating the output evaluation result with predetermined data.
  5.  The swallowing function evaluation method according to any one of claims 1 to 4, wherein, in the acquisition step, personal information of the person to be evaluated is further acquired.
  6.  A program for causing a computer to execute the swallowing function evaluation method according to any one of claims 1 to 5.
  7.  A swallowing function evaluation device comprising:
     an acquisition unit that acquires at least two of voice data obtained by collecting, in a non-contact manner, a voice of a person to be evaluated uttering a predetermined syllable or a predetermined sentence, a first image obtained by imaging a face or a neck of the person to be evaluated in a non-contact manner, and a second image obtained by imaging an inside of an oral cavity of the person to be evaluated in a non-contact manner;
     a calculation unit that calculates respective feature amounts from at least two of the voice data, the first image, and the second image acquired by the acquisition unit;
     an evaluation unit that evaluates a swallowing function of the person to be evaluated from the feature amounts calculated by the calculation unit; and
     an output unit that outputs an evaluation result obtained by the evaluation unit.
  8.  A swallowing function evaluation system comprising:
     the swallowing function evaluation device according to claim 7; and
     a device that images the face, the neck, or the inside of the oral cavity of the person to be evaluated, or collects, in a non-contact manner, the voice of the person to be evaluated uttering the predetermined syllable or the predetermined sentence,
     wherein the acquisition unit of the swallowing function evaluation device acquires at least two of voice data obtained by the device collecting, in a non-contact manner, the voice of the person to be evaluated uttering the predetermined syllable or the predetermined sentence, a first image obtained by the device imaging the face or the neck of the person to be evaluated in a non-contact manner, and a second image obtained by the device imaging the inside of the oral cavity of the person to be evaluated in a non-contact manner.
PCT/JP2019/016771 2018-05-23 2019-04-19 Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system WO2019225241A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020521105A JPWO2019225241A1 (en) 2018-05-23 2019-04-19 Eating and swallowing function evaluation method, program, eating and swallowing function evaluation device and eating and swallowing function evaluation system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-099173 2018-05-23
JP2018099173 2018-05-23

Publications (1)

Publication Number Publication Date
WO2019225241A1 true WO2019225241A1 (en) 2019-11-28

Family

ID=68616660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/016771 WO2019225241A1 (en) 2018-05-23 2019-04-19 Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system

Country Status (2)

Country Link
JP (1) JPWO2019225241A1 (en)
WO (1) WO2019225241A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223498A (en) * 2021-05-20 2021-08-06 四川大学华西医院 Swallowing disorder identification method, device and apparatus based on throat voice information
CN115066716A (en) * 2020-02-19 2022-09-16 松下知识产权经营株式会社 Oral function visualization system, oral function visualization method, and program
KR20220156143A (en) * 2021-05-17 2022-11-25 단국대학교 산학협력단 Method to track trajectory of hyoid bone
JP7408096B2 (en) 2020-08-18 2024-01-05 国立大学法人静岡大学 Evaluation device and evaluation program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007260273A (en) * 2006-03-29 2007-10-11 Sumitomo Osaka Cement Co Ltd Swallowing function evaluation instrument
JP2009205330A (en) * 2008-02-27 2009-09-10 Nec Corp Portable telephone device, dental care system, dental care method, dental care program and program recording medium
EP2394574A1 (en) * 2010-06-11 2011-12-14 Ratiopharm GmbH System and method for analysing and/or training a speaking, swallowing and/or breathing process
JP2012075758A (en) * 2010-10-05 2012-04-19 Doshisha Dysphagia detecting system
WO2017140812A1 (en) * 2016-02-18 2017-08-24 Koninklijke Philips N.V. Device, system and method for detection and monitoring of dysphagia of a subject
CN107736891A (en) * 2017-11-09 2018-02-27 吉林大学 A kind of aphetite disorder screening system
US20180289308A1 (en) * 2017-04-05 2018-10-11 The Curators Of The University Of Missouri Quantification of bulbar function

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011048524A1 (en) * 2009-10-22 2011-04-28 Koninklijke Philips Electronics N.V. System and method for treating dysphagia
JP6362335B2 (en) * 2014-01-21 2018-07-25 順市 清水 Training system and training method
JP2018023748A (en) * 2016-08-02 2018-02-15 エーエムイー株式会社 Mastication frequency detector

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007260273A (en) * 2006-03-29 2007-10-11 Sumitomo Osaka Cement Co Ltd Swallowing function evaluation instrument
JP2009205330A (en) * 2008-02-27 2009-09-10 Nec Corp Portable telephone device, dental care system, dental care method, dental care program and program recording medium
EP2394574A1 (en) * 2010-06-11 2011-12-14 Ratiopharm GmbH System and method for analysing and/or training a speaking, swallowing and/or breathing process
JP2012075758A (en) * 2010-10-05 2012-04-19 Doshisha Dysphagia detecting system
WO2017140812A1 (en) * 2016-02-18 2017-08-24 Koninklijke Philips N.V. Device, system and method for detection and monitoring of dysphagia of a subject
US20180289308A1 (en) * 2017-04-05 2018-10-11 The Curators Of The University Of Missouri Quantification of bulbar function
CN107736891A (en) * 2017-11-09 2018-02-27 吉林大学 A kind of aphetite disorder screening system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ISHII, KAORI ET AL.: "Clinical application of acoustic analysis in evaluation of tongue function", ORTHODONTIC WAVES, vol. 71, no. 3, 25 October 2012 (2012-10-25), pages 170 - 177, XP028989643 *
NAGAOSA, SHUICHIRO ET AL.: "Proposal of a simple oral and maxillofacial function evaluation method in stroke - evaluation of oral dysfunction in eating disorder and dysphagia", THE JAPANESE JOURNAL OF DYSPHAGIA REHABILITATION, vol. 7, no. 1, 2003, pages 53 - 56 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115066716A (en) * 2020-02-19 2022-09-16 松下知识产权经营株式会社 Oral function visualization system, oral function visualization method, and program
JP7408096B2 (en) 2020-08-18 2024-01-05 国立大学法人静岡大学 Evaluation device and evaluation program
KR20220156143A (en) * 2021-05-17 2022-11-25 단국대학교 산학협력단 Method to track trajectory of hyoid bone
KR102650787B1 (en) 2021-05-17 2024-03-25 단국대학교 산학협력단 Method to track trajectory of hyoid bone
CN113223498A (en) * 2021-05-20 2021-08-06 四川大学华西医院 Swallowing disorder identification method, device and apparatus based on throat voice information

Also Published As

Publication number Publication date
JPWO2019225241A1 (en) 2021-07-15

Similar Documents

Publication Publication Date Title
WO2019225242A1 (en) Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system
WO2019225241A1 (en) Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system
Guzman et al. Computerized tomography measures during and after artificial lengthening of the vocal tract in subjects with voice disorders
McKenna et al. Magnitude of neck-surface vibration as an estimate of subglottal pressure during modulations of vocal effort and intensity in healthy speakers
Knipfer et al. Speech intelligibility enhancement through maxillary dental rehabilitation with telescopic prostheses and complete dentures: a prospective study using automatic, computer-based speech analysis.
Reinheimer et al. Formant frequencies, cephalometric measures, and pharyngeal airway width in adults with congenital, isolated, and untreated growth hormone deficiency
Namasivayam-MacDonald et al. Impact of dysphagia rehabilitation in adults on swallowing physiology measured with videofluoroscopy: A mapping review
WO2019225230A1 (en) Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system
WO2019225243A1 (en) Swallowing function evaluation method, program, swallowing function evaluation device, and swallowing function evaluation system
JP7291896B2 (en) Recipe output method, recipe output system
Cichero Clinical assessment, cervical auscultation and pulse oximetry
US20230000427A1 (en) Oral function visualization system, oral function visualization method, and recording medium medium
Kummer et al. Assessment of velopharyngeal function
Brady et al. Pilot data on swallow function in nondysphagic patients requiring a tracheotomy tube
WO2023228615A1 (en) Speech feature quantity calculation method, speech feature quantity calculation device, and oral function evaluation device
RU2814761C1 (en) Method of assessing speech
WO2022254973A1 (en) Oral function evaluation method, program, oral function evaluation device, and oral function evaluation system
Tezuka et al. Perceptual and videofluoroscopic analyses of relation between backed articulation and velopharyngeal closure following cleft palate repair
Pravitharangul Differences of sound and morphology in skeletal class III patients
Üstünc et al. Efficacy of Low Mandible Maneuver on Mutational Falsetto
Naeem et al. Maximum Phonation Time of School-Aged Children in Pakistan: A Normative Study
Duggan et al. Speech Therapy: Communication and Swallowing
Speech, Mastication, and Swallowing Considerations in the Treatment of Dentofacial Deformities
Klopchin Use of Video Resources in Home Programs for Voice Intervention
Gubrynowicz et al. Assessment of velum malfunction in children through simultaneous nasal and oral acoustic signals measurements

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19807020

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020521105

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19807020

Country of ref document: EP

Kind code of ref document: A1