WO2021024987A1 - Determination method for cognitive function, program, and determination system for cognitive function - Google Patents

Determination method for cognitive function, program, and determination system for cognitive function

Info

Publication number
WO2021024987A1
WO2021024987A1 (PCT/JP2020/029682)
Authority
WO
WIPO (PCT)
Prior art keywords
cognitive function
target person
subject
voice
unit
Prior art date
Application number
PCT/JP2020/029682
Other languages
French (fr)
Japanese (ja)
Inventor
祐輝 小川
満春 細川
美紗 吉崎
潤一 穗積
中島 博文
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Priority to JP2021537304A priority Critical patent/JP7479013B2/en
Publication of WO2021024987A1 publication Critical patent/WO2021024987A1/en

Classifications

    • A61B 10/00: Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; sex determination; ovulation-period determination; throat striking implements
    • A61B 5/1171: Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B 5/1172: Identification of persons based on the shapes or appearances of their bodies or parts thereof, using fingerprinting
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/16: Sound input; sound output
    • G06N 20/00: Machine learning
    • G06Q 50/26: Government or public services
    • G09B 19/00: Teaching not covered by other main groups of this subclass
    • G10L 17/00: Speaker identification or verification techniques
    • G10L 25/15: Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being formant information
    • G10L 25/21: Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being power information
    • G10L 25/78: Detection of presence or absence of voice signals

Definitions

  • The present disclosure generally relates to a cognitive function determination method, a program, and a cognitive function determination system, and more particularly to a cognitive function determination method, a program, and a cognitive function determination system used in connection with services at a driver's license center.
  • Patent Document 1 discloses a system for managing the progress of a cognitive function test conducted for the elderly at the time of renewal of a driver's license for automobiles and the like.
  • The system of Patent Document 1 is composed of an inspector device used by an inspector and a plurality of terminal devices used by examinees.
  • Each of the plurality of terminal devices is a tablet type terminal device, and has a display input device provided with a touch panel on the display screen. An explanation, a caution, a question, an answer column, and the like are displayed on the display screen of the terminal device.
  • The examinee inputs answers and the like by touching the display screen with a finger or a dedicated pen, or by handwriting letters, numbers, and the like.
  • The inspector device has a display device that displays information for managing the progress of the cognitive function test.
  • The present disclosure has been made in view of the above, and an object of the present disclosure is to provide a cognitive function determination method, a program, and a cognitive function determination system capable of reducing the burden of license renewal.
  • The cognitive function determination method is used when providing the above-mentioned service to a target person at a driver's license center that performs at least one of the services of issuing and renewing a driver's license.
  • The determination method includes a voice acquisition process, an estimation process, and a reflection process.
  • In the voice acquisition process, the target person is made to read a fixed phrase aloud, and voice data related to the reading is acquired.
  • In the estimation process, the cognitive function of the target person is estimated based on the voice data acquired in the voice acquisition process.
  • In the reflection process, the result of the estimation process is reflected in the procedure for providing the service to the target person at the driver's license center, or in the determination of whether the service can be provided.
  • The program according to one aspect of the present disclosure is a program for causing one or more processors to execute the above-mentioned cognitive function determination method.
  • The cognitive function determination system is used when providing the above-mentioned service to a target person at a driver's license center that performs at least one of the services of issuing and renewing a driver's license.
  • The determination system includes a voice acquisition unit, an estimation unit, and a reflection unit.
  • The voice acquisition unit causes the target person to read a fixed phrase aloud and acquires voice data related to the reading.
  • The estimation unit estimates the cognitive function of the target person based on the voice data acquired by the voice acquisition unit.
  • The reflection unit reflects the estimation result of the estimation unit in the procedure for providing the service to the target person at the driver's license center, or in the determination of whether the service can be provided.
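  The three units above can be sketched as a minimal pipeline. This is an illustrative sketch only: the function names mirror the reference signs in the text (F5, F7, F13), but the placeholder waveform, the score threshold, and the decision strings are assumptions; the disclosure does not specify an estimation algorithm.

```python
from dataclasses import dataclass

@dataclass
class EstimationResult:
    score: float           # assumed model output in [0, 1]; higher = better
    needs_full_test: bool  # whether a full cognitive function test is advised

def voice_acquisition_unit_f5(fixed_phrase: str) -> list:
    """Stand-in for the voice acquisition unit F5: would play the phrase,
    record the target person reading it aloud, and return audio samples.
    Here it returns a dummy waveform."""
    return [0.0] * 16000  # 1 s of silence at 16 kHz (placeholder)

def estimation_unit_f7(voice_data: list) -> EstimationResult:
    """Stand-in for the estimation unit F7: estimate cognitive function
    from the voice data (a fixed placeholder score here)."""
    score = 0.5
    return EstimationResult(score=score, needs_full_test=score < 0.6)

def reflection_unit_f13(result: EstimationResult) -> str:
    """Stand-in for the reflection unit F13: reflect the estimate in the
    license procedure or the decision on whether to provide the service."""
    if result.needs_full_test:
        return "refer to full cognitive function test"
    return "proceed with standard procedure"

voice = voice_acquisition_unit_f5("Please read this sentence aloud.")
decision = reflection_unit_f13(estimation_unit_f7(voice))
```

  The sketch only fixes the data flow between the three units; any real realisation would replace the stubs with recording, a trained model, and the license-center procedure.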
  • FIG. 1 is a block diagram showing the configuration of a determination system for executing the cognitive function determination method of one embodiment.
  • FIG. 2 is a schematic diagram of the determination system.
  • FIG. 3 is a flowchart showing an example of the determination method.
  • The cognitive function determination method (hereinafter also simply referred to as the "determination method") and the cognitive function determination system 10 (hereinafter also simply referred to as the "determination system 10") of the present embodiment are assumed to be used, for example, as an alternative to such a cognitive function test, or as a primary screening for narrowing down the candidates who must take a cognitive function test.
  • The target person 2, a driver who is obliged to undergo a cognitive function test when renewing a driver's license, receives a notice of license renewal from the driver's license center 1 by mail or the like.
  • The test result is used as a substitute for the above-mentioned cognitive function test when the target person 2 renews the driver's license at the driver's license center 1, or to narrow down the examinees who need to take the cognitive function test. This makes it possible to reduce the burden of license renewal.
  • The driver's license center 1 means a facility or place where at least one of the services of newly issuing (hereinafter also simply referred to as "issuing") and renewing a driver's license (hereinafter also simply referred to as a "license") is performed.
  • The driver's license center 1 in the present embodiment may be any facility that performs at least one of the services of issuing and renewing a license, and may include a driver's license examination center, a traffic safety (education) center, a general traffic center, a driver education center, a driver training center, a license center, a safe driving school, a test center, a (license) renewal center, and the like. Further, the driver's license center 1 in the present embodiment may include a police station as a facility that performs the license renewal service. The driver's license center 1 may further perform other services such as reissuing a driver's license.
  • The determination method of the present embodiment includes the voice acquisition process S6, the estimation process S9, and the reflection process S12.
  • The voice acquisition process S6 includes having the target person 2 read a fixed phrase aloud and acquiring voice data related to the reading.
  • The estimation process S9 includes estimating the cognitive function of the target person 2 based on the voice data acquired in the voice acquisition process S6.
  • The reflection process S12 includes reflecting the result of the estimation process S9 in the procedure for providing the service to the target person 2 at the driver's license center 1, or in the determination of whether the service can be provided.
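  The estimation process S9 is specified only as being "based on the voice data". As one hedged illustration (not the disclosed model), simple acoustic features such as mean power and a voiced-frame ratio can be extracted and combined into a score; every feature choice, weight, and threshold below is an assumption for illustration.

```python
def extract_features(samples: list) -> dict:
    """Toy acoustic features: mean power and the fraction of samples whose
    amplitude exceeds a small threshold. A real system might instead use
    per-frame formant and power parameters."""
    n = len(samples)
    if n == 0:
        return {"power": 0.0, "voiced_ratio": 0.0}
    power = sum(s * s for s in samples) / n
    voiced_ratio = sum(1 for s in samples if abs(s) > 0.01) / n
    return {"power": power, "voiced_ratio": voiced_ratio}

def estimate_cognitive_function(samples: list) -> float:
    """Placeholder estimator returning a score in [0, 1]; an actual system
    would apply a model trained on labelled voice data."""
    f = extract_features(samples)
    return min(1.0, 0.5 * f["voiced_ratio"] + 0.5 * min(f["power"] * 10.0, 1.0))
```

  The point of the sketch is only the interface: voice samples in, a scalar cognitive-function estimate out, which the reflection process S12 can then act on.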
  • The determination system 10 (see FIG. 1) is an aspect embodying the determination method of the present embodiment. That is, the determination system 10 is used when providing services to the target person 2 at the driver's license center 1, which performs at least one of the services of issuing and renewing a driver's license.
  • The determination system 10 includes a voice acquisition unit F5 that executes the voice acquisition process S6, an estimation unit F7 that executes the estimation process S9, and a reflection unit F13 that executes the reflection process S12.
  • As described above, the estimation of the cognitive function of the target person 2, which is reflected in the procedure for issuing or renewing the driver's license of the target person 2 or in the determination of whether the service can be provided, is performed based on voice data representing the voice of the target person 2 reading a fixed phrase aloud. Therefore, compared with, for example, an examination method in which answers are written on a test sheet and scored, the time and effort required for the cognitive function examination can be shortened, and the burden on the target person 2 and on the driver's license center 1 at the time of license renewal can be reduced. In short, according to the determination method and the determination system 10 of the present embodiment, the burden at the time of license renewal can be reduced. Further, even when a cognitive function test is performed at the time of issuing a driver's license, the burden at the time of issuance can be reduced by using the cognitive function determination method and the determination system 10 of the present embodiment.
  • FIGS. 1 and 2 show the configuration of the determination system 10 for executing the determination method of the present embodiment.
  • The determination system 10 includes a first server 4 and a second server 5.
  • The determination system 10 can communicate with a communication device 3 possessed by the target person 2.
  • The communication device 3 is assumed to be a device that can be carried by the target person 2. Further, it is assumed that the first server 4 is installed in a facility of an organization (a company or the like) that provides at least a part of the determination method of the present embodiment as a service, and that the second server 5 is installed in the driver's license center 1 or in a facility of the organization having jurisdiction over the driver's license center 1. However, the present embodiment is not limited to such an arrangement.
  • The communication device 3 is a device that converts the voice of the target person 2 into voice data and transmits the voice data obtained by the conversion to another device. Further, the communication device 3 is configured to convert voice data transmitted from another device into voice and output it.
  • The communication device 3 is, for example, a mobile terminal, a mobile phone, or a personal computer possessed by the target person 2.
  • The mobile terminal is, for example, a smartphone, a tablet terminal, or the like.
  • In the present embodiment, the communication device 3 is assumed to be a smartphone. Therefore, in the following, communication of the communication device 3 with another device (the first server 4) is also referred to as "making a call".
  • The communication device 3 includes a communication unit 31 and a call unit 32. Further, the communication device 3 of the present embodiment includes a display unit 33, an operation unit 34, and a processing unit 35.
  • The communication unit 31 is a communication interface.
  • More specifically, the communication unit 31 is a communication interface that can be connected to the telecommunication line NT1 and has a function of performing communication through the telecommunication line NT1.
  • The communication device 3 can communicate with the first server 4 through the telecommunication line NT1 (see FIG. 2).
  • The telecommunication line NT1 may include, for example, a mobile communication network, a PSTN (public switched telephone network), the Internet, and the like.
  • The telecommunication line NT1 may be composed not only of a network compliant with a single communication protocol but also of a plurality of networks compliant with different communication protocols.
  • The communication protocol can be selected from a variety of well-known wired and wireless communication standards.
  • The telecommunication line NT1 may include data communication equipment such as repeater hubs, switching hubs, bridges, gateways, and routers.
  • The communication unit 31 transmits signals to the first server 4 via the telecommunication line NT1.
  • The signals transmitted by the communication unit 31 to the first server 4 include, for example, voice data (a voice signal) indicating the voice of the target person 2 acquired by the call unit 32.
  • The signals transmitted by the communication unit 31 to the first server 4 also include, for example, an operation signal output in response to an operation input of the target person 2 to the operation unit 34.
  • The communication unit 31 also receives signals from the first server 4 via the telecommunication line NT1.
  • The signals received by the communication unit 31 from the first server 4 include, for example, voice data (a voice signal) that the first server 4 automatically transmits in response to a request from the communication device 3 (hereinafter also referred to as an "automatic voice signal").
  • The signals received by the communication unit 31 from the first server 4 also include, for example, an estimation result signal indicating the estimation result of the cognitive function by the estimation unit F7.
  • The call unit 32 includes a speaker and a microphone.
  • The microphone converts sound including the voice emitted by the target person 2 into voice data (a voice signal) and outputs it to the outside via the communication unit 31.
  • The speaker converts voice data (a voice signal) input from the outside via the communication unit 31 into voice (sound) and outputs it.
  • The display unit 33 performs display using data or the like received by the communication unit 31.
  • The display unit 33 includes, for example, an LCD (Liquid Crystal Display) or an organic EL display.
  • The operation unit 34 receives an operation input from the target person 2 and outputs a signal corresponding to the received operation.
  • The display unit 33 and the operation unit 34 are integrated, for example, as a touch panel display.
  • When an operation (a tap, swipe, drag, or the like) on an object such as a button on a screen displayed on the display unit 33 is detected by the operation unit 34, the communication device 3 performs processing corresponding to that object.
  • That is, the display unit 33 and the operation unit 34 function as a user interface that performs various displays and receives operation inputs from the target person 2.
  • The processing unit 35 is configured to perform overall control of the communication device 3, that is, to control operations of the communication unit 31, the call unit 32, the display unit 33, and the operation unit 34.
  • The processing unit 35 can be realized, for example, by a computer system including one or more processors (microprocessors) and one or more memories. That is, the one or more processors function as the processing unit 35 by executing one or more programs (applications) stored in the one or more memories.
  • Although the program here is recorded in advance in the memory of the processing unit 35, it may also be provided recorded on a non-transitory recording medium such as a memory card, or through a telecommunication line such as the Internet.
  • The communication device 3 may include a biometric information acquisition unit for acquiring biometric information of the target person 2.
  • The biometric information acquisition unit may have, for example, a fingerprint acquisition unit that acquires fingerprint information of the target person 2.
  • The biometric information acquisition unit may have, for example, a face information acquisition unit that acquires face information of the target person 2.
  • The fingerprint acquisition unit and/or the face information acquisition unit may be, for example, an imaging unit (camera) provided in the communication device 3.
  • The biometric information acquisition unit may also have a vein information acquisition unit that acquires vein information of the target person 2.
  • The first server 4 includes a communication unit 41, a storage unit 42, and a processing unit 43.
  • The first server 4 constitutes the main body of the determination system 10 that executes the determination method of the present embodiment.
  • Among the processes of the determination method (see FIG. 3), the first server 4 executes the target person identification process S2, the attribute acquisition process S3, the identity verification process S4, the presentation process S5, the voice acquisition process S6, the recording process S7, the feature amount acquisition process S8, the estimation process (cognitive function estimation process) S9, and the output process S10.
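  The order of processes executed by the first server 4 can be sketched as a sequential pipeline. Every step body below is a stub; the step names follow the reference signs S2 through S10 in the text, while all data values are invented placeholders.

```python
def run_first_server_pipeline(target_info: dict) -> dict:
    """Run the first server's processes in the order named in the text,
    recording each executed step. All step bodies are placeholder stubs."""
    log = []

    def step(name, fn):
        log.append(name)
        return fn()

    subject = step("S2 target person identification",
                   lambda: target_info.get("license_no"))
    attrs = step("S3 attribute acquisition", lambda: {"age": 76})        # placeholder
    step("S4 identity verification", lambda: True)                       # placeholder
    phrase = step("S5 presentation", lambda: "fixed phrase to read aloud")
    voice = step("S6 voice acquisition", lambda: [0.0] * 16000)          # dummy audio
    step("S7 recording", lambda: None)
    feats = step("S8 feature amount acquisition", lambda: {"power": 0.0})  # placeholder
    result = step("S9 estimation", lambda: 0.7)                          # placeholder score
    step("S10 output", lambda: None)
    return {"subject": subject, "result": result, "log": log}

out = run_first_server_pipeline({"license_no": "123456789012"})
```

  The license number and all intermediate values are hypothetical; only the ordering of the nine processes is taken from the description.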
  • The communication unit 41 is a communication interface.
  • More specifically, the communication unit 41 is a communication interface that can be connected to the telecommunication line NT1 and has a function of performing communication through the telecommunication line NT1.
  • The first server 4 can communicate with the communication device 3 through the telecommunication line NT1. Further, the first server 4 can communicate with the second server 5 through the telecommunication line NT1.
  • The communication protocol used by the communication unit 41 to communicate with the communication device 3 may be the same as or different from the communication protocol used by the communication unit 41 to communicate with the second server 5.
  • The communication unit 41 transmits signals to the communication device 3 via the telecommunication line NT1.
  • The signals transmitted by the communication unit 41 to the communication device 3 include the above-mentioned automatic voice signal, the estimation result signal, and the like. Further, the communication unit 41 receives signals from the communication device 3 via the telecommunication line NT1.
  • The signals received by the communication unit 41 from the communication device 3 include the above-mentioned voice signal, operation signal, and the like.
  • The communication unit 41 also transmits signals to the second server 5 via the telecommunication line NT1.
  • The signals transmitted by the communication unit 41 to the second server 5 include, for example, a request signal that requests identity verification information used for verifying the identity of the target person 2.
  • The signals transmitted by the communication unit 41 to the second server 5 also include an estimation result signal indicating the estimation result of the cognitive function by the estimation unit F7.
  • The communication unit 41 receives signals from the second server 5 via the telecommunication line NT1.
  • The signals received by the communication unit 41 from the second server 5 include, for example, a signal indicating the identity verification information transmitted from the second server 5 in response to the request signal.
  • The storage unit 42 is a device for storing information.
  • The storage unit 42 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an EEPROM, and the like.
  • The storage unit 42 stores the voice data of the automatic voice that is automatically transmitted in response to a request (a telephone call) from the communication device 3. The trained model (described later) is also stored in the storage unit 42. Further, the storage unit 42 has an area for storing the voice data (voice signal) of the target person 2 transmitted from the communication device 3.
  • The processing unit 43 can be realized, for example, by a computer system including one or more processors (microprocessors) and one or more memories. That is, the one or more processors function as the processing unit 43 by executing one or more programs (applications) stored in the one or more memories.
  • The processing unit 43 is configured to perform overall control of the first server 4, that is, to control operations of the communication unit 41 and the storage unit 42. Further, as shown in FIG. 1, the processing unit 43 includes a target person identification unit F1, an attribute acquisition unit F2, an identity verification unit F3, a presentation unit F4, a voice acquisition unit F5, a recording unit F6, an estimation unit F7, and an output unit F8.
  • The target person identification unit F1, the attribute acquisition unit F2, the identity verification unit F3, the presentation unit F4, the voice acquisition unit F5, the recording unit F6, the estimation unit F7, and the output unit F8 do not represent substantive configurations but represent functions realized by the processing unit 43.
  • The target person identification unit F1 identifies the target person 2 who is talking on the communication device 3. In the present embodiment, the target person identification unit F1 identifies the target person 2 in cooperation with a response unit F11 (described later) of the second server 5.
  • When the target person identification unit F1 receives a request (a telephone call) from the communication device 3, for example, it transmits the voice data (automatic voice signal) of the automatic voice stored in the storage unit 42 to the communication device 3 via the communication unit 41.
  • The automatic voice transmitted by the target person identification unit F1 may include content prompting input of target person information for identifying the target person 2.
  • The target person information may include, for example, the license number of the target person 2.
  • The target person information may also include the name, address, and the like of the target person 2.
  • The target person information may also be the telephone number of the communication device 3; in this case, the target person 2 does not need to input any further target person information.
  • The target person identification unit F1 receives, from the communication device 3, the target person information that the target person 2 has input to the communication device 3 in response to the automatic voice, and identifies the target person 2 based on the received target person information.
  • The target person identification unit F1 identifies the target person 2, for example, by comparing the received target person information with personal information stored in a predetermined information database DB1.
  • In the present embodiment, the information database DB1 including the personal information is stored in the storage unit 52 of the second server 5.
  • The target person identification unit F1 therefore transmits the target person information to the second server 5 and receives, from the second server 5, the result of the comparison between the target person information and the information database DB1.
  • The comparison result includes information indicating the target person 2 identified by the second server 5.
  • Thereby, the target person identification unit F1 identifies the target person 2.
  • Alternatively, the target person identification unit F1 itself (that is, the processing unit 43 of the first server 4) may identify the target person 2 by comparing the target person information with the personal information. For example, the target person identification unit F1 may receive part or all of the information in the information database DB1 from the second server 5 and identify the target person 2 by comparing the received information with the target person information.
  • As described above, the determination method of the present embodiment includes the target person identification process S2 for identifying the target person 2.
  • The target person information may include biometric information such as the fingerprint of the target person 2.
  • The target person information may also include, as biometric information of the target person 2, voiceprint information obtained from the voice of the target person 2.
  • That is, the target person identification process S2 may be performed using biometric information (a fingerprint, a voiceprint, or the like) of the target person 2.
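  The identification path described above, comparing received target person information against the personal information in database DB1, can be sketched as a simple lookup. The field names and records below are purely illustrative; the actual contents of DB1 are not disclosed.

```python
# Hypothetical stand-in for the information database DB1.
PERSONAL_INFO_DB1 = [
    {"license_no": "123456789012", "name": "Taro Yamada", "phone": "09011112222"},
    {"license_no": "987654321098", "name": "Hanako Sato", "phone": "09033334444"},
]

def identify_target_person(target_info: dict):
    """Target person identification process S2 (sketch): return the matching
    DB1 record, or None when no person matches. As the text notes, the
    caller's telephone number alone may be sufficient as target person
    information."""
    for record in PERSONAL_INFO_DB1:
        if target_info.get("license_no") == record["license_no"]:
            return record
        if target_info.get("phone") == record["phone"]:
            return record
    return None
```

  In the embodiment this comparison may run on the second server 5 with only the result returned to the first server 4; the sketch abstracts that split away.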
  • The attribute acquisition unit F2 acquires attribute data related to the attributes of the target person 2 identified by the target person identification unit F1.
  • The attributes of the target person 2 include, for example, race, gender, age, educational background, and the like.
  • The attribute data is stored, for example, in the above-mentioned information database DB1 in association with the personal information.
  • The attribute acquisition unit F2 acquires the attribute data of the target person 2 from the second server 5 via the communication unit 41.
  • As described above, the determination method of the present embodiment includes the attribute acquisition process S3 for acquiring attribute data related to the attributes of the target person 2.
  • The identity verification unit F3 confirms that the person actually talking on the communication device 3 is the target person 2 identified by the target person identification unit F1. That is, since the person talking on the communication device 3 may have mistakenly input the target person information (license number or the like) of another person, the identity verification unit F3 confirms that the person making the call is the target person 2.
  • In the present embodiment, the identity verification unit F3 transmits a request signal to the second server 5 via the communication unit 41, and acquires the identity verification information (collation information) transmitted from the second server 5 as a response to the request signal.
  • The identity verification information is, for example, voice data obtained by recording the voice of the target person 2 in advance (hereinafter referred to as "control voice data").
  • By comparing the voice data received from the communication device 3 with the control voice data, the identity verification unit F3 confirms that the speaker of the voice indicated by the received voice data (the person who is talking) is the target person 2.
  • For example, the identity verification unit F3 confirms the identity by comparing the voiceprints of the voices indicated by the two pieces of voice data and determining their degree of matching.
  • the determination method of the present embodiment includes the identity verification process S4 for confirming that the subject 2 is the principal.
  • the identity verification process S4 is performed using the biological information of the subject 2, and more specifically, the voiceprint of the subject 2.
  • the identity verification process S4 may be performed at an arbitrary timing, for example, when voice data is received from the calling device 3.
  • the identity verification process S4 may be performed in parallel with the target person identification process S2. That is, when the target person identification process S2 identifies the target person 2 using biometric information (voiceprint information), the same biometric information can serve the identity verification process S4, so the two processes can run in parallel. Further, the identity verification process S4 may be performed at any time, during or after execution of the determination method, using the voice data acquired in the voice acquisition process S6.
  • the identity verification process S4 may be performed using information other than voice data.
  • the identity verification process S4 may be performed using biometric information other than the voiceprint, such as the fingerprint of the subject 2, or using information other than biometric information, such as the driver's license number. Further, the identity verification process S4 may be performed using the same information as the target person identification process S2.
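The patent does not fix a particular voiceprint-matching algorithm for the identity verification process S4. As a minimal sketch, assuming the live voice and the control voice have each already been reduced to a numeric voiceprint feature vector (for example, averaged spectral coefficients), the "degree of matching" could be a cosine similarity compared against a threshold; the function names and the threshold value are illustrative, not from the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_identity(live_features, control_features, threshold=0.85):
    """Return True when the live voiceprint matches the pre-recorded
    control voiceprint closely enough (the 'degree of matching')."""
    return cosine_similarity(live_features, control_features) >= threshold
```

In a real deployment the feature extraction and threshold would come from a speaker-verification system; the sketch only shows where the comparison against the control voice data sits in the flow.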
  • the presentation unit F4 transmits the voice data of the automatic voice (automatic voice signal) stored in the storage unit 42 to the call device 3 via the communication unit 41.
  • the automatic voice transmitted by the presentation unit F4 includes an instruction content that prompts the target person 2 to read a fixed phrase.
  • the instruction content may include, for example, a fixed phrase for the subject 2 to read aloud, and sounds (for example, a beep) indicating the start point and the end point of the period during which the fixed phrase should be read aloud.
  • the instruction content may include a plurality of different fixed phrases.
  • the instruction content may include content for sequentially reading out six types of fixed phrases.
  • the fixed phrase may include, for example, five or more characters each consisting of a stop (closed) consonant followed by a vowel.
  • an example of such a fixed phrase is a sentence such as "kita kara kita tataki".
  • the instruction content may include a content instructing the reading of the same fixed phrase a plurality of times.
  • the instruction content may include the number of times the fixed phrase should be read aloud.
  • the determination method of the present embodiment includes the presentation process S5 that presents the information prompting the reading of the fixed phrase to the target person 2.
  • the presentation process S5 includes reproducing an automatic voice prompting the target person 2 to read a fixed phrase.
  • the voice acquisition unit F5 acquires, via the communication unit 41, the voice data transmitted from the call device 3, which indicates the voice uttered by the target person 2 in response to the automatic voice transmitted by the presentation unit F4. That is, the determination method of the present embodiment includes the voice acquisition process S6, in which the target person 2 reads out a fixed phrase and the voice data related to the reading is acquired. In the voice acquisition process S6, the voice data is acquired through the telecommunication line NT1.
  • the recording unit F6 causes the storage unit 42 to record the voice data acquired by the voice acquisition unit F5. That is, the determination method of the present embodiment includes the recording process S7 for recording the voice data acquired in the voice acquisition process S6.
  • the estimation unit F7 estimates the cognitive function of the subject 2 by processing the voice data (voice data acquired by the voice acquisition unit F5) recorded in the storage unit 42. That is, the determination method of the present embodiment includes an estimation process S9 that estimates the cognitive function of the subject 2 based on the voice data acquired in the voice acquisition process S6.
  • the estimation unit F7 estimates the cognitive function of the subject 2 by using, for example, the feature amount extracted from the voice data. That is, the determination method of the present embodiment includes the feature amount acquisition process S8 for extracting the feature amount from the voice data. Further, the estimation process S9 executed by the estimation unit F7 includes estimating the cognitive function using the feature amount extracted from the voice data.
  • the estimation unit F7 further uses the attribute data acquired by the attribute acquisition unit F2 to estimate the cognitive function. That is, the estimation process S9 executed by the estimation unit F7 includes estimating the cognitive function by further using the attribute data related to the attributes of the subject 2 in addition to the voice data.
  • the estimation unit F7 estimates the cognitive function of the subject 2 by using the learned model M1.
  • the trained model M1 is, for example, a logistic regression model. That is, in the estimation process S9 executed by the estimation unit F7, the cognitive function of the subject 2 is estimated using the learned model M1 generated by machine learning.
  • the trained model M1 is designed to output a value indicating the degree of cognitive function of the subject 2 with respect to a given input (feature amount).
  • the estimation unit F7 gives the feature amount obtained from the voice data to the trained model M1, and estimates the degree of the cognitive function of the subject 2 based on the value obtained from the trained model M1.
  • Such a trained model M1 can be generated by supervised learning using learning data (data set) that defines the relationship between a value indicating the degree of cognitive function and a feature amount.
  • the trained model M1 is stored in the storage unit 42.
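The patent identifies the trained model M1 as, for example, a logistic regression model, but does not give its learned parameters. A minimal sketch of how such a model maps a feature vector to a value indicating the degree of cognitive function; the weights and bias here are placeholders that supervised learning on the labeled data set would produce:

```python
import math

def logistic_score(features, weights, bias):
    """Trained logistic-regression model M1 (sketch): maps a feature
    vector to a value in (0, 1) indicating the estimated degree of
    cognitive function."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

The estimation unit F7 would feed the feature amounts extracted from the voice data (and attribute data) into such a function and interpret the returned value.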
  • the feature amount used in the estimation process S9 may include an amount related to the first formant frequency or the second formant frequency of the vowel in the syllable included in the voice of the subject 2 indicated by the voice data.
  • the first formant frequency is the frequency corresponding to the lowest frequency peak among the frequency peaks included in the voice emitted from a person.
  • the second formant frequency is a frequency corresponding to the second lowest frequency peak among the frequency peaks included in the voice emitted from a person.
  • the quantity may include the variation in the first formant frequency, the second formant frequency, or the ratio of the second formant frequency to the first formant frequency, for a plurality of vowels in a plurality of syllables included in the voice of the subject 2 represented by the voice data. The variation is, for example, the standard deviation.
  • the above amount may include a change in the first formant frequency or the second formant frequency of a plurality of vowels in a plurality of syllables included in the voice of the subject 2 indicated by voice data.
  • the amount may include the time required to change the first formant frequency or the second formant frequency of a plurality of vowels in a plurality of syllables included in the voice of the subject 2 indicated by voice data.
  • the above amount may include the rate of change, that is, the ratio of the amount of change in the first formant frequency or the second formant frequency to the time required for that change, for a plurality of vowels in a plurality of syllables included in the voice of the subject 2 represented by the voice data.
  • the quantity may include, when the values of the second formant frequency are plotted against the first formant frequency for a plurality of vowels in a plurality of syllables included in the voice of the subject 2 represented by the voice data, the positional relationship between the plotted points, or the shape or area of the polygon formed by those points, in the coordinate space defined by the first formant frequency and the second formant frequency.
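As one concrete reading of the "variation" feature above, assuming (F1, F2) formant pairs have already been extracted per vowel by some formant tracker, the standard deviation of the F2/F1 ratio could be computed as follows (the function name is illustrative):

```python
import statistics

def formant_ratio_variation(formants):
    """Given (F1, F2) pairs in Hz for vowels extracted from the
    subject's speech, return the standard deviation of the F2/F1
    ratio, one of the candidate feature amounts."""
    ratios = [f2 / f1 for f1, f2 in formants]
    return statistics.stdev(ratios)
```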
  • the feature amount used in the estimation process S9 may include the sound pressure difference between the sound pressure of the consonant in an open syllable included in the voice of the subject 2 indicated by the voice data and the sound pressure of the vowel following that consonant.
  • the feature amount used in the estimation process S9 may include the variation in the sound pressure difference between the sound pressure of a consonant and the sound pressure of the vowel following the consonant, across a plurality of open syllables included in the voice of the subject 2 indicated by the voice data.
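A minimal sketch of the consonant-vowel sound pressure features just described, assuming per-syllable sound pressure levels (in dB) have already been measured; representing a syllable as a (consonant level, vowel level) pair is an assumption for illustration:

```python
import statistics

def cv_sound_pressure_features(syllables):
    """Each open syllable is a (consonant_level_dB, vowel_level_dB)
    pair. Returns the per-syllable vowel-minus-consonant differences
    and their variation (standard deviation) across syllables."""
    diffs = [vowel - consonant for consonant, vowel in syllables]
    variation = statistics.stdev(diffs) if len(diffs) > 1 else 0.0
    return diffs, variation
```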
  • the feature amount used in the estimation process S9 may include the total time required for the subject 2 to read aloud the fixed phrase.
  • the feature amount used in the estimation process S9 may include the amount of change in the time required for each of the multiple readings of the same fixed phrase when the same fixed phrase is read a plurality of times.
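The two timing features above (the total reading time, and the change in time across repetitions of the same fixed phrase) could be computed from per-repetition durations as follows; the helper name is illustrative:

```python
def timing_features(durations):
    """Per-repetition reading durations (in seconds) of the same fixed
    phrase -> (total reading time, change between first and last
    repetition)."""
    total = sum(durations)
    change = durations[-1] - durations[0]
    return total, change
```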
  • see Japanese Patent No. 6337362 for details of the voice feature amounts of the subject 2 that can be used in the examination of cognitive function by the determination method of the present embodiment.
  • the feature amount used in the estimation process S9 may include the attribute of the target person 2 acquired by the attribute acquisition unit F2.
  • the estimation unit F7 outputs the estimation result of the cognitive function of the subject 2.
  • the estimation result is indicated by, for example, a numerical value indicating the degree of cognitive function of the subject 2.
  • the estimation unit F7 may output, for example, a numerical value (for example, the second stage out of the five stages) indicating which stage of the plurality of predetermined stages the estimated degree of cognitive function belongs to.
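A minimal sketch of mapping a continuous model output in [0, 1] to one of five predetermined stages, as in the example above; the uniform thresholds are an assumption, since the patent does not specify how the stages are delimited:

```python
def to_stage(score, stages=5):
    """Map a model output in [0, 1] to one of `stages` discrete
    levels (1 = best ... stages = worst). Uniform, illustrative
    thresholds."""
    stage = int(score * stages) + 1
    return min(stage, stages)
```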
  • the output unit F8 outputs an estimation result signal indicating result information based on the estimation result of the estimation unit F7 to the communication device 3 and the second server 5 via the communication unit 41.
  • the result information output by the output unit F8 may be the estimation result (for example, a numerical value) output from the estimation unit F7 itself, or may be in another form.
  • the result information output by the output unit F8 is, for example, information indicating the next action the subject 2 should take based on the estimation result (for example, information prompting the subject to undergo a cognitive function test by a doctor at a hospital), or advice (information encouraging improvement of cognitive function), or the like.
  • the calling device 3 may notify the target person 2 of the estimation result by outputting it as voice through the call unit 32 or by displaying it as an image on the display unit 33.
  • the second server 5 includes a communication unit 51, a storage unit 52, and a processing unit 53. As shown in FIG. 2, the second server 5 is installed in, for example, the driver's license center 1.
  • the second server 5 executes the "recording process S11" and the “reflection process S12" among the processes of the determination method (see FIG. 3).
  • the communication unit 51 is a communication interface.
  • the communication unit 51 is a communication interface that can be connected to the telecommunication line NT1 and has a function of performing communication through the telecommunication line NT1.
  • the second server 5 can communicate with the first server 4 through the telecommunication line NT1.
  • the communication unit 51 transmits a signal to the first server 4 via the telecommunication line NT1.
  • the signal transmitted by the communication unit 51 to the first server 4 includes, for example, the above-mentioned signal indicating the identity verification information. Further, the communication unit 51 receives the signal from the first server 4 via the telecommunication line NT1.
  • the signals received by the communication unit 51 from the first server 4 include, for example, the above-mentioned request signal, estimation result signal, and the like.
  • the storage unit 52 is a device for storing information.
  • the storage unit 52 may include a ROM, RAM, EEPROM, and the like.
  • the above-mentioned information database DB1 is stored in the storage unit 52.
  • the processing unit 53 can be realized by, for example, a computer system including one or more processors (microprocessors) and one or more memories. That is, one or more processors execute one or more programs (applications) stored in one or more memories, thereby functioning as the processing unit 53.
  • the processing unit 53 is configured to perform overall control of the second server 5, that is, to control the operations of the communication unit 51 and the storage unit 52. Further, as shown in FIG. 1, the processing unit 53 includes a response unit F11, a registration unit F12, and a reflection unit F13. The response unit F11, the registration unit F12, and the reflection unit F13 are not substantive configurations but functions realized by the processing unit 53.
  • the response unit F11 makes a predetermined response according to the signal transmitted from the first server 4.
  • the response unit F11 identifies the target person 2 indicated by the target person information according to the signal indicating the target person information transmitted from the first server 4.
  • the response unit F11 identifies the target person 2 by, for example, comparing the received target person information with the information database DB1.
  • the response unit F11 transmits information indicating the comparison result (the identified target person 2) to the first server 4. That is, the response unit F11 cooperates with the target person identification unit F1 of the first server 4 to identify the target person 2.
  • the response unit F11 also transmits the attribute data related to the attributes of the specified target person 2.
  • the response unit F11 responds to the request signal transmitted from the first server 4 and transmits the identity verification information.
  • the identity verification information is control voice data which is voice data obtained by pre-recording the voice of the subject 2.
  • the control voice data may be recorded, for example, when a driver's license is issued (newly issued) to the target person 2, when a past driver's license is renewed, or independently of the issuance or renewal of a driver's license.
  • when the registration unit F12 receives the estimation result signal from the first server 4, it registers the estimation result of the cognitive function of the subject 2 indicated by the estimation result signal in the information database DB1.
  • the information of the estimation result is registered in the information database DB1 in association with the personal information of the corresponding target person 2.
  • the registration unit F12 may rewrite (update) the already registered information with the new information, or may add the new information while retaining the already registered information.
  • an expiration date is set for the estimation result of the cognitive function of the subject 2 registered in the information database DB1.
  • the expiration date is not particularly limited, but may be, for example, one year, half a year, three months, one month, half a month, one week, or the like.
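A minimal sketch of checking the expiration date of a registered estimation result, assuming a one-year validity period as in the example above (the function name and date representation are illustrative):

```python
from datetime import date, timedelta

def is_result_valid(recorded_on, today, validity_days=365):
    """Check whether a registered estimation result is still within
    its expiration period (here one year, as one of the examples)."""
    return today <= recorded_on + timedelta(days=validity_days)
```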
  • the reflection unit F13 reflects the estimation result of the cognitive function of the subject 2 registered in the information database DB1 in the procedure for issuing or renewing a driver's license to the subject 2 at the driver's license center 1, or in the determination of whether the license can be issued or renewed.
  • the reflection unit F13 varies the procedure the subject 2 follows to renew a driver's license at the driver's license center 1 according to, for example, the degree of the estimated cognitive function. For example, when the cognitive function of the subject 2 has deteriorated (for example, when the numerical value indicating the degree of cognitive function is at the worst of the five stages), the reflection unit F13 may require that a doctor perform a cognitive function test when the subject 2 renews the driver's license. On the other hand, when there is no problem with the cognitive function of the subject 2 (for example, when the numerical value is at the best of the five stages), the reflection unit F13 may allow the cognitive function test to be omitted when the subject 2 goes to the driver's license center 1 to renew the driver's license.
  • the reflection unit F13 may notify the staff of the driver's license center 1 of information indicating whether or not it is necessary for the doctor to carry out a cognitive function test for each subject 2.
  • the reflection unit F13 may cause the display device connected to the second server 5 to display for each subject 2 whether or not it is necessary to carry out a cognitive function test by a doctor.
  • the staff of the driver's license center 1 can refer to this information on the display device to determine whether it is necessary to carry out a cognitive function test when renewing the driver's license of each subject 2.
  • the reflection unit F13 may reflect only the latest estimation result, or may reflect multiple estimation results that are still within their expiration dates.
  • the subject 2, who is obliged to take a cognitive function test when renewing a driver's license, uses the call device 3 at home, for example, to telephone the first server 4 before going to the driver's license center 1 to renew the license (call process S1).
  • when the first server 4 receives a call from the call device 3, it transmits to the call device 3 an automatic voice prompting the input of the target person information for identifying the target person 2.
  • the target person 2 responds to the automatic voice by inputting target person information such as the driver's license number via the communication device 3, and the first server 4 acquires the target person information via the telecommunication line NT1.
  • the first server 4 transmits the acquired target person information to the second server 5, and the second server 5 identifies the target person 2 by comparing the received target person information with the information database DB1.
  • the second server 5 transmits information indicating the identified target person 2 to the first server 4, and the first server 4 identifies the target person 2 by receiving this information from the second server 5 (target person identification process S2). Further, the second server 5 transmits the attribute data indicating the attributes of the identified target person 2 to the first server 4, and the first server 4 acquires the attribute data of the target person 2 (attribute acquisition process S3).
  • the first server 4 sends a request signal to the second server 5 and acquires the identity verification information from the second server 5.
  • the first server 4 confirms, using the identity verification information during the call with the target person 2 after the target person 2 has been identified, that the caller on the call device 3 is the target person 2 (identity verification process S4).
  • the first server 4 transmits an automatic voice prompting the reading of a fixed phrase to the call device 3 (presentation process S5). Further, the first server 4 acquires from the call device 3 the voice data of the target person 2 reading out the fixed phrase (voice acquisition process S6), and records the acquired voice data in the storage unit 42 (recording process S7). The first server 4 then extracts (acquires) a feature amount from the recorded voice data (feature amount acquisition process S8).
  • the first server 4 estimates the cognitive function of the target person 2 by using the feature amount acquired in the feature amount acquisition process S8 and the attribute data acquired in the attribute acquisition process S3 (cognitive function estimation process (estimation process) S9). The first server 4 outputs the estimation result of the cognitive function obtained in the estimation process S9 to the communication device 3 and the second server 5 (output process S10).
  • the call device 3 notifies the target person 2 of the estimation result of the cognitive function obtained from the first server 4 via the call unit 32 or the display unit 33.
  • the second server 5 records the estimation result of the cognitive function obtained from the first server 4 in the storage unit 52 (recording process S11). Then, the second server 5 reflects the estimation result of the cognitive function of the target person 2 in the procedure of issuing or renewing the driver's license to the target person 2, or in the determination of whether the issuance or renewal is possible (reflection process S12).
  • when the subject 2 goes to the driver's license center 1 to renew the driver's license, the subject 2 can renew the license by following the procedure that reflects the estimation result of the cognitive function obtained by the determination method.
  • when a cognitive function test by a doctor is required, the staff of the driver's license center 1 informs the subject 2 to that effect and may urge the subject 2 to take the test.
  • in the determination method and the determination system 10 of the present embodiment, the subject 2 is made to read a fixed phrase, and the cognitive function of the subject 2 is examined by processing the voice data of the read voice.
  • the time required to execute this inspection is about 3 minutes even when the subject 2 reads out 6 types of fixed phrases, for example.
  • in the conventional examination method, in which answers are written on a test sheet and then scored, at least about 30 minutes are required, including the answering time and the scoring time. Therefore, by using the determination method and the determination system 10 of the present embodiment, the time required for the examination can be shortened compared with the conventional examination method.
  • the subject 2 can take the examination by the cognitive function determination method and determination system 10 of the present embodiment as long as he or she has at least the communication device 3. Therefore, the subject 2 can be examined at any place; unlike the conventional examination method, there is no need to go to an examination site. Thus, the determination method and the determination system 10 of the present embodiment reduce the burden on the subject 2 compared with the conventional examination method.
  • according to the cognitive function determination method and determination system 10 of the present embodiment, it is possible to reduce the burden at the time of license renewal. Likewise, it is possible to reduce the burden at the time of license issuance.
  • the (computer) program is a program for causing one or more processors to execute the determination method of the above-described embodiment.
  • the cognitive function determination system 10 in the present disclosure includes a computer system in, for example, a communication device 3, a first server 4, a second server 5, and the like.
  • the main hardware configuration of the computer system is a processor and a memory.
  • when the processor executes the program recorded in the memory of the computer system, the functions of the communication device 3, the first server 4, and the second server 5 are realized.
  • the program may be pre-recorded in the memory of the computer system, may be provided through a telecommunication line, or may be provided recorded on a non-transitory recording medium readable by the computer system, such as a memory card, an optical disk, or a hard disk drive.
  • a processor in a computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large scale integrated circuit (LSI).
  • the integrated circuit such as an IC or LSI referred to here has a different name depending on the degree of integration, and includes an integrated circuit called a system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration).
  • an FPGA (Field-Programmable Gate Array) that is programmed after the LSI is manufactured, or a reconfigurable logic device in which the junction relationships inside the LSI or the circuit partitions inside the LSI can be reconfigured, can also be adopted as the processor.
  • a plurality of electronic circuits may be integrated on one chip, or may be distributed on a plurality of chips.
  • the plurality of chips may be integrated in one device, or may be distributed in a plurality of devices.
  • the computer system referred to here includes a microcontroller having one or more processors and one or more memories. Therefore, the microcontroller is also composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.
  • it is not essential to the determination system 10 that the plurality of functions in each of the first server 4 and the second server 5 be integrated in one housing; the components of the first server 4 and the second server 5 may each be distributed across a plurality of housings. Further, at least some of the functions of the determination system 10, for example, some functions of the first server 4 and the second server 5, may be realized by a cloud (cloud computing) or the like.
  • At least a part of the functions of the determination system 10 distributed in a plurality of devices may be integrated in one housing.
  • some functions of the determination system 10 distributed in the first server 4 and the second server 5 may be integrated in one housing.
  • information prompting the target person 2 to read a fixed phrase may be visually given.
  • a sentence, an image, a video, or the like that prompts the target person 2 to read a fixed phrase may be displayed on the display unit 33 of the call device 3.
  • the content (content of automatic voice, sentence, image, etc.) presented in the presentation process S5 may include a question whose answer is uniquely determined.
  • the question may be related to a driver's license.
  • examples of questions include a question that displays an image of a stop sign on the display unit 33 and asks for its color, a question that displays an image of a speedometer on the display unit 33 and asks for the speed indicated by the meter, and a question that displays an image of a number plate on the display unit 33 and asks for the place name, numbers, and so on.
  • the image displayed on the display unit 33 may be a moving image, for example, an image whose size gradually changes (gradually increases).
  • the display unit 33 may temporarily display an image so that the target person 2 memorizes its content; the image is then hidden, and a question about the content shown in the image is presented to the target person 2.
  • a question related to the driver's license may be presented to the subject 2, and this question may prompt the subject 2 to read a fixed phrase as an answer to the question.
  • the content presented in the presentation process S5 may further include a question whose answer is not uniquely determined.
  • examples of such a question include questions asking the subject 2 to state his or her name, age, educational history, and so on.
  • the feature amount used in the estimation process S9 may further include information regarding the operation input of the target person 2 to the communication device 3 (for example, the operation speed for the touch panel). That is, the determination method may further include an operation acceptance process for receiving an operation input from the target person 2 to the communication device 3.
  • the cognitive function of the subject 2 may be estimated by further using the result of the operation input in addition to the voice data acquired in the voice acquisition process S6.
  • the information database DB1 including the personal information may be recorded in the storage unit 42 of the first server 4.
  • in that case, the target person identification process S2, the attribute acquisition process S3, and the identity verification process S4 can be executed by the first server 4 alone.
  • the target person 2 may be made to input the attribute data using the communication device 3.
  • the communication device 3 is not limited to the device that the target person 2 can carry.
  • the communication device 3 may be, for example, a so-called fixed-line telephone provided in the dwelling of the target person 2, or a public telephone installed for use by an unspecified number of people including the target person 2.
  • the communication device 3 is not limited to a telephone; any device provided with calling means such as a microphone and a speaker, and with the communication unit 31, may be used.
  • the calling device 3 may be provided in the driver's license center 1. That is, the subject 2 may take the inspection by the determination method of the present disclosure at the driver's license center 1.
  • the identity verification process S4 is not essential in the determination method and may be omitted.
  • the determination method may include a proxy inspection suppression process in place of or in addition to the identity verification process S4.
  • the proxy inspection suppression process is a process of deterring a person other than the subject 2 from taking the examination by the determination method on behalf of the subject 2.
  • the identity verification process S4 also functions as a proxy inspection suppression process.
  • the proxy inspection suppression process may include displaying a predetermined caution statement on the display unit 33 of the call device 3.
  • an example of such a caution statement is a sentence such as "During the test, voiceprint-based personal authentication continuously confirms that the same person is speaking."
  • the proxy inspection suppression process may include displaying, on the display unit 33 of the call device 3, an image of the target person 2 captured by the call device 3. In this case, it is not necessary to actually confirm the identity of the target person 2 from the processed image data of the captured image. Further, in this case, the determination method may include a discontinuation process that discontinues the examination when the target person 2 leaves the imaging range of the imaging unit.
  • the proxy inspection suppression process may include monitoring by a person (for example, a person other than the staff of the driver's license center 1). For example, if the driver's license renewal notice is mailed, the proxy inspection suppression process may include monitoring by a post office employee who delivers the renewal notice.
  • the identity verification process S4 may use the second personal identification number (one-time password) used when renewing the driver's license.
  • the notification of the estimation result to the target person 2 does not have to be performed immediately.
  • the target person 2 may be able to view the estimation result.
  • the notification of the estimation result may be performed via means other than the communication device 3 (for example, by mail).
  • The method for determining cognitive function of the first aspect is used when providing a service to the subject (2) at a driver's license center (1) that performs at least one of the services of issuing and renewing a driver's license.
  • The determination method includes a voice acquisition process (S6), an estimation process (S9), and a reflection process (S12).
  • In the voice acquisition process (S6), the subject (2) is made to read a fixed phrase aloud, and voice data related to the reading is acquired.
  • In the estimation process (S9), the cognitive function of the subject (2) is estimated based on the voice data acquired in the voice acquisition process (S6).
  • In the reflection process (S12), the result of the estimation process (S9) is reflected in the procedure for providing the service to the subject (2) at the driver's license center (1) or in the determination of whether the service can be provided.
  • In the method for determining cognitive function of the second aspect, the cognitive function is estimated using feature amounts extracted from the voice data.
  • The feature amounts include one or more selected from the following group.
  • The group includes a quantity related to the first formant frequency or the second formant frequency of a vowel in a syllable included in the voice of the subject (2) represented by the voice data.
  • The group includes the sound pressure difference between the sound pressure of a consonant in an open syllable included in the voice of the subject (2) and the sound pressure of the vowel following that consonant.
  • The group includes the variation of that sound pressure difference across a plurality of open syllables included in the voice of the subject (2).
  • The group includes the total time required to read the fixed phrase aloud.
  • The group includes the amount of change in the time required for each of multiple readings of the fixed phrase.
  • The method for determining cognitive function of the fourth aspect, in any one of the first to third aspects, further includes a recording process (S7) for recording the voice data acquired in the voice acquisition process (S6).
  • In the method for determining cognitive function of the fifth aspect, in any one of the first to fourth aspects, the estimation process (S9) estimates the cognitive function using a trained model (M1) generated by machine learning.
  • In the method for determining cognitive function of the sixth aspect, in any one of the first to fifth aspects, the voice acquisition process (S6) acquires the voice data through a telecommunication line (NT1).
  • In the method for determining cognitive function of the seventh aspect, in any one of the first to sixth aspects, the estimation process (S9) estimates the cognitive function by further using attribute data related to an attribute of the subject (2) in addition to the voice data acquired in the voice acquisition process (S6).
  • The method for determining cognitive function of the eighth aspect, in any one of the first to seventh aspects, further includes a presentation process (S5) for presenting information prompting the subject (2) to read the fixed phrase aloud.
  • In the method for determining cognitive function of the ninth aspect, in the eighth aspect, the presentation process (S5) reproduces an automatic voice prompting the subject (2) to read the fixed phrase aloud.
  • In the method for determining cognitive function of the tenth aspect, in the eighth or ninth aspect, the presentation process (S5) visually presents the subject (2) with information prompting the reading of the fixed phrase.
  • In the method for determining cognitive function of the eleventh aspect, in any one of the eighth to tenth aspects, the presentation process (S5) presents a question related to the driver's license to the subject (2).
  • The question includes content that prompts the subject (2) to read the fixed phrase as an answer to the question.
  • The method for determining cognitive function of the twelfth aspect, in any one of the first to eleventh aspects, further includes an identity verification process (S4) for confirming that the subject (2) is the person in question.
  • In the method for determining cognitive function of the thirteenth aspect, in the twelfth aspect, the identity verification process (S4) is performed using biological information of the subject (2).
  • The biological information includes the voiceprint information of the subject (2).
  • The method for determining cognitive function of the fifteenth aspect, in any one of the first to fourteenth aspects, further includes a subject identification process (S2) for identifying the subject (2).
  • The subject identification process (S2) is performed using the biological information of the subject (2).
  • The method for determining cognitive function of the seventeenth aspect, in any one of the first to sixteenth aspects, further includes an operation acceptance process for receiving operation input from the subject (2).
  • The estimation process (S9) estimates the cognitive function by further using the result of the operation input in addition to the voice data acquired in the voice acquisition process (S6).
  • The program of the eighteenth aspect causes one or more processors to execute the method for determining cognitive function of any one of the first to seventeenth aspects.
  • The cognitive function determination system (10) of the nineteenth aspect is used when providing services to the subject (2) at a driver's license center (1) that performs at least one of the services of issuing and renewing a driver's license.
  • The determination system (10) includes a voice acquisition unit (F5), an estimation unit (F7), and a reflection unit (F13).
  • The voice acquisition unit (F5) causes the subject (2) to read the fixed phrase aloud and acquires voice data related to the reading.
  • The estimation unit (F7) estimates the cognitive function of the subject (2) based on the voice data acquired by the voice acquisition unit (F5).
  • The reflection unit (F13) reflects the estimation result of the estimation unit (F7) in the procedure for providing the service to the subject (2) at the driver's license center (1) or in the determination of whether the service can be provided.
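Several of the feature amounts listed in the second and third aspects (the consonant-vowel sound pressure difference and its variation across open syllables, the total reading time, and the change in time over repeated readings) can be sketched in code. The sketch below is a minimal illustration that assumes pre-measured per-syllable sound pressures (in dB) and per-reading durations (in seconds) as inputs; the actual signal processing that would extract formants and sound pressures from raw audio is not part of the disclosure shown here and is omitted.

```python
# Illustrative sketch only: input formats are assumptions, not the
# patent's actual data representation.
from statistics import mean, pstdev

def sound_pressure_features(open_syllables):
    """open_syllables: list of (consonant_dB, vowel_dB) pairs."""
    diffs = [vowel - cons for cons, vowel in open_syllables]
    return {
        "mean_pressure_diff": mean(diffs),         # consonant-vowel contrast
        "pressure_diff_variation": pstdev(diffs),  # variation across syllables
    }

def reading_time_features(reading_times):
    """reading_times: duration (s) of each repeated reading of the phrase."""
    changes = [b - a for a, b in zip(reading_times, reading_times[1:])]
    return {
        "total_time": sum(reading_times),           # total reading time
        "time_change_per_reading": changes,         # change between readings
    }

feats = sound_pressure_features([(52.0, 64.0), (50.5, 61.0), (53.0, 66.5)])
times = reading_time_features([4.2, 3.9, 3.8])
```

In a real system these feature amounts would be fed to the estimation process (S9), for example as the input vector of a trained model.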


Abstract

The present disclosure addresses the problem of reducing the burden at the time of license renewal. A determination method for a cognitive function is used in rendering a service to a subject (2) at a driver's license center (1) that performs the service of issuance and/or renewal of a driver's license. The determination method comprises voice acquisition processing, estimation processing, and reflection processing. In the voice acquisition processing, the subject (2) is caused to read out a fixed sentence, and voice data related to the readout is acquired. In the estimation processing, the cognitive function of the subject (2) is estimated on the basis of the voice data acquired in the voice acquisition processing. In the reflection processing, the result of the estimation processing is reflected in a procedure for rendering the service to the subject (2) at the driver's license center (1) or in a determination as to whether the rendering of the service is permitted.

Description

Cognitive function determination method, program, and cognitive function determination system
The present disclosure generally relates to a cognitive function determination method, a program, and a cognitive function determination system, and more particularly to a cognitive function determination method, program, and determination system used in connection with services at a driver's license center.
Patent Document 1 discloses a system for managing the progress of a cognitive function test administered to late-stage elderly people (aged 75 and over) at the time of renewal of a driver's license for automobiles and the like.
The system of Patent Document 1 is composed of an examiner device used by an examiner and a plurality of terminal devices used by examinees.
Each of the plurality of terminal devices is a tablet-type terminal having a display input device with a touch panel on its display screen. Explanations, cautions, questions, answer fields, and the like are displayed on the terminal's screen. The examinee inputs answers by touching the screen with a finger or a dedicated pen, or by handwriting letters, numbers, and the like. The examiner device has a display that shows information for managing the progress of the cognitive function test.
In the system of Patent Document 1, the cognitive function test takes time to administer, so the burden on examinees who take the test at license renewal can be heavy. Moreover, as the elderly population increases, the number of people required to take the cognitive function test at license renewal is also increasing; the longer the test takes per person, the greater the burden on the driver's license center as well.
Japanese Patent No. 6517987
The present disclosure has been made in view of the above circumstances, and an object thereof is to provide a cognitive function determination method, a program, and a cognitive function determination system capable of reducing the burden at the time of license renewal.
The method for determining cognitive function according to one aspect of the present disclosure is used when providing a service to a subject at a driver's license center that performs at least one of the services of issuing and renewing a driver's license. The determination method includes a voice acquisition process, an estimation process, and a reflection process. In the voice acquisition process, the subject is made to read a fixed phrase aloud, and voice data related to the reading is acquired. In the estimation process, the cognitive function of the subject is estimated based on the voice data acquired in the voice acquisition process. In the reflection process, the result of the estimation process is reflected in the procedure for providing the service to the subject at the driver's license center or in the determination of whether the service can be provided.
The program according to one aspect of the present disclosure causes one or more processors to execute the above method for determining cognitive function.
The cognitive function determination system according to one aspect of the present disclosure is used when providing a service to a subject at a driver's license center that performs at least one of the services of issuing and renewing a driver's license. The determination system includes a voice acquisition unit, an estimation unit, and a reflection unit. The voice acquisition unit causes the subject to read a fixed phrase aloud and acquires voice data related to the reading. The estimation unit estimates the cognitive function of the subject based on the voice data acquired by the voice acquisition unit. The reflection unit reflects the estimation result of the estimation unit in the procedure for providing the service to the subject at the driver's license center or in the determination of whether the service can be provided.
FIG. 1 is a block diagram showing the configuration of a determination system for executing the cognitive function determination method according to one embodiment. FIG. 2 is a schematic diagram schematically showing the determination system. FIG. 3 is a flowchart showing an example of the determination method.
 (1) Overview
 Currently in Japan, drivers aged 75 and over are required to take a cognitive function test when renewing their driver's license. At present, this test is administered mainly by having examinees write their answers on a test sheet in response to questions posed by an instructor. In such a scheme, however, the burden on the administering side can become heavy, owing to the need to secure test venues and instructors and to score the answers. The test itself also takes time, and examinees must travel to the test venue at a designated time, which can burden the examinee as well. The cognitive function determination method of the present embodiment (hereinafter also simply the "determination method") and the cognitive function determination system 10 (hereinafter also simply the "determination system 10") are assumed to be used, for example, as a substitute for such a cognitive function test, or as a primary screening for narrowing down who must take it.
For example, a driver for whom a cognitive function test is mandatory at license renewal (hereinafter also "subject 2") who receives a renewal notice by mail takes an examination according to the determination method of the present embodiment before renewing the license at the driver's license center 1. When the subject 2 renews the driver's license at the driver's license center 1, the examination result can be used either as a substitute for the above cognitive function test or to narrow down the examinees who must take it. This makes it possible to reduce the burden at the time of license renewal.
First, an outline of the cognitive function determination method of the present embodiment (hereinafter also simply the "determination method") and the cognitive function determination system 10 will be described.
As described above, the determination method of the present embodiment is used when providing services to the subject 2 at the driver's license center 1 (see FIG. 2). Here, the driver's license center 1 means a facility or place that performs at least one of the services of newly issuing (hereinafter also "issuing") and renewing an automobile driver's license (hereinafter also simply a "driver's license").
The driver's license center 1 in the present embodiment need only perform at least one of the services of issuing and renewing a driver's license, and may include facilities called a driver's license examination center, a traffic safety (education) center, a general traffic center, a driver education center, a driver training center, a license center, a safe-driving school, a test center, a (license) renewal center, and the like. The driver's license center 1 in the present embodiment may also include a police station as a facility that performs the service of renewing a driver's license. The driver's license center 1 may further perform other services, such as reissuing a driver's license.
As shown in FIG. 3, the determination method of the present embodiment includes a voice acquisition process S6, an estimation process S9, and a reflection process S12.
The voice acquisition process S6 includes having the subject 2 read a fixed phrase aloud and acquiring voice data related to the reading. The estimation process S9 includes estimating the cognitive function of the subject 2 based on the voice data acquired in the voice acquisition process S6. The reflection process S12 includes reflecting the result of the estimation process S9 in the procedure for providing the service to the subject 2 at the driver's license center 1 or in the determination of whether the service can be provided.
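The flow of these three processes can be sketched as a minimal pipeline. All function names, the toy scoring heuristic, and the screening threshold below are illustrative assumptions, not the actual implementation of the disclosure.

```python
# Minimal sketch of the S6 -> S9 -> S12 flow; everything here is a
# hypothetical stand-in for the disclosed processes.

def acquire_voice(readings):
    """S6: collect voice data from the subject's readings of a fixed phrase."""
    return {"durations": readings}   # assumed representation: reading times (s)

def estimate_cognitive_function(voice_data):
    """S9: estimate a score from the voice data (toy heuristic, not the
    trained model M1 of the disclosure)."""
    durations = voice_data["durations"]
    avg = sum(durations) / len(durations)
    return max(0.0, 1.0 - avg / 10.0)    # toy score clamped to [0, 1]

def reflect(score, threshold=0.5):
    """S12: reflect the estimate in the license-renewal procedure."""
    if score >= threshold:
        return "standard renewal procedure"
    return "refer to full cognitive function test"

voice = acquire_voice([4.2, 3.9, 3.8])
score = estimate_cognitive_function(voice)
decision = reflect(score)
```

In the embodiment described below, S6 and S9 run on the first server 4, while the reflection into the renewal procedure involves the second server 5 at the driver's license center 1.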
The determination system 10 (see FIG. 1) is one embodiment of the determination method of the present embodiment. That is, the determination system 10 is used when providing services to the subject 2 at the driver's license center 1, which performs at least one of the services of issuing and renewing a driver's license. The determination system 10 includes a voice acquisition unit F5 for executing the voice acquisition process S6, an estimation unit F7 for executing the estimation process S9, and a reflection unit F13 for executing the reflection process S12.
In the determination method and determination system 10 of the present embodiment, the cognitive function of the subject 2, which is to be reflected in the procedure for issuing or renewing the driver's license or in the determination of whether to do so, is estimated based on voice data representing the subject 2 reading a fixed phrase aloud. Compared with, for example, a test method in which answers are written on a test sheet and then scored, this can shorten the time and effort required for the cognitive function examination, reducing the burden on both the subject 2 and the driver's license center 1 at license renewal. In short, the determination method and determination system 10 of the present embodiment can reduce the burden at the time of license renewal. Likewise, when a cognitive function test is performed at the time of issuing a driver's license, using the determination method and determination system 10 of the present embodiment can reduce the burden at the time of license issuance.
 (2) Details
 Hereinafter, the determination method of the present embodiment and the details of the determination system 10 will be described with reference to FIGS. 1 to 3.
 (2.1) Overall Configuration
 FIGS. 1 and 2 show the configuration of the determination system 10 for executing the determination method of the present embodiment.
As shown in FIG. 1, the determination system 10 includes a first server 4 and a second server 5. The determination system 10 can communicate with the call device 3 possessed by the subject 2.
Here, the call device 3 is assumed to be a device that the subject 2 can carry. The first server 4 is assumed to be installed in a facility of an organization (a company or the like) that provides at least part of the determination method of the present embodiment as a service. The second server 5 is assumed to be installed in the driver's license center 1 or in a facility of the organization having jurisdiction over the driver's license center 1. However, the present embodiment is not limited to such an arrangement.
The call device 3 is a device for converting the voice of the subject 2 into voice data and transmitting the resulting voice data to other equipment. The call device 3 is also configured to convert voice data transmitted from other equipment into voice and output it. As an example, the call device 3 is a mobile terminal, a mobile phone, or a personal computer possessed by the subject 2. The mobile terminal is, for example, a smartphone or a tablet terminal. Here, the call device 3 is assumed to be a smartphone. Therefore, in the following, the call device 3 communicating with another device (the first server 4) is also referred to as "making a call".
As shown in FIG. 1, the call device 3 includes a communication unit 31 and a call unit 32. The call device 3 of the present embodiment further includes a display unit 33, an operation unit 34, and a processing unit 35.
The communication unit 31 is a communication interface. In particular, the communication unit 31 is connectable to the telecommunication line NT1 and has a function of communicating through the telecommunication line NT1. The call device 3 can thereby communicate with the first server 4 through the telecommunication line NT1 (see FIG. 2).
The telecommunication line NT1 may include, for example, a mobile communication network, a PSTN (public switched telephone network), the Internet, and the like. The telecommunication line NT1 may be composed not only of a network conforming to a single communication protocol but also of a plurality of networks conforming to different communication protocols. The communication protocol can be selected from various well-known wired and wireless communication standards. Although simplified in FIG. 2, the telecommunication line NT1 may include data communication equipment such as repeater hubs, switching hubs, bridges, gateways, and routers.
The communication unit 31 transmits signals to the first server 4 via the telecommunication line NT1. The signals transmitted by the communication unit 31 to the first server 4 include, for example, voice data (a voice signal) representing the voice of the subject 2 acquired by the call unit 32. They also include, for example, an operation signal output in response to an operation input by the subject 2 on the operation unit 34.
The communication unit 31 receives signals from the first server 4 via the telecommunication line NT1. The signals received by the communication unit 31 from the first server 4 include, for example, voice data (a voice signal) that the first server 4 automatically transmits in response to a request from the call device 3 (hereinafter also an "automatic voice signal"). They also include, for example, an estimation result signal indicating the result of cognitive function estimation by the estimation unit F7.
The call unit 32 includes a speaker and a microphone. The microphone converts sound, including the voice uttered by the subject 2, into voice data (a voice signal) and outputs it to the outside via the communication unit 31. The speaker converts voice data (a voice signal) input from the outside via the communication unit 31 into voice (sound) and outputs it.
The display unit 33 performs display using data and the like received by the communication unit 31. The display unit 33 includes, for example, an LCD (Liquid Crystal Display) or an organic EL display.
The operation unit 34 receives operation input from the subject 2 and outputs a signal corresponding to the received operation. In the present embodiment, since the call device 3 is a general-purpose smartphone, the display unit 33 and the operation unit 34 are integrated, as in a touch-panel display. With a touch-panel display, the call device 3 determines that an object such as a button has been operated when the operation unit 34 detects an operation (tap, swipe, drag, or the like) on that object on a screen displayed by the display unit 33. That is, in addition to performing various displays, the display unit 33 and the operation unit 34 function as a user interface that receives operation input from the subject 2.
The processing unit 35 is configured to control the call device 3 as a whole, that is, to control the operation of the communication unit 31, the call unit 32, the display unit 33, and the operation unit 34. The processing unit 35 can be realized by, for example, a computer system including one or more processors (microprocessors) and one or more memories. That is, the one or more processors function as the processing unit 35 by executing one or more programs (applications) stored in the one or more memories. Although the programs are recorded in advance in the memory of the processing unit 35 here, they may instead be provided through a telecommunication line such as the Internet, or recorded on a non-transitory recording medium such as a memory card.
The call device 3 may include a biological information acquisition unit for acquiring the biological information of the subject 2. The biological information acquisition unit may have, for example, a fingerprint acquisition unit that acquires fingerprint information of the subject 2, or a face information acquisition unit that acquires face information of the subject 2. The fingerprint acquisition unit and/or the face information acquisition unit may be, for example, an imaging unit (camera) provided in the call device 3. The biological information acquisition unit may also have a vein information acquisition unit that acquires vein information of the subject 2.
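As a hedged illustration of how such biometric information could serve the identity verification process S4, the sketch below compares an enrolled voiceprint feature vector against one extracted from the current call using cosine similarity. The embedding format, the toy four-dimensional vectors, and the 0.9 threshold are assumptions for illustration only; they are not specified by the disclosure.

```python
# Hypothetical voiceprint check: compare feature vectors by cosine similarity.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_identity(enrolled_voiceprint, current_voiceprint, threshold=0.9):
    """Return True if the current speaker matches the enrolled subject.
    The threshold is an illustrative assumption."""
    return cosine_similarity(enrolled_voiceprint, current_voiceprint) >= threshold

enrolled = [0.12, 0.80, 0.35, 0.41]                  # stored at enrollment
same = verify_identity(enrolled, [0.13, 0.79, 0.36, 0.40])   # same speaker
other = verify_identity(enrolled, [0.90, 0.10, 0.05, 0.70])  # different speaker
```

Running such a check repeatedly during the call would also realize the continuous voiceprint confirmation mentioned in the caution statement above.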
 第1サーバ4は、図1に示すように、通信部41と記憶部42と処理部43とを備えている。 As shown in FIG. 1, the first server 4 includes a communication unit 41, a storage unit 42, and a processing unit 43.
 第1サーバ4は、本実施形態の判定方法を実行する判定システム10の主体を構成する。第1サーバ4は、判定方法の各処理(図3参照)のうち「対象者特定処理S2」、「属性取得処理S3」、「本人確認処理S4」、「提示処理S5」、「音声取得処理S6」、「記録処理S7」、「特徴量取得処理S8」、「推定処理(認知機能推定処理)S9」、「出力処理S10」を実行する。 The first server 4 constitutes the main body of the determination system 10 that executes the determination method of the present embodiment. Among the processes of the determination method (see FIG. 3), the first server 4 executes the "target person identification process S2", "attribute acquisition process S3", "identity verification process S4", "presentation process S5", "voice acquisition process S6", "recording process S7", "feature amount acquisition process S8", "estimation process (cognitive function estimation process) S9", and "output process S10".
 通信部41は、通信インターフェースである。通信部41は、電気通信回線NT1に接続可能な通信インターフェースであり、電気通信回線NT1を通じた通信を行う機能を有する。第1サーバ4は、電気通信回線NT1を通じて通話装置3と通信可能である。また、第1サーバ4は、電気通信回線NT1を通じて第2サーバ5と通信可能である。なお、通信部41が通話装置3と通信するための通信プロトコルは、通信部41が第2サーバ5と通信するための通信プロトコルと同じであってもよいし異なっていてもよい。 The communication unit 41 is a communication interface. The communication unit 41 is a communication interface that can be connected to the telecommunication line NT1 and has a function of performing communication through the telecommunication line NT1. The first server 4 can communicate with the communication device 3 through the telecommunication line NT1. Further, the first server 4 can communicate with the second server 5 through the telecommunication line NT1. The communication protocol for the communication unit 41 to communicate with the communication device 3 may be the same as or different from the communication protocol for the communication unit 41 to communicate with the second server 5.
 通信部41は、電気通信回線NT1を介して通話装置3へ信号を送信する。通信部41が通話装置3へ送信する信号には、上述の自動音声信号、推定結果信号等がある。また、通信部41は、電気通信回線NT1を介して通話装置3からの信号を受信する。通信部41が通話装置3から受信する信号には、上述の音声信号、操作信号等がある。 The communication unit 41 transmits a signal to the communication device 3 via the telecommunication line NT1. The signals transmitted by the communication unit 41 to the communication device 3 include the above-mentioned automatic voice signal, estimation result signal, and the like. Further, the communication unit 41 receives the signal from the communication device 3 via the telecommunication line NT1. The signals received by the communication unit 41 from the communication device 3 include the above-mentioned voice signals, operation signals, and the like.
 通信部41は、電気通信回線NT1を介して第2サーバ5へ信号を送信する。通信部41が第2サーバ5へ送信する信号には、例えば、対象者2の本人確認を行うために用いられる本人確認情報を要求する、要求信号がある。その他、通信部41が第2サーバ5へ送信する信号には、推定部F7による認知機能の推定結果を示す推定結果信号がある。 The communication unit 41 transmits a signal to the second server 5 via the telecommunication line NT1. The signal transmitted by the communication unit 41 to the second server 5 includes, for example, a request signal that requests identity verification information used for verifying the identity of the target person 2. In addition, the signal transmitted by the communication unit 41 to the second server 5 includes an estimation result signal indicating an estimation result of the cognitive function by the estimation unit F7.
 通信部41は、電気通信回線NT1を介して第2サーバ5からの信号を受信する。通信部41が第2サーバ5から受信する信号には、例えば、要求信号に応じて第2サーバ5から送信される、本人確認情報を示す信号がある。 The communication unit 41 receives the signal from the second server 5 via the telecommunication line NT1. The signal received from the second server 5 by the communication unit 41 includes, for example, a signal indicating identity verification information transmitted from the second server 5 in response to a request signal.
 記憶部42は、情報を記憶するための装置である。記憶部42は、ROM(Read Only Memory)、RAM(Random Access Memory)、EEPROM等を含み得る。記憶部42には、通話装置3からの要求(電話での呼び出し)を受けて自動送信される自動音声の音声データが、記憶される。また、記憶部42には、学習済モデル(後述する)が記憶される。また、記憶部42は、通話装置3から送信される対象者2の音声データ(音声信号)を記憶するための領域を、有している。 The storage unit 42 is a device for storing information. The storage unit 42 may include a ROM (Read Only Memory), a RAM (Random Access Memory), an EEPROM, and the like. The storage unit 42 stores voice data of automatic voice that is automatically transmitted in response to a request (call by telephone) from the calling device 3. Further, the learned model (described later) is stored in the storage unit 42. Further, the storage unit 42 has an area for storing the voice data (voice signal) of the target person 2 transmitted from the communication device 3.
 処理部43は、例えば、1以上のプロセッサ(マイクロプロセッサ)と1以上のメモリとを含むコンピュータシステムにより実現され得る。つまり、1以上のプロセッサが1以上のメモリに記憶された1以上のプログラム(アプリケーション)を実行することで、処理部43として機能する。 The processing unit 43 can be realized by, for example, a computer system including one or more processors (microprocessors) and one or more memories. That is, one or more processors execute one or more programs (applications) stored in one or more memories, thereby functioning as the processing unit 43.
 処理部43は、第1サーバ4の全体的な制御、すなわち、通信部41及び記憶部42の動作を制御するように構成される。また、図1に示すように、処理部43は、対象者特定部F1と、属性取得部F2と、本人確認部F3と、提示部F4と、音声取得部F5と、記録部F6と、推定部F7と、出力部F8とを備えている。なお、対象者特定部F1、属性取得部F2、本人確認部F3、提示部F4、音声取得部F5、記録部F6、推定部F7及び出力部F8は、実体のある構成を示しているわけではなく、処理部43によって実現される機能を示している。 The processing unit 43 is configured to perform overall control of the first server 4, that is, to control the operations of the communication unit 41 and the storage unit 42. Further, as shown in FIG. 1, the processing unit 43 includes a target person identification unit F1, an attribute acquisition unit F2, an identity verification unit F3, a presentation unit F4, a voice acquisition unit F5, a recording unit F6, an estimation unit F7, and an output unit F8. Note that the target person identification unit F1, the attribute acquisition unit F2, the identity verification unit F3, the presentation unit F4, the voice acquisition unit F5, the recording unit F6, the estimation unit F7, and the output unit F8 do not represent substantive configurations, but rather functions realized by the processing unit 43.
 対象者特定部F1は、通話装置3で通話している対象者2を特定する。本実施形態では、対象者特定部F1は、第2サーバ5の応答部F11(後述する)と協働して対象者2を特定する。 The target person identification unit F1 identifies the target person 2 who is talking on the call device 3. In the present embodiment, the target person identification unit F1 identifies the target person 2 in cooperation with the response unit F11 (described later) of the second server 5.
 対象者特定部F1は、例えば、通話装置3からの要求(電話での呼び出し)を受けると、記憶部42に記憶されている自動音声の音声データ(自動音声信号)を、通信部41を介して通話装置3へ送信する。 When the target person identification unit F1 receives a request (a telephone call) from the communication device 3, for example, it transmits the automatic voice data (automatic voice signal) stored in the storage unit 42 to the communication device 3 via the communication unit 41.
 対象者特定部F1が送信させる自動音声は、対象者2を特定するための対象者情報の入力を促す内容を含み得る。対象者情報は、例えば、対象者2の免許証番号を含み得る。或いは、対象者情報は、対象者2の氏名、住所等を含み得る。 The automatic voice transmitted by the target person identification unit F1 may include a content prompting the input of the target person information for identifying the target person 2. The target person information may include, for example, the license number of the target person 2. Alternatively, the target person information may include the name, address, and the like of the target person 2.
 なお、対象者情報は通話装置3の電話番号であってもよく、この場合には対象者2による対象者情報の更なる入力は不要である。 Note that the target person information may be the telephone number of the calling device 3, and in this case, the target person 2 does not need to further input the target person information.
 対象者特定部F1は、自動音声に対して対象者2から通話装置3に入力された対象者情報を、通話装置3から受信し、受信した対象者情報に基づいて対象者2を特定する。対象者特定部F1による対象者2の特定は、例えば、受信した対象者情報を、所定の情報データベースDB1に記憶されている本人情報と対照することで、行われる。本人情報を含む情報データベースDB1は、本実施形態では、第2サーバ5の記憶部52に記憶されている。 The target person identification unit F1 receives the target person information input from the target person 2 to the call device 3 for the automatic voice from the call device 3, and identifies the target person 2 based on the received target person information. The target person 2 is specified by the target person identification unit F1, for example, by comparing the received target person information with the personal information stored in the predetermined information database DB1. In the present embodiment, the information database DB 1 including the personal information is stored in the storage unit 52 of the second server 5.
 対象者特定部F1は、対象者情報を第2サーバ5へ送信し、第2サーバ5から、対象者情報と情報データベースDB1との対照結果を受信する。対照結果は、第2サーバ5によって特定された対象者2を示す情報を含む。これにより、対象者特定部F1は、対象者2を特定する。なお、対象者特定部F1自体(すなわち第1サーバ4の処理部43)が、対象者情報と本人情報とを対照することで、対象者2を特定してもよい。例えば、対象者特定部F1は、第2サーバ5から情報データベースDB1の一部又は全部の情報を受信し、受信した情報データベースDB1の情報と対象者情報とを対照することで、対象者2を特定してもよい。 The target person identification unit F1 transmits the target person information to the second server 5, and receives from the second server 5 the result of comparing the target person information with the information database DB1. The comparison result includes information indicating the target person 2 identified by the second server 5. In this way, the target person identification unit F1 identifies the target person 2. Note that the target person identification unit F1 itself (that is, the processing unit 43 of the first server 4) may identify the target person 2 by comparing the target person information with the personal information. For example, the target person identification unit F1 may receive part or all of the information in the information database DB1 from the second server 5, and identify the target person 2 by comparing the received information of the information database DB1 with the target person information.
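As an illustration only, the comparison of target person information against the information database DB1 in the target person identification process S2 could be sketched as follows. The data model and field names (license number, name, address) are hypothetical assumptions for this sketch; the description above does not specify a concrete format.

```python
from typing import Optional

def identify_subject(subject_info: dict, info_db: dict) -> Optional[dict]:
    """Return the registered personal record matching the target person
    information, or None if there is no match (hypothetical data model)."""
    record = info_db.get(subject_info.get("license_number"))
    if record is None:
        return None
    # Cross-check name/address when they are supplied as well.
    for key in ("name", "address"):
        if key in subject_info and subject_info[key] != record.get(key):
            return None
    return record
```

A matching record would also carry the attribute data (age, etc.) used later by the attribute acquisition unit F2.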
 すなわち、本実施形態の判定方法は、対象者2を特定するための対象者特定処理S2を含む。 That is, the determination method of the present embodiment includes the target person identification process S2 for specifying the target person 2.
 なお、通話装置3が指紋取得部等の生体情報取得部を有している場合、対象者情報は、対象者2の指紋等の生体情報を含み得る。或いは、対象者情報は、対象者2の生体情報として、対象者2の音声から得られる声紋の情報を含み得る。この場合、対象者特定処理S2は、対象者2の生体情報(指紋、声紋等)を用いて行われてもよい。 When the communication device 3 has a biometric information acquisition unit such as a fingerprint acquisition unit, the target person information may include biometric information such as the fingerprint of the target person 2. Alternatively, the subject information may include voiceprint information obtained from the voice of the subject 2 as the biological information of the subject 2. In this case, the target person identification process S2 may be performed using the biological information (fingerprint, voiceprint, etc.) of the target person 2.
 属性取得部F2は、対象者特定部F1で特定された対象者2の属性に関する属性データを取得する。対象者2の属性としては、例えば、人種・性別・年齢(年代)・教育歴等がある。属性データは、例えば、上述の情報データベースDB1に、本人情報と紐付けて記憶されている。属性取得部F2は、通信部41を介して、第2サーバ5から対象者2の属性データを取得する。 The attribute acquisition unit F2 acquires the attribute data related to the attributes of the target person 2 specified by the target person identification unit F1. The attributes of the subject 2 include, for example, race, gender, age (age), educational history, and the like. The attribute data is stored, for example, in the above-mentioned information database DB1 in association with the personal information. The attribute acquisition unit F2 acquires the attribute data of the target person 2 from the second server 5 via the communication unit 41.
 すなわち、本実施形態の判定方法は、対象者2の属性に関する属性データを取得する属性取得処理S3を含む。 That is, the determination method of the present embodiment includes the attribute acquisition process S3 for acquiring the attribute data related to the attributes of the target person 2.
 本人確認部F3は、通話装置3で実際に通話している人が、対象者特定部F1で特定された対象者2本人で間違いないかを確認する。すなわち、通話装置3で通話している人が、自分のものではない他人の対象者情報(免許証番号等)を誤って入力する可能性があるので、本人確認部F3が、通話している人が対象者2本人であることを確認する。本人確認部F3は、対象者特定部F1で対象者2を特定すると、通信部41を介して第2サーバ5へ要求信号を送信し、要求信号の応答として第2サーバ5から、本人確認情報(対照情報)を取得する。本人確認情報は、例えば、対象者2の音声を事前に録音して得られた音声データ(以下、「対照音声データ」という)である。本人確認部F3は、通話装置3から得られる音声データを、対照音声データと比較することで、音声データで示される音声の発声者(通話している人)が対象者2本人であることを確認する。例えば、本人確認部F3は、各音声データで示される音声の声紋を比較し、その一致度を判定することで、本人確認を行う。 The identity verification unit F3 confirms that the person actually talking on the communication device 3 is indeed the target person 2 identified by the target person identification unit F1. That is, since the person talking on the communication device 3 may mistakenly input target person information (such as a license number) belonging to someone else, the identity verification unit F3 confirms that the person talking is the target person 2 himself or herself. When the target person identification unit F1 identifies the target person 2, the identity verification unit F3 transmits a request signal to the second server 5 via the communication unit 41, and acquires identity verification information (control information) from the second server 5 as a response to the request signal. The identity verification information is, for example, voice data obtained by recording the voice of the target person 2 in advance (hereinafter referred to as "control voice data"). The identity verification unit F3 compares the voice data obtained from the communication device 3 with the control voice data to confirm that the speaker of the voice indicated by the voice data (the person talking) is the target person 2 himself or herself. For example, the identity verification unit F3 performs identity verification by comparing the voiceprints of the voices indicated by the respective voice data and determining their degree of matching.
 すなわち、本実施形態の判定方法は、対象者2が本人であることを確認するための本人確認処理S4を含む。本人確認処理S4は、対象者2の生体情報、より詳細には対象者2の声紋を用いて行われる。 That is, the determination method of the present embodiment includes the identity verification process S4 for confirming that the target person 2 is the person himself or herself. The identity verification process S4 is performed using the biological information of the target person 2, more specifically, the voiceprint of the target person 2.
 本人確認処理S4は、例えば、通話装置3から音声データを受信した場合に、任意のタイミングで行われてよい。例えば、本人確認処理S4は、対象者特定処理S2と平行して行われてよい。すなわち、対象者特定処理S2において対象者2の生体情報(声紋の情報)を用いて対象者2を特定し、かつ、本人確認処理S4において対象者2の生体情報(声紋の情報)を用いて本人確認を行なう構成の場合、対象者特定処理S2と平行して本人確認処理S4を行うことが可能である。また、本人確認処理S4は、判定方法の実行中及び実行後に、音声取得処理S6で取得される音声データを用いて随時行われてよい。 The identity verification process S4 may be performed at an arbitrary timing, for example, when voice data is received from the calling device 3. For example, the identity verification process S4 may be performed in parallel with the target person identification process S2. That is, the target person 2 is specified by using the biological information (voiceprint information) of the target person 2 in the target person identification process S2, and the biological information (voiceprint information) of the target person 2 is used in the identity verification process S4. In the case of the configuration in which the identity verification is performed, the identity verification process S4 can be performed in parallel with the target person identification process S2. Further, the identity verification process S4 may be performed at any time using the voice data acquired by the voice acquisition process S6 during and after the execution of the determination method.
 本人確認処理S4は、音声データ以外の情報を用いて行われてもよい。例えば、本人確認処理S4は、対象者2の指紋等の声紋以外の生体情報を用いて行われてもよいし、免許証番号等の生体情報以外の情報を用いて行われてもよい。また、本人確認処理S4は、対象者特定処理S2と同じ情報を用いて行われてもよい。 The identity verification process S4 may be performed using information other than voice data. For example, the identity verification process S4 may be performed using biometric information other than the voiceprint such as the fingerprint of the subject 2, or may be performed using information other than the biometric information such as the driver's license number. Further, the identity verification process S4 may be performed using the same information as the target person identification process S2.
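A minimal sketch of the voiceprint matching used in the identity verification process S4 is shown below. It assumes that both the pre-recorded control voice and the live call voice have already been reduced to fixed-length feature vectors (for example, averaged spectral features); the embedding method and the threshold value are assumptions, not specified in the description above.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two voiceprint feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_same_speaker(control_vec, live_vec, threshold=0.85):
    """Judge the degree of matching between the control voiceprint and the
    live voiceprint (the threshold is an illustrative assumption)."""
    return cosine_similarity(control_vec, live_vec) >= threshold
```

In practice the comparison could run repeatedly on the voice data acquired in S6, matching the statement above that S4 may be performed at any time during and after execution of the determination method.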
 提示部F4は、対象者特定部F1による対象者2の特定後、記憶部42に記憶されている自動音声の音声データ(自動音声信号)を、通信部41を介して通話装置3へ送信する。 After the target person identification unit F1 identifies the target person 2, the presentation unit F4 transmits the automatic voice data (automatic voice signal) stored in the storage unit 42 to the communication device 3 via the communication unit 41.
 提示部F4が送信させる自動音声は、対象者2に定型文の読み上げを促す指示内容を含む。指示内容は、例えば、対象者2に読み上げられるべき定型文と、この定型文の読み上げを行うべき期間の始点及び終点を示す音(例えば「ピ」音)と、を含んでもよい。 The automatic voice transmitted by the presentation unit F4 includes an instruction content that prompts the target person 2 to read a fixed phrase. The instruction content may include, for example, a fixed phrase to be read aloud to the subject 2 and a sound (for example, a “pi” sound) indicating the start point and the end point of the period in which the fixed phrase should be read aloud.
 指示内容は、異なる複数の定型文を含んでもよい。例えば、指示内容は、6種類の定型文を順次読み上げさせる内容を含んでもよい。定型文は、例えば、閉鎖子音と後続母音とからなる文字が、5以上含まれていてもよい。定型文の一例としては、例えば、「きたからきたかたたたきき」等の文がある。 The instruction content may include a plurality of different fixed phrases. For example, the instruction content may have the target person read out six types of fixed phrases in sequence. A fixed phrase may contain, for example, five or more characters each consisting of a stop consonant followed by a vowel. An example of such a fixed phrase is a sentence such as "きたからきたかたたたきき" (kita kara kita kata tataki ki).
 また、指示内容は、同一の定型文の複数回の読み上げを指示する内容を含んでもよい。この場合、指示内容は、定型文を読み上げるべき回数を含んでもよい。 Further, the instruction content may include a content instructing the reading of the same fixed phrase a plurality of times. In this case, the instruction content may include the number of times the fixed phrase should be read aloud.
 すなわち、本実施形態の判定方法は、定型文の読み上げを促す情報を対象者2に提示する提示処理S5を含む。また、提示処理S5は、対象者2に定型文の読み上げを促す自動音声を再生することを含む。 That is, the determination method of the present embodiment includes the presentation process S5 that presents the information prompting the reading of the fixed phrase to the target person 2. In addition, the presentation process S5 includes reproducing an automatic voice prompting the target person 2 to read a fixed phrase.
 音声取得部F5は、提示部F4によって送信された自動音声に応答して対象者2から発せられた音声を示すデータであって通話装置3から送信される音声データを、通信部41を介して取得する。すなわち、本実施形態の判定方法は、対象者2に定型文の読み上げを実行させ読み上げに係る音声データを取得する音声取得処理S6を含む。音声取得処理S6では、電気通信回線NT1を通じて、音声データが取得される。 The voice acquisition unit F5 acquires, via the communication unit 41, the voice data transmitted from the communication device 3, that is, data indicating the voice uttered by the target person 2 in response to the automatic voice transmitted by the presentation unit F4. That is, the determination method of the present embodiment includes the voice acquisition process S6 of having the target person 2 read out a fixed phrase and acquiring the voice data of that reading. In the voice acquisition process S6, the voice data is acquired through the telecommunication line NT1.
 記録部F6は、音声取得部F5で取得した音声データを、記憶部42に記録(録音)させる。すなわち、本実施形態の判定方法は、音声取得処理S6で取得された音声データを記録する記録処理S7を含む。 The recording unit F6 causes the storage unit 42 to record (record) the voice data acquired by the voice acquisition unit F5. That is, the determination method of the present embodiment includes the recording process S7 for recording the voice data acquired in the voice acquisition process S6.
 推定部F7は、記憶部42に記録された音声データ(音声取得部F5で取得した音声データ)を処理することで、対象者2の認知機能を推定する。すなわち、本実施形態の判定方法は、音声取得処理S6で取得された音声データに基づいて対象者2の認知機能を推定する推定処理S9を含む。 The estimation unit F7 estimates the cognitive function of the subject 2 by processing the voice data (voice data acquired by the voice acquisition unit F5) recorded in the storage unit 42. That is, the determination method of the present embodiment includes an estimation process S9 that estimates the cognitive function of the subject 2 based on the voice data acquired in the voice acquisition process S6.
 推定部F7は、例えば、音声データから抽出される特徴量を用いて、対象者2の認知機能を推定する。すなわち、本実施形態の判定方法は、音声データから特徴量を抽出する特徴量取得処理S8を含む。また、推定部F7によって実行される推定処理S9は、音声データから抽出される特徴量を用いて認知機能を推定することを含む。 The estimation unit F7 estimates the cognitive function of the subject 2 by using, for example, the feature amount extracted from the voice data. That is, the determination method of the present embodiment includes the feature amount acquisition process S8 for extracting the feature amount from the voice data. Further, the estimation process S9 executed by the estimation unit F7 includes estimating the cognitive function using the feature amount extracted from the voice data.
 本実施形態の判定システム10において、推定部F7は、属性取得部F2で取得した属性データを更に用いて、認知機能を推定する。すなわち、推定部F7によって実行される推定処理S9は、音声データに加えて、対象者2の属性に関する属性データを更に用いて認知機能を推定することを含む。 In the determination system 10 of the present embodiment, the estimation unit F7 further uses the attribute data acquired by the attribute acquisition unit F2 to estimate the cognitive function. That is, the estimation process S9 executed by the estimation unit F7 includes estimating the cognitive function by further using the attribute data related to the attributes of the subject 2 in addition to the voice data.
 本実施形態の判定システム10において、推定部F7は、学習済モデルM1を利用して対象者2の認知機能を推定する。本実施形態の判定システム10において、学習済モデルM1は、例えばロジスティック回帰モデルである。すなわち、推定部F7によって実行される推定処理S9では、機械学習により生成された学習済モデルM1を用いて、対象者2の認知機能の推定を行う。 In the determination system 10 of the present embodiment, the estimation unit F7 estimates the cognitive function of the subject 2 by using the learned model M1. In the determination system 10 of the present embodiment, the trained model M1 is, for example, a logistic regression model. That is, in the estimation process S9 executed by the estimation unit F7, the cognitive function of the subject 2 is estimated using the learned model M1 generated by machine learning.
 学習済モデルM1は、与えられた入力(特徴量)に対して、対象者2の認知機能の程度を示す値を出力するように設計されている。推定部F7は、音声データから得られた特徴量を学習済モデルM1に与え、これによって学習済モデルM1から得られた値に基づいて、対象者2の認知機能の程度を推定する。このような学習済モデルM1は、認知機能の程度を示す値と特徴量との関係を規定する学習用データ(データセット)を用いた教師あり学習により生成することができる。学習済モデルM1は、記憶部42に記憶されている。 The trained model M1 is designed to output a value indicating the degree of cognitive function of the subject 2 with respect to a given input (feature amount). The estimation unit F7 gives the feature amount obtained from the voice data to the trained model M1, and estimates the degree of the cognitive function of the subject 2 based on the value obtained from the trained model M1. Such a trained model M1 can be generated by supervised learning using learning data (data set) that defines the relationship between a value indicating the degree of cognitive function and a feature amount. The trained model M1 is stored in the storage unit 42.
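The description above states that the trained model M1 may be a logistic regression model that, given a feature vector, outputs a value indicating the degree of cognitive function. The following is a hedged sketch of such inference under that assumption: the weights, bias, and five-stage mapping are illustrative only (the example output "stage 2 of 5" appears later in the description), not the actual trained parameters.

```python
import math

def logistic_score(features, weights, bias):
    """Logistic regression inference: map a voice/attribute feature
    vector to a value in (0, 1) indicating degree of cognitive decline."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def to_stage(score, n_stages=5):
    """Map a score in [0, 1] to one of n_stages predetermined stages."""
    return min(int(score * n_stages) + 1, n_stages)
```

Such a model could be fitted beforehand by supervised learning on a data set pairing feature amounts with known cognitive function levels, as stated above.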
 本実施形態において、推定処理S9に用いられる特徴量は、音声データで示される対象者2の音声に含まれる音節における母音の第一フォルマント周波数又は第二フォルマント周波数に関する量を含み得る。 In the present embodiment, the feature amount used in the estimation process S9 may include an amount related to the first formant frequency or the second formant frequency of the vowel in the syllable included in the voice of the subject 2 indicated by the voice data.
 ここにおいて、第一フォルマント周波数とは、人から発せられる音声に含まれる周波数ピークのうちで最も周波数が小さいピークに対応する周波数である。第二フォルマント周波数とは、人から発せられる音声に含まれる周波数ピークのうちで二番目に周波数が小さいピークに対応する周波数である。 Here, the first formant frequency is the frequency corresponding to the lowest frequency peak among the frequency peaks included in the voice emitted from a person. The second formant frequency is a frequency corresponding to the second lowest frequency peak among the frequency peaks included in the voice emitted from a person.
 一例において、上記量は、音声データで示される対象者2の音声に含まれる複数の音節における複数の母音の、第一フォルマント周波数、第二フォルマント周波数、又は第一フォルマント周波数に対する第二フォルマント周波数の比のばらつきを含み得る。ばらつきは、例えば標準偏差である。 In one example, the above quantity may include the variation of the first formant frequency, the second formant frequency, or the ratio of the second formant frequency to the first formant frequency, of a plurality of vowels in a plurality of syllables included in the voice of the target person 2 indicated by the voice data. The variation is, for example, the standard deviation.
 一例において、上記量は、音声データで示される対象者2の音声に含まれる複数の音節における複数の母音の、第一フォルマント周波数又は第二フォルマント周波数の変化量を含み得る。 In one example, the above amount may include a change in the first formant frequency or the second formant frequency of a plurality of vowels in a plurality of syllables included in the voice of the subject 2 indicated by voice data.
 一例において、上記量は、音声データで示される対象者2の音声に含まれる複数の音節における複数の母音の第一フォルマント周波数又は第二フォルマント周波数の変化にかかる所要時間を含み得る。 In one example, the amount may include the time required to change the first formant frequency or the second formant frequency of a plurality of vowels in a plurality of syllables included in the voice of the subject 2 indicated by voice data.
 一例において、上記量は、音声データで示される対象者2の音声に含まれる複数の音節における複数の母音の第一フォルマント周波数又は第二フォルマント周波数の、所要時間に対する変化量の比である変化率を含み得る。 In one example, the above quantity may include a rate of change, that is, the ratio of the amount of change in the first formant frequency or the second formant frequency of a plurality of vowels in a plurality of syllables included in the voice of the target person 2 indicated by the voice data, to the time required for that change.
 一例において、上記量は、母音の第一フォルマント周波数に対する第二フォルマント周波数で形成される座標空間において、音声データで示される対象者2の音声に含まれる複数の音節における複数の母音の第一フォルマント周波数に対する第二フォルマント周波数の値をプロットした場合の、プロットされた複数の点の間の位置関係、又はそれらの点で形成される多角形の形状又は面積を含み得る。 In one example, the above quantity may include, when the values of the second formant frequency versus the first formant frequency of a plurality of vowels in a plurality of syllables included in the voice of the target person 2 indicated by the voice data are plotted in a coordinate space defined by the second formant frequency against the first formant frequency of a vowel, the positional relationship between the plotted points, or the shape or area of the polygon formed by those points.
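Three of the formant-based feature amounts described above can be sketched as follows, assuming the first/second formant frequencies (F1, F2) of each vowel have already been extracted from the voice data (the extraction method itself, e.g. LPC analysis, is outside this fragment and is not specified above).

```python
import math

def formant_std(values):
    """Variation (standard deviation) of a formant frequency, or of the
    F2/F1 ratio, across the vowels of a plurality of syllables."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def change_rate(f_start, f_end, duration_s):
    """Rate of change: amount of formant frequency change divided by the
    time required for that change."""
    return (f_end - f_start) / duration_s

def polygon_area(points):
    """Shoelace area of the polygon formed by the (F1, F2) points of the
    vowels plotted in the F1-F2 coordinate space."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0
```

A smaller polygon area in the F1-F2 space, for instance, would reflect reduced articulatory distinction between vowels.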
 また、推定処理S9に用いられる特徴量は、音声データで示される対象者2の音声に含まれる開音節における子音の音圧と当該子音に後続する母音の音圧との間の音圧差を含み得る。 Further, the feature amount used in the estimation process S9 may include the sound pressure difference between the sound pressure of a consonant in an open syllable included in the voice of the target person 2 indicated by the voice data and the sound pressure of the vowel following that consonant.
 推定処理S9に用いられる特徴量は、音声データで示される対象者2の音声に含まれる複数の開音節における、子音の音圧と当該子音に後続する母音の音圧との間の音圧差の、ばらつきを含み得る。 The feature amount used in the estimation process S9 may include the variation of the sound pressure difference between the sound pressure of a consonant and the sound pressure of the vowel following that consonant, across a plurality of open syllables included in the voice of the target person 2 indicated by the voice data.
 推定処理S9に用いられる特徴量は、定型文の読み上げに際し、対象者2が読み上げに要した総時間を含み得る。 The feature amount used in the estimation process S9 may include the total time required for the subject 2 to read aloud the fixed phrase.
 推定処理S9に用いられる特徴量は、同一の定型文の複数回の読み上げに際し、複数回の読み上げにそれぞれ要した時間の変化量を含み得る。 The feature amount used in the estimation process S9 may include the amount of change in the time required for each of the multiple readings of the same fixed phrase when the same fixed phrase is read a plurality of times.
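The remaining feature amounts above can likewise be sketched, assuming the syllable segmentation and per-segment sound pressure measurement (in dB) have already been performed; the representation of a syllable as a (consonant, vowel) pressure pair is an assumption of this sketch.

```python
def cv_pressure_diffs(syllables):
    """For each open syllable given as (consonant_db, vowel_db), return
    the sound pressure difference between the vowel and the consonant;
    the variation of this list is the feature amount described above."""
    return [vowel - consonant for consonant, vowel in syllables]

def reading_time_changes(times_s):
    """Changes in the time required by consecutive readings of the same
    fixed phrase (a slowdown appears as a positive value)."""
    return [t2 - t1 for t1, t2 in zip(times_s, times_s[1:])]
```

The total reading time feature is simply the elapsed time between the start and end cues of the read-aloud period, so it needs no separate helper here.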
 なお、本実施形態の判定方法を用いた認知機能の検査に用いられ得る対象者2の音声の特徴量については、特許第6337362号を参照されたい。 For the voice feature amounts of the target person 2 that can be used in the cognitive function test using the determination method of the present embodiment, refer to Japanese Patent No. 6337362.
 また、推定処理S9に用いられる特徴量は、属性取得部F2で取得された対象者2の属性を含み得る。 Further, the feature amount used in the estimation process S9 may include the attribute of the target person 2 acquired by the attribute acquisition unit F2.
 推定部F7は、対象者2の認知機能の推定結果を出力する。推定結果は、例えば、対象者2の認知機能の程度を示す数値で示される。推定部F7は、例えば、推定した認知機能の程度が、予め決められた複数段階のうちのどの段階に属するかを示す数値(例えば5段階中の2段階目等)を出力してもよい。 The estimation unit F7 outputs the estimation result of the cognitive function of the subject 2. The estimation result is indicated by, for example, a numerical value indicating the degree of cognitive function of the subject 2. The estimation unit F7 may output, for example, a numerical value (for example, the second stage out of the five stages) indicating which stage of the plurality of predetermined stages the estimated degree of cognitive function belongs to.
 出力部F8は、推定部F7の推定結果に基づく結果情報を示す推定結果信号を、通信部41を介して通話装置3及び第2サーバ5へ出力する。出力部F8が出力する結果情報は、推定部F7から出力された推定結果(例えば数値)そのものであってもよいし、別の形態であってもよい。出力部F8が出力する結果情報は、例えば、推定結果に基づいて対象者2が次に取るべき行動を示す情報(例えば、病院で医師による認知機能検査を受けることを促す情報)、注意事項(認知機能の改善を促す情報)等であってもよい。通話装置3は、第1サーバ4から推定結果信号を受け取ると、その推定結果を、通話部32により音声で出力する又は表示部33により画像で表示することで、対象者2に通知してもよい。 The output unit F8 outputs an estimation result signal indicating result information based on the estimation result of the estimation unit F7 to the communication device 3 and the second server 5 via the communication unit 41. The result information output by the output unit F8 may be the estimation result (for example, a numerical value) output from the estimation unit F7 itself, or may take another form. The result information output by the output unit F8 may be, for example, information indicating the action the target person 2 should take next based on the estimation result (for example, information prompting the target person to undergo a cognitive function examination by a doctor at a hospital), or cautionary information (information encouraging improvement of cognitive function). When the communication device 3 receives the estimation result signal from the first server 4, it may notify the target person 2 of the estimation result by outputting it as voice through the call unit 32 or by displaying it as an image on the display unit 33.
 図1に示すように、第2サーバ5は、通信部51と記憶部52と処理部53とを備えている。図2に示すように、第2サーバ5は、例えば運転免許センター1に設置される。 As shown in FIG. 1, the second server 5 includes a communication unit 51, a storage unit 52, and a processing unit 53. As shown in FIG. 2, the second server 5 is installed in, for example, the driver's license center 1.
 第2サーバ5は、判定方法の各処理(図3参照)のうち「記録処理S11」、「反映処理S12」を実行する。 The second server 5 executes the "recording process S11" and the "reflection process S12" among the processes of the determination method (see FIG. 3).
 通信部51は、通信インターフェースである。通信部51は、電気通信回線NT1に接続可能な通信インターフェースであり、電気通信回線NT1を通じた通信を行う機能を有する。第2サーバ5は、電気通信回線NT1を通じて第1サーバ4と通信可能である。 The communication unit 51 is a communication interface. The communication unit 51 is a communication interface that can be connected to the telecommunication line NT1 and has a function of performing communication through the telecommunication line NT1. The second server 5 can communicate with the first server 4 through the telecommunication line NT1.
 通信部51は、電気通信回線NT1を介して第1サーバ4へ信号を送信する。通信部51が第1サーバ4へ送信する信号には、例えば、上述の本人確認情報を示す信号等がある。また、通信部51は、電気通信回線NT1を介して第1サーバ4からの信号を受信する。通信部51が第1サーバ4から受信する信号には、例えば、上述の要求信号、推定結果信号等がある。 The communication unit 51 transmits a signal to the first server 4 via the telecommunication line NT1. The signal transmitted by the communication unit 51 to the first server 4 includes, for example, the above-mentioned signal indicating the identity verification information. Further, the communication unit 51 receives the signal from the first server 4 via the telecommunication line NT1. The signals received by the communication unit 51 from the first server 4 include, for example, the above-mentioned request signal, estimation result signal, and the like.
 記憶部52は、情報を記憶するための装置である。記憶部52は、ROM、RAM、EEPROM等を含み得る。記憶部52には、上述の情報データベースDB1が記憶される。 The storage unit 52 is a device for storing information. The storage unit 52 may include a ROM, RAM, EEPROM, and the like. The above-mentioned information database DB1 is stored in the storage unit 52.
 処理部53は、例えば、1以上のプロセッサ(マイクロプロセッサ)と1以上のメモリとを含むコンピュータシステムにより実現され得る。つまり、1以上のプロセッサが1以上のメモリに記憶された1以上のプログラム(アプリケーション)を実行することで、処理部53として機能する。 The processing unit 53 can be realized by, for example, a computer system including one or more processors (microprocessors) and one or more memories. That is, one or more processors execute one or more programs (applications) stored in one or more memories, thereby functioning as the processing unit 53.
 処理部53は、第2サーバ5の全体的な制御、すなわち、通信部51及び記憶部52の動作を制御するように構成される。また、図1に示すように、処理部53は、応答部F11と、登録部F12と、反映部F13とを備えている。なお、応答部F11、登録部F12及び反映部F13は、実体のある構成を示しているわけではなく、処理部53によって実現される機能を示している。 The processing unit 53 is configured to perform overall control of the second server 5, that is, to control the operations of the communication unit 51 and the storage unit 52. Further, as shown in FIG. 1, the processing unit 53 includes a response unit F11, a registration unit F12, and a reflection unit F13. Note that the response unit F11, the registration unit F12, and the reflection unit F13 do not represent substantive configurations, but rather functions realized by the processing unit 53.
 The response unit F11 makes a predetermined response according to a signal transmitted from the first server 4.
 For example, the response unit F11 identifies the subject 2 indicated by subject information in response to a signal indicating the subject information transmitted from the first server 4. The response unit F11 identifies the subject 2 by, for example, collating the received subject information against the information database DB1. The response unit F11 transmits information indicating the collation result (the identified subject 2) to the first server 4. In other words, the response unit F11 identifies the subject 2 in cooperation with the subject identification unit F1 of the first server 4. At this time, the response unit F11 also transmits attribute data concerning the attributes of the identified subject 2.
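The collation against the information database DB1 described above can be illustrated by the following minimal sketch. The record layout and the field names (`license_no`, `name`, `birth_date`, `attributes`) are hypothetical and are not part of the present disclosure.

```python
# Minimal sketch of collating received subject information against an
# information database (DB1). Field names are illustrative only.
def identify_subject(db, subject_info):
    """Return the matching subject record, or None if no match is found."""
    record = db.get(subject_info.get("license_no"))
    if record is None:
        return None
    # Require the self-reported birth date to agree with the stored record.
    if record["birth_date"] != subject_info.get("birth_date"):
        return None
    return record

db1 = {
    "123456789012": {"name": "Taro", "birth_date": "1950-04-01",
                     "attributes": {"age": 75}},
}

match = identify_subject(db1, {"license_no": "123456789012",
                               "birth_date": "1950-04-01"})
```

On a successful match, the record (including the attribute data to be returned to the first server 4) is available; an unknown license number or a mismatched birth date yields no match.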
 The response unit F11 also transmits the identity verification information in response to the request signal transmitted from the first server 4. As described above, the identity verification information is reference voice data, that is, voice data obtained by recording the voice of the subject 2 in advance. The reference voice data may be recorded, for example, when a driver's license is issued (newly granted) to the subject 2, when the driver's license was renewed in the past, or independently of the issuance or renewal of the driver's license.
 When the registration unit F12 receives the estimation result signal from the first server 4, it registers the estimation result of the cognitive function of the subject 2 indicated by the estimation result signal in the information database DB1. The estimation result is registered in the information database DB1 in association with the personal information of the corresponding subject 2. When an estimation result has already been registered in the information database DB1, the registration unit F12 may overwrite (update) the existing information with the new information, or may add the new information while keeping the existing information.
 Preferably, an expiration date is set for the estimation result of the cognitive function of the subject 2 registered in the information database DB1. The expiration date is not particularly limited, and may be, for example, one year, half a year, three months, one month, half a month, or one week.
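The registration behavior described above (appending new results while keeping existing ones, with an expiration date per result) can be sketched as follows. The roughly half-year validity period and the record layout are illustrative assumptions; the disclosure fixes neither.

```python
# Sketch of registering estimation results with an expiration date and a
# history (append rather than overwrite). Validity period is illustrative.
from datetime import date, timedelta

VALIDITY = timedelta(days=182)  # e.g., roughly half a year

def register_result(db, subject_id, level, today):
    """Append a cognitive-function estimation result for a subject."""
    entry = {"level": level, "recorded": today, "expires": today + VALIDITY}
    db.setdefault(subject_id, []).append(entry)
    return entry

def valid_results(db, subject_id, today):
    """Return only results still within their expiration date."""
    return [e for e in db.get(subject_id, []) if e["expires"] >= today]

db1 = {}
register_result(db1, "S2", level=4, today=date(2020, 1, 10))
register_result(db1, "S2", level=3, today=date(2020, 9, 1))
current = valid_results(db1, "S2", today=date(2020, 10, 1))
```

In this sketch the January result has expired by October, so only the September result remains usable by the reflection unit F13.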
 The reflection unit F13 reflects the estimation result of the cognitive function of the subject 2 registered in the information database DB1 in the procedure for issuing or renewing the driver's license of the subject 2 at the driver's license center 1, and in the determination of whether the license can be issued or renewed.
 For example, the reflection unit F13 varies the procedure the subject 2 follows when renewing the driver's license at the driver's license center 1 according to the estimated degree of cognitive function. For example, when the cognitive function of the subject 2 has declined (for example, when the numerical value indicating the degree of cognitive function is at the worst of five levels), the reflection unit F13 may require that a cognitive function test be conducted by a physician when the driver's license of this subject 2 is renewed. On the other hand, when there is no problem with the cognitive function of the subject 2 (for example, when the numerical value indicating the degree of cognitive function is at the best of the five levels), the reflection unit F13 may allow the cognitive function test to be omitted when this subject 2 visits the driver's license center 1 to renew the driver's license.
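The branching described above can be sketched as a simple mapping from the five-level result to a renewal procedure. The scale orientation (1 = worst, 5 = best) and the procedure names are assumptions for illustration only.

```python
# Sketch of varying the license-renewal procedure by estimated level.
# Five-level scale assumed: 1 = worst, 5 = best.
def renewal_procedure(level):
    if level <= 1:                       # worst of the five levels
        return "physician_cognitive_test_required"
    if level >= 5:                       # best of the five levels
        return "cognitive_test_omitted"
    return "standard_cognitive_test"
```

Intermediate levels fall through to the ordinary cognitive function test at the center.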
 The specific means by which the reflection unit F13 reflects the estimation result is not particularly limited. For example, the reflection unit F13 may notify the staff of the driver's license center 1 of information indicating, for each subject 2, whether a cognitive function test by a physician is required. For example, the reflection unit F13 may cause a display device connected to the second server 5 to display, for each subject 2, whether a cognitive function test by a physician is required. By referring to this information on the display device, the staff of the driver's license center 1 can determine whether a cognitive function test is required when the driver's license of each subject 2 is renewed.
 When a plurality of cognitive function estimation results are associated with the personal information of a given subject 2 in the information database DB1, the reflection unit F13 may reflect only the latest estimation result, or may reflect the plurality of estimation results that are still within their expiration dates.
 (2.2) Operation
 Next, the determination method of the present embodiment will be briefly described with reference to FIG. 3.
 A subject 2 who is required to take a cognitive function test when renewing a driver's license calls the first server 4 by telephone using the communication device 3, for example at home, before visiting the driver's license center 1 to renew the driver's license (call processing S1).
 When the first server 4 receives the telephone call from the communication device 3, it transmits to the communication device 3 an automatic voice message prompting input of subject information for identifying the subject 2. The subject 2 responds to the automatic voice message by entering subject information, such as a driver's license number, via the communication device 3, and the first server 4 acquires the subject information via the telecommunication line NT1. The first server 4 transmits the acquired subject information to the second server 5, and the second server 5 identifies the subject 2 by collating the received subject information against the information database DB1. The second server 5 transmits information indicating the identified subject 2 to the first server 4, and the first server 4 identifies the subject 2 by receiving this information from the second server 5 (subject identification processing S2). The second server 5 also transmits attribute data indicating the attributes of the identified subject 2 to the first server 4, and the first server 4 acquires the attribute data of the subject 2 (attribute acquisition processing S3).
 When the subject 2 has been identified, the first server 4 transmits a request signal to the second server 5 and acquires the identity verification information from the second server 5. After the subject 2 has been identified, the first server 4 uses the identity verification information during the call with the subject 2 to confirm that the caller using the communication device 3 is the subject 2 in person (identity verification processing S4).
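The disclosure does not fix how the caller's voice is compared with the pre-recorded reference voice data in the identity verification processing S4. One common approach, sketched below under that assumption, is to compare fixed-length voice feature vectors (for example, averaged spectral features) by cosine similarity against a threshold; the vectors and the threshold here are illustrative only.

```python
# Hypothetical sketch of identity verification processing S4: compare a
# feature vector from the caller's voice with one derived from the stored
# reference voice data, using cosine similarity with a threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_same_speaker(caller_vec, reference_vec, threshold=0.9):
    return cosine_similarity(caller_vec, reference_vec) >= threshold

reference = [0.8, 0.1, 0.3, 0.5]        # from the stored reference voice
caller_ok = [0.78, 0.12, 0.31, 0.49]    # close to the reference
caller_ng = [0.1, 0.9, 0.0, 0.2]        # a different speaker
```

A practical system would derive these vectors from the recorded audio with an established speaker-verification method rather than use raw hand-set values.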
 When the subject 2 has been identified (and it has been confirmed that the caller is the subject 2 in person), the first server 4 transmits to the communication device 3 an automatic voice message prompting the subject to read fixed phrases aloud (presentation processing S5). The first server 4 then acquires from the communication device 3 the voice data produced when the subject 2 reads the fixed phrases aloud (voice acquisition processing S6), and records the acquired voice data in the storage unit 42 (recording processing S7). The first server 4 extracts (acquires) feature quantities from the recorded voice data (feature quantity acquisition processing S8).
 The first server 4 estimates the cognitive function of the subject 2 using the feature quantities acquired in the feature quantity acquisition processing S8 and the attribute data acquired in the attribute acquisition processing S3 (cognitive function estimation processing (estimation processing) S9). The first server 4 outputs the estimation result of the cognitive function obtained in the estimation processing S9 to the communication device 3 and the second server 5 (output processing S10).
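The combination of voice feature quantities and attribute data in the estimation processing S9 can be sketched as below. The linear weighting and thresholds are an illustrative stand-in for the trained model M1; the actual model, weights, and feature set are not specified by this sketch.

```python
# Sketch of estimation processing S9: combine voice feature quantities with
# attribute data into a five-level result (1 = worst, 5 = best).
# Weights and thresholds are illustrative placeholders for the trained model.
def estimate_level(features, attributes):
    score = 100.0
    score -= 1.5 * features["reading_time_s"]       # slower reading lowers score
    score -= 40.0 * features["pressure_diff_var"]   # larger variation lowers score
    score -= 0.5 * max(0.0, attributes["age"] - 65) # age as an attribute input
    if score >= 70:
        return 5
    if score >= 55:
        return 4
    if score >= 40:
        return 3
    if score >= 25:
        return 2
    return 1
```

For instance, a fast, stable reading yields a high level, while a slow reading with large syllable-to-syllable variation yields a low one.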
 The communication device 3 notifies the subject 2 of the estimation result of the cognitive function obtained from the first server 4 via the call unit 32 or the display unit 33.
 The second server 5 records the estimation result of the cognitive function obtained from the first server 4 in the storage unit 52 (recording processing S11). The second server 5 then reflects the estimation result of the cognitive function of the subject 2 in the procedure for issuing or renewing the driver's license of the subject 2, or in the determination of whether the license can be issued or renewed (reflection processing S12).
 Thereafter, when the subject 2 visits the driver's license center 1 to renew the driver's license, the subject 2 can renew the driver's license according to a procedure in which the estimation result of the cognitive function obtained by the determination method has been reflected. Alternatively, when the estimation result of the cognitive function indicates that the subject is not suited to renewal of the driver's license, the staff of the driver's license center 1 may inform the subject 2 to that effect and urge the subject 2 to take a cognitive function test conducted by a physician.
 (2.3) Advantages
 As described above, according to the cognitive function determination method and determination system 10 of the present embodiment, the subject 2 is made to read fixed phrases aloud, and the voice data of the read-aloud speech is processed, whereby the cognitive function of the subject 2 is tested. The time required to perform this test (reading aloud and processing the fixed phrases) is about three minutes, even when the subject 2 is made to read aloud six kinds of fixed phrases, for example. In contrast, a conventional test method in which, for example, answers are written on a test sheet and then scored requires at least about 30 minutes, including the answering time and the scoring time. Therefore, using the determination method and determination system 10 of the present embodiment makes it possible to shorten the time required for the test compared with the conventional test method.
 Further, the subject 2 can take the test by the cognitive function determination method and determination system 10 of the present embodiment as long as the subject possesses at least the communication device 3. Therefore, the subject 2 can take the test at any place and, unlike with the conventional test method, does not need to travel to a test site. The determination method and determination system 10 of the present embodiment can thus reduce the burden on the subject 2 compared with the conventional test method.
 In short, the cognitive function determination method and determination system 10 of the present embodiment make it possible to reduce the burden at the time of license renewal.
 Further, not only when a driver's license is renewed but also when a cognitive function test is performed at the time of issuing a driver's license, using the cognitive function determination method and determination system 10 of the present embodiment makes it possible to reduce the burden at the time of license issuance.
 (3) Modifications
 The embodiments of the present disclosure are not limited to the embodiment described above. The above embodiment can be modified in various ways depending on the design and the like, as long as the object of the present disclosure can be achieved. Functions equivalent to those of the cognitive function determination method according to the above embodiment may also be embodied in a computer program, a non-transitory recording medium on which the computer program is recorded, or the like.
 A (computer) program according to one aspect is a program for causing one or more processors to execute the determination method of the above-described embodiment.
 Modifications of the above-described embodiment are listed below. The modifications described below can be applied in combination as appropriate.
 The cognitive function determination system 10 of the present disclosure includes a computer system in, for example, the communication device 3, the first server 4, and the second server 5. The computer system mainly includes a processor and a memory as hardware. The functions of the communication device 3, the first server 4, and the second server 5 are realized by the processor executing a program recorded in the memory of the computer system. The program may be recorded in advance in the memory of the computer system, may be provided through a telecommunication line, or may be recorded and provided on a non-transitory recording medium readable by the computer system, such as a memory card, an optical disc, or a hard disk drive. The processor of the computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large-scale integrated circuit (LSI). Integrated circuits such as ICs and LSIs are referred to differently depending on the degree of integration, and include integrated circuits called system LSIs, VLSIs (Very Large Scale Integration), or ULSIs (Ultra Large Scale Integration). Furthermore, an FPGA (Field-Programmable Gate Array) programmed after the LSI is manufactured, or a logic device in which the junction relationships inside the LSI or the circuit partitions inside the LSI can be reconfigured, can also be adopted as the processor. The plurality of electronic circuits may be integrated on one chip or distributed over a plurality of chips. The plurality of chips may be integrated in one device or distributed over a plurality of devices. The computer system referred to here includes a microcontroller having one or more processors and one or more memories. Therefore, the microcontroller is also composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.
 Further, it is not essential to the determination system 10 that the plurality of functions of each of the first server 4 and the second server 5 be integrated in one housing; the components of each of the first server 4 and the second server 5 may be distributed over a plurality of housings. Furthermore, at least some of the functions of the determination system 10, for example, some of the functions of the first server 4 and the second server 5, may be realized by a cloud (cloud computing) or the like.
 Conversely, in the above-described embodiment, at least some of the functions of the determination system 10 that are distributed over a plurality of devices may be integrated in one housing. For example, some of the functions of the determination system 10 that are distributed between the first server 4 and the second server 5 may be integrated in one housing.
 In one modification, in the presentation processing S5, information prompting the subject 2 to read the fixed phrases aloud may be given visually. For example, in the presentation processing S5, text, images, video, or the like prompting the subject 2 to read the fixed phrases aloud may be displayed on the display unit 33 of the communication device 3.
 In one modification, the content presented in the presentation processing S5 (the content of the automatic voice message, text, images, and the like) may include questions whose answers are uniquely determined. The questions may relate to the driver's license. Examples of such questions include a question asking the color of a stop sign, a question displaying an image of a speedometer on the display unit 33 and asking the speed indicated by the meter, and a question displaying an image of a license plate on the display unit 33 and asking the place name, number, and the like. The image displayed on the display unit 33 may be a moving image, for example, an image whose size gradually changes (gradually increases). Alternatively, an image may be temporarily displayed on the display unit 33 so that the subject 2 memorizes its content, after which the image is hidden and a question about the content shown in the image is presented to the subject 2. That is, in the presentation processing S5, questions related to the driver's license may be presented to the subject 2, and these questions may prompt the subject 2 to read aloud fixed phrases as answers to the questions.
 In one modification, the content presented in the presentation processing S5 may further include questions whose answers are not uniquely determined. Examples of such questions include questions asking the name, age, educational history, and the like of the subject 2.
 In one modification, the feature quantities used in the estimation processing S9 may further include information on operation inputs of the subject 2 to the communication device 3 (for example, the speed of operations on a touch panel). That is, the determination method may further include operation acceptance processing for accepting operation inputs from the subject 2 to the communication device 3. In the estimation processing S9, the cognitive function of the subject 2 may then be estimated by further using the results of the operation inputs in addition to the voice data acquired in the voice acquisition processing S6.
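This modification can be sketched as deriving an operation-speed feature from touch-input timestamps and merging it with the voice features. The feature name `mean_tap_interval_s` and the timestamp values are illustrative assumptions.

```python
# Sketch of the operation-acceptance modification: derive an operation-speed
# feature from touch-input timestamps and merge it with the voice features.
def operation_feature(tap_times_s):
    """Mean interval between successive touch operations, in seconds."""
    intervals = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
    return sum(intervals) / len(intervals)

voice_features = {"reading_time_s": 30.0}
taps = [0.0, 1.2, 2.2, 3.6]   # timestamps of touch operations (seconds)
features = dict(voice_features,
                mean_tap_interval_s=operation_feature(taps))
```

The merged dictionary can then be passed to the estimation processing alongside the attribute data.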
 In one modification, the information database DB1 including the personal information may be recorded in the storage unit 42 of the first server 4. In this case, the subject identification processing S2, the attribute acquisition processing S3, and the identity verification processing S4 can be executed by the first server 4 alone.
 In one modification, in the attribute acquisition processing S3, the subject 2 may be made to input the attribute data using the communication device 3.
 In one modification, the attribute data of the subject 2 need not be used as a feature quantity in the estimation processing S9.
 In one modification, the communication device 3 is not limited to a device that the subject 2 can carry. The communication device 3 may be, for example, a so-called fixed-line telephone installed in the dwelling of the subject 2, or a public telephone installed so that it can be used by an unspecified number of people including the subject 2. Further, the communication device 3 is not limited to a telephone, and need only include call means, such as a microphone and a speaker, and the communication unit 31. The communication device 3 may be installed at the driver's license center 1. That is, the subject 2 may take the test by the determination method of the present disclosure at the driver's license center 1.
 In one modification, the identity verification processing S4 is not essential to the determination method and may be omitted. Further, the determination method may include proxy-test deterrence processing instead of, or in addition to, the identity verification processing S4. The proxy-test deterrence processing is processing for deterring a person other than the subject 2 from taking the test by the determination method on behalf of the subject 2. Note that the identity verification processing S4 also functions as proxy-test deterrence processing.
 In one example, the proxy-test deterrence processing may include displaying a predetermined caution message on the display unit 33 of the communication device 3. An example of such a caution message is a sentence such as "During the test, voiceprint-based personal authentication is used to confirm at all times that the same person is speaking."
 In one example, when the communication device 3 includes an imaging unit such as a camera, the proxy-test deterrence processing may include displaying an image of the subject 2 captured by the communication device 3 on the display unit 33 of the communication device 3. In this case, identity verification of the subject 2 based on image data obtained by processing the captured image may or may not actually be performed. Further, in this case, the determination method may include cancellation processing for canceling the test when the subject 2 moves out of the imaging range of the imaging unit.
 In one example, the proxy-test deterrence processing may include monitoring by a person (for example, a person other than the staff of the driver's license center 1). For example, when notices of driver's license renewal are delivered by mail, the proxy-test deterrence processing may include monitoring by a post office employee who delivers the renewal notice.
 In one modification, the identity verification processing S4 may use a second personal identification number (one-time password) used when renewing the driver's license.
 In one modification, the subject 2 need not be notified of the estimation result immediately; for example, the subject 2 may be able to view the estimation result by accessing, from the communication device 3, a predetermined server connected to the telecommunication line NT1. Further, the notification of the estimation result may be made by means other than the communication device 3 (for example, by mail).
 (4) Summary
 As described above, the cognitive function determination method of the first aspect is used when providing a service to a subject (2) at a driver's license center (1) that performs at least one of the services of issuing and renewing driver's licenses. The determination method includes voice acquisition processing (S6), estimation processing (S9), and reflection processing (S12). In the voice acquisition processing (S6), the subject (2) is made to read a fixed phrase aloud, and voice data of the reading is acquired. In the estimation processing (S9), the cognitive function of the subject (2) is estimated based on the voice data acquired in the voice acquisition processing (S6). In the reflection processing (S12), the result of the estimation processing (S9) is reflected in the procedure for providing the service to the subject (2) at the driver's license center (1), or in the determination of whether the service can be provided.
 According to this aspect, it is possible to reduce the burden at the time of license renewal.
 In the cognitive function determination method of the second aspect, in the first aspect, the estimation processing (S9) estimates the cognitive function using feature quantities extracted from the voice data.
 According to this aspect, it is possible to reduce the burden at the time of license renewal.
 In the cognitive function determination method of the third aspect, in the second aspect, the feature quantities include one or more selected from the following group. The group includes a quantity relating to the first formant frequency or the second formant frequency of a vowel in a syllable included in the voice of the subject (2) represented by the voice data. The group includes the sound pressure difference between the sound pressure of a consonant in an open syllable included in the voice of the subject (2) represented by the voice data and the sound pressure of the vowel following the consonant. The group includes the variation of the sound pressure differences, each between the sound pressure of a consonant and the sound pressure of the vowel following the consonant, in a plurality of open syllables included in the voice of the subject (2) represented by the voice data. The group includes the total time required to read the fixed phrase aloud. The group includes the amount of change in the times respectively required for a plurality of readings of the fixed phrase.
 According to this aspect, it is possible to improve the accuracy of the determination result obtained by the determination method.
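Three of the time- and pressure-based feature quantities listed in the third aspect can be sketched directly. Using the population variance as the measure of "variation" is one choice among several; the input values below are illustrative.

```python
# Sketch of three listed feature quantities: total reading time, change in
# reading time across repeated readings, and the variation (here, population
# variance) of consonant-to-vowel sound-pressure differences in open syllables.
def total_reading_time(times_s):
    return sum(times_s)

def reading_time_change(times_s):
    """Change from the first reading to the last, in seconds."""
    return times_s[-1] - times_s[0]

def pressure_diff_variation(diffs_db):
    """Population variance of per-syllable sound-pressure differences (dB)."""
    mean = sum(diffs_db) / len(diffs_db)
    return sum((d - mean) ** 2 for d in diffs_db) / len(diffs_db)

times = [10.0, 9.0, 11.0]   # seconds per reading of the fixed phrase
diffs = [6.0, 2.0, 4.0]     # dB difference per open syllable
```

The formant-frequency features in the group would additionally require spectral analysis of the recorded audio, which is outside the scope of this sketch.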
 The cognitive function determination method of the fourth aspect, in any one of the first to third aspects, further includes a recording process (S7) of recording the voice data acquired in the voice acquisition process (S6).
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the fifth aspect, in any one of the first to fourth aspects, the estimation process (S9) estimates the cognitive function using a trained model (M1) generated by machine learning.
 According to this aspect, the accuracy of the determination result obtained by the determination method can be improved.
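The fifth aspect leaves the form of the trained model (M1) open. As one hedged illustration (not the model described in the application), a nearest-centroid classifier over extracted feature vectors can stand in for a machine-learned estimator: training computes one mean vector per class, and estimation assigns the class of the nearest centroid. The class labels below are invented for the example.

```python
import math

def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs.
    Returns a mapping label -> mean feature vector (the 'trained model')."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def estimate(model, vec):
    """Assign the label of the centroid nearest to the feature vector."""
    return min(model, key=lambda label: math.dist(model[label], vec))
```

A production system would more plausibly use a discriminative model trained on labeled voice features, but the train/estimate split shown here mirrors the structure the aspect describes.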
 In the cognitive function determination method of the sixth aspect, in any one of the first to fifth aspects, the voice acquisition process (S6) acquires the voice data through a telecommunications line (NT1).
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the seventh aspect, in any one of the first to sixth aspects, the estimation process (S9) estimates the cognitive function by further using, in addition to the voice data acquired in the voice acquisition process (S6), attribute data relating to an attribute of the target person (2).
 According to this aspect, the accuracy of the determination result obtained by the determination method can be improved.
 The cognitive function determination method of the eighth aspect, in any one of the first to seventh aspects, further includes a presentation process (S5) of presenting, to the target person, information prompting the reading of the fixed phrase.
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the ninth aspect, in the eighth aspect, the presentation process (S5) reproduces an automatic voice prompting the target person (2) to read the fixed phrase aloud.
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the tenth aspect, in the eighth or ninth aspect, the presentation process (S5) visually gives the target person (2) information prompting the reading of the fixed phrase.
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the eleventh aspect, in any one of the eighth to tenth aspects, the presentation process (S5) presents a question related to the driver's license to the target person (2). The question includes content prompting the target person (2) to read the fixed phrase aloud as an answer to the question.
 According to this aspect, the burden at the time of license renewal can be reduced.
 The cognitive function determination method of the twelfth aspect, in any one of the first to eleventh aspects, further includes an identity verification process (S4) for confirming that the target person (2) is who the target person claims to be.
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the thirteenth aspect, in the twelfth aspect, the identity verification process (S4) is performed using biometric information of the target person (2).
 According to this aspect, the accuracy of the identity verification can be improved.
 In the cognitive function determination method of the fourteenth aspect, in the thirteenth aspect, the biometric information includes voiceprint information of the target person (2).
 According to this aspect, the accuracy of the identity verification can be improved.
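Voiceprint-based identity verification, as in the thirteenth and fourteenth aspects, is commonly realized by comparing a speaker embedding from the new recording against an enrolled one. The sketch below assumes such fixed-length embeddings are produced elsewhere; the similarity measure (cosine) and the threshold value are illustrative choices, not details from the application.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / (norm + 1e-12)  # offset guards against zero vectors

def verify_voiceprint(enrolled, probe, threshold=0.8):
    """True if the probe embedding is close enough to the enrolled voiceprint.
    The 0.8 threshold is an assumption for the example."""
    return cosine_similarity(enrolled, probe) >= threshold
```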
 The cognitive function determination method of the fifteenth aspect, in any one of the first to fourteenth aspects, further includes a target person identification process (S2) for identifying the target person (2).
 According to this aspect, the burden at the time of license renewal can be reduced.
 In the cognitive function determination method of the sixteenth aspect, in the fifteenth aspect, the target person identification process (S2) is performed using biometric information of the target person (2).
 According to this aspect, the accuracy of identifying the target person (2) can be improved.
 The cognitive function determination method of the seventeenth aspect, in any one of the first to sixteenth aspects, further includes an operation acceptance process of receiving an operation input from the target person (2). The estimation process (S9) estimates the cognitive function by further using a result of the operation input in addition to the voice data acquired in the voice acquisition process (S6).
 According to this aspect, the accuracy of the determination result obtained by the determination method can be improved.
 The program of the eighteenth aspect is a program for causing one or more processors to execute the cognitive function determination method of any one of the first to seventeenth aspects.
 According to this aspect, the burden at the time of license renewal can be reduced.
 The cognitive function determination system (10) of the nineteenth aspect is used in providing a service to a target person (2) at a driver's license center (1) that performs at least one of issuance and renewal of a driver's license. The determination system (10) includes a voice acquisition unit (F5), an estimation unit (F7), and a reflection unit (F13). The voice acquisition unit (F5) causes the target person (2) to read a fixed phrase aloud and acquires voice data of the reading. The estimation unit (F7) estimates the cognitive function of the target person (2) based on the voice data acquired by the voice acquisition unit (F5). The reflection unit (F13) reflects the estimation result of the estimation unit (F7) in the procedure for providing the service to the target person (2) at the driver's license center (1) or in the determination of whether the service can be provided.
 According to this aspect, the burden at the time of license renewal can be reduced.
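The three units of the nineteenth aspect form a simple pipeline: voice acquisition (F5) feeds estimation (F7), whose result the reflection step (F13) folds into the renewal procedure. A minimal sketch of that flow follows; the score threshold and the routing outcomes are illustrative assumptions, not values from the application.

```python
def reflect(estimated_score, threshold=0.5):
    """Reflection step (F13): route the license-renewal procedure
    according to the estimated cognitive-function score.
    Threshold and outcome strings are assumptions for the example."""
    if estimated_score >= threshold:
        return "proceed with standard renewal procedure"
    return "refer to additional in-person cognitive screening"

def run_determination(acquire_voice, estimate, target_person):
    """Minimal pipeline: acquisition (F5) -> estimation (F7) -> reflection (F13).
    acquire_voice and estimate are injected so any acquisition channel
    (e.g. a telecommunications line) and any estimator can be plugged in."""
    voice_data = acquire_voice(target_person)
    return reflect(estimate(voice_data))
```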
 1 Driver's license center
 2 Target person
 10 Determination system
 F5 Voice acquisition unit
 F7 Estimation unit
 F13 Reflection unit
 S2 Target person identification process
 S4 Identity verification process
 S5 Presentation process
 S6 Voice acquisition process
 S7 Recording process
 S9 Estimation process
 S12 Reflection process
 M1 Trained model
 NT1 Telecommunications line

Claims (19)

  1.  A cognitive function determination method used in providing a service to a target person at a driver's license center that performs at least one of issuance and renewal of a driver's license, the method comprising:
     a voice acquisition process of causing the target person to read a fixed phrase aloud and acquiring voice data of the reading;
     an estimation process of estimating the cognitive function of the target person based on the voice data acquired in the voice acquisition process; and
     a reflection process of reflecting a result of the estimation process in a procedure for providing the service to the target person at the driver's license center or in a determination of whether the service can be provided.
  2.  The cognitive function determination method according to claim 1, wherein, in the estimation process, the cognitive function is estimated using feature quantities extracted from the voice data.
  3.  The cognitive function determination method according to claim 2, wherein the feature quantities include one or more selected from the group consisting of:
     a quantity relating to the first formant frequency or the second formant frequency of a vowel in a syllable contained in the voice of the target person represented by the voice data;
     a sound pressure difference between the sound pressure of a consonant in an open syllable contained in the voice of the target person and the sound pressure of a vowel following the consonant;
     a variation of the sound pressure differences between the sound pressures of consonants and the sound pressures of the following vowels across a plurality of open syllables contained in the voice of the target person;
     a total time required for the reading; and
     an amount of change in the times respectively required for a plurality of readings of the fixed phrase.
  4.  The cognitive function determination method according to any one of claims 1 to 3, further comprising a recording process of recording the voice data acquired in the voice acquisition process.
  5.  The cognitive function determination method according to any one of claims 1 to 4, wherein, in the estimation process, the cognitive function is estimated using a trained model generated by machine learning.
  6.  The cognitive function determination method according to any one of claims 1 to 5, wherein, in the voice acquisition process, the voice data is acquired through a telecommunications line.
  7.  The cognitive function determination method according to any one of claims 1 to 6, wherein, in the estimation process, the cognitive function is estimated by further using, in addition to the voice data acquired in the voice acquisition process, attribute data relating to an attribute of the target person.
  8.  The cognitive function determination method according to any one of claims 1 to 7, further comprising a presentation process of presenting, to the target person, information prompting the reading of the fixed phrase.
  9.  The cognitive function determination method according to claim 8, wherein, in the presentation process, an automatic voice prompting the target person to read the fixed phrase aloud is reproduced.
  10.  The cognitive function determination method according to claim 8 or 9, wherein, in the presentation process, the information prompting the target person to read the fixed phrase aloud is given visually.
  11.  The cognitive function determination method according to any one of claims 8 to 10, wherein, in the presentation process, a question related to the driver's license is presented to the target person, and
     the question includes content prompting the target person to read the fixed phrase aloud as an answer to the question.
  12.  The cognitive function determination method according to any one of claims 1 to 11, further comprising an identity verification process for confirming that the target person is who the target person claims to be.
  13.  The cognitive function determination method according to claim 12, wherein the identity verification process is performed using biometric information of the target person.
  14.  The cognitive function determination method according to claim 13, wherein the biometric information includes voiceprint information of the target person.
  15.  The cognitive function determination method according to any one of claims 1 to 14, further comprising a target person identification process for identifying the target person.
  16.  The cognitive function determination method according to claim 15, wherein the target person identification process is performed using biometric information of the target person.
  17.  The cognitive function determination method according to any one of claims 1 to 16, further comprising an operation acceptance process of receiving an operation input from the target person,
     wherein the estimation process estimates the cognitive function by further using a result of the operation input in addition to the voice data acquired in the voice acquisition process.
  18.  A program for causing one or more processors to execute the cognitive function determination method according to any one of claims 1 to 17.
  19.  A cognitive function determination system used in providing a service to a target person at a driver's license center that performs at least one of issuance and renewal of a driver's license, the system comprising:
     a voice acquisition unit that causes the target person to read a fixed phrase aloud and acquires voice data of the reading;
     an estimation unit that estimates the cognitive function of the target person based on the voice data acquired by the voice acquisition unit; and
     a reflection unit that reflects an estimation result of the estimation unit in a procedure for providing the service to the target person at the driver's license center or in a determination of whether the service can be provided.
PCT/JP2020/029682 2019-08-05 2020-08-03 Determination method for cognitive function, program, and determination system for cognitive function WO2021024987A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021537304A JP7479013B2 (en) 2019-08-05 2020-08-03 Method, program, and system for assessing cognitive function

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-143998 2019-08-05
JP2019143998 2019-08-05

Publications (1)

Publication Number Publication Date
WO2021024987A1 true WO2021024987A1 (en) 2021-02-11

Family

ID=74502693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/029682 WO2021024987A1 (en) 2019-08-05 2020-08-03 Determination method for cognitive function, program, and determination system for cognitive function

Country Status (2)

Country Link
JP (1) JP7479013B2 (en)
WO (1) WO2021024987A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005182241A (en) * 2003-12-17 2005-07-07 Ytb:Kk Telephone booking system and automatic booking schedule managing method by speech recognition function
US20120190001A1 (en) * 2011-01-25 2012-07-26 Hemisphere Centre for Mental Health & Wellness Inc. Automated cognitive testing methods and applications therefor
JP2015219731A (en) * 2014-05-19 2015-12-07 吉田 一雄 Inspection system
JP2018050847A (en) * 2016-09-27 2018-04-05 パナソニックIpマネジメント株式会社 Cognitive function evaluation apparatus, cognitive function evaluation method, and program
JP2019083902A (en) * 2017-11-02 2019-06-06 パナソニックIpマネジメント株式会社 Cognitive function evaluation apparatus and cognitive function evaluation system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000028724A (en) * 1998-07-14 2000-01-28 Hitachi Ltd System and method for controlling radiation exposure dose
JP6793932B2 (en) 2016-05-17 2020-12-02 公立大学法人会津大学 An identification / reaction measuring device for measuring the identification / reaction function of a subject, and a program for executing and controlling the measurement of the identification / reaction function of a subject.
JP6927714B2 (en) 2017-02-28 2021-09-01 パイオニア株式会社 Control device
JP6884605B2 (en) 2017-03-10 2021-06-09 パイオニア株式会社 Judgment device
JP6537005B1 (en) 2018-10-31 2019-07-03 日本テクトシステムズ株式会社 Test control system, method and program for cognitive function test

Also Published As

Publication number Publication date
JPWO2021024987A1 (en) 2021-02-11
JP7479013B2 (en) 2024-05-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20850416

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021537304

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20850416

Country of ref document: EP

Kind code of ref document: A1