WO2022085474A1 - Information processing method - Google Patents

Information processing method

Info

Publication number
WO2022085474A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
level
information
answer
question
Prior art date
Application number
PCT/JP2021/037243
Other languages
English (en)
Japanese (ja)
Inventor
洋 矢羽田
孝啓 西
正真 遠間
敏康 杉尾
ジャン ジャスパー ヴァン デン バーグ
デイビッド マイケル デュフィー
バーナデット エリオット ボウマン
Original Assignee
パナソニックIpマネジメント株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2020175954A
Application filed by パナソニックIpマネジメント株式会社
Publication of WO2022085474A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers

Definitions

  • This disclosure relates to an information processing method in an information providing system including a device for communicating with a user.
  • Patent Document 1 discloses a method in which, when communicating, the knowledge level of the sender user is compared with the knowledge level of the recipient user, and when the recipient's knowledge level is lower than the sender's, an event for which the recipient's knowledge level is highest is selected, and the quantitative value of the degree of the input event is converted into the quantitative value of the degree of the selected event before transmission to the recipient user.
  • In an information providing system including a device that communicates with a user, it is desirable to realize communication adapted to the characteristics of the user.
  • An information processing method according to one aspect of the present disclosure is an information processing method in an information providing system including a device for communicating with a user, wherein the device includes a microphone and a speaker. The method includes: acquiring first voice information, acquired by the microphone of the device and including a first question from the user, together with a device ID that identifies the device; acquiring, based on the device ID, communication policy information including a hearing level corresponding to the user, the hearing level being associated with the device ID in the information providing system; generating first answer information indicating a first answer to the first question; specifying an output format corresponding to the hearing level; and outputting the first answer information to the speaker of the device.
  • The output format includes a volume, and the volume when the hearing level is a first level is higher than the volume when the hearing level is a second level higher than the first level.
  • According to the present disclosure, in an information providing system including a device for communicating with a user, communication according to the characteristics of the user can be realized.
  • An example of a table used when the user's language level is subdivided by language and managed per language.
  • A sequence diagram showing an example of performing the initial setting of communication.
  • A sequence diagram showing an example of a response method for answering a question from a user.
  • A sequence diagram showing an example of estimating the user's degree of understanding from communication and intervening with support as appropriate.
  • The communication support device according to the present disclosure is an information communication terminal that communicates with a user while continuously measuring the user's language level, knowledge level, visual level, and/or auditory level.
  • A recommended method (policy) for smooth communication with the user, determined from these measurements, is recorded. When the sensor of the information communication terminal detects a first communication newly received by the user, the first communication is compared with the recommended method (policy); when it is determined that the first communication does not match the recommended method (policy), an individual communication for the user is generated by changing at least one of the language expression of the first communication, the knowledge it presupposes, the display method of its character information, and the output method of its voice information, and the individual communication is transmitted to the user.
  • Thus, even when the user lacks the vision to recognize the character information contained in a communication and/or the hearing to recognize its voice information, the smooth communication normally performed between the user and the information communication terminal can be maintained: by simplifying the language expression, simplifying the prerequisite knowledge, displaying character information in a form that is easier to recognize visually, and outputting voice information in a form that is easier to hear, the user's understanding of the communication can be supported according to the user's abilities.
  • An information processing method according to one aspect of the present disclosure is an information processing method in an information providing system including a device for communicating with a user, wherein the device includes a microphone and a speaker. The method includes acquiring first voice information, acquired by the microphone of the device and including a first question from the user, together with a device ID for identifying the device, and acquiring, based on the device ID, communication policy information including a hearing level corresponding to the user.
  • The hearing level governs the output format output from the speaker, is set for the user via the user's information communication terminal or the device, and is set in correspondence with a user ID that identifies the user. The user ID is associated with the device ID in the information providing system. The method further includes generating first answer information indicating a first answer to the first question, specifying an output format corresponding to the hearing level, and outputting the first answer information to the speaker of the device.
  • The output format includes a volume, and the volume when the hearing level is a first level is higher than the volume when the hearing level is a second level higher than the first level.
  • Hereinafter, the first level and the second level of the auditory level may be referred to as a first auditory level and a second auditory level, respectively.
  • According to this configuration, when a user asks the device a question, the answer can be output from the speaker of the device in an output format according to the user's hearing level. As a result, the answer can be provided in an output format that the user can hear more easily than when answers are provided in a uniform output format.
  • The output format may further include at least one of speech speed and clarity.
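  • As a rough illustration of the mapping just described, the following Python sketch derives an output format (volume, speech speed, clarity) from a hearing level on the 1 to 5 scale. Only the ordering (a lower hearing level yields louder, slower, clearer speech) comes from the disclosure; the concrete numbers and the linear interpolation are assumptions.

```python
from dataclasses import dataclass

@dataclass
class OutputFormat:
    volume_db: float      # playback volume in dB
    speech_wpm: int       # speech speed in words per minute
    clarity: int          # 1 (standard) .. 5 (most articulated)

def output_format_for_hearing_level(hearing_level: int) -> OutputFormat:
    """Lower hearing levels get louder, slower, clearer speech.

    The concrete numbers are illustrative assumptions; the disclosure
    only fixes the ordering (volume at a first level is higher than at
    a second, higher level).
    """
    if not 1 <= hearing_level <= 5:
        raise ValueError("hearing level is defined on a 1-5 scale")
    volume = 80 - (hearing_level - 1) * 5   # 80 dB at level 1 .. 60 dB at level 5
    speed = 60 + (hearing_level - 1) * 25   # 60 wpm at level 1 .. 160 wpm at level 5
    clarity = 6 - hearing_level             # 5 at level 1 .. 1 at level 5
    return OutputFormat(volume_db=volume, speech_wpm=speed, clarity=clarity)
```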
  • An information processing method according to another aspect of the present disclosure is an information processing method in an information providing system including a device for communicating with a user, wherein the device includes a microphone and a display. The method includes acquiring first voice information, acquired by the microphone of the device and including a first question from the user, together with a device ID for identifying the device, and acquiring, based on the device ID, communication policy information including a visual level corresponding to the user.
  • The visual level governs the output format output from the display, is set for the user via the user's information communication terminal or the device, and is set in correspondence with a user ID that identifies the user. The user ID is associated with the device ID in the information providing system. The method further includes generating first answer information indicating a first answer to the first question, specifying an output format corresponding to the visual level, and outputting the first answer information to the display of the device.
  • The output format includes a display size of characters, and the display size of the characters when the visual level is a first level is larger than the display size of the characters when the visual level is a second level higher than the first level.
  • Hereinafter, the first level and the second level of the visual level may be referred to as a first visual level and a second visual level, respectively.
  • According to this configuration, when a user asks the device a question, the answer can be output from the display of the device in an output format according to the user's visual level. As a result, the answer can be provided in an output format that the user can view more easily than when answers are provided in a uniform output format.
  • The output format may further include at least one of character edging, character color scheme, and character arrangement.
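  • A parallel sketch for the display side: a visual level on the 1 to 5 scale is mapped to a character display size, edging, and color scheme. Again, only the ordering (larger characters at a lower visual level) comes from the disclosure; the point sizes and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DisplayFormat:
    font_pt: int          # character display size
    edging: bool          # add a contrasting border around glyphs
    high_contrast: bool   # high-contrast color scheme

def display_format_for_visual_level(visual_level: int) -> DisplayFormat:
    """Lower visual levels get larger, edged, high-contrast characters.

    Point sizes and thresholds are illustrative assumptions.
    """
    if not 1 <= visual_level <= 5:
        raise ValueError("visual level is defined on a 1-5 scale")
    return DisplayFormat(
        font_pt=36 - (visual_level - 1) * 6,   # 36 pt at level 1 .. 12 pt at level 5
        edging=visual_level <= 2,
        high_contrast=visual_level <= 3,
    )
```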
  • The communication policy information may further include a knowledge level corresponding to the user. The knowledge level is a response level set for the user via the user's information communication terminal or the device, and is set in correspondence with the user ID. The first answer information may be generated according to the knowledge level, based on the first voice information and the communication policy information.
  • According to this configuration, an answer according to the user's knowledge level can be output from the speaker of the device. This allows the user to be provided with a personalized answer created for the user. As a result, it is possible to provide the user with an answer that is easier for the user to understand than when a uniformly created general answer is provided.
  • The knowledge level may be set for each field.
  • The fields may include at least one of social common sense, formal science, natural science, social science, humanities, and applied science.
  • The first number of technical terms included in a 1-1 answer to the first question when the knowledge level is a first level may be less than the second number of technical terms included in a 1-2 answer to the first question when the knowledge level is a second level higher than the first level.
  • Hereinafter, the first level and the second level of the knowledge level may be referred to as a first knowledge level and a second knowledge level, respectively.
  • The average first length of the sentences included in the 1-1 answer to the first question when the knowledge level is the first level may be shorter than the average second length of the sentences included in the 1-2 answer to the first question when the knowledge level is the second level higher than the first level.
  • The first total number of characters of the 1-1 answer to the first question when the knowledge level is the first level may be less than the second total number of characters of the 1-2 answer to the first question when the knowledge level is the second level higher than the first level.
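  • The three constraints above (fewer technical terms, shorter sentences, fewer characters at a lower knowledge level) lend themselves to a mechanical check. The sketch below validates a candidate answer against hypothetical per-level budgets; the budget numbers are assumptions, and only their monotone ordering across levels reflects the disclosure.

```python
# Hypothetical per-knowledge-level budgets for a candidate answer.
# Only the monotone ordering across levels reflects the disclosure;
# the concrete numbers are assumptions.
ANSWER_BUDGETS = {
    1: {"max_terms": 0, "max_avg_sentence": 20, "max_chars": 80},
    2: {"max_terms": 1, "max_avg_sentence": 30, "max_chars": 140},
    3: {"max_terms": 3, "max_avg_sentence": 40, "max_chars": 220},
    4: {"max_terms": 6, "max_avg_sentence": 55, "max_chars": 340},
    5: {"max_terms": 10, "max_avg_sentence": 70, "max_chars": 500},
}

def answer_fits_level(answer: str, technical_terms: set[str], level: int) -> bool:
    """Check a candidate answer against the budgets of a knowledge level."""
    budget = ANSWER_BUDGETS[level]
    sentences = [s for s in answer.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    avg_len = sum(len(s) for s in sentences) / max(len(sentences), 1)
    term_count = sum(answer.count(term) for term in technical_terms)
    return (term_count <= budget["max_terms"]
            and avg_len <= budget["max_avg_sentence"]
            and len(answer) <= budget["max_chars"])
```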
  • The communication policy information may further include a language level corresponding to the user. The language level is a language level set for the user via the user's information communication terminal or the device, and is set in association with the user ID. The first answer information may be generated according to the knowledge level and the language level.
  • Second voice information including a second question may be output to the device so as to be output to the user from the speaker of the device. The second question, unlike the first question, is used for updating the knowledge level. Second answer information indicating the user's second answer to the second question is acquired from the device, and the knowledge level may be updated based on the second answer information.
  • The knowledge level may be updated based on the correctness of the second answer.
  • The knowledge level may be updated based on the time required from the output of the second voice information to the acquisition of the second answer information.
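  • One minimal way to combine the correctness of the second answer and the response time into a knowledge-level update is sketched below, purely as an illustrative assumption; the disclosure only states that both signals may be used.

```python
def update_knowledge_level(level: int, correct: bool,
                           response_time_s: float,
                           fast_threshold_s: float = 5.0) -> int:
    """Nudge the knowledge level after one quiz exchange.

    Illustrative heuristic: a quick correct answer suggests the questions
    are too easy and the level can rise; an incorrect answer suggests the
    content is too hard and the level should fall. The threshold and step
    sizes are assumptions, not taken from the disclosure.
    """
    if correct and response_time_s <= fast_threshold_s:
        level += 1
    elif not correct:
        level -= 1
    return min(5, max(1, level))  # clamp to the 1-5 scale
```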
  • An information processing method according to another aspect of the present disclosure is an information processing method in an information providing system including a device for communicating with a plurality of users including a first user and a second user, wherein the device includes a microphone and a speaker.
  • The method includes acquiring voice information including a question acquired by the microphone of the device, together with a device ID for identifying the device, and determining, using the voice information and a speaker identification database, which of the first user and the second user asked the question.
  • The speaker identification database manages a feature amount of the first user's voice in association with a first user ID that identifies the first user, and a feature amount of the second user's voice in association with a second user ID that identifies the second user; the first user ID and the second user ID are associated with the device ID in the information providing system.
  • (i) When the question is determined to be by the first user, first communication policy information including a first knowledge level corresponding to the first user is acquired. The first knowledge level is a response level set for the first user via the first user's information communication terminal or the device, and is associated with the first user ID. Based on the voice information and the first communication policy information, first answer information indicating an answer to the question is generated according to the first knowledge level, and the first answer information is output to the speaker of the device.
  • (ii) When the question is determined to be by the second user, second communication policy information including a second knowledge level corresponding to the second user is acquired. The second knowledge level is a response level set for the second user via the first user's information communication terminal, the second user's information communication terminal, or the device, and is associated with the second user ID. Based on the voice information and the second communication policy information, second answer information indicating an answer to the question is generated according to the second knowledge level, and the second answer information is output to the speaker of the device.
  • According to this configuration, when the first user or the second user asks the device a question, it is determined whether the question was asked by the first user or by the second user, and a different answer, determined according to each user's knowledge level, can be provided to each user. In other words, even if the first user and the second user ask the same question, the answer can be changed for each user according to that user's knowledge level. As a result, each user can be provided with an answer that is easier for that user to understand than when a uniformly created general answer is provided to every user.
  • Further, the voice information is acquired together with the device ID, and the first user ID and the second user ID are associated with the device ID in the information providing system.
  • The device ID can therefore be used to extract the users associated with the device ID as speaker candidates.
  • As a result, the speaker can be identified efficiently, without collating against the voice feature amounts of all users managed in the speaker identification database; a rough sketch of this two-stage lookup follows.
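  • The two-stage identification described here (narrow the candidates by device ID, then match voice feature amounts) might look roughly like the following sketch. The feature representation (a plain vector compared by cosine similarity) and the matching threshold are assumptions.

```python
import math

# Hypothetical speaker identification database:
# device ID -> list of (user ID, registered voice feature vector).
SPEAKER_DB: dict[str, list[tuple[str, list[float]]]] = {}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_speaker(device_id: str, voice_feature: list[float],
                     threshold: float = 0.8) -> str | None:
    """Stage 1: restrict candidates to users registered for this device ID.
    Stage 2: match the observed voice feature against each candidate.

    Returns the best-matching user ID, or None for an unregistered speaker
    (for whom a default policy would be used).
    """
    candidates = SPEAKER_DB.get(device_id, [])
    best_user, best_score = None, threshold
    for user_id, registered_feature in candidates:
        score = cosine(voice_feature, registered_feature)
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user
```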
  • An information processing method according to another aspect of the present disclosure is an information processing method in an information providing system including a device for communicating with a user, wherein the device includes a microphone and a speaker. The method includes acquiring first voice information, acquired by the microphone of the device and including a first question from the user, together with a device ID that identifies the device, and acquiring, based on the device ID, communication policy information including a knowledge level corresponding to the user.
  • The knowledge level is a response level set for the user via the user's information communication terminal or the device, and is set in association with a user ID that identifies the user; the user ID is associated with the device ID in the information providing system. Based on the first voice information and the communication policy information, first answer information indicating a first answer to the first question is generated according to the knowledge level, and the first answer information is output to the speaker of the device.
  • According to this configuration, an answer according to the user's knowledge level can be output from the speaker of the device. This allows the user to be provided with a personalized answer created for the user. As a result, it is possible to provide the user with an answer that is easier for the user to understand than when a uniformly created general answer is provided.
  • A device according to the present disclosure communicates with a user visually or aurally, and measures or sets at least one of the user's language level, knowledge level, visual level, and auditory level.
  • A recommended method (policy) for smooth communication with the user, determined from these, is recorded, and by communicating with the user according to the recommended method (policy), personalized communication that is easy for the user to understand and proceeds smoothly is achieved.
  • The present disclosure can also be realized as a program for causing a computer to execute each characteristic configuration included in such an information processing method, or as a communication support system operated by this program.
  • Needless to say, such a computer program can be distributed via a computer-readable non-transitory recording medium such as a CD-ROM, or via a communication network such as the Internet.
  • Digitized personal data (personal information) is stored in the cloud via communication networks, managed by information banks as big data, and used for various purposes for the benefit of individuals.
  • The highly information-oriented society is a society in which economic development and the solution of social issues are expected through a cyber-physical system: an information infrastructure that highly integrates the real space (physical space), the material world surrounding individuals, with a virtual space (cyberspace) in which computers cooperate with one another to perform various processes related to the physical space.
  • FIG. 1 is a diagram showing an example of the overall configuration of the information system according to the embodiment of the present disclosure.
  • In FIG. 1, the upper half shows cyberspace, containing clouds, and the lower half shows physical space, containing people and things.
  • In the center are depicted devices related to the user, who is mainly assisted in communication and in the execution of daily tasks.
  • At the left end are arranged objects related to another user who has frequent daily contact with the user (hereinafter referred to as a cohabitant, to distinguish this person from the user).
  • On the right side are arranged objects related to the communication partner (a doctor, in the figure) with whom the user communicates for a specific purpose in social life.
  • The user uses a personal information communication terminal 99, an information communication terminal 100 that can also be used by persons other than the user, such as the cohabitant, and an information source 102.
  • The information communication terminal 99 may be, for example, a smartphone owned by the user or a personal computer.
  • The information communication terminal 100 may be, for example, a robot that communicates with the user in the user's living space, a smart speaker, or a wearable device such as a smart watch, smart glasses, hearables, or smart clothes, or it may be a smartphone or a personal computer used by the user.
  • The information communication terminal 100 may have a function of communicating with the user by voice, a function of communicating with the user by displaying video or characters, and/or a function of communicating with the user by touch, gestures, facial expressions, or the like.
  • The information source 102 is, for example, a television or a magazine: a tool through which the user acquires information on a daily basis. The user lives each day while using the information communication terminals (99, 100) and the information source 102. Each of these information communication terminals is connected via a wide area communication network to a cloud 101 that stores and manages user information and information about the device.
  • The cohabitant has a personal information communication terminal 99 and uses the information communication terminal 100 and the information source 102 together with the user.
  • The information communication terminal 100 and the information source 102, described as shared assets in this figure, may be used by both the user and the cohabitant.
  • That is, the information communication terminal 100 may be used by the user or by the cohabitant.
  • When the doctor diagnoses the user's illness, the doctor inputs the diagnosis result as a medical record using the information communication terminal 110.
  • The input medical record information is stored and managed in the cloud 111, which stores medical record information and is connected to the information communication terminal 110 via the wide area communication network.
  • Here, a doctor is used as an example of the communication partner, but the present disclosure is not limited to this; the partner may be any person who communicates directly with the user, such as a lawyer, a police officer, a friend, or a neighbor.
  • FIG. 2 is a block diagram showing an example of the configuration of the information system according to the embodiment of the present disclosure.
  • The information communication terminals (99, 100) respectively include sensors 203 and 213 and video/audio output units 206 and 217 for communicating with the user through video information and voice information; operation units 205 and 216 that accept button presses and touch operations from the user; arithmetic units 202 and 212 that perform information processing such as voice recognition, voice synthesis, information retrieval, and information drawing carried out in the information communication terminals (99, 100); memories 204 and 215 that hold the data used by the arithmetic units 202 and 212; and communication units 201 and 211 for performing information communication with computers on the network.
  • The information communication terminal 100 may further include a movable unit 214 for gesturing, and facial expressions may be displayed on the video/audio output unit 217, in order to communicate smoothly with the user.
  • The cloud 101 that manages user information includes a communication unit 221 for performing information communication with computers on the network, a memory 223 that records user information, and a calculation unit 222 that performs information processing such as forming and outputting user information in response to external requests.
  • Similarly, the cloud 111 that manages the information handled by the communication partner includes a communication unit 241 for performing information communication with computers on the network, a memory 243 that records the information handled by the partner, and a calculation unit 242 that performs information processing such as inputting and outputting that information in response to external requests.
  • The information communication terminal 110 handled by the partner includes an operation unit 234 that accepts button presses and touch operations from the partner, an arithmetic unit 232 that performs information processing such as information retrieval and information drawing carried out in the information communication terminal 110, a memory 233 that holds the data used by the arithmetic unit 232, and a communication unit 231 for performing information communication with computers on the network.
  • When the information communication terminal 100 is a smartphone, the user communicates via an application running on the terminal.
  • The application may display the communication history with the user as text, like a short chat conversation, and may display an avatar as the conversation partner.
  • The avatar drawn by this application is graphic data displayed on the screen of the smartphone (video/audio output unit 217).
  • Although the form differs, there is no essential difference in how it communicates with the user; below, the information communication terminal 100 will be described in the form of a robot.
  • The information communication terminal 100 communicates with the cloud 101 that manages user information via a network that is a wide area communication network.
  • The cloud 101 that manages user information manages indices for specifying the user's communication ability, the conversation history, and the user's daily tasks, which will be described later.
  • Here, the information communication terminal 100 and the cloud 101 that manages user information are described as separate entities, but the present disclosure is not limited to this; a configuration may be adopted in which the user information is stored inside the information communication terminal 100 and the cloud 101 that manages it is not used.
  • In that case, since user information is not acquired from the cloud 101 via the network, the system can be used even in situations where network communication is unavailable, and the risk of user information leakage is reduced.
  • Next, the partner (the doctor) who communicates with the user will be described using the example of FIG. 1.
  • The doctor who communicates with the user examines the user.
  • The examination result is input via the information communication terminal 110, which is a medical record input terminal.
  • The medical record is managed in the cloud 111, a medical record storage cloud, as information handled by the partner.
  • The cloud 101 that manages user information communicates with the medical record storage cloud 111, and can acquire or access information on the user's own medical examination results, prescriptions, and past medical history.
  • The cloud 101 that manages user information functions as an information bank.
  • This presupposes a Society 5.0-type world in which an individual has independence and initiative with regard to his or her personal information and can utilize his or her own personal data for the purpose of returning value to himself or herself or to society.
  • To realize this, personal authentication technology using biometric information, personal information leakage prevention technology based on distributed encryption management, and secure computation technology that enables arithmetic processing on data kept encrypted, so that it can serve a wide range of needs while the security of personal information is maintained, are used; the details of those technologies are omitted in this disclosure.
  • FIG. 3 is a table showing an example of the correlation between the means of communication between the user and the information communication terminal 100 and the communication ability of the related user.
  • When the information communication terminal 100 to be used is a robot, an AI speaker mainly for voice communication, an earphone for hearing support, or the like, voice is used as the means of communication with the user.
  • In this case, the user's language level, knowledge level, and auditory level are relevant to the specific communication method recommended when the information communication terminal 100 communicates with the user (hereinafter referred to as a communication policy, or simply a policy).
  • The user's visual level is not included among the relevant communication ability elements, so it is listed as No in the table.
  • Here, the policy is a guideline on how to express the information to be conveyed, and it should be observed in order to smoothly convey certain information to the target user.
  • When the information communication terminal 100 to be used is a smartphone, an AI speaker attached to a monitor, a system in which an AI speaker and a television are linked, or the like, both voice and text may be used as means of communication with the user. In this case, it is the user's language level, knowledge level, visual level, and auditory level that are relevant to the communication policy of the information communication terminal 100.
  • When the information communication terminal 100 to be used is a smartphone, a smart watch, a television, or the like, only characters may be used as the means of communication with the user.
  • In this case, it is the user's language level, knowledge level, and visual level that are relevant to the communication policy of the information communication terminal 100.
  • In this way, the means of communication that can be used differ depending on the form of the information communication terminal 100 and on the functions used.
  • The policy of communication between the user and the information communication terminal 100 is therefore determined by the user's language level and knowledge level, together with the visual level and/or auditory level, according to the means of communication used.
  • FIG. 4 is a table showing an example of the correlation between the communication policy and the language level of the user.
  • In FIG. 4, the language level is divided into a plurality of stages, such as 1 to 5.
  • A language level of 1 indicates the least linguistic knowledge, and 5 the most. Therefore, when the language level is 1, the policy is set to use only basic words that are frequently used; conversely, when the language level is 5, the policy is set to use a variety of words used in a wide range of situations.
  • The words used for each language level may be defined, for example, as the words learned by first-grade elementary school students for language level 1, by third-grade elementary school students for language level 2, by sixth-grade elementary school students for language level 3, by third-year junior high school students for language level 4, and by third-year high school students for language level 5.
  • FIG. 5 is a table showing an example of the correlation between the communication policy and the knowledge level of the user.
  • The knowledge level is divided into a plurality of stages, such as 1 to 5.
  • A knowledge level of 1 indicates the least knowledge, and 5 the most. Therefore, when the knowledge level is 1, the policy is set to use only basic knowledge that is frequently used; conversely, when the knowledge level is 5, the policy is set to use a variety of knowledge used in a wide range of situations.
  • The knowledge used for each knowledge level may be defined, for example, as the knowledge learned by first-grade elementary school students for knowledge level 1, by third-grade elementary school students for knowledge level 2, by sixth-grade elementary school students for knowledge level 3, by third-year junior high school students for knowledge level 4, and by third-year high school students for knowledge level 5.
  • Alternatively, the levels may be set so that the higher the knowledge level, the greater the amount of knowledge assumed.
  • In an answer to which the policy is applied, the higher the knowledge level, the greater the number of technical terms contained in the answer, the longer the average length of the sentences contained in the answer, and/or the greater the total number of characters in the answer may be.
  • FIG. 6 is a table showing an example of the correlation between the communication policy and the visual level of the user.
  • The visual level is divided into a plurality of stages, such as 1 to 5.
  • A visual level of 1 indicates the lowest visual cognitive ability, and 5 the highest. Therefore, when the visual level is 1, the policy is set so that characters and sentences are displayed in the design with the highest visibility.
  • The design referred to here includes the display size of characters, character edging, the color scheme of characters, the arrangement of characters, and the like. Conversely, when the visual level is 5, the policy is set to display characters using a standard design.
  • FIG. 7 is a table showing an example of the correlation between the communication policy and the user's hearing level.
  • The hearing level is divided into a plurality of stages, such as 1 to 5.
  • A hearing level of 1 indicates the lowest auditory cognitive ability, and 5 the highest. Therefore, when the hearing level is 1, the policy is set to output speech at the slowest speed, with the highest clarity, and/or at the loudest volume. Clarity here means that the pronunciation of each syllable is distinct, there are few ambiguous pronunciations, and the speech is easy to hear. Conversely, when the hearing level is 5, the policy is set to output speech at a standard speed, clarity, and volume.
  • FIG. 8 is a table showing specific examples of communication policies between the user and the information communication terminal 100 according to the communication ability of the user.
  • It shows how the policy changes depending on the means of communication (voice, text, video) and on the user's communication abilities (language level, knowledge level, visual level, hearing level).
  • Hereinafter, the policy will be expressed in a function notation using the five elements that determine it: policy = f(means, language level, knowledge level, visual level, auditory level).
  • The policy is an output value that changes according to the input values of the communication means used and of the user's language level, knowledge level, visual level, and auditory level. A communication ability that is not involved in the policy is expressed as 0.
  • For example, when the language level is 1, words learned by first-grade elementary school students are used; when the knowledge level is 1, an answer sentence such as "It is a method to check if you are sick" is given within the category of knowledge learned by first-grade elementary school students; and when the hearing level is 2, the voice is output at a speech speed of 70 words/minute and a volume of 70 dB so that it is easier to hear.
  • In contrast, under policy f(voice, 4, 4, 0, 4), the terminal uses more advanced language, draws on broader knowledge, and responds somewhat slowly.
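  • This five-element function notation could be modeled as a simple record, with 0 marking an ability that is not involved for the chosen means. A minimal sketch, assuming a policy is exactly this tuple (the first example below is reconstructed from the levels stated in the preceding paragraph):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """policy = f(means, language, knowledge, visual, auditory).

    A level of 0 means that the ability is not involved for this
    means of communication (e.g. visual level 0 for voice-only output).
    """
    means: str      # "voice", "character+voice", "character+video+voice", ...
    language: int
    knowledge: int
    visual: int
    auditory: int

# The policy described in the preceding example (language 1, knowledge 1,
# hearing 2, voice only) and the contrasting policy f(voice, 4, 4, 0, 4):
easy_policy = Policy("voice", 1, 1, 0, 2)
advanced_policy = Policy("voice", 4, 4, 0, 4)
```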
  • FIG. 9 is a sequence diagram showing an example of initializing communication.
  • Hereinafter, an example in which the information communication terminal 99 is a smartphone will be described, and the information communication terminal 99 is also referred to as the smartphone 99.
  • Similarly, an example in which the information communication terminal 100 is a robot will be described, and the information communication terminal 100 is also referred to as the robot 100.
  • In step S901, the user operates an application, installed on the smartphone 99, for setting up the robot 100, and performs the initial setting.
  • The smartphone 99 opens an ad hoc communication session with the robot 100, and in step S902 the robot transmits its identification information (robot ID) to the smartphone.
  • This may be performed using a connection form such as the ad hoc mode of Wi-Fi (registered trademark) wireless communication, by using a beacon signal in Bluetooth (registered trademark), or by another communication method or communication mode.
  • Alternatively, the user may input the robot ID into the smartphone 99, or a code indicating the robot ID (for example, a QR code (registered trademark)) may be read and set with the camera of the smartphone 99.
  • In step S903, the user newly creates a user account, registering in the application a user ID that serves as the user's identification information. Further, in step S904, information (user identification information) is registered that allows the robot 100 or the cloud 101 to recognize, when someone talks to the robot 100 or the like, that the communication partner is this user.
  • The user identification information may be, for example, a wake-up word used only by the user for the robot 100, a physical feature of the user (face, fingerprint, etc.), or a voice feature of the user.
  • In step S905, the user registers the policy with which the robot 100 is to communicate with the user.
  • This is setting information (that is, communication policy information) including at least one of the above-mentioned language level, knowledge level, visual level, and auditory level.
  • In step S906, the setting application on the smartphone 99 transmits the user ID, the user identification information, the policy, and the robot ID of the robot 100 used by the user to the cloud 101 that manages user information.
  • Upon receiving this, the cloud 101 that manages user information records the above information regarding the user's initial setting in the memory 223 and registers the user. Then, in step S908, the smartphone 99 is notified of the completion of registration.
  • In step S909, the user confirms via the video/audio output unit 206 of the smartphone 99 that the registration was successful, and then proceeds with any additional registrations.
  • The initial setting of the cohabitant is performed in the same way as the user registration above.
  • In the cohabitant's account settings, a user ID serving as the cohabitant's identification information, user identification information allowing the robot 100 or the cloud 101 to recognize that the cohabitant is the communication partner, and the policy with which the robot 100 is to communicate with the cohabitant are registered in the application on the smartphone 99 (steps S910 to S915).
  • The setting application on the smartphone 99 transmits the cohabitant's user ID, user identification information, and policy, together with the robot ID of the robot used by the cohabitant, to the cloud 101 that manages user information.
  • The cloud 101 records the above information regarding the cohabitant's initial setting in the memory 223 and registers the cohabitant, and then notifies the smartphone 99 of the completion of registration. The user thus completes the initial setting for the user and the cohabitant (step S916).
  • Registration of these users may be performed individually rather than consecutively. Further, the registration of the cohabitant may be performed by the cohabitant himself or herself using the cohabitant's own smartphone instead of the user's smartphone. Here, since the user and the cohabitant use one robot 100, a plurality of users may be registered in association with the robot 100. What is important in this registration is that the user identification information for identifying the communication partner, and the policy to be used for that partner, are registered so that the robot 100 can respond appropriately depending on whether it is communicating with the user or with the cohabitant.
  • FIG. 10 is a sequence diagram showing an example of initializing communication.
  • The difference from FIG. 9 is that the initial setting is performed directly on the robot 100, without going through the smartphone 99.
  • The information registered in the initial setting is exactly the same as in FIG. 9.
  • When the user starts the initial setting on the robot 100, the user registers his or her user ID, user identification information, and policy via the robot 100 (steps S1001 to S1004).
  • The robot 100 notifies the cloud 101 that manages user information of this initial setting information together with its robot ID, and in step S1007 the user registration is completed. The user then continues with the initial setting of the cohabitant via the robot 100 in the same way.
  • In this way, the robot ID that identifies the robot 100 and the initial setting values for each user (user ID, user identification information, policy) are registered in the cloud 101 as a pair (steps S1008 to S1015).
  • FIG. 11 is a table showing the various parameters set in the initial setting shown in the sequence diagrams of FIGS. 9 and 10. Since this table is used when generating communication with the user or the cohabitant, it is recorded and managed in the cloud 101 that manages user information.
  • One record corresponds to one row.
  • The record with user ID (0001) is the information of the user who uses the robot 100 with robot ID (00001000); three pieces of information are registered as the user identification information by which the robot 100 identifies this user.
  • In addition, the policy (f(character + voice, 3, 3, 3, 3)) applied to this user is recorded.
  • The three pieces of information contained in the user identification information are: this user's unique wake-up word (Einstein), used when the user talks to the robot 100; the user's physical feature amount (for example, a face feature vector); and the user's voice feature amount (a voice feature vector). The robot 100 or the cloud 101 uses these to identify that this user is the one communicating.
  • Another user, the cohabitant, is also registered for the robot 100 with the same robot ID (00001000).
  • The cohabitant has user ID (0002), and the record registers a unique wake-up word (Rutherford) used only by the cohabitant, the cohabitant's physical feature amount, the cohabitant's voice feature amount, and the policy for the cohabitant (f(character + voice, 4, 4, 5, 5)). The same is recorded for other robots and users.
  • By checking against the registered user identification information, the robot 100 or the cloud 101 can determine whether it is talking to the user with user ID 0001, to the cohabitant with user ID 0002, or to someone other than these two. As a result, the robot 100 can communicate using the policy set for the person it is actually talking to.
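  • The registration records of FIG. 11 could be modeled as follows, reusing the Policy record from the earlier sketch. The field names are hypothetical, and the feature vectors are stand-ins for whatever face and voice embeddings the system actually uses.

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str                 # e.g. "0001"
    wakeup_word: str             # e.g. "Einstein"
    face_feature: list[float]    # registered physical feature amount
    voice_feature: list[float]   # registered voice feature amount
    policy: Policy               # e.g. Policy("character+voice", 3, 3, 3, 3)

# Robot ID -> registered users, mirroring one row per record of FIG. 11.
# Feature vectors are omitted ([]) in this sketch.
REGISTRY: dict[str, list[UserRecord]] = {
    "00001000": [
        UserRecord("0001", "Einstein", [], [], Policy("character+voice", 3, 3, 3, 3)),
        UserRecord("0002", "Rutherford", [], [], Policy("character+voice", 4, 4, 5, 5)),
    ],
}
```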
  • FIG. 12 is a sequence diagram showing an example of a response method for answering a question from a user. This figure shows the flow from information provided by the information source 102, through the user asking the robot 100 a question by voice, to the robot 100 answering the user by voice in accordance with the set user policy. Here, it is assumed that the user's initial setting has been completed in advance.
  • In step S1201, suppose the user does not understand the meaning of the term "PCR test" used in an explanation such as "how many new infections were confirmed by PCR test" appearing in news information from the information source 102.
  • In step S1202, the user asks the robot 100, "What is a PCR test?" orally (by voice).
  • The robot 100 acquires the voice information of the user's question with the sensor 213 and, in step S1203, transmits the voice information and its own robot ID to the cloud 101 that manages user information via the communication unit 211.
  • The cloud 101 refers to the database of FIG. 11, that is, the speaker identification database, and extracts the users who use the received robot ID. It then determines, by collation with the user identification information, which of the extracted users made the utterance.
  • For example, the speaker may be identified by comparing the voice feature amount of the received voice information with the registered voice feature amounts. Further, the voice information may be recognized and converted into an utterance character string, and the utterance character string may be analyzed for intent to determine that it is a question.
  • If the utterance is determined to be from an unregistered user, a default policy may be used.
  • In step S1205, the user's policy is looked up in the database, and the answer content (answer character string) according to the language level and knowledge level set for the user, together with a presentation method matching the auditory level set for the user, are specified.
  • Here, the user's policy is f(voice, 4, 4, 0, 4).
  • In step S1206, the cloud 101 requests the robot 100 to answer with the specified answer content (answer character string) and the specified presentation method or output format (speech speed, volume) as the answer according to the user's policy.
  • In step S1207, the robot 100 converts the received answer character string into an audio signal according to the specified presentation method, and responds to the user by voice via the video/audio output unit 217.
  • At this time, the voice output is controlled so that the answer voice is slightly easier to hear.
  • The user hears an answer matched to the user's language level, knowledge level, and auditory level, and can understand its contents well (step S1208).
  • In this way, the user can immediately and accurately understand the news information.
  • In step S1209, the cloud 101 requests the robot 100 to ask the user for an evaluation of the policy applied to this answer, specifying the inquiry content (inquiry character string) and the presentation method (speech speed, volume).
  • The robot 100 converts the inquiry character string into an audio signal and puts the question to the user via the video/audio output unit 217, using the designated presentation method.
  • This evaluation request may be made when 3 to 10 seconds have elapsed since the robot 100 finished the last reply and there has been no further inquiry from the user.
  • From the user's evaluation feedback (voice) on the answer policy, the robot 100 either converts the user's utterance into text information (a character string) or uses the spoken voice information as-is, and transfers it together with its own robot ID to the cloud 101 (steps S1211, S1212). Upon receiving the user's evaluation information, in step S1213 the cloud 101 records, in association with one another, the date and time of the user's question, its content, the policy used when the robot 100 answered, the content of the answer, and the user's evaluation information for the answer (a value quantifying the user's evaluation).
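  • Putting the pieces of FIG. 12 together, the cloud-side handling of one question might be sketched as below: identify the speaker, look up the policy, and return an answer string plus a presentation method. This reuses the toy databases and helper functions from the earlier sketches; generate_answer is a stand-in for the actual answer generation, which is out of scope here.

```python
def lookup_policy(robot_id: str, user_id: str | None) -> Policy:
    """Return the registered policy, or a default for unknown speakers."""
    default = Policy("voice", 3, 3, 0, 3)  # assumed default policy
    for record in REGISTRY.get(robot_id, []):
        if record.user_id == user_id:
            return record.policy
    return default

def generate_answer(question: str, policy: Policy) -> str:
    # Stand-in for the cloud's answer generation, which would constrain
    # vocabulary and knowledge to policy.language / policy.knowledge.
    return f"(answer to {question!r} at language level {policy.language})"

def handle_question(robot_id: str, voice_feature: list[float],
                    question_text: str) -> dict:
    """Hypothetical cloud-side flow for steps S1203-S1206 of FIG. 12."""
    user_id = identify_speaker(robot_id, voice_feature)   # two-stage lookup
    policy = lookup_policy(robot_id, user_id)             # default if unregistered
    answer = generate_answer(question_text, policy)
    fmt = output_format_for_hearing_level(policy.auditory or 3)
    return {"answer_text": answer,
            "speech_wpm": fmt.speech_wpm,
            "volume_db": fmt.volume_db}
```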
  • In step S1214, the policy is updated to a new one according to changes in the user's communication ability, and in steps S1215 and S1216 the user is notified of this via the robot 100.
  • Changes in the user's communication ability are determined by continuously measuring the communication exchanged between the user and the robot 100, and the policy is updated as necessary. Since the policy is determined by the user's language level, knowledge level, visual level, and auditory level, the robot 100 constantly checks and evaluates these levels in its daily communication with the user.
  • For example, the robot 100 may ask a question involving knowledge slightly beyond the knowledge level of the currently applied policy to check the user's knowledge level, display characters slightly smaller to check the user's visual level, or give an answer in a slightly quieter voice to check the user's hearing level; in this way, the user's communication ability may be constantly grasped and reflected in the policy, as sketched below.
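  • Such gentle probing can be viewed as a small perturbation of the current policy. The sketch below shows the idea for the auditory probe only, reusing the Policy record from the earlier sketch; the one-step perturbation and the success signal are assumptions.

```python
def probe_hearing(policy: Policy, user_heard_ok: bool) -> int:
    """One auditory probe: answer one step 'quieter' than the current policy.

    Treat the user, for a single answer, as if their hearing level were one
    step higher (i.e. less support). `user_heard_ok` stands in for whatever
    signal the robot uses (no repeat request, a sensible follow-up, etc.).
    Returns a provisional hearing-level estimate; the heuristic is an
    assumption, not taken from the disclosure.
    """
    probed_level = min(5, policy.auditory + 1)
    return probed_level if user_heard_ok else policy.auditory
```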
  • When the robot 100 can identify the information transmitted by the information source 102 and the manner of its expression, the user's communication ability may be set from that information. For example, if the TV program the user is watching is a pre-election debate between US presidential candidates in English, the user's English language level may be estimated as 5 and the knowledge level as 5, and the hearing level may be estimated from the distance between the user and the TV and/or from the TV's output volume information.
  • Similarly, if the TV program the user is watching is an animation for children, aimed mainly at the lower grades of elementary school, the user's language level may be estimated as 1 and the knowledge level as 1, with the hearing level again estimated from the distance between the user and the TV and/or from the TV's output volume information.
  • Likewise, if the books the user is reading on the robot 100 or on a tablet terminal are specialized books and papers, or if the destinations accessed via the Internet are documents and URLs requiring an advanced understanding, such as those of the Cabinet Office or the Japan Patent Office, the language level may be estimated to be 5.
  • The visual level may be estimated from the character display size on the robot 100 or the tablet terminal.
  • Since the robot 100 is close to the user on a daily basis and measures the user's communication ability, when it detects a sudden decrease in language level or knowledge level that would not normally occur, it may ask the user whether the user is feeling unwell, or may contact the user's family, the insurance company to which the user subscribes, a security company, the family doctor, or the like, to check whether the user is in any physical danger. These judgments can be made more accurately by combining them with sensor information capable of detecting the amount of activity, and with biometric information from, for example, a smart watch worn by the user.
  • The evaluation request that the cloud 101 sends to the robot 100 having a specific robot ID and the message in which the evaluation result of the user's response policy is returned from the robot 100 to the cloud 101 are linked as question and answer by the robot ID included in the messages.
  • However, the present disclosure is not limited to this; a unique message ID identifying the exchange may be given to the evaluation request message from the cloud 101 to the robot 100 and to the evaluation result message from the robot 100 to the cloud 101, and the question and answer may be managed in association with each other via this message ID.
  • In this way, the voice information spoken by the user, its character string, the answer content reflecting the user's policy, and the presentation method are exchanged between the robot 100 and the cloud 101, while the user ID and the user identification information, which identify the individual user, are not exchanged. Instead, the robot ID of the robot 100 communicating with the cloud 101 is used. Since the robot ID and each user's initial setting information (user ID, user identification information, policy) are managed in association with one another in the memory 223 of the cloud 101 at the time of initial setting, using the robot ID in the communication between the robot 100 and the cloud 101 makes it possible to narrow the people who may be communicating with the robot 100 down to a scale of several people, and then to identify the speaker with high accuracy using the user identification information.
  • FIG. 13 is a sequence diagram showing an example of a response method for answering a question from a user.
  • The difference between FIGS. 12 and 13 is that in FIG. 13 the robot 100 responds to the user using characters, video, and voice. The rest is the same as in FIG. 12, so the same parts are designated by the same reference numerals and their description is omitted.
  • Here, the user policy set in the cloud 101 is f(character + video + voice, 2, 2, 2, 2); not only voice but also text and video are set to be used for the answer. The flow is the same as in FIG. 12 up to the point where the robot 100 receives a question from the user, transfers it to the cloud 101, and the answer content and presentation method are determined by the cloud 101.
  • In step S1306, an answer using characters, video, and voice is given based on the user's policy set in the cloud 101.
  • The user's language level, knowledge level, visual level, and auditory level are all set to 2, so relatively easy answer content and an accessible presentation method are required. Therefore, as the answer according to the user's policy, the cloud 101 generates an answer character string using only basic words and knowledge, such as "This is a method of investigating a disease by a gene", based on the set language level and knowledge level.
  • This character string is displayed together with an illustration 800 that explains it, in a highly visible design (adjusted character size and color scheme, ruby annotations for Chinese characters, and so on).
  • In addition, the robot 100 is instructed to output the voice more slowly and at a louder volume.
  • The video/audio output unit 217 may use projection mapping to display the illustration 800 near the user in a more visible design.
  • In that case, the displayed image may be larger than the video/audio output unit 217 of the robot 100 itself so that the user can see it easily.
  • FIG. 14 is a flowchart showing an example of responding according to the currently applied policy.
  • When the robot 100 receives a question from the user in step S1401, then in step S1402 the robot 100 and/or the cloud 101 generates a response to the question (for example, the text of the answer) and its expression method (for example, the speed and volume at which the text of the answer is read aloud, or the size and color scheme in which the text of the answer is displayed) according to the current policy, and responds to the user.
  • In this way, the robot 100 performs individually optimal communication according to the currently applied policy.
  • FIG. 15 is a sequence diagram showing an example in which the robot 100 asks the user a question and measures the user's communication ability based on the answer. For example, the robot 100 asks the user by voice, "If the PCR test is negative, doesn't that mean you aren't infected?", and detects that the user answers by voice, "It means I'm not infected." By spontaneously and actively initiating this kind of exchange based on the current policy (that is, the user's currently measured communication ability), the user's communication ability can be continuously measured from various angles.
  • Incidentally, the above answer example is generally correct, but it is not a perfect answer.
  • In step S1501, the robot 100 constantly senses the user's state, and it is determined whether the user is relaxed and has the capacity to answer a question from the robot 100.
  • In step S1502, the cloud 101 that manages user information instructs the robot 100 to confirm the current user state at a predetermined timing, for example once every three months, and in step S1503 it acquires information about the current user state from the robot 100.
  • In step S1504, the cloud 101 ends the process when it determines, based on the acquired user state and the question frequency so far, that the timing is inappropriate for a quiz question.
  • In step S1504, when the cloud 101 determines that the timing is appropriate for a quiz question, it instructs the robot 100 to pose the quiz question.
  • At this time, the cloud 101 specifies to the robot 100 the question content (a voice signal or text information of the question) and the presentation method (speech speed, volume, etc.).
  • In step S1505, the robot 100, following the received instruction, asks the user the designated question content via the video/audio output unit 217, using the designated presentation method.
  • the robot 100 transfers the user's answer as an audio signal (or converted into character information) to the cloud 101 (steps S1506 and S1507).
  • step S1508 the cloud 101 provides feedback voice information (for example, either a correct answer or an incorrect answer, and in the case of an incorrect answer, exemplary answer information, etc.) and presentation information for the received answer to the robot 100.
  • step S1509 the robot 100 gives voice feedback to the user accordingly.
  • the cloud 101 includes the date and time of the question asked to the user, the content thereof, the policy used when the robot 100 asks the question, the content of the user's answer to the question, and the user or the user when the question is asked. At least two of the surrounding environment of the robot 100 (intensity of ambient noise, distance between the user and the robot 100, time required for the user to answer) are recorded in association with each other.
  • the cloud 101 re-determines the user's language level, knowledge level, visual level, and auditory level from the user's answer to the question, determines whether or not the update is necessary, and updates to a new policy if necessary (step S1511). , S1512).
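  • a minimal sketch of this record keeping and re-determination is shown below; the field names, the ten-answer window, and the 0.9/0.5 thresholds are assumptions for illustration, not the disclosed method itself.

```python
import datetime

def make_quiz_record(question, policy, answer, asked_at, answered_at,
                     noise_db, distance_m):
    # One record per quiz, associating question, policy, answer, and the
    # surrounding environment, as described above.
    return {
        "asked_at": asked_at.isoformat(),
        "question": question,
        "policy": policy,
        "answer": answer,
        "response_time_s": (answered_at - asked_at).total_seconds(),
        "ambient_noise_db": noise_db,
        "user_robot_distance_m": distance_m,
    }

def maybe_update_level(records, current_level):
    """Re-estimate the knowledge level from recent answers; return
    (update_needed, new_level)."""
    recent = records[-10:]
    if not recent:
        return False, current_level
    correct_rate = sum(r["answer"]["correct"] for r in recent) / len(recent)
    if correct_rate > 0.9:
        new_level = min(5, current_level + 1)
    elif correct_rate < 0.5:
        new_level = max(1, current_level - 1)
    else:
        new_level = current_level
    return new_level != current_level, new_level

asked = datetime.datetime(2021, 10, 7, 13, 0)
rec = make_quiz_record("If the PCR test is negative, isn't it infected?",
                       "f(voice, 3, 3, 3, 3)",
                       {"text": "I'm not infected.", "correct": False},
                       asked, asked + datetime.timedelta(seconds=4),
                       noise_db=38, distance_m=1.2)
print(maybe_update_level([rec], current_level=3))  # (True, 2)
```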
  • in the above, the explanation used voice communication, but the present disclosure is not limited to this, and questions may also be given using visual information (or a combination of visual and voice information).
  • in that case, the question content generated by the cloud 101 is expressed by characters and video information, and the presentation method adjusts visibility, such as the size and color scheme of the characters, according to the user's policy.
  • FIG. 16 is a flowchart showing an example in which the robot 100 asks a question to the user and updates the communication ability and / or the policy of the user according to the answer.
  • in step S1601, the robot 100 sets a policy to be used for a given question with reference to the policy currently applied to the user, and asks the question according to that policy.
  • here, the policy used for the question means that the question is given at the same level of difficulty, and with the same presentation method, as the policy currently set for the user.
  • in step S1602, the robot 100 subsequently senses the user's answer. When the answer is completed, it is determined in step S1603 whether the questioning is finished or is to be continued. If the questioning is to be continued, the process proceeds to No, a policy for the next question is set, and a question is asked again.
  • in step S1605, it is determined whether the policy estimated from the answers is the same as the policy currently used in daily communication with the user. If it is the same, the process proceeds to Yes, the policy is not updated, and the process ends.
  • if it is not the same, the process proceeds from step S1606 to step S1607, where the policy is updated, and the process ends.
  • in this way, the robot 100 voluntarily asks questions according to a plurality of policies, and by checking the correct answer rate for each of them it can estimate which policy suits the user. By applying that policy as the new one, daily communication between the user and the robot 100 can proceed smoothly and without stress to the user.
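  • a minimal sketch of this estimation, assuming each candidate policy is identified by its difficulty rank and that a correct answer rate of at least 0.8 counts as comfortable (both assumptions made for illustration):

```python
def pick_policy(results, threshold=0.8):
    """results maps policy difficulty rank -> list of True/False answers.
    Prefer the hardest policy whose correct answer rate stays comfortable,
    so communication is neither stressful nor patronizing."""
    rates = {rank: sum(a) / len(a) for rank, a in results.items() if a}
    viable = [rank for rank, rate in rates.items() if rate >= threshold]
    return max(viable) if viable else min(rates)

print(pick_policy({1: [True] * 5,
                   2: [True, True, True, True, False],
                   3: [False, True, False]}))  # -> 2
```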
  • FIG. 17 shows an example of a table used when the robot 100 asks a question to the user and updates the communication ability (policy) of the user based on the answer.
  • f(character + voice, 3, 3, 3, 3)
  • the visual level is set to 2, a value close to that indicated by the average response time.
  • the hearing level is set to 3 so as to ensure ease of hearing.
  • the policy determination method explained here is just an easy-to-understand example given for convenience of explanation. For example, even within the same policy, a policy may be determined according to the user's communication ability measured more accurately by repeating a plurality of questions and measuring a plurality of answers.
  • FIG. 18 is an example of a table used when the language level of a user is subdivided into a plurality of languages and managed.
  • the language level of the user is recorded and managed for each of English, Chinese, Hindi, Japanese, Spanish, and Arabic. These language levels may be set by the user through self-report, or may be evaluated and set according to a score such as a language proficiency test.
  • the languages marked with "-" in the "Language level for each language" column of the table are not at a level where communication is established, indicating that they cannot be used in practice.
  • by having the robot 100 measure the communication ability (language level), which differs depending on the language used for communication, communication can be achieved according to the user's level in each language.
  • in the above, the language level for each language used by the robot 100 is set, but this can also be used for other purposes.
  • for example, the user currently has an English language level of 3, but by intentionally setting communication with the robot 100 to English, the user can practice English conversation aiming at a higher English language level.
  • if the robot 100 tests the user's English proficiency, the user can know his or her own English proficiency. Further, it is conceivable that the robot 100 raises the English language level used for communication in accordance with the improvement of the user's English ability.
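  • a sketch of the per-language table of FIG. 18 as a simple data structure; "-" (no practical ability) is modeled as None, and the helper for deliberate practice is an illustrative assumption:

```python
# Levels are illustrative; None corresponds to "-" in the table.
language_levels = {"English": 3, "Chinese": None, "Hindi": None,
                   "Japanese": 5, "Spanish": None, "Arabic": None}

def usable_languages(levels):
    """Languages in which communication is established."""
    return {lang: lv for lang, lv in levels.items() if lv is not None}

def practice_language(levels, lang):
    """Deliberately select a language below the user's best level,
    e.g. English conversation practice aiming at a higher level."""
    if levels.get(lang) is None:
        raise ValueError(f"communication is not established in {lang}")
    return lang

print(usable_languages(language_levels))   # {'English': 3, 'Japanese': 5}
print(practice_language(language_levels, "English"))
```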
  • FIG. 19 is an example of a table used when the knowledge level of a user is subdivided into a plurality of fields and managed.
  • up to this point, the knowledge level has been expressed by one aggregated value as one element of the user's communication ability.
  • however, the question of the robot 100 illustrated in FIG. 15, "If the PCR test is negative, isn't it infected?", clearly asks about medical knowledge. If the user's knowledge level is raised or lowered as a single comprehensive value depending on whether this question is answered correctly, the result may be inappropriate, especially for non-medical themes.
  • in this example, the user's knowledge level is 4 overall, but 5 for applied science and 3 for humanities.
  • the knowledge level for each field was set here, but this can also be used for other purposes.
  • for example, the user currently has a social science knowledge level of 3, but the robot 100 can try to incorporate political and economic topics into daily communication with the user, introducing political and economic news information to the user every day, so that the user acquires a higher level of social science knowledge.
  • if the robot 100 tests the user on the social sciences, the user can know the knowledge level of his or her own social science field. Further, it is conceivable that the robot 100 raises the social science knowledge level used for communication in accordance with the user's knowledge in that field.
  • the subdivision of the knowledge level is not limited to academic fields as described above; it may be set according to knowledge of specific fields such as music, movies, animation, and painting, and may be subdivided in more detail, for example into Japanese rock, Japanese pop, American rock, American pop, and so on.
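  • a sketch of subdividing the knowledge level down to concrete genres, as suggested above; the hierarchy and the numbers are illustrative assumptions:

```python
knowledge_levels = {
    "overall": 4,
    "applied science": 5,
    "humanities": 3,
    "social science": 3,
    "music": {"Japanese rock": 4, "Japanese pop": 4,
              "American rock": 2, "American pop": 3},
}

def level_for(topic):
    """Return the most specific level recorded for a topic, falling back
    to the overall knowledge level."""
    def search(table):
        for key, value in table.items():
            if isinstance(value, dict):
                found = search(value)
                if found is not None:
                    return found
            elif key == topic:
                return value
        return None
    found = search(knowledge_levels)
    return knowledge_levels["overall"] if found is None else found

print(level_for("American rock"))  # 2
print(level_for("chemistry"))      # falls back to overall: 4
```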
  • FIG. 20 is a sequence diagram showing an example of initializing communication.
  • in the examples so far, the answer content and the presentation method according to the user's communication ability were generated by the cloud 101; here, a system configuration in which they are generated by the robot 100 is described.
  • in this case, the personal information of the user is not managed in the cloud 101 at all, and personal information is not exchanged over the Internet, so this service can be used more safely.
  • the range in which personal information, or information closely related to it, is handled includes only the user, the smartphone 99, and the robot 100, and does not include the cloud 101.
  • in step S2006, the smartphone 99 notifies the robot 100 of the initial setting information and causes the robot 100 to register it.
  • in step S2008, the robot 100 records the user's initial setting information in the memory 215, and in step S2009 it returns a registration completion notification to the smartphone 99.
  • the smartphone 99 also encrypts this user's initial setting information and records it in the memory 204 of the smartphone 99 (step S2007). This can be used when the robot 100 breaks down, or when the user's initial setting information is to be set in a new robot 100 at replacement.
  • the user, who has received the notification of completion of user registration via the video/audio output unit 206 of the smartphone 99, ends the initial setting (step S2010).
  • the smartphone 99 notifies the cloud 101 of the robot ID and the user ID, which are the part of the user's initial settings not closely related to personal information (the minimum information required for maintenance purposes), and the cloud 101 registers the pair (steps S2011, S2012). In other words, only the initial setting information excluding the information closely related to the user's personal information, namely the user identification information and the policy, is registered in the cloud 101.
  • the cloud 101 is not given information related to personal information, such as the questions the user asks the robot 100 or the contents of conversations, but maintenance information and the like may be appropriately notified from the smartphone and/or the robot 100 in order to improve service quality.
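  • a minimal sketch of this division of information, assuming hypothetical stores and a cloud interface; the XOR "encryption" is a placeholder only (a real system would use an authenticated cipher such as AES-GCM):

```python
import json
from hashlib import sha256

def encrypt(text, key):
    # Placeholder XOR keystream for the sketch only.
    digest = sha256(key).digest()
    return bytes(b ^ digest[i % len(digest)]
                 for i, b in enumerate(text.encode()))

def register(initial_settings, robot_store, phone_store, cloud, key):
    robot_store["settings"] = initial_settings                                # step S2008
    phone_store["settings_enc"] = encrypt(json.dumps(initial_settings), key)  # step S2007
    # Only the (robot ID, user ID) pair reaches the cloud (steps S2011, S2012);
    # the policy and the user identification information stay local.
    cloud.register_pair(initial_settings["robot_id"], initial_settings["user_id"])

class DummyCloud:
    def register_pair(self, robot_id, user_id):
        print("cloud registered pair:", robot_id, user_id)

robot_store, phone_store = {}, {}
register({"robot_id": "R-1", "user_id": "U-1", "policy": "f(voice, 2, 2, 2, 2)"},
         robot_store, phone_store, DummyCloud(), key=b"household-secret")
```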
  • the process of initializing the roommate as the second user is the same as the process of initializing the first user, so it is omitted here.
  • alternatively, the initial setting may be performed using only the robot 100, with the initial setting information registered directly in the robot 100.
  • the robot 100 referred to here may be replaced by a smartphone 99 owned by the user.
  • a user uses the smartphone 99 as a mobile phone every day, but at home, by storing the smartphone in a robot-shaped case, the user can also use the smartphone 99 as if it were the robot 100 described above.
  • by equipping this case not only with a smartphone charging function but also with a microphone, a speaker, and a projection mapping function, usage like a robot or smart speaker becomes possible simply by storing the smartphone in the case.
  • FIG. 21 is a sequence diagram showing an example of a response method for answering a question from a user.
  • This figure is an embodiment different from FIG. 12 in which the cloud 101 generates an answer according to the user's policy, and is a case where the robot 100 generates an answer according to the user's policy. Therefore, there is no exchange of information with the cloud 101 in the conversation processing with the user. Since the robot 100 manages initial setting information (user identification information and policies) closely related to the user's personal information and realizes a conversation with the user by internal processing, there is an advantage that the risk of leakage of the user's personal information can be reduced.
  • the same parts as those in FIG. 12 will be designated by the same reference numerals, and the description thereof will be omitted.
  • in step S2102, the user asks the robot 100 "What is a PCR test?" orally (by voice).
  • in step S2103, the robot 100 acquires the voice information of the user's question with the sensor 213, determines which of the preset users made the utterance by matching it against the user identification information recorded in the memory 215 of the robot 100 (at least one of the unique wake-up words, physical features, and voice features), and estimates that it is the user who is asking the question (rather than a co-resident or an unregistered third party). If only one user is registered, this speaker identification step may be omitted, assuming that the utterance is from the default user.
  • in step S2104, the robot 100 next reads the user's policy from the memory 215, and determines the answer content (answer character string) according to the language level and knowledge level set for the user, and the presentation method (speech speed, volume, etc.) according to the auditory level set for the user. Further, in step S2105, the robot 100 gives the answer according to the user's policy by voice via the video/audio output unit 217, and the user understands the PCR test from the answer.
  • in step S2107, the robot 100 asks the user by voice to evaluate the policy applied to this answer. Based on the user's spoken evaluation, the robot 100 records, in association with each other, at least two of: the date and time of the user's question, its content, the policy applied when the robot 100 answered, the content of the answer, and the user's evaluation information for the answer (a value quantifying the user's evaluation) (steps S2108 and S2109).
  • if it is determined in step S2110 that the user's evaluation can be improved by updating the policy, the robot 100 updates the policy and records it in the memory 215, and at the same time, in step S2111, notifies the user that the policy will be updated to a new one.
  • although the robot 100 may access information on the Internet when generating an answer, the user's personal information (user identification information, policy, etc.) is not used even in that access, and no personal information of the user is leaked.
  • however, personal information including the user's initial setting information, and membership information of specific groups, may be registered in the initial settings so that the robot 100 can use them. This applies, for example, to accessing articles and event information managed by academic societies, news information distributed by news distribution companies, the user's cash and securities information managed by financial institutions, or the acquisition/usage status of point services.
  • besides the configuration in which the cloud 101 manages the policy and generates the answer and the configuration in which the robot 100 manages the policy and generates the answer, the cloud 101 and the robot 100 may cooperate to generate the answer.
  • for example, the user's initial setting information is recorded and managed in the memory 215 of the robot 100. If any part of the question or conversation received from the user corresponds to personal information, the robot 100 hides or anonymizes that part, converts the question into general question content that cannot identify the user's personal information, and sends it to the cloud 101. The cloud 101 returns a general answer to the robot 100. The robot 100 then applies the user's policy to the received general answer content, converts it into an answer that is easy for the user to understand, and presents it to the user.
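  • a minimal sketch of this cooperation, under the assumption that personal details can be masked by simple patterns (a real system would need far more robust detection):

```python
import re

# Illustrative masking patterns; real detection would be more robust.
PERSONAL_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{4}-\d{4}\b"), "<phone>"),
    (re.compile(r"\b[\w.]+@[\w.]+\b"), "<email>"),
]

def anonymize(question, user_names):
    """Hide names and obvious identifiers before the question leaves the robot."""
    for name in user_names:
        question = question.replace(name, "<person>")
    for pattern, token in PERSONAL_PATTERNS:
        question = pattern.sub(token, question)
    return question

def answer_via_cloud(question, user_names, ask_cloud, apply_policy):
    generic = anonymize(question, user_names)   # nothing personal leaves the robot
    general_answer = ask_cloud(generic)         # cloud returns a general answer
    return apply_policy(general_answer)         # converted locally per the policy

print(anonymize("I'm Taro, my number is 090-1234-5678. What is a PCR test?",
                ["Taro"]))
```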
  • in this way, depending on the user's policy, the linguistic expression of the answer content, the prerequisite knowledge, the display size and color scheme of characters and images, and the speech speed and volume are adjusted.
  • for example, when the knowledge level is 2, as in the user's current setting shown in the 4th line, the expression may be converted into a plain expression such as "a method of investigating a disease by gene" and presented to the user.
  • FIG. 22 is a sequence diagram showing an example in which a user is asked a question and the communication ability is measured by the answer.
  • This figure is an embodiment different from FIG. 15 in which the cloud 101 generates questions according to the user's policy, and is a case where the robot 100 generates questions according to the user's policy. Therefore, there is no exchange of information with the cloud 101 in a series of conversation processing with the user. Since the robot 100 manages initial setting information (user identification information and policies) closely related to the user's personal information and realizes a conversation with the user by internal processing, there is an advantage that the risk of leakage of the user's personal information can be reduced.
  • hereinafter, this figure will be described, omitting the description of the parts that are the same as in FIG. 15.
  • in step S2201, the robot 100 constantly senses the user state, and it is determined whether the user is in a relaxed state and has time to answer a question from the robot 100.
  • the robot 100 tests the user's communication ability in order to update the user's policy at a predetermined timing, for example once every three months.
  • in step S2202, the robot 100 ends the process when it determines, based on the user state and the question frequency so far, that it is an inappropriate timing for a quiz question.
  • when it determines that the timing is appropriate, the robot 100 determines the question content (at least one of characters, video, and audio information representing the question) and the presentation method (character display size, color scheme, speaking speed and volume of the audio, etc.).
  • the robot 100 asks the user the determined question content via the video/audio output unit 217 using the determined presentation method.
  • in step S2204, the robot 100 acquires the user's answer as an audio signal via the sensor 213.
  • the robot 100 feeds back on the received answer to the user using any of text, video, and audio information (steps S2205, S2206).
  • the feedback includes information such as whether the answer is correct or incorrect.
  • in step S2207, the robot 100 records, in association with each other, at least two of: the date and time of the question asked to the user, its content, the policy used when the robot 100 asked the question, the content of the user's answer to the question, and the surrounding environment of the user or the robot 100 when the question was asked (intensity of ambient noise, distance between the user and the robot 100, time required for the user to answer).
  • the robot 100 re-determines the user's language level, knowledge level, visual level, and auditory level from the user's answer to the question, determines whether an update is necessary, and updates to a new policy if necessary (steps S2208, S2209).
  • FIG. 23 is a sequence diagram showing an example in which when a user is communicating with a doctor with whom he / she is talking, the degree of understanding of the user is estimated from the communication and support intervention is appropriately performed.
  • the robot 100 shares the same experience as the user. That is, it is assumed that the robot 100 also perceives what the user sees and hears. For example, there is a case where the robot 100 is near the user, or a case where the user communicates with the doctor with whom the user is talking, such as a remote diagnosis, via the robot 100 or the smartphone 99.
  • in step S2301, conversation 1, which the doctor speaks to the user, is transmitted to the user as well as to the robot 100.
  • the robot 100 either keeps conversation 1 as voice data or converts the voice into character data, and transmits it to the cloud 101 that manages the user information.
  • the user responds to conversation 1 by speaking conversation 2 to the doctor.
  • this conversation 2 is also transmitted to the robot 100.
  • the robot 100 either keeps conversation 2 as voice data or converts the voice into character data, and transmits it to the cloud 101 that manages the user information.
  • the cloud 101 determines the policy to which conversation 1 corresponds, compares it with the currently applied policy, and/or estimates, based on the content of conversation 2, whether the user is likely to have understood conversation 1. Specifically, when conversation 1 is more difficult than the currently applied policy, when conversation 2 is an ambiguous response, or when the response takes a long time, the cloud 101 may decide that communication support is needed.
  • comparing the policy to which conversation 1 corresponds with the currently applied policy means judging the other party's conversation in light of the policy, which is a guideline for daily, smooth communication between the user and the robot 100, to determine whether the other party's utterance was made in a way sufficiently easy to understand given the user's communication ability. In the example of this figure, since it was determined that conversation 1 and conversation 2 do not need such communication support, the cloud 101 listens to these conversations without intervening (step S2305).
  • in step S2306, conversation 3, which the doctor speaks to the user (for example, "Have you ever had anaphylaxis?"), is transmitted to the user as well as to the robot 100.
  • in step S2307, the robot 100 similarly transmits conversation 3 to the cloud 101.
  • in step S2308, the user responds to conversation 3 with conversation 4 (for example, "I don't think ##") to the doctor. This conversation 4 is also transmitted to the robot 100.
  • in step S2309, the robot 100 similarly transmits conversation 4 to the cloud 101.
  • the cloud 101 determines the policy to which conversation 3 corresponds, compares it with the currently applied policy, and/or estimates, based on the content of conversation 4, whether the user is likely to have understood conversation 3.
  • here, the policy to which conversation 3 corresponds is more difficult than the currently applied policy, and/or conversation 4 is an ambiguous response.
  • in this case, the cloud 101 determines that communication support is necessary, creates conversation 5 (for example, "there is no particularly strong allergy other than pollen") to help the user understand conversation 3, by accessing the user's allergy test result information managed by the cloud 101, and intervenes in the conversation between the user and the doctor (step S2310).
  • conversation 5 is a conversation created according to the currently applied policy so that the user correctly understands conversation 3 and communication is facilitated.
  • in step S2311, the cloud 101 instructs the robot 100 to utter conversation 5, and in step S2312 the robot 100 utters conversation 5 to the doctor and the user.
  • in step S2313, the doctor responds to the user and the robot 100 with conversation 6 (e.g., "OK").
  • in step S2314, the robot 100 similarly transmits conversation 6 to the cloud 101, and in step S2315 the cloud 101 determines that no additional communication intervention is required. The necessity of this communication intervention can be determined in the same manner as the evaluation of conversation 1.
  • conversation 7 and conversation 8 are the same as conversation 1 and conversation 2, and therefore description thereof will be omitted (steps S2316 to S2320).
  • the judgment of whether to intervene in communication may also be made by detecting the user's visible reactions (for example, tilting the head, waving a hand) and the user's biometric information (for example, brain waves, eye movement, respiration, heart rate (variability), etc.).
  • that is, the material for the intervention judgment may be at least one of: the policy to which the other party's utterance corresponds, the content of the user's response, the time taken for the user to respond to the other party's utterance, the user's apparent reaction, and the user's biometric information.
  • in the above, the cloud 101 determines the necessity of communication support every time the user responds, but the present invention is not limited to this. For example, this judgment may be made continuously during communication, and communication support may be provided whenever necessary, for example at a timing before the user responds. Alternatively, support may be provided when the time during which no one is speaking (in this case, when both the user and the doctor are silent) reaches a predetermined length (for example, 3 seconds or more).
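  • a minimal sketch of this intervention decision; the ambiguity markers, the 5-second reply threshold, and the difficulty estimator are assumptions (the 3-second silence rule follows the example above):

```python
AMBIGUOUS_MARKERS = ("i don't think", "maybe", "not sure", "um")

def needs_support(utterance_level, policy_level, reply,
                  reply_delay_s, silence_s=0.0):
    """Decide whether to intervene, based on the signals described above."""
    if utterance_level > policy_level:        # harder than the user's policy
        return True
    if any(m in reply.lower() for m in AMBIGUOUS_MARKERS):
        return True                           # ambiguous response
    if reply_delay_s > 5.0:                   # reply took a long time
        return True
    if silence_s >= 3.0:                      # nobody has spoken for 3 s or more
        return True
    return False

# Conversation 3/4 from the example: a hard question, an ambiguous reply.
print(needs_support(utterance_level=4, policy_level=3,
                    reply="I don't think ...", reply_delay_s=2.0))  # True
```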
  • FIG. 24 is a flowchart showing an example of estimating the degree of understanding of the user from the communication in which the user participates and appropriately intervening in support.
  • in step S2401, the robot 100 senses, from conversation or chat, whether the user is communicating with someone.
  • in step S2402, the language level of the words used in that conversation or chat is further determined during the communication.
  • in step S2403, if the level of the language used in the conversation or chat is equal to or lower than the language level of the currently applied policy, the process proceeds to Yes, and sensing is continued until the conversation ends (Yes in step S2406). If not, the process proceeds to No, and in step S2404 the parts using a high language level are replaced according to the currently applied policy. Further, in step S2405, the robot 100 conveys the content of the conversation or chat to the user in an easy-to-understand manner according to the policy, and continues sensing until the conversation ends.
  • similarly, an application on the smartphone 99 that supports communication judges whether the conversation content is sufficiently easy for the user to understand by comparing it with the currently applied policy. If the application determines that it is difficult for the user to understand or to respond accurately, it displays information supporting understanding on the smartphone screen. Specifically, the avatar displayed by the application shows messages such as "Anaphylaxis is a strong allergic reaction." and "According to an allergy test conducted three years ago, there is no strong allergy other than pollen of sugi (Japanese cedar) and cypress.", rendered according to the currently applied policy (for example, using only basic words, with high visibility and a large font size). By checking this supplementary information displayed on the smartphone during the conversation with the doctor, the user can confidently answer the doctor's interview and give correct information smoothly without interrupting the communication.
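  • a minimal sketch of the replacement in step S2404, assuming a lexicon that maps each difficult word to its level and a plain paraphrase:

```python
# Hypothetical lexicon: word -> (required language level, plain paraphrase).
LEXICON = {
    "anaphylaxis": (5, "a strong allergic reaction"),
    "antihypertensive": (4, "blood-pressure-lowering"),
}

def simplify(text, language_level):
    """Replace words above the user's language level with plain forms."""
    for word, (level, plain) in LEXICON.items():
        if level > language_level:
            text = text.replace(word, plain)
    return text

print(simplify("Have you ever had anaphylaxis?", language_level=2))
# -> "Have you ever had a strong allergic reaction?"
```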
  • FIG. 25 is a sequence diagram showing an example in which the robot 100 summarizes the communication so far immediately before the end of communication, unlike FIG. 23.
  • as shown in the figure, the user and the doctor have conversations 1 to 6. Since the processing up to this point overlaps with that described in FIG. 23, its explanation is omitted; note, however, that the cloud 101 analyzes the communication during the conversation and records its progress.
  • in step S2515, when the cloud 101 determines, as a result of communication analysis, that the conversation between the user and the doctor is about to end, in step S2516 it summarizes the conversation so far and organizes the conclusion. In step S2517, the summary and/or conclusion is further expressed, according to the policy, as conversation 7 (for example, "Take two antihypertensive drugs after a meal?"), and the robot 100 is requested to confirm it with the user and the doctor. In step S2518, the robot 100 utters the received conversation 7 and confirms whether the user and the doctor have a common understanding.
  • the end of the conversation may be detected by a method such as: the context and flow of the conversation; the user or the other party summarizing the conversation so far; the user or the other party saying goodbye; the user trying to leave the room where the other party is; or the robot 100 confirming with the user or the other party that the conversation is ending.
  • in step S2519, the doctor or the user responds to the summary or conclusion shown in conversation 7 with conversation 8 (e.g., "yes, yes").
  • in step S2520, conversation 8 is also notified from the robot 100 to the cloud 101, as before.
  • in step S2521, if conversation 8 clearly agrees with conversation 7, the cloud 101 ends the process. If not, a conversation requesting explanation is generated as conversation 9, and the robot 100 is instructed accordingly in step S2522.
  • in step S2523, the robot 100 utters the received conversation 9 and asks the user and the doctor to restate the summary, correct the conclusion, or explain.
  • in step S2524, the user or the doctor responds to it with conversation 10.
  • in step S2526, if the conversation is summarized by conversation 10 and the conclusion is clarified, the cloud 101 ends the process. If not, it continues to ask for a summary or an explanation of the conclusion, as with conversation 9.
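  • a minimal sketch of this confirmation loop; agreement detection by keyword and the turn limit are deliberate simplifications made for illustration:

```python
def confirm_summary(say, listen, summarize, max_turns=3):
    """Utter a summary and keep asking until clear agreement is heard."""
    say(summarize())                                   # conversation 7
    for _ in range(max_turns):
        reply = listen()                               # conversations 8, 10, ...
        if any(w in reply.lower() for w in ("yes", "ok", "right")):
            return True                                # clear agreement: end
        say("Could you restate the summary or the conclusion?")  # conversation 9
    return False

answers = iter(["hmm...", "yes, yes"])
confirm_summary(print, lambda: next(answers),
                lambda: "Take two antihypertensive drugs after a meal?")
```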
  • FIG. 26 is a sequence diagram showing an example in which the robot 100 summarizes the communication so far after the communication is completed, unlike FIG. 25. As shown in the figure, the user and the doctor with whom he / she talks have conversations 1 to 6. Since the processing up to this point overlaps with the processing described with reference to FIG. 25, the description thereof is omitted.
  • in step S2616, the robot 100 detects that the conversation has ended with conversation 6, for example from the context and flow of the conversation, the user or the other party summarizing the conversation so far, the user or the other party giving a farewell greeting, the user leaving the room where the other party is, a certain period of time elapsing after the user stops talking with the other party, or the robot 100 confirming with the user or the other party that the conversation has ended.
  • after that, the cloud 101 conveys a summary and/or conclusion of the conversation so far to the user via the robot 100, according to the policy currently applied to the user (steps S2617 to S2619). By doing so, it becomes easy for the user to re-recognize and correctly understand the content and result of the communication with the other party (step S2620).
  • FIG. 27 is a sequence diagram showing an example of summarizing only to the user after the communication is completed. As shown in the figure, the user and the doctor with whom he / she talks have conversations 1 to 6. Since the processing up to this point overlaps with the processing described with reference to FIG. 25, the description thereof is omitted.
  • in step S2716, when the user's diagnosis is completed, the doctor inputs a medical record or a prescription into the information communication terminal 110 for medical record input.
  • the medical record and the prescription are transmitted from the information communication terminal 110 to the medical record storage cloud 111, which manages the information used by the other party, and in step S2717 the cloud 111 records them. Further, in step S2718, the cloud 111 shares the medical record and prescription information with the cloud 101 that manages the user information.
  • the cloud 101 that manages user information securely stores this.
  • when the cloud 101 that manages the user information determines, from the input of new medical record or prescription information, that the conversation (diagnosis) between the user and the doctor has ended, and determines through the robot 100 that the user is in a state where privacy is secured and the user can relax, the cloud 101 has the robot 100 prompt the user to confirm the doctor's diagnosis result and the medicine prescription according to the currently applied policy (for example, "Because your blood pressure is high, you have to take two medicines after eating") (steps S2719 to S2721). By doing so, it becomes easy for the user to re-recognize and correctly understand the content and result of communication with the other party (the doctor in this example) (step S2722).
  • the cloud 101 that manages user information is like an information bank in which various personal information about the user is recorded and managed. For example, by giving prior confirmation as a user in advance, it can be set so that when the personal information recorded in the medical record storage cloud 111 is added to or corrected, it is immediately shared with the cloud 101 that manages the user information. With this setting (the person's prior permission), information including personal information about the user is collected in the cloud 101 and can be used for various purposes. An example of use, in which the robot 100 supports the taking of medicines, is described later with reference to FIG. 29.
  • FIG. 28 is a sequence diagram showing an example of summarizing only to the user after the communication is completed.
  • the user and the doctor with whom he / she is talking are having conversations 1 to 6, but the content of this conversation cannot be detected by the robot 100. For example, this happens when the user goes to the hospital for diagnosis and the robot 100 remains at the user's home.
  • the processing up to this point is almost the same as the processing described in FIG. 27, so the explanation is omitted.
  • the difference is that the robot 100 cannot detect the communication between the user and the doctor; therefore, neither the robot 100 nor the cloud 101 that manages the user information holds any information about the conversation.
  • in step S2807, the doctor inputs a medical record or prescription into the information communication terminal 110 for medical record input (not shown); in step S2808 it is transmitted to the medical record storage cloud 111, which manages the information used by the other party; and in step S2809 it is shared with the cloud 101 that manages the user information. Until this sharing, no information is input to the robot 100 or to the cloud 101 that manages the user information.
  • the subsequent processing can proceed in the same manner as in FIG. 27.
  • FIG. 29 is an example of a table used when supporting the task execution of the user. As shown in the table, users have multiple routine tasks. These tasks are identified by a number for each task.
  • the task name of task 1 is "garbage removal", the date and time when this task occurs is "every Wednesday at 9 am", the place where this task occurs is "home", and the content of this task is "throw away the trash". Whether or not the user has performed task 1 is determined, as shown in the check column, by the video/audio sensing unit of the robot 100, for example by recognizing whether the user went out with the garbage and returned home without it; an image taken through another imaging device, together with its shooting date and time, may also be used for the judgment.
  • the task name of task 2 is "feeding”
  • the date and time when this task occurs is “every day at 10 am”
  • the place where this task occurs is indefinite
  • the task name of task 3 is "taking medicine"
  • the date and time when this task occurs is “13:00 every day”
  • the place where this task occurs is indefinite
  • the content of this task is "take the medicine".
  • whether or not the user has performed task 3 can be determined by recognizing images taken by the camera of the robot 100, for example whether the user has taken all the medicines, and by using the shooting date and time.
  • at that time, the user may be asked to cooperate. For example, in order to check whether all the medicines have been taken, or whether there are omissions or duplications in the medicines to be taken, the user may show all the medicines on the palm to the robot 100 (video/audio sensing unit) before taking them. Alternatively, the robot 100 may start a conversation asking the user to show all the medicines before taking them.
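  • a sketch of the task table of FIG. 29 as a data structure with a due-time check; the schedule encoding and the check descriptions are illustrative assumptions:

```python
import datetime

tasks = [
    {"id": 1, "name": "garbage removal", "when": ("weekly", "Wed", 9),
     "place": "home", "check": "left with the garbage, returned without it"},
    {"id": 2, "name": "feeding", "when": ("daily", None, 10),
     "place": None, "check": "camera sees the feeding"},
    {"id": 3, "name": "taking medicine", "when": ("daily", None, 13),
     "place": None, "check": "camera sees all medicines taken"},
]

def due(task, now):
    """True when the task's scheduled hour (and weekday, if any) has come."""
    kind, day, hour = task["when"]
    if kind == "weekly" and now.strftime("%a") != day:
        return False
    return now.hour == hour

now = datetime.datetime(2021, 10, 6, 9)   # a Wednesday, 9 am
print([t["name"] for t in tasks if due(t, now)])  # ['garbage removal']
```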
  • FIG. 30 is a sequence diagram showing an example of supporting the user's daily task execution.
  • here, support for the execution of the three tasks illustrated in FIG. 29 will be described, with the setting that the flowchart of FIG. 30 starts on a Wednesday morning.
  • the robot 100 constantly senses the user state (for example, the user's biological information, activity amount, place, posture, etc.) (step S3001). Further, it is assumed that the cloud 101 that manages the user information regularly monitors whether the task performed by the user or the start time of the registered task as shown in FIG. 29 has come (step S3002).
  • in step S3003, it is assumed that the user executes task 1 (garbage removal) before its scheduled time (9 am on Wednesday).
  • in step S3004, the robot 100 detects and confirms whether task 1 has been performed, according to the confirmation items in the check column.
  • in step S3005, the robot 100 notifies the cloud 101 that the execution is completed.
  • in step S3006, the cloud 101 records the execution of task 1.
  • in step S3007, the cloud 101 detects that time has passed and the scheduled time for task 2 (feeding) has come (10 am every day).
  • in step S3008, the cloud 101 instructs the robot 100 to confirm the execution of task 2.
  • in step S3009, the robot 100 asks the user whether task 2 has been executed. As a result, the user becomes aware of task 2 and executes it (step S3010).
  • in step S3011, when the robot 100 detects and confirms that the user has executed task 2 according to the confirmation items in the check column, it notifies the cloud 101 in step S3012 that the execution is completed, and in step S3013 the cloud 101 records the performance of task 2.
  • in step S3014, the cloud 101 detects that time has passed and the scheduled time for task 3 (taking medicine) has come (13:00 every day).
  • in step S3015, the cloud 101 instructs the robot 100 to confirm the execution of task 3.
  • in step S3016, the robot 100 asks the user whether task 3 has been executed. As a result, the user becomes aware of task 3 and executes it (step S3017).
  • in step S3018, the robot 100 cannot detect and confirm the execution of task 3 according to the confirmation items in the check column. For example, this applies when the medicine is taken in a place invisible to the robot 100.
  • in step S3019, if the cloud 101 has not received a completion notification even after a predetermined time has elapsed since it instructed the robot 100 to confirm the execution of task 3, it instructs the robot 100 again in step S3020 to confirm the execution of task 3.
  • in step S3021, the robot 100 asks the user whether the execution of task 3 is completed, and in step S3022 the user replies that it has been.
  • in step S3023, the robot 100 receives this answer and notifies the cloud 101 of it.
  • in step S3024, the cloud 101 records the execution of task 3.
  • in this way, the user's daily tasks are registered in the format shown in FIG. 29, and the process of confirming at the scheduled time whether each task has been executed, as shown in FIG. 30, is carried out using the robot 100 and the cloud 101. This allows the user to perform miscellaneous, easily forgotten tasks without omission.
  • conversely, the user may ask the robot 100 whether a task has been performed and confirm it.
  • alternatively, the robot 100 may detect the user's action and inform the user that the task has already been performed. Since all execution records of these tasks are kept in the cloud 101, it is possible to reply to the user or convey the execution status of a task based on that data.
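  • a minimal sketch of the confirmation-and-retry behavior of FIG. 30 (steps S3019, S3020); the interfaces, grace period, and retry limit are assumptions made for illustration:

```python
import time

def monitor_task(task, robot_confirm, record, grace_s=600, max_retries=3):
    """Ask the robot to confirm execution; retry after a grace period if
    no completion notice arrives, then record the execution in the cloud."""
    for _ in range(max_retries):
        if robot_confirm(task):          # robot checks, or asks the user
            record(task, time.time())    # cloud records the execution
            return True
        time.sleep(grace_s)              # wait before asking again
    return False

attempts = iter([False, True])           # first check fails, second succeeds
monitor_task({"id": 3, "name": "taking medicine"},
             robot_confirm=lambda t: next(attempts),
             record=lambda t, ts: print("recorded:", t["name"]),
             grace_s=0)
```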
  • in the above, the information communication terminal 100 and the cloud 101 that manages user information were described as separate entities communicating via a network, but the stored user information may instead be recorded and managed inside the information communication terminal 100. In that case, data corresponding to the user's personal information, such as conversation contents and communication ability, need not be exchanged via the network, which has the advantage of reducing the cost of information communication and information management as well as the risk of leakage.
  • This disclosure is useful as a technique that enables smooth communication with users.
  • 99: Information communication terminal (smartphone, personal computer, etc.)
  • 100: Information communication terminal (robot, smart speaker, wearable device, etc.)
  • 101: Cloud that manages user information
  • 102: Information source
  • 110: Information communication terminal (medical record input terminal)
  • 111: Cloud that manages information handled by the other party (medical record storage cloud)


Abstract

This information processing method is for an information providing system that communicates with a user, and comprises: acquiring first speech information including a first question from a user together with a device ID; acquiring communication policy information including an auditory level or a visual level corresponding to the user; and specifying an output format corresponding to the auditory level or the visual level and providing, to a speaker or a display device, first answer information indicating a first answer to the first question.
PCT/JP2021/037243 2020-10-20 2021-10-07 Procédé de traitement d'informations WO2022085474A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2020-175954 2020-10-20
JP2020175954A JP2023169448A (ja) 2020-10-20 2020-10-20 生活を支援する方法、装置、システム、及びプログラム
JPPCT/JP2021/017031 2021-04-28
PCT/JP2021/017031 WO2022085223A1 (fr) 2020-10-20 2021-04-28 Procédé, dispositif, système et programme d'aide à la qualité de vie

Publications (1)

Publication Number Publication Date
WO2022085474A1 true WO2022085474A1 (fr) 2022-04-28

Family

ID=81290368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/037243 WO2022085474A1 (fr) 2020-10-20 2021-10-07 Procédé de traitement d'informations

Country Status (1)

Country Link
WO (1) WO2022085474A1 (fr)



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21882615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21882615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP