WO2020202354A1 - Communication robot, control method for same, information processing server, and information processing method - Google Patents


Info

Publication number
WO2020202354A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
communication
robot
state
information
Prior art date
Application number
PCT/JP2019/014263
Other languages
French (fr)
Japanese (ja)
Inventor
直秀 小川
薫 鳥羽
渉 瀬下
山川 浩
Original Assignee
Honda Motor Co., Ltd. (本田技研工業株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honda Motor Co., Ltd. (本田技研工業株式会社)
Priority to PCT/JP2019/014263 (WO2020202354A1)
Priority to JP2021511728A (JP7208361B2)
Publication of WO2020202354A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/56Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities

Definitions

  • the present invention relates to a communication robot, its control method, an information processing server, and an information processing method.
  • Patent Document 1 presupposes communication between the user and the robot itself; it does not consider having the robot intervene with the user so that the user and a remote user can communicate with each other. That is, no technique is known for controlling the timing and manner of intervention by the robot so that the user and the remote user can communicate smoothly.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a technique capable of controlling intervention behavior for enabling communication between a user and a remote user.
  • According to one aspect, there is provided a communication robot comprising: identification means for identifying the state of a first user; receiving means for receiving, from an external device, state information indicating the state of a second user who communicates remotely with the first user; determination means for determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and providing means for providing information to the first user, wherein, when it is determined that the first user and the second user can communicate with each other, the providing means provides the first user with information proposing communication between the first user and the second user.
  • According to the present invention, intervention behavior for enabling communication between a user and a remote user can be controlled.
  • FIG. 1 is a diagram showing an example of the communication system according to Embodiment 1 of the present invention.
  • A block diagram showing a functional configuration example of the communication robot according to the first embodiment.
  • A block diagram showing an example of the software configuration of the communication robot according to the first embodiment.
  • A flowchart showing a series of operations of the communication control process according to the first embodiment.
  • A flowchart showing a series of operations of the history processing according to the first embodiment.
  • A diagram showing an example of the selection screen for selecting the user states in which the communication robot intervenes, according to the first embodiment.
  • A block diagram showing a functional configuration example of the information processing server according to the second embodiment.
  • A block diagram showing an example of the software configuration of the communication robot according to the second embodiment.
  • A block diagram showing an example of the software configuration of the information processing server according to the second embodiment.
  • A flowchart showing a series of operations of the communication control process in the information processing server according to the second embodiment.
  • The present embodiment is not limited to this, and can also be applied to communication between other persons, such as when the target user and the remote user are friends.
  • the place where the robot and the child communicate with each other will be described assuming a predetermined space (for example, a space in the home such as a children's room or a shared space).
  • the place where the remote user stays will be described as a predetermined remote space (for example, a shared space in the home of the grandparents or a space where the remote user stays).
  • FIG. 1 shows an example of the communication system 10 according to the present embodiment.
  • the robot 100 and the target user 110 exist in a predetermined space, and the robot 100 interacts with the target user 110.
  • the robot 100 can communicate with the device 150 (for example, a mobile device) used by the remote user 160 via the network 120. That is, the target user 110 and the remote user 160 can communicate with each other via the robot 100 and the device 150.
  • the device 150 used by the remote user 160 is, for example, a smartphone, but is not limited to this, and may include a personal computer, a television, a tablet terminal, a smart watch, a game machine, and the like.
  • A movable communication robot may exist on the remote user side instead of the device 150, in which case the target user 110 and the remote user 160 may communicate with each other via the two communication robots.
  • FIG. 2 is a block diagram showing a functional configuration example of the robot 100 according to the present embodiment.
  • the control unit 210 included in the robot 100 includes one or more CPUs (Central Processing Units) 211, an HDD (Hard Disk Drive) 212, and a RAM (Random Access Memory) 213.
  • the CPU 211 controls various processes shown below by reading and executing the program stored in the HDD 212.
  • the HDD 212 is a non-volatile storage area, and stores programs corresponding to various processes.
  • a semiconductor memory may be used instead of the HDD.
  • the RAM 213 is a volatile storage area and is used as, for example, a work memory.
  • the control unit 210 may be composed of a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), a dedicated circuit, or the like.
  • the robot 100 includes various parts that serve as an interface for information with the outside. Each part shown below operates based on the control by the control unit 210.
  • the voice input unit 214 is a part for acquiring voice information from the outside, and includes, for example, a microphone. In the present embodiment, the voice input unit 214 acquires the utterance information of the target user in order to make a call, for example.
  • the image pickup unit 215 is a portion for acquiring image information of the user and the surroundings, and includes, for example, a camera. In the present embodiment, for example, an image of a target user is acquired in order to make a video call.
  • the power supply unit 216 is a part that supplies power to the robot 100 and corresponds to a battery.
  • the audio output unit 217 is a portion for outputting audio information to the outside, and includes, for example, a speaker and the like. In the present embodiment, the voice output unit 217 outputs, for example, the utterance of a remote user.
  • the image display unit 218 is a part for outputting image information, and includes a display and the like. In the present embodiment, the image display unit 218 displays an image of a remote user in order to make a moving image call, for example.
  • the operation unit 219 is a part for performing a user operation on the robot.
  • the image display unit 218 and the operation unit 219 may be collectively configured as, for example, a touch panel display that also functions as an input device.
  • the notification unit 220 is a part for notifying various information to the outside, and may be, for example, a light emitting unit such as an LED (Light Emitting Diode) or may be integrated with an audio output unit 217.
  • The drive unit 221 is a portion for moving the robot 100, and may include, for example, an actuator, tires, a motor, an engine, and the like.
  • the sensor unit 222 includes various sensors for acquiring information on the external environment.
  • The sensors may include, for example, a temperature sensor, a humidity sensor, an infrared sensor, a depth sensor, a lidar (LiDAR: Light Detection and Ranging), and the like, and sensors may be provided according to the information to be acquired.
  • the image pickup unit 215 may be included in the sensor unit 222.
  • the communication unit 223 is a part for communicating with an external device (for example, a device 150 used by a remote user or an external server) via the network 120, and the communication method, communication protocol, and the like are not particularly limited. Further, the communication unit 223 may include a GPS (Global Positioning System) for detecting its own position information.
  • each part is shown by one block, but each part may be physically divided and installed at a plurality of places according to the function and structure of the robot 100.
  • the power supply unit 216 or the like may be configured to be detachable from the robot 100, or may be provided with an interface for adding a part for providing a new function.
  • the appearance of the robot 100 is not particularly limited, but may be configured according to, for example, a person (for example, a child) assumed as a communication target. Further, the material and size of the robot are not particularly limited.
  • FIG. 3 is a diagram showing an example of a software configuration of the robot 100 according to the present embodiment.
  • each processing unit is realized by the CPU 211 reading and executing the program stored in the HDD 212 or the like.
  • The software configuration shows an example configuration of the present embodiment, and software such as the firmware, OS, middleware, and frameworks is omitted.
  • the user state identification unit 301 is a part that identifies the user state of the target user based on the information from the sensor unit 222, the image pickup unit 215, and the voice input unit 214, and the information received from the outside.
  • The user state here will be described later with reference to FIG. 4. It should be noted that not all user states need to be identified by the user state identification unit 301; some user states may be set externally via the communication unit 223, or set on the robot 100 side via the operation unit 219.
  • The user status DB 313 stores the user status table. The user status table will be described in detail with reference to FIG. 4; it defines the correspondence between an identified user state and the communication content between the target user and the remote user according to that user state.
  • The user state processing unit 302 specifies the communication content to be performed for the identified user state by using the user state identified by the user state identification unit 301 and the user state table stored in the user status DB 313.
  • the user state identification unit 301 stores the user state of the target user, which is periodically identified, and the identified place and time as the action history of the target user in the action history DB 312.
  • When the user state of the target user is periodically identified, if none of the user states shown in FIG. 4 can be identified, the user state is stored as "unidentified" or the like.
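The periodic identification and history storage described above can be sketched in Python as follows. This is a minimal illustration only; the class and function names are assumptions, not taken from the publication:

```python
class ActionHistoryDB:
    """Minimal stand-in for the action history DB 312: stores each
    periodically identified user state together with place and time."""
    def __init__(self):
        self.records = []

    def store(self, user, state, place, time):
        self.records.append(
            {"user": user, "state": state, "place": place, "time": time})


def identify_and_store(db, user, identified_state, place, time):
    # Per the description, when no known user state can be identified,
    # the state is stored as "unidentified" or the like.
    state = identified_state if identified_state is not None else "unidentified"
    db.store(user, state, place, time)
    return state
```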
  • the map management unit 303 creates a map of the space in which the robot 100 operates and updates it periodically.
  • The map may be created based on information acquired by the sensor unit 222 or the imaging unit 215 included in the robot 100, or based on location information acquired from the outside via the communication unit 223.
  • the map management unit 303 stores the created or updated map in the map information DB311.
  • the map management unit 303 provides map information according to the identified user state.
  • the communication processing unit 304 exchanges information with the outside. For example, the communication processing unit 304 receives state information indicating the state of the remote user, and transmits / receives data of a voice call or a video call for the target user and the remote user to communicate with each other. Alternatively, state information indicating the user state identified for the target user may be transmitted to the external device.
  • the remote user status acquisition unit 305 acquires status information indicating the user status of the remote user via the network 120. Further, the acquisition of the user state of the remote user may be performed periodically. The user state of the remote user acquired periodically may be stored in the action history DB 312 as the action history of the remote user.
  • The communication control unit 306 intervenes with the target user based on the user state of the target user identified by the user state identification unit 301 and the user state of the remote user acquired by the remote user state acquisition unit 305, and controls the start of communication (a voice call or a video call) between the target user and the remote user.
  • the process by the communication control unit 306 will be described later as a communication control process.
  • the action history processing unit 307 identifies the pattern of the user state of the target user based on the periodic user state of the target user stored in the action history DB 312 (that is, functions as a pattern identification means). Further, when the user state of the remote user is stored in the action history DB 312, the pattern of the user state of the remote user may be identified based on the user state of the remote user periodically.
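Under one simple interpretation, the pattern identification performed by the action history processing unit 307 amounts to finding the most frequent user state per place and time slot in the periodic history. A hedged sketch (the function name and record layout are assumptions for illustration):

```python
from collections import Counter


def identify_pattern(records):
    """For each (place, time) slot in the periodic action history,
    return the most frequently observed user state."""
    slots = {}
    for r in records:
        slots.setdefault((r["place"], r["time"]), []).append(r["state"])
    return {slot: Counter(states).most_common(1)[0][0]
            for slot, states in slots.items()}
```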
  • the movement control unit 308 controls the robot to move to a place where communication is performed according to the identified user state or the like.
  • As described above, the robot of the present embodiment intervenes with the target user when the target user is in a predetermined user state, and mediates communication with the remote user so that the target user and the remote user can communicate with each other.
  • An example of the user states that cause the robot to intervene with the target user will now be described.
  • FIG. 4 shows a user state table 400 showing the user state according to the present embodiment and the content of communication associated with each user state.
  • the user state table 400 is composed of a user state 401, an occurrence timing 402, a communication content 403, and a communication location 404.
  • the user state 401 is a name for identifying the user state
  • the occurrence timing 402 indicates a timing for the robot 100 to identify the corresponding user state.
  • the communication content 403 shows an example of the content of the communication performed in the corresponding user state
  • the communication place 404 indicates the place where the corresponding communication is performed.
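The structure of the user state table 400 (user state 401, occurrence timing 402, communication content 403, communication place 404) can be sketched as a lookup table. The entries below paraphrase examples from the description; the data structure and function name are a hypothetical illustration:

```python
# Hypothetical sketch of the user state table 400.
USER_STATE_TABLE = {
    "strong emotions are expressed": {
        "occurrence_timing": "crying or yelling detected from voice information",
        "communication_content": "grandparents talk to the child",
        "communication_place": "current position of the target user",
    },
    "immediately before going out": {
        "occurrence_timing": "predetermined outing actions detected",
        "communication_content": "greeting sent by grandparents",
        "communication_place": "entrance",
    },
}


def lookup_communication(user_state):
    """Return (content, place) for an identified user state, or None
    when the state is not in the table."""
    entry = USER_STATE_TABLE.get(user_state)
    if entry is None:
        return None
    return entry["communication_content"], entry["communication_place"]
```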
  • the user state in which "strong emotions are expressed" is a state in which the target user (child) is crying or yelling.
  • the robot 100 identifies this user state based on, for example, voice information.
  • The communication content associated with this user state is the grandparents talking to the crying or yelling child. If the grandparents, who are remote users, can intervene with the target user via the robot and talk to the target user, it provides an opportunity to calm the target user and have an honest conversation. The robot performs this communication at the current position of the target user (child).
  • The user state of "learning/practice start" is, for example, a state in which a child has started reading aloud, practicing a musical instrument, or practicing dancing.
  • the robot 100 identifies this state when it is input that the learning / practice has been started, for example, through the operation unit 219 of the robot 100 or the terminal of the parents of the child.
  • this state may be identified by the robot 100 recognizing the behavior of the child based on the voice information or the image information.
  • the content of communication for this user state is, for example, that the grandparents confirm the movements and utterances of the child.
  • If the grandparents, who are remote users, can intervene with the target user via the robot to listen to the reading aloud and watch the practice, it is possible to support the child's practice on behalf of, or in addition to, the parents.
  • the robot 100 performs this communication, for example, at the current position of the target user (child).
  • the user state "immediately before going to bed” is a state in which the target user enters the bed after taking a predetermined action before going to bed.
  • The robot 100 identifies this user state.
  • the content of communication for this user state is, for example, an utterance (reading aloud) by grandparents. If grandparents, who are remote users, can intervene in the target user via a robot and read aloud, the chances of reading aloud can be increased and the labor of parents can be reduced.
  • Robot 100 performs this communication in a shared space or a children's room.
  • The user state "immediately before going out" is a state in which the target user takes a predetermined action before going out and then heads to the entrance.
  • When the robot 100 recognizes, based on voice information or image information, that the child has taken a predetermined outing action at a predetermined time, such as brushing teeth and preparing predetermined belongings, it identifies this user state.
  • the content of communication for this user state is, for example, a greeting sent by grandparents. If grandparents, who are remote users, can intervene in the target user via a robot and greet the sending out, it is possible to send out the child on behalf of the parent or in addition to the parent, such as cheering up a word.
  • the robot 100 performs this communication at the entrance.
  • The user state "immediately before returning home" is a state in which the target user, on the way home, has come near the home.
  • The robot 100 receives, at a predetermined frequency via the communication unit 223, the GPS signal of a device carried by the target user to acquire the target user's movement locus, and identifies this user state when the locus comes within a predetermined distance of the home.
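The proximity test described here can be sketched as a distance check on the latest point of the movement locus. The 100 m threshold and the equirectangular approximation below are assumptions for illustration, not values from the publication:

```python
import math


def near_home(track, home, threshold_m=100.0):
    """Return True when the latest (lat, lon) point of the movement
    locus is within threshold_m of the home position, using an
    equirectangular approximation near the home latitude."""
    lat, lon = track[-1]
    hlat, hlon = home
    # ~111,320 m per degree of latitude; longitude scaled by cos(lat).
    dy = (lat - hlat) * 111_320.0
    dx = (lon - hlon) * 111_320.0 * math.cos(math.radians(hlat))
    return math.hypot(dx, dy) <= threshold_m
```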
  • the content of communication for this user state is, for example, a greeting from the grandparents when they return home.
  • The robot 100 moves to the entrance and waits to intervene with the target user. If the grandparents, who are remote users, can intervene via the robot and greet the child, they can welcome the child on behalf of the parents even in the parents' absence, reducing the child's loneliness.
  • the robot 100 performs this communication at the entrance.
  • the user status may include, for example, "watching TV” or "expressing intention to communicate”.
  • The user state of "watching TV" is a state in which the target user is watching TV. This is a relatively relaxed state in which the grandparents, as remote users, may be allowed to join in and have a conversation. Further, when it is input via the operation unit 219 of the robot 100 or the terminal of the child's parents that the user wants to communicate (at a predetermined time or the like), the robot 100 identifies the "expressing intention to communicate" state.
  • the communication content in these states is such that the target user and the remote user have a normal conversation, for example.
  • The intention to communicate can be set, for example, from the operation screen shown in FIG. 7 displayed on the terminal of the child's parents.
  • By pressing the radio button 701, it can be set that communication is possible at the present time, or that communication is possible "from 20:00 to 21:00".
  • The information input at the parents' terminal is received by the communication unit 223 of the robot 100 and then used for determining the user state.
  • the set time information may be transmitted to the device of the remote user.
  • In the above, communicable states are shown as user states; however, the table may also include states in which communication is impossible, such as "eating a meal", "taking a bath", and "sleeping".
  • the user state identified by the robot may differ depending on the attributes of the target user. For example, depending on the age, the robot may be set to identify only the user states of "immediately before going out", “immediately before returning home", and "starting learning / practice".
  • the user state to be identified may be selected, for example, via the operation unit 219 of the robot 100 or the terminal of the parents of the child.
  • The user states to be processed by the robot 100 may be set by adding a check mark 801 to each of the listed user states 802.
  • the robot 100 acquires the user state of the remote user and considers the user state of the remote user.
  • In the present embodiment, the user state of the remote user is simply whether communication is possible now, or possible in a predetermined time zone.
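This availability setting can be modeled as either a boolean (communicable now) or a time window. A minimal sketch, assuming "HH:MM" strings for the window; the function name and representation are assumptions for illustration:

```python
def remote_user_available(setting, now):
    """setting is either a bool (communicable at the present time) or a
    ("HH:MM", "HH:MM") time window such as ("20:00", "21:00")."""
    if isinstance(setting, bool):
        return setting
    start, end = setting
    # Lexicographic comparison is valid for zero-padded "HH:MM" strings.
    return start <= now <= end
```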
  • Such a user state is set by an operation screen similar to the operation screen shown in FIG. 7 on the device 150 used by the remote user.
  • A robot similar to the robot 100 may exist on the remote user side, and the user state of the remote user may be identified by that robot. In this case, for example, when that robot identifies that the remote user is watching TV, the remote user is treated as able to communicate.
  • When the robot 100 according to the present embodiment considers the user state of the target user and the user state of the remote user and determines that communication is possible, it provides the target user with information proposing communication with the remote user.
  • Information proposing communication may be provided by presenting audio or images. These may include the name of the communication partner and information representing the content of the communication. If the user state is "immediately before going out", the information representing the communication content includes, for example, "There is a greeting from your grandmother." If the user state is "strong emotions are expressed", the content differs according to the user state, such as "Your grandmother will talk to you."
  • When the robot 100 intervenes with the target user to propose communication, it may approach the target user, circle around the target user, or give notification of a predetermined image or voice via the notification unit 220 according to the user state.
  • the map management unit 303 of the robot 100 creates a map of the operating space by using the peripheral information acquired by the image pickup unit 215, the sensor unit 222, and the like. For example, the map may be updated by recollecting these peripheral information every time a certain period of time elapses. For example, when the arrangement of furniture is changed, the map information may be updated according to the state after the change. Further, the map management unit 303 may acquire information (map information, location information, etc.) from the outside via the communication unit 223 and perform mapping. Further, the map management unit 303 may be configured to detect an object existing in each area and associate the usage of the area.
  • For example, an area with a bed may be associated with a bedroom, and an area with toys and a desk may be associated with a children's room. If the purpose of an area cannot be specified, the setting may be accepted from the user.
  • FIG. 5 is a flowchart showing a series of operations of the communication control process of the robot 100 according to the present embodiment.
  • this process is realized by the CPU 211 of the robot 100 reading and executing the program stored in the HDD 212.
  • Each processing step is realized, for example, in cooperation with the parts of FIG. 2 and the processing units of FIG. 3, but here, to simplify the explanation, the processing subject is described comprehensively as the robot 100.
  • the robot 100 monitors the status of the target user.
  • The state monitoring is performed while the target user acts within the range that can enter the field of view of the imaging unit 215 and within the range in which the target user's action sounds can be collected.
  • the robot 100 may collect information while moving so that the state monitoring of the target user can be performed more easily.
  • the robot 100 determines whether the user is in a predetermined user state.
  • the predetermined user state identified by the robot 100 is the user state described above with respect to FIG. 4, and the robot 100 identifies each user state by the respective methods described with reference to FIG. If the predetermined user state is identified (YES in S502), the robot 100 proceeds to S503, and if not (NO in S502), returns to S501 and continues the state monitoring.
  • the robot 100 acquires the user state of the remote user.
  • The robot 100 may, for example, transmit state information indicating the user state identified in S502 to the device 150 used by the remote user, or may transmit request information requesting the user state of the remote user to the device 150.
  • the robot 100 acquires the state information indicating the user state of the remote user, which is transmitted by the device 150 of the remote user in response to the above information.
  • the robot 100 determines whether communication between the target user and the remote user is possible based on the user state of the identified target user and the user state of the remote user. For example, the robot 100 determines that the target user and the remote user can communicate with each other when the identified user state of the target user and the user state of the remote user are both communicable user states. In the example of this embodiment, the user state of the target user shown in FIG. 4 is a communicable state. Therefore, if the remote user is set to be able to communicate on the operation screen shown in FIG. 7, it is determined that the target user and the remote user can communicate with each other. If it is determined that communication is possible (YES in S504), the robot 100 proceeds to S505, and if not (NO in S504), the robot 100 ends this process.
  • the robot 100 moves to a place where communication is performed. For example, the robot 100 moves to the place where the target user is when the target user is not nearby. Further, when the communication location of the identified user state is a specific location (that is, it is not the current position of the target user), the communication location is moved to the specific location. For example, when the user state of the identified target user is "immediately before going out” or “immediately before returning home", the robot 100 moves to the "entrance" which is a communication place. That is, the robot 100 can move to a different communication location according to the user state of the identified target user.
  • the robot 100 proposes communication to the target user. Specifically, the robot 100 displays information proposing communication on the image display unit 218 or outputs it as voice from the voice output unit 217 in response to the target user approaching a predetermined distance or less. As described above, specific communication proposals include different communication contents depending on the user status.
  • the robot 100 waits to receive a response to a proposal from the target user. For example, the robot 100 receives either "communicate” or “do not communicate” as a response to the proposal by touching the touch panel or by voice.
  • the robot 100 determines whether the target user has accepted the communication proposal.
  • If the response to the proposal is "communicate", the robot 100 determines that the target user has accepted the proposal (YES in S507) and proceeds to S508.
  • If the response to the proposal is "do not communicate" (or a timeout occurs) (NO in S507), this process ends.
  • the robot 100 transmits a call start request to the remote user's device 150, and waits for a response to the call start request from the device 150.
  • When the remote user's device 150 receives the call start request, it notifies the remote user of the arrival of the request (that is, intervenes with the remote user) using voice information or the like, and waits for the remote user to accept the start of the call.
  • The call start request transmitted by the robot 100 may include the user state of the target user and the communication content. The device 150 of the remote user then presents the target user's state and the communication content, proposes communication, and accepts a selection of "communicate" or "do not communicate" from the remote user. In this case, the remote user can start communication after confirming the user state of the target user.
  • When the robot 100 receives the response to the call start request from the device 150, it determines whether the call start request has been accepted on the device 150 side. The robot 100 proceeds to S510 when the response accepts the call start request (YES in S509), and ends this process when it does not (NO in S509).
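The call start handshake (S508 to S509) can be sketched as a request carrying the target user's state and proposed content, followed by an accept/reject response. The message fields and function names below are assumptions for illustration, not from the publication:

```python
def request_call(send, receive, user_state, content):
    """Sketch of S508-S509: send a call start request that carries the
    target user's state and the proposed communication content, then
    return whether the remote side accepted it."""
    send({
        "type": "call_start_request",
        "user_state": user_state,
        "communication_content": content,
    })
    response = receive()
    return response.get("accepted", False)
```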
  • the robot 100 starts a voice call or a video call between the target user and the remote user, and the communication content according to the user state is executed.
  • The robot 100 conducts the voice call or video call using input from the image pickup unit 215 and the voice input unit 214, and from the device 150 used by the remote user.
  • The target user can set in advance, from the operation unit 219, to change the background image in which the private space is reflected (erasing the background or replacing it with another image).
  • the robot 100 changes the background of the moving image transmitted in the execution of the communication content, and then transmits the changed moving image to the remote user side.
  • The change to the transmitted moving image is not limited to changing the background image; the user region may also be replaced with an avatar.
• the robot 100 may recognize the movement of the target user from the captured image of the target user and reflect it in the movement of the avatar.
• changes are not limited to the transmitted image: the voice spoken by the target user may be converted into another voice, or the ambient sound may be reduced before transmission. In this way, elements that would make the user uncomfortable if the living space of the target user were transmitted as-is can be adjusted appropriately. That is, the transmitted information can be adjusted to make communication between the target user and the remote user smoother and more comfortable.
• the target user may be able to register different background and avatar settings for each location or for each communication content. In this way, the target user can optimize the degree of exposure of his or her information for each space or type of communication of concern.
  • the audio or moving image transmitted from the remote user side may be changed according to the setting by the remote user.
  • the background of a moving image taken by a remote user may be changed, or the sound may be changed.
• the robot 100 may modify the received audio or moving image before presenting it, or the audio or moving image may be changed on the device side used by the remote user and then received and presented by the robot 100.
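The media adjustments described above (background erasure or replacement, avatar substitution, per-location and per-content settings) can be illustrated as a simple settings lookup. The data layout and names below are assumptions for illustration only, not the embodiment's implementation:

```python
# Illustrative sketch of per-location / per-content media settings: the
# target user registers how the transmitted moving image should be changed,
# keyed by (location, communication content). All names are hypothetical.

settings = {
    ("living room", "reading aloud"): {"background": "replace", "avatar": False},
    ("entrance", "greeting"):         {"background": "erase",   "avatar": True},
}

def adjust_media(location, content, frame):
    """Apply the registered changes to one outgoing frame (dict of media)."""
    s = settings.get((location, content), {"background": "keep", "avatar": False})
    out = dict(frame)
    if s["background"] == "erase":
        out["background"] = None                 # background erased
    elif s["background"] == "replace":
        out["background"] = "alternate image"    # replaced with another image
    if s["avatar"]:
        out["user_area"] = "avatar"              # user region replaced by avatar
    return out

frame = {"background": "private space", "user_area": "camera image"}
adjusted = adjust_media("entrance", "greeting", frame)
```

A lookup of this kind is one way to realize different degrees of exposure for each space or communication, as the text suggests; voice conversion could be handled by the same mechanism.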
• the robot 100 determines whether the call has ended. For example, when an end operation is received from the operation unit 219 of the robot 100, the robot 100 determines that the call has ended and ends this process. If not, the call in S510 is continued.
  • FIG. 6 is a flowchart showing history processing in the robot 100 of the present embodiment.
  • this process is realized by the CPU 211 of the robot 100 reading and executing the program stored in the HDD 212.
• Each processing step is realized, for example, in cooperation with the components of FIG. 2 and the processing units of FIG. 3, but here, to simplify the explanation, the processing subject is described comprehensively as the robot 100.
• the robot 100 acquires, from the behavior history DB 312, the behavior history information of the target user to be analyzed over a period of, for example, 2 weeks, 1 month, or 6 months.
• the robot 100 extracts behavior patterns of the target user, for example, by time zone.
• the robot 100 extracts, for example, patterns in which user states such as "immediately before going to bed", "immediately before going out", and "immediately before returning home" are each identified near a specific time and at a specific place.
• the robot 100 stores the specific time and specific place of each extracted behavior pattern in the action history DB 312 as information for predicting the occurrence timing and the communication place of a predetermined user state.
• the robot 100 may refer to the stored information and change the location for monitoring the user state in S501 described above so that the user state can be determined more reliably. That is, it becomes possible to anticipate the occurrence of a user state, move to an appropriate place in advance, and intervene with the user (propose communication) more accurately.
• the robot 100 may simplify the start of communication when a specific time and a specific place have been extracted for a user state such as "immediately before going out" or "immediately before returning home". That is, if the robot 100 moves to the communication place at the expected time and the user states of the target user and the remote user allow communication, the robot 100 may start communication while omitting the confirmation of acceptance of the proposal in S507 and acceptance of the call in S509. In this way, communication such as exchanging a word of greeting can be started easily.
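The pattern extraction described above (finding, for each user state, the typical time and place from periodically recorded history) can be sketched as follows. The record layout and function names are assumptions for illustration; the embodiment only specifies the behavior, not a data format:

```python
# Hedged sketch of behavior-pattern extraction from the action history:
# from periodically recorded (state, hour, place) entries, find the most
# common hour and place for each user state. Layout is hypothetical.
from collections import Counter

history = [
    ("immediately before going out", 8, "entrance"),
    ("immediately before going out", 8, "entrance"),
    ("immediately before going out", 9, "entrance"),
    ("immediately before going to bed", 21, "bedroom"),
]

def extract_patterns(records):
    """Return {state: (typical_hour, typical_place)} for each observed state."""
    patterns = {}
    for state in {r[0] for r in records}:
        hours  = Counter(r[1] for r in records if r[0] == state)
        places = Counter(r[2] for r in records if r[0] == state)
        patterns[state] = (hours.most_common(1)[0][0],
                           places.most_common(1)[0][0])
    return patterns

patterns = extract_patterns(history)
```

The stored (time, place) pair per state is what would let the robot anticipate, for example, "immediately before going out" near 8 o'clock at the entrance and move there in advance.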
  • FIG. 9 shows an example of the communication system 90 according to the present embodiment.
  • the information processing server 900 is added.
  • the robot 100 transmits / receives data to / from the information processing server 900 via the network 120.
  • the device 150 used by the remote user 160 also transmits / receives data to / from the information processing server 900.
  • FIG. 10 is a block diagram showing a functional configuration example of the information processing server 900.
• the control unit 1010 includes one or more CPUs (Central Processing Units) 1011, an HDD (Hard Disk Drive) 1012, and a RAM (Random Access Memory) 1013.
  • the CPU 1011 controls various processes shown below by reading and executing the program stored in the HDD 1012.
  • the HDD 1012 is a non-volatile storage area, and stores programs corresponding to various processes.
  • a semiconductor memory may be used instead of the HDD.
  • the RAM 1013 is a volatile storage area and is used as, for example, a work memory.
  • the control unit 1010 may be composed of a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), a dedicated circuit, or the like. Further, each component of the control unit 1010 may have a virtualized configuration.
  • the power supply unit 1014 is a portion that supplies power to the information processing server 900 from the outside.
  • the communication unit 1015 is a part for communicating with the robot 100 and the device 150 used by the remote user via the network 120, and the communication method, communication protocol, and the like are not particularly limited.
  • FIG. 11 is a diagram showing an example of a software configuration of the robot 100 according to the present embodiment.
  • each processing unit is realized by the CPU 211 reading and executing the program stored in the HDD 212 or the like.
• each DB: database
  • the software configuration shows only the configuration examples necessary for the implementation of this embodiment, and each software configuration such as firmware, OS, middleware, and framework is omitted.
  • the user state identification unit 301 is a part that identifies the user state of the target user based on the information from the sensor unit 222, the image pickup unit 215, and the voice input unit 214, and the information received from the outside.
• the user state is the same as that described above in FIG. It should be noted that not all user states need to be identified by the user state identification unit 301; some user states may be set externally via the communication unit 223, or may be set on the robot 100 side via the operation unit 219.
  • the user status DB 313 stores the user status table.
• the user state DB 313, which is the same as that of Embodiment 1, is synchronized with the user state DB of the information processing server 900 when updated by the robot 100, so that substantially the same data is held.
  • the user state identified by the user state identification unit 301 is transmitted to the information processing server 900 as state information indicating the user state.
  • the user state identification unit 301 stores the user state of the target user, which is periodically identified, and the identified place and time as the action history of the target user in the action history DB 312.
• When the user state of the target user is periodically identified and the user state shown in FIG. 4 cannot be identified, the user state is stored as "unidentified" or the like.
  • the communication processing unit 304 exchanges information with the outside. For example, the communication processing unit 304 transmits / receives data exchanged with the information processing server 900, and transmits / receives data of a voice call or a video call for communication between the target user and the remote user.
• the simple communication control unit 1101 receives information proposing communication from the information processing server 900, intervenes with the target user, and then controls the start of communication (voice call or video call) between the target user and the remote user. The process by the simple communication control unit 1101 will be described later as the simple communication control process.
  • the map management unit 303, the action history processing unit 307, the movement control unit 308, and each DB are substantially the same as those in the first embodiment.
  • Each DB is synchronized with the user state DB 1210, the map information DB 1211, and the action history DB 1212 on the information processing server 900 side, and substantially the same data is held.
  • FIG. 12 is a diagram showing an example of a software configuration of the information processing server 900 according to the present embodiment.
  • each processing unit is realized by the CPU 1011 reading and executing the program stored in the HDD 1012 or the like.
• each DB: database
  • the software configuration shows only the configuration examples necessary for the implementation of this embodiment, and each software configuration such as firmware, OS, middleware, and framework is omitted.
  • the user state acquisition unit 1201 is a part that acquires state information indicating the user state from the robot 100. Even when a part of the user state is set from the outside in the robot 100, the state information indicating the set user state is transmitted to the information processing server 900.
  • the user state DB 1210 stores the user state table.
  • the user state DB 1210 is synchronized with the user state DB of the robot 100.
• the user state processing unit 1202 specifies the communication content to be performed for the user state, using the user state of the target user acquired by the user state acquisition unit 1201 and the user state table stored in the user state DB 1210.
  • the communication processing unit 1204 exchanges information with the robot 100 and the device 150 used by the remote user. For example, the communication processing unit 1204 receives state information indicating the state of the remote user, and transmits / receives data of a voice call or a video call for the target user and the remote user to communicate with each other. Alternatively, state information indicating the user state identified for the target user may be transmitted to the external device.
  • the remote user status acquisition unit 1205 acquires status information indicating the user status of the remote user via the network 120.
• the communication control unit 1206 controls the robot 100 so as to intervene with the target user based on the user state of the target user acquired by the user state acquisition unit 1201 and the user state of the remote user acquired by the remote user state acquisition unit 1205, and then controls the robot 100 so as to start communication (voice call or video call) between the target user and the remote user.
  • the processing by the communication control unit 1206 will be described later as a communication control processing.
• the information processing server 900 determines whether the target user and the remote user can communicate with each other, and then causes the robot 100 to intervene with the target user and propose communication with the remote user. Note that the user state of the target user identified by the robot 100 is the same as in Embodiment 1.
  • the information processing server 900 acquires the user state of the remote user and considers the user state of the remote user.
• the user state of the remote user is simply whether communication is possible, or whether communication is possible in a predetermined time zone.
  • Such a user state is set by an operation screen similar to the operation screen shown in FIG. 7 on the device 150 used by the remote user.
  • a robot similar to the robot 100 may exist on the remote user side, and the user state of the remote user may be identified by the robot.
• Method of proposing communication: when the information processing server 900 according to the present embodiment determines that communication is possible in consideration of the user state of the target user and the user state of the remote user, it transmits to the robot 100 information proposing, to the target user, communication with the remote user. The robot 100 provides the target user with information proposing communication with the remote user based on the information received from the information processing server 900.
  • the information proposing communication is provided as in Embodiment 1.
  • FIG. 13 is a flowchart showing a series of operations of the communication control process in the information processing server 900 according to the present embodiment.
  • this process is realized by the CPU 1011 of the information processing server 900 reading and executing the program stored in the HDD 1012.
• Each processing step is realized, for example, in cooperation with the components of FIG. 10 and the processing units of FIG. 12, but here, to simplify the explanation, the processing subject is described comprehensively as the information processing server 900.
  • the information processing server 900 acquires state information indicating the user state of the target user identified by the robot 100.
  • the information processing server 900 determines whether the acquired user state is a predetermined user state.
  • the predetermined user state is the user state described above with respect to FIG.
  • the information processing server 900 proceeds to S1303 when it determines that the acquired user state is a predetermined user state (YES in S1302), and returns to S1301 when it does not (NO in S1302).
  • the information processing server 900 acquires the user status of the remote user.
• For example, the information processing server 900 may transmit the state information indicating the user state identified in S1302 to the device 150 used by the remote user, or may transmit request information requesting the user state of the remote user to the device 150.
  • the information processing server 900 acquires the state information indicating the user state of the remote user, which is transmitted by the device 150 of the remote user in response to the above information.
  • the information processing server 900 determines whether communication between the target user and the remote user is possible based on the acquired user state of the target user and the user state of the remote user. For example, the information processing server 900 determines that the target user and the remote user can communicate with each other when the user state of the target user and the user state of the remote user are both communicable user states. In the example of this embodiment, the user state of the target user shown in FIG. 4 is a communicable state. Therefore, if the remote user is set to be able to communicate on the operation screen shown in FIG. 7, it is determined that the target user and the remote user can communicate with each other. The information processing server 900 proceeds to S1305 when it is determined that communication is possible (YES in S1304), and ends this process when it is not (NO in S1304).
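The determination in S1304 (both users must be in communicable states, where the remote user's availability may be limited to a predetermined time zone, as noted earlier) can be sketched as a simple decision function. The state names and parameters are illustrative assumptions, not taken from the embodiment:

```python
# Illustrative sketch of the communicability determination in S1304.
# PREDETERMINED_STATES stands in for the user states of FIG. 4; the
# window parameter models "communicable only in a set time zone".
from datetime import time

PREDETERMINED_STATES = {"immediately before going out",
                        "immediately before returning home",
                        "crying"}

def can_communicate(target_state, remote_available, window=None, now=None):
    """Return True when both the target and remote user states allow a call."""
    if target_state not in PREDETERMINED_STATES:
        return False                   # target user not in a communicable state
    if not remote_available:
        return False                   # remote user set to "not communicable"
    if window is not None:             # remote user communicable only in a zone
        start, end = window
        return start <= now <= end
    return True
```

On a True result the server would proceed to S1305 (send the proposal to the robot); on False the process ends, matching the NO branch of S1304.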
  • the information processing server 900 transmits the proposal information for the robot 100 to propose communication to the target user to the robot 100.
  • the proposal information includes communication contents that differ depending on the user status.
• the information processing server 900 determines whether the target user has accepted the communication proposal. For example, the information processing server 900 receives response information indicating the response by the target user from the robot 100. When the response information is "communicate", it determines that the target user has accepted the proposal (YES in S1306) and proceeds to S1307. On the other hand, when the response to the proposal is "do not communicate" (or a timeout occurs) (NO in S1306), this process ends.
  • the information processing server 900 transmits a call start request to the remote user's device 150, and waits for a response to the call start request from the device 150.
• When the remote user's device 150 receives the call start request, it notifies the remote user of the arrival of the request (that is, it intervenes with the remote user) using voice information or the like, and waits for the remote user to accept the call start.
• The call start request transmitted by the information processing server 900 may include the user state of the target user and the communication content. In that case, the device 150 of the remote user presents the target user's state and the communication content, proposes the communication, and then accepts a selection of "communicate" or "do not communicate" from the remote user, so that the remote user can start communication after confirming the user state of the target user.
• the information processing server 900 determines whether the call start request was accepted on the device 150 side. If the response accepts the call start request (YES in S1308), the information processing server 900 proceeds to S1309; if not (NO in S1308), it ends this process.
  • the information processing server 900 starts a voice call or a video call between the target user and the remote user, and the communication content according to the user state is executed.
• the voice call or video call controlled by the information processing server 900 uses input from the image pickup unit 215 and the voice input unit 214 of the robot 100, and from the device 150 used by the remote user.
  • the information processing server 900 can change the background image (erasing the background or replacing it with another image) in which the private space of the target user is reflected. For example, the information processing server 900 acquires the setting information preset by the operation unit 219 of the robot 100 from the robot 100 and changes the background image. When the information processing server 900 has acquired the setting information for changing the background image, the information processing server 900 changes the background of the moving image transmitted from the robot 100 and then transmits the changed moving image to the remote user side.
• changes to the transmitted moving image are not limited to the background image; the user area may also be replaced with an avatar.
• the information processing server 900 may recognize the movement of the target user from the captured image of the target user and reflect it in the movement of the avatar.
• changes are not limited to the transmitted image: the voice spoken by the target user may be converted into another voice, or the ambient sound may be reduced before transmission. In this way, elements that would make the user uncomfortable if the living space of the target user were transmitted as-is can be adjusted appropriately. That is, the transmitted information can be adjusted to make communication between the target user and the remote user smoother and more comfortable.
• the target user may be able to register different background and avatar settings for each location or for each communication content. In this way, the target user can optimize the degree of exposure of his or her information for each space or type of communication of concern.
  • the audio or moving image transmitted from the remote user side may be changed according to the setting by the remote user.
  • the background of a moving image taken by a remote user may be changed, or the sound may be changed.
  • the information processing server 900 may receive the changed voice or moving image on the device side used by the remote user and transmit it to the robot 100.
  • the information processing server 900 determines whether the call is terminated. For example, when a termination request is received from the robot 100, it is determined that the call is terminated and the present process is terminated. If not, the call in S1309 is continued.
  • FIG. 14 is a flowchart showing a series of operations of the simple communication control process in the robot 100 according to the present embodiment.
  • this process is realized by the CPU 211 of the robot 100 reading and executing the program stored in the HDD 212.
• Each processing step is realized, for example, in cooperation with the components of FIG. 2 and the processing units of FIG. 3, but here, to simplify the explanation, the processing subject is described comprehensively as the robot 100.
  • the robot 100 monitors the status of the target user.
  • the robot 100 transmits state information indicating the user state to the information processing server 900.
• the robot 100 receives the proposal information from the information processing server 900; that is, the information processing server 900 has determined that the target user and the remote user can communicate with each other.
• the robot 100 moves to the place where communication is performed. For example, the robot 100 moves to the place where the target user is when the target user is not nearby. Further, when the communication place for the identified user state is a specific place (that is, not the current position of the target user), the robot 100 moves to that specific place. For example, when the user state included in the proposal information is "immediately before going out" or "immediately before returning home", the robot 100 moves to the "entrance", which is the communication place. That is, the robot 100 can move to a different communication place according to the identified user state of the target user.
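The choice of destination described above can be sketched as a small lookup: a predetermined place for certain user states (the "entrance" examples are from the text), falling back to the target user's current position. The lookup itself is an illustrative assumption:

```python
# Minimal sketch of choosing the communication place in the movement step.
# The state-to-place examples come from the text; the function and data
# structure are hypothetical.

PLACE_FOR_STATE = {
    "immediately before going out": "entrance",
    "immediately before returning home": "entrance",
}

def communication_place(user_state, target_user_position):
    # A specific place is used when one is predetermined for the state;
    # otherwise the robot moves to where the target user currently is.
    return PLACE_FOR_STATE.get(user_state, target_user_position)
```

For example, for the state "immediately before going out" the robot would head to the entrance even if the target user is still in the living room.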
  • the robot 100 proposes communication to the target user based on the proposal information from the information processing server 900. Specifically, the robot 100 displays information proposing communication on the image display unit 218 or outputs it as voice from the voice output unit 217 in response to the target user approaching a predetermined distance or less. As described above, specific communication proposals include different communication contents depending on the user status.
• the robot 100 waits to receive a response to the proposal from the target user. The robot 100 receives either "communicate" or "do not communicate" as the response to the proposal, by a touch on the touch panel or by voice.
  • the robot 100 transmits the response of the target user to the communication proposal to the information processing server 900.
  • the robot 100 receives a call start request from the information processing server 900.
  • the robot 100 starts a voice call or a video call between the target user and the remote user, and the communication content according to the user state is executed.
  • the robot 100 determines whether the call is terminated. For example, when the end operation is received from the operation unit 219 of the robot 100, it is determined that the call is ended, and the call end request is transmitted to the information processing server 900 to end this process. If not, the call in S1408 is continued.
• The communication robot (for example, 100) of the above embodiment has: identification means (for example, 301, S502) for identifying the state of the first user; receiving means (for example, 304, S503) for receiving, from an external device, state information indicating the state of a second user who communicates with the first user remotely; determination means (for example, 306) for determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and providing means (for example, 217, 218, 306, S506) for providing information to the first user. When it is determined that the first user and the second user can communicate with each other, the providing means provides the first user with information proposing communication between the first user and the second user.
• The communication robot of the above embodiment further has: receiving means (for example, 214, 219, S507) for receiving a response from the first user to the information provided by the providing means; and call control means (for example, 304, S510) for controlling a voice call or a video call between the first user and the second user.
• When the response from the first user indicates that communication between the first user and the second user is to be started, the call control means starts a voice call or a video call via the device used by the second user and the communication robot (for example, S510).
  • the intervention of the communication robot allows the target user and the remote user to start call-based communication as desired.
• The call control means transmits a communication start request to the device used by the second user (for example, S508). If the start request is not accepted by the device used by the second user, the voice call or video call via the device used by the second user and the communication robot is not started (for example, S509).
  • communication can be started after finally confirming the convenience of the remote user.
  • the information provided by the providing means further includes the content of communication performed between the first user and the second user.
  • the providing means makes the communication content different depending on the identified state of the first user (for example, S506).
• With this configuration, the user can easily grasp what kind of communication the robot intervened to propose.
  • the content of communication between the first user and the second user can be set in advance via an operating means (FIG. 8).
  • the content of communication with the remote user can be set according to the target user.
• The content of the communication is communication in which the second user welcomes the first user home or sends the first user off using a voice call or a video call (FIG. 4).
• With this configuration, grandparents who are remote users can intervene with the target user via the robot and, even in the parent's absence, talk to the child so as to welcome the child home or send the child off on behalf of the parent. This can reduce the loneliness of children.
  • the communication content is a communication in which the second user speaks to the crying first user using a voice call or a video call (FIG. 4).
• With this configuration, grandparents who are remote users can intervene with the target user via the robot and talk to the target user, creating an opportunity to calm the target user's feelings or to let the target user talk openly.
  • the communication content is a communication in which the second user confirms a movement or utterance performed by the first user using a voice call or a video call (FIG. 4).
• With this configuration, grandparents who are remote users can intervene with the target user via the robot to listen to reading aloud and watch practice, and can thus support the child's practice in place of, or together with, the parent.
  • the communication content is a communication in which the first user listens to an utterance by the second user using a voice call (FIG. 4).
• With this configuration, grandparents who are remote users can intervene with the target user via the robot to read aloud, increasing the child's opportunities to be read to and reducing the burden on parents.
• The communication robot of the above embodiment further has movement control means (for example, 308) for moving the communication robot to a place where the first user communicates via the communication robot.
  • the movement control means moves the communication robot to a place where communication is performed predetermined according to the state of the first user.
• With this configuration, the communication robot can autonomously move to a place where it intervenes with the target user and proposes communication.
• The communication robot of the above embodiment further has: storage means for storing periodically collected state information indicating the state of the first user; and pattern identification means (for example, 307) for identifying a pattern of the state of the first user based on the state information stored in the storage means.
• The movement control means moves the communication robot to the place corresponding to the state of the first user that is expected from the pattern of the state of the first user.
• With this configuration, based on a regularly occurring pattern of the user state, the communication robot can autonomously move to a place where it intervenes with the target user and proposes communication.
  • the first user is a child.
  • the second user is the grandparent of the first user.
• In the communication robot of the above embodiment, the device of the second user is a robot capable of identifying the state of the second user.
  • the user state of the remote user can also be identified by the robot.
• The control method of the communication robot (for example, 100) in the above embodiment has: an identification step (for example, S502) of identifying the state of the first user; a receiving step (for example, S503) of receiving, from an external device, state information indicating the state of a second user who communicates with the first user remotely; a determination step (for example, S504) of determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and a providing step (for example, S506) of providing information to the first user. In the providing step, when it is determined that communication between the first user and the second user is possible, information proposing communication between the first user and the second user is provided to the first user.
• The information processing server (for example, 900) in the above embodiment has: first acquisition means (for example, 1201) for acquiring information on the state of the first user from the communication robot; second acquisition means (for example, 1205) for acquiring information on the state of the second user from a device used by the second user who communicates with the first user remotely; determination means (for example, 1206) for determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and transmission means (for example, 1204) for transmitting, to the communication robot, information to be provided from the communication robot to the first user. When it is determined that the first user and the second user can communicate with each other, the transmission means transmits information proposing communication between the first user and the second user to the communication robot.
• With this configuration, an information processing server capable of controlling intervention behavior for communication between a user and a remote user can be provided.
• The information processing method executed by the information processing server in the above embodiment has: a first acquisition step (for example, S1301) of acquiring information on the state of the first user from the communication robot; a second acquisition step (for example, S1303) of acquiring information on the state of the second user from a device used by the second user who communicates with the first user remotely; a determination step (for example, S1304) of determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and a transmission step (for example, S1305) of transmitting, to the communication robot, information to be provided from the communication robot to the first user. In the transmission step, when it is determined that communication between the first user and the second user is possible, information proposing communication between the first user and the second user is transmitted to the communication robot.

Abstract

This communication robot has: an identification means for identifying the state of a first user; a receiving means for receiving, from an external device, state information that indicates the state of a second user communicating with the first user from a distance; a determination means for determining, on the basis of the state of the first user and the state of the second user, whether communication between the first and second users is possible; and a providing means for providing information to the first user. When it is determined that communication between the first and second users is possible, the providing means provides the first user with information that proposes communication between the first and second users.

Description

Communication robot and its control method, information processing server and information processing method
The present invention relates to a communication robot, a control method therefor, an information processing server, and an information processing method.
Communication robots capable of communicating with users have conventionally been known. Some of these communication robots control the timing and manner of intervening with a user (drawing the user into an interaction) according to the state of interaction with a user who is gazing at or talking to the robot (Patent Document 1).
Japanese Unexamined Patent Publication No. 2013-237124
However, the robot disclosed in Patent Document 1 presupposes communication between the user and the robot, and no consideration is given to the robot intervening with the user so that the user and a user at a remote location (a remote user) can communicate with each other. That is, no technique is known for controlling the timing and manner of intervention by a robot so that a user and a remote user can communicate smoothly.
The present invention has been made in view of the above problem, and an object thereof is to provide a technique capable of controlling intervention behavior for establishing communication between a user and a remote user.
According to the present invention,
there is provided a communication robot comprising:
an identification means for identifying the state of a first user;
a receiving means for receiving, from an external device, state information indicating the state of a second user who communicates with the first user remotely;
a determination means for determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and
a providing means for providing information to the first user,
wherein, when it is determined that communication between the first user and the second user is possible, the providing means provides the first user with information proposing communication between the first user and the second user.
According to the present invention, it becomes possible to control intervention behavior for establishing communication between a user and a remote user.
The accompanying drawings are included in and constitute a part of the specification, illustrate an embodiment of the present invention, and, together with the description, serve to explain the principles of the present invention.
FIG. 1 is a diagram showing an example of a communication system according to Embodiment 1 of the present invention.
FIG. 2 is a block diagram showing a functional configuration example of the communication robot according to Embodiment 1.
FIG. 3 is a block diagram showing an example of the software configuration of the communication robot according to Embodiment 1.
FIG. 4 is a diagram showing an example of a user state table according to Embodiment 1.
FIG. 5 is a flowchart showing a series of operations of communication control processing according to Embodiment 1.
FIG. 6 is a flowchart showing a series of operations of history processing according to Embodiment 1.
FIG. 7 is a diagram showing an example of a setting screen for user states according to Embodiment 1.
FIG. 8 is a diagram showing an example of a selection screen for selecting the user states in which the communication robot according to Embodiment 1 intervenes.
FIG. 9 is a diagram showing an example of a communication system according to Embodiment 2.
FIG. 10 is a block diagram showing a functional configuration example of the information processing server according to Embodiment 2.
FIG. 11 is a block diagram showing an example of the software configuration of the communication robot according to Embodiment 2.
FIG. 12 is a block diagram showing an example of the software configuration of the information processing server according to Embodiment 2.
FIG. 13 is a flowchart showing a series of operations of communication control processing in the information processing server according to Embodiment 2.
FIG. 14 is a flowchart showing a series of operations of communication control processing in the communication robot according to Embodiment 2.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. The following embodiments do not limit the invention according to the claims, and not all combinations of the features described in the embodiments are essential to the invention. Two or more of the features described in the embodiments may be combined arbitrarily. The same or similar configurations are given the same reference numbers, and duplicate descriptions are omitted.
(Embodiment 1)
[System overview]
In the present embodiment, a mode will be described in which a movable communication robot for interpersonal use (hereinafter simply "robot") intervenes with a user according to the user's state, so that the user and a user at a remote location can communicate with each other. Here, a case where the user with whom the robot interacts (also simply called the target user) is a child being raised will be described as an example. Likewise, a case where the user at the remote location (also simply called the remote user) is a grandparent living in a house or place away from the target user will be described as an example. However, the present embodiment is not limited to this, and is also applicable to communication between other persons, for example when the target user and the remote user are friends. In the present embodiment, the place where the robot and the child communicate is assumed to be a predetermined space (for example, a space in the home such as a children's room or a shared space), and the place where the remote user stays is described as a predetermined remote space (for example, a shared space in the grandparents' home or the space where they are staying).
FIG. 1 shows an example of the communication system 10 according to the present embodiment. In this system, the robot 100 and the target user 110 exist in a predetermined space, and the robot 100 interacts with the target user 110. The robot 100 can communicate via the network 120 with a device 150 (for example, a mobile device) used by a remote user 160. That is, the target user 110 and the remote user 160 can communicate with each other via the robot 100 and the device 150. In the present embodiment, the device 150 used by the remote user 160 is, for example, a smartphone, but is not limited to this and may be a personal computer, a television, a tablet terminal, a smartwatch, a game console, or the like. Alternatively, a movable communication robot may exist instead of the device 150, and the target user 110 and the remote user 160 may communicate with each other via the two communication robots.
[Example of robot functional configuration]
FIG. 2 is a block diagram showing a functional configuration example of the robot 100 according to the present embodiment. The control unit 210 included in the robot 100 includes one or more CPUs (Central Processing Units) 211, an HDD (Hard Disk Drive) 212, and a RAM (Random Access Memory) 213. The CPU 211 controls the various processes described below by reading and executing programs stored in the HDD 212. The HDD 212 is a non-volatile storage area and stores programs corresponding to the various processes. A semiconductor memory may be used instead of the HDD. The RAM 213 is a volatile storage area and is used, for example, as a work memory. The control unit 210 may also be composed of a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), a dedicated circuit, or the like.
Further, the robot 100 according to the present embodiment includes various parts that serve as interfaces for information with the outside. Each part described below operates under the control of the control unit 210. The voice input unit 214 is a part that acquires voice information from the outside and includes, for example, a microphone. In the present embodiment, the voice input unit 214 acquires, for example, the target user's speech in order to carry a call. The imaging unit 215 is a part that acquires image information of the user and the surroundings and includes, for example, a camera. In the present embodiment, it acquires, for example, captured images of the target user in order to carry a video call.
The power supply unit 216 is a part that supplies power to the robot 100 and corresponds to a battery. The audio output unit 217 is a part for outputting audio information to the outside and includes, for example, a speaker. In the present embodiment, the audio output unit 217 outputs, for example, the remote user's speech. The image display unit 218 is a part for outputting image information and includes a display or the like. In the present embodiment, the image display unit 218 displays, for example, an image of the remote user for a video call. The operation unit 219 is a part for performing user operations on the robot. The image display unit 218 and the operation unit 219 may be configured together as, for example, a touch-panel display that also functions as an input device.
The notification unit 220 is a part for notifying the outside of various information; it may be, for example, a light-emitting part such as an LED (Light Emitting Diode), or it may be integrated with the audio output unit 217. The drive unit 121 is a part that moves the robot 100 and may include, for example, actuators, tires, motors, and engines. The sensor unit 222 includes various sensors for acquiring information on the external environment. The sensors may include, for example, a temperature sensor, a humidity sensor, an infrared sensor, a depth sensor, and a LiDAR (Light Detection and Ranging) sensor, and sensors may be provided according to the information to be acquired. The imaging unit 215 may also be included in the sensor unit 222. The communication unit 223 is a part for communicating with external devices (for example, the device 150 used by the remote user, or an external server) via the network 120; the communication method, communication protocol, and so on are not particularly limited. The communication unit 223 may also include a GPS (Global Positioning System) receiver for detecting the robot's own position information.
In the configuration example of FIG. 2, each part is shown as a single block, but each part may be physically divided and installed at multiple locations according to the function and structure of the robot 100. Further, for example, the power supply unit 216 may be configured to be detachable from the robot 100, and an interface may be provided for adding parts that provide new functions.
The appearance of the robot 100 is not particularly limited, but it may be configured, for example, according to the person (for example, a child) assumed as the communication target. The material, size, and so on of the robot are also not particularly limited.
[Robot software configuration]
FIG. 3 is a diagram showing an example of the software configuration of the robot 100 according to the present embodiment. In the present embodiment, each processing unit is realized by the CPU 211 reading and executing a program stored in the HDD 212 or the like. Each DB (database) is configured in the HDD 212. Note that this software configuration is an exemplary configuration of the present embodiment, and software components such as firmware, the OS, middleware, and frameworks are omitted.
The user state identification unit 301 is a part that identifies the user state of the target user based on information from the sensor unit 222, the imaging unit 215, and the voice input unit 214, and on information received from the outside. The user state here will be described later with reference to FIG. 4. Note that not all user states need to be identified by the user state identification unit 301; some user states may be set externally via the communication unit 223, or set on the robot 100 side via the operation unit 219. The user state DB 313 stores a user state table. The user state table, described in detail with reference to FIG. 4, defines the correspondence between identified user states and the communication content carried out between the target user and the remote user according to the user state. The user state processing unit 302 specifies the communication content to be performed for an identified user state, using the user state identified by the user state identification unit 301 and the user state table stored in the user state DB 310.
User state identification may be performed periodically. The user state identification unit 301 stores the periodically identified user state of the target user, together with the identified place and time, in the action history DB 312 as the target user's action history. When periodically identifying the target user's user state, if none of the user states shown in FIG. 4 can be identified, the user state is stored as "unidentified" or the like.
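For illustration only, one tick of this periodic identification-and-storage loop can be sketched as follows. The state names, the toy classifier, and the `ActionHistoryDB` class are hypothetical stand-ins for the user state identification unit 301 and the action history DB 312, not taken from the specification:

```python
import time
from dataclasses import dataclass

@dataclass
class HistoryEntry:
    state: str       # identified user state, or "unidentified"
    place: str
    timestamp: float

class ActionHistoryDB:
    """Hypothetical stand-in for the action history DB 312."""
    def __init__(self):
        self.entries = []

    def store(self, entry: HistoryEntry) -> None:
        self.entries.append(entry)

def identify_user_state(observation: dict) -> str:
    """Toy classifier; the real robot fuses camera, microphone, and sensor input."""
    if observation.get("crying") or observation.get("shouting"):
        return "strong_emotion"
    if observation.get("watching_tv"):
        return "watching_tv"
    return "unidentified"   # none of the states in FIG. 4 matched

def record_once(db: ActionHistoryDB, observation: dict) -> str:
    """One tick of the periodic loop: identify, then store state, place, and time."""
    state = identify_user_state(observation)
    db.store(HistoryEntry(state, observation.get("place", "unknown"), time.time()))
    return state
```

In a real deployment this would be driven on a timer; here a single call records one observation into the history.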
The map management unit 303 creates a map of the space in which the robot 100 operates and updates it periodically. The map may be created based on information acquired by the sensor unit 222 or the imaging unit 215 included in the robot 100, or based on location information acquired from the outside via the communication unit 223. The map management unit 303 stores the created or updated map in the map information DB 311. The map management unit 303 also provides map information according to the identified user state.
The communication processing unit 304 exchanges information with the outside. For example, the communication processing unit 304 receives state information indicating the state of the remote user, and transmits and receives voice-call or video-call data for communication between the target user and the remote user. It may also transmit state information indicating the user state identified for the target user to an external device.
The remote user state acquisition unit 305 acquires state information indicating the user state of the remote user via the network 120. The acquisition of the remote user's user state may also be performed periodically. The periodically acquired user states of the remote user may be stored in the action history DB 312 as the remote user's action history.
The communication control unit 306 intervenes with the target user based on the user state of the target user identified by the user state identification unit 301 and the user state of the remote user acquired by the remote user state acquisition unit 305, and controls so as to start communication (a voice call or a video call) between the target user and the remote user. The processing by the communication control unit 306 will be described later as communication control processing.
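A minimal sketch of the two-sided decision made by the communication control unit 306 follows. The state labels and the `"available"` convention are illustrative assumptions; the actual determination is defined by the user state table and the communication control flowcharts:

```python
# Target-user states in which intervention is meaningful (cf. FIG. 4);
# the names are illustrative, not from the specification.
COMMUNICABLE_TARGET_STATES = {
    "strong_emotion", "learning_start", "before_bed",
    "before_going_out", "before_coming_home", "watching_tv",
}

def can_communicate(target_state: str, remote_state: str) -> bool:
    """Both sides must be ready: the target user is in a state that admits
    intervention, and the remote user has signalled availability."""
    return target_state in COMMUNICABLE_TARGET_STATES and remote_state == "available"

def communication_control(target_state: str, remote_state: str) -> str:
    """Returns the robot's next action as a label."""
    if can_communicate(target_state, remote_state):
        return "propose_communication"   # intervene and suggest a call
    return "no_action"
```

The point of the sketch is that neither state alone triggers intervention; the proposal is made only when both conditions hold.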
The action history processing unit 307 identifies patterns in the target user's user states based on the periodic user states of the target user stored in the action history DB 312 (that is, it functions as a pattern identification means). When the remote user's user states are stored in the action history DB 312, it may also identify patterns in the remote user's user states based on the remote user's periodic user states. The movement control unit 308 controls the robot so as to move to the place where communication is to take place, according to the identified user state and the like.
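The specification does not fix how the action history processing unit 307 extracts patterns; as one plausible sketch, the most frequently observed state per hour of day can be computed from the stored history:

```python
from collections import Counter, defaultdict

def identify_state_pattern(history):
    """history: iterable of (hour_of_day, state) pairs drawn from the action
    history DB. Returns, per hour, the most frequently observed state —
    one simple notion of a daily behaviour pattern."""
    per_hour = defaultdict(Counter)
    for hour, state in history:
        per_hour[hour][state] += 1
    return {hour: counts.most_common(1)[0][0] for hour, counts in per_hour.items()}
```

Such a pattern could, for example, let the robot anticipate the "just before bedtime" state at the hour where it usually occurs.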
[User states]
As described above, the robot of the present embodiment intervenes with the target user when the target user is in a predetermined user state, and proposes communication with the remote user, so that the target user and the remote user can communicate with each other. An example of the user states that the robot considers in order to intervene with the target user will now be described.
FIG. 4 shows a user state table 400 representing the user states according to the present embodiment and the communication content associated with each user state. The user state table 400 is composed of a user state 401, an occurrence timing 402, a communication content 403, and a communication place 404. The user state 401 is a name identifying the user state, and the occurrence timing 402 indicates the timing at which the robot 100 identifies the corresponding user state. The communication content 403 shows an example of the content of the communication performed in the corresponding user state, and the communication place 404 indicates the place where the corresponding communication is performed.
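A table of this shape is straightforward to encode; the following dictionary is an illustrative paraphrase of a few rows of the user state table 400, with the keys and wording chosen for this sketch rather than quoted from FIG. 4:

```python
# Illustrative encoding of the user state table 400 (state 401 -> timing 402,
# communication content 403, communication place 404).
USER_STATE_TABLE = {
    "strong_emotion": {
        "trigger": "crying or shouting detected from audio",
        "communication": "grandparent talks to the child",
        "place": "child's current position",
    },
    "learning_start": {
        "trigger": "reading aloud / instrument / dance practice begins",
        "communication": "grandparent watches and listens",
        "place": "child's current position",
    },
    "before_bed": {
        "trigger": "bedtime routine finished, child in bed",
        "communication": "grandparent reads a story aloud",
        "place": "shared space or children's room",
    },
    "before_coming_home": {
        "trigger": "GPS track approaches the home",
        "communication": "welcome-home greeting",
        "place": "entrance",
    },
}

def communication_for(state: str):
    """Look up the communication content associated with an identified state,
    as the user state processing unit 302 is described as doing."""
    entry = USER_STATE_TABLE.get(state)
    return entry["communication"] if entry else None
```

An unidentified state simply yields no proposal, matching the table-driven lookup described for the user state processing unit 302.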
The "strong emotion" user state is a state in which the target user (child) is crying or shouting. The robot 100 identifies this user state based on, for example, voice information. The communication content associated with this user state is a grandparent talking to the crying or shouting child. If the grandparent, as the remote user, can intervene with the target user via the robot and talk to the target user, this gives the target user an opportunity to calm down or to talk openly. The robot performs this communication at the current position of the target user (child).
The "learning/practice start" user state is, for example, a state in which the child has started reading aloud, practicing a musical instrument, or practicing dancing. The robot 100 identifies this state when, for example, the start of learning or practice is input via the operation unit 219 of the robot 100 or via a terminal of the child's parents. When the robot 100 is near the child, it may identify this state by recognizing the child's behavior based on voice information or image information. The communication content for this user state is, for example, the grandparent observing the child's movements and speech. If the grandparent, as the remote user, can intervene with the target user via the robot, listen to the reading aloud, or watch over the practice, the child's practice can be supported on behalf of, or in addition to, the parents. The robot 100 performs this communication, for example, at the current position of the target user (child).
The "just before bedtime" user state is a state in which the target user has gotten into bed after taking predetermined pre-bedtime actions. When the robot 100 identifies, based on voice information, image information, and the like, that the child has taken predetermined pre-bedtime actions, such as brushing teeth and using the toilet in a predetermined time slot, it identifies this user state. The communication content for this user state is, for example, speech by the grandparent (reading a story aloud). If the grandparent, as the remote user, can intervene with the target user via the robot and read aloud, the opportunities for read-alouds can be increased and the parents' burden reduced. The robot 100 performs this communication in the shared space or the children's room.
The "just before going out" user state is a state in which the target user has taken predetermined pre-outing actions and is then heading to the entrance. When the robot 100 identifies, based on voice information, image information, and the like, that the child has taken predetermined pre-outing actions, such as brushing teeth and preparing predetermined belongings in a predetermined time slot, it identifies this user state. The communication content for this user state is, for example, a send-off greeting by the grandparent. If the grandparent, as the remote user, can intervene with the target user via the robot and give a send-off greeting, the child can be sent off with a word of encouragement on behalf of, or in addition to, the parents. The robot 100 performs this communication at the entrance.
The "just before coming home" user state is a state in which the target user has returned to the vicinity of the home. The robot 100 receives, via the communication unit 223, GPS signals from a device carried by the target user at a predetermined frequency to acquire the target user's movement track, and identifies this user state when the track comes within a predetermined distance of the home. The communication content for this user state is, for example, a welcome-home greeting by the grandparent. At this time, the robot 100 moves to the entrance and waits in order to intervene with the target user. If the grandparent, as the remote user, can intervene with the target user via the robot and give a welcoming greeting, the child can be welcomed home in place of an absent parent, reducing the child's loneliness. The robot 100 performs this communication at the entrance.
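The proximity test behind "just before coming home" can be sketched as a distance threshold on the latest GPS fix. The haversine formula and the 100 m threshold are illustrative choices; the specification only says "within a predetermined distance of the home":

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_just_before_coming_home(track, home, threshold_m=100.0):
    """track: GPS fixes as (lat, lon) pairs, most recent last. The state is
    identified once the latest fix comes within threshold_m of home."""
    if not track:
        return False
    lat, lon = track[-1]
    return haversine_m(lat, lon, home[0], home[1]) <= threshold_m
```

A real implementation would also debounce the signal (e.g. require several consecutive fixes inside the threshold) so that a passing bus ride does not trigger the state.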
In addition, the user states may include, for example, "watching TV" and "expressing a wish to communicate". The "watching TV" user state is a state in which the target user is watching television. This is a relatively relaxed state, and it may be possible for the grandparent, as the remote user, to join in and have a conversation. The robot 100 identifies the "expressing a wish to communicate" state when, for example, a wish to communicate (at a predetermined time or the like) is input via the operation unit 219 of the robot 100 or via a terminal of the child's parents. The communication content in these states is, for example, an ordinary conversation between the target user and the remote user. The wish to communicate can be set, for example, from the operation screen shown in FIG. 7, displayed on a terminal of the child's parents. By pressing the radio button 701, it can be set that communication is possible at the present time, or that communication is possible "from 20:00 to 21:00". The information input at the parents' terminal is received by the communication unit 223 of the robot 100 and then used in determining the user state. The set time information may also be transmitted to the remote user's device.
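Checking a declared window such as "from 20:00 to 21:00" reduces to a comparison on times of day; a minimal sketch, assuming a same-day window as in the example above:

```python
from datetime import time

def communication_window_open(now: time, start: time, end: time) -> bool:
    """True when `now` lies inside the declared window [start, end).
    Assumes the window does not cross midnight, e.g. 20:00-21:00."""
    return start <= now < end
```

A window crossing midnight would need the wrapped case (`now >= start or now < end`), which is omitted here for brevity.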
 In the above description, only communicable states were shown as user states. However, the user states may also include states in which communication is not possible, such as "eating", "bathing", and "sleeping".
 Furthermore, the user states that the robot identifies may differ depending on the attributes of the target user. For example, depending on the target user's age, the robot may be set to identify only the "just before going out", "just before returning home", and "starting study/practice" user states. The selection of which user states to identify may be performed, for example, via the operation unit 219 of the robot 100 or a terminal of the child's parents. For example, as in the user state selection screen 800 shown in FIG. 8, the user states to be processed by the robot 100 may be set by placing a check mark 801 on each of the listed user states 802.
 [User state of the remote user]
 In the present embodiment, when the robot 100 identifies the user state of the target user described above, the robot 100 acquires the user state of the remote user and takes it into consideration. In the present embodiment, for simplicity of explanation, it is assumed that the user state of the remote user is only either a communicable state or a state communicable in a predetermined time slot. Such a user state is set on the device 150 used by the remote user via an operation screen similar to that shown in FIG. 7. Of course, a robot similar to the robot 100 may exist on the remote user's side, and the user state of the remote user may be identified by that robot. In this case, for example, the remote user is determined to be communicable when a state of watching television is identified for the remote user.
 [Communication proposal]
 When the robot 100 according to the present embodiment determines, in consideration of the user state of the target user and the user state of the remote user, that communication is possible, the robot 100 provides the target user with information proposing communication with the remote user. The information proposing communication may be provided by presenting audio or images. The audio or images may include the name of the communication partner and information representing the communication content. If the user state is "just before going out", the information representing the communication content includes, for example, "Grandma has a greeting for you". If the user state is "showing strong emotion", the information includes content that differs according to the user state, such as "Grandma will talk with you".
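The state-dependent proposal phrasings above can be sketched as a small lookup, assuming a simple template table; the state keys, the fallback message, and the table itself are illustrative, and the embodiment's actual phrasings would live in the user state DB.

```python
# Illustrative proposal phrasings per user state (hypothetical keys);
# the embodiment stores the actual communication content in its DB.
PROPOSAL_TEMPLATES = {
    "just_before_going_out": "{partner} has a greeting for you.",
    "strong_emotion": "{partner} will talk with you.",
}

def proposal_message(user_state, partner):
    # Fall back to a generic invitation for states without a template
    template = PROPOSAL_TEMPLATES.get(user_state, "{partner} would like to talk.")
    return template.format(partner=partner)
```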
 Further, when the robot 100 intervenes with the target user in order to propose communication, the robot 100 may approach the target user, circle around the target user, or present, via the notification unit 220, images or sounds predetermined according to the user state.
 [Mapping operation]
 The map management unit 303 of the robot 100 creates a map of the space in which the robot operates, using surrounding information acquired by the imaging unit 215, the sensor unit 222, and the like. For example, the surrounding information may be re-collected and the map updated every time a certain period elapses. For example, when the arrangement of furniture has changed, the map information may be updated according to the post-change state. Furthermore, the map management unit 303 may acquire external information (map information, position information, and the like) via the communication unit 223 and perform mapping. The map management unit 303 may also be configured to detect objects present in each area and associate the area with a purpose. For example, an area containing a bed may be associated as a bedroom, and an area containing toys or a desk may be associated as a children's room. When the purpose of an area cannot be specified, the configuration may be such that a setting is accepted from the user.
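The object-to-purpose association described for the map management unit 303 can be sketched as a rule table, assuming detected objects arrive as labels; the rules, labels, and `None` fallback (prompting a user-supplied setting) are assumptions for the example.

```python
# Hypothetical object-to-purpose rules; the first matching detected object wins.
ROOM_PURPOSE_RULES = [
    ("bed", "bedroom"),
    ("toy", "children's room"),
    ("desk", "children's room"),
]

def classify_area(detected_objects):
    """Associate an area with a purpose from objects detected in it.
    Returns None when no rule applies, so that a setting can be
    requested from the user instead."""
    detected = set(detected_objects)
    for obj, purpose in ROOM_PURPOSE_RULES:
        if obj in detected:
            return purpose
    return None
```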
 [Communication control processing]
 FIG. 5 is a flowchart showing a series of operations of the communication control processing of the robot 100 according to the present embodiment. In the present embodiment, this processing is realized by the CPU 211 of the robot 100 reading and executing a program stored in the HDD 212. Each processing step is realized, for example, by the components of FIG. 2 and the processing units of FIG. 3 operating in cooperation, but here, to simplify the explanation, the processing entity is comprehensively described as the robot 100.
 In S501, the robot 100 monitors the state of the target user. In the present embodiment, the state monitoring is performed, for example, while the target user acts within a range where the target user enters the field of view of the imaging unit 215 and within a range where the sounds of the target user's actions can be collected. The robot 100 may collect information while moving so that the state of the target user can be monitored more easily.
 In S502, the robot 100 determines whether the user is in a predetermined user state. The predetermined user states identified by the robot 100 are the user states described above with reference to FIG. 4, and the robot 100 identifies each user state by the corresponding method described with reference to FIG. 4. When a predetermined user state is identified (YES in S502), the robot 100 proceeds to S503; otherwise (NO in S502), it returns to S501 and continues the state monitoring.
 In S503, the robot 100 acquires the user state of the remote user. First, the robot 100 may, for example, transmit state information indicating the user state identified in S502 to the device 150 used by the remote user, or may transmit request information requesting the user state of the remote user to the device 150. The robot 100 then acquires state information indicating the user state of the remote user, transmitted by the remote user's device 150 in response.
 In S504, the robot 100 determines whether communication between the target user and the remote user is possible, based on the identified user state of the target user and the user state of the remote user. For example, the robot 100 determines that the target user and the remote user can communicate when both the identified user state of the target user and the user state of the remote user are communicable user states. In the example of the present embodiment, the user states of the target user shown in FIG. 4 are communicable states. Therefore, if the remote user has been set as communicable on the operation screen shown in FIG. 7, it is determined that the target user and the remote user can communicate. When the robot 100 determines that communication is possible (YES in S504), it proceeds to S505; otherwise (NO in S504), it ends this processing.
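The S504 determination reduces to a conjunction: both sides must be in a communicable state. A sketch under the assumption that the target user's communicable states form a set (an illustrative subset of the FIG. 4 table) and that the remote user's state arrives as a boolean flag:

```python
# Illustrative subset of target-user states treated as communicable in FIG. 4
COMMUNICABLE_TARGET_STATES = {
    "just_before_going_out",
    "just_before_returning_home",
    "watching_tv",
}

def communication_possible(target_state, remote_communicable):
    """S504: proceed to the proposal only when the target user's state is
    communicable AND the remote user has reported being communicable."""
    return target_state in COMMUNICABLE_TARGET_STATES and remote_communicable
```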
 In S505, the robot 100 moves to the place where the communication is to be performed. For example, when the target user is not nearby, the robot 100 moves to the place where the target user is. Further, when the communication place for the identified user state is a specific place (that is, not the current position of the target user), the robot 100 moves to that specific place. For example, when the identified user state of the target user is "just before going out" or "just before returning home", the robot 100 moves to the "entrance", which is the communication place. That is, the robot 100 can move to different communication places according to the identified user state of the target user.
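The S505 destination choice can be sketched as a lookup with a fallback to the target user's current location; the mapping entries and keys are illustrative assumptions, not the full table of the embodiment.

```python
# Illustrative mapping from identified user state to a fixed communication
# place; states without an entry use the target user's current location.
STATE_TO_PLACE = {
    "just_before_going_out": "entrance",
    "just_before_returning_home": "entrance",
}

def communication_place(user_state, user_location):
    # S505: move to the state-specific place if one exists, else to the user
    return STATE_TO_PLACE.get(user_state, user_location)
```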
 In S506, the robot 100 proposes communication to the target user. Specifically, in response to the target user approaching within a predetermined distance, the robot 100 displays information proposing communication on the image display unit 218 or outputs it as audio from the audio output unit 217. As described above, the specific communication proposal includes communication content that differs according to the user state. The robot 100 then waits to accept a response to the proposal from the target user. For example, the robot 100 accepts, as the response to the proposal, either "communicate" or "do not communicate" by a touch on the touch panel or by voice.
 In S507, the robot 100 determines whether the target user has accepted the communication proposal. When the response to the proposal is "communicate", the robot 100 determines that the target user has accepted the proposal (YES in S507) and proceeds to S508. On the other hand, when the response to the proposal is "do not communicate" (or a timeout occurs) (NO in S507), this processing ends.
 In S508, the robot 100 transmits a call start request to the remote user's device 150 and waits for a response from the device 150 to the call start request. Upon receiving the call start request, the remote user's device 150 notifies the remote user of the arrival of the call start request using audio information or the like (that is, intervenes with the remote user) and waits for the remote user to accept the call start. The call start request transmitted by the robot 100 may include the user state of the target user and the communication content. In that case, the remote user's device 150 presents the user state and communication content of the target user to propose the communication, and then accepts a selection of "communicate" or "do not communicate" from the remote user. In this case, the remote user can confirm the user state of the target user before starting the communication.
 In S509, upon receiving the response from the device 150 to the call start request, the robot 100 determines whether the call start request was accepted on the device 150 side. When the response accepts the call start request (YES in S509), the robot 100 proceeds to S510; when the response does not accept the call start request (NO in S509), it ends this processing.
 In S510, the robot 100 starts a voice call or video call between the target user and the remote user, and the communication content according to the user state is carried out. In carrying out the communication content, the robot 100 uses the voice call or video call input from the imaging unit 215 and the audio input unit 214 and from the device 150 used by the remote user.
 When communication using images is performed, the target user can set in advance, from the operation unit 219, that the background image in which the private space appears is to be changed (the background erased or replaced with another image). When a background image change has been set, the robot 100 changes the background of the moving image transmitted in carrying out the communication content, and then transmits the changed moving image to the remote user side. The change to the transmitted moving image is not limited to changing the background image; the region showing the user may be replaced with an avatar. In this case, the robot 100 may recognize the movement of the target user from the captured image of the target user and reflect it in the movement of the avatar. Further, the changes are not limited to the transmitted image: the voice spoken by the target user may be converted into another voice, or ambient sound may be reduced before transmission. In this way, elements that the user would find unpleasant if the target user's living space were conveyed as-is can be appropriately adjusted. That is, the conveyed information can be adjusted to make the communication between the target user and the remote user smoother and more comfortable.
 Furthermore, the target user may be allowed to configure the background and avatar settings differently for each place or for each communication content. In this way, the target user can optimize the degree of exposure of the target user's information for each space or communication of concern.
 Although the above example described the audio or moving image transmitted from the target user side, the audio or moving image transmitted from the remote user side may also be changed according to settings made by the remote user. For example, the background of a moving image capturing the remote user may be changed, or the audio may be changed. In this case, the robot 100 may receive the settings made by the remote user and then modify and present the received audio or moving image, or the robot 100 may receive and present audio or a moving image already modified on the side of the device used by the remote user.
 In S511, the robot 100 determines whether the call is to be ended. For example, when an end operation is accepted from the operation unit 219 of the robot 100, the robot 100 determines that the call is to be ended and ends this processing; otherwise, the call in S510 continues.
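The two-sided acceptance handshake of S506–S510 can be sketched as pure control flow, with the UI and network exchanges injected as callbacks. This is an illustration of the ordering of the steps only; the callback names and return conventions are assumptions for the example.

```python
def run_call_handshake(propose_to_target, request_call_to_remote, start_call):
    """Sketch of the S506-S510 handshake.  Each callback stands in for a UI
    or network exchange and returns True on acceptance."""
    if not propose_to_target():        # S506-S507: proposal to the target user
        return "declined_by_target"
    if not request_call_to_remote():   # S508-S509: call start request to device 150
        return "declined_by_remote"
    start_call()                       # S510: the voice or video call begins
    return "call_started"
```

Either side declining short-circuits the flow, matching the two NO branches of the flowchart.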
 [History processing]
 FIG. 6 is a flowchart showing the history processing in the robot 100 of the present embodiment. In the present embodiment, this processing is realized by the CPU 211 of the robot 100 reading and executing a program stored in the HDD 212. Each processing step is realized, for example, by the components of FIG. 2 and the processing units of FIG. 3 operating in cooperation, but here, to simplify the explanation, the processing entity is comprehensively described as the robot 100.
 In S601, the robot 100 acquires, from the action history DB 312, the action history information of the target user to be analyzed, for a period of, for example, two weeks, one month, or six months. In S602, the robot 100 extracts the target user's behavior patterns, for example by time slot. The robot 100 extracts, for example, patterns in which user states such as "just before going to bed", "just before going out", and "just before returning home" are each identified near a specific time and at a specific place.
 In S603, the robot 100 stores the specific time and the specific place of each extracted behavior pattern in the action history DB 312 as information for predicting the occurrence timing and the communication place of the predetermined user state.
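The S601–S603 extraction can be sketched as grouping history records by (state, place) and keeping groups that recur near one time of day. The record format, the minimum recurrence count, and the 30-minute spread tolerance are assumptions for the example.

```python
from collections import defaultdict
from statistics import mean

def extract_patterns(history, min_count=3, max_spread_min=30):
    """S601-S603 sketch: find (state, place) pairs that recur near a
    specific time of day.

    history: iterable of (state, place, minutes_since_midnight) records.
    Returns {(state, place): typical_minutes_since_midnight}.
    """
    groups = defaultdict(list)
    for state, place, t in history:
        groups[(state, place)].append(t)
    patterns = {}
    for key, times in groups.items():
        # Keep only groups that recur often enough within a narrow time band
        if len(times) >= min_count and max(times) - min(times) <= max_spread_min:
            patterns[key] = round(mean(times))
    return patterns
```

A "just before going out" state identified at the entrance around 7:50–8:00 on several days would, for instance, yield one stored (time, place) prediction.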
 The robot 100 may refer to the stored information and change the place where it performs the user state monitoring of S501 described above, so that the user state can be determined more reliably. That is, the robot 100 can anticipate the occurrence of a user state, move ahead to an appropriate place, and intervene with the user (propose communication) more accurately.
 Further, for user states such as "just before going out" and "just before returning home", the robot 100 may simplify the start of communication when a specific time and a specific place have been extracted. That is, the robot 100 may move to the communication place at the predicted time and, if the user states of both the target user and the remote user are communicable, start the communication while omitting the confirmation of the proposal acceptance in S507 and of the call acceptance in S509. In this way, communication such as exchanging a brief greeting can be started easily.
 As described above, the present embodiment makes it possible to control the intervention behavior for bringing about communication between the user and the remote user.
 (Embodiment 2)
 Hereinafter, Embodiment 2 according to the present invention will be described. In Embodiment 1, an example was described in which the robot 100 determines whether communication between the two is possible based on the identified user state of the target user and the user state of the remote user. In contrast, Embodiment 2 describes an example in which an information processing server determines whether communication between the two is possible. In the description of this embodiment, identical reference numerals denote identical or substantially identical functions; overlapping descriptions are omitted, and the differences are mainly described.
 FIG. 9 shows an example of a communication system 90 according to the present embodiment. In addition to the configuration of Embodiment 1, an information processing server 900 is added. The robot 100 transmits and receives data to and from the information processing server 900 via the network 120. The device 150 used by the remote user 160 also transmits and receives data to and from the information processing server 900.
 [Functional configuration examples of the robot and the information processing server]
 The functional configuration example of the robot 100 in this embodiment is the same as in Embodiment 1. FIG. 10 is a block diagram showing a functional configuration example of the information processing server 900. The control unit 1010 includes one or more CPUs (Central Processing Units) 1011, an HDD (Hard Disk Drive) 1012, and a RAM (Random Access Memory) 1013. The CPU 1011 controls the various processes described below by reading and executing programs stored in the HDD 1012. The HDD 1012 is a non-volatile storage area and stores programs corresponding to the various processes. A semiconductor memory may be used instead of the HDD. The RAM 1013 is a volatile storage area and is used, for example, as a work memory. The control unit 1010 may be composed of a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), a dedicated circuit, or the like. Further, each component of the control unit 1010 may be virtualized.
 The power supply unit 1014 supplies external power to the information processing server 900. The communication unit 1015 is a unit for communicating with the robot 100 and the device 150 used by the remote user via the network 120; the communication method, communication protocol, and the like are not particularly limited.
 [Software configuration of the robot 100]
 FIG. 11 is a diagram showing an example of the software configuration of the robot 100 according to the present embodiment. In the present embodiment, each processing unit is realized by the CPU 211 reading and executing a program stored in the HDD 212 or the like. Each DB (database) is configured in the HDD 212. Note that the software configuration shows only the configuration examples necessary for implementing this embodiment; software components such as the firmware, OS, middleware, and frameworks are omitted.
 The user state identification unit 301 identifies the user state of the target user based on information from the sensor unit 222, the imaging unit 215, and the audio input unit 214, and on information received from the outside. The user states are the same as those described above with reference to FIG. 4. Note that not all user states need to be identified by the user state identification unit 301; some user states may be set externally via the communication unit 223, or set on the robot 100 side via the operation unit 219. The user state DB 313 stores a user state table, which is the same as in Embodiment 1. When the user state DB 313 is updated in the robot 100, it is synchronized with the user state DB of the information processing server 900, so that substantially identical data is held. The user state identified by the user state identification unit 301 is transmitted to the information processing server 900 as state information indicating the user state.
 The identification of the user state may be performed periodically. The user state identification unit 301 stores the periodically identified user state of the target user, together with the place and time of identification, in the action history DB 312 as the action history of the target user. When the user state of the target user is identified periodically and none of the user states shown in FIG. 4 can be identified, the user state is stored as "unidentified" or the like.
 The communication processing unit 304 exchanges information with the outside. For example, the communication processing unit 304 transmits and receives data exchanged with the information processing server 900, and transmits and receives voice call or video call data for communication between the target user and the remote user.
 The simple communication control unit 1101 receives information proposing communication from the information processing server 900 and intervenes with the target user. It then performs control to start communication (a voice call or video call) between the target user and the remote user. The processing by the simple communication control unit 1101 is described later as the simple communication control processing.
 Note that the map management unit 303, the action history processing unit 307, the movement control unit 308, and each DB are substantially the same as in Embodiment 1. Each DB is synchronized with the user state DB 1210, the map information DB 1211, and the action history DB 1212 on the information processing server 900 side, so that substantially identical data is held.
 [Software configuration of the information processing server]
 FIG. 12 is a diagram showing an example of the software configuration of the information processing server 900 according to the present embodiment. In the present embodiment, each processing unit is realized by the CPU 1011 reading and executing a program stored in the HDD 1012 or the like. Each DB (database) is configured in the HDD 1012. Note that the software configuration shows only the configuration examples necessary for implementing this embodiment; software components such as the firmware, OS, middleware, and frameworks are omitted.
 The user state acquisition unit 1201 acquires state information indicating the user state from the robot 100. Even when some of the user states are set externally in the robot 100, the state information indicating those set user states is transmitted to the information processing server 900. The user state DB 1210 stores a user state table and is synchronized with the user state DB of the robot 100. The user state processing unit 1202 specifies the communication content to be performed for a user state, using the user state of the target user acquired by the user state acquisition unit 1201 and the user state table stored in the user state DB 1210.
 The communication processing unit 1204 exchanges information with the robot 100 and the device 150 used by the remote user. For example, the communication processing unit 1204 receives state information indicating the state of the remote user, and transmits and receives voice call or video call data for communication between the target user and the remote user. Alternatively, it may transmit state information indicating the user state identified for the target user to an external device.
The remote user state acquisition unit 1205 acquires state information indicating the user state of the remote user via the network 120. The communication control unit 1206 controls the robot 100 so that the robot intervenes with the target user, based on the user state of the target user identified by the user state acquisition unit 1201 and the user state of the remote user acquired by the remote user state acquisition unit 1205. The communication control unit 1206 then controls the robot 100 so as to start communication (a voice call or a video call) between the target user and the remote user. The processing by the communication control unit 1206 is described later as the communication control processing.
[User state]
As described above, in the present embodiment, the information processing server 900 determines whether the target user and the remote user can communicate with each other, and then causes the robot 100 to intervene with the target user and propose communication with the remote user. The user state of the target user identified by the robot 100 is the same as in the first embodiment.
[User state of the remote user]
In the present embodiment, when a predetermined user state of the target user is identified, the information processing server 900 acquires the user state of the remote user and takes it into account. As in the first embodiment, the user state of the remote user is assumed to be only either a communicable state or a state communicable during a predetermined time slot. Such a user state is set on the device 150 used by the remote user through an operation screen similar to the one shown in FIG. 7. Of course, a robot similar to the robot 100 may exist on the remote user side, and the user state of the remote user may be identified by that robot.
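As an illustrative, non-limiting sketch, the remote user's availability described above (communicable at any time, or communicable only during a predetermined time slot) could be checked as follows. The state strings and the `(start, end)` window tuple are placeholders introduced here for illustration, not identifiers defined in the embodiment:

```python
from datetime import time

def remote_user_available(state, now, window=None):
    """Return True if the remote user's state allows communication now.

    `state` is either "communicable" (always available) or
    "communicable_in_slot" (available only within `window`, a
    (start, end) pair of datetime.time values on the same day).
    These names are illustrative assumptions.
    """
    if state == "communicable":
        return True
    if state == "communicable_in_slot" and window is not None:
        start, end = window
        # Simple same-day window; a real system would also handle
        # windows that cross midnight.
        return start <= now <= end
    return False
```

A window crossing midnight, or time-zone differences between the target user and the remote user, would require additional handling beyond this sketch.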
[Method of proposing communication]
When the information processing server 900 according to the present embodiment determines, in consideration of the user state of the target user and the user state of the remote user, that communication is possible, it transmits to the robot 100 information proposing communication with the remote user to the target user. Based on the information received from the information processing server 900, the robot 100 provides the target user with information proposing communication with the remote user. The information proposing communication is provided in the same manner as in the first embodiment.
[Communication control processing]
FIG. 13 is a flowchart showing a series of operations of the communication control processing in the information processing server 900 according to the present embodiment. In the present embodiment, this processing is realized by the CPU 1011 of the information processing server 900 reading and executing a program stored in the HDD 1012. Each processing step is realized, for example, by the components of FIG. 10 and the processing units of FIG. 12 operating in cooperation, but to simplify the explanation, the processing is described here with the information processing server 900 as the overall processing subject.
In S1301, the information processing server 900 acquires state information indicating the user state of the target user identified by the robot 100. In S1302, the information processing server 900 determines whether the acquired user state is a predetermined user state, namely one of the user states described above with respect to FIG. 4. If the acquired user state is a predetermined user state (YES in S1302), the information processing server 900 proceeds to S1303; otherwise (NO in S1302), it returns to S1301.
In S1303, the information processing server 900 acquires the user state of the remote user. For example, the information processing server 900 may transmit state information indicating the user state identified in S1302 to the device 150 used by the remote user, or may transmit request information requesting the remote user's user state to the device 150. The information processing server 900 then acquires the state information indicating the remote user's user state that the device 150 transmits in response.
In S1304, the information processing server 900 determines, based on the acquired user states of the target user and the remote user, whether communication between the target user and the remote user is possible. For example, the information processing server 900 determines that the target user and the remote user can communicate when both user states are communicable ones. In the example of this embodiment, the user state of the target user shown in FIG. 4 is a communicable state; therefore, if the remote user has enabled communication on the operation screen shown in FIG. 7, it is determined that the target user and the remote user can communicate. If communication is determined to be possible (YES in S1304), the information processing server 900 proceeds to S1305; otherwise (NO in S1304), it ends this processing.
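The determination in S1304 reduces to a conjunction of the two users' states. As a minimal, non-limiting sketch (the state names below are illustrative placeholders for entries of the user state table, not values defined in the embodiment):

```python
def can_communicate(target_state: str, remote_state: str) -> bool:
    """Sketch of the S1304 check: communication is possible only when
    the target user is in one of the predetermined communicable states
    AND the remote user's state is communicable. All state strings are
    illustrative assumptions."""
    predetermined_states = {
        "just_before_going_out",
        "just_before_returning_home",
        "crying",
        "practicing",
    }
    return target_state in predetermined_states and remote_state == "communicable"
```

In a fuller implementation the remote side would also report a time-slot-restricted state, which would be resolved against the current time before this conjunction is taken.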
In S1305, the information processing server 900 transmits to the robot 100 proposal information with which the robot 100 proposes communication to the target user. The proposal information includes communication content that differs depending on the user state.
In S1306, the information processing server 900 determines whether the target user has accepted the communication proposal. For example, the information processing server 900 receives response information indicating the target user's response from the robot 100. If the response information is "communicate", it determines that the target user has accepted the proposal (YES in S1306) and proceeds to S1307. On the other hand, if the response to the proposal is "do not communicate" (or a timeout occurs) (NO in S1306), this processing ends.
In S1307, the information processing server 900 transmits a call start request to the remote user's device 150 and waits for a response to the request from the device 150. On receiving the call start request, the remote user's device 150 announces its arrival using voice information or the like (that is, it intervenes with the remote user) and waits for the remote user to accept the call start. The call start request transmitted by the information processing server 900 may include the target user's user state and the communication content. The remote user's device 150 can therefore present the target user's user state and the communication content, propose the communication, and then accept a selection of "communicate" or "do not communicate" from the remote user. In this case, the remote user can confirm the target user's user state before starting the communication.
In S1308, upon receiving the response to the call start request from the device 150, the information processing server 900 determines whether the call start request was accepted on the device 150 side. If the response accepts the call start request (YES in S1308), the information processing server 900 proceeds to S1309; if not (NO in S1308), it ends this processing.
In S1309, the information processing server 900 starts a voice call or a video call between the target user and the remote user, and the communication content corresponding to the user state is carried out. In carrying out the communication content, the information processing server 900 uses the voice-call or video-call data input from the imaging unit 215 and the voice input unit 214 of the robot 100 and from the device 150 used by the remote user.
When communication using images is performed, the information processing server 900 can change the background image in which the target user's private space appears (erasing the background or replacing it with another image). For example, the information processing server 900 acquires from the robot 100 the setting information configured in advance via the operation unit 219 of the robot 100, and changes the background image accordingly. When it has acquired setting information specifying a background change, the information processing server 900 changes the background of the moving image transmitted from the robot 100 and then transmits the modified moving image to the remote user side.
The modification of the transmitted moving image is not limited to changing the background; the user region may be replaced with an avatar. In this case, the information processing server 900 may recognize the target user's movement from the captured image of the target user and reflect it in the avatar's movement. Furthermore, the modification is not limited to images: the voice uttered by the target user may be converted into another voice, or ambient sound may be reduced before transmission. In this way, elements that the user would find unpleasant if the target user's living space were conveyed as-is can be adjusted appropriately. That is, the transmitted information can be adjusted to make communication between the target user and the remote user smoother and more comfortable.
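The background replacement described above can be reduced to a per-pixel composite of the frame and a substitute background under a user-region mask. The following is a minimal, non-limiting sketch; obtaining the mask (for example from a person-segmentation model) is assumed and not shown:

```python
def replace_background(frame, mask, background):
    """Sketch of the S1309 image modification: keep frame pixels where
    the mask marks the user region (1) and substitute background pixels
    elsewhere (0). `frame`, `mask`, and `background` are same-sized 2D
    lists standing in for image arrays; a real system would use image
    buffers and a segmentation model to produce `mask`."""
    return [
        [fp if m else bp for fp, m, bp in zip(frow, mrow, brow)]
        for frow, mrow, brow in zip(frame, mask, background)
    ]
```

Erasing the background corresponds to passing a uniform-color `background`; replacing it with another image corresponds to passing that image.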
Furthermore, the target user may be allowed to configure different background and avatar settings for each location or for each communication content. In this way, the target user can optimize how much of the target user's information is exposed for each space or communication of concern.
Although the above example has described the audio or moving images transmitted from the target user side, the audio or moving images transmitted from the remote user side may likewise be modified according to the remote user's settings. For example, the background of a moving image capturing the remote user may be changed, or the audio may be altered. In this case, the information processing server 900 may receive the remote user's settings and then modify the received audio or moving images before presenting them. Alternatively, the information processing server 900 may receive audio or moving images already modified on the device used by the remote user and transmit them to the robot 100.
In S1310, the information processing server 900 determines whether the call is to be ended. For example, when it receives an end request from the robot 100, it determines that the call is to be ended and ends this processing; otherwise, the call in S1309 continues.
[Simple communication control processing]
FIG. 14 is a flowchart showing a series of operations of the simple communication control processing in the robot 100 according to the present embodiment. In the present embodiment, this processing is realized by the CPU 211 of the robot 100 reading and executing a program stored in the HDD 212. Each processing step is realized, for example, by the components of FIG. 2 and the processing units of FIG. 3 operating in cooperation, but to simplify the explanation, the processing is described here with the robot 100 as the overall processing subject.
In S1401, the robot 100 monitors the state of the target user. In S1402, when the robot 100 identifies a predetermined user state, it transmits state information indicating that user state to the information processing server 900.
In S1403, the robot 100 receives the proposal information from the information processing server 900. That is, receipt of the proposal information means that the information processing server 900 has determined that the target user and the remote user can communicate with each other.
In S1404, the robot 100 moves to the place where the communication is to be performed. For example, when the target user is not nearby, the robot 100 moves to the target user's location. When the communication place for the identified user state is a specific place (that is, not the target user's current position), the robot 100 moves to that specific place. For example, when the user state included in the proposal information is "just before going out" or "just before returning home", the robot 100 moves to the entrance, which is the communication place. In other words, the robot 100 can move to a different communication place depending on the identified user state of the target user.
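The destination choice in S1404 can be sketched as a lookup from the user state to a predetermined place, falling back to the target user's current location. The state and place names are illustrative placeholders for entries of the user state table, not values defined in the embodiment:

```python
def communication_place(user_state: str, current_user_location: str) -> str:
    """Sketch of the S1404 destination choice: some user states map to
    a fixed communication place (e.g. the entrance for departures and
    returns); for all other states the robot goes to where the target
    user currently is."""
    fixed_places = {
        "just_before_going_out": "entrance",
        "just_before_returning_home": "entrance",
    }
    return fixed_places.get(user_state, current_user_location)
```

In the embodiment this mapping would come from the user state table synchronized between the robot 100 and the information processing server 900, rather than being hard-coded.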
In S1405, the robot 100 proposes communication to the target user based on the proposal information from the information processing server 900. Specifically, when the target user approaches within a predetermined distance, the robot 100 displays information proposing the communication on the image display unit 218 or outputs it as voice from the voice output unit 217. As described above, the concrete proposal includes communication content that differs depending on the user state. The robot 100 then waits to receive the target user's response to the proposal. The robot 100 accepts, as the response to the proposal, either "communicate" or "do not communicate", via touch on the touch panel or by voice.
In S1406, the robot 100 transmits the target user's response to the communication proposal to the information processing server 900. In S1407, the robot 100 receives a call start request from the information processing server 900. In S1408, the robot 100 starts a voice call or a video call between the target user and the remote user, and the communication content corresponding to the user state is carried out. In S1409, the robot 100 determines whether the call is to be ended. For example, when an end operation is received via the operation unit 219 of the robot 100, the robot 100 determines that the call is to be ended, transmits a call end request to the information processing server 900, and ends this processing; otherwise, the call in S1408 continues.
As described above, the present embodiment makes it possible to control the intervention behavior for establishing communication between the user and the remote user.
<Summary of Embodiments>
1. The communication robot (for example, 100) of the above embodiment comprises:
an identification means (for example, 301, S502) for identifying the state of a first user;
a receiving means (for example, 304, S503) for receiving, from an external device, state information indicating the state of a second user who communicates with the first user remotely;
a determination means (for example, 306) for determining whether communication between the first user and the second user is possible, based on the state of the first user and the state of the second user; and
a providing means (for example, 217, 218, 306, S506) for providing information to the first user,
wherein, when it is determined that communication between the first user and the second user is possible, the providing means provides the first user with information proposing communication between the first user and the second user.
According to this embodiment, it is possible to provide a communication robot capable of controlling intervention behavior for establishing communication between a user and a remote user.
2. The communication robot of the above embodiment further comprises:
a receiving means (for example, 214, 219, S507) for receiving a response from the first user to the information provided by the providing means; and
a call control means (for example, 304, S510) for controlling a voice call or a video call between the first user and the second user,
wherein, when the response from the first user indicates that communication between the first user and the second user is to be started, the call control means starts a voice call or a video call via the device used by the second user and the communication robot (for example, S510).
According to this embodiment, the intervention of the communication robot allows the target user and the remote user to start call-based communication as desired.
3. In the communication robot of the above embodiment,
when the response from the first user indicates that communication between the first user and the second user is to be started, the call control means transmits a communication start request to the device used by the second user (for example, S508), and when the start request is not accepted by the device used by the second user, does not start a voice call or a video call via the device used by the second user and the communication robot (for example, S509).
According to this embodiment, communication can be started after finally confirming the remote user's convenience.
4. In the communication robot of the above embodiment,
the information provided by the providing means further includes the content of the communication to be performed between the first user and the second user, and
the providing means varies the communication content depending on the identified state of the first user (for example, S506).
According to this embodiment, the user can easily grasp for what kind of communication the robot has intervened.
5. In the communication robot of the above embodiment,
the content of the communication between the first user and the second user can be set in advance via an operation means (FIG. 8).
According to this embodiment, the content of the communication with the remote user can be set to suit the target user.
6. In the communication robot of the above embodiment,
the communication content is communication in which the second user, using a voice call or a video call, welcomes the first user home or sends the first user off (FIG. 4).
According to this embodiment, the grandparents, who are remote users, can intervene with the target user via the robot and talk to the target user, welcoming or sending off the child in place of an absent parent and reducing the child's loneliness.
7. In the communication robot of the above embodiment,
the communication content is communication in which the second user, using a voice call or a video call, talks to the first user who is crying (FIG. 4).
According to this embodiment, the grandparents, who are remote users, intervene with the target user via the robot and talk to the target user, which can calm the target user or prompt the target user to talk openly.
8. In the communication robot of the above embodiment,
the communication content is communication in which the second user, using a voice call or a video call, observes a movement or an utterance performed by the first user (FIG. 4).
According to this embodiment, the grandparents, who are remote users, intervene with the target user via the robot to listen to reading aloud or to watch over practice, and can thus support the child's practice in place of, or together with, the parent.
9. In the communication robot of the above embodiment,
the communication content is communication in which the first user listens to an utterance by the second user using a voice call (FIG. 4).
According to this embodiment, the grandparents, who are remote users, intervene with the target user via the robot to read aloud to the target user, increasing reading-aloud opportunities and reducing the parent's effort.
10. The communication robot of the above embodiment further comprises
a movement control means (for example, 308) for moving the communication robot to a place where the first user communicates via the communication robot,
wherein the movement control means moves the communication robot to a communication place predetermined according to the state of the first user.
According to this embodiment, the communication robot can autonomously move to the place where it intervenes with the target user and proposes communication.
11. The communication robot of the above embodiment further comprises:
a storage means for storing periodically collected state information indicating the state of the first user; and
a pattern identification means (for example, 307) for identifying a pattern of the first user's state based on the state information stored in the storage means,
wherein the movement control means moves the communication robot to the place according to the state of the first user predicted from the pattern of the first user's state.
According to this embodiment, the communication robot can autonomously move to the place where it intervenes with the target user and proposes communication, based on a regularly occurring pattern of user states.
12. In the communication robot of the above embodiment,
the first user is a child.
According to this embodiment, it is possible to realize a communication robot that supports communication between a child and a remote user and reduces the parent's child-rearing burden.
13. In the communication robot of the above embodiment,
the second user is a grandparent of the first user.
According to this embodiment, it is possible to realize a communication robot that supports communication between a user in the home and a remote grandparent.
14. In the communication robot of the above embodiment,
the device of the second user is a robot capable of identifying the state of the second user.
According to this embodiment, the user state of the remote user can also be identified by a robot.
15. The control method of the communication robot (for example, 100) in the above embodiment comprises:
an identification step (for example, S502) of identifying the state of a first user;
a receiving step (for example, S503) of receiving, from an external device, state information indicating the state of a second user who communicates with the first user remotely;
a determination step (for example, S504) of determining whether communication between the first user and the second user is possible, based on the state of the first user and the state of the second user; and
a providing step (for example, S506) of providing information to the first user,
wherein, in the providing step, when it is determined that communication between the first user and the second user is possible, information proposing communication between the first user and the second user is provided to the first user.
According to this embodiment, it is possible to provide a control method of a communication robot capable of controlling intervention behavior for establishing communication between a user and a remote user.
16. The information processing server (for example, 900) in the above embodiment is
A first acquisition means (for example, 1201) for acquiring information about the state of the first user from the communication robot, and
A second acquisition means (for example, 1205) for acquiring information about the state of the second user from a device used by the second user who communicates with the first user remotely.
A determination means (for example, 1206) for determining whether communication between the first user and the second user is possible based on the state of the first user and the state of the second user.
It has a transmission means (for example, 1204) for transmitting information to be provided from the communication robot to the first user to the communication robot.
When it is determined that the first user and the second user can communicate with each other, the transmitting means transmits information proposing communication between the first user and the second user to the communication robot. To do.
 According to this embodiment, it is possible to provide an information processing server capable of controlling intervention behavior for bringing a user and a remote user into communication.
 17. An information processing method executed by the information processing server in the above embodiment comprises:
 a first acquisition step (e.g., S1301) of acquiring, from a communication robot, information about a state of a first user;
 a second acquisition step (e.g., S1303) of acquiring, from a device used by a second user who communicates with the first user remotely, information about a state of the second user;
 a determination step (e.g., S1304) of determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and
 a transmission step (e.g., S1305) of transmitting, to the communication robot, information to be provided from the communication robot to the first user,
 wherein, in the transmission step, when it is determined that communication between the first user and the second user is possible, information proposing communication between the first user and the second user is transmitted to the communication robot.
 According to this embodiment, it is possible to provide an information processing method capable of controlling intervention behavior for bringing a user and a remote user into communication.
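The server-side flow of steps S1301 to S1305 can be sketched as follows. The class and method names here are assumptions for illustration; the specification does not prescribe an implementation.

```python
# Illustrative sketch of the server-side flow (steps S1301-S1305).
# Class name, method names, and state values are hypothetical.

class InformationProcessingServer:
    def __init__(self):
        self.first_state = None   # S1301: state reported by the robot
        self.second_state = None  # S1303: state from the second user's device
        self.outbox = []          # S1305: messages queued for the robot

    def on_first_user_state(self, state):
        self.first_state = state
        self._determine_and_send()

    def on_second_user_state(self, state):
        self.second_state = state
        self._determine_and_send()

    def _determine_and_send(self):
        # Determination step (S1304): both users must be available.
        if self.first_state == "available" and self.second_state == "available":
            # Transmission step (S1305): the robot presents this proposal
            # to the first user on receipt.
            self.outbox.append("propose communication between the users")

server = InformationProcessingServer()
server.on_first_user_state("available")
server.on_second_user_state("available")
print(server.outbox)  # one queued proposal
```

Because the determination runs on every state update, the proposal is transmitted only once both acquisition steps have yielded a compatible pair of states, mirroring the condition in the claims.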
 The invention is not limited to the above embodiments, and various modifications and changes are possible within the scope of the gist of the invention.

Claims (17)

  1.  A communication robot comprising:
     an identification unit configured to identify a state of a first user;
     a reception unit configured to receive, from an external device, state information indicating a state of a second user who communicates with the first user remotely;
     a determination unit configured to determine, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and
     a provision unit configured to provide information to the first user,
     wherein, when it is determined that communication between the first user and the second user is possible, the provision unit provides the first user with information proposing communication between the first user and the second user.
  2.  The communication robot according to claim 1, further comprising:
     a reception unit configured to receive a response from the first user to the information provided by the provision unit; and
     a call control unit configured to control a voice call or a video call between the first user and the second user,
     wherein, when the response from the first user indicates that communication between the first user and the second user is to be started, the call control unit starts a voice call or a video call via the device used by the second user and the communication robot.
  3.  The communication robot according to claim 2, wherein, when the response from the first user indicates that communication between the first user and the second user is to be started, the call control unit transmits a communication start request to the device used by the second user, and, when the start request is not accepted by the device used by the second user, does not start a voice call or a video call via the device used by the second user and the communication robot.
  4.  The communication robot according to any one of claims 1 to 3, wherein the information provided by the provision unit further includes communication content to be carried out between the first user and the second user, and
     the provision unit varies the communication content according to the identified state of the first user.
  5.  The communication robot according to claim 4, wherein the communication content between the first user and the second user can be set in advance via an operation unit.
  6.  The communication robot according to claim 4 or 5, wherein the communication content is communication in which the second user, using a voice call or a video call, welcomes the first user home or sends the first user off.
  7.  The communication robot according to claim 4 or 5, wherein the communication content is communication in which the second user, using a voice call or a video call, talks to the first user who is crying.
  8.  The communication robot according to claim 4 or 5, wherein the communication content is communication in which the second user, using a voice call or a video call, watches a movement or an utterance performed by the first user.
  9.  The communication robot according to claim 4 or 5, wherein the communication content is communication in which the first user listens to an utterance by the second user using a voice call.
  10.  The communication robot according to any one of claims 1 to 9, further comprising a movement control unit configured to move the communication robot to a place where the first user communicates via the communication robot,
     wherein the movement control unit moves the communication robot to a communication place predetermined according to the state of the first user.
  11.  The communication robot according to claim 10, further comprising:
     a storage unit configured to store periodically collected state information indicating the state of the first user; and
     a pattern identification unit configured to identify a pattern of the state of the first user based on the state information stored in the storage unit,
     wherein the movement control unit moves the communication robot to the place according to the state of the first user predicted from the pattern of the state of the first user.
  12.  The communication robot according to any one of claims 1 to 11, wherein the first user is a child.
  13.  The communication robot according to any one of claims 1 to 12, wherein the second user is a grandparent of the first user.
  14.  The communication robot according to any one of claims 1 to 13, wherein the device used by the second user is a robot capable of identifying the state of the second user.
  15.  A control method for a communication robot, comprising:
     an identification step of identifying a state of a first user;
     a reception step of receiving, from an external device, state information indicating a state of a second user who communicates with the first user remotely;
     a determination step of determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and
     a provision step of providing information to the first user,
     wherein, in the provision step, when it is determined that communication between the first user and the second user is possible, information proposing communication between the first user and the second user is provided to the first user.
  16.  An information processing server comprising:
     a first acquisition unit configured to acquire, from a communication robot, information about a state of a first user;
     a second acquisition unit configured to acquire, from a device used by a second user who communicates with the first user remotely, information about a state of the second user;
     a determination unit configured to determine, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and
     a transmission unit configured to transmit, to the communication robot, information to be provided from the communication robot to the first user,
     wherein, when it is determined that communication between the first user and the second user is possible, the transmission unit transmits information proposing communication between the first user and the second user to the communication robot.
  17.  An information processing method executed by an information processing server, comprising:
     a first acquisition step of acquiring, from a communication robot, information about a state of a first user;
     a second acquisition step of acquiring, from a device used by a second user who communicates with the first user remotely, information about a state of the second user;
     a determination step of determining, based on the state of the first user and the state of the second user, whether communication between the first user and the second user is possible; and
     a transmission step of transmitting, to the communication robot, information to be provided from the communication robot to the first user,
     wherein, in the transmission step, when it is determined that communication between the first user and the second user is possible, information proposing communication between the first user and the second user is transmitted to the communication robot.
PCT/JP2019/014263 2019-03-29 2019-03-29 Communication robot, control method for same, information processing server, and information processing method WO2020202354A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/014263 WO2020202354A1 (en) 2019-03-29 2019-03-29 Communication robot, control method for same, information processing server, and information processing method
JP2021511728A JP7208361B2 (en) 2019-03-29 2019-03-29 Communication robot and its control method, information processing server and information processing method


Publications (1)

Publication Number Publication Date
WO2020202354A1 (en)

Family

ID=72666167


Country Status (2)

Country Link
JP (1) JP7208361B2 (en)
WO (1) WO2020202354A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0476698A (en) * 1990-07-13 1992-03-11 Toshiba Corp Picture monitoring device
WO1999067067A1 (en) * 1998-06-23 1999-12-29 Sony Corporation Robot and information processing system
JP2016184980A (en) * 2016-07-26 2016-10-20 カシオ計算機株式会社 Communication device and program
JP2018092528A (en) * 2016-12-07 2018-06-14 国立大学法人電気通信大学 Chat system, management device, terminal device, and method and program for assisting selection of destination

Also Published As

Publication number Publication date
JPWO2020202354A1 (en) 2021-11-04
JP7208361B2 (en) 2023-01-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19923326; Country of ref document: EP; Kind code: A1)
ENP Entry into the national phase (Ref document number: 2021511728; Country of ref document: JP; Kind code: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19923326; Country of ref document: EP; Kind code: A1)