WO2020075403A1 - Communication system

Communication system

Info

Publication number
WO2020075403A1
WO2020075403A1 (PCT application PCT/JP2019/033440)
Authority
WO
WIPO (PCT)
Prior art keywords
user
terminal
communication
smartphone
car navigation
Prior art date
Application number
PCT/JP2019/033440
Other languages
French (fr)
Japanese (ja)
Inventor
建太 築地新
笹尾 桂史
洋樹 中野
浩嗣 大窪
崇 中西
良祐 岡井
塚田 有人
柴田 吉隆
Original Assignee
Hitachi Global Life Solutions, Inc. (日立グローバルライフソリューションズ株式会社)
Priority date
Filing date
Publication date
Application filed by Hitachi Global Life Solutions, Inc.
Publication of WO2020075403A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H11/00 Self-movable toy figures
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M11/00 Telephonic communication systems specially adapted for combination with other electrical systems

Definitions

  • The present invention relates to a communication system.
  • As background art in this technical field, Patent Document 1 (JP-A-2018-152810) discloses a communication robot that is installed near a user, mainly inside a house, and communicates with the user mainly by voice.
  • Because Patent Document 1 assumes communication inside the house, the user cannot communicate with the communication robot after moving outside the house, or to a place inside the house away from the robot.
  • The present invention therefore aims to let the user communicate with the communication robot even after moving outside the house or to a place inside the house away from the robot.
  • A typical example of the invention disclosed in the present application is as follows: in a communication system provided with a communication device that communicates with a user at least through video, when the presence of the user can no longer be recognized while the user and the communication device are communicating, a mobile terminal with a display device owned by the user is instructed to communicate with the user.
  • FIG. 1 is a diagram showing the configuration of a communication system that is an embodiment of the present invention.
  • The communication system includes a dialogue terminal 1, which is a communication robot, other dialogue terminals 50, and a remote brain 2.
  • The dialogue terminal 1 operates according to instructions that the remote brain 2 generates based on information received from the dialogue terminal 1, and communicates with the user mainly by voice.
  • One user may use a single dialogue terminal 1 or several. It is desirable to place the dialogue terminal 1 in the living space the user occupies longest, so that it can always communicate with the user. When the user uses several rooms, a dialogue terminal 1 may therefore be installed in each room.
  • The dialogue terminal 1 is activated when a specific user, described later, speaks a predetermined short phrase, and then converses with this user.
  • The dialogue terminal 1, the other dialogue terminals 50, and the remote brain 2 are connected via a communication line 3.
  • The communication line 3 may be a wired network (for example, Ethernet (registered trademark)), a wireless network (for example, short-range wireless communication or a wireless LAN), or a combination of the two.
  • A wireless network is desirable for communication between the remote brain 2 and the smartphone 51 and car navigation 52, which are the mobile terminals among the other dialogue terminals 50.
  • The remote brain 2 consists of a user information control/management device 20, which collects information about the user's surroundings (sound, images, etc.) from the dialogue terminal 1 and sends the dialogue terminal 1 instructions regarding communication with the user.
  • The remote brain 2 runs on a computer on the cloud. The detailed configuration of the user information control/management device 20 is described later with reference to FIG. 8.
  • The other dialogue terminals 50 include, for example, a smartphone 51 and a car navigation 52 on which an application for this communication system, described later, is installed. Since the user always carries the smartphone 51, the system can communicate with the user even where no dialogue terminal 1 is provided, such as outside the home. Since the car navigation 52 is installed in the user's vehicle, the system can communicate with the user, mainly by voice, while the user is driving or riding in the vehicle.
  • This embodiment cites the smartphone 51 and the car navigation 52 as the other dialogue terminals 50, but the invention is not limited to these; any electronic device capable of communicating with the user, such as a television, may be used.
  • FIG. 2 is a diagram showing a usage state of the dialogue terminal 1.
  • The dialogue terminal 1 is installed in a living room that the user uses often, and communicates with the user.
  • FIGS. 3 to 5 are diagrams showing the external appearance of the dialogue terminal 1.
  • FIG. 3 is a front view of the dialogue terminal 1 in the unlit state.
  • FIG. 4 is a front view of the dialogue terminal 1 in the lit state.
  • FIG. 5 is a front view showing the appearance of the dialogue terminal 1 in the display state.
  • The dialogue terminal 1 consists of a base portion 12 and a sphere portion 11 provided above it. This appearance is only an example; another shape (for example, an animal shape) may be used.
  • The sphere portion 11 is made of a light-transmitting material (resin or glass); by turning on a light emitting device 105, described later, provided inside the sphere portion 11, the terminal can be switched from the unlit state shown in FIG. 3 to the lit state shown in FIG. 4.
  • The dialogue terminal 1 in the lit state can be used as a room light. Light is emitted upward from the light emitting device 105 in the base portion 12 and illuminates the inner surface of the sphere portion 11.
  • The base portion 12 houses a control unit 101, a storage unit 102, an image generation unit 103, a communication interface 104, the light emitting device 105, an input unit 106, a camera 107, a sensor 108, a microphone 109, an output unit 110, a speaker 111, and a display device 112, all described later.
  • The front surface of the base portion 12 has a hole 13 that admits light so that the camera 107 can photograph the surroundings; a transparent resin material is fitted into the hole 13. The sensor 108 may detect nearby people through the hole 13, and the microphone 109 may collect surrounding sound through it.
  • An image is projected onto the inner surface of the sphere portion 11 by the display device 112, and the user can view the projected image from outside the dialogue terminal 1.
  • For example, as shown in FIG. 5, displaying a pattern 19 imitating eyes lets the sphere portion 11 represent a face and make facial expressions.
  • The dialogue terminal 1 displays the pattern 19 and makes a facial expression in response to a call from the user or when asking the user a question.
  • The pattern 19 moves over the sphere portion 11 and changes shape as it is displayed. A mouth may also be displayed in addition to the eyes.
  • FIG. 6 is a diagram showing an example of a facial expression of the dialogue terminal 1.
  • [A] shows the facial expression of the dialogue terminal 1 when it shifts from the standby mode, in which it has detected a nearby person, to the mode in which it has been called or speaks on its own.
  • In the state shown in [A], the pattern around the eyes changes variously (for example, randomly) over a short time.
  • Then, as shown in [F], the eyes are displayed together with an activation sound, and the terminal enters the steady state of [D].
  • From the steady state of [D], the terminal switches among expressions according to the state of the user or of the dialogue terminal 1: [B] greeting, [C] thanking, [E] acknowledging, [G] thinking, [H] requesting, [I] sleepy, and [J] the blank expression shown when the dialogue ends and the terminal returns to standby mode.
  • For example, the thinking expression of [G] is displayed from the time the dialogue terminal 1 receives an inquiry from the user until it receives a response from the remote brain 2.
  • In this way, the dialogue terminal 1 communicates with the user by varying the pattern around the eyes.
  • FIG. 7 is a block diagram showing the configuration of the dialogue terminal 1.
  • The dialogue terminal 1 includes a control unit 101, a storage unit 102, a communication interface 104, an input unit 106, a microphone 109, an output unit 110, and a speaker 111.
  • The dialogue terminal 1 may also include the image generation unit 103 and the display device 112.
  • Furthermore, the dialogue terminal 1 may include at least one of the light emitting device 105, the camera 107, and the sensor 108.
  • The control unit 101 consists of a processor that executes programs.
  • The storage unit 102 consists of memory that stores programs and data.
  • The memory includes a ROM, a non-volatile storage element, and a RAM, a volatile storage element.
  • The ROM stores an immutable program (e.g., an OS) and the like.
  • The RAM is a high-speed, volatile storage element such as a DRAM (Dynamic Random Access Memory), and temporarily stores the programs executed by the processor and the data used during their execution.
  • The dialogue terminal 1 provides its various functions through the processor executing the programs stored in the memory.
  • The image generation unit 103 generates the image to be displayed on the display device 112. That is, according to instructions from the remote brain 2 and the surrounding conditions captured by the various input devices (camera 107, sensor 108, microphone 109, etc.), it selects the image to display from the image patterns stored in the storage unit 102. For example, it selects the eye-imitating pattern 19 displayed on the sphere portion 11 of the dialogue terminal 1 (see FIG. 5).
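As an illustration of this selection step, here is a minimal sketch that maps a terminal state to one of the stored expression patterns of FIG. 6. The state names, pattern handles, and function are assumptions for illustration; the patent does not specify the image generation unit's data structures.

```python
# Minimal sketch of the image generation unit's pattern selection, keyed to
# the expressions of FIG. 6. State names and pattern handles are illustrative
# assumptions, not the patent's identifiers.

EXPRESSION_PATTERNS = {
    "activating":  "pattern_A",  # random eye motion right after wake-up
    "startup":     "pattern_F",  # eyes shown together with the start sound
    "steady":      "pattern_D",  # default steady state
    "greeting":    "pattern_B",
    "thanks":      "pattern_C",
    "acknowledge": "pattern_E",
    "thinking":    "pattern_G",  # shown while waiting for the remote brain 2
    "request":     "pattern_H",
    "sleepy":      "pattern_I",
    "blank":       "pattern_J",  # dialogue ended, returning to standby
}

def select_pattern(state: str, waiting_for_remote_brain: bool) -> str:
    """Pick the image pattern to hand to the display device 112."""
    if waiting_for_remote_brain:
        return EXPRESSION_PATTERNS["thinking"]  # [G] while a reply is pending
    return EXPRESSION_PATTERNS.get(state, EXPRESSION_PATTERNS["steady"])

print(select_pattern("greeting", waiting_for_remote_brain=False))  # pattern_B
```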
  • The communication interface 104 communicates, using a predetermined protocol, with other devices (for example, the user information control/management device 20 constituting the remote brain 2), with the smartphone 51, and with the car navigation 52.
  • As shown in FIG. 4, the light emitting device 105 is a light source that emits visible light to illuminate the sphere portion 11 at the top of the dialogue terminal 1.
  • The light emitting device 105 may share the light source of a projection device (not shown) that functions as the display device 112 described later, or it may be provided separately from the display device 112.
  • The input unit 106 is an interface that sends signals from input devices such as the camera 107, the sensor 108, and the microphone 109 to the control unit 101.
  • The camera 107 captures images of the surroundings of the dialogue terminal 1.
  • The images captured by the camera 107 may be moving images or still images (for example, one image per second).
  • The images captured by the camera 107 are sent to the remote brain 2 via the communication interface 104 and used to identify the user with face recognition technology.
  • To capture its surroundings, the camera 107 may be a fisheye camera that covers a hemisphere, or two or more cameras that together cover the full sphere.
  • The sensor 108 is a human presence sensor that detects whether anyone is near the dialogue terminal 1, using infrared rays, ultrasonic waves, visible light images, or the like.
  • The dialogue terminal 1 may also include sensors that measure its surrounding environment, for example measuring temperature and humidity and sending the results to the control unit 101. When the temperature measured by the dialogue terminal 1 becomes high (for example, 35 °C or higher), the terminal may urge the user to turn on the air conditioner because of the risk of heat stroke.
  • The microphone 109 is a device that collects ambient sound (for example, the user's voice) and converts it into an electric signal.
  • The microphone 109 may be omnidirectional or directional.
  • With a directional microphone, the position of a speaker can be identified from the sound's direction of arrival; by displaying a predetermined image toward that direction, the dialogue terminal 1 can be controlled to face the speaker while communicating.
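The patent does not say how the direction of arrival is obtained; a common approach is to estimate the inter-microphone time delay by cross-correlation. The following sketch illustrates that approach under assumed microphone spacing and sample rate.

```python
# Sketch: estimating a speaker's bearing from the inter-microphone time
# delay (TDOA) via cross-correlation. The patent does not specify the
# method; the microphone spacing and sample rate below are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING = 0.1       # m (assumed distance between two microphones)
SAMPLE_RATE = 16000     # Hz (assumed)

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Return the speaker bearing in degrees (0 = straight ahead)."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # delay in samples
    delay = lag / SAMPLE_RATE                      # delay in seconds
    # Clamp to the physically possible range before taking the arcsine.
    sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: the same noise burst, offset by 3 samples between channels.
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
print(round(estimate_bearing(sig, np.roll(sig, -3)), 1))  # about 40.0 degrees
```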
  • The voice uttered by the user is also analyzed to authenticate that the user is a registered user.
  • The output unit 110 is an interface that sends signals from the control unit 101 to output devices such as the speaker 111 and the display device 112.
  • The speaker 111 is a device that produces sound converted from the electric signals generated by the control unit 101 (for example, words addressed to the user or an alarm sound).
  • The display device 112 is a device that displays a predetermined image on part (or all) of the surface of the dialogue terminal 1, and is used for closer communication with the user.
  • For example, the surface of the dialogue terminal 1 is formed of a light-transmissive (for example, semitransparent) resin material, and a projection device is provided inside. The projection device projects an image onto the inner surface of the dialogue terminal 1, so that the user can see the image shown on the terminal's surface.
  • Alternatively, a liquid crystal display device may be provided on part of the surface of the dialogue terminal 1 to display a predetermined image.
  • The display device 112 need not be a liquid crystal display device; any device capable of displaying an image on the surface of the dialogue terminal 1 may be used.
  • For example, the pattern 19 imitating eyes is displayed on the sphere portion 11 of the dialogue terminal 1 (see FIG. 5).
  • The dialogue terminal 1 may also have a projection function that acquires an image the user wants via the communication line 3 and outputs it to the display device 112.
  • FIG. 8 is a diagram showing the configuration of the user information control / management device 20 that constitutes the remote brain 2.
  • The user information control/management device 20 has a control unit 201, an analysis unit 211, a storage unit 221, and a communication interface 231, and is implemented on a computer having a processor, memory, and a storage device.
  • The functions of the control unit 201 and the analysis unit 211 are realized by the processor executing the programs stored in the memory (for example, a main control program, an artificial intelligence program, and a dialogue program).
  • The memory includes a ROM, a non-volatile storage element, and a RAM, a volatile storage element.
  • The ROM stores an immutable program (for example, the BIOS) and the like.
  • The RAM is a high-speed, volatile storage element such as a DRAM (Dynamic Random Access Memory), and temporarily stores the programs executed by the processor and the data used during their execution.
  • The control unit 201 includes a main control unit 202.
  • The main control unit 202 controls the overall operation of the user information control/management device 20. For example, it detects events for each user and activates interfaces to external services.
  • The analysis unit 211 includes an artificial intelligence 212, a dialogue engine 213, and a behavior determination unit 214.
  • The artificial intelligence 212 learns from input data and derives results by inference based on what it has learned.
  • In this embodiment, the artificial intelligence 212 forms part of the dialogue engine 213, analyzing the user's utterances and creating responses.
  • The artificial intelligence 212 also forms part of the behavior determination unit 214, detecting utterances or behavior that differ from the past.
  • The dialogue engine 213 controls the dialogue with the user. That is, it analyzes the user's utterances and creates responses. The dialogue engine 213 may include dialogue patterns at multiple levels, for example a normal level, a level for when the user feels sad, and a level for when the user feels happy.
  • The behavior determination unit 214 analyzes the user's dialogue, behavior, and images to determine the user's behavior level.
  • The storage unit 221 is a large-capacity, non-volatile storage device such as a magnetic storage device (HDD) or a flash memory (SSD).
  • The storage unit 221 stores the data accessed when programs are executed. It may also store the programs executed by the analysis unit 211; in that case, a program is read from the storage unit 221, loaded into the memory, and executed by the processor.
  • The storage unit 221 includes a robot information holding unit 222, in which information on the dialogue terminal 1 of each user is recorded, a dialogue information holding unit 223, and a user information holding unit 224.
  • The robot information holding unit 222 holds the robot information 280 described later with reference to FIG. 9.
  • The dialogue information holding unit 223 holds logs of dialogues with the user.
  • The user information holding unit 224 holds user information, registered service information, and identity information.
  • The user information records personal information that can identify the user.
  • The registered service information records information about the services the user has registered to use.
  • The identity information records the user's personal attributes.
  • The communication interface 231 is a network interface device that controls communication with other devices (for example, the dialogue terminal 1) according to a predetermined protocol.
  • The user information control/management device 20 may also have an input interface and an output interface.
  • A keyboard or a mouse is connected to the input interface, which receives input from an operator.
  • A display device, a printer, or the like is connected to the output interface, which outputs program execution results in a form the operator can view.
  • The programs executed by the processor of the user information control/management device 20 are provided to the device via a removable medium (CD-ROM, flash memory, etc.) or via the network, and are stored in the non-volatile storage unit 221, a non-transitory storage medium. For this reason, the user information control/management device 20 preferably has an interface for reading data from removable media.
  • The user information control/management device 20 may be a virtual computer on the cloud built on a plurality of physical computer resources, or it may operate on a computer system configured on a single physical computer or on a plurality of logically or physically configured computers.
  • FIG. 9 is a diagram showing an example of the robot information 280.
  • The robot information 280 records information on the dialogue terminal 1 of each user, and has data fields for an individual ID, a user ID, a user name, a smartphone ID, a car navigation ID, a dialogue history, and an analysis history (a record-layout sketch follows the field definitions below).
  • The individual ID is identification information uniquely assigned to the dialogue terminal 1.
  • The user name is the name of the user of the dialogue terminal 1.
  • The user ID is an ID assigned to each user who uses the dialogue terminal 1.
  • The smartphone ID is an ID assigned to each mobile terminal, such as the smartphone 51, owned by the user.
  • The car navigation ID is an ID assigned to each car navigation 52 owned by the user.
  • The dialogue history is the history of dialogues with the dialogue terminal; the date (or date and time) of the last conversation is recorded.
  • The dialogue history may instead be a time-series record of the dialogues.
  • The detailed dialogue data (conversation logs) corresponding to the dialogue history are held in the dialogue information holding unit 223; they may be sound data collected by the microphone 109 or moving image data (with voice) showing the speaker's facial expression.
  • Text data obtained by analyzing the sound data may also be used.
  • Words that characterize a conversation (e.g., strawberry, strawberry pie, butter) may also be recorded.
  • The analysis history records the dates on which the state of the user of the dialogue terminal 1 was analyzed.
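As a sketch of how one robot information 280 record could be laid out, the following uses the fields named above; the code-level names, types, and example values are assumptions for illustration only.

```python
# Sketch of one robot information 280 record (FIG. 9). The field set follows
# the text above; the code-level names, types, and example values are
# illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RobotInfo:
    individual_id: str             # unique ID of the dialogue terminal 1
    user_id: str                   # ID assigned to each user of the terminal
    user_name: str                 # name of the user
    smartphone_ids: List[str]      # IDs of the user's mobile terminals
    car_navigation_ids: List[str]  # IDs of the user's car navigation units
    last_dialogue: date            # dialogue history: date of last conversation
    analysis_dates: List[date] = field(default_factory=list)  # analysis history

record = RobotInfo(
    individual_id="robot-0001",
    user_id="user-42",
    user_name="Example User",
    smartphone_ids=["phone-01"],
    car_navigation_ids=["carnav-01"],
    last_dialogue=date(2019, 8, 27),
)
print(record.user_name, record.last_dialogue)
```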
  • To summarize the behavior so far: the facial expression of the dialogue terminal 1 is conveyed as an image, and the communication tool is mainly voice. When the user leaves the dialogue terminal 1, the facial expression is displayed instead on the smartphone owned by the user, so the user feels that the communication robot stays close by. When the smartphone owned by the user cannot be used, the car navigation takes over communication with the user from the smartphone: the facial expression moves from the smartphone to the car navigation and is displayed on the car navigation display. In this way, the user can keep feeling that the communication robot is close by.
  • FIG. 10 is a diagram showing a usage state of the smartphone 51 owned by the user. Since the user always carries the smartphone 51, the system can communicate with the user even outside the home, for example. While the facial expression is displayed on the smartphone 51 (that is, while the smartphone 51 holds the right to communicate with the user), the dialogue terminal 1 is turned off and displays no facial expression. This function creates the feeling that a single communication robot stays close to the user.
  • FIG. 11 is a diagram showing a usage state of the car navigation 52. Since the car navigation 52 is installed in the user's vehicle, the system can communicate with the user, mainly by voice, while the user is driving or riding in the vehicle. Many recent car navigation systems can detect the user's smartphone; when the smartphone is detected, the right to communicate with the user is transferred from the smartphone to the car navigation, which reinforces this feeling of closeness.
  • The pattern 19 displayed on the screen of the smartphone 51 immediately after switching can be the steady state of [D]. Since the user communicates with the dialogue terminal 1 via the remote brain 2, there is a time lag between the user speaking and the dialogue terminal 1 responding.
  • If [A] or [F] were displayed on the smartphone when communication switches from the dialogue terminal 1 to the smartphone 51 during this time lag, smooth communication between the user and the communication system would be hindered. Therefore, the steady state of [D] is a suitable choice for the pattern 19 first shown on the smartphone 51 after the switch.
  • Alternatively, the pattern 19 that was displayed on the dialogue terminal 1 until just before the switch may be used. In this case, the communication with the dialogue terminal 1, including the pattern 19, is taken over by the smartphone 51, enabling smooth communication between the user and the communication system. The same applies when communication moves to the car navigation 52 immediately after communicating with the smartphone 51.
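A minimal sketch of this choice: the instruction sent to the device taking over either carries the last displayed pattern 19 or falls back to the steady state of [D]. The message shape and names are assumptions.

```python
# Sketch of the expression handover described above: the device taking over
# either continues the last displayed pattern 19 or starts from the steady
# state of [D]. The message shape and names are assumptions.
from typing import Optional

def handover_message(last_pattern: Optional[str], carry_over: bool) -> dict:
    """Build the instruction sent to the device that takes over."""
    if carry_over and last_pattern is not None:
        pattern = last_pattern  # seamless continuation of pattern 19
    else:
        pattern = "pattern_D"   # safe default: the [D] steady state
    return {"command": "start_dialogue_mode", "initial_pattern": pattern}

# The previous device was showing a greeting ([B]) when the switch happened:
print(handover_message("pattern_B", carry_over=True))   # keeps pattern_B
print(handover_message("pattern_B", carry_over=False))  # falls back to [D]
```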
  • Although FIG. 7 was described as the block diagram of the dialogue terminal 1, the block diagrams of the smartphone 51 and the car navigation 52 are the same as FIG. 7.
  • The display device 112 of the smartphone 51 corresponds to the smartphone's display.
  • The display device 112 of the car navigation 52 corresponds to the car navigation's display.
  • The camera 107 of the car navigation 52 corresponds to a camera built into the car navigation 52 itself or a camera retrofitted to it.
  • the smartphone 51 and the car navigation 52 which are other interactive terminals 50, communicate with the remote brain 2 in advance, and communicate with the user by facial expression and voice according to the instruction sent from the remote brain 2.
  • An application that allows you to do this is installed. Then, it is any one of the interactive terminal 1, the smartphone 51, and the car navigation 52 that can communicate with the user while communicating with the remote brain 2.
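A minimal sketch of this exclusivity rule, assuming hypothetical device names and message strings: the remote brain 2 records which single device currently holds the dialogue mode and tells the others to stop.

```python
# Sketch of the exclusivity rule above: the remote brain 2 lets exactly one
# device hold the dialogue mode at a time. Device names and message strings
# are assumptions.

class RemoteBrain:
    DEVICES = {"dialogue_terminal", "smartphone", "car_navigation"}

    def __init__(self):
        self.active = None  # device currently holding the dialogue mode

    def grant_dialogue_mode(self, device: str):
        """Grant the right to `device`; return stop commands for the others."""
        if device not in self.DEVICES:
            raise ValueError(f"unknown device: {device}")
        self.active = device
        return [f"stop_dialogue_mode -> {d}"
                for d in sorted(self.DEVICES - {device})]

brain = RemoteBrain()
print(brain.grant_dialogue_mode("smartphone"))
# ['stop_dialogue_mode -> car_navigation', 'stop_dialogue_mode -> dialogue_terminal']
print(brain.active)  # smartphone
```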
  • FIG. 12 is a flowchart showing the communication processing of the dialogue terminal 1 according to an embodiment of the present invention.
  • The initial state of the dialogue terminal 1 is the unlit state shown in FIG. 3.
  • <S101> It is determined whether the user has issued a dialogue mode activation command to the dialogue terminal 1. Specifically, when the user calls out to the dialogue terminal, for example "Wake up, Robot", the voice is picked up by the microphone 109 of the dialogue terminal 1 and sent to the control unit 101 via the input unit 106. The voice information is then sent to the remote brain 2 via the communication line 3, where the analysis unit 211 of the remote brain 2 recognizes it as an activation command for the dialogue terminal 1. Alternatively, the activation command may be recognized automatically when the user enters the field of view of the camera 107 provided in the dialogue terminal 1.
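The S101 decision can be sketched as follows. The wake phrase text and function name are assumptions; in the actual system the recognition is done by the analysis unit 211 of the remote brain 2, so this only shows the shape of the decision.

```python
# Sketch of the S101 decision: wake on the predetermined short phrase, or on
# the user entering the camera 107 field of view. The wake phrase text and
# function name are assumptions; recognition itself is done by the remote
# brain 2 in the actual system.
from typing import Optional

WAKE_PHRASE = "wake up, robot"  # predetermined short phrase (assumed wording)

def is_activation_command(transcript: Optional[str], user_in_view: bool) -> bool:
    """Return True when S101 should start the dialogue mode."""
    heard_wake = transcript is not None and WAKE_PHRASE in transcript.lower()
    return heard_wake or user_in_view

print(is_activation_command("Hey, wake up, Robot!", user_in_view=False))  # True
print(is_activation_command(None, user_in_view=True))                     # True
print(is_activation_command("good morning", user_in_view=False))          # False
```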
  • Next, the remote brain 2 sends a command to stop the dialogue mode, described later, to the smartphone 51, the car navigation 52, and any other dialogue terminals 50 that did not receive the activation command. Then, as shown in FIGS. 13 and 14, those terminals stop the dialogue mode.
  • The remote brain 2 then instructs the dialogue terminal 1 to shift from the unlit state of FIG. 3 to the state of FIG. 5; the dialogue terminal 1 lights up and displays the facial expression (pattern 19).
  • The behavior of the pattern 19 when the dialogue terminal 1 shifts to the state of FIG. 5 is as described with reference to FIG. 6.
  • The user then communicates with the dialogue terminal 1.
  • The dialogue terminal 1 communicates with the user while exchanging information with the remote brain 2.
  • The change of the pattern 19, that is, the change of the dialogue terminal 1's facial expression, is as described with reference to FIG. 6.
  • While communicating with the remote brain 2, only one of the dialogue terminal 1, the smartphone 51, and the car navigation 52 can communicate with the user at a time.
  • In this step, the dialogue terminal 1 holds that right (the dialogue mode).
  • <S106> It is determined whether the dialogue terminal 1 can still recognize the presence of the user. Specifically, the determination is based on whether the user has left the field of view of the camera 107 provided in the dialogue terminal 1, whether the user's voice is no longer input to the microphone 109, or whether a predetermined time has elapsed in such a state.
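A minimal sketch of the S106 determination, assuming a hypothetical timeout value for the "predetermined time":

```python
# Sketch of the S106 determination: presence is considered lost when neither
# the camera 107 nor the microphone 109 has sensed the user for a
# predetermined time. The timeout value is an assumption.
import time
from typing import Optional

PRESENCE_TIMEOUT_S = 60.0  # assumed value of the "predetermined time"

def presence_lost(last_seen: float, last_heard: float,
                  now: Optional[float] = None) -> bool:
    """True when no camera or microphone evidence of the user is recent."""
    if now is None:
        now = time.monotonic()
    idle = now - max(last_seen, last_heard)
    return idle >= PRESENCE_TIMEOUT_S

# Last seen at t=0 s, last heard at t=-30 s, asked at t=90 s:
print(presence_lost(last_seen=0.0, last_heard=-30.0, now=90.0))  # True
```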
  • FIG. 13 is a flowchart showing the communication processing of the smartphone 51 owned by the user, according to an embodiment of the present invention.
  • <S203> The user communicates with the smartphone 51.
  • The change in the pattern 19, that is, the change in the facial expression displayed on the smartphone 51, is as described with reference to FIG. 6.
  • In this step, the smartphone 51 holds the dialogue mode right.
  • FIG. 14 is a flowchart showing the communication processing of the car navigation 52 owned by the user, according to an embodiment of the present invention.
  • <S300> It is determined whether communication has started between the user's smartphone 51 and the car navigation 52.
  • The start of this communication is used to determine whether the user carrying the smartphone 51 has boarded the car equipped with this car navigation.
  • Next, an instruction to stop the dialogue mode is transmitted to the user's smartphone 51.
  • In this embodiment, this instruction is sent using near field communication, but it may instead be sent via the communication line 3 shown in FIG. 1.
  • <S303> The initial facial expression of the pattern 19 is displayed on the display device 112 of the car navigation 52. This expression is as described with reference to FIG. 6.
  • <S304> The user communicates with the car navigation 52.
  • The change of the pattern 19, that is, the change of the facial expression displayed on the car navigation 52, is as described with reference to FIG. 6. In this step, the car navigation 52 holds the dialogue mode right.
  • <S305> It is determined whether communication with the user's smartphone, such as short-range wireless communication, has been disconnected. Specifically, when the user is the driver, this occurs when the user turns the ignition key off, cutting the vehicle's power, and gets out of the vehicle (the car navigation therefore needs to stay connected to the battery). When the user is not the driver, it occurs when the user gets out of the car.
  • <S308> The user's smartphone 51 is instructed to start the dialogue mode.
  • With the above flow, communication with the user can be handed over from the smartphone 51 to the car navigation 52.
  • That is, the facial expression (pattern 19) moves from the smartphone 51 to the car navigation 52, and the expression is displayed on the display of the car navigation 52.
  • Conversely, communication with the user can be handed back from the car navigation 52 to the smartphone 51.
  • The pattern 19 then moves from the car navigation 52 to the smartphone 51, and the expression is displayed on the screen of the smartphone 51.
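The FIG. 14 flow can be sketched as a small event loop, using only the step numbers given above; the event names and message strings are assumptions, and real boarding detection would come from the vehicle's short-range wireless stack.

```python
# Sketch of the FIG. 14 flow using the step numbers given above: on S300 the
# car navigation detects the smartphone over the short-range link (the user
# boarded), stops the smartphone's dialogue mode, and shows the initial
# pattern 19 (S303); on S305 the link drops (the user left the car) and the
# dialogue mode is handed back to the smartphone (S308). Event names are
# assumptions; real detection would come from the car's wireless stack.

def car_navigation_flow(events):
    """Yield the actions the car navigation 52 takes for each link event."""
    in_dialogue_mode = False
    for event in events:
        if event == "smartphone_connected" and not in_dialogue_mode:   # S300
            yield "send stop_dialogue_mode to smartphone"
            yield "S303: display initial pattern 19 on the car navigation"
            in_dialogue_mode = True                                    # S304
        elif event == "smartphone_disconnected" and in_dialogue_mode:  # S305
            yield "S308: send start_dialogue_mode to smartphone"
            in_dialogue_mode = False

for action in car_navigation_flow(["smartphone_connected",
                                   "smartphone_disconnected"]):
    print(action)
```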
  • FIG. 15 is a flowchart showing the communication processing of the dialogue terminal 1 according to an embodiment of the present invention.
  • FIG. 15 assumes a case in which two or more users share the dialogue terminal 1: one user (hereinafter, user A) is outside the home with the smartphone 51 or in a car, while another user (hereinafter, user B) is in the house.
  • <S400> It is determined whether a dialogue-mode-stop NG notification has been received from user A's smartphone 51 or car navigation 52. This flow prevents the dialogue mode of the smartphone 51 or other terminal of user A, who is communicating with the communication system outside the house, from being forcibly cut off. If no dialogue-mode-stop NG notification arrives within a certain time, or if a dialogue-mode-stop OK notification is received, the process proceeds to S104.
  • When user A, who has been in the house communicating with the smartphone 51, puts the smartphone 51 down, moves through the house, enters the field of view of the dialogue terminal 1, and issues the activation command, the process in S400 automatically shifts to S104, and communication with the dialogue terminal 1 becomes possible.
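A minimal sketch of the S400 guard, assuming a hypothetical reply protocol ("stop_ok"/"stop_ng") and timeout:

```python
# Sketch of the S400 guard described above: the dialogue terminal 1 may take
# the dialogue mode only if user A's terminal answers "stop OK", or sends no
# "stop NG" within a certain time. Reply strings and timeout are assumptions.
from typing import Optional

def s400_may_proceed(reply: Optional[str], waited_s: float,
                     timeout_s: float = 10.0) -> bool:
    """Return True when the flow may proceed to S104."""
    if reply == "stop_ok":
        return True
    if reply == "stop_ng":
        return False              # user A is still communicating outside
    return waited_s >= timeout_s  # no reply: proceed after the timeout

print(s400_may_proceed("stop_ng", waited_s=3.0))  # False
print(s400_may_proceed(None, waited_s=12.0))      # True
```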
  • FIG. 16 is a flowchart showing the communication processing of the smartphone owned by the user, according to an embodiment of the present invention.
  • <S500> It is determined whether there is an instruction to activate the dialogue mode on the smartphone 51.
  • As the activation command, for example, launching the application installed on the smartphone 51 is conceivable.
  • FIG. 17 is a flowchart showing the communication processing of the car navigation owned by the user, according to an embodiment of the present invention.
  • In this flow, the dialogue mode is started based on whether a dialogue mode activation command reaches the smartphone 51 in S500, so S308 of FIG. 14 is omitted.
  • Each determination in the flowcharts of FIGS. 12 to 17 may be performed by the control unit 201 of the remote brain 2, or by the control unit 101 of the dialogue terminal 1, the smartphone 51, or the car navigation 52.
  • The dialogue mode is a state in which communication with the user is possible, that is, a state in which a terminal or other device is connected to the remote brain 2, one of the communication tools.
  • The application that puts a terminal into the dialogue mode is installed in advance on the dialogue terminal 1, the smartphone 51, and the car navigation 52, or it can be downloaded from the remote brain 2 or from a server on the Internet.
  • Reference signs: 1 dialogue terminal; 2 remote brain; 3 communication line (network); 11 sphere portion; 12 base portion; 13 hole; 19 pattern; 20 user information control/management device; 50 other dialogue terminal; 51 smartphone; 52 car navigation; 101 control unit; 102 storage unit; 103 image generation unit; 104 communication interface; 105 light emitting device; 106 input unit; 107 camera; 108 sensor; 109 microphone; 110 output unit; 111 speaker; 112 display device; 201 control unit; 202 main control unit; 211 analysis unit; 212 artificial intelligence; 213 dialogue engine; 214 behavior determination unit; 221 storage unit; 222 robot information holding unit; 223 dialogue information holding unit; 224 user information holding unit; 231 communication interface; 280 robot information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephone Function (AREA)
  • Telephonic Communication Services (AREA)
  • Toys (AREA)
  • Navigation (AREA)

Abstract

A communication system equipped with a communication device for carrying out communication with a user through at least a video image, wherein, when the user and the communication device are carrying out communication and the presence of the user cannot be recognized, an instruction for carrying out communication with the user is issued to a portable terminal equipped with a display device and possessed by the user.

Description

Patent Document 1: Japanese Patent Laid-Open No. 2018-152810 (JP-A-2018-152810)
According to one aspect of the present invention, the user can communicate with the communication robot even after moving outside the house or to a place inside the house away from the robot. Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
Brief description of the drawings:
FIG. 1 is a configuration diagram of a communication system that is an embodiment of the present invention.
FIG. 2 is a diagram showing a usage state of the dialogue terminal.
FIG. 3 is a front view showing the appearance of the dialogue terminal in the unlit state.
FIG. 4 is a front view showing the appearance of the dialogue terminal in the lit state.
FIG. 5 is a front view showing the appearance of the dialogue terminal in the display state.
FIG. 6 is a diagram showing examples of the facial expressions of the dialogue terminal.
FIG. 7 is a block diagram showing the configuration of the dialogue terminal.
FIG. 8 is a diagram showing the configuration of the user information control/management device.
FIG. 9 is a diagram showing an example of the robot information.
FIG. 10 is a diagram showing a usage state of the smartphone.
FIG. 11 is a diagram showing a usage state of the car navigation.
FIG. 12 is a flowchart showing communication processing of the dialogue terminal according to an embodiment of the present invention.
FIG. 13 is a flowchart showing communication processing of the smartphone owned by the user according to an embodiment of the present invention.
FIG. 14 is a flowchart showing communication processing of the car navigation owned by the user according to an embodiment of the present invention.
FIG. 15 is a flowchart showing communication processing of the dialogue terminal according to an embodiment of the present invention.
FIG. 16 is a flowchart showing communication processing of the smartphone owned by the user according to an embodiment of the present invention.
FIG. 17 is a flowchart showing communication processing of the car navigation owned by the user according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. The following description shows specific examples of the content of the present invention, and the present invention is not limited to these descriptions. Various changes and modifications by those skilled in the art are possible within the scope of the technical idea disclosed in this specification, and combining the configurations of the following embodiments as appropriate is also contemplated from the outset. In all drawings illustrating the present invention, components having the same function are given the same reference numeral, and repeated description may be omitted.
 図1は、本発明の一実施例であるコミュニケーションシステムの構成を示す図である。 FIG. 1 is a diagram showing the configuration of a communication system that is an embodiment of the present invention.
 本実施例のコミュニケーションシステムは、コミュニケーションロボットである対話端末1、その他対話端末50及びリモートブレイン2によって構成される。対話端末1は、対話端末1からの情報に基づいてリモートブレイン2が生成した指示に従って動作し、主に音声によって利用者とコミュニケーションする。対話端末1は、一人の利用者が複数を利用しても、一つを利用してもよい。対話端末1は常に利用者とコミュニケーションできるように、利用者が最も長く使用する居住スペースに配置することが望ましい。このため、利用者が複数の部屋を利用する場合は、部屋毎に対話端末1が設置されるとよい。 The communication system according to the present embodiment includes a dialogue terminal 1 which is a communication robot, other dialogue terminals 50, and a remote brain 2. The interactive terminal 1 operates according to the instruction generated by the remote brain 2 based on the information from the interactive terminal 1, and communicates with the user mainly by voice. As for the interactive terminal 1, one user may use a plurality or one. It is desirable to arrange the dialogue terminal 1 in the living space that the user uses for the longest time so that the dialogue terminal 1 can always communicate with the user. Therefore, when the user uses a plurality of rooms, the dialogue terminal 1 may be installed for each room.
 そして、この対話端末1は、後述する特定の利用者により予め決められた短文を音声入力すると起動して、この利用者と会話を行う。 Then, the dialogue terminal 1 is activated when a short sentence predetermined by a specific user, which will be described later, is input by voice, and has a conversation with this user.
 対話端末1、その他対話端末50とリモートブレイン2とは、通信回線3を介して接続される。通信回線3は、有線ネットワーク(例えば、イーサネット(登録商標))でも無線ネットワーク(例えば、近距離無線通信や無線LAN)でも、その組み合わせでもよい。その他対話端末50のうち携帯端末であるスマートフォン51およびカーナビゲーション52とリモートブレイン2との通信は無線ネットワークが望ましい。 The interactive terminal 1, the other interactive terminal 50 and the remote brain 2 are connected via the communication line 3. The communication line 3 may be a wired network (for example, Ethernet (registered trademark)), a wireless network (for example, short-range wireless communication or wireless LAN), or a combination thereof. In addition, a wireless network is desirable for communication between the remote brain 2 and the smartphone 51 and the car navigation 52, which are mobile terminals of the interaction terminal 50.
 リモートブレイン2は、対話端末1から周囲の情報(音声、画像など)を収集し、利用者とのコミュニケーションに関する指示を対話端末1に送信する利用者情報制御・管理装置20で構成される。リモートブレイン2は、クラウド上の計算機によって構成されている。利用者情報制御・管理装置20の詳細な構成は、図8を用いて後述する。 The remote brain 2 is composed of a user information control / management device 20 that collects information (sound, images, etc.) around the user from the interaction terminal 1 and sends instructions regarding communication with the user to the interaction terminal 1. The remote brain 2 is composed of a computer on the cloud. The detailed configuration of the user information control / management device 20 will be described later with reference to FIG.
 その他対話端末50は、例えば後述するこのコミュニケーションシステムに係るアプリケーションをインストールしたスマートフォン51やカーナビゲーション52が含まれる。利用者はスマートフォン51を常に携帯することから、例えば宅外といった対話端末1が設けられていない場所でも、利用者とコミュニケーションすることができる。カーナビゲーション52は利用者が保有する車の中に配置されていることから、利用者が車を運転中または乗車中に、主に音声によって利用者とコミュニケーションすることができる。本実施例ではその他対話端末50としてスマートフォン51やカーナビゲーション52を挙げたがこれに限定されず、例えばテレビといった利用者50とコミュニケーションを取ることが可能な電子機器であればよい。 The other interactive terminal 50 includes, for example, a smartphone 51 and a car navigation 52 in which an application related to this communication system described later is installed. Since the user always carries the smartphone 51, the user can communicate with the user even in a place where the dialogue terminal 1 is not provided, such as outside the home. Since the car navigation 52 is arranged in the vehicle owned by the user, it is possible to communicate with the user mainly by voice while the user is driving or riding in the vehicle. In the present embodiment, the smartphone 51 and the car navigation 52 are cited as the other interaction terminals 50, but the present invention is not limited to this, and any electronic device capable of communicating with the user 50 such as a television may be used.
 図2は、対話端末1の使用状態を示す図である。対話端末1は、利用者がよく利用する居室内に設置され、利用者とコミュニケーションする。 FIG. 2 is a diagram showing a usage state of the interactive terminal 1. The dialogue terminal 1 is installed in a living room that a user often uses and communicates with the user.
 図3から図5は、対話端末1の外観を示す図である。図3は、消灯時の対話端末1の正面図である。図4は、点灯時の対話端末1の正面図である。図5は、表示状態の対話端末1の外観を示す正面図である。 3 to 5 are diagrams showing the external appearance of the interactive terminal 1. FIG. 3 is a front view of the interactive terminal 1 when the light is turned off. FIG. 4 is a front view of the interactive terminal 1 when it is turned on. FIG. 5 is a front view showing the appearance of the interactive terminal 1 in the display state.
 対話端末1は、基部12と、基部12の上方に設けられる球体部11とで構成される。なお、対話端末1の外観は一例であり、他の形状(例えば、動物の形状)でもよい。 The interactive terminal 1 includes a base portion 12 and a sphere portion 11 provided above the base portion 12. Note that the appearance of the interaction terminal 1 is an example, and may have another shape (for example, an animal shape).
 球体部11は、光を透過する材料(樹脂やガラス)で構成されており、球体部11の内部に設けられた後述する発光装置105を点灯することによって、図3に示す消灯状態から図4に示す点灯状態に切り替えることができる。点灯状態の対話端末1は、ルームライトとして使用できる。基部12内の発光装置105からは上方に光が放射され、球体部11の内面を照射する。 The sphere portion 11 is made of a light-transmitting material (resin or glass), and by turning on a light-emitting device 105, which will be described later, provided inside the sphere portion 11, the light-emitting state shown in FIG. It can be switched to the lighting state shown in. The interactive terminal 1 in the lit state can be used as a room light. Light is emitted upward from the light emitting device 105 in the base portion 12 and illuminates the inner surface of the spherical portion 11.
 基部12は、後述する制御部101、記憶部102、画像生成部103、通信インターフェース104、発光装置105、入力部106、カメラ107、センサ108、マイクロホン109、出力部110、スピーカ111及び表示装置112を収容する。基部12の正面には、カメラ107が周囲を撮影するための光を導入する穴13が設けられており、穴13には透明の樹脂材料が取り付けられている。なお、穴13からセンサ108が周囲の人を検出してもよく、マイクロホン109が周囲の音を収集してもよい。 The base 12 includes a control unit 101, a storage unit 102, an image generation unit 103, a communication interface 104, a light emitting device 105, an input unit 106, a camera 107, a sensor 108, a microphone 109, an output unit 110, a speaker 111, and a display device 112, which will be described later. To house. On the front surface of the base portion 12, a hole 13 for introducing light for the camera 107 to photograph the surroundings is provided, and a transparent resin material is attached to the hole 13. It should be noted that the sensor 108 may detect the surrounding person from the hole 13 and the microphone 109 may collect the surrounding sound.
 また、球体部11の内面には、表示装置112によって画像が投影され、利用者が、投影された画像を対話端末1の外部から視認できる。例えば、図5に示すように、目を模した模様19を球体部11に表示することによって、球体部11を顔に模して表情を作ることができる。対話端末1は、利用者からの呼びかけや利用者への問いかけに伴って、模様19を表示し、表情を作る。この模様19は球体部11上を移動して、また形を変えて表示される。また、模様19として、目の他に口を表示してもよい。 Further, an image is projected on the inner surface of the spherical portion 11 by the display device 112, and the user can visually recognize the projected image from the outside of the interactive terminal 1. For example, as shown in FIG. 5, by displaying a pattern 19 simulating eyes on the sphere portion 11, the sphere portion 11 can be modeled on a face to make a facial expression. The interactive terminal 1 displays the pattern 19 and makes a facial expression in response to a call from the user or an inquiry to the user. The pattern 19 is displayed while moving on the spherical portion 11 and changing its shape. Further, as the pattern 19, a mouth may be displayed in addition to the eyes.
 図6は、対話端末1の表情の例を示す図である。 FIG. 6 is a diagram showing an example of a facial expression of the dialogue terminal 1.
 [A]は、対話端末1が、周囲の人を検出した待機モードから、対話端末に呼びかけがあった、または対話端末からの発話をするモードに移行した状態の対話端末1の表情を示す。[A]に示す状態では、目の周りの模様が短時間で様々に(例えば、ランダムに)変化する。その後[F]に示すように、起動音と共に目が表示され、[D]の定常状態となる。[D]定常状態からは、[B]挨拶をする表情、[C]お礼を言う表情、[E]了解の表情、[G]考え中の表情、[H]お願いする表情、[I]眠い表情、[J]対話が終了し、待機モードに移行する無表情などを、利用者や対話端末1の状態に応じて切り替えて表示する。例えば、[G]考え中の表情は、対話端末1が受けた利用者からの問いかけに対して、リモートブレイン2から応答を受けるまでの間に表示される。このように、対話端末1は、目の周りの模様を様々に変化して利用者とコミュニケーションする。 [A] shows the facial expression of the interaction terminal 1 in a state where the interaction terminal 1 has shifted from a standby mode in which a person in the surroundings is detected to a mode in which the interaction terminal is called or speaks from the interaction terminal. In the state shown in [A], the pattern around the eyes changes variously (for example, randomly) in a short time. Thereafter, as shown in [F], the eyes are displayed together with the activation sound, and the steady state of [D] is entered. [D] From a steady state, [B] a greeting expression, [C] a thank-you expression, [E] an understanding expression, [G] a thinking expression, [H] a requested expression, [I] sleepy The facial expression, the expressionlessness that the [J] dialogue ends, and the transition to the standby mode are switched and displayed according to the state of the user or the dialogue terminal 1. For example, [G] the thinking expression is displayed until the interactive terminal 1 receives a response from the remote brain 2 in response to the inquiry from the user. In this way, the dialogue terminal 1 communicates with the user by variously changing the pattern around the eyes.
 図7は、対話端末1の構成を示すブロック図である。 FIG. 7 is a block diagram showing the configuration of the dialogue terminal 1.
 対話端末1は、制御部101、記憶部102、通信インターフェース104、入力部106、マイクロホン109、出力部110及びスピーカ111を有する。対話端末1は、画像生成部103及び表示装置112を有してもよい。さらに、対話端末1は、発光装置105、カメラ107、センサ108の少なくとも一つを有してもよい。 The interactive terminal 1 includes a control unit 101, a storage unit 102, a communication interface 104, an input unit 106, a microphone 109, an output unit 110, and a speaker 111. The interactive terminal 1 may include the image generation unit 103 and the display device 112. Furthermore, the interaction terminal 1 may include at least one of the light emitting device 105, the camera 107, and the sensor 108.
 制御部101は、プログラムを実行するプロセッサによって構成される。記憶部102は、プログラムやデータを格納するメモリによって構成される。メモリは、不揮発性の記憶素子であるROM及び揮発性の記憶素子であるRAMを含む。ROMは、不変のプログラム(例えば、OS)などを格納する。RAMは、DRAM(Dynamic Random Access Memory)のような高速かつ揮発性の記憶素子であり、プロセッサが実行するプログラム及びプログラムの実行時に使用されるデータを一時的に格納する。メモリに格納されたプログラムをプロセッサが実行することによって、対話端末1が諸機能を発揮する。 The control unit 101 is composed of a processor that executes a program. The storage unit 102 is composed of a memory that stores programs and data. The memory includes a ROM which is a non-volatile storage element and a RAM which is a volatile storage element. The ROM stores an immutable program (eg, OS) and the like. The RAM is a high-speed and volatile storage element such as a DRAM (Dynamic Random Access Memory), and temporarily stores a program executed by the processor and data used when the program is executed. The interactive terminal 1 exerts various functions by the processor executing the program stored in the memory.
 画像生成部103は、表示装置112に表示する画像を生成する。すなわち、リモートブレイン2からの指示や各種入力デバイス(カメラ107、センサ108、マイクロホン109等)に入力された周囲の状況に従って、記憶部102に格納された画像パターンから表示装置112に表示する画像を選択する。例えば、対話端末1の球体部11(図5参照)表示される目を模した模様19を選択する。 The image generator 103 generates an image to be displayed on the display device 112. That is, an image to be displayed on the display device 112 is displayed from the image pattern stored in the storage unit 102 according to the surrounding conditions input from the remote brain 2 and various input devices (camera 107, sensor 108, microphone 109, etc.). select. For example, the pattern 19 simulating the eyes displayed on the spherical portion 11 (see FIG. 5) of the interactive terminal 1 is selected.
 通信インターフェース104は、所定のプロトコルを用いて、他の装置(例えば、リモートブレイン2を構成する利用者情報制御・管理装置20)やスマートフォン51およびカーナビゲーション52と通信する。 The communication interface 104 communicates with another device (for example, the user information control / management device 20 configuring the remote brain 2), the smartphone 51, and the car navigation 52 using a predetermined protocol.
 発光装置105は、図4に示すように、対話端末1の上方に設けられる球体部11を光らせるための可視光を放射する光源である。発光装置105は、後述する表示装置112として機能する投影装置(図示せず)の光源と共用してもよいし、表示装置112と別に設けてもよい。 As shown in FIG. 4, the light emitting device 105 is a light source that emits visible light for illuminating the spherical portion 11 provided above the interaction terminal 1. The light emitting device 105 may be shared with the light source of a projection device (not shown) that functions as the display device 112 described later, or may be provided separately from the display device 112.
 入力部106は、カメラ107、センサ108、マイクロホン109などの入力デバイスからの信号を制御部101に送るインターフェースである。 The input unit 106 is an interface that sends signals from input devices such as the camera 107, the sensor 108, and the microphone 109 to the control unit 101.
 カメラ107は、対話端末1の周囲の画像を撮影する。カメラ107が撮影する画像は、動画像でも、静止画像(例えば、1秒に1枚)でもよい。カメラ107が撮影した画像は、通信インターフェース104を介してリモートブレイン2に送られ、顔認識技術を用いて利用者を識別するために用いられる。カメラ107は、周囲の画像を撮影するために、半球を撮影できる魚眼カメラでも、2以上のカメラによって全球を撮影できるものでもよい。 The camera 107 captures an image of the surroundings of the interactive terminal 1. The image captured by the camera 107 may be a moving image or a still image (for example, one image per second). The image captured by the camera 107 is sent to the remote brain 2 via the communication interface 104, and is used to identify the user using the face recognition technology. The camera 107 may be a fisheye camera capable of capturing a hemisphere or a camera capable of capturing the entire sphere with two or more cameras in order to capture an image of the surroundings.
 センサ108は、対話端末1の周囲に人がいるかを検出する人感センサであり、赤外線や超音波や可視光画像などを用いて人を検知する。対話端末1は、対話端末1の周囲の環境を計測するセンサを有してもよい。例えば、温度、湿度などを測定し、測定結果を制御部101に送る。対話端末1が測定した温度が高温(例えば、35度以上)になった場合、利用者が熱中症になる恐れがあるとして、利用者にエアコンをつけるよう促すようにしてもよい。 The sensor 108 is a human sensor that detects whether or not there is a person around the interactive terminal 1, and detects a person using infrared rays, ultrasonic waves, visible light images, or the like. The interaction terminal 1 may include a sensor that measures the environment around the interaction terminal 1. For example, the temperature and humidity are measured and the measurement result is sent to the control unit 101. When the temperature measured by the conversation terminal 1 becomes high (for example, 35 degrees or higher), the user may be urged to turn on the air conditioner because it may cause heat stroke.
 マイクロホン109は、周囲の音(例えば、利用者が発した音声)を収集して、電気信号に変換するデバイスである。マイクロホン109は、無指向性でも、指向性を有してもよい。指向性マイクロホンを使用すると、音の到来方向から話者の位置を特定でき、話者の方向に、所定の画像を表示することによって、対話端末1が話者と向き合ってコミュニケーションするように制御できる。なお、利用者が発した音声を分析して利用者が登録された利用者であることを認証する。 The microphone 109 is a device that collects ambient sound (for example, a voice uttered by a user) and converts it into an electric signal. The microphone 109 may be omnidirectional or directional. By using the directional microphone, the position of the speaker can be specified from the direction of arrival of the sound, and by displaying a predetermined image in the direction of the speaker, the interactive terminal 1 can be controlled so as to face and communicate with the speaker. . The voice uttered by the user is analyzed to authenticate that the user is a registered user.
 The output unit 110 is an interface that sends signals from the control unit 101 to various output devices such as the speaker 111 and the display device 112.
 The speaker 111 is a device that produces sound converted from electric signals generated by the control unit 101 (for example, words addressed to the user, or an alarm sound).
 The display device 112 is a device that displays a predetermined image on part (or all) of the surface of the interactive terminal 1 and is used to communicate more closely with the user. For example, the surface of the interactive terminal 1 may be formed of a light-transmitting (for example, translucent) resin, with a projection device inside; the projection device projects an image onto the inner surface so that the user can see the image on the terminal's surface. Alternatively, a liquid crystal display may be provided on part of the surface to show the image. The display device 112 need not be a liquid crystal display; any device capable of displaying an image on the surface of the interactive terminal 1 may be used.
 For example, a pattern 19 imitating eyes is displayed on the spherical portion 11 of the interactive terminal 1 (see FIG. 5). The interactive terminal 1 may also have a projection function that acquires an image the user wants via the communication line 3 and outputs it to the display device 112.
 FIG. 8 is a diagram showing the configuration of the user information control and management device 20 that constitutes the remote brain 2.
 The user information control and management device 20 has a control unit 201, an analysis unit 211, a storage unit 221, and a communication interface 231, and is implemented on a computer having a processor, memory, and a storage device. The functions of the control unit 201 and the analysis unit 211 are realized by the processor executing programs stored in the memory (for example, a main control program, an artificial intelligence program, and a dialogue program).
 The memory includes ROM, a non-volatile storage element, and RAM, a volatile storage element. The ROM stores immutable programs (for example, the BIOS). The RAM is a fast, volatile storage element such as DRAM (Dynamic Random Access Memory) and temporarily stores the programs executed by the processor and the data used during their execution.
 The control unit 201 includes a main control unit 202, which controls the overall operation of the user information control and management device 20. For example, it detects events for each user and activates interfaces with external services.
 The analysis unit 211 includes an artificial intelligence 212, a dialogue engine 213, and a behavior determination unit 214.
 The artificial intelligence 212 learns from input data and derives results by inference based on what it has learned. In this embodiment, the artificial intelligence 212 forms part of the dialogue engine 213, analyzing the user's utterances and composing responses. It also forms part of the behavior determination unit 214, detecting utterances or behavior that differ from the past.
 The dialogue engine 213 controls the dialogue with the user: it analyzes the user's utterances and composes responses. The dialogue engine 213 may include dialogue patterns at multiple levels, for example a normal level, a level for when the user feels sad, and a level for when the user feels happy.
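 A minimal sketch of such level-based response selection follows; the emotion labels, pattern table, and classifier are illustrative assumptions, not the patent's implementation:

```python
# A sketch of level-based response selection by the dialogue engine 213.
# The levels mirror the examples in the text (normal, sad, happy).
RESPONSE_PATTERNS = {
    "normal": "I see. Tell me more.",
    "sad":    "That sounds hard. I'm here with you.",
    "happy":  "That's wonderful! I'm glad to hear it.",
}

def respond(utterance: str, classify) -> str:
    """classify: str -> one of 'normal' | 'sad' | 'happy' (e.g., an AI model)."""
    level = classify(utterance)
    return RESPONSE_PATTERNS.get(level, RESPONSE_PATTERNS["normal"])

# Example usage with a trivial stand-in classifier:
print(respond("I passed my exam!", lambda u: "happy" if "!" in u else "normal"))
```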
 The behavior determination unit 214 analyzes the user's dialogue, behavior, and images to determine the user's behavior level.
 The storage unit 221 is a large-capacity, non-volatile storage device such as a magnetic disk drive (HDD) or flash memory (SSD). It stores the data accessed when programs run, and may also store the programs executed by the analysis unit 211; in that case a program is read from the storage unit 221, loaded into memory, and executed by the processor.
 The storage unit 221 includes a robot information holding unit 222, which records information on each user's interactive terminal 1, a dialogue information holding unit 223, and a user information holding unit 224. The robot information holding unit 222 holds the robot information 280 described later with reference to FIG. 9. The dialogue information holding unit 223 holds logs of dialogues with users. The user information holding unit 224 holds user information, registered service information, and identity information: user information records personal information that can identify the user; registered service information records the services the user has registered to use; identity information records the user's personal attributes.
 The communication interface 231 is a network interface device that controls communication with other devices (for example, the interactive terminal 1) according to a predetermined protocol.
 The user information control and management device 20 may have an input interface and an output interface. A keyboard, a mouse, or the like is connected to the input interface to receive input from an operator. A display device, a printer, or the like is connected to the output interface to output program results in a form the operator can read.
 The programs executed by the processor of the user information control and management device 20 are provided via removable media (CD-ROM, flash memory, etc.) or over a network, and are stored in the non-volatile storage unit 221, a non-transitory storage medium. The device therefore preferably has an interface for reading data from removable media.
 The user information control and management device 20 may be a virtual machine in the cloud built on multiple physical computing resources, and may run on a single physical computer or on a computer system composed of multiple logically or physically configured computers.
 FIG. 9 is a diagram showing an example of the robot information 280.
 The robot information 280 records information on each user's interactive terminal 1 and includes data fields for individual ID, user ID, user name, smartphone ID, car navigation ID, dialogue history, and analysis history.
 The individual ID is identification information uniquely assigned to the interactive terminal 1. The user is the user of that interactive terminal 1.
 The user ID is an ID assigned to each user of the interactive terminal 1.
 The smartphone ID is an ID assigned to each mobile terminal, such as the smartphone 51, owned by the user.
 The car navigation ID is an ID assigned to each car navigation system 52 owned by the user.
 The dialogue history is the history of dialogues with the interactive terminal; the date (or date and time) of the most recent conversation is recorded. The dialogue record may be a time-series history of dialogues. The detailed dialogue data (conversation logs) corresponding to the dialogue history is held in the dialogue information holding unit 223; it may be the sound data collected by the microphone 109, video data (with audio) capturing the speaker's facial expressions, or text data obtained by analyzing the sound data. Words that characterize a conversation (for example, strawberry, strawberry pie, butter) may be extracted from the text data and used to tag the detailed conversation data.
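 A sketch of this keyword tagging might look as follows; the keyword set and the simple substring matching are illustrative assumptions (a production system would likely use morphological analysis for Japanese transcripts):

```python
# A sketch of keyword tagging for conversation logs, using the example words
# from the text. The matching strategy is an illustrative simplification.
CHARACTERISTIC_WORDS = {"strawberry", "strawberry pie", "butter"}

def tag_conversation(transcript: str) -> list[str]:
    """Return the characteristic words found in a conversation transcript."""
    text = transcript.lower()
    return sorted(w for w in CHARACTERISTIC_WORDS if w in text)

# Example usage:
print(tag_conversation("Let's bake a strawberry pie with lots of butter."))
# -> ['butter', 'strawberry', 'strawberry pie']
```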
 The analysis history is the date on which the state of the user of the interactive terminal 1 was last analyzed.
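 Gathering the fields above, the robot information 280 could be modeled as a record like the following; the field types are assumptions for illustration:

```python
# A sketch of the robot information 280 as a data structure, following the
# data fields listed above (individual ID through analysis history).
from dataclasses import dataclass, field

@dataclass
class RobotInfo:
    individual_id: str          # unique ID of the interactive terminal 1
    user_id: str                # ID of the user of this terminal
    user_name: str
    smartphone_id: str          # ID of the user's smartphone 51
    car_navigation_id: str      # ID of the user's car navigation system 52
    dialogue_history: list[str] = field(default_factory=list)  # conversation dates
    analysis_history: list[str] = field(default_factory=list)  # analysis dates

# Example record:
info = RobotInfo("robot-001", "user-01", "Taro", "phone-01", "nav-01")
```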
 This concludes the description of the interactive terminal 1. Because the interactive terminal 1 is placed in a room that the user uses often and spends long hours in, it can communicate with the user as shown in FIG. 2. However, when the user goes out, or moves to another room even within the house, the terminal can no longer recognize the user and therefore can no longer communicate with the user.
 This embodiment focuses on three points: the facial expression of the interactive terminal 1 is an image, the communication tool is voice, and most users carry a smartphone. When the system detects that the user has left the interactive terminal, the facial expression is shown on the user's smartphone. The user thus feels that the communication robot stays close by.
 In particular, when the user is driving a car, the user's own smartphone cannot be used. In such a case, communication with the user is handed over from the smartphone to the car navigation system: the facial expression moves from the smartphone to the car navigation system and is shown on its display. The user can thus feel that the communication robot stays close to the user.
 FIG. 10 is a diagram showing the smartphone 51 owned by the user in use. Because the user always carries the smartphone 51, the system can communicate with the user even outside the home, for example. While the facial expression is shown on the smartphone 51 (that is, while the smartphone 51 holds the right to communicate with the user), the interactive terminal 1 is turned off and shows no expression. This function creates the impression that a single communication robot follows the user and stays close.
 FIG. 11 is a diagram showing the car navigation system 52 in use. Because the car navigation system 52 is installed in the user's car, it can communicate with the user, mainly by voice, while the user is driving or riding in the car. Recent car navigation systems have extensive functions for detecting the user's smartphone; when the smartphone is detected, transferring the right to communicate with the user from the smartphone to the car navigation system produces an even stronger sense of closeness.
 When the user communicates with the communication system through the smartphone 51 or the car navigation system 52, a pattern similar to the pattern 19 shown in FIG. 6 is displayed on the screen of the smartphone 51 or the car navigation system 52. The user can thus communicate with the communication system through the smartphone 51 or the car navigation system 52 with the same attachment as when conversing with the interactive terminal 1.
 As described in detail later, when the user switches from communicating with the interactive terminal 1 to communicating with the smartphone 51 immediately afterwards, the pattern 19 shown on the screen of the smartphone 51 right after the switch may be the steady state [D]. Because the user communicates with the interactive terminal 1 via the remote brain 2, there is a delay between the user's utterance and the terminal's reply. If the communication partner is switched from the interactive terminal 1 to the smartphone 51 during this delay and the smartphone shows [A] or [F], smooth communication between the user and the communication system is disrupted. Therefore the steady state [D] is a suitable first pattern 19 to show on the smartphone 51 after switching. Alternatively, instead of [D], the pattern 19 shown on the interactive terminal 1 just before the switch may be used; in that case the communication with the interactive terminal 1, including the pattern 19, is carried over to the smartphone 51, enabling smooth communication between the user and the communication system. The same applies when communicating with the car navigation system 52 immediately after communicating with the smartphone 51.
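 The choice of the first expression after a switch can be sketched as below, following the two options just described (start from the steady state [D], or carry over the previous terminal's expression); the function and flag names are illustrative:

```python
# A sketch of choosing the first pattern 19 after a handover. The state
# letters refer to FIG. 6; the carry_over policy flag is an assumption.
STEADY_STATE = "D"

def initial_expression(previous_expression: str, carry_over: bool) -> str:
    """Expression to show on the newly active terminal right after a switch."""
    if carry_over:
        # Continue the previous terminal's expression so the conversation
        # appears to be handed over seamlessly, pattern 19 included.
        return previous_expression
    # Otherwise fall back to the steady state to avoid showing a transient
    # expression (such as [A] or [F]) out of context.
    return STEADY_STATE

print(initial_expression("F", carry_over=False))  # -> 'D'
```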
 Although FIG. 7 was described as a block diagram of the interactive terminal 1, the block diagrams of the smartphone 51 and the car navigation system 52 are the same as FIG. 7. The display device 112 corresponds to the display of the smartphone 51 in the smartphone and to the display of the car navigation system 52 in the car navigation system. The camera 107 of the car navigation system 52 corresponds to a camera built into the car navigation system 52 itself or a camera retrofitted to it.
 Returning to FIG. 1, the smartphone 51 and the car navigation system 52, which are the other interactive terminals 50, have an application installed in advance that communicates with the remote brain 2 and, following the instructions sent from the remote brain 2, enables communication with the user through facial expressions and voice. At any time, only one of the interactive terminal 1, the smartphone 51, and the car navigation system 52 can communicate with the user while communicating with the remote brain 2.
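 This single-terminal rule can be sketched as an arbiter that grants the dialogue mode to exactly one terminal and stops it on the others; class and method names are assumptions, not the patent's implementation:

```python
# A sketch of granting the dialogue mode to exactly one terminal at a time,
# as the text requires. Terminal is a stand-in for the three device types.
class Terminal:
    def __init__(self, name: str):
        self.name = name
    def start_dialogue_mode(self):
        print(f"{self.name}: dialogue mode started")
    def stop_dialogue_mode(self):
        print(f"{self.name}: dialogue mode stopped")

class DialogueModeArbiter:
    """Ensures only one terminal communicates with the user at a time."""
    def __init__(self, terminals: dict):
        self.terminals = terminals
        self.active = None
    def grant(self, terminal_id: str):
        for tid, t in self.terminals.items():
            if tid == terminal_id:
                t.start_dialogue_mode()
            else:
                t.stop_dialogue_mode()
        self.active = terminal_id

arbiter = DialogueModeArbiter({n: Terminal(n) for n in ("robot", "smartphone", "carnav")})
arbiter.grant("smartphone")  # smartphone on, the other two off
```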
 FIG. 12 is a flowchart showing the communication processing of the interactive terminal 1 according to one embodiment of the present invention.
 <S100> The initial state of the interactive terminal 1 is the off state shown in FIG. 3.
 <S101> It is determined whether the user has issued a command to start the dialogue mode of the interactive terminal 1. Specifically, when the user calls out to the interactive terminal, for example "Robot, wake up", the voice is picked up by the microphone 109 of the interactive terminal 1 and sent to the control unit 101 via the input unit 106. The voice information is then sent to the remote brain 2 via the communication line 3, where the analysis unit 211 of the remote brain 2 determines that it is a start command for the interactive terminal 1. Alternatively, automatically recognizing that the user has entered the field of view of the camera 107 of the interactive terminal 1 may be treated as a start command.
 <S102> If it is determined that no start command was issued, the interactive terminal 1 remains off.
 <S103> If it is determined that a start command was issued, the remote brain 2 sends a command to stop the dialogue mode, described later, to the smartphone 51, the car navigation system 52, and any other interactive terminals that did not receive the start command. As shown in FIGS. 13 and 14, the smartphone 51, the car navigation system 52, and the other interactive terminals 50 that did not receive the start command then stop their dialogue mode.
 <S104> The remote brain 2 instructs the interactive terminal 1 to change from the off state of FIG. 3 to the state of FIG. 5; the interactive terminal 1 lights up and shows the pattern 19 as its facial expression. The pattern 19 shown when the interactive terminal 1 enters the state of FIG. 5 is as described with reference to FIG. 6.
 <S105> The user communicates with the interactive terminal 1, which communicates with the user by exchanging data with the remote brain 2. The changes of the pattern 19, which express the terminal's changing facial expression, are as described with reference to FIG. 6. As noted above, only one of the interactive terminal 1, the smartphone 51, and the car navigation system 52 can communicate with the user while communicating with the remote brain 2; in this step, the interactive terminal 1 holds that right (the dialogue mode).
 <S106> It is determined whether the interactive terminal 1 can recognize the presence of the user, for example based on whether the user has left the field of view of the camera 107, whether the user's voice is no longer being picked up by the microphone 109, or whether a predetermined time has elapsed in such a state.
 <S107> If Yes in S106, the terminal remains ready to continue communicating with the user.
 <S108> If No in S106, the remote brain 2 instructs the smartphone 51 to enter the dialogue mode. Thus, even when the user has moved to a position where communication with the interactive terminal 1 is impossible, the user can continue communicating through the smartphone 51, realizing a communication system that stays close to the user. The interactive terminal 1 then returns to the off state of FIG. 3; that is, the interactive terminal 1 stops its dialogue mode.
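 Steps S100 to S108 can be sketched as a simple polling loop like the one below; the helper predicates and remote-brain calls are assumed interfaces, not the patent's implementation:

```python
# A sketch of the interactive terminal's flow in FIG. 12 (S100-S108).
def terminal_loop(terminal, remote_brain):
    terminal.turn_off()                                  # S100: initial off state
    while True:
        if not terminal.start_command_received():        # S101 (voice or camera)
            continue                                     # S102: stay off
        remote_brain.stop_other_terminals("robot")       # S103
        terminal.show_expression("D")                    # S104: light up pattern 19
        while terminal.user_recognized():                # S106 -> S107
            terminal.converse(remote_brain)              # S105: dialogue mode here
        remote_brain.grant_dialogue_mode("smartphone")   # S108: hand over
        terminal.turn_off()                              #       and go dark again
```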
 FIG. 13 is a flowchart of the communication processing of the user's smartphone 51 according to one embodiment of the present invention.
 <S200> It is determined whether a command to enter the dialogue mode has been sent in S108.
 <S201> If No in S200, the smartphone remains in a state ready to accept a transition to the dialogue mode.
 <S202> If Yes in S200, the initial expression of the communication system is shown on the display device 112 of the smartphone 51. Specifically, a pattern similar to the pattern 19 displayed on the interactive terminal 1 is shown on the display device 112 of the smartphone 51.
 <S203> The user communicates with the smartphone 51. The changes of the pattern 19 shown on the smartphone 51, which express the changing facial expression, are as described with reference to FIG. 6. In this step, the smartphone 51 holds the dialogue mode.
 <S204> It is determined whether a command to stop the dialogue mode has been sent in S103.
 <S205> If No in S204, the smartphone remains ready to continue communicating with the user.
 <S206> If Yes in S204, the smartphone 51 stops its dialogue mode.
 When the dialogue mode is stopped in S206, the user can communicate with the interactive terminal 1, as shown in S104.
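 For comparison, the smartphone side (S200 to S206) reduces to a small message handler; the message names and display API are illustrative assumptions:

```python
# A sketch of the smartphone flow in FIG. 13 as an event handler.
def on_remote_brain_message(phone, message: str) -> None:
    if message == "enter_dialogue_mode":    # sent in S108
        phone.show_expression("D")          # S202: show the initial pattern 19
        phone.dialogue_mode = True          # S203: converse with the user
    elif message == "stop_dialogue_mode":   # sent in S103
        phone.dialogue_mode = False         # S206: stop the dialogue mode
        phone.clear_screen()
```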
 FIG. 14 is a flowchart showing the communication processing of the user's car navigation system 52 according to one embodiment of the present invention.
 <S300> It is determined whether communication has started between the user's smartphone 51 and the car navigation system 52. The start of this communication is used to determine whether the user carrying the smartphone 51 has gotten into the car equipped with this car navigation system.
 <S301> If No in S300, the car navigation system remains in a state ready to accept a transition to the dialogue mode.
 <S302> If Yes in S300, an instruction to stop the dialogue mode is sent to the user's smartphone 51. In this embodiment the instruction is sent by short-range wireless communication, but it may also be sent via the communication line 3 shown in FIG. 1.
 <S303> The initial expression of the pattern 19 is shown on the display device 112 of the car navigation system 52. This expression is as described with reference to FIG. 6.
 <S304> The user communicates with the car navigation system 52. The changes of the pattern 19 shown on the car navigation system 52, which express the changing facial expression, are as described with reference to FIG. 6. In this step, the car navigation system holds the dialogue mode.
 <S305> It is determined whether the communication with the user's smartphone, such as the short-range wireless link, has been disconnected. Specifically, if the user is the driver, this is when the user turns the ignition key off, cutting the car's power, and gets out of the car (the car navigation system must remain connected to the battery at all times); if the user is not the driver, it is when the user gets out of the car.
 <S306> If No in S305, the car navigation system remains ready to continue communicating with the user.
 <S307> If Yes in S305, the dialogue mode of the car navigation system 52 is stopped.
 <S308> The user's smartphone 51 is instructed to start the dialogue mode.
 Through the above flow, communication with the user can be handed over from the smartphone 51 to the car navigation system 52. The expression (pattern 19) moves from the smartphone 51 to the car navigation system 52 and is shown on the display of the car navigation system 52, so the user can feel that the communication robot stays close by. When the user gets out of the car, communication with the user can be handed back from the car navigation system 52 to the smartphone 51; the pattern 19 then moves from the car navigation system 52 to the smartphone 51 and the expression is shown on the smartphone's screen.
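 The car navigation flow S300 to S308 can be sketched around the short-range link with the smartphone; the link and messaging APIs are illustrative assumptions (S308 is shown going over the communication line 3, since the short-range link is gone by then):

```python
# A sketch of the car navigation flow in FIG. 14 (S300-S308).
def carnav_loop(carnav, smartphone_link, remote_brain):
    while True:
        if not smartphone_link.connected():          # S300/S301: wait for boarding
            continue
        smartphone_link.send("stop_dialogue_mode")   # S302 (short-range or line 3)
        carnav.show_expression("D")                  # S303: initial pattern 19
        while smartphone_link.connected():           # S305/S306
            carnav.converse()                        # S304: dialogue mode here
        carnav.stop_dialogue_mode()                  # S307: the user left the car
        remote_brain.send("smartphone", "enter_dialogue_mode")  # S308 via line 3
```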
 FIG. 15 is a flowchart showing the communication processing of the interactive terminal 1 according to one embodiment of the present invention. The embodiment described with reference to FIGS. 15 to 17 takes as an example a case where two or more users share the interactive terminal 1: one user (hereinafter, user A) is communicating with the communication system outside the home through the smartphone 51 or the car navigation system 52 when another user at home (hereinafter, user B) issues a start command to the interactive terminal 1.
 <S400> It is determined whether user A's smartphone 51 or car navigation system 52 has replied that stopping the dialogue mode is not acceptable (stop NG). This flow prevents the dialogue mode of the device through which user A is communicating with the communication system outside the home, such as the smartphone 51, from being forcibly disconnected. If a certain time elapses without a stop-NG reply, or if a stop-OK reply is received, the process proceeds to S104.
 <S401> If Yes in S400, the interactive terminal 1 remains off. That is, because user A is communicating with the communication system, user B is prevented from starting to communicate with it.
 <S402> If No in S106, the interactive terminal 1 stops its own dialogue mode. This prevents the following situation: when the user moves within the home from the room where one interactive terminal 1 is installed to a room where another interactive terminal 1 is installed, communication would pass automatically from the first interactive terminal 1 to the smartphone 51 and then immediately to the other interactive terminal 1. Smooth communication with the communication system within the home is thus achieved.
 Note that when user A, who is at home communicating with the smartphone 51, puts the smartphone 51 down, moves through the house, and triggers a start command by entering the field of view of the interactive terminal 1, no stop-NG reply arrives from user A's smartphone within the set time in S400, so the process automatically proceeds to S104 and user A can communicate with the interactive terminal 1.
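 The stop request with veto and timeout in S400 can be sketched as follows; the timeout value and queue-based messaging are illustrative assumptions (silence until the timeout is treated as consent):

```python
# A sketch of the stop-NG veto with timeout described in S400.
import queue

def request_stop_with_veto(replies: "queue.Queue[str]", timeout_s: float = 10.0) -> bool:
    """Ask the active terminal to stop; return True if the stop may proceed.

    replies receives 'NG' if the current user refuses, 'OK' if they accept.
    No reply within the timeout means the user has walked away from the device.
    """
    try:
        reply = replies.get(timeout=timeout_s)
    except queue.Empty:
        return True          # no NG within the time limit: proceed to S104
    return reply != "NG"     # explicit OK proceeds; NG keeps the terminal off

# Example: an empty queue times out and the stop proceeds.
q: "queue.Queue[str]" = queue.Queue()
print(request_stop_with_veto(q, timeout_s=0.1))  # -> True
```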
 FIG. 16 is a flowchart of the communication processing of the user's smartphone according to one embodiment of the present invention.
 <S500> It is determined whether a command to start the dialogue mode has been issued to the smartphone 51. One possible start command is launching the application installed on the smartphone 51.
 <S501> If Yes in S204, it is determined whether user A of the smartphone 51 accepts stopping the dialogue mode of the smartphone 51. This prevents communication with the smartphone 51 from being suddenly cut off by the dialogue mode stop command of S103.
 <S502> If No in S501, a stop-NG reply is sent.
 FIG. 17 is a flowchart showing the communication processing of the user's car navigation system according to one embodiment of the present invention. As described above, in this embodiment the dialogue mode is started based on whether a start command was issued to the smartphone 51 in S500, so S308 of FIG. 14 is omitted.
 <S600> This flow is the same as S400. It prevents the dialogue mode of user A's smartphone 51 or similar device, through which user A is communicating with the communication system outside the home, from being forcibly disconnected.
 <S601> This flow combines S204 and S501. It is determined whether the dialogue mode stop command sent from user B's smartphone or from the robot is accepted. This prevents communication with the car navigation system 52 from being suddenly cut off by the dialogue mode stop command of S103.
 <S602> If No in S601, a stop-NG reply is sent.
 Each determination in the flowcharts of FIGS. 12 to 17 may be made by the control unit 201 of the remote brain 2, or by the control unit 101 of the interactive terminal 1, the smartphone 51, or the car navigation system 52.
 The order of the flowchart steps may be changed as long as no ambiguity arises, and the flowcharts of FIGS. 12 to 17 may be combined as long as no ambiguity arises.
 The dialogue mode is a state in which communication with the user is possible, that is, a state in which a terminal can connect to the remote brain 2, one of the communication tools.
 The application that puts a terminal into the dialogue mode is preinstalled on the interactive terminal 1, the smartphone 51, and the car navigation system 52, or can be downloaded from the remote brain 2 or a server on the Internet.
 1 Interactive terminal
 2 Remote brain
 3 Communication line (network)
 11 Spherical portion
 12 Base
 13 Hole
 19 Pattern
 20 User information control and management device
 50 Other interactive terminal
 51 Smartphone
 52 Car navigation system
 101 Control unit
 102 Storage unit
 103 Image generation unit
 104 Communication interface
 105 Light-emitting device
 106 Input unit
 107 Camera
 108 Sensor
 109 Microphone
 110 Output unit
 111 Speaker
 112 Display device
 201 Control unit
 202 Main control unit
 211 Analysis unit
 212 Artificial intelligence
 213 Dialogue engine
 214 Behavior determination unit
 221 Storage unit
 222 Robot information holding unit
 223 Dialogue information holding unit
 224 User information holding unit
 231 Communication interface
 280 Robot information

Claims (4)

  1.  A communication system comprising a communication device that communicates with a user at least through video, wherein, when the presence of the user can no longer be recognized while the user and the communication device are communicating, a mobile terminal with a display device owned by the user is instructed to communicate with the user.
  2.  A communication system comprising a communication device that communicates with a user at least through video, wherein, when the presence of the user can no longer be recognized while the user and the communication device are communicating, a mobile terminal with a display device owned by the user is instructed to communicate with the user, and the communication between the user and the communication device is stopped.
  3.  A communication system comprising a communication device that communicates with a user at least through video, wherein, when a car navigation system and a mobile terminal with a display device owned by the user become able to communicate with each other while the user is communicating with the mobile terminal with a display device owned by the user, the car navigation system is instructed to communicate with the user.
  4.  A communication system comprising a communication device that communicates with a user at least through video, wherein, when a car navigation system and a mobile terminal with a display device owned by the user become able to communicate with each other while the user is communicating with the mobile terminal with a display device owned by the user, the car navigation system is instructed to communicate with the user, and the communication between the user and the mobile terminal with a display device owned by the user is stopped.
PCT/JP2019/033440 2018-10-12 2019-08-27 Communication system WO2020075403A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018193087A JP7202836B2 (en) 2018-10-12 2018-10-12 communication system
JP2018-193087 2018-10-12

Publications (1)

Publication Number Publication Date
WO2020075403A1 true WO2020075403A1 (en) 2020-04-16

Family

ID=70164715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/033440 WO2020075403A1 (en) 2018-10-12 2019-08-27 Communication system

Country Status (2)

Country Link
JP (1) JP7202836B2 (en)
WO (1) WO2020075403A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003008767A (en) * 2001-06-26 2003-01-10 Nippon Seiki Co Ltd Avatar
JP2006154926A (en) * 2004-11-25 2006-06-15 Denso Corp Electronic equipment operation system using character display and electronic apparatuses
JP2018036397A (en) * 2016-08-30 2018-03-08 シャープ株式会社 Response system and apparatus


Also Published As

Publication number Publication date
JP2020061050A (en) 2020-04-16
JP7202836B2 (en) 2023-01-12

Similar Documents

Publication Publication Date Title
KR102374910B1 (en) Voice data processing method and electronic device supporting the same
KR102389625B1 (en) Electronic apparatus for processing user utterance and controlling method thereof
JP6669162B2 (en) Information processing apparatus, control method, and program
JP6669073B2 (en) Information processing apparatus, control method, and program
WO2017215297A1 (en) Cloud interactive system, multicognitive intelligent robot of same, and cognitive interaction method therefor
KR20190006403A (en) Voice processing method and system supporting the same
JP2019536150A (en) Social robot with environmental control function
US11302325B2 (en) Automatic dialogue design
KR102463806B1 (en) Electronic device capable of moving and method for operating thereof
JP6067905B1 (en) Robot control program generation system
WO2018155116A1 (en) Information processing device, information processing method, and computer program
WO2017141530A1 (en) Information processing device, information processing method and program
KR20190008663A (en) Voice data processing method and system supporting the same
JP6891601B2 (en) Robot control programs, robot devices, and robot control methods
WO2019077897A1 (en) Information processing device, information processing method, and program
TWI827825B (en) System and method to view occupant status and manage devices of building
JP2005335053A (en) Robot, robot control apparatus and robot control method
JP2019124855A (en) Apparatus and program and the like
JP2005313308A (en) Robot, robot control method, robot control program, and thinking device
KR102511517B1 (en) Voice input processing method and electronic device supportingthe same
KR20190106925A (en) Ai robot and the control method thereof
JP6798258B2 (en) Generation program, generation device, control program, control method, robot device and call system
JP2002261966A (en) Communication support system and photographing equipment
WO2020075403A1 (en) Communication system
US20200211406A1 (en) Managing multi-role activities in a physical room with multimedia communications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19872235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19872235

Country of ref document: EP

Kind code of ref document: A1