WO2018047932A1 - Interactive device, robot, processing method, program - Google Patents

Interactive device, robot, processing method, program

Info

Publication number
WO2018047932A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
dialogue
user
processing unit
dialog
Prior art date
Application number
PCT/JP2017/032410
Other languages
English (en)
Japanese (ja)
Inventor
久美子 高塚
山賀 宏之
伊藤 真由美
康一 森川
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation
Publication of WO2018047932A1 publication Critical patent/WO2018047932A1/fr

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/16 Sound input; Sound output
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Definitions

  • the present invention relates to a dialogue apparatus, a robot, a processing method, and a program.
  • Patent Documents 1 and 2 disclose techniques related to these.
  • an object of the present invention is to provide an interactive device, a robot, a processing method, and a program that solve the above-described problems.
  • the dialog device includes: a dialog start condition determination unit that determines whether acquired first acquisition information matches a dialog start condition; an analysis unit that, when the first acquisition information matches the dialog start condition, performs analysis related to user detection based on the first acquisition information or on information obtained from a sensor device; and a dialogue processing unit that, when a user is detected based on the user detection analysis result, outputs first dialogue information related to the dialogue with that user.
  • the processing method determines whether acquired first acquisition information matches a dialog start condition; when it does, performs analysis related to user detection based on the first acquisition information or on information obtained from a sensor device; and, when a user is detected based on the user detection analysis result, outputs first dialogue information related to the dialogue with that user.
  • the program causes a computer to determine whether or not acquired first acquisition information matches a dialog start condition and, when the first acquisition information matches the dialog start condition, to perform the analysis and dialogue output described above.
  • FIG. 1 is a first diagram illustrating an interactive apparatus and an image display example according to the first embodiment.
  • the interactive apparatus 1 has a display screen 16.
  • the interactive device 1 is a tablet terminal, for example.
  • a tablet terminal is an embodiment of an ICT device.
  • the interactive apparatus 1 displays the character image 100 and the auxiliary image 101 on the display screen 16, and displays simplified operation buttons in the operation button display area 110 so that even a user unaccustomed to ICT devices, such as an elderly person, can easily operate the screen. In the present embodiment, icon images of only three operation buttons are displayed in the operation button display area 110.
  • the dialogue apparatus 1 includes a camera 18.
  • FIG. 2 is a hardware configuration diagram of the interactive apparatus according to the first embodiment.
  • the interactive apparatus 1 includes a CPU (Central Processing Unit) 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, an SSD (Solid State Drive) 14, a communication module 15, a display screen 16, an IF (interface) 17, a camera 18, and the like.
  • the display screen 16 is configured by a liquid crystal monitor, a touch panel, or the like, and may have an input function for a user to input an operation by touching the touch panel in addition to a display function.
  • FIG. 3 is a functional block diagram of the interactive apparatus according to the first embodiment.
  • by executing a dialogue processing program recorded in the ROM 13 (FIG. 2) or the SSD 14 (FIG. 2), the CPU 11 (FIG. 2) of the dialogue apparatus 1 provides the functions of the control unit 111, the dialogue start condition determination unit 112, the analysis unit 113, the dialogue processing unit 114, the transmission processing unit 115, and the response information notification unit 116.
  • the CPU 11 of the interactive apparatus 1 has the function of the communication application processing unit 117 by starting the communication application program.
  • the control unit 111 controls other functional units.
  • the dialog start condition determination unit 112 determines whether the acquired information acquired by the dialog device 1 matches the dialog start condition.
  • when the acquired information matches the dialog start condition, the analysis unit 113 analyzes the acquired information or information obtained from a sensor device such as the camera 18 or the touch panel constituting the display screen 16.
  • the analysis unit 113 performs analysis related to user detection based on the information.
  • the dialogue processing unit 114 performs an output process of dialogue information regarding the dialogue with the user.
  • the dialogue information includes, for example, voice information or character information.
  • after the dialogue information is output, the transmission processing unit 115 transmits an analysis result obtained by analyzing information acquired from the user's actions.
  • when acquired information is obtained, the response information notification unit 116 notifies a predetermined destination of the presence or absence of response information.
  • the response information is information indicating the content of the response by the user to the dialogue information.
  • the communication application processing unit 117 provides one or more functions such as a mail function, a message processing function, and an SNS (Social Networking Service) function.
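  • As a rough sketch, the functional units described above (control unit 111, dialog start condition determination unit 112, analysis unit 113, dialogue processing unit 114) could be wired together as below. Every class and method name here is an illustrative assumption, not an identifier taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the functional units of FIG. 3; all names are
# illustrative assumptions, not from the patent.

@dataclass
class ReceivedInfo:
    sender: str
    body: str

class DialogStartConditionDeterminer:
    """Mirrors unit 112: does acquired information match the start condition?"""
    def __init__(self, allowed_senders):
        self.allowed_senders = set(allowed_senders)

    def matches(self, info: ReceivedInfo) -> bool:
        return info.sender in self.allowed_senders

class Analyzer:
    """Mirrors unit 113: analysis related to user detection."""
    def detect_user(self, sensor_data: dict) -> bool:
        return bool(sensor_data.get("face_detected"))

class DialogueProcessor:
    """Mirrors unit 114: outputs dialogue information for the user."""
    def output(self, info: ReceivedInfo) -> str:
        return f"You have a message from {info.sender}: {info.body}"

class Controller:
    """Mirrors control unit 111: coordinates the other units."""
    def __init__(self, determiner, analyzer, processor):
        self.determiner = determiner
        self.analyzer = analyzer
        self.processor = processor

    def handle(self, info: ReceivedInfo, sensor_data: dict) -> Optional[str]:
        if not self.determiner.matches(info):
            return None  # first acquisition information does not match
        if not self.analyzer.detect_user(sensor_data):
            return None  # no user detected; output nothing
        return self.processor.output(info)
```

  • The point of the sketch is the gating order the patent describes: condition check first, then user detection, and only then dialogue output.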
  • FIG. 4 is a second diagram illustrating the interactive apparatus and its image display example according to the first embodiment.
  • the interactive apparatus 1 displays the character image 100 and displays a plurality of operation buttons in a predetermined operation button display area 110 in the screen area.
  • the dialogue apparatus 1 does not, in principle, change the position of the operation button display area 110. This allows a user unfamiliar with ICT devices to operate it without hesitation.
  • the dialogue apparatus 1 may animate the character image 100, for example showing it walking across the screen or gesturing as if in conversation.
  • the interactive apparatus 1 may display an auxiliary image 101 representing the emotion of the character image 100, as shown in FIG. 1, where a heart mark is displayed as the auxiliary image 101.
  • the character image 100 shown in FIG. 4 moves left and right, walking between the positions of character image 100a and character image 100b.
  • FIG. 5 is a diagram showing a processing flow of the interactive apparatus according to the first embodiment. Next, the processing flow of the interactive apparatus 1 will be described in order.
  • the dialogue processing unit 114 of the dialogue apparatus 1 displays the character image 100, the auxiliary image 101, and operation buttons after activation (step S501).
  • the dialogue processing unit 114 controls the type (display type) and movement of the character image 100 and the auxiliary image 101. For example, the dialogue processing unit 114 displays an image that attracts the user's interest, such as moving the character indicated by the character image 100 on the screen or shaking the character's head.
  • the dialogue processing unit 114 may change or move the color of the auxiliary image 101.
  • the dialog start condition determination unit 112 is set to acquire the reception information (first acquisition information) when the communication application processing unit 117 receives the communication information.
  • when receiving communication information, the communication application processing unit 117 outputs received information based on that communication information to the dialog start condition determination unit 112.
  • the communication application processing unit 117 is a functional unit that performs application processing related to mail transmission / reception.
  • the received information may include information such as a transmission source identifier such as a transmission source address or a transmission source user name, a face image of the transmission source user, a mail text, and attached data.
  • the communication application processing unit 117 detects these pieces of information as received information.
  • the received information may likewise include a transmission source identifier such as a transmission source user name, a face image of the transmission source user, a message body, and attached data.
  • the received information may include information such as a caller user name and a call instruction.
  • the dialog start condition determination unit 112 acquires the received information (step S502); acquiring received information is one way in which a service function (communication application function) of the dialogue apparatus 1 acquires an event. When the received information is acquired, the dialog start condition determination unit 112 decides to start a dialogue and instructs the dialogue processing unit 114 accordingly (step S503). The dialogue processing unit 114 outputs a voice call (step S504) and displays information on the screen notifying that the event has been acquired (step S505); this notification may take the form of a movement of the character image 100 or a mode of the auxiliary image 101. The control unit 111 of the interactive apparatus 1 detects the acquisition of the received information and activates the camera 18 (step S506).
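  • The event-driven sequence of steps S502 to S506 might be sketched as follows; `ui` and `camera` are hypothetical stand-ins for the display/speaker and the camera 18, and all names are illustrative.

```python
def handle_received_info(received_info, ui, camera):
    """Sketch of steps S502-S506; not the patent's actual implementation."""
    events = []
    # S502/S503: acquiring received information triggers the dialogue start.
    events.append("dialog_start")
    # S504: output a voice call to draw the user's attention.
    ui.play_voice_call()
    events.append("voice_call")
    # S505: notify on screen, e.g. via the character image's movement.
    ui.show_notification("message received")
    events.append("notify")
    # S506: activate the camera (e.g. in video mode) to detect the user.
    camera.start(mode="video")
    events.append("camera_on")
    return events
```

  • The returned event list just makes the fixed ordering of the flow explicit.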
  • the camera 18 is activated, for example, in a video shooting mode.
  • the dialogue apparatus 1 is usually placed on a shelf or a desk, for example.
  • it is assumed that the user picks up the dialogue apparatus 1 and brings his or her face close to the display screen 16.
  • the camera 18 captures the user's face.
  • the camera 18 outputs the captured image (each frame) included in the moving image to the analysis unit 113.
  • the analysis unit 113 determines whether or not a face image (second acquisition information) can be detected from the captured image (step S507).
  • when a face image is detected, the analysis unit 113 compares it with a stored face image of the user captured in advance and determines whether they match, in the same manner as a face authentication process.
  • the analysis unit 113 determines whether the authentication of the face image is successful (step S508).
  • when authentication succeeds, the analysis unit 113 outputs a dialogue start instruction indicating successful authentication to the dialogue processing unit 114.
  • the analysis unit 113 may output a dialogue start instruction to the dialogue processing unit 114 when a face image can be detected from the captured image without performing face authentication.
  • the dialogue processing unit 114 determines whether or not to output the dialogue information based on the detection information of the face image (second acquisition information).
  • the interactive apparatus 1 may perform authentication processing using voiceprint information instead of the user's face image or together with the face image.
  • in that case, the dialogue apparatus 1 is equipped with a microphone; the analysis unit 113 analyzes voice information acquired from the microphone to generate voiceprint information, and authenticates whether it matches voiceprint information of the user stored in advance.
  • the interactive apparatus 1 may perform an authentication process using the user's fingerprint information.
  • in that case, the interactive device 1 is provided with a fingerprint sensor, and the analysis unit 113 analyzes fingerprint information acquired from the sensor and authenticates whether it matches the user's fingerprint information stored in advance.
  • the analysis unit 113 outputs the authentication success to the dialogue processing unit 114 as described above.
  • the interactive device 1 may perform authentication processing based on iris information.
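  • The alternative authentication modalities described above (face, voiceprint, fingerprint, iris) could be dispatched through a single check, as sketched below. Real biometric matching is fuzzy; the exact comparison here is a placeholder, and all names are illustrative assumptions.

```python
def authenticate(modality, sample, enrolled):
    """Placeholder authentication for one modality against enrolled templates.

    enrolled maps a modality name to a stored template; a real system would
    use similarity scoring rather than equality.
    """
    if modality not in ("face", "voiceprint", "fingerprint", "iris"):
        raise ValueError(f"unknown modality: {modality}")
    stored = enrolled.get(modality)
    return stored is not None and stored == sample

def should_start_dialogue(samples, enrolled):
    # The analysis unit 113 outputs a dialogue start instruction when any
    # configured modality authenticates successfully.
    return any(authenticate(m, s, enrolled) for m, s in samples.items())
```

  • Whichever modality succeeds, the outcome is the same dialogue start instruction to the dialogue processing unit 114.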
  • in step S509, the dialogue processing unit 114 displays the character image 100 and the auxiliary image 101 with a predetermined action added.
  • the dialog start condition determining unit 112 instructs the dialog processing unit 114 to start a dialog. Then, in step S504, the dialogue processing unit 114 performs dialogue processing. However, instead of these processes, the following process may be performed.
  • the control unit 111 of the dialogue apparatus 1 detects a predetermined time based on a timer, and the dialogue start condition determination unit 112 acquires information indicating the detection. Then, the dialog start condition determination unit 112 instructs the dialog processing unit 114 to start the dialog in response to detecting the predetermined time, and the dialog processing unit 114 performs the dialog processing.
  • in this case, step S502 is replaced with a determination of whether the predetermined time set by the timer has been detected. When it has, the processing from step S503 onward is performed; the dialogue apparatus 1 thus carries out steps S503 to S509, and the processing from step S510 onward is omitted because no received information has been acquired.
  • the process in which the dialog start condition determination unit 112 acquires information (first acquisition information) indicating that a predetermined time has been detected corresponds to one mode in which the first acquisition information matches the dialog start condition.
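  • The timer-based trigger could look like the sketch below: a timer firing at a preset time is treated as first acquisition information that matches the dialog start condition. The function name, the schedule format, and the tolerance are all hypothetical.

```python
import datetime

def timer_event_matches(now, scheduled_times, tolerance_s=30):
    """Return True when `now` falls within tolerance_s seconds of any
    preset time of day; a match stands in for the dialog start condition."""
    for t in scheduled_times:
        target = now.replace(hour=t.hour, minute=t.minute,
                             second=t.second, microsecond=0)
        if abs((now - target).total_seconds()) <= tolerance_s:
            return True
    return False
```

  • On a match, the apparatus would proceed directly to the dialogue processing of steps S503 to S509, with no received information involved.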
  • FIG. 6 is a third diagram illustrating the interactive apparatus according to the first embodiment and an image display example thereof.
  • the dialogue processing unit 114 may display the character image 100 with its line of sight directed toward the front of the screen, blinking, or moving its mouth.
  • the dialogue processing unit 114 detects pauses in the user's utterances and, during a pause, displays the character image 100 nodding or blinking.
  • the dialogue processing unit 114 outputs the character image 100 that assists the dialogue based on the dialogue information with the user to the display screen 16.
  • the dialogue processing unit 114 displays contents included in the received information, such as the transmission source user name, the transmission source user's face image 102, and the mail or message text 103 (step S510).
  • the display content may be displayed in any manner.
  • after displaying the content in step S510, the dialogue processing unit 114 may carry on a dialogue with the user so that the reply to the communication information received by the communication application processing unit 117 is completed by conversation alone, without manual operation by the user.
  • the dialogue processing unit 114 detects the user's voice, analyzes it, and converts it to text (step S511). It notifies the communication application processing unit 117 of the character information obtained from the voice, and the communication application processing unit 117 generates a mail or message with that character information as its body.
  • the communication application processing unit 117 transmits the generated communication information, such as a mail or message, to the user who sent the received information, identified by the transmission source identifier, or to a user set in advance as the destination (step S512).
  • that is, the second acquisition information includes voice information, and the communication application processing unit 117 transmits character information obtained by analyzing that voice information to the transmission source of the first acquisition information.
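  • The reply-by-voice flow of steps S511 and S512 can be sketched as below; `transcribe` and `send` are injected stand-ins for a speech recognizer and the communication application processing unit 117, and all names are illustrative.

```python
def reply_by_voice(voice_input, received_info, transcribe, send):
    """Sketch of steps S511-S512: convert the user's speech to text and
    send it back to the sender of the received information."""
    text = transcribe(voice_input)   # S511: voice -> character information
    message = {"to": received_info["sender"], "body": text}
    send(message)                    # S512: reply to the original sender
    return message
```

  • Injecting the recognizer and sender keeps the sketch independent of any particular speech or mail API.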
  • the dialogue apparatus 1 can immediately notify the user, by screen display or sound, that the communication application processing unit 117 has received communication information (received information).
  • when the received information is from a predetermined transmission source, the dialogue apparatus 1 can immediately notify the user of the reception by screen display or sound.
  • even if the user of the interactive device 1 is not familiar with ICT devices, the user can view information such as the content of the received information and the sender's face image simply by bringing his or her face close to the device, and can use the communication application functions provided in the dialogue apparatus 1, such as mail, SNS, and messaging, with little or no manual operation.
  • by displaying and animating the character image, the interactive device 1 can give the user the impression of conversing with the character, easing the psychological barrier to operating an ICT device.
  • FIG. 7 is a diagram showing a processing flow of the interactive apparatus according to the second embodiment.
  • the dialogue processing unit 114 of the dialogue apparatus 1 displays the character image 100, the auxiliary image 101, and operation buttons after activation (step S701).
  • the dialogue processing unit 114 controls the type (display type) and movement of the character image 100 and the auxiliary image 101.
  • the dialogue processing unit 114 displays an image that attracts the user's interest, such as moving the character indicated by the character image 100 on the screen or shaking the character's head.
  • the dialogue processing unit 114 may change or move the color of the auxiliary image 101.
  • the dialog start condition determination unit 112 is set to acquire the reception information (first acquisition information) when the communication application processing unit 117 receives the communication information.
  • when receiving communication information, the communication application processing unit 117 outputs received information based on that communication information to the dialog start condition determination unit 112.
  • the communication application processing unit 117 is a functional unit that performs application processing related to mail transmission / reception.
  • the received information may include information such as a transmission source identifier such as a transmission source address or a transmission source user name, a face image of the transmission source user, a mail text, and attached data.
  • the communication application processing unit 117 detects these pieces of information as received information.
  • the received information may likewise include a transmission source identifier such as a transmission source user name, a face image of the transmission source user, a message body, and attached data.
  • the received information may include information such as a caller user name and a call instruction.
  • the dialog start condition determination unit 112 acquires the received information (step S702) and determines whether it matches the dialog start condition (step S703). It may determine that the condition is met simply when received information is acquired, or it may extract predetermined information from the received information and determine that the condition is met when that information matches information specified by the start condition. For example, it may determine that the dialog start condition is met when the transmission source address and transmission source user name included in the received information match a predetermined transmission source address and user name stored in advance.
  • the dialog start condition determination unit 112 may determine that the dialog start condition is met when the sensing information acquired from the IF 17 or the camera 18 is acquired.
  • the dialog start condition determination unit 112 may determine that sensing information matches the dialog start condition when it matches predetermined sensing information stored in advance. For example, when the sensing information indicates that the display screen 16 has been touched, the unit may determine that the dialog start condition is met upon detecting the touch.
  • when the sensing information is a face image captured by the camera 18, the dialog start condition determination unit 112 may determine that the dialog start condition is met if the face image is that of a predetermined user.
  • similarly, when the sensing information is voice information, the dialog start condition determination unit 112 may determine that the dialog start condition is met if voiceprint information based on the voice information matches the voiceprint information of a predetermined user.
  • when the dialog start condition determination unit 112 determines that the received information or acquired information (first acquisition information) matches the dialog start condition, it outputs that information to the dialogue processing unit 114.
  • the acquired information is sensing information, detection information, image information, voiceprint information, and the like.
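  • The branching check of step S703 over received information and the various kinds of sensing information could be sketched as one dispatch function; the field names and condition structure are hypothetical, and the template comparison is a placeholder for real biometric matching.

```python
def matches_start_condition(first_info, condition):
    """Sketch of step S703: does first acquisition information match the
    dialog start condition? All keys here are illustrative."""
    kind = first_info.get("kind")
    if kind == "received":
        # Match a predetermined sender address / user name stored in advance.
        return (first_info.get("sender_address") in condition["allowed_addresses"]
                or first_info.get("sender_name") in condition["allowed_names"])
    if kind == "touch":
        # A detected touch on the display screen 16 meets the condition.
        return True
    if kind in ("face", "voiceprint"):
        # Sensing information matches when it equals a stored template.
        return first_info.get("template") == condition["templates"].get(kind)
    return False
```

  • On a match, the information would be forwarded to the dialogue processing unit 114, as the surrounding text describes.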
  • the dialogue processing unit 114 performs dialogue processing based on the received or acquired information. Specifically, based on the received information from the communication application processing unit 117, it notifies the user that the communication application processing unit 117 has received the information (step S704). For this notification, the dialogue processing unit 114 may change the movement of the character image 100, or may output from a speaker a predetermined sound, such as the character's voice announcing the reception. This notification operation is one aspect of the dialogue processing, and the character image 100 and sound used for it are one aspect of the dialogue promotion information.
  • the dialogue promotion information is information that prompts the user to interact.
  • the control unit 111 of the interactive apparatus 1 detects that the dialogue processing unit 114 has acquired the received or acquired information, and activates the camera 18 (step S705).
  • the camera 18 is activated in, for example, a moving image shooting mode.
  • the dialogue apparatus 1 is usually placed on a shelf or a desk, for example.
  • the user of the dialog device 1 holds the dialog device 1 and lifts it to bring the face closer to the display screen 16.
  • the camera 18 captures the user's face.
  • the camera 18 outputs the captured image (each frame) included in the moving image to the analysis unit 113.
  • the analysis unit 113 determines whether or not a face image (second acquisition information) can be detected from the captured image. When the face image is detected, the analysis unit 113 determines whether or not the face image matches the face image obtained by photographing the user's face in advance, as in the face authentication process. The analysis unit 113 determines whether or not the face image has been successfully authenticated (step S706). When the face image matches the face image obtained by photographing the user's face in advance, the analysis unit 113 outputs a dialogue start instruction indicating successful authentication to the dialogue processing unit 114. Note that the analysis unit 113 may output a dialogue start instruction to the dialogue processing unit 114 when a face image can be detected from the captured image without performing face authentication.
  • the interactive device 1 may perform authentication processing using the user's voiceprint information.
  • the dialogue apparatus 1 is equipped with a microphone, and the voice information (second acquisition information) acquired from the microphone is analyzed by the analysis unit 113 to generate voiceprint information. Then, authentication is performed as to whether or not it matches the voice print information of the user stored in advance.
  • the interactive apparatus 1 may perform an authentication process using the user's fingerprint information.
  • in that case, the interactive device 1 is provided with a fingerprint sensor, and the analysis unit 113 analyzes fingerprint information acquired from the sensor and authenticates whether it matches the user's fingerprint information stored in advance.
  • the analysis unit 113 outputs the authentication success to the dialogue processing unit 114 as described above.
  • the dialogue apparatus 1 may perform authentication processing using the user's iris information.
• When the dialogue processing unit 114 detects successful authentication (YES in step S706), it performs dialogue processing (step S707). In this dialogue processing, the dialogue processing unit 114 displays the character image 100 and the auxiliary image 101 with predetermined actions added.
• For example, the dialogue processing unit 114 may display the character image 100 with its line of sight directed toward the front of the screen, or with blinking and mouth movements added.
• Further, the dialogue processing unit 114 detects pauses in the user's utterance, and during such a pause it may display the character image 100 nodding, or blinking and winking.
• The dialogue processing unit 114 also displays the contents included in the received information, such as the sender's user name, the sender's face image 102, and the mail or message text 103.
  • the display content may be displayed in any manner.
  • the dialogue information includes the character information 103, and the dialogue processing unit 114 outputs the character information 103 together with the face image 102 of the transmission source user of the reception information (first acquisition information).
• The dialogue apparatus 1 can thus immediately notify the user, by screen display or sound, that the communication application processing unit 117 has received communication information (reception information).
  • the dialogue apparatus 1 can immediately notify the reception to the user by screen display or sound when the reception information is from a predetermined transmission source.
• Even if the user of the interactive device 1 is unfamiliar with ICT devices, the user can browse information such as the contents of the received information and the sender's face image simply by bringing his or her face close to the device, with almost no manual operation.
• The user can therefore use the functions of the communication applications provided in the dialog device 1, such as mail, SNS, and message applications, without difficulty.
• Because the interactive device 1 displays and animates the character image 100, the user can be given the impression of interacting with the character image 100. This eases the psychological barrier the user may feel toward operating an ICT device.
• In step S706 of the process flow described above, information indicating that the user of the dialog device 1 has interacted may be transmitted to the user who sent the received information.
  • the analysis unit 113 outputs authentication success to the transmission processing unit 115.
  • the transmission processing unit 115 acquires reception information.
  • the received information includes a sender identifier such as a sender mail address, a sender user name, and a sender user ID.
  • the transmission processing unit 115 uses this transmission source identifier to instruct the communication application processing unit 117 to transmit information indicating that the authentication has been successful or that a dialogue has occurred.
• The communication application processing unit 117 transmits, to the transmission source identified by the transmission source identifier, information indicating that the authentication succeeded, that a dialogue took place, or that a dialogue did not take place.
• This process is an aspect in which the transmission processing unit 115 controls transmission of an acquired-information analysis result, obtained by analyzing the second acquisition information that was acquired, based on a user action, after the output of the dialogue information. It is also an aspect of notifying a predetermined destination of the presence or absence of reply information when the second acquisition information is acquired. Note that a face image of the user of the dialog device 1 may be included in the information, transmitted to the destination, indicating that a dialogue took place.
• The processing of step S703 in the flow described above covers the case where the received information matches the dialog start condition.
• The acquired information can match the dialog start condition in other ways as well: for example, when it is detected that the user has touched the display screen 16, when the face image captured by the camera 18 is determined to be that of a predetermined user, or when the user's voice detected by the microphone is determined to carry the voiceprint of a predetermined user. In these cases too, based on the acquired information matching the dialog start condition, information indicating that the user of the dialog device 1 has interacted may be transmitted to another user at a predetermined destination.
  • the dialog start condition determination unit 112 outputs to the transmission processing unit 115 that the acquired information matches the dialog start condition.
• When the transmission processing unit 115 detects a match with the dialog start condition, it acquires an identifier of the predetermined destination, such as a mail address, user name, or user ID, from a storage unit such as the SSD 14.
  • the transmission processing unit 115 instructs the communication application processing unit 117 to transmit information indicating that the user of the interactive apparatus 1 has interacted.
• The communication application processing unit 117 transmits information indicating that the user of the interactive apparatus 1 has interacted to the predetermined destination, using that identifier.
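• The notification flow just described — a dialog start condition matches, a destination identifier is read from storage such as the SSD 14, and the communication application is instructed to send a notice — can be sketched as follows. The storage key, message text, and transport callback are illustrative assumptions, not details from the source.

```python
def notify_interaction(storage, send_message):
    """On a dialog-start-condition match: fetch the predetermined
    destination identifier from storage (e.g. the SSD 14) and instruct
    the communication application to send a 'user has interacted' notice."""
    destination = storage.get("notify_destination")
    if destination is None:
        return False  # no predetermined destination registered
    send_message(destination, "The user of the interactive device has interacted.")
    return True

# Usage with stand-in storage and transport:
sent = []
storage = {"notify_destination": "family@example.com"}
notify_interaction(storage, lambda to, body: sent.append((to, body)))
print(sent[0][0])  # family@example.com
```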
• In step S706, authentication is performed as to whether or not the face image matches that of the predetermined user; however, whether or not to output the dialogue information may instead be determined based on other detection information derived from the face image, which is the second acquisition information.
• For example, the analysis unit 113 detects the size of the face image in the captured image, and when the size is equal to or larger than a predetermined size, it determines that the user has approached the dialogue device 1, treats the authentication as successful, and instructs the dialogue processing unit 114 to start dialogue processing.
  • the size of the face image may be determined by the number of pixels in the image range recognized as the face in the captured image.
• The analysis unit 113 may also detect the face orientation based on the face image, that is, the angle between the face direction and a line perpendicular to the screen plane, and determine whether or not the face is directly facing the display screen 16. When it determines that the face is directly facing the display screen 16, the analysis unit 113 concludes that the user is looking at the dialog device 1 and, treating the authentication as successful, instructs the dialogue processing unit 114 to start dialogue processing. Alternatively, when the movement speed of the face image is slower than a predetermined speed, the analysis unit 113 may determine that the user is about to use the interactive device 1 and likewise instruct the dialogue processing unit 114 to start dialogue processing.
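• The size-and-speed criteria above can be sketched as follows; the bounding-box representation, thresholds, and frame interval are illustrative assumptions, not values given in the source.

```python
def face_area_ratio(face_box, frame_w, frame_h):
    # Fraction of the captured image occupied by the detected face box.
    x, y, w, h = face_box
    return (w * h) / (frame_w * frame_h)

def face_speed(prev_box, cur_box, dt):
    # Speed of the face-box centre between two frames, pixels per second.
    (px, py, pw, ph), (cx, cy, cw, ch) = prev_box, cur_box
    pcx, pcy = px + pw / 2, py + ph / 2
    ccx, ccy = cx + cw / 2, cy + ch / 2
    return ((ccx - pcx) ** 2 + (ccy - pcy) ** 2) ** 0.5 / dt

def should_start_dialog(face_box, prev_box, frame_w, frame_h, dt,
                        min_ratio=0.1, max_speed=50.0):
    # Start only when the face is large (user is near) and moving slowly.
    return (face_area_ratio(face_box, frame_w, frame_h) >= min_ratio
            and face_speed(prev_box, face_box, dt) <= max_speed)

# Large, slowly moving face in a 640x480 frame -> start the dialogue.
print(should_start_dialog((200, 100, 240, 240), (205, 102, 240, 240),
                          640, 480, 0.2))  # True
```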
  • the analysis unit 113 may output the analysis result of these face images to the dialogue processing unit 114.
• The dialogue processing unit 114 may change the movement of the character image 100 and the movement and type of the auxiliary image 101 according to analysis results such as the size, orientation, angle, and movement speed of the face.
• When the analysis unit 113 determines, based on the analysis result, that the user is about to use the dialogue apparatus 1, the dialogue processing unit 114 acquires the analysis result and displays the character image 100 performing a gesture (action) that invites the user to speak.
• The analysis unit 113 may calculate the distance between the interactive device 1 and the person in front of it from the size of the face detected in the face image, and determine that the user is about to use the interactive device 1 when the distance is equal to or less than a threshold value. The distance may be calculated, for example, from the size of the area occupied by the face image in the captured image.
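• The distance estimate from face size can be sketched with a pinhole-camera approximation; the focal length, average face width, and threshold below are illustrative assumptions, not values from the source.

```python
def estimate_distance_cm(face_pixel_width, focal_length_px=500.0,
                         real_face_width_cm=16.0):
    # Pinhole-camera approximation: distance = f * real_width / pixel_width.
    return focal_length_px * real_face_width_cm / face_pixel_width

def user_is_close(face_pixel_width, threshold_cm=40.0):
    # The user is treated as "about to use the device" within the threshold.
    return estimate_distance_cm(face_pixel_width) <= threshold_cm

print(round(estimate_distance_cm(200), 1))   # 40.0
print(user_is_close(250), user_is_close(100))  # True False
```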
• The analysis unit 113 may also detect the user's eyes, nose, and mouth from the face image, estimate the position of the entire face, and estimate the angle of the face relative to the dialogue apparatus 1 from the positions of the eyes, nose, and mouth within the face.
• Even when a face of at least the threshold size is detected, if the movement of the face is determined to be fast, the dialogue processing unit 114 may refrain from outputting voice information, such as the character image 100 simulating a spoken call.
• For the face-image analysis, the analysis unit 113 stores face images of a plurality of family members living together in advance. Based on a comparison between the acquired face image and the stored face images, the analysis unit 113 determines which family member the acquired face image belongs to. The analysis unit 113 may determine that the user is about to use the interactive device 1 only when the user is determined to be a specific user. The analysis unit 113 may also control the character image 100 not to interact when a visitor (a person whose face image information is not registered) is detected. In this case, the analysis unit 113 may determine that a visitor has been detected when a face image that does not match any stored face image is detected.
• When a face image that does not match any stored face image is detected, the analysis unit 113 may determine that there is a visitor and that the user is not trying to use the interactive device 1.
• In that case, the dialogue processing unit 114 may refrain from performing the dialogue processing.
• The dialogue processing unit 114 may likewise determine, based on voice analysis, that a visitor other than the user is present, and refrain from performing the dialogue processing.
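• The family-member identification and visitor detection described above can be sketched as a best-match search over stored face features, falling back to "visitor" when nothing is similar enough. The similarity measure, embeddings, names, and threshold are illustrative assumptions.

```python
import math

def similarity(a, b):
    # Cosine similarity between two face-feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify(face, enrolled, threshold=0.75):
    """Return the name of the best-matching registered family member,
    or 'visitor' if no stored face is similar enough."""
    best_name, best_score = "visitor", threshold
    for name, stored in enrolled.items():
        s = similarity(face, stored)
        if s >= best_score:
            best_name, best_score = name, s
    return best_name

family = {"grandmother": [1.0, 0.0], "grandson": [0.0, 1.0]}
print(identify([0.95, 0.05], family))  # grandmother
print(identify([0.7, -0.7], family))   # visitor
```

When `identify` returns `"visitor"`, the dialogue processing would be suppressed as described above.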
• Similarly, when the analysis unit 113 determines, based on second acquisition information other than the face image (fingerprint information, voice information, detection of a finger touching the display screen 16, and so on), that the user is about to use the interactive device 1, it may output the analysis result to the dialogue processing unit 114, and the dialogue processing unit 114 may display the character image 100 performing a gesture (action) that invites the user to speak based on that analysis result.
  • the dialog processing unit 114 may perform a dialog with the user so that the reply processing corresponding to reception of the communication application processing unit 117 is completed only by the conversation without the user's operation.
• Control may also be performed so that the user's voice is detected, the voice is analyzed, and character conversion processing is performed.
• In this case, the dialogue processing unit 114 notifies the communication application processing unit 117 of character information obtained by analyzing the voice, and the communication application processing unit 117 generates a mail or message in which the character information is written in the body.
• The communication application processing unit 117 may transmit the generated communication information, such as a mail or message, to the user who sent the received information, based on the transmission source identifier, or to a user predetermined as the destination.
• In the dialogue processing of step S707, the dialogue processing unit 114 may also detect character information from a photographed image obtained from the camera 18 and transmit it to the communication application processing unit 117. For example, instead of speaking, the user of the dialogue apparatus 1 writes a sentence on a sheet during the dialogue processing and holds it up in front of the camera 18. The camera 18 outputs the image information generated by photographing the sheet to the dialogue processing unit 114, which analyzes the image information, extracts the character information, and notifies it to the communication application processing unit 117. The communication application processing unit 117 then generates a mail or message in which the character information is written in the body.
• The communication application processing unit 117 may transmit the generated communication information, such as a mail or message, to the user who sent the received information, based on the transmission source identifier, or to a user predetermined as the destination.
• The image information itself may also be attached to the mail or message and transmitted to the destination.
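• The sheet-reading reply flow (photograph a handwritten sheet, extract the text, and send it as a mail or message body, optionally attaching the image) can be sketched as follows. The `ocr` callback stands in for the analysis unit's character extraction, which the source does not specify; the addresses and message layout are illustrative assumptions.

```python
def compose_reply_from_sheet(image, ocr, sender_id):
    """Extract the text written on a sheet held up to the camera and
    build a reply message addressed to the original sender.
    `ocr` is a stand-in for the analysis unit's character extraction."""
    body = ocr(image).strip()
    if not body:
        return None  # nothing legible on the sheet; send nothing
    return {"to": sender_id, "body": body, "attachment": image}

# Usage with a stub OCR and a hypothetical sender address:
msg = compose_reply_from_sheet(b"<jpeg bytes>",
                               lambda img: "Thank you, I am fine.",
                               "daughter@example.com")
print(msg["to"], "-", msg["body"])
```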
• On the display screen 16 of the dialog apparatus 1, the area where the character image 100 is displayed, the area 120 where the dialog information is displayed, and the area where the character information is displayed are fixed.
• The dialogue processing unit 114 and other units such as the communication application processing unit 117 of the dialogue device 1 display text spoken by the character image 100 together with auxiliary operation buttons and the like, but the displayed positions and sizes are kept unchanged even when the operation step changes, so that the user does not have to relearn the operation method.
• Because the interactive device 1 fixes the display area according to the type of information to be displayed, even a user unfamiliar with ICT devices is less likely to be confused by irregular displays, can become familiar with the interactive device 1, and can operate it easily.
• To make contents easy to read and understand, the dialogue processing unit 114 of the dialogue device 1 displays character information with a wide horizontal display range (devised so that one phrase fits on the screen without line breaks). Further, the dialogue processing unit 114 displays only a small number of operation buttons, for example about three, so that users such as elderly people do not get lost in the operation. Reducing the number of buttons also allows their size and spacing to be increased, which suppresses mistaken presses.
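• The layout rule above — one phrase should fit on the screen without a line break — can be sketched as a simple width check. The per-character width approximation and candidate font sizes are illustrative assumptions, not values from the source.

```python
def fits_without_wrap(phrase, screen_width_px, char_width_px=24):
    # One phrase should fit on the screen without a line break.
    return len(phrase) * char_width_px <= screen_width_px

def pick_font_size(phrase, screen_width_px, sizes=(32, 28, 24, 20)):
    # Choose the largest candidate font size at which the phrase still
    # fits; per-character width is approximated as 0.75 * font size.
    for size in sizes:
        if len(phrase) * size * 0.75 <= screen_width_px:
            return size
    return sizes[-1]

print(fits_without_wrap("How are you today?", 640))  # True
print(pick_font_size("How are you today?", 480))     # 32
```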
  • FIG. 8 is a diagram showing a processing flow of the interactive apparatus according to the third embodiment.
  • the dialogue processing unit 114 of the dialogue apparatus 1 displays the character image 100, the auxiliary image 101, and operation buttons after activation (step S801).
  • the dialogue processing unit 114 controls the display type and movement of the character image 100 and the auxiliary image 101.
  • the dialogue processing unit 114 displays an image that attracts the user's interest, such as moving the character indicated by the character image 100 on the screen or shaking the character's head.
  • the dialogue processing unit 114 may change or move the color of the auxiliary image 101.
  • the dialog start condition determination unit 112 is set to acquire the reception information (first acquisition information) when the communication application processing unit 117 receives the communication information.
• When receiving communication information, the communication application processing unit 117 outputs received information based on that communication information to the dialog start condition determining unit 112.
  • the communication application processing unit 117 is a functional unit that performs application processing related to mail transmission / reception.
  • the received information may include information such as a transmission source identifier such as a transmission source address or a transmission source user name, a face image of the transmission source user, a mail text, and attached data.
  • the communication application processing unit 117 detects these pieces of information as received information.
• The received information may also include information such as a transmission source identifier (for example, a transmission source user name), a face image of the transmission source user, a message body, and attached data.
  • the received information may include information such as a caller user name and a call instruction.
  • the dialog start condition determining unit 112 determines whether or not the received information has been acquired (step S802).
  • the determination of whether or not the reception information has been acquired is an aspect of determination of whether or not an event of a service function (communication application function) has been acquired.
  • the dialog start condition determining unit 112 determines to start the process of the first dialog and instructs the dialog processing unit 114 to start the first dialog (Step S803).
  • the subsequent steps S804 to S812 are the same as the steps S504 to S512 according to the first embodiment.
• Otherwise, the dialog start condition determining unit 112 determines to start the second dialog processing and instructs the dialog processing unit 114 to start the second dialog (step S813).
  • the control unit 111 of the dialogue apparatus 1 detects the start of the second dialogue and activates the camera 18 (step S814).
  • the camera 18 is activated, for example, in a video shooting mode.
  • the camera 18 photographs the user's face.
  • the camera 18 outputs the captured image (each frame) included in the moving image to the analysis unit 113.
  • the analysis unit 113 determines whether or not a face image can be detected from the captured image (step S815). When the face image is detected, the analysis unit 113 determines whether or not the face image matches the face image obtained by photographing the user's face in advance, as in the face authentication process. The analysis unit 113 determines whether or not the face image has been successfully authenticated (step S816). When the face image matches the face image obtained by photographing the user's face in advance, the analysis unit 113 outputs a dialogue start instruction indicating successful authentication to the dialogue processing unit 114. Note that the analysis unit 113 may output a dialogue start instruction to the dialogue processing unit 114 when a face image can be detected from the captured image without performing face authentication.
• When the dialogue processing unit 114 detects the successful authentication, it performs the second dialogue processing (step S817).
  • the dialogue processing unit 114 performs display by adding a predetermined action to the character image 100 and the auxiliary image 101.
  • This second interactive process is a process of directly interacting between the interactive apparatus 1 and the user.
  • the dialogue processing unit 114 determines whether or not the user's voice is detected. When the user's voice is detected, the dialogue processing unit 114 outputs a character image 100 showing a motion of the character nodding.
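• The voice-triggered nodding can be sketched with a crude energy-based voice-activity check; the energy threshold and the action names are illustrative assumptions, not values from the source.

```python
def voice_detected(samples, energy_threshold=0.01):
    # Crude voice-activity check: mean squared amplitude of one audio frame.
    if not samples:
        return False
    energy = sum(s * s for s in samples) / len(samples)
    return energy >= energy_threshold

def character_action(samples):
    # When the user's voice is detected, show the character nodding.
    return "nod" if voice_detected(samples) else "idle"

print(character_action([0.2, -0.3, 0.25]))  # nod
print(character_action([0.001, -0.002]))    # idle
```

A production device would use a proper voice-activity detector rather than a single energy threshold.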
• In this way, the dialog apparatus 1 can hold a dialogue even with users unfamiliar with ICT devices, such as elderly people.
  • FIG. 9 is a functional block diagram of the interactive apparatus according to the fourth embodiment.
  • the dialogue apparatus 1 may have a function of the photographing application processing unit 118 instead of the communication application processing unit 117.
  • the CPU 11 of the dialogue apparatus 1 starts the dialogue processing program recorded in the ROM 13 or the SSD 14 when the power is turned on.
  • the CPU 11 of the dialogue apparatus 1 includes the functions of the control unit 111, the dialogue start condition determination unit 112, the analysis unit 113, the dialogue processing unit 114, the transmission processing unit 115, and the response information notification unit 116.
  • the CPU 11 of the interactive apparatus 1 has the function of the communication application processing unit 117 by starting the communication application program.
  • the CPU 11 of the interactive apparatus 1 further includes the function of the photographing application processing unit 118 by starting the photographing application program.
  • FIG. 10 is a diagram showing a processing flow of the interactive apparatus according to the fourth embodiment.
  • the interactive device 1 may perform processing described below.
  • the dialogue processing unit 114 of the dialogue device 1 displays the character image 100, the auxiliary image 101, and operation buttons after activation (step S1001).
  • the dialogue processing unit 114 controls the display type and movement of the character image 100 and the auxiliary image 101.
  • the dialogue processing unit 114 displays an image that attracts the user's interest, such as moving the character indicated by the character image 100 on the screen or shaking the character's head.
  • the dialogue processing unit 114 may change or move the color of the auxiliary image 101.
  • the display of the character image 100 and the auxiliary image 101 is an aspect of outputting dialogue promotion information.
  • the control unit 111 of the interactive device 1 activates the camera 18 while the interactive device 1 is operating (step S1002).
  • the camera 18 is activated in, for example, a moving image shooting mode.
  • the dialogue apparatus 1 is usually placed on a shelf or a desk, for example.
• It is assumed that the user of the dialogue device 1 brings his or her face close to the display screen 16, either by picking the dialogue device 1 up and lifting it, or by moving over to the side of the dialogue device 1.
  • the camera 18 captures the user's face.
  • the camera 18 outputs the captured image (each frame) included in the moving image to the analysis unit 113.
  • the dialogue start condition determination unit 112 is set to acquire face detection information (first acquisition information) when the photographing application processing unit 118 is notified of detection of a human face image from the analysis unit 113.
  • the analysis unit 113 always determines whether or not a face image (first acquisition information) can be detected from the captured image. When a face image is detected, the analysis unit 113 determines that the first acquisition information matches the conversation start condition. When the face image is detected, the analysis unit 113 determines whether or not the face image matches a face image that has been captured and stored in advance as in the face authentication process. The analysis unit 113 determines whether or not the face image has been successfully authenticated (step S1003).
• When the face image matches the face image of the user that was captured and stored in advance, the analysis unit 113 outputs a dialogue start instruction indicating successful authentication to the dialogue processing unit 114. As described above, the dialogue processing unit 114 outputs the dialogue information when the face image is detected to be that of a predetermined user. Note that the analysis unit 113 may output a dialogue start instruction to the dialogue processing unit 114 whenever a face image can be detected from the captured image, without performing face authentication.
• When the dialogue processing unit 114 detects the successful authentication, it performs dialogue processing (step S1004). In this dialogue processing, the dialogue processing unit 114 displays the character image 100 and the auxiliary image 101 with predetermined actions added.
  • the dialogue processing is as described in the other embodiments.
  • FIG. 11 is a diagram showing a robot having the function of an interactive device.
  • the robot 500 may have the function of the above-described dialogue apparatus 1.
  • the robot 500 may be provided with the display screen 16 shown by the interactive apparatus 1 on the front surface.
  • the interactive apparatus 1 provided in the robot 500 may control the robot 500 so that the robot 500 performs the operation of the character image 100 instead of displaying the character image 100.
  • the dialogue apparatus 1 may control mechanical eye movements, mouth movements, foot movements, and the like included in the robot 500.
  • FIG. 12 is a diagram showing the minimum configuration of the interactive apparatus.
  • the dialogue apparatus 1 includes at least functions of a dialogue start condition determination unit 112, an analysis unit 113, and a dialogue processing unit 114.
  • the dialog start condition determination unit 112 determines whether or not the acquired first acquisition information matches the dialog start condition.
  • the analysis unit 113 analyzes the information obtained from the sensor device (such as a camera) when the first acquisition information matches the conversation start condition.
• When a user is detected based on a user-detection analysis result using information obtained from the sensor device, the dialogue processing unit 114 performs output processing of dialogue information related to the dialogue with the user.
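• The minimum configuration of Fig. 12 can be sketched as a three-stage pipeline in which each unit is a pluggable callable; the concrete condition, analysis, and output below are illustrative stand-ins, not the patent's implementation.

```python
class MinimalDialogDevice:
    """Sketch of Fig. 12: dialog start condition determination unit (112),
    analysis unit (113), and dialogue processing unit (114) chained together."""

    def __init__(self, start_condition, analyze, output):
        self.start_condition = start_condition  # unit 112
        self.analyze = analyze                  # unit 113
        self.output = output                    # unit 114

    def handle(self, first_acquisition_info, sensor_info):
        if not self.start_condition(first_acquisition_info):
            return None                         # condition not matched
        if not self.analyze(sensor_info):       # user-detection analysis
            return None
        return self.output()                    # dialogue information

device = MinimalDialogDevice(
    start_condition=lambda info: info.get("type") == "mail_received",
    analyze=lambda sensor: sensor.get("face_detected", False),
    output=lambda: "Hello! You have new mail.",
)
print(device.handle({"type": "mail_received"}, {"face_detected": True}))
```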
• The dialog device 1 may further comprise an application operation unit: the dialogue processing unit 114 obtains response information from the user in response to the output of the dialogue information and analyzes it, and the application operation unit outputs, to the dialogue processing unit 114, operation explanation information that explains the operation of a predetermined application according to the response-information analysis result.
  • the dialogue processing unit 114 outputs dialogue information using the operation explanation information and a character image for assisting dialogue based on the dialogue information.
  • the above-described dialogue apparatus 1 has a computer system inside.
• A program for causing the interactive device 1 to perform each of the above-described processes is stored in a computer-readable recording medium of the interactive device 1, and the computer of the interactive device 1 reads and executes the program, whereby the above processing is performed.
  • the computer-readable recording medium means a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, or the like.
  • the computer program may be distributed to the computer via a communication line, and the computer that has received the distribution may execute the program.
• The above program may realize only a part of the functions of each processing unit described above. Furthermore, the program may realize those functions in combination with a program already recorded in the computer system.
• A dialogue apparatus comprising: a dialog start condition determining unit that determines whether or not acquired first acquisition information matches a dialog start condition; an analysis unit that, when the first acquisition information matches the dialog start condition, performs analysis related to user detection based on the first acquisition information or on information obtained from a sensor device; and a dialogue processing unit that outputs first dialogue information related to dialogue with the user when the user is detected based on a user-detection analysis result of the analysis.
• The dialogue apparatus according to Supplementary Note 3, wherein the second acquisition information includes voice information, and the transmission processing unit transmits character information obtained by analyzing the voice information to the transmission source of the first acquisition information.
• The dialogue apparatus according to Supplementary Note 3 or Supplementary Note 4, wherein the second acquisition information includes a face image, and the dialogue processing unit determines whether or not to output second dialogue information based on detection information of the face image.
• The dialogue apparatus according to any one of Supplementary Notes 1 to 4, wherein the dialog start condition determining unit determines that the first acquisition information matches the dialog start condition when first acquisition information including a face image is acquired after output of dialogue promotion information prompting the user to interact; the analysis unit analyzes whether or not the face image included in the first acquisition information is that of a predetermined user; and the dialogue processing unit outputs the first dialogue information when the face image is detected to be that of the predetermined user.
  • the first dialogue information includes character information
  • Appendix 11 A robot provided with the interactive device according to any one of appendix 1 to appendix 10.
  • a dialogue processing unit for obtaining response information from the user according to output of dialogue information related to dialogue with the user, and analyzing the response information;
  • An application operation unit that outputs operation explanation information that explains an operation of a predetermined application according to a result of the analysis to the dialog processing unit,
  • the dialogue processing unit outputs the dialogue information using the operation explanation information and a character image for assisting the dialogue based on the dialogue information.

Abstract

The invention provides an interactive device equipped with: a dialog start condition determination unit for determining whether or not acquired first acquisition information matches a dialog start condition; an analysis unit for performing analysis related to the detection of a user, based on the first acquisition information or on information obtained from a sensor device, when the first acquisition information matches the dialog start condition; and a dialog processing unit for outputting first dialog information related to an interaction with the user when the user is detected based on the user-detection analysis result of the analysis.
PCT/JP2017/032410 2016-09-12 2017-09-08 Dispositif interactif, robot, procédé de traitement, programme WO2018047932A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-177296 2016-09-12
JP2016177296 2016-09-12

Publications (1)

Publication Number Publication Date
WO2018047932A1 true WO2018047932A1 (fr) 2018-03-15

Family

ID=61561396

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/032410 WO2018047932A1 (fr) 2016-09-12 2017-09-08 Dispositif interactif, robot, procédé de traitement, programme

Country Status (1)

Country Link
WO (1) WO2018047932A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10149271A (ja) * 1996-11-19 1998-06-02 D M L:Kk User interface system
JP2005115896A (ja) * 2003-10-10 2005-04-28 Nec Corp Communication device and communication method
WO2015155977A1 (fr) * 2014-04-07 2015-10-15 日本電気株式会社 Linkage system, device, method, and recording medium
WO2016098589A1 (fr) * 2014-12-15 2016-06-23 ソニー株式会社 Information processing device, program, information processing method, and information processing system

Similar Documents

Publication Publication Date Title
EP3342160B1 (fr) Display apparatus and control methods thereof
US10678897B2 Identification, authentication, and/or guiding of a user using gaze information
JP5779641B2 (ja) Information processing apparatus, method, and program
JP5012968B2 (ja) Conference system
US9848166B2 Communication unit
JP6551507B2 (ja) Robot control device, robot, robot control method, and program
US20190019512A1 Information processing device, method of information processing, and program
JP2018072876A (ja) Emotion estimation system, emotion estimation model generation system
KR102055677B1 (ko) Mobile robot and control method thereof
US20130346085A1 Mouth click sound based computer-human interaction method, system and apparatus
US9548012B1 Adaptive ergonomic keyboard
KR20150128386A (ko) Display apparatus and method for performing video call thereof
JP2009166184A (ja) Guide robot
WO2016152200A1 (fr) Information processing system and information processing method
Pandey et al. An Assistive Technology-based Approach towards Helping Visually Impaired People
WO2016157993A1 (fr) Information processing device, information processing method, and program
WO2018047932A1 (fr) Interactive device, robot, processing method, program
WO2018056169A1 (fr) Interactive device, processing method, and program
KR20230043749A (ko) Adaptive user enrollment for electronic devices
US10503278B2 Information processing apparatus and information processing method that controls position of displayed object corresponding to a pointing object based on positional relationship between a user and a display region
Goetze et al. Multimodal human-machine interaction for service robots in home-care environments
KR101629758B1 (ko) Unlocking method and program for glass-type wearable device
KR20160015704A (ko) Acquaintance recognition system and method using glass-type wearable device
US9122312B2 System and method for interacting with a computing device
WO2018061871A1 (fr) Terminal device, information processing system, processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 17848870
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 17848870
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: JP