WO2018173404A1 - Information Processing Apparatus and Information Processing Method - Google Patents
- Publication number: WO2018173404A1
- Application number: PCT/JP2017/046493
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information processing
- user
- attention
- control unit
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/04—Details of speech synthesis systems, e.g. synthesiser structure or memory management
- G10L13/047—Architecture of speech synthesisers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- This disclosure relates to an information processing apparatus and an information processing method.
- Patent Literature 1 discloses a technique for controlling the output order of utterances related to information notification based on a preset priority.
- The present disclosure proposes a new and improved information processing apparatus and information processing method that allow the user to grasp notification contents more effectively.
- Provided is an information processing apparatus including a control unit that controls information notification to a user based on notification content, wherein the control unit determines the output position of the subject within the notification content based on a calculated attention acquisition difficulty level for the user.
- Further provided is an information processing method including a processor controlling information notification to a user based on notification content, the controlling further including determining the output position of the subject within the notification content based on a calculated attention acquisition difficulty level for the user.
- FIG. 5 is a diagram illustrating a relationship between user attention and understanding of notification content according to an embodiment of the present disclosure.
- It is a diagram illustrating an example of the system configuration according to the embodiment.
- It is a flowchart showing the flow of determining the output position of the subject by the utterance control unit according to the embodiment.
- FIG. 3 is a diagram illustrating a hardware configuration example according to an embodiment of the present disclosure.
- 1. Embodiment
- 1.1. Outline of embodiment
- 1.2. System configuration example
- 1.3. Functional configuration example of information processing terminal 10
- 1.4. Functional configuration example of external device 20
- 1.5. Functional configuration example of information processing server 30
- 1.6. Details of determining the output position of the subject
- 1.7. Flow of operation of information processing server 30
- 2.
- The information presented by the agent is mainly divided into two types: responses and notifications.
- The above-mentioned response means that the agent presents information in reply to an inquiry from the user. For example, when a user asks, "What is my schedule today?", the agent outputting "Tonight's dinner schedule" in reply to the inquiry corresponds to the above response.
- The above notification refers to information being sent from the agent side to the user. For example, the above notification corresponds to the case where, based on the reception of a mail, the agent performs an output such as "I received a mail from Mr. A. The contents are as follows."
- The main difference between a response and a notification is the user's attention level (hereinafter also simply referred to as attention).
- In the case of a response, since an inquiry from the user is presupposed, the user's attention is assumed to be directed toward the agent when the agent outputs the information. For this reason, in the case of a response, it can be said that there is a high possibility that the user can grasp the output information.
- FIG. 1 is a diagram showing the relationship between user attention and understanding of notification contents.
- For example, when the information notification SO1 is output from the information processing terminal 10 in a situation where the noise level is high, such as when audio is being output from a television device, it is expected to take time until the attention of the user U1a is obtained. For this reason, it may be difficult for the user U1a to grasp the first half of the information notification SO1, which is output before attention is secured.
- The information processing apparatus and the information processing method according to the present embodiment were conceived with attention to the above points, and realize appropriate information notification according to the user's attention situation. To this end, one of their features is that the information processing apparatus determines the output position of the subject within the notification content based on the attention acquisition difficulty level, an index of how difficult it is to obtain the user's attention, and causes the information processing terminal to perform the information notification in accordance with that output position.
- For example, the information processing apparatus may determine that the attention acquisition difficulty level for the user U1a is high in a situation where the noise level is high, and set the output position of the subject in the latter half of the notification content. Further, for example, in a situation where the noise level is low, the information processing apparatus may determine that the attention acquisition difficulty level for the user U1b is low, and set the output position of the subject in the first half of the notification content. According to the information processing apparatus and the information processing method according to the present embodiment, it is possible to greatly reduce the possibility that the user fails to grasp the subject of the notification content, and to realize more convenient information notification.
- FIG. 2 is an example of a system configuration diagram of the information processing system according to the present embodiment.
- the information processing system according to the present embodiment includes an information processing terminal 10, an external device 20, and an information processing server 30. Further, the information processing terminal 10 and the information processing server 30, and the external device 20 and the information processing server 30 are connected via the network 40 so that they can communicate with each other.
- the information processing terminal 10 is an information processing apparatus that performs various information notifications to the user based on control by the information processing server 30.
- the information processing terminal 10 according to the present embodiment can perform voice output of notification contents based on the output position of the subject determined by the information processing server 30.
- the information processing terminal 10 according to the present embodiment may be, for example, a stationary, built-in, or autonomous mobile dedicated device.
- the information processing terminal 10 according to the present embodiment may be a mobile phone, a smartphone, a PC (Personal Computer), a tablet, or various wearable devices.
- The information processing terminal 10 according to the present embodiment can be defined as any of various devices having a voice information notification function.
- the information processing terminal 10 may have a function of collecting user's utterances and surrounding sounds and transmitting them to the information processing server 30. Further, the information processing terminal 10 according to the present embodiment may capture an image of the user and transmit it to the information processing server 30. Various pieces of information collected by the information processing terminal 10 can be used for calculating an attention acquisition difficulty and detecting an attention action by the information processing server 30 described later.
- the external device 20 is an information processing device that transmits the operating status of the device and collected sensor information to the information processing server 30.
- the operating status and sensor information described above can be used for calculation of the attention acquisition difficulty by the information processing server 30.
- the attention acquisition difficulty level according to the present embodiment can be calculated in consideration of the user's situation in addition to noise.
- the external device 20 according to the present embodiment may be various devices that are operated or used by the user.
- Note that FIG. 2 shows a case where the external device 20 is a game machine, but the external device 20 according to the present embodiment is not limited to such an example.
- the external device 20 according to the present embodiment may be, for example, a mobile phone, a smartphone, a PC, a tablet, a wearable device, or the like.
- the external device 20 according to the present embodiment may be various home appliances, office equipment, indoor facilities including lighting, and the like.
- the information processing server 30 is an information processing apparatus that controls information notification to the user by the information processing terminal 10. At this time, the information processing server 30 according to the present embodiment determines the output position of the subject in the notification content based on the attention acquisition difficulty level of the user, and outputs the audio output of the notification content in accordance with the output position to the information processing terminal. 10 can be performed.
- The above-mentioned attention acquisition difficulty level may be an index indicating how difficult it is to obtain the user's attention.
- the information processing server 30 can calculate the attention acquisition difficulty level based on various types of information collected by the information processing terminal 10 and the external device 20.
- the network 40 has a function of connecting the information processing terminal 10 and the information processing server 30, and the external device 20 and the information processing server 30.
- the network 40 may include a public line network such as the Internet, a telephone line network, a satellite communication network, various local area networks (LANs) including Ethernet (registered trademark), a wide area network (WAN), and the like. Further, the network 40 may include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
- the network 40 may include a wireless communication network such as Wi-Fi (registered trademark) or Bluetooth (registered trademark).
- the system configuration example of the information processing system according to this embodiment has been described above. Note that the above-described configuration described with reference to FIG. 2 is merely an example, and the configuration of the information processing system according to the present embodiment is not limited to the example.
- the functions of the information processing terminal 10 and the information processing server 30 according to the present embodiment may be realized by a single device. Further, the functions of the information processing server 30 according to the present embodiment can be realized by being distributed by a plurality of devices.
- the configuration of the information processing system according to the present embodiment can be flexibly modified according to specifications and operations.
- FIG. 3 is an example of a functional block diagram of the information processing terminal 10 according to the present embodiment.
- the information processing terminal 10 according to the present embodiment includes an audio collection unit 110, a sensor unit 120, an output unit 130, and a communication unit 140.
- the voice collection unit 110 has a function of collecting the user's voice and surrounding environmental sounds.
- the voice collection unit 110 according to the present embodiment is realized by a microphone that converts a user's voice and environmental sound into an electrical signal.
- the sensor unit 120 has a function of capturing a user image.
- the sensor unit 120 includes an imaging sensor.
- The sensor unit 120 may also collect various sensor information used for estimating the user's condition. Therefore, the sensor unit 120 includes, for example, an infrared sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, a vibration sensor, a pressure sensor, and a GNSS (Global Navigation Satellite System) signal receiver.
- the output unit 130 has a function of outputting notification contents based on control by the information processing server 30. At this time, the output unit 130 may perform sound output based on the artificial sound synthesized by the information processing server 30.
- the output unit 130 according to the present embodiment includes a speaker and an amplifier.
- the output unit 130 may output visual information based on control by the information processing server 30.
- the output unit 130 includes a display device such as a liquid crystal display (LCD) device or an organic light emitting diode (OLED) device.
- the communication unit 140 has a function of performing information communication with the information processing server 30 via the network 40. Specifically, the communication unit 140 transmits sound information collected by the sound collection unit 110, image information collected by the sensor unit 120, and sensor information to the information processing server 30. Further, the communication unit 140 receives the artificial voice information related to the notification content, the text information related to the notification content, and the like from the information processing server 30.
- the functional configuration example of the information processing terminal 10 according to the present embodiment has been described above.
- Note that the configuration described above with reference to FIG. 3 is merely an example, and the functional configuration of the information processing terminal 10 according to the present embodiment is not limited to this example.
- The information processing terminal 10 according to the present embodiment may further include components other than those illustrated in FIG. 3.
- the information processing terminal 10 may include an input unit that detects an input operation by a user, for example.
- the functional configuration of the information processing terminal 10 according to the present embodiment can be flexibly modified.
- FIG. 4 is an example of a functional block diagram of the external device 20 according to the present embodiment.
- the external device 20 according to the present embodiment includes an operating status acquisition unit 210, a sensor unit 220, and a communication unit 230.
- the operating status acquisition unit 210 has a function of acquiring the operating status of the apparatus.
- the operating status of the external device 20 acquired by the operating status acquiring unit 210 can be used for the calculation of the attention acquisition difficulty by the information processing server 30.
- the operating status acquisition unit 210 may detect that a keyboard, a mouse, a touch panel, or the like is operated by the user.
- the operating status acquisition unit 210 may detect that a controller or the like is operated by a user or the like.
- the operation status acquisition unit 210 may detect that a call is being performed.
- The operating status acquisition unit 210 may detect, for example, that the refrigerator door is open, that the heat retention function of the rice cooker is operating with the lid open, or that the vacuum cleaner is performing a suction operation.
- the operation status acquisition unit 210 may detect that video or audio is being output and that there is a person around. At this time, the operating status acquisition unit 210 can detect that there is a person around, for example, based on sensor information collected by the human sensor.
- the operation status acquisition unit 210 according to the present embodiment can also acquire the operation status of the external device 20 based on various sensor information collected by the sensor unit 220.
- the operating status acquisition unit 210 may acquire operating statuses of other external devices 20 or may acquire operating statuses of a plurality of external devices 20.
- Note that the operating status acquisition unit 210 according to the present embodiment can also estimate the operating status of another external device 20 based on sensor information related to that device collected by the sensor unit 220.
- the sensor unit 220 has a function of collecting various sensor information related to the external device 20.
- the sensor information collected by the sensor unit 220 may be used for acquisition of the operation status by the operation status acquisition unit 210.
- the sensor unit 220 may collect sensor information related to the user and surrounding conditions.
- the sensor unit 220 can acquire, for example, a user's speech or a user's image.
- the sensor unit 220 according to the present embodiment may include various sensor devices.
- the sensor unit 220 includes, for example, a microphone, an imaging sensor, a thermal sensor, a vibration sensor, an illuminance sensor, a human sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, and the like.
- the communication unit 230 has a function of performing information communication with the information processing server 30 via the network 40. Specifically, the communication unit 230 transmits the operation status of the external device 20 acquired by the operation status acquisition unit 210 to the information processing server 30. Further, the communication unit 230 may transmit the sensor information collected by the sensor unit 220 to the information processing server 30.
- the functional configuration example of the external device 20 according to the present embodiment has been described above. Note that the above-described configuration described with reference to FIG. 4 is merely an example, and the functional configuration of the external device 20 according to the present embodiment is not limited to the related example. In addition to the above configuration, the external device 20 according to the present embodiment may include various configurations according to the characteristics of the external device 20.
- FIG. 5 is an example of a functional block diagram of the information processing server 30 according to the present embodiment.
- The information processing server 30 includes an acoustic analysis unit 310, an image analysis unit 320, a situation estimation unit 330, a natural language processing unit 340, a user information DB 350, an utterance control unit 360, a speech synthesis unit 370, and a communication unit 380.
- the acoustic analysis unit 310 has a function of recognizing the loudness based on sound information transmitted from the information processing terminal 10 or the external device 20. More specifically, the acoustic analysis unit 310 according to the present embodiment may recognize the ambient noise level. At this time, the acoustic analysis unit 310 according to the present embodiment can calculate the noise level based on, for example, the root mean square (also referred to as effective value or RMS) of the amplitude value of the acoustic signal in unit time. The unit time may be a frame time of one image captured by the information processing terminal 10, for example.
- the noise level calculated by the acoustic analysis unit 310 according to the present embodiment is used for calculating an attention acquisition difficulty by the situation estimation unit 330 described later.
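- The RMS-based noise level described above can be sketched as follows. This is an illustrative sketch, not code from the disclosure; the function name, the 0 dB reference, and the use of Python are assumptions.

```python
import math

def noise_level_db(frame, ref=1.0):
    """Noise level of one unit-time frame of amplitude samples.

    Computes the root mean square (RMS, 'effective value') of the
    amplitude values, then converts it to decibels relative to `ref`.
    """
    if not frame:
        raise ValueError("empty frame")
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    if rms == 0.0:
        return float("-inf")  # digital silence
    return 20.0 * math.log10(rms / ref)

# A louder frame yields a higher (less negative) noise level.
quiet = [0.01, -0.01, 0.01, -0.01]
loud = [0.5, -0.5, 0.5, -0.5]
assert noise_level_db(loud) > noise_level_db(quiet)
```

The per-frame level would then be passed to the situation estimation unit 330 as one factor in the difficulty calculation.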
- the acoustic analysis unit 310 has a function of recognizing the type of sound based on sound information transmitted from the information processing terminal 10 or the external device 20.
- the acoustic analysis unit 310 according to the present embodiment may recognize a work sound generated in accordance with a user's action.
- the work sound includes, for example, a sound generated when the user hits the keyboard of the PC, a sound generated when the user operates a home appliance such as a vacuum cleaner, and the like.
- the work sound includes, for example, sound generated by operations such as washing, food processing, and processing performed by the user in the kitchen.
- the above-mentioned work sound may include utterances of users and others.
- the work sound recognized by the acoustic analysis unit 310 according to the present embodiment is used as an index of the user's action situation in the calculation of the attention acquisition difficulty by the situation estimation unit 330 described later.
- the image analysis unit 320 has a function of recognizing a user's situation based on image information and sensor information transmitted from the information processing terminal 10 or the external device 20.
- the image analysis unit 320 recognizes a situation related to the user's attention.
- the image analysis unit 320 may recognize the distance from the information processing terminal 10 to the user.
- the image analysis unit 320 can recognize the distance based on, for example, the size of the user's face area in the image and information collected by a depth sensor or the like.
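- As one illustrative sketch of the face-area approach, the distance can be estimated from the apparent face width using a pinhole-camera proportion. The pinhole model, the assumed average face width, and the parameter names are assumptions, not from the disclosure; in practice a depth sensor could be used instead, as noted above.

```python
def estimate_distance_cm(face_width_px, focal_length_px, real_face_width_cm=16.0):
    """Pinhole-camera sketch: the apparent width of the face in the image
    is inversely proportional to its distance from the camera.

    `real_face_width_cm` is an assumed average face width; in a real
    system it would be calibrated or replaced by depth-sensor readings.
    """
    if face_width_px <= 0:
        raise ValueError("face not detected")
    return real_face_width_cm * focal_length_px / face_width_px

# Halving the apparent face width doubles the estimated distance.
assert estimate_distance_cm(160, 1000) == 100.0
assert estimate_distance_cm(80, 1000) == 200.0
```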
- the situation of the user recognized by the image analysis unit 320 according to the present embodiment is used for calculating an attention acquisition difficulty and detecting an attention action by the situation estimation unit 330 described later.
- the image analysis unit 320 may recognize the user's face orientation and line of sight. More specifically, the image analysis unit 320 can recognize how far the user's face direction and line of sight deviate from the direction of the information processing terminal 10.
- the user situation recognized by the image analysis unit 320 includes an action situation.
- the image analysis unit 320 according to the present embodiment may recognize a user's action state based on image information or sensor information transmitted from the information processing terminal 10 or the external device 20. For example, the image analysis unit 320 may recognize that the user is reading a book or concentrated on studying based on image information. For example, the image analysis unit 320 may recognize that the user is exercising based on image information, acceleration information, angular velocity information, and the like.
- the user's behavior situation recognized by the image analysis unit 320 according to the present embodiment is used for calculating an attention acquisition difficulty by the situation estimation unit 330 described later.
- The situation estimation unit 330 has a function of calculating the attention acquisition difficulty level, an index of how difficult it is to obtain the user's attention. At this time, the situation estimation unit 330 according to the present embodiment can calculate the attention acquisition difficulty level based on the noise level and the user's situation. More specifically, the situation estimation unit 330 may calculate the attention acquisition difficulty level based on the noise level recognized by the acoustic analysis unit 310, the user's face orientation and line of sight recognized by the image analysis unit 320, the distance between the user and the information processing terminal 10, and the like.
- Here, the user's situation includes the user's behavior situation. That is, the situation estimation unit 330 according to the present embodiment may also calculate the attention acquisition difficulty level based on the work sound recognized by the acoustic analysis unit 310, the operating status transmitted from the external device 20, and the user's behavior situation recognized by the image analysis unit 320.
- In this way, the situation estimation unit 330 according to the present embodiment can comprehensively calculate the difficulty of obtaining the user's attention, taking into account various factors other than the noise level.
- The situation estimation unit 330 according to the present embodiment may calculate the attention acquisition difficulty level A by linearly combining the weighted factors, for example, as shown in the following formula (1):

  A = Σ_i K_i × F_i ... (1)
- K_i in the above formula (1) represents the weighting factor of each factor and may be a value set for each factor.
- F_i in the above formula (1) indicates the detected value of each factor.
- When the factor is the noise level, the sound volume (dB) recognized by the acoustic analysis unit 310, or a level value corresponding to that sound volume (for example, 1 to 10), may be input as the detection value F_i. That is, the situation estimation unit 330 estimates that the higher the noise level, the more difficult it is to obtain the user's attention.
- When the factor is the user's behavior situation, a value indicating whether or not the corresponding behavior is detected (for example, undetected: 0, detected: 1) may be input as the detection value F_i.
- The user's behavior situation here includes the work sound recognized by the acoustic analysis unit 310, the behavior recognized by the image analysis unit 320, and the behavior estimated from the operating status of the external device 20. That is, the situation estimation unit 330 estimates that it is more difficult to obtain the user's attention when the user is performing another action.
- When the factor is the distance between the user and the information processing terminal 10, the distance value recognized by the image analysis unit 320 (in cm or the like), or a level value corresponding to that distance (for example, 1 to 10), may be input as the detection value F_i. That is, the situation estimation unit 330 estimates that the farther the user is from the information processing terminal 10, the more difficult it is to obtain the user's attention.
- When the factor is the user's face orientation or line of sight, a value indicating whether the face or line of sight is directed toward the information processing terminal 10 (for example, facing: 0, not facing: 1), or the degree of deviation (°) between the direction of the information processing terminal 10 and the direction of the face or line of sight, may be input as the detection value F_i. That is, the situation estimation unit 330 estimates that it is difficult to obtain the user's attention when the information processing terminal 10 is not in the user's field of view.
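- The calculation by formula (1) can be sketched as follows. The factor names and weight values are illustrative assumptions; the disclosure only specifies the linear combination of weighting factors K_i and detected values F_i.

```python
def attention_difficulty(factor_values, weights):
    """Formula (1): A = sum over i of K_i * F_i."""
    return sum(weights[name] * value for name, value in factor_values.items())

# Hypothetical weights K_i, one per factor (values are illustrative only).
weights = {
    "noise_level": 0.5,   # level value 1-10 mapped from the recognized dB
    "other_action": 2.0,  # 0: undetected, 1: detected
    "distance": 0.3,      # level value 1-10 mapped from the distance in cm
    "gaze_away": 1.5,     # 0: facing the terminal, 1: facing away
}

quiet_attentive = {"noise_level": 1, "other_action": 0, "distance": 1, "gaze_away": 0}
noisy_busy = {"noise_level": 8, "other_action": 1, "distance": 6, "gaze_away": 1}

# A rises as the user's attention becomes harder to obtain.
assert attention_difficulty(noisy_busy, weights) > attention_difficulty(quiet_attentive, weights)
```

As noted below, the weights could also be optimized by learning from actual user reactions rather than being fixed by hand.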
- The calculation example of the attention acquisition difficulty level by the situation estimation unit 330 according to the present embodiment has been described above. Note that the above calculation method is merely an example, and the situation estimation unit 330 according to the present embodiment may calculate the attention acquisition difficulty level using other mathematical expressions or methods. In addition, the situation estimation unit 330 according to the present embodiment can also optimize the factors and weighting factor values used for calculating the attention acquisition difficulty level by learning from calculated attention acquisition difficulty levels and actual user reactions.
- the situation estimation unit 330 has a function of detecting the user's attention behavior.
- the above attention action refers to an action in which the user reacts to the notification content output from the information processing terminal 10.
- the situation estimation unit 330 may detect the attention behavior based on the user situation recognized by the image analysis unit 320.
- the situation estimation unit 330 can detect an attention action based on, for example, that the user has approached the information processing terminal 10 or that the user's face or line of sight is directed toward the information processing terminal 10.
- the situation estimation unit 330 can detect an attention behavior based on the user's utterance recognized by the acoustic analysis unit 310.
- The situation estimation unit 330 can detect an attention behavior based on, for example, the user uttering "What?" or "Huh?".
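- A minimal sketch of attention behavior detection, combining the cues named above (approaching the terminal, turning the face or line of sight toward it, or a reactive utterance). The boolean cue representation and the reactive word list are assumptions.

```python
# Illustrative reactive utterances; "huh?" stands in for the Japanese "E?".
REACTIVE_UTTERANCES = {"what?", "huh?"}

def attention_behavior_detected(approached, facing_terminal, utterance=""):
    """True if any cue indicates the user reacted to the notification output."""
    return (
        approached
        or facing_terminal
        or utterance.strip().lower() in REACTIVE_UTTERANCES
    )

assert attention_behavior_detected(False, False, "What?")
assert attention_behavior_detected(True, False)
assert not attention_behavior_detected(False, False, "")
```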
- the natural language processing unit 340 has a function of performing natural language processing such as morphological analysis, dependency structure analysis, and assignment of a semantic concept based on text information related to the notification content.
- the text information related to the notification content may be stored in the information processing server 30 in advance, or may be acquired via the communication unit 380 or the network 40.
- the user information DB 350 is a database that stores various information related to the user. In addition to the user name and ID, the user information DB 350 stores attribute information such as age, sex, language used, and birthplace. The attribute information stored in the user information DB 350 according to the present embodiment is used for forming a notification content by the utterance control unit 360 described later.
- the user information DB 350 according to the present embodiment may store user image information, audio features, and the like. In this case, the acoustic analysis unit 310 and the image analysis unit 320 can also identify the user based on the information stored in the user information DB 350. Further, the user information DB 350 may store the user's hobbies and schedules.
- The utterance control unit 360 has a function of controlling information notification to the user based on the notification content. More specifically, the utterance control unit 360 according to the present embodiment has a function of extracting the subject from the notification content based on the result of natural language processing by the natural language processing unit 340, and a function of determining the output position of the subject within the notification content based on the attention acquisition difficulty level calculated by the situation estimation unit 330. In addition, the utterance control unit 360 according to the present embodiment causes the information processing terminal 10 to output the notification content in accordance with the determined output position. Note that the utterance control unit 360 may also cause the information processing terminal 10 to output the notification content as visual information.
- FIG. 6 is a diagram for explaining a basic concept of information notification by the utterance control unit 360 according to the present embodiment.
- In the upper part of FIG. 6, the intensity of the attention acquisition difficulty level A calculated by the situation estimation unit 330 is shown as levels 1 to 3, while the middle and lower parts show the changes in the user's attention at each level and the subject output positions determined by the utterance control unit 360.
- When the attention acquisition difficulty level A is level 1, the utterance control unit 360 may arrange the output position of the subject SP in the first half of the notification content, particularly at the beginning.
- When the attention acquisition difficulty level A is level 2, the utterance control unit 360 may arrange the output position of the subject SP near the center of the notification content.
- When the attention acquisition difficulty level A is level 3, the utterance control unit 360 may arrange the output position of the subject SP in the latter half of the notification content, particularly at the end.
- In this way, the utterance control unit 360 can output the subject at the timing at which the user's attention is expected to be highest, in accordance with the attention acquisition difficulty level and the expected transition of the user's attention.
- According to this control, the output position of the subject in the notification content can be changed dynamically according to the user's condition, and the subject can be notified to the user at a more effective timing.
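The level-to-position mapping described above can be sketched as follows. This is an illustrative outline only, not the disclosed implementation: the function name, string concatenation, and treatment of level 2 as placing the subject after the other sentences are assumptions.

```python
# Hypothetical sketch of the basic concept in FIG. 6: the subject output
# position within the notification content shifts later as the attention
# acquisition difficulty level A rises. All names are illustrative.

def place_subject(subject: str, other_sentences: str, level: int,
                  additional_info: str = "") -> str:
    """Return notification content with the subject positioned by level."""
    if level == 1:   # attention is easy to obtain: subject at the beginning
        return subject + " " + other_sentences
    if level == 2:   # moderate difficulty: subject later in the content
        return other_sentences + " " + subject
    # level 3: prepend additional information, subject at the very end
    return additional_info + " " + other_sentences + " " + subject
```

For example, `place_subject("I recommend bringing an umbrella.", "Rain is expected this afternoon.", 1)` puts the recommendation first, matching the level-1 case.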
- the speech synthesizer 370 has a function of generating artificial speech based on the notification content formed by the utterance controller 360.
- the artificial voice generated by the voice synthesizer 370 is transmitted to the information processing terminal 10 via the communication unit 380 and the network 40, and is output by the output unit 130.
- the communication unit 380 has a function of performing information communication with the information processing terminal 10 and the external device 20 via the network 40. Specifically, the communication unit 380 receives sound information, image information, and sensor information from the information processing terminal 10. Further, the communication unit 380 receives the operation status and sensor information of the external device 20 from the external device 20. In addition, the communication unit 380 transmits, to the information processing terminal 10, artificial speech and text information related to the notification content in which the subject output position is designated by the utterance control unit 360.
- each function of the information processing server 30 can be realized in a distributed manner by a plurality of devices.
- the information processing server 30 may further include the functions of the information processing terminal 10.
- the information processing server 30 can perform voice output of notification contents in accordance with the determined output position of the subject.
- the functional configuration of the information processing server 30 according to the present embodiment can be modified as appropriate according to specifications and operations.
- FIG. 7 is a flowchart showing a flow of determination of the output position of the subject by the utterance control unit 360 according to the present embodiment.
- the utterance control unit 360 first extracts the subject from the notification content based on the result of the natural language processing by the natural language processing unit 340 (S1101).
- FIG. 8 is a diagram for explaining the extraction of the subject by the utterance control unit 360 according to the present embodiment.
- the upper part of FIG. 8 shows the notification content OS, and the middle part shows each clause delimited by the natural language processing unit 340.
- Each phrase may be given a phrase type or semantic concept by the natural language processing unit 340.
- the utterance control unit 360 detects, from the phrases, a phrase containing a word having a semantic concept such as "request", "suggestion", "aspiration", or "opinion". For example, the utterance control unit 360 may detect "I recommend" as shown in FIG. 8. Next, the utterance control unit 360 can detect the phrase "bringing an umbrella." that is the object of the detected "I recommend", and extract the subject "I recommend bringing an umbrella.".
- the utterance control unit 360 may extract, as the subject, a clause including a proper noun or a numerical value together with the predicate clause of that clause.
- the utterance control unit 360 may use the content of a received message as the subject.
- the utterance control unit 360 can also delete from the notification content information that is difficult to understand by voice or redundant, such as the message transmission date and time, the sender's mail address, a URL, and message attribute information.
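The subject extraction described above can be sketched as below. The phrase dictionaries with `concept` and `object` keys are a simplified assumption standing in for the output of the natural language processing unit 340; the disclosure does not specify this data structure.

```python
# A minimal sketch of subject extraction: scan phrases produced by natural
# language processing for a word carrying a semantic concept such as
# "request" or "suggestion", then join it with the phrase that is its
# object. The phrase representation here is an illustrative assumption.

SUBJECT_CONCEPTS = {"request", "suggestion", "aspiration", "opinion"}

def extract_subject(phrases):
    """phrases: list of dicts with 'text', 'concept', optional 'object'."""
    for phrase in phrases:
        if phrase.get("concept") in SUBJECT_CONCEPTS:
            target = phrase.get("object", "")
            return (phrase["text"] + " " + target).strip()
    return None

phrases = [
    {"text": "Rain is expected this afternoon.", "concept": None},
    {"text": "I recommend", "concept": "suggestion",
     "object": "bringing an umbrella."},
]
# extract_subject(phrases) -> "I recommend bringing an umbrella."
```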
- When controlling the output of visual information by the information processing terminal 10, the utterance control unit 360 may use, as the subject relating to the voice utterance, a request to confirm the visual information displayed on a monitor or the like, and may output the details of the notification content to the output unit 130 of the information processing terminal 10.
- the description of the flow of determination of the output position of the subject by the utterance control unit 360 will be continued.
- the utterance control unit 360 subsequently forms the extracted subject based on the user attribute information (S1102).
- the utterance control unit 360 may form the subject using a more direct expression, such as "You should take an umbrella.".
- For a child user, the utterance control unit 360 may arrange the verb phrase at the head of the subject.
- the utterance control unit 360 can place the target noun phrase at the beginning of the subject.
- the utterance control unit 360 can form a subject that is easier for the user to grasp, according to the attribute information of the user stored in the user information DB 350 and the characteristics of the language used.
- the utterance control unit 360 may adopt, in forming the subject, a style that the user uses in daily life.
- the utterance control unit 360 can learn, for example, the word order and words that the user prefers by collecting utterances in the daily life of the user.
- the utterance control unit 360 may form the subject in a style that the user is familiar with in daily life. For example, when the user is a child, the utterance control unit 360 can form the subject by adopting the word order or words that the user's mother or the like uses with the user on a daily basis.
- the utterance control unit 360 determines the output position of the subject based on the attention acquisition difficulty calculated by the situation estimation unit 330.
- the utterance control unit 360 may first determine whether or not the attention acquisition difficulty level is equal to or less than the first threshold (S1103). That is, the utterance control unit 360 determines whether or not the attention acquisition difficulty level corresponds to level 1, at which the user's attention is relatively easy to obtain.
- FIG. 9 is a diagram illustrating an example of the output position of the subject determined based on the attention acquisition difficulty level according to the present embodiment. The upper part of FIG. 9 shows the original notification content OS including the subject SP and other sentences OP.
- When the attention acquisition difficulty level A is level 1, the utterance control unit 360 generates, from the original notification content OS, new notification content CS-1 in which the output position of the subject is changed to the beginning, as shown in FIG. 9. At this time, the utterance control unit 360 may save the notification content CS-1 as a text file.
- When the attention acquisition difficulty level exceeds the first threshold, the utterance control unit 360 sets the output position of the subject in the latter half of the notification content (S1105). More specifically, as shown in FIG. 9, the utterance control unit 360 generates new notification content CS-2 in which the output position of the subject SP is at the end.
- Next, the utterance control unit 360 measures the length from the beginning of the notification content CS-2 generated in step S1105 to the subject SP, and determines, based on that length, the second threshold that serves as the boundary between level 2 and level 3 (S1106). At this time, the utterance control unit 360 may measure the length by counting the number of characters of the other sentences OP. The utterance control unit 360 determines the second threshold so that its value increases in proportion to the above length. The longer the other sentences OP are, the higher the possibility that the user's attention can be obtained while the other sentences OP are being output, without adding the additional information described later. Conversely, if the other sentences OP are short, it is difficult to obtain the user's attention while they are being output, so the second threshold may be set low.
- the utterance control unit 360 determines whether or not the attention acquisition difficulty level is equal to or less than the second threshold value determined in step S1106 (S1107). That is, the utterance control unit 360 determines whether or not the attention acquisition difficulty level corresponds to level 2.
- When the attention acquisition difficulty level exceeds the second threshold, that is, when it corresponds to level 3, the utterance control unit 360 generates notification content CS-3 by adding the additional information AP to the beginning of the notification content CS-2 (S1108).
- the utterance control unit 360 may determine the length of the additional information AP according to the attention acquisition difficulty level A. For example, the utterance control unit 360 may add additional information AP having a length proportional to a value obtained by subtracting the second threshold value from the attention acquisition difficulty level A.
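The determination flow of steps S1103 to S1108 can be sketched as follows. The threshold constants, the per-character scaling of the second threshold, and the way additional-information length grows with the excess over the second threshold are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of FIG. 7 (S1103-S1108): a first threshold separates level 1;
# a second threshold, proportional to the length of the text preceding the
# subject, separates levels 2 and 3. Constants are assumptions.

FIRST_THRESHOLD = 1.0                 # boundary between level 1 and level 2
SECOND_THRESHOLD_PER_CHAR = 0.05      # growth per character of other sentences

def build_notification(subject, other, difficulty, topics=()):
    """Return the notification text with the subject placed per difficulty."""
    if difficulty <= FIRST_THRESHOLD:          # level 1 (S1103)
        return subject + " " + other           # CS-1: subject at the beginning
    cs2 = other + " " + subject                # CS-2: subject at the end (S1105)
    # S1106: second threshold increases with the length before the subject
    second_threshold = FIRST_THRESHOLD + SECOND_THRESHOLD_PER_CHAR * len(other)
    if difficulty <= second_threshold:         # level 2 (S1107)
        return cs2
    # level 3 (S1108): prepend additional information; the amount grows with
    # how far the difficulty exceeds the second threshold
    n_topics = min(len(topics), 1 + int(difficulty - second_threshold))
    return (" ".join(topics[:n_topics]) + " " + cs2).strip()
```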
- the utterance control unit 360 may add a related topic concerning the original notification content OS as the additional information AP. At this time, the utterance control unit 360 may acquire the user's name, schedule, hobbies, and the like stored in the user information DB 350 and generate a related topic using such information, thereby triggering a so-called cocktail party effect to obtain the user's attention. In addition, the utterance control unit 360 may cause the speech synthesis unit 370 to synthesize the sentences derived from the original notification content OS (the subject SP and the other sentences OP) and the related topic with different artificial voices, so as to explicitly indicate to the user which portion is the related topic.
- the additional information AP is not limited to the related topic as described above, and may be various information.
- the additional information AP may be music or a radio program.
- the utterance control unit 360 can improve the possibility that the user's attention can be acquired by playing music or a radio program according to the user's preference at the beginning.
- the utterance control unit 360 can thus determine the output position of the subject according to the attention acquisition difficulty level and cause the information processing terminal 10 to output the notification content in accordance with that output position. According to the above function of the utterance control unit 360 according to the present embodiment, the subject of the notification content can be presented at the timing when the user's attention is estimated to be highest, enabling effective information notification.
- the output position of the subject described with reference to FIGS. 7 and 9 is merely an example, and the output position of the subject determined by the utterance control unit 360 according to the present embodiment is not limited to the example.
- the utterance control unit 360 may first determine the second threshold related to the attention acquisition difficulty level after the formation of the subject. In this case, the utterance control unit 360 can make the determinations for levels 1 to 3 at one time. At this time, if the attention acquisition difficulty level corresponds to level 2, the utterance control unit 360 may arrange the output position of the subject near the center of the notification content.
- the function of determining the output position of the subject by the utterance control unit 360 according to the present embodiment can be flexibly modified according to the original notification content OS and the language used.
- the utterance control unit 360 may perform control to change the output position of the subject when detecting the user's attention action during the output of the notification content.
- the utterance control unit 360 can change the output position of the subject based on an attention action being detected during the output of additional information added to the beginning of the notification content.
- FIG. 10 is a flowchart showing the flow of notification content output control by the utterance control unit 360 according to this embodiment.
- the utterance control unit 360 first determines which level the attention acquisition difficulty level corresponds to (S1201).
- When the attention acquisition difficulty level is level 1, the utterance control unit 360 controls the information processing terminal 10 to start outputting the notification content at the subject output position set for level 1 (notification content CS-1) (S1202), and the notification content is output to the end as it is.
- When the attention acquisition difficulty level is level 2, the utterance control unit 360 controls the information processing terminal 10 to start outputting the notification content at the subject output position set for level 2 (notification content CS-2) (S1203), and the notification content is output to the end as it is.
- When the attention acquisition difficulty level is level 3, the utterance control unit 360 controls the information processing terminal 10 to start outputting the notification content at the subject output position set for level 3 (notification content CS-3) (S1204).
- the utterance control unit 360 continues to determine whether an attention action of the user is detected during the output (S1205).
- When an attention action is detected, the utterance control unit 360 determines whether the additional information is being output (S1206).
- When the additional information is being output, the utterance control unit 360 ends the output of the additional information by the information processing terminal 10 (S1207) and starts output of the notification content at the subject output position set for level 1. At this time, the utterance control unit 360 may, for example, dynamically change the character string information related to the notification content. Alternatively, the utterance control unit 360 may generate in advance texts corresponding to the two notification contents for level 1 and level 3, and switch to the text corresponding to the level 1 notification content based on the detection of the attention action.
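The level-3 switching behavior described above can be sketched as follows. The sentence-level granularity and the `attention_detected` callback are illustrative assumptions; the disclosure describes the control at the level of additional information, other sentences, and the subject.

```python
# Sketch of the FIG. 10 control: at level 3 the terminal starts with the
# additional information; if the user's attention action is detected while
# it is playing, the remaining additional information is cut short and the
# subject is output immediately after it (level-1 order).

def output_level3(additional, other, subject, attention_detected):
    """Return the sentences in the order actually spoken at level 3."""
    spoken = []
    for sentence in additional:
        spoken.append(sentence)              # additional information AP
        if attention_detected():             # S1205: attention action observed
            spoken.append(subject)           # S1207: subject right after AP
            spoken.extend(other)
            return spoken
    spoken.extend(other)                     # no attention action: keep the
    spoken.append(subject)                   # original level-3 order
    return spoken
```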
- FIG. 11 is a diagram showing a specific example of output control of notification contents when the attention acquisition difficulty level is level 3 according to the present embodiment.
- the upper part of FIG. 11 shows the notification content CS-3 set when the attention acquisition difficulty level is level 3.
- the additional information AP is added to the beginning of the notification content CS-3, and the subject SP is arranged at the end of the notification content CS-3.
- When an attention action is detected during the output of the additional information AP, the utterance control unit 360 terminates the output of the additional information AP partway through, as shown in the lower part of FIG. 11, and changes the output position of the subject SP to immediately after the additional information AP. At this time, the utterance control unit 360 may end the output of the additional information AP, for example, at the timing when the sentence currently being output finishes. In addition, the utterance control unit 360 can reduce the abruptness of the switch by uttering a phrase connecting the additional information AP and the subject SP. Further, when the additional information AP is music or a radio program, the utterance control unit 360 may perform control so that the output switch becomes more natural by gradually decreasing the volume.
- When an attention action is detected during the output of the other sentences OP, the utterance control unit 360 may output the other sentences OP and the subject SP as they are, as shown in the middle part of FIG. 11, and terminate the processing.
- The output control of the notification content by the utterance control unit 360 according to the present embodiment has been described above. As described, the utterance control unit 360 according to the present embodiment can flexibly control the output position of the subject in the notification content according to the user's state, enabling more effective information notification.
- the change control is not limited to this example.
- the utterance control unit 360 according to the present embodiment can change the output position of the subject based on detecting the change of the user's attention acquisition difficulty level during the output of the notification content.
- For example, the utterance control unit 360 may perform the output control shown in the lower part of FIG. 11 based on detecting that the user's attention acquisition difficulty level has changed to level 1 or level 2. Further, for example, when the attention acquisition difficulty level is level 2, the utterance control unit 360 may insert additional information AP before the subject output position based on the attention acquisition difficulty level changing to level 3 during the output of the other sentences OP.
- the utterance control unit 360 can also control the re-output of the notification content based on a change in the attention acquisition difficulty level. For example, when the attention acquisition difficulty level is level 2 or level 3 and the user's attention action could not be detected, the utterance control unit 360 may re-output the notification content at the timing when the user's attention acquisition difficulty level decreases.
- the re-output of the notification content may be performed based on a request by the user's utterance or the like.
- In this case, since it is expected that the user's attention was not obtained at the time the subject was output, the utterance control unit 360 may store the number of such requests, and the attention acquisition difficulty level may be calculated to be higher according to the number of requests so that the output position of the subject is delayed the next time notification content is output.
- Conversely, the utterance control unit 360 may store the number of detections described above for each user, and the attention acquisition difficulty level may be calculated to be lower according to the number of detections so that the output position of the subject is advanced the next time notification content is output.
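The per-user adaptation described above can be sketched as follows. The bias weights are purely illustrative assumptions; the disclosure only states that the difficulty level is calculated higher with repeated re-output requests and lower with repeated detections.

```python
# Illustrative sketch: repeated re-output requests suggest the subject came
# too early (bias difficulty upward, delaying the subject); repeated
# detections bias it downward (advancing the subject). Weights are assumed.

from collections import defaultdict

class AttentionDifficultyAdjuster:
    def __init__(self, request_weight=0.2, detection_weight=0.1):
        self.requests = defaultdict(int)    # re-output requests per user
        self.detections = defaultdict(int)  # attention detections per user
        self.request_weight = request_weight
        self.detection_weight = detection_weight

    def record_request(self, user):
        self.requests[user] += 1

    def record_detection(self, user):
        self.detections[user] += 1

    def adjusted(self, user, base_difficulty):
        """Bias the calculated difficulty by the user's history."""
        return (base_difficulty
                + self.request_weight * self.requests[user]
                - self.detection_weight * self.detections[user])
```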
- FIG. 12 is a flowchart showing an operation flow of the information processing server 30 according to the present embodiment.
- the communication unit 380 of the information processing server 30 first receives sensor information and operation status from the information processing terminal 10 and the external device 20 (S1301).
- the sensor information includes sound information and image information.
- the acoustic analysis unit 310 performs acoustic analysis based on the sound information received in step S1301 (S1302). At this time, the acoustic analysis unit 310 may perform an analysis related to the noise level and the work sound.
- the image analysis unit 320 performs image analysis based on the image information received in step S1301 (S1303). At this time, the image analysis unit 320 may perform analysis related to the user's situation.
- the situation estimation unit 330 calculates the user's attention acquisition difficulty level based on the operating situation of the external device 20 received in step S1301 and the information analyzed in steps S1302 and S1303 (S1304).
- the utterance control unit 360 extracts the subject from the notification content (S1305).
- the utterance control unit 360 sets the output position of the subject extracted in step S1305 based on the attention acquisition difficulty calculated in step S1304 (S1306).
- the utterance control unit 360 performs utterance control based on the output position of the subject set in step S1306 (S1307).
- FIG. 13 is a block diagram illustrating a hardware configuration example of the information processing terminal 10 and the information processing server 30 according to an embodiment of the present disclosure.
- the information processing terminal 10 and the information processing server 30 include, for example, a CPU 871, a ROM 872, a RAM 873, a host bus 874, a bridge 875, an external bus 876, an interface 877, an input device 878, an output device 879, a storage 880, a drive 881, a connection port 882, and a communication device 883.
- The hardware configuration shown here is an example, and some of the components may be omitted. Components other than those shown here may also be further included.
- the CPU 871 functions as, for example, an arithmetic processing unit or a control unit, and controls the overall operation or a part of each component based on various programs recorded in the ROM 872, RAM 873, storage 880, or removable recording medium 901.
- the ROM 872 is a means for storing programs read by the CPU 871, data used for calculations, and the like.
- In the RAM 873, for example, a program read by the CPU 871 and various parameters that change as appropriate when the program is executed are stored temporarily or permanently.
- the CPU 871, the ROM 872, and the RAM 873 are connected to each other via, for example, a host bus 874 capable of high-speed data transmission.
- the host bus 874 is connected to an external bus 876 having a relatively low data transmission speed via a bridge 875, for example.
- the external bus 876 is connected to various components via an interface 877.
- As the input device 878, for example, a mouse, a keyboard, a touch panel, a button, a switch, or a lever is used. Furthermore, as the input device 878, a remote controller capable of transmitting control signals using infrared rays or other radio waves may be used.
- the input device 878 includes a voice input device such as a microphone.
- the output device 879 is a device that can visually or audibly notify the user of acquired information, such as a display device (a CRT (Cathode Ray Tube), an LCD, or an organic EL display), an audio output device (a speaker or headphones), a printer, a mobile phone, or a facsimile.
- the output device 879 according to the present disclosure includes various vibration devices that can output a tactile stimulus.
- the storage 880 is a device for storing various data.
- As the storage 880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device is used.
- the drive 881 is a device that reads information recorded on a removable recording medium 901 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information to the removable recording medium 901.
- the removable recording medium 901 is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, or various semiconductor storage media.
- the removable recording medium 901 may be, for example, an IC card on which a non-contact IC chip is mounted, an electronic device, or the like.
- The connection port 882 is a port for connecting an external connection device 902, such as a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, an RS-232C port, or an optical audio terminal.
- the external connection device 902 is, for example, a printer, a portable music player, a digital camera, a digital video camera, or an IC recorder.
- the communication device 883 is a communication device for connecting to a network.
- the information processing server 30 determines the output position of the subject in the notification content based on the attention acquisition difficulty level, which is an index of the difficulty of obtaining the user's attention.
- the information processing terminal 10 performs information notification in accordance with the output position. According to such a configuration, it is possible to make the user understand the notification content more effectively.
- Although the case where the information processing server 30 controls the audio output of the notification content based on the determined output position of the subject has been described, visual information output may also be controlled.
- the information processing server 30 can control text scrolling related to the notification content in the display device included in the information processing terminal 10. At this time, the information processing server 30 may control the output position of the subject in the text scroll based on the user's attention acquisition difficulty level.
- each step related to the processing of the information processing server 30 in this specification does not necessarily have to be processed in time series in the order described in the flowchart.
- each step related to the processing of the information processing server 30 may be processed in an order different from the order described in the flowchart, or may be processed in parallel.
- An information processing apparatus including a control unit that controls information notification to a user based on notification content, wherein the control unit determines an output position of a subject in the notification content based on a calculated attention acquisition difficulty level related to the user.
- the control unit controls audio output of the notification content in accordance with the output position.
- the control unit sets the output position of the subject in the first half of the notification content based on the attention acquisition difficulty level being a first threshold value or less.
- the control unit sets the output position of the subject to the second half of the notification content based on the attention acquisition difficulty level exceeding a first threshold.
- the information processing apparatus according to any one of (1) to (3).
- the control unit adds additional information at the beginning of the notification content based on the attention acquisition difficulty level exceeding a second threshold, and causes the notification content including the additional information to be output.
- the information processing apparatus according to any one of (1) to (4).
- the control unit changes the output position of the subject based on the attention action of the user being detected during the output of the notification content.
- the information processing apparatus according to any one of (1) to (5).
- the control unit changes the output position of the subject based on the attention action being detected during the output of additional information included at the beginning of the notification content.
- the information processing apparatus according to (6).
- the control unit terminates the output of the additional information based on detection of the attention behavior during the output of the additional information, and changes the output position of the subject immediately after the additional information.
- the control unit changes the output position of the subject based on a change in the attention acquisition difficulty level during the output of the notification content.
- the information processing apparatus according to any one of (1) to (8).
- the additional information is a related topic related to the notification content.
- the information processing apparatus according to any one of (5), (7), or (8).
- the control unit extracts the subject from the notification content based on a result of natural language processing.
- the information processing apparatus according to any one of (1) to (10).
- the attention acquisition difficulty level is calculated based on at least one of a noise level and the user situation.
- the information processing apparatus according to any one of (1) to (11).
- the user situation includes a behavior situation of the user, and the attention acquisition difficulty level is calculated based on at least the behavior situation of the user. The information processing apparatus according to (12).
- the user's behavior situation is estimated based on at least a work sound that accompanies the user's behavior. The information processing apparatus according to (13).
- the user's behavior situation is estimated based on at least an operating status of an external device. The information processing apparatus according to (13) or (14).
- (16) The user's behavior situation is estimated based on at least an image of the user. The information processing apparatus according to any one of (13) to (15).
- A situation estimation unit that calculates the attention acquisition difficulty level is further included. The information processing apparatus according to any one of (1) to (16).
- the situation estimation unit detects the user's attention behavior.
- the information processing apparatus according to (17).
- An information processing method including controlling, by a processor, information notification to a user based on notification content, wherein the controlling includes determining an output position of a subject in the notification content based on a calculated attention acquisition difficulty level related to the user.
Abstract
Description
1. Embodiment
1.1. Overview of the embodiment
1.2. System configuration example
1.3. Functional configuration example of the information processing terminal 10
1.4. Functional configuration example of the external device 20
1.5. Functional configuration example of the information processing server 30
1.6. Details of determination of the subject output position
1.7. Operation flow of the information processing server 30
2. Hardware configuration example
3. Summary
<<1.1. Overview of the embodiment>>
First, an overview of an embodiment of the present disclosure will be described. As described above, various agents have been developed in recent years. Such agents can present various information to the user by, for example, utterances using artificial voice.
Next, a system configuration example of the information processing system according to the present embodiment will be described. FIG. 2 is an example of a system configuration diagram of the information processing system according to the present embodiment. Referring to FIG. 2, the information processing system according to the present embodiment includes an information processing terminal 10, an external device 20, and an information processing server 30. The information processing terminal 10 and the information processing server 30, and the external device 20 and the information processing server 30, are connected via a network 40 so that they can communicate with each other.
The information processing terminal 10 according to the present embodiment is an information processing apparatus that performs various information notifications to the user based on control by the information processing server 30. In particular, the information processing terminal 10 according to the present embodiment can perform voice output of the notification content based on the subject output position determined by the information processing server 30. The information processing terminal 10 according to the present embodiment may be, for example, a stationary, embedded, or autonomously mobile dedicated device. The information processing terminal 10 according to the present embodiment may also be a mobile phone, a smartphone, a PC (Personal Computer), a tablet, or any of various wearable devices. The information processing terminal 10 according to the present embodiment is defined as any of various devices having a voice information notification function.
The external device 20 according to the present embodiment is an information processing apparatus that transmits its operating status and collected sensor information to the information processing server 30. The above operating status and sensor information can be used by the information processing server 30 to calculate the attention acquisition difficulty level. For example, when a certain external device 20 is in operation, it is assumed that the user is concentrating on operating that external device 20, so a situation in which attention is difficult to obtain is estimated. In this way, the attention acquisition difficulty level according to the present embodiment can be calculated in consideration of the user's situation in addition to noise.
The information processing server 30 according to the present embodiment is an information processing apparatus that controls information notification to the user by the information processing terminal 10. At this time, the information processing server 30 according to the present embodiment determines the output position of the subject in the notification content based on the attention acquisition difficulty level related to the user, and can cause the information processing terminal 10 to perform voice output of the notification content in accordance with that output position.
The network 40 has a function of connecting the information processing terminal 10 and the information processing server 30, and the external device 20 and the information processing server 30. The network 40 may include public networks such as the Internet, a telephone network, and a satellite communication network, various LANs (Local Area Networks) including Ethernet (registered trademark), and WANs (Wide Area Networks). The network 40 may also include a dedicated network such as an IP-VPN (Internet Protocol-Virtual Private Network). The network 40 may also include wireless communication networks such as Wi-Fi (registered trademark) and Bluetooth (registered trademark).
Next, a functional configuration example of the information processing terminal 10 according to the present embodiment will be described. FIG. 3 is an example of a functional block diagram of the information processing terminal 10 according to the present embodiment. Referring to FIG. 3, the information processing terminal 10 according to the present embodiment includes a voice collection unit 110, a sensor unit 120, an output unit 130, and a communication unit 140.
The voice collection unit 110 has a function of collecting the user's voice and ambient environmental sounds. The voice collection unit 110 according to the present embodiment is realized by, for example, a microphone that converts the user's voice and environmental sounds into electrical signals.
The sensor unit 120 has a function of capturing images of the user. For this purpose, the sensor unit 120 according to the present embodiment includes an imaging sensor. The sensor unit 120 may also collect various sensor information used for estimating the user's situation. For this reason, the sensor unit 120 includes, for example, an infrared sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, a vibration sensor, a pressure sensor, and a GNSS (Global Navigation Satellite System) signal receiver.
The output unit 130 has a function of outputting the notification content based on control by the information processing server 30. At this time, the output unit 130 may perform voice output based on the artificial voice synthesized by the information processing server 30. For this purpose, the output unit 130 according to the present embodiment includes a speaker and an amplifier.
通信部140は、ネットワーク40を介して、情報処理サーバ30との情報通信を行う機能を有する。具体的には、通信部140は、音声収集部110により収集された音情報やセンサ部120により収集された画像情報、センサ情報を情報処理サーバ30に送信する。また、通信部140は、通知内容に係る人工音声情報や、通知内容に係るテキスト情報などを情報処理サーバ30から受信する。
Next, a functional configuration example of the external device 20 according to the present embodiment will be described. FIG. 4 is an example of a functional block diagram of the external device 20 according to the present embodiment. Referring to FIG. 4, the external device 20 includes an operating status acquisition unit 210, a sensor unit 220, and a communication unit 230.
The operating status acquisition unit 210 has a function of acquiring the operating status of the device. The operating status of the external device 20 acquired by the operating status acquisition unit 210 can be used by the information processing server 30 to calculate the attention acquisition difficulty. For example, when the external device 20 is a PC, a smartphone, or the like, the operating status acquisition unit 210 may detect that a keyboard, mouse, touch panel, or the like is being operated by the user. Also, for example, when the external device 20 is a game console, the operating status acquisition unit 210 may detect that a controller or the like is being operated by the user.
The sensor unit 220 has a function of collecting various kinds of sensor information related to the external device 20. The sensor information collected by the sensor unit 220 may be used by the operating status acquisition unit 210 to acquire the operating status. The sensor unit 220 may also collect sensor information related to the user and the surrounding situation. For example, the sensor unit 220 can acquire the user's utterances, images of the user, and the like. For this purpose, the sensor unit 220 according to the present embodiment may include various sensor devices, for example, a microphone, an imaging sensor, a heat sensor, a vibration sensor, an illuminance sensor, a human presence sensor, an acceleration sensor, a gyro sensor, and a geomagnetic sensor.
The communication unit 230 has a function of communicating information with the information processing server 30 via the network 40. Specifically, the communication unit 230 transmits the operating status of the external device 20 acquired by the operating status acquisition unit 210 to the information processing server 30. The communication unit 230 may also transmit the sensor information collected by the sensor unit 220 to the information processing server 30.
Next, a functional configuration example of the information processing server 30 according to the present embodiment will be described. FIG. 5 is an example of a functional block diagram of the information processing server 30 according to the present embodiment. Referring to FIG. 5, the information processing server 30 includes an acoustic analysis unit 310, an image analysis unit 320, a situation estimation unit 330, a natural language processing unit 340, a user information DB 350, an utterance control unit 360, a speech synthesis unit 370, and a communication unit 380.
The acoustic analysis unit 310 has a function of recognizing sound levels based on the sound information transmitted from the information processing terminal 10 and the external device 20. More specifically, the acoustic analysis unit 310 according to the present embodiment may recognize the surrounding noise level. At this time, the acoustic analysis unit 310 can calculate the noise level based, for example, on the root mean square (also referred to as the effective value or RMS) of the amplitude of the acoustic signal over a unit time. As the unit time, for example, the frame time of one image captured by the information processing terminal 10 may be used. The noise level calculated by the acoustic analysis unit 310 is used by the situation estimation unit 330, described later, to calculate the attention acquisition difficulty.
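The RMS calculation described above can be sketched as follows; a minimal illustration, with the function name and the plain-list input being assumptions rather than anything specified in this document:

```python
import math

def noise_level_rms(samples):
    """Noise level as the root mean square (effective value) of the
    signal amplitudes within one unit time (e.g., one image frame)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A frame whose samples all have absolute amplitude 0.5 has an RMS of exactly 0.5.
print(noise_level_rms([0.5, -0.5, 0.5, -0.5]))  # → 0.5
```

In practice the per-frame RMS would be computed over the audio samples that fall within the frame time of one captured image, as the description suggests.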
The image analysis unit 320 has a function of recognizing the user's situation based on the image information and sensor information transmitted from the information processing terminal 10 and the external device 20. The image analysis unit 320 according to the present embodiment recognizes, in particular, situations related to the user's attention. For example, the image analysis unit 320 may recognize the distance from the information processing terminal 10 to the user. At this time, the image analysis unit 320 can recognize this distance based, for example, on the size of the user's face region in the image, or on information collected by a depth sensor or the like. The user's situation recognized by the image analysis unit 320 is used by the situation estimation unit 330, described later, to calculate the attention acquisition difficulty and to detect attention behavior.
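One common way to turn face-region size into a distance is the pinhole-camera relation. The specification does not fix a method, so the following is only a sketch under that assumption; the focal length and the assumed real-world face height are illustrative constants:

```python
def estimate_distance_m(face_height_px, focal_length_px, real_face_height_m=0.24):
    """Pinhole-camera estimate: distance = focal_length * real_height / image_height.
    The 0.24 m face height and the focal length are illustrative assumptions."""
    return focal_length_px * real_face_height_m / face_height_px

# A face imaged at 120 px tall with a 500 px focal length is about 1 m away.
print(estimate_distance_m(120, 500))  # → 1.0
```

A depth sensor, when available, would replace this estimate with a direct measurement.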
The situation estimation unit 330 according to the present embodiment has a function of calculating the attention acquisition difficulty, which is an index of how hard it is to obtain the user's attention. At this time, the situation estimation unit 330 can calculate the attention acquisition difficulty based on the noise level and the user's situation. More specifically, the situation estimation unit 330 may calculate the attention acquisition difficulty based on the noise level recognized by the acoustic analysis unit 310, the orientation of the user's face and gaze recognized by the image analysis unit 320, the distance between the user and the information processing terminal 10, and so on.
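The document does not specify a formula, but one simple way to combine the listed factors is a weighted sum of normalized scores. The weights, normalizations, and 5 m distance cap below are all illustrative assumptions:

```python
def attention_difficulty(noise_rms, facing_terminal, distance_m, device_in_use,
                         w_noise=0.5, w_face=0.2, w_dist=0.2, w_device=0.1):
    """Combine the recognized factors into one difficulty score in [0, 1];
    higher means the user's attention is harder to acquire."""
    noise = min(max(noise_rms, 0.0), 1.0)     # RMS assumed pre-scaled to [0, 1]
    face = 0.0 if facing_terminal else 1.0    # looking away -> harder
    dist = min(distance_m / 5.0, 1.0)         # farther away -> harder (cap at 5 m)
    device = 1.0 if device_in_use else 0.0    # operating an external device -> harder
    return w_noise * noise + w_face * face + w_dist * dist + w_device * device

# Quiet room, user facing the terminal at close range, no device in use.
print(attention_difficulty(0.0, True, 0.0, False))  # → 0.0
```

Note how the external device 20's operating status enters the score alongside the acoustic and image-based factors, matching the description of the inputs to the situation estimation unit 330.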
The natural language processing unit 340 has a function of performing natural language processing, such as morphological analysis, dependency structure analysis, and assignment of semantic concepts, based on the text information related to the notification content. This text information may be held in advance by the information processing server 30, or may be acquired via the communication unit 380 or the network 40.
The user information DB 350 is a database that stores various kinds of information related to the user. In addition to the user's name and ID, the user information DB 350 stores attribute information such as age, gender, language used, and place of origin. The attribute information stored in the user information DB 350 according to the present embodiment is used by the utterance control unit 360, described later, to shape the notification content. The user information DB 350 may also store the user's image information, voice features, and the like. In this case, the acoustic analysis unit 310 and the image analysis unit 320 can also identify the user based on this information stored in the user information DB 350. The user information DB 350 may further store the user's tastes and preferences, schedule, and the like.
The utterance control unit 360 has a function of controlling information notification to the user based on the notification content. More specifically, the utterance control unit 360 according to the present embodiment has a function of extracting the subject from the notification content based on the result of natural language processing by the natural language processing unit 340, and a function of determining the output position of the subject within the notification content based on the attention acquisition difficulty calculated by the situation estimation unit 330. The utterance control unit 360 also causes the information processing terminal 10 to output, as speech, the notification content arranged in accordance with the determined output position. The utterance control unit 360 may further cause the information processing terminal 10 to output the notification content as visual information.
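The threshold behavior stated in items (3) to (5) of the embodiment list (subject in the first half when difficulty is at or below a first threshold, deferred to the latter half above it, and preceded by additional information above a second threshold) can be sketched as follows; the concrete threshold values and the preamble text are illustrative assumptions:

```python
def arrange_notification(subject, details, difficulty, t1=0.4, t2=0.8,
                         preamble="By the way..."):
    """Order the utterance parts by the attention acquisition difficulty:
    <= t1: subject first; > t1: subject deferred to the latter half;
    > t2: additionally prepend a related-topic preamble."""
    if difficulty <= t1:
        return [subject] + details
    parts = details + [subject]
    if difficulty > t2:
        parts = [preamble] + parts
    return parts

# Low difficulty: lead with the subject so it is heard immediately.
print(arrange_notification("rain is expected", ["take an umbrella"], 0.2))
```

Deferring the subject when attention is hard to acquire gives the user time to orient toward the terminal before the essential information is spoken, which is the effect the embodiment aims for.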
The speech synthesis unit 370 has a function of generating an artificial voice based on the notification content shaped by the utterance control unit 360. The artificial voice generated by the speech synthesis unit 370 is transmitted to the information processing terminal 10 via the communication unit 380 and the network 40, and is output as audio by the output unit 130.
The communication unit 380 has a function of communicating information with the information processing terminal 10 and the external device 20 via the network 40. Specifically, the communication unit 380 receives sound information, image information, and sensor information from the information processing terminal 10. The communication unit 380 also receives the operating status of the external device 20 and sensor information from the external device 20. Furthermore, the communication unit 380 transmits, to the information processing terminal 10, the artificial voice and text information related to the notification content in which the output position of the subject has been specified by the utterance control unit 360.
Next, the determination of the output position of the subject according to the present embodiment will be described in detail. FIG. 7 is a flowchart showing the flow of determining the output position of the subject by the utterance control unit 360 according to the present embodiment.
Next, the overall operation flow of the information processing server 30 according to the present embodiment will be described. FIG. 12 is a flowchart showing the operation flow of the information processing server 30 according to the present embodiment.
Next, a hardware configuration example common to the information processing terminal 10 and the information processing server 30 according to an embodiment of the present disclosure will be described. FIG. 13 is a block diagram showing a hardware configuration example of the information processing terminal 10 and the information processing server 30. Referring to FIG. 13, the information processing terminal 10 and the information processing server 30 include, for example, a CPU 871, a ROM 872, a RAM 873, a host bus 874, a bridge 875, an external bus 876, an interface 877, an input device 878, an output device 879, a storage 880, a drive 881, a connection port 882, and a communication device 883. The hardware configuration shown here is an example, and some of the components may be omitted. Components other than those shown here may also be further included.
The CPU 871 functions, for example, as an arithmetic processing device or a control device, and controls all or part of the operation of each component based on various programs recorded in the ROM 872, the RAM 873, the storage 880, or a removable recording medium 901.
The ROM 872 is a means for storing programs read by the CPU 871, data used for computation, and the like. The RAM 873 temporarily or permanently stores, for example, programs read by the CPU 871 and various parameters that change as appropriate when those programs are executed.
The CPU 871, the ROM 872, and the RAM 873 are mutually connected via, for example, the host bus 874, which is capable of high-speed data transmission. The host bus 874, in turn, is connected via the bridge 875 to the external bus 876, whose data transmission speed is comparatively low. The external bus 876 is connected to various components via the interface 877.
For the input device 878, for example, a mouse, keyboard, touch panel, buttons, switches, levers, and the like are used. Furthermore, a remote controller capable of transmitting control signals using infrared or other radio waves may be used as the input device 878. The input device 878 also includes audio input devices such as a microphone.
The output device 879 is a device capable of visually or aurally notifying the user of acquired information, such as a display device (e.g., a CRT (Cathode Ray Tube), LCD, or organic EL display), an audio output device such as a speaker or headphones, a printer, a mobile phone, or a facsimile. The output device 879 according to the present disclosure also includes various vibration devices capable of outputting tactile stimuli.
The storage 880 is a device for storing various kinds of data. As the storage 880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device is used.
The drive 881 is a device that reads information recorded on the removable recording medium 901, such as a magnetic disk, optical disc, magneto-optical disc, or semiconductor memory, or writes information to the removable recording medium 901.
The removable recording medium 901 is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, or any of various semiconductor storage media. Of course, the removable recording medium 901 may also be, for example, an IC card equipped with a contactless IC chip, or an electronic device.
The connection port 882 is a port for connecting an externally connected device 902, such as a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, an RS-232C port, or an optical audio terminal.
The externally connected device 902 is, for example, a printer, a portable music player, a digital camera, a digital video camera, or an IC recorder.
The communication device 883 is a communication device for connecting to a network, such as a communication card for wired or wireless LAN, Bluetooth (registered trademark), or WUSB (Wireless USB), a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication.
As described above, one feature of the information processing server 30 according to an embodiment of the present disclosure is that it determines the output position of the subject within the notification content based on the attention acquisition difficulty, which is an index of how hard it is to obtain the user's attention, and causes the information processing terminal to provide the information notification in accordance with that output position. With such a configuration, it becomes possible to have the user grasp the notification content more effectively.
(1)
An information processing device including:
a control unit that controls information notification to a user based on notification content,
in which the control unit determines an output position of a subject in the notification content based on a calculated attention acquisition difficulty related to the user.
(2)
The information processing device according to (1), in which the control unit controls audio output of the notification content in accordance with the output position.
(3)
The information processing device according to (1) or (2), in which the control unit sets the output position of the subject in the first half of the notification content based on the attention acquisition difficulty being equal to or less than a first threshold.
(4)
The information processing device according to any one of (1) to (3), in which the control unit sets the output position of the subject in the latter half of the notification content based on the attention acquisition difficulty exceeding the first threshold.
(5)
The information processing device according to any one of (1) to (4), in which the control unit adds additional information to the beginning of the notification content and causes the notification content including the additional information to be output, based on the attention acquisition difficulty exceeding a second threshold.
(6)
The information processing device according to any one of (1) to (5), in which the control unit changes the output position of the subject based on attention behavior of the user being detected during output of the notification content.
(7)
The information processing device according to (6), in which the control unit changes the output position of the subject based on the attention behavior being detected during output of additional information included at the beginning of the notification content.
(8)
The information processing device according to (7), in which the control unit ends output of the additional information and changes the output position of the subject to a position immediately after the additional information, based on the attention behavior being detected during output of the additional information.
(9)
The information processing device according to any one of (1) to (8), in which the control unit changes the output position of the subject based on a change in the attention acquisition difficulty during output of the notification content.
(10)
The information processing device according to any one of (5), (7), or (8), in which the additional information is a related topic associated with the notification content.
(11)
The information processing device according to any one of (1) to (10), in which the control unit extracts the subject from the notification content based on a result of natural language processing.
(12)
The information processing device according to any one of (1) to (11), in which the attention acquisition difficulty is calculated based on at least one of a noise level or a situation of the user.
(13)
The information processing device according to (12), in which the situation of the user includes a behavior situation of the user, and the attention acquisition difficulty is calculated based at least on the behavior situation of the user.
(14)
The information processing device according to (13), in which the behavior situation of the user is estimated based at least on work sounds arising from actions of the user.
(15)
The information processing device according to (13) or (14), in which the behavior situation of the user is estimated based at least on an operating status of an external device.
(16)
The information processing device according to any one of (13) to (15), in which the behavior situation of the user is estimated based at least on an image of the user.
(17)
The information processing device according to any one of (1) to (16), further including a situation estimation unit that calculates the attention acquisition difficulty.
(18)
The information processing device according to (17), in which the situation estimation unit detects attention behavior of the user.
(19)
The information processing device according to any one of (1) to (18), further including an output unit that outputs the notification content under control by the control unit.
(20)
An information processing method including:
controlling, by a processor, information notification to a user based on notification content,
the controlling further including determining an output position of a subject in the notification content based on a calculated attention acquisition difficulty related to the user.
110 Voice collection unit
120 Sensor unit
130 Output unit
140 Communication unit
20 External device
210 Operating status acquisition unit
220 Sensor unit
230 Communication unit
30 Information processing server
310 Acoustic analysis unit
320 Image analysis unit
330 Situation estimation unit
340 Natural language processing unit
350 User information DB
360 Utterance control unit
370 Speech synthesis unit
380 Communication unit
Claims (20)
- An information processing device comprising: a control unit that controls information notification to a user based on notification content, wherein the control unit determines an output position of a subject in the notification content based on a calculated attention acquisition difficulty related to the user.
- The information processing device according to claim 1, wherein the control unit controls audio output of the notification content in accordance with the output position.
- The information processing device according to claim 1, wherein the control unit sets the output position of the subject in the first half of the notification content based on the attention acquisition difficulty being equal to or less than a first threshold.
- The information processing device according to claim 1, wherein the control unit sets the output position of the subject in the latter half of the notification content based on the attention acquisition difficulty exceeding the first threshold.
- The information processing device according to claim 1, wherein the control unit adds additional information to the beginning of the notification content and causes the notification content including the additional information to be output, based on the attention acquisition difficulty exceeding a second threshold.
- The information processing device according to claim 1, wherein the control unit changes the output position of the subject based on attention behavior of the user being detected during output of the notification content.
- The information processing device according to claim 6, wherein the control unit changes the output position of the subject based on the attention behavior being detected during output of additional information included at the beginning of the notification content.
- The information processing device according to claim 7, wherein the control unit ends output of the additional information and changes the output position of the subject to a position immediately after the additional information, based on the attention behavior being detected during output of the additional information.
- The information processing device according to claim 1, wherein the control unit changes the output position of the subject based on a change in the attention acquisition difficulty during output of the notification content.
- The information processing device according to claim 5, wherein the additional information is a related topic associated with the notification content.
- The information processing device according to claim 1, wherein the control unit extracts the subject from the notification content based on a result of natural language processing.
- The information processing device according to claim 1, wherein the attention acquisition difficulty is calculated based on at least one of a noise level or a situation of the user.
- The information processing device according to claim 12, wherein the situation of the user includes a behavior situation of the user, and the attention acquisition difficulty is calculated based at least on the behavior situation of the user.
- The information processing device according to claim 13, wherein the behavior situation of the user is estimated based at least on work sounds arising from actions of the user.
- The information processing device according to claim 13, wherein the behavior situation of the user is estimated based at least on an operating status of an external device.
- The information processing device according to claim 13, wherein the behavior situation of the user is estimated based at least on an image of the user.
- The information processing device according to claim 1, further comprising a situation estimation unit that calculates the attention acquisition difficulty.
- The information processing device according to claim 17, wherein the situation estimation unit detects attention behavior of the user.
- The information processing device according to claim 1, further comprising an output unit that outputs the notification content under control by the control unit.
- An information processing method comprising: controlling, by a processor, information notification to a user based on notification content, the controlling further comprising determining an output position of a subject in the notification content based on a calculated attention acquisition difficulty related to the user.
Publication: WO2018173404A1, published 2018-09-27.
Ref document number: 2017901790 Country of ref document: EP Effective date: 20191024 |